The dawn of AI began years ago with data collection. Commerce has become precisely targeted. Entering phone numbers during every commercial transaction, "free" search engines, social media platforms sending user information to corporations: everything we buy is documented, everything we do is tracked and noted by governments and corporations alike.
The announcement that AI platforms will now collect personal medical information under the banner of "helping people manage their health" is being sold as progress. But history shows us that whenever information is centralized, it is eventually weaponized, whether politically, financially, or legally.
"ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can help you with information and tools to achieve your goals across any part of your life," Fidji Simo, CEO of Applications at OpenAI, wrote in a post on Substack. Smartwatches will now connect to larger centralized databases. Your every step is counted and tracked.
The creators claim the data will not be used for training. They claim enhanced privacy and safeguards. Governments and institutions always make these claims at the beginning of every cycle, not at the end. The real issue is not what they intend today, but what the system will demand tomorrow.
Health data is not merely personal information; it is a source of leverage and power. Once digitized and centralized, it becomes subject to subpoenas, regulatory capture, political agendas, and social engineering. People forget that HIPAA does not protect you from the government. Every database has been hacked at some point in time. Health records are sensitive information that people would not willingly share, and access to that information could wield enormous power. The company stated that "hundreds of millions" of ChatGPT users ask health-related questions every week. What if those questions were made public? The government demands backdoor access to every platform and will undoubtedly demand access to these records.
The danger here is not artificial intelligence. The danger is centralization without accountability. AI itself is neutral and has acted as more of a search engine, but one must wonder how they provide such a service for "free." The problem is who controls the switch when political pressure inevitably arrives. No system stays voluntary once it becomes essential.

