A bit over a year after TikTok temporarily went dark in the United States and users were greeted with a message explaining that "a law banning TikTok has been enacted," those same U.S. users opened the app to find a pop-up message requiring them to agree to new terms before they could continue scrolling.
The new terms of service and privacy policy went into effect on January 22, 2026, following the app's sale from ByteDance to TikTok USDS Joint Venture LLC, a majority American-owned company that reportedly will control U.S. users' data and content and the app's recommendation algorithm.
People see this kind of pop-up all the time, and according to research, the "biggest lie on the internet" is that people ever read anything before clicking "agree." But given many users' unease about the ownership change, including fears of swapping Chinese surveillance for U.S. surveillance, it's unsurprising that this time, people paid attention. Screenshots of legal language spread quickly online, accompanied by warnings about sweeping new data collection.
I'm both a TikTok content creator and a tech ethics and policy researcher who has studied website terms and conditions, particularly whether people read them (they don't) and how well they understand them (they also don't). When I saw the outrage on social media, I immediately dove down a terms of service and privacy policy rabbit hole that had me tumbling into the Wayback Machine and also looking at comparable policies on other apps and TikTok's policies in other countries.
In the end, I discovered that in the most widely shared examples, the language that sounded most alarming had either barely changed at all or described practices that are fairly standard across social media.
Some changes aren't really changes
Consider the list of "sensitive personal information" in TikTok's new privacy policy, which includes items like sexual orientation and immigration status. Many users interpreted this list as evidence that TikTok had begun collecting more personal data. However, this very same list appeared in the previous version of TikTok's U.S. privacy policy, which was last updated in August 2024. And in both cases, the language focuses on "information you disclose," for example, in your content or in responses to user surveys.
This language is in place presumably to comply with state privacy laws such as California's Consumer Privacy Act, which includes requirements for disclosing the collection of certain categories of information. TikTok's new policy specifically cited the California law. Meta's privacy policy lists very similar categories, and this language overall tends to signal regulatory compliance, disclosing existing data collection rather than additional surveillance.
Location tracking also prompted concerns. The new policy states that TikTok may "collect precise location information, depending on your settings." This is a change, but it's also common practice for the major social media apps.
The change also brings the company's U.S. policy in line with TikTok policies in other countries. For example, the company's European Economic Area privacy policy has very similar language, and users in the U.K. must grant precise location access to use a "Nearby Feed" for finding events and businesses near them.
Though apps have other ways to approximate location, such as IP address, a user must grant permission through their phone's location services in order for TikTok to access precise location via GPS, a permission that TikTok has not yet requested from U.S. users. However, the new policy opens the door for users to have the option to grant that permission in the future.
This CBC report describes the aftermath of the TikTok sale and why many users are deleting the app.
No news doesn't equal good news
None of this is to say that users are wrong to be cautious. Even if TikTok's legal language around data privacy is standard for the industry, who controls your data and your feed is still very relevant. Uninstalls of the app spiked 130% in the days following the change, with many users expressing concern about the ties that the new owners have to President Donald Trump, notably Oracle, the company led by Trump supporter Larry Ellison.
It also didn't help that TikTok's first week under American ownership was a complete disaster. Severe technical problems, later attributed by TikTok to a data center power failure, happened to coincide with the new ownership announcement, fueling widespread concerns about censorship of content critical of the U.S. government. Perhaps some users remembered that Trump once joked about making the platform "100% MAGA."
But regardless of what actually happened, at this point, distrusting tech companies isn't exactly irrational.
Clarity and trust
Conflating very real structural risks with unfamiliar sentences in legal documents, however, can obscure what is actually changing and what isn't. The misleading information about TikTok's policy changes that spread across social media is also evidence of a well-known design failure: Most tech policies aren't made to be read.
My own work revealed that these documents are often written at a college or even graduate school reading level. Other research once calculated that if every American read the privacy policy for every website they visit for just one year, it would cost $785 billion in lost leisure and productivity time.
So the discussion about TikTok's policies is a case study in the deep mismatch between how tech companies communicate and how people interpret risk, particularly in an era of exceptionally low trust in both Big Tech and government. Right now, ambiguity doesn't feel neutral. It feels threatening.
Instead of dismissing these reactions as overblown, I believe that companies should acknowledge that if a large portion of their user base assumes the worst, that's not a reading comprehension problem; it's a trust problem. So writing data privacy policies more legibly is a start, but rebuilding any kind of inherent trust in the stewardship of that data is probably the more important challenge.
Casey Fiesler is an associate professor of information science at the University of Colorado Boulder.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

