TikTok’s new owners will include Rupert Murdoch (responsible for creating Fox News, the most effective mass media right wing propaganda platform ever) and Trump bestie Larry Ellison, who is in the process of turning CBS News into basically the same thing via his nepo baby son and Bari Weiss.
Normally you’d want to be a little subtle about the plan to turn TikTok into a pro-Trump and pro-Netanyahu propaganda machine to avoid scaring off customers, but that’s not Trump’s style. So last week he basically just blurted out the whole plan, then insisted he was just “joking”:
“Trump signed an executive order to “save” TikTok, while supposedly joking that he’d like to censor influencers by tweaking the algorithm so that content is “100 percent MAGA.”
“Everyone is going to be treated fairly,” the president added—seemingly covering his tracks as critics warn that TikTok under US ownership could soon carry a right-wing bias, perhaps going the way of Twitter after Elon Musk took over and rebranded it as X.”
Yes, “perhaps.”
From Twitter and the Washington Post to CBS News, the right wing billionaire tendency to buy up major media properties and convert them into right wing propaganda and bullshit machines has not been subtle. Yet, as the framing of this Ars piece makes clear, the press still seems somewhat confused as to whether TikTok will be any sort of reliable source of information (spoiler: it won’t) under far right wing billionaire ownership.
TikTok under ByteDance ownership certainly raised privacy, propaganda, and national security concerns. But under ByteDance the platform at least tried to behave so it could continue operating in the U.S. With Trump having dismantled all our privacy, NatSec, and fraud regulators, TikTok’s new U.S. owners will face arguably fewer regulatory constraints on their worst impulses than ever.
Murdoch clearly wants a modern media extension of his existing Fox News empire given his core audience is dying off. Ellison, a staunch supporter of Netanyahu and his industrialized mass murder of children, clearly wants to leverage TikTok as a new media extension for whatever fresh hell he and Bari Weiss are building over at CBS. I’d expect ample authoritarian apologia.
To be clear, the deal hasn’t been fully finalized yet. It’s still not clear whether it will meet the legal requirements of the Protecting Americans from Foreign Adversary Controlled Applications Act, especially given the ongoing debate over who exactly will own the underlying algorithm. There will likely be some opportunities for activists and lawyers to throw sand in the gears.
But make no mistake: if this deal goes through, TikTok will absolutely be headed the way of Twitter under Elon Musk. They’ll likely try to leave things much the same for 6-12 months to pretend that’s not going to be the case, but I suspect that, ultimately, its use for right wing propaganda will be obvious.
Creating an internet full of wall-to-wall racist and corporatist right wing agitprop was always the end game of MAGA’s bogus “conservative censorship” and “we support antitrust reform now” claims. It was never remotely subtle, and you can’t say you weren’t warned, repeatedly.
You’d like to think that the conversion of TikTok into a far right wing safe space will cause a mass exodus of ethical people off the platform, but as we’ve seen with Twitter (especially when it comes to journalists’ continued use of a website owned by an overt white supremacist), that’s not something you can rely on.
You’d also like to think that the hijacking of TikTok will create the opportunity for innovators to create a better, more ethical short-form video platform not owned by assholes actively cheering on the destruction of foundational democracy. Here too, time will tell.
Their goal is obvious, but as some are quick to point out, their success is far from guaranteed. Remember what happened with Rupert Murdoch and MySpace? AT&T’s attempted domination of video? These sorts of domination plays, especially in mass modern media, never quite go the way rich brunchlords planned, and it’s not like Oracle executives have any serious experience with consumer-facing product success, much less any understanding of modern media.
That said, you’d need to define “success.” The billionaire architects of this new modern era right wing propaganda bullhorn (which may soon comprise Fox, CNN, Sinclair, TikTok, Twitter, and countless other media properties) have no shortage of money to burn on profit-losing propaganda ventures in a country that just took a hatchet to any remaining financial or consumer protection regulators.
They may never have the competency to actually execute, but given the already extremely shaky status of journalism, the media, and informed consensus, I think emphatic alarmism remains the right response to the grand, unsubtle mass media plans of our shittiest billionaires.
Donald Trump has successfully used xenophobia and fake concerns about propaganda and national security to get what he’s long wanted: TikTok (and its fat ad revenues) is poised to be sold off to his right wing billionaire buddies and, inevitably, slowly converted into a right wing propaganda safe space.
Andreessen and Ellison are, to be clear, technofascists who don’t believe in democracy, regulatory oversight, or basic privacy protections for consumers. The remaining 20 percent would stay in the hands of Chinese ownership and the Chinese government, which still has to finalize the deal. This is not, contrary to what you’ll read in the pages of WAPO or CNN, a net improvement.
Oh, and Donald Trump will get to appoint a board member. Remember when Republicans were against government interference in private businesses?
Trump still didn’t get what he really wanted: reporting in the Financial Times suggests that China will still technically own and control the algorithm used to power TikTok, something Trump had previously said was essential to any deal. Early reporting by the Wall Street Journal also indicates that existing TikTok users will have to migrate to a new app.
“Come join an app majority owned by Donald Trump’s unhinged right wing billionaire friends, where there are no competent safeguards against hate speech and right wing propaganda” is going to be a tricky selling point, one that could ultimately throw sand in the gears and create the potential for another (hopefully better?) company to disrupt their plans.
Now is the time for Silicon Valley to engage in that boundless innovation we’ve all heard so much about.
It Was Never About Privacy And National Security
I’ve noted more times than I can count that the push to ban TikTok was never really about protecting American privacy. If that were true, we would pass a real privacy law and craft serious penalties for companies and executives that play fast and loose with sensitive American data, be it TikTok or the myriad of super dodgy apps, telecoms, and hardware vendors monetizing your phone usage.
It was never really about propaganda. If that were true, we’d take aim at the extremely well funded authoritarian propaganda machine and engage in content moderation of race-baiting political propaganda that’s filling the brains of young American men with pudding and hate. We’d push for the media literacy education reforms common in countries like Finland.
TikTok’s Chinese ownership did pose some very real security, privacy, and NatSec concerns, but the MAGA folks “fixing” the problem were never competent or good faith actors, and the push to ban, then hijack, TikTok was always about ego, money, and information control.
American authoritarians are following the same playbook we’ve seen in countries like Hungary, where new and old media and journalism are either destroyed or hijacked in service to authoritarian leadership. It’s happening here, now, and the very least ethical people can do is recognize it and put up a fight.
One afternoon in mid-September, a group of middle school girls in rural East Tennessee decided to film a TikTok video while waiting to begin cheerleading practice.
In the 45-second video posted later that day, one girl enters the classroom holding a cellphone. “Put your hands up,” she says, while a classmate flickers the lights on and off. As the camera pans across the classroom, several girls dramatically fall back on a desk or the floor and lie motionless, pretending they were killed.
When another student enters and surveys the bodies on the ground in poorly feigned shock, few manage to suppress their giggles. Throughout the video, which ProPublica obtained, a line of text reads: “To be continued……”
Penny Jackson’s 11-year-old granddaughter was one of the South Greene Middle School cheerleaders who played dead. She said the co-captains told her what to do and she did it, unaware of how it would be used. The next day, she was horrified when the police came to school to question her and her teammates.
By the end of the day, the Greene County Sheriff’s Department charged her and 15 other middle school cheerleaders with disorderly conduct for making and posting the video. Standing outside the school’s brick facade, Lt. Teddy Lawing said in a press conference that the girls had to be “held accountable through the court system” to show that “this type of activity is not warranted.” The sheriff’s office did not respond to ProPublica’s questions about the incident.
Widespread fear of school shootings is colliding with algorithms that accelerate the spread of the most outrageous messages, causing chaos across the country. Social videos, memes and retweets are becoming fodder for criminal charges in an era of heightened responses to student threats. Authorities say harsh punishment is crucial to deter students from making threatening posts that multiply rapidly and obscure their original source.
In many cases, especially in Tennessee, police are charging students for jokes and misinterpretations, drawing criticism from families and school violence prevention experts who believe a measured approach is more appropriate. Students are learning the hard way that they can’t control where their social media messages travel. In central Tennessee last fall, a 16-year-old privately shared a video he created using artificial intelligence, and a friend forwarded it to others on Snapchat. The 16-year-old was expelled and charged with threatening mass violence, even though his school acknowledged the video was intended as a private joke.
Other students have been charged with felonies for resharing posts they didn’t create. As ProPublica wrote in May, a 12-year-old in Nashville was arrested and expelled this year for sharing a screenshot of threatening texts on Instagram. He told school officials he was attempting to warn others and wanted to “feel heroic.”
In Greene County, the cheerleaders’ video sent waves through the small rural community, especially since it was posted several days after the fatal Apalachee High School shooting one state away. The Georgia incident had spawned thousands of false threats looping through social media feeds across the country. Lawing told ProPublica and WPLN at the time that his officers had fielded about a dozen social media threats within a week and struggled to investigate them. “We couldn’t really track back to any particular person,” he said.
But the cheerleaders’ video, with their faces clearly visible, was easy to trace.
Jackson understands that the video was in “very poor taste,” but she believes the police overreacted and traumatized her granddaughter in the process. “I think they blew it completely out of the water,” she said. “To me, it wasn’t serious enough to do that, to go to court.”
That perspective is shared by Makenzie Perkins, the threat assessment supervisor of Collierville Schools, outside of Memphis. She is helping her school district chart a different path in managing alleged social media threats. Perkins has sought specific training on how to sort out credible threats online from thoughtless reposts, allowing her to focus on students who pose real danger instead of punishing everyone.
The charges in Greene County, she said, did not serve a real purpose and indicate a lack of understanding about how to handle these incidents. “You’re never going to suspend, expel or charge your way out of targeted mass violence,” she said. “Did those charges make that school safer? No.”
When 16-year-old D.C. saw an advertisement for an AI video app last October, he eagerly downloaded it and began roasting his friends. In one video he created, his friend stood in the Lincoln County High School cafeteria, his mouth and eyes moving unnaturally as he threatened to shoot up the school and bring a bomb in his backpack. (We are using D.C.’s initials and his dad’s middle name to protect their privacy, because D.C. is a minor.)
D.C. sent it to a private Snapchat group of about 10 friends, hoping they would find it hilarious. After all, they had all teased this friend about his dark clothes and quiet nature. But the friend did not think it was funny. That evening, D.C. showed the video to his dad, Alan, who immediately made him delete it as well as the app. “I explained how it could be misinterpreted, how inappropriate it was in today’s climate,” Alan recalled to ProPublica.
It was too late. One student in the chat had already copied D.C.’s video and sent it to other students on Snapchat, where it began to spread, severed from its initial context.
That evening, a parent reported the video to school officials, who called in local police to do an investigation. D.C. begged his dad to take him to the police station that night, worried the friend in the video would get in trouble — but Alan thought it could wait until morning.
The next day, D.C. rushed to school administrators to explain and apologize. According to Alan, administrators told D.C. they “understood it was a dumb mistake,” uncharacteristic for the straight-A student with no history of disciplinary issues. In a press release, Lincoln County High School said administrators were “made aware of a prank threat that was intended as a joke between friends.”
But later that day, D.C. was expelled from school for a year and charged with a felony for making a threat of mass violence. As an explanation, the sheriff’s deputy wrote in the affidavit, “Above student did create and distribute a video on social media threatening to shoot the school and bring a bomb.”
During a subsequent hearing where D.C. appealed his school expulsion, Lincoln County Schools administrators described their initial panic when seeing the video. Alan shared an audio recording of the hearing with ProPublica. Officials didn’t know that the video was generated by AI until the school counselor saw a small logo in the corner. “Everybody was on pins and needles,” the counselor said at the hearing. “What are we going to do to protect the kids or keep everybody calm the next day if it gets out?” The school district declined to respond to ProPublica’s questions about how officials handled the incident, even though Alan signed a privacy waiver giving them permission to do so.
Alan watched D.C. wither after his expulsion: His girlfriend broke up with him, and some of his friends began to avoid him. D.C. lay awake at night looking through text messages he sent years ago, terrified someone decades later would find something that could ruin his life. “If they are punishing him for creating the image, when does his liability expire?” Alan wondered. “If it’s shared again a year from now, will he be expelled again?”
Alan, a teacher in the school district, coped by voraciously reading court cases and news articles that could shed light on what was happening to his son. He stumbled on a case hundreds of miles north in Pennsylvania, the facts of which were eerily similar to D.C.’s.
In April 2018, two kids, J.S. and his friend, messaged back and forth mocking another student by suggesting he looked like a school shooter. (The court record uses J.S. instead of his full name to protect the student’s anonymity.) J.S. created two memes and sent them to his friend in a private Snapchat conversation. His friend shared the memes publicly on Snapchat, where they were seen by 20 to 40 other students. School administrators permanently expelled J.S., so he and his parents sued the school.
In 2021, after a series of appeals, Pennsylvania’s highest court ruled in J.S.’s favor. While the memes were “mean-spirited, sophomoric, inartful, misguided, and crude,” the state Supreme Court justices wrote in their opinion, they were “plainly not intended to threaten Student One, Student Two, or any other person.”
The justices also shared their sympathy with the challenges schools faced in providing a “safe and quality educational experience” in the modern age. “We recognize that this charge is compounded by technological developments such as social media, which transcend the geographic boundaries of the school. It is a thankless task for which we are all indebted.”
After multiple disciplinary appeals, D.C.’s school upheld the decision to keep him out of school for a year. His parents found a private school that agreed to let him enroll, and he slowly emerged from his depression to continue his straight-A streak there. His charge in court was dismissed in December after he wrote a 500-word essay for the judge on the dangers of social media, according to Alan.
Thinking back on the video months later, D.C. explained that jokes about school violence are common among his classmates. “We try to make fun of it so that it doesn’t seem as serious or like it could really happen,” he said. “It’s just so widespread that we’re all desensitized to it.”
He wonders if letting him back into school would have been more effective in deterring future hoax threats. “I could have gone back to school and said, ‘You know, we can’t make jokes like that because you can get in big trouble for it,’” he said. “I just disappeared for everyone at that school.”
When a school district came across an alarming post on Snapchat in 2023, officials reached out to Safer Schools Together, an organization that helps educators handle school threats. In the post, a pistol flanked by two assault rifles lay on a rumpled white bedsheet. The text overlaid on the photo read, “I’m shooting up central I’m tired of getting picked on everyone is dying tomorrow.”
Steven MacDonald, training manager and development director for Safer Schools Together, recounted this story in a virtual tutorial posted last year on using online tools to trace and manage social media threats. He asked the school officials watching his tutorial what they would do next. “How do we figure out if this is really our student’s bedroom?”
According to MacDonald, it took his organization’s staff only a minute to put the text in quotation marks and run it through Google. A single local news article popped up showing that two kids had been arrested for sharing this exact Snapchat post in Columbia, Tennessee — far from the original district.
“We were able to reach out and respond and say, ‘You know what, this is not targeting your district,’” MacDonald said. Administrators were reassured there was a low likelihood of immediate violence, and they could focus on finding out who was recirculating the old threat and why.
In the training video, MacDonald reviewed skills that, until recently, have been more relevant to police investigators than school principals: How to reverse image search photos of guns to determine whether a post contains a stock image. How to use Snapchat to find contact names for unknown phone numbers. How to analyze the language in the social media posts of a high-risk student.
“We know that why you’re here is because of the increase and the sheer volume of these threats that you may have seen circulated, the non-credible threats that might have even ended up in your districts,” he said. Between last April and this April, Safer Schools Together identified drastic increases in “threat related behavior” and graphic or derogatory social media posts.
Back in the Memphis suburbs, Perkins and other Collierville Schools administrators have attended multiple digital threat assessment training sessions hosted by Safer Schools Together. “I’ve had to learn a lot more apps and social media than I ever thought,” Perkins said.
The knowledge, she said, came in handy during one recent incident in her district. Local police called the district to report that a student had called 911 and reported an Instagram threat targeting a particular school. They sent Perkins a photo of the Instagram profile and username. She began using open source websites to scour the internet for other appearances of the picture and username. She also used a website that allows people to view Instagram stories without alerting the user to gather more information.
With the help of police, Perkins and her team identified that the post was created by someone at the same IP address as the student who had reported the threat. The girl, who was in elementary school, confessed to police that she had done it.
The next day, Perkins and her team interviewed the student, her parents and teachers to understand her motive and goal. “It ended up that there had been some recent viral social media threats going around,” Perkins said. “This individual recognized that it drew in a lot of attention.”
Instead of expelling the girl, school administrators worked with her parents to develop a plan to manage her behavior. They came up with ideas for the girl to receive positive attention while stressing to her family that she had exhibited “extreme behavior” that signaled a need for intensive help. By the end of the day, they had tamped down concerns about immediate violence and created a plan of action.
In many other districts, Perkins said, the girl might have been arrested and expelled for a year without any support — which does not help move students away from the path of violence. “A lot of districts across our state haven’t been trained,” she said. “They’re doing this without guidance.”
Watching the cheerleaders’ TikTok video, it would be easy to miss Allison Bolinger, then the 19-year-old assistant coach. The camera quickly flashes across her standing and smiling in the corner of the room watching the pretend-dead girls.
Bolinger said she and the head coach had been next door planning future rehearsals. Bolinger entered the room soon after the students began filming and “didn’t think anything of it.” Cheerleading practice went forward as usual that afternoon. The next day, she got a call from her dad: The cheerleaders were suspended from school, and Bolinger would have to answer questions from the police.
“I didn’t even know the TikTok was posted. I hadn’t seen it,” she said. “By the time I went to go look for it, it was already taken down.” Bolinger said she ended up losing her job as a result of the incident. She heard whispers around the small community that she was responsible for allowing them to create the video.
Bolinger said she didn’t realize the video was related to school shootings when she was in the room. She often wishes she had asked them at the time to explain the video they were making. “I have beat myself up about that so many times,” she said. “Then again, they’re also children. If they don’t make it here, they’ll probably make it at home.”
Jackson, the grandmother of the 11-year-old in the video, blames Bolinger for not stopping the middle schoolers and faults the police for overreacting. She said all the students, whether or not their families hired a lawyer, got the same punishment in court: three months of probation for a misdemeanor disorderly conduct charge, which could be extended if their grades dropped or they got in trouble again. Each family had to pay more than $100 in court costs, Jackson said, a significant amount for some.
Jackson’s granddaughter successfully completed probation, which also involved writing and submitting a letter of apology to the judge. She was too scared about getting in trouble again to continue on the cheerleading team for the rest of the school year.
Jackson thinks that officials’ outsize response to the video made everything worse. “They shouldn’t even have done nothing until they investigated it, instead of making them out to be terrorists and traumatizing these girls,” she said.
I wasn’t wrong when I wrote that Apple, Google, Akamai, and others faced tremendous liability risk if they continued to provide any of their hosting services to TikTok. Of course, not because it should be illegal – the operative law is incredibly unconstitutional, despite the trite reasoning by the Supreme Court finding it otherwise. But because, as long as it remains an enforceable law, it includes terms that make providing these services to TikTok punishable by exorbitant sanctions that can potentially run into the billions of dollars.
And yet, here all these companies are, nevertheless providing these services, as if there were no law telling them they can’t. So what happened?
Apparently, Trump and US Attorney General Pam Bondi are what happened.
Some of this we knew already. The TikTok ban was a ticking time bomb for whoever won the 2024 presidential election, because from almost the very first moment of the new term the law’s teeth would be fully sharpened, effectively banning TikTok in America and penalizing anyone who helped it provide service anyway. With the law now fully ripe, the President would have no choice but to enforce it, consistent with his constitutional obligation to “take Care that the Laws be faithfully executed,” no matter how crummy, stupid, or illiberal those laws may be. Some presidents have refused to enforce laws they consider unconstitutional and thus inconsistent with their oath to uphold the Constitution, but taking that position looks extremely dubious when the law in question has already been found constitutional by the Supreme Court (no matter how speciously). And in no circumstance does the President have the constitutional authority to change laws duly passed by Congress, which holds exclusive legislative authority in our constitutional system; the President can neither pass legislation nor modify legislation that has been passed. Thus the President’s discretion with respect to this law is limited, except in two potential ways, neither of which allows what has happened here.
One way is by the terms of the law itself, which allowed the President to provide TikTok a short reprieve before it got fully banned, but only if certain conditions were met, namely that negotiations for its imminent sale were significantly underway. Trump has now issued several executive orders purporting to give TikTok and its third party enablers a stay of execution, but never permissibly, because the statutory conditions that would have entitled him to grant one were never met. The “extensions” he issued were therefore an abuse of an imaginary power Trump does not actually have, either under the terms of the law itself or any other constitutional authority. They are thus a legal nullity no one can safely rely on.
Then there is the other way, which is through the exercise of prosecutorial discretion. The Constitution itself does not actually authorize the President to pick and choose which laws will get enforced—in fact, per its plain language, his job is to enforce all of them—but the realities of law enforcement mean that these sorts of choices effectively happen all the time, at least to some extent. Prosecutors are always deciding whom to charge and how because it can’t realistically be “everyone for everything” nor would we want it to be. Still, there have been other rules and norms that have tried to ensure that federal prosecutions would not be arbitrary and unjust, including the long-recognized separation between the President and the Department of Justice, which helped to ensure that prosecutions would be consistent with the rule of law and not vulnerable to the President’s political whims.
Yet here we are, now learning that, at Trump’s behest, AG Bondi has exercised this supposed prosecutorial discretion by sending letters to these third party companies promising not to enforce the law against them. For example, here is some language from one letter to Apple, with the promise phrased as a(n extremely ludicrous if not also illiterate) determination that there is no liability that could be prosecuted:
Based on the Attorney General’s review of the facts and circumstances, Apple Inc. has committed no violation of the Act and Apple Inc. has incurred no liability under the Act during the Covered Period or the Extended Covered Period. Apple Inc. may continue to provide services to TikTok as contemplated by these Executive Orders without violating the Act, and without incurring any legal liability.
The bigger problem, however, is that the letters do more than tell the companies that Trump will not prosecute them, probably because that promise alone would not be enough to ameliorate the legal risk the companies face by providing TikTok services in violation of the statutory language telling them not to. After all, at five years the statute of limitations—the period during which a violation of the law can still be prosecuted—extends beyond a single presidential term. If there’s a new president, with a new Attorney General, violations happening now could still be prosecuted then.
Perhaps realizing that this promise not to prosecute would probably not be enough to induce the platforms to continue to provide their services, Bondi, on behalf of Trump, attempted to sweeten the pot by offering an irrevocable guarantee that no one could ever prosecute any of the third party companies for continuing to provide services to TikTok (despite any pesky statutory language to the contrary):
The Department of Justice is also irrevocably relinquishing any claims the United States might have had against Apple Inc. for the conduct proscribed in the Act during the Covered Period and Extended Covered Period, with respect to TikTok and the larger family of ByteDance Ltd. and TikTok, Inc. applications covered under the Act. This is derived from the Attorney General’s plenary authority over all litigation, civil and criminal, to which the United States, its agencies, or departments, are parties, as well as the Attorney General’s authority to enter settlements limiting the future exercise of executive branch discretion.
It appears that this “no backsies forever!” promise has done the trick, as everyone’s back in business. But the question is why, because this sort of promise is not a thing that she, or anyone else, can provide under American constitutional law. What she calls a “plenary power” (aka a thing she thinks her job entitles her to do) is what Steve Vladeck calls a “dispensing power,” which is most definitely something she does not get to exercise, nor does Trump. As he explains, this sort of law-by-regal-decree was a creature of the English monarchy before America’s founding, one that pro-democracy forces in England eventually did away with and that America’s founders refused to allow from the start.
The “dispensing” power claimed by pre-18th-century English kings was the power to decide, on an ad hoc basis, which laws could and should be set aside in individual cases—to exempt the King’s favorites not just from the retrospective operation of criminal laws (for which after-the-fact pardons could have the same effect), but from the retrospective and prospective application of civil laws, as well. The idea was that the King could literally “dispense” with application of whichever laws he wanted, for whatever reasons he wanted, in whatever cases he wanted.
Here in America, our Constitution provides no room for such executive power. Laws emanate from the people as expressed through Congress, and the President of the United States has no power to mess with that democratic authority. That Trump has done so, via Bondi, is yet another unconstitutional power grab and thus yet another legal nullity.
Which means that the third party companies violating the law by providing services are still in just as much legal jeopardy as they would have been had they provided the services without the Bondi letter, which is devoid of legal effect. These companies are openly violating the law, and not only do they still have to worry about enforcement by the next president, but given that none of Trump’s promises are worth anything, they are still in jeopardy from this one too! In fact, now that there is further news that Trump is currently unhappy with TikTok and a lot more keen to see it banned, it looks like a lot of jeopardy.
Of course, perhaps in this new era of apparently tolerable corruption by the Chief Executive of the United States, the third party companies made the pragmatic decision that they might be tempting more trouble from the Trump Administration if they did not go back to providing the services he at least once seemed to want them to provide, as the letters suggest. Perhaps they decided it would be better to go along to get along, even though, if they were to lose the bet and find themselves on the receiving end of an enforcement action, in any administration, it would likely result in enormous financial liability.
On the other hand, should that day come, the third party companies would still have some cards to play to try to fight back. One might be based on reliance harms, in light of Bondi’s promises, although given how facially void those promises are, a court could fairly ask how the companies could have been so dumb as to rely on them. Courts are usually only sympathetic to reliance that is reasonable, and having unlawful activities blessed by an unconstitutional authority is arguably not particularly reasonable. Then again, since we are so far through the looking glass with unlawful, unconstitutional, and corrupt executive behavior, the companies might also be able to raise some sort of defense based on duress. Perhaps even plausibly, but it is a rather bet-the-company decision to presume it will work.
And the better argument is likely the one suggested in the earlier post, which has so far, disappointingly, never been raised in court at all: that this law is still massively unconstitutional, particularly as applied to them, the third party companies. So far the Supreme Court has only said it is perfectly fine for Congress to ban a platform; it has not said that it is equally fine to ban other platforms from providing service to it. And given lots of other precedent, including the pretty fresh Moody v. NetChoice decision, which acknowledged platforms’ own First Amendment rights in providing their facilitating services, it is not clear that the Court would find it okay.
But these companies have now bet billions that the Court won’t bless the law with respect to them, even though, should that argument eventually have to be made, they would be in the dubious position of never having chosen to challenge the law, having instead only openly defied it.
Remember when TikTok was supposedly an urgent national security threat that required emergency legislation? Funny how that “emergency” keeps getting 75-day extensions.
Trump is reportedly about to hit the snooze button on TikTok enforcement for the third time, extending a deadline that was supposedly so urgent that Congress had to rush through legislation ignoring basic First Amendment protections. This will be the third extension since January — which should tell you everything about how “urgent” this national security threat actually was.
With a mid-June deadline approaching and trade talks with China in limbo, Trump is expected to sign an executive order staving off enforcement of a law banning or forcing the sale of the app, according to people familiar with his plan.
It would be the third extension since Trump took office in January. The current one expires June 19.
The pattern here is obvious: Biden championed the ban, then refused to enforce it on his way out the door. Trump promised to fix everything with a deal in 75 days, then extended that deadline when China predictably balked. Now he’s extending it again, treating federal law like a negotiating chip he can deploy when convenient.
Want proof this was never really about national security? When Trump spoke with Xi Jinping this week about trade — you know, the perfect opportunity to address this supposed existential threat — TikTok didn’t even come up.
Trump spoke by phone on Thursday with Chinese President Xi Jinping amid a breakdown in trade negotiations. The two leaders agreed that their teams would hold a new round of trade talks soon. The Chinese team is led by Vice Premier He Lifeng. The U.S. would be represented by Treasury Secretary Scott Bessent, Commerce Secretary Howard Lutnick and U.S. Trade Representative Jamieson Greer, Trump said.
TikTok didn’t come up on the call Thursday, according to a Trump administration official.
If TikTok really posed the kind of national security risk that justified circumventing the First Amendment (which the Supreme Court said was only justified based on the supposed severity of the threat), wouldn’t it be a priority in direct talks with the Chinese president? Instead, it’s apparently not worth mentioning.
But, also, even if the entire law weren’t a moral panic smokescreen, we have a more fundamental problem: in a country where the rule of law is functioning, presidents don’t get to selectively ignore federal laws via executive order. That’s not how the Constitution is supposed to work. But Trump is doing exactly that — and worse, he’s using the threat of future enforcement as leverage to engineer his preferred outcome.
I know that Trump is making a mockery of the Constitution in so many different ways right now, but it doesn’t mean that this particular attack on it should be ignored.
The TikTok saga has become a perfect case study in how moral panics work: manufacture urgency, rush through bad legislation, then quietly let it fade when the political winds shift. The only difference here is that the law is still on the books, being wielded like a sword of Damocles over a platform that hosts American speech.
We called this nonsense from the beginning, and every snooze button press proves us right. The real threat to American democracy isn’t kids posting dance videos — it’s politicians who treat the rule of law like a game show where they get to pick which laws to enforce based on what plays well on any given day.
If you’re the President of the United States and you don’t like a law, you can apparently just… decide not to enforce it for a while? I mean, it’s not supposed to work that way, but for the past 74 days, that’s exactly what’s happened with the TikTok ban. Not just ignoring it quietly — Trump has explicitly declared we’re ignoring it. And today, he announced we’ll keep ignoring it for another 75 days.
The history here is instructive. First, Trump wanted to ban TikTok because teens were mean to him on it. Then Biden wanted to ban it because… China bad? Then Congress actually passed a ban because kids were using TikTok to express opinions about Gaza. Throughout all of this, the ban remained both stupid and unconstitutional (yes, even though the Supreme Court disagrees).
Somehow, this collection of terrible reasons resulted in an actual law, scheduled to take effect the day before the new administration started. But then Trump, whose stance conveniently shifted after a major TikTok investor donated to his campaign, simply declared “let’s ignore the law for 75 days” while floating vague ideas about “the US” buying TikTok.
For 75 days, we’ve mostly heard whispers about potential buyers expressing interest. There was some talk about how a deal was “imminent,” though many of the leaked details sound suspiciously familiar — China would retain control of the algorithm, data would be hosted on Oracle servers, with Oracle auditing for safety. If this sounds like déjà vu, it should: this already happened back in 2022. We wrote about it at the time, but apparently that was in a parallel universe, because everyone has been acting like it didn’t happen.
Anyway, apparently that “imminent” deal wasn’t actually so imminent. Because what is time, really?
Again, let’s be clear, because this is beyond ridiculous. The President has no authority to just declare “we’re ignoring this law for 75 days unless you do the thing I want.” But, that seems to be what a bunch of people are just going with. Astounding.
And, remember, this comes after years of politicians and the media insisting loudly and repeatedly that TikTok was “digital fentanyl” and the most dangerous thing in the world. The reasons would change based on who you were talking to, but either it was the Chinese Communist Party spying on all your phones (not how this works) or they were promoting pro-China propaganda (even as US views towards China are at all-time lows) or they were promoting division (seems like that was cable news actually) or they were promoting terrorists (I dunno, man, none of this makes sense).
The fundamental problem isn’t just that this is Calvinball policymaking — though it absolutely is that. It’s that we’ve stumbled into a world where federal laws have expiration dates determined by presidential mood swings. And while everyone’s focused on whether TikTok will sell or survive (that is, if they’re not focused on their retirement savings being drained by the whole “destroying the economy through not understanding trade deficits” thing), they’re missing the bigger story here: we’re running an experiment to see if laws still matter when the president decides they don’t. Early results aren’t encouraging.
I’ve noted more times than I can count that the push to ban TikTok was never really about protecting American privacy. If that were true, we would pass a real privacy law and craft serious penalties for companies and executives that play fast and loose with sensitive American data.
It was never really about propaganda. If that were true, we’d take aim at the extremely well funded authoritarian propaganda machine and engage in content moderation of race-baiting political propaganda that’s filling the brains of young American men with pudding and hate.
Enter the fine folks at (Trump friendly) Andreessen Horowitz, who are emerging as a late-stage bidder for a big chunk of whatever winds up being left of TikTok alongside (Trump friendly) Oracle:
“US venture capital giant Andreessen Horowitz is in talks to invest in social media platform TikTok as part of an effort led by Donald Trump to wrest control of the popular video app from its Chinese owners. The venture capital group, whose co-founder Marc Andreessen is a vocal supporter of the US president, is in talks to add new outside investment that will buy out TikTok’s Chinese investors, as part of a bid led by Oracle and other American investors to carve it out of its parent company ByteDance.”
Thanks to America’s silly and performative ban, ByteDance has until April 5 to sell TikTok to U.S.-controlled companies. There’s still no word on what a finalized deal will look like, and ByteDance has had strong reservations about including the company’s engagement algorithms as part of any deal.
Marc Andreessen, who has become increasingly incoherent as he prostrates himself and his empire to King Dingus, clearly wants TikTok ad money, but he also wants information control. Andreessen is already on the board of Meta and one of the investors in Elon’s takeover of Twitter. If he grabs a large stake in TikTok, an overt authoritarian will have meaningful power over the country’s three biggest social media platforms. That is, you know, bad for a long list of reasons that should be obvious.
Modern U.S. authoritarians don’t want major popular tech platforms engaging in content moderation of right wing propaganda and disinformation, a cornerstone of Trump power (since their actual policies, like letting shitty corporations do whatever they want, dismantling civil and labor rights, and giving billionaires more tax cuts, are broadly unpopular amongst the plebs).
There was, if you recall, a whole three year news cycle where major news outlets propped up the myth that this wasn’t about control, propaganda, and forcing unpopular right wing policies down everybody’s throat, it was about reining in corporate power and “holding big tech accountable.” These GOP efforts were, time and time again, portrayed in the press as serious, adult, good faith policymaking.
Really a great job on all fronts, from policymakers to U.S. journalism. Everybody really nailed it.
TikTok always trafficked heavily in right wing engagement bait because, as an amoral algorithmic engagement machine, it likes to shovel more of the stuff you already like in your direction. But at the same time, I personally found I was more likely to encounter left wing content on TikTok than I would on, say, Facebook’s Reels. Ultimately, TikTok has veered even harder right as it has tried to appease U.S. authoritarians.
However right wing friendly you think TikTok is now, it will be notably worse under Oracle and Andreessen Horowitz, and far more likely to take action against content and creators Trumpism doesn’t like. All in service to authoritarian control, and to chasing where the real money is in American media right now: telling young angry men that all of their worst lizard-brained impulses are correct.