Brian Reed’s “Question Everything” podcast built its reputation on careful journalism that explores moral complexity within the journalism field. It’s one of my favorite podcasts. Which makes his latest pivot so infuriating: Reed has announced he’s now advocating to repeal Section 230—while demonstrating he fundamentally misunderstands what the law does, how it works, and what repealing it would accomplish.
If you’ve read BestNetTech for basically any length of time, you’ll know that I feel the exact opposite on this topic. Repealing, or really almost all proposals to reform Section 230, would be a complete disaster for free speech on the internet, including for journalists.
The problem isn’t advocacy journalism—I’ve been doing that myself for years. The problem is Reed’s approach: decide on a solution, then cherry-pick emotional anecdotes and misleading sources to support it, while ignoring the legal experts who could explain why he’s wrong. It’s the exact opposite of how to do good journalism, which is unfortunate for someone who holds out his (otherwise excellent!) podcast as a place to explore how to do journalism well.
Last week, he published the first episode of his “get rid of 230” series, and it contains so many problems, mistakes, and bits of outright nonsense that I felt I had to write about it now, in the hopes that Brian might be more careful in future pieces. (Reed has said he plans to interview critics of his position, including me, but only after the series gets going—which seems backwards for someone advocating major legal changes.)
The framing of this piece is built around the conspiracy theories regarding the Sandy Hook school shooting, and someone who used to believe them. First off, this feels like a cheap journalistic device: basing a larger argument on an emotional hook that clouds the issues and the trade-offs. The Sandy Hook shooting was horrible! The fact that some jackasses pushed conspiracy theories about it is also horrific! But that framing primes you, in classic “something must be done, this is something, we must do this” fashion, to accept Reed’s preferred solution: repeal 230.
But he doesn’t talk to any actual experts on 230, misrepresents what Section 230 does, misleads people about how repealing 230 would actually affect that specific (highly emotional) story, and then closes on an emotionally manipulative note: convincing the person he spoke to, who used to believe the Sandy Hook conspiracy theories, that getting rid of 230 would work, despite her lack of understanding of what would actually happen.
In listening to the piece, it struck me that Reed is doing a version of what he (somewhat misleadingly) claims social media companies are doing: hooking his audience with manipulative misrepresentations to keep them engaged, and convincing them that something false is true. It’s a shame, but it’s certainly not journalism.
Let’s dig into some of the many problems with the piece.
The Framing is Manipulative
I already mentioned that the decision to frame the entire piece around one extraordinary but horrific story is manipulative, but it goes beyond that. Reed compares the fact that some of the Sandy Hook victims’ families successfully sued Alex Jones for defamation over the lies and conspiracy theories he spread about the shooting to the fact that they can’t sue YouTube.
But in 2022, family members of 10 of the Sandy Hook victims did win a defamation case against Alex Jones’s company, and the verdict was huge. Jones was ordered to pay the family members over a billion dollars in damages.
Just this week, the Supreme Court declined to hear an appeal from Jones over it. A semblance of justice for the victims, though infuriatingly, Alex Jones filed for bankruptcy and has avoided paying them so far. But also, and this is what I want to focus on, the lawsuits are a real deterrent to Alex Jones and others who will likely think twice before lying like this again.
So now I want you to think about this. Alex Jones did not spread this lie on his own. He relied on social media companies, especially YouTube, which hosts his show, to send his conspiracy theory out to the masses. One YouTube video spouting this lie shortly after the shooting got nearly 11 million views in less than 2 weeks. And by 2018, when the family sued him, Alex Jones had 1.6 billion views on his YouTube channel. The Sandy Hook lie was laced throughout that content, burrowing its way into the psyche of millions of people, including Kate and her dad.
Alex Jones made money off of each of those views. But so did YouTube. Yet, the Sandy Hook families, they cannot sue YouTube for defaming them because of section 230.
There are a ton of important details left out of this that, if actually presented, might change the understanding here. First, while the families did win that huge verdict, much of that was because Jones defaulted. He didn’t really fight the defamation case, basically ignoring court orders to turn over discovery. It was only after the default that he really tried to fight things at the remedy stage. Indeed, part of the Supreme Court cert petition that was just rejected was his claim that he didn’t get a fair trial because of the default.
You simply can’t assume that, because the families won that very bizarre case, in which Jones treated the entire affair with contempt, they would also have a case against YouTube. That’s not how this works.
This is Not How Defamation Law Works
Reed correctly notes that the bar for defamation is high, including that there has to be knowledge to qualify, but then immediately seems to forget that. Without a prior judicial determination that specific content is defamatory, no platform—with or without Section 230—is likely to meet the knowledge standard required for liability. That’s kind of important!
Now this is really important to keep in mind. Freedom of speech means we have the freedom to lie. We have the freedom to spew absolute utter bullshit. We have the freedom to concoct conspiracy theories and even use them to make money by selling ads or subscriptions or what have you.
Most lies are protected by the First Amendment and they should be.
But there’s a small subset of lies that are not protected speech even under the First Amendment. The old shouting fire in a crowded theater, not necessarily protected. And similarly, lies that are defamatory aren’t protected.
In order for a statement to be defamatory, okay, for the most part, whoever’s publishing it has to know it’s untrue and it has to cause damage to the person or the institution the statement’s about. Reputational damage, emotional damage, or a lie could hurt someone’s business. The bar for proving defamation is high in the US. It can be hard to win those cases.
The key part here: while there’s some nuance, for the most part the publisher has to know the statement is untrue. And the bar here is very high. That knowledge standard is what allows defamation law to survive under the First Amendment.
It’s why booksellers can’t be held liable for “obscene” books on their shelves. It’s why publishers aren’t held liable for books they publish, even if those books lead people to eat poisonous mushrooms. The knowledge standard matters.
And even though Reed mentions the knowledge point, he seems to immediately forget it. Nor does he even attempt to deal with the question of how an algorithm can have the requisite knowledge (hint: it can’t). He just brushes past that kind of important part.
But it’s the key to why his entire argument premise is flawed: just making it so anyone can sue web platforms doesn’t mean anyone will win. Indeed, they’ll lose in most cases. Because if you get rid of 230, the First Amendment still exists. But, because of a bunch of structural reasons explained below, it will make the world of internet speech much worse for you and me (and the journalists Reed wants to help), while actually clearing the market of competitors to the Googles and Metas of the world that Reed is hoping to punish.
That’s Not How Section 230 Works
Reed’s summary is simply inaccurate. And not in a “well, we can differ on how we describe it” way. He makes blatant factual errors. First, he claims that “only internet companies” get 230 protections:
These companies have a special protection that only internet companies get. We need to strip that protection away.
But that’s wrong. Section 230 applies to any provider of an interactive computer service (which is more than just “internet companies”) and their users. It’s right there in the law. Because of that latter part, it has protected people forwarding emails and retweeting content. It has been used repeatedly to protect journalists on that basis. It protects you and me. It is not exclusive to “internet companies.” That’s just factually wrong.
The law is not, and has never been, some sort of special privilege for certain kinds of companies, but a framework for protecting speech online, by making it possible for speech distributing intermediaries to exist in the first place. Which helps journalists. And helps you and me. Without it, there would be fewer ways in which we could speak.
Reed also appears to misrepresent or conflate a bunch of things here:
Section 230, which Congress passed in 1996, it makes it so that internet companies can’t be sued for what happens on their sites. Facebook, YouTube, TikTok, they bear essentially no responsibility for the content they amplify and recommend to millions, even billions of people. No matter how much it harms people, no matter how much it warps our democracy, under section 230 you cannot successfully sue tech companies for defamation, even if they spread lies about you. You can’t sue them for pushing a terror recruitment video on someone who then goes and kills your family member. You can’t sue them for bombarding your kids with videos that promote eating disorders or that share suicide methods or sexual content.
First off, much of what he describes is First Amendment protected speech. Second, he ignores that Section 230 doesn’t apply to federal criminal law, which is what would likely cover things like terrorist content (I’m guessing he’s confused by the Supreme Court cases from a few years ago, where 230 wasn’t the issue—the lack of any traceability from the terrorist attacks to the websites was).
But, generally speaking, if you’re advocating for legal changes, you should be specific about what you want changed and why. Putting out a big list of stuff, some of which would be protected, some of which would not be, as well as some that the law covers and some it doesn’t… isn’t compelling. It suggests you don’t understand the basics. Furthermore, lumping things like eating disorders in with defamation and terrorist content suggests an unwillingness to deal with the specifics and the complexities. Instead, it suggests a desire for a general “why can’t we pass a law that says ‘bad stuff isn’t allowed online?’” But that’s a First Amendment issue, not a 230 issue (as we’ll explain in more detail below).
Reed also, unfortunately, seems to have been influenced by the blatantly false argument that there’s a platform/publisher distinction buried within Section 230. There isn’t. But it doesn’t stop him from saying this:
I’m going to keep reminding you what Section 230 is, as we covered on this show, because I want it to stick. Section 230, small provision in a law Congress passed in 1996, just 26 words, but words that were so influential, they’re known as the 26 words that created the internet.
Quick fact check: Section 230 is way longer than 26 words. Yes, subsection (c)(1) is 26 words. But the rest matters too. If you’re advocating to repeal a law, maybe read the whole thing?
Those words make it so that internet platforms cannot be treated as publishers of the content on their platform. It’s why Sandy Hook parents could sue Alex Jones for the lies he told, but they couldn’t sue the platforms like YouTube that Jones used to spread those lies.
And there is a logic to this that I think made sense when Section 230 was passed in the ’90s. Back then, internet companies offered chat rooms, message boards, places where other people posted, and the companies were pretty passively transmitting those posts.
Reed has this completely backwards. Section 230 was a direct response to Stratton Oakmont v. Prodigy, where a judge ruled that Prodigy’s active moderation to create a “family friendly” service made it liable for all content on the platform.
The two authors of Section 230, Ron Wyden and Chris Cox, have talked about this at length for decades. They wanted platforms to be active participants, not dumb conduits passively transmitting posts. Their fear was that, without Section 230, those services would be forced to be just passive transmitters, because doing anything to the content (as Prodigy did) would make them liable for all of it. And given the amount of content, reviewing all of it would be impossible.
So Cox and Wyden’s solution to encourage platforms to be more than passive conduits was to say “if you do regular publishing activities, such as promoting, rearranging, and removing certain content, then we won’t treat you like a publisher.”
The entire point was to encourage publisher-like behavior, not discourage it.
Reed has the law’s purpose exactly backwards!
That’s kind of shocking for someone advocating to overturn the law! It would help to understand it first! Because if the law actually did what Reed pretends it does, I might be in favor of repeal as well! The problem is, it doesn’t. And it never did.
One analogy that gets thrown around for this is that the platforms, they’re like your mailman. They’re just delivering somebody else’s letter about the Sandy Hook conspiracy. They’re not writing it themselves. And sure, that might have been true for a while, but imagine now that the mailman reads the letter he’s delivering, sees it’s pretty tantalizing. There’s a government conspiracy to take away people’s guns by orchestrating a fake school shooting, hiring child actors, and staging a massacre and a whole 911 response.
The mailman thinks, “That’s pretty good stuff. People are going to like this.” He makes millions of copies of the letter and delivers them to millions of people. And then as all those people start writing letters to their friends and family talking about this crazy conspiracy, the mailman keeps making copies of those letters and sending them around to more people.
And he makes a ton of money off of this by selling ads that he sticks into those envelopes. Would you say in that case the mailman is just a conduit for someone else’s message? Or has he transformed into a different role? A role more like a publisher who should be responsible for the statements he or she actively chooses to amplify to the world. That is essentially what YouTube and other social media platforms are doing by using algorithms to boost certain content. In fact, I think the mailman analogy is tame for what these companies are up to.
Again, the entire framing here is backwards. It’s based on Reed’s false assumption—an assumption that any expert in 230 would hopefully disabuse him of—that the reason for 230 was to encourage platforms to be “passive conduits.” In fact, it’s the exact opposite.
Cox and Wyden were clear (and have remained clear) that the purpose of the law was exactly the opposite. It was to give platforms the ability to create different kinds of communities and to promote/demote/moderate/delete at will.
The key point was that, because of the amount of content, no website would be willing and able to do any of this if they were potentially held liable for everything.
As for the final point, that social media companies are now way different from “the mailman,” both Cox and Wyden have talked about how wrong that is. In an FCC filing a few years back, debunking some myths about 230, they pointed out that this claim of “oh sites are different” is nonsense and misunderstands the fundamentals of the law:
Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.
The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.
The Understanding of How Incentives Work Under the Law is Wrong
Here’s where Reed’s misunderstanding gets truly dangerous. He claims Section 230 removes incentives for platforms to moderate content. In reality, it’s the opposite: without Section 230, websites would have less incentive to moderate, not more.
Why? Because under the First Amendment, you need to show that the intermediary had actual knowledge of the violative nature of the content. If you removed Section 230, the best way to prove that you have no knowledge is not to look, and not to moderate.
You potentially go back to a Stratton Oakmont-style world, where the incentives are to do less moderation because any moderation you do introduces more liability. The more liability you create, the less likely someone is to take on the task. Any investigation into Section 230 has to start from understanding those basic facts, so it’s odd that Reed so blatantly misrepresents them and suggests that 230 means there’s no incentive to moderate:
We want to make stories that are popular so we can keep audiences paying attention and sell ads—or movie tickets or streaming subscriptions—to support our businesses. But in the world that every other media company occupies, aside from social media, if we go too far and put a lie out that hurts somebody, we risk getting sued.
It doesn’t mean other media outlets don’t lie or exaggerate or spin stories, but there’s still a meaningful guard rail there. There’s a real deterrent to make sure we’re not publishing or promoting lies that are so egregious, so harmful that we risk getting sued, such as lying about the deaths of kids who were killed and their devastated parents.
Social media companies have no such deterrent and they’re making tons of money. We don’t know how much money in large part because the way that kind of info usually gets forced out of companies is through lawsuits which we can’t file against these tech behemoths because of section 230. So, we don’t know, for instance, how much money YouTube made from content with the Sandy Hook conspiracy in it. All we know is that they can and do boost defamatory lies as much as they want, raking cash without any risk of being sued for it.
But this gets at a fundamental flaw that shows up in these debates: that the only possible pressure on websites is the threat of being sued. That’s not just wrong, it, again, totally gets the purpose and function of Section 230 backwards.
There are tons of reasons for websites to do a better job moderating: if your platform fills up with garbage, users start to go away. As do advertisers, investors, other partners as well.
This is, fundamentally, the most frustrating part about every single new person who stumbles haphazardly into the Section 230 debate without bothering to understand how it works within the law. They get the incentives exactly backwards.
230 says “experiment with different approaches to making your website safe.” Taking away 230 says “any experiment you try to keep your website safe opens you up to ruinous litigation.” Which one do you think leads to a healthier internet?
It Misrepresents How Companies Actually Work
Reed paints tech companies as cartoon villains, relying on simplistic and misleading interpretations of leaked documents and outdated sources. This isn’t just sloppy—it’s the kind of manipulative framing he’d probably critique in other contexts.
For example, he grossly misrepresents (in a truly manipulative way!) what the documents Frances Haugen released said, just as much of the media did. Here’s how Reed characterizes some of what Haugen leaked:
Haugen’s document dump showed that Facebook leadership knew about the harms their product is causing, including disinformation and hate speech, but also product designs that were hurting children, such as the algorithm’s tendency to lead teen girls to posts about anorexia. Frances Haugen told lawmakers that top people at Facebook knew exactly what the company was doing and why it was doing it.
Except… that’s very much out of context. Here’s how misleading Reed’s characterization is. The actual internal research Haugen leaked—the stuff Reed claims shows Facebook “knew about the harms”—looked like this:
The headline of that slide sure looks bad, right? But then you look at the context, which shows that in nearly every single category they studied across boys and girls, they found that more users found Instagram made them feel better, not worse. The only category where that wasn’t true was teen girls and body image, where the split was pretty equal. That’s one category out of 24 studied! And this was internal research calling out that fact because the point was to convince the company to figure out ways to better deal with that one case, not to ignore it.
And what we’ve heard over and over again since then is that companies have moved away from doing this kind of internal exploration, because they know that if they learn about negative impacts of their own services, it will be used against them by the media.
Reed’s misrepresentation creates exactly the perverse incentive he claims to oppose: companies now avoid studying potential harms because any honest internal research will be weaponized against them by journalists who don’t bother to read past the headline. Reed’s approach of getting rid of 230’s protections would make this even worse, not better.
Because as part of any related lawsuit there would be discovery, and you can absolutely guarantee that a study like the one above that Haugen leaked would be used in court, in a misleading way, showing just that headline, without the necessary context of “we called this out to see how we could improve.”
So without Section 230 and with lawsuits, companies would have much less incentive to look for ways to improve safety online, because any such investigation would be presented as “knowledge” of the problem. Better not to look at all.
There’s a similar problem with the way Reed reports on the YouTube algorithm. Reed quotes Guillaume Chaslot but doesn’t mention that Chaslot left YouTube in 2013—12 years ago. That’s ancient history in tech terms. I’ve met Chaslot and been on panels with him. He’s great! And I think his insights on the dangers of the algorithm in the early days were important work and highlighted to the world the problems of bad algorithms. But it’s way out of date. And not all of the algorithms are bad.
Conspiracy theories are really easy to make. You can just make your own conspiracy theory in like one hour, shoot it, and then it can get millions of views. They’re addictive, because people who live in this filter bubble of conspiracy theories, they don’t watch the classical media. So they spend more time on YouTube.
Imagine you’re someone who doesn’t trust the media, you’re going to spend more time on YouTube. So since you spend more time on YouTube, the algorithm thinks you’re better than anybody else. The definition of better for the algorithm, it’s who spends more time. So it will recommend you more. So there’s like this vicious circle.
It’s a vicious circle, Chaslot says, where the more conspiratorial the videos, the longer users stay on the platform watching them, the more valuable that content becomes, the more YouTube’s algorithm recommends the conspiratorial videos.
Since Chaslot left YouTube, there have been a series of studies that have shown that, while some of that may have been true back when Chaslot was at the company, it hasn’t been true in many, many years.
A study in 2019 (looking at data from 2016 onwards) found that YouTube’s algorithm actually pushed people away from radicalizing content. A further study a couple of years ago similarly found no evidence of YouTube’s algorithm sending people down these rabbit holes.
It turns out that things like Chaslot’s public berating of the company, as well as public and media pressure, not to mention political blowback, had helped the company re-calibrate the algorithm away from all that.
And you know what allowed them to do that? The freedom Section 230 provided, saying that they wouldn’t face any litigation liability for adjusting the algorithm.
A Total Misunderstanding of What Would Happen Absent 230
Reed’s fundamental error runs deeper than just misunderstanding the law—he completely misunderstands what would happen if his “solution” were implemented. He claims that the risk of lawsuits would make the companies act better:
We need to be able to sue these companies.
Imagine the Sandy Hook families had been able to sue YouTube for defaming them in addition to Alex Jones. Again, we don’t know how much money YouTube made off the Sandy Hook lies. Did YouTube pull in as much cash as Alex Jones, five times as much? A hundred times? Whatever it was, what if the victims were able to sue YouTube? It wouldn’t get rid of their loss or trauma, but it could offer some compensation. YouTube’s owned by Google, remember, one of the most valuable companies in the world. More likely to actually pay out instead of going bankrupt like Alex Jones.
This fantasy scenario has three fatal flaws:
First, YouTube would still win these cases. As we discussed above, there’s almost certainly no valid defamation suit here. Most complained about content will still be First Amendment-protected speech, and YouTube, as the intermediary, would still have the First Amendment and the “actual knowledge” standard to fall back on.
The only way to have actual knowledge of content being defamatory is for there to be a judgment in court about the content. So, YouTube couldn’t be on the hook in this scenario until after the plaintiffs had already taken the speaker to court and received a judgment that the content was defamatory. At that point, you could argue that the platform would then be on notice and could no longer promote the content. But that wouldn’t stop any of the initial harms that Reed thinks they would.
Second, Reed’s solution would entrench Big Tech’s dominance. Getting a case dismissed on Section 230 grounds costs maybe $50k to $100k. Getting the same case dismissed on First Amendment grounds? Try $2 to $5 million.
For a company like Google or Meta, with their buildings full of lawyers, this is still pocket change. They’ll win those cases. But it means that you’ve wiped out the market for non-Meta, non-Google sized companies. The smaller players get wiped out because a single lawsuit (or even a threat of a lawsuit) can be existential.
The end result: Reed’s solution gives more power to the giant companies he paints as evil villains.
Third, there’s vanishingly little content that isn’t protected by the First Amendment. Using the Alex Jones example is distorting and manipulative, because it’s one of the extremely rare cases where defamation has been shown (and that was partly just because Jones didn’t really fight the case).
Reed doubles down on these errors:
But on a wider scale, the risk of massive lawsuits like this, a real threat to these companies’ profits, could finally force the platforms to change how they’re operating. Maybe they change the algorithms to prioritize content from outlets that fact check because that’s less risky. Maybe they’d get rid of fancy algorithms altogether, go back to people getting shown posts chronologically or based on their own choice of search terms. It’d be up to the companies, but however they chose to address it, they would at least have to adapt their business model so that it incorporated the risk of getting sued when they boost damaging lies.
This shows Reed still doesn’t understand the incentive structure. Companies would still win these lawsuits on First Amendment grounds. And they’d increase their odds by programming algorithms and then never reviewing content—the exact opposite of what Reed suggests he wants.
And here’s where Reed’s pattern of using questionable sources becomes most problematic. He quotes Frances Haugen advocating for his position, without noting that Haugen has no legal expertise on these issues:
For what it’s worth, this is what Facebook whistleblower Frances Haugen argued for in Congress in 2021.
I strongly encourage reforming Section 230 to exempt decisions about algorithms. They have 100% control over their algorithms and Facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. They shouldn’t get a free pass on that because they’re paying for their profits right now with our safety. So, I strongly encourage reform of 230 in that way.
But, as we noted when Haugen said that, this is (again) getting it all backwards. At the very same time that Haugen was testifying with those words, Facebook was literally running ads all over Washington DC, encouraging Congress to reform Section 230 in this way. Facebook wants to destroy 230.
Why? Because Zuckerberg knows full well what I wrote above. Getting rid of 230 means a few expensive lawsuits that his legal team can easily win, while wiping out smaller competitors who can’t afford the legal bills.
Meta’s usage has been declining as users migrate to smaller platforms. What better way to eliminate that competition than making platform operation legally prohibitive for anyone without Meta’s legal budget?
Notably, not a single person Reed speaks to is a lawyer. He doesn’t talk to anyone who lays out the details of how all this works. He only speaks to people who dislike tech companies. Which is fine, because it’s perfectly understandable to hate on big tech companies. But if you’re advocating for a massive legal change, shouldn’t you first understand how the law actually works in practice?
For a podcast about improving journalism, this represents a spectacular failure of basic journalistic practices. Indeed, Reed admits at the end that he’s still trying to figure out how to do all this:
I’m still trying to figure out how to do this whole advocacy thing. Honestly, pushing for a policy change rather than just reporting on it, it’s new to me and I don’t know exactly what I’m supposed to be doing. Should I be launching a petition, raising money for like a PAC? I’ve been talking to marketing people about slogans for a campaign. We’ll document this as I stumble my way through. It’s all a bit awkward for me. So, if you have ideas for how you can build this movement to be able to sue big tech, please tell me.
There it is: “I’m still trying to figure out how to do this whole advocacy thing.” Reed has publicly committed to advocating for a specific legal change—one that would fundamentally reshape how the internet works—while admitting he doesn’t understand advocacy, hasn’t talked to experts, and is figuring it out as he goes. Generally it’s a bad idea to come up with a slogan when you still don’t even understand the thing you’re advocating for.
This is advocacy journalism in reverse: decide your conclusion, then do the research. It’s exactly the kind of shoddy approach that Reed would rightly criticize in other contexts.
I have no problem with advocacy journalism. I’ve been doing it for years. But effective advocacy starts with understanding the subject deeply, consulting with experts, and then forming a position based on that knowledge. Reed has it backwards.
The tragedy is that there are so many real problems with how big tech companies operate, and there are thoughtful reforms that could help. But Reed’s approach—emotional manipulation, factual errors, and backwards legal analysis—makes productive conversation harder, not easier.
Maybe next time, try learning about the law first, then deciding whether to advocate for its repeal.
You may recall a year or so ago, when Mark Zuckerberg whined to Jim Jordan about how the Biden administration “repeatedly pressured our teams for months to censor certain… content.” Or maybe you remember when he went on Joe Rogan and whined some more about Biden pressure on moderation, even though he admitted there that he rejected their requests:
And they pushed us super hard to take down things that honestly were true. Right, I mean, they basically pushed us and said, you know, anything that says that vaccines might have side effects, you basically need to take down.
And I was just like, well, we’re not going to do that. Like, we’re clearly not going to do that.
Zuckerberg also made a pledge that they were supposedly going to stop being pushed around. From now on, he swore, there was a new Meta that wouldn’t bow at all to government officials demanding content be removed.
He was a new Zuck. A Zuck who would stand up to oppressive government demands.
So, about that.
On Tuesday, Attorney General Pam Bondi publicly bragged about the Trump administration doing exactly what Mark Zuckerberg falsely claimed the Biden administration did to him. She bragged about how the Justice Department successfully pressured Facebook into removing First Amendment-protected speech:
If you can’t see that, it’s Bondi tweeting:
Today following outreach from the Justice Department, Facebook removed a large group page that was being used to dox and target ICE agents in Chicago. The wave of violence against ICE has been driven by online apps and social media campaigns designed to put ICE officers at risk just for doing their jobs. The Department of Justice will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.
This is actual government censorship—direct pressure from the DOJ to remove constitutionally protected speech. And unlike the Biden administration’s communications that Zuck admitted he easily refused, in this case, Facebook immediately complied.
The content in question? Tracking the public movements of law enforcement officials. This is classic protected First Amendment activity, with well-established case law protecting the right to record and monitor police in public. It’s nowhere close to meeting the Brandenburg standard for “inciting imminent lawless action” that Bondi misquotes in her tweet.
So, once again, let’s take a step back and look at this. When it was the Biden administration asking Facebook about COVID misinfo, Zuck had no problem saying “well, we’re not going to do that.” And as it became clear Trump had a decent chance of winning the election, it gave Zuck an opportunity to throw the Biden admin under the bus, while insisting that they’d changed and would stop being pressured by governments.
But then, as soon as Bondi calls Zuck and says “jump,” he asks “how high?”
And, of course, it’s not just Zuckerberg who is being a cowardly hypocrite here.
Remember how Judge Terry Doughty, in the Missouri v. Biden case, took similar anecdotes of supposed pressure (which the Supreme Court later rejected, noting that Doughty’s findings were “clearly erroneous” and based on “no evidence”) and claimed that any sign of governments merely communicating with social media companies about moderation practices clearly represented an epic violation of the First Amendment. He said that “the present case arguably involves the most massive attack against free speech in United States’ history.”
Of course, the Supreme Court eventually laughed that off, because it was based on him both fabricating evidence (including quotes that were not said) and misunderstanding other evidence. But where are the people who cheered on Doughty’s ruling about Bondi’s “massive attack against free speech?”
Or, perhaps, you remember the “Twitter Files” gang of Matt Taibbi, Michael Shellenberger, and new CBS News Editor in Chief Bari Weiss, claiming that a few misrepresented stories of government officials asking platforms about their content moderation practices represented the “censorship industrial complex” and were huge attacks on free speech. Matt Taibbi insisted that any suppression of “true speech that undermined confidence in government policies” was “precisely the situation the First Amendment was designed to avoid.”
Shellenberger touted a supposed whistleblower “proving” that the government “pressured” social media, such as Facebook, to take down content (the actual evidence he presented said no such thing). He’s spent years since then laughably presenting himself as an expert on government and social media “censorship”, even getting a “professorship” at Bari Weiss’s fake university on the subject.
Weiss herself wrote a typically self-congratulating article about how Elon Musk bought Twitter to “save the world” from “censorship” and whined about how government-induced content moderation “curtailed public debate.”
Where are they on this? I see nothing from Taibbi, Shellenberger, or Weiss. Not a single story about this on the CBS-owned The Free Press. Nothing on X from any of them. Nothing on their various Substacks. Just… silence as the Trump administration does the very thing, loudly and proudly, that they spent years falsely accusing Biden of, while claiming it was an attack on the very foundations of democracy. How odd.
Or how about this: top Trump confidant and conspiracy theorist Laura Loomer went around taking credit for the DOJ getting the page removed from Facebook, just a week after her own lawsuit, which tried to argue that Facebook (and Twitter) did the RICO in banning her, got rejected by the Supreme Court.
That shows Laura Loomer first tweeting about “ICE tracking pages” on Facebook and complaining that Facebook shouldn’t allow them, followed by her breaking the news that the DOJ told her they contacted Facebook to remove them:
Fantastic news. DOJ source tells me they have seen my report and they have contacted Facebook and their executives at META to tell them they need to remove these ICE tracking pages from the platform.
We will see if they comply. There are DOZENS of pages like the one below that endanger the lives of ICE agents.
It’s further evidence Big Tech is continuing to subvert and undermine President Trump and his agenda.
The hypocrisy level here is off the charts. She’s literally spent years suing Facebook for banning her account, claiming it was an attack on her speech… and now she’s demanding that the government tell Facebook to suppress speech, and celebrating when they do so.
The only consistency is “speech I like should be allowed, speech I don’t like shouldn’t be.”
And remember, on day one Trump signed an executive order, “Restoring Freedom of Speech and Ending Federal Censorship,” declaring it the policy of the United States to:

(a) secure the right of the American people to engage in constitutionally protected speech;
(b) ensure that no Federal Government officer, employee, or agent engages in or facilitates any conduct that would unconstitutionally abridge the free speech of any American citizen
Bondi clearly violated that.
Will anyone point that out?
Now, because we have enough MAGA trolls around here, I can already predict the reply: “this is different,” they will say, “because this is ‘doxxing’ and a threat to ICE.”
Hell, Bondi even hints at that in her tweet, as well as pretending this fits under the Brandenburg standard of “inciting imminent lawless action,” which she misquotes. Except that’s bullshit. Simply tracking the location of law enforcement officials in public is not anywhere close to crossing the Brandenburg line. It’s also not “doxxing” in any meaningful manner, as doxxing is about revealing private info about someone (and, in most cases, is also not against the law).
So what we’re left with is yet another example of the extreme hypocrisy of the MAGA cult. They claimed, falsely, that Biden was “censoring” social media (a lie debunked by even the conservatives on the Supreme Court) and then as soon as they got into power, they not only did exactly what they falsely accused Biden of doing, but they did so openly, publicly, and proudly.
And where are all those “free speech warriors”? Where are Taibbi, Shellenberger, and Weiss? They were soooooooo concerned that what Biden didn’t actually do was the end of free speech in America. Yet, when Trump does way worse than even what they pretended Biden did… it’s crickets.
How odd.
Or how about Joe Rogan? He spent hours with Zuck, helping him spin a blatantly misleading tale of “censorship” from Biden (which again, even Zuck admitted to Rogan didn’t lead to any speech being taken down). But here, Zuck folded like a cheap card table… and what? Silence?
These grifters spent years telling us that free speech was under attack, but they never had the actual goods. Yet now it’s actually happening, but by the guy they supported, and they’re all off hiding somewhere?
How pathetic.
But this isn’t just about individual hypocrisy—it reveals something more troubling about the entire “free speech” discourse we’ve been subjected to for the past several years. The people who positioned themselves as champions of free speech never actually cared about the principle. They cared about weaponizing the concept to attack their political opponents while laying groundwork for their own censorship regime.
The supposed champions of free speech who spent years manufacturing outrage over nonexistent government censorship are now silent in the face of actual government censorship. Their hypocrisy is complete, and they should never, ever, be seen as credible sources on the subject of free speech.
Zuckerberg, meanwhile, has revealed himself as exactly what critics always said he was: a coward who bends to whoever holds power. His theatrical resistance to Biden was performative. His instant capitulation to Trump is revealing.
The real lesson here isn’t just that these people are frauds—though they obviously are. It’s that we now have a crystal-clear example of what actual government pressure on speech looks like, versus the manufactured controversies of the past few years. When Bondi tweets about successful DOJ pressure campaigns, when Facebook immediately complies, when demands result in immediate content removal—that’s the difference between real government coercion and the communications that resulted in no platform action, which the Supreme Court found insufficient to establish standing because plaintiffs couldn’t show they were actually harmed.
The free speech grifters won’t learn from this, of course. But the rest of us should.
Just when you think corporate content moderation can’t get any more absurd, Apple has managed to redefine “protected class” in a way that would make Orwell proud. According to internal correspondence obtained by Migrant Insider, Apple has removed the DeICER app—which allowed users to log sightings of ICE enforcement activity—by invoking guidelines normally reserved for protecting marginalized communities from hate speech.
Apple justified this by treating federal immigration agents as a protected class equivalent to groups protected from discrimination based on “religion, race, sexual orientation, gender, national/ethnic origin.”
According to internal correspondence reviewed by Migrant Insider, Apple told developer Rafael Concepcion that the app violated Guideline 1.1.1, which prohibits “defamatory, discriminatory, or mean-spirited content” directed at “religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups.”
But Apple’s justification went further. “Information provided to Apple by law enforcement shows that your app violates Guideline 1.1.1 because its purpose is to provide location information about law enforcement officers that can be used to harm such officers individually or as a group,” the company wrote in its removal notice.
The decision effectively treats federal immigration agents as a protected class — a novel interpretation of Apple’s hate-speech policy that shields one of the most powerful arms of government from public scrutiny.
Apple is now treating federal agents—who are public employees exercising government power—as if they’re a vulnerable minority group in need of protection from “discrimination.” This isn’t just a misapplication of content policies; it’s a fundamental inversion of what those policies were designed to do.
Of course, this is not entirely unprecedented. As we’ve covered over the years, whenever laws and rules against hate speech exist, inevitably the powerful seek to use them to protect themselves rather than those who are actually marginalized or vulnerable.
The DeICER app, developed by former Syracuse journalism professor Rafael Concepcion, was designed as a civic accountability tool—which seems like a good thing. Users could log ICE enforcement activity in their communities, with each report automatically expiring after four hours and requiring GPS verification within a quarter mile of the reported activity. As Concepcion explained to Migrant Insider:
“It isn’t meant to harvest people,” Concepcion said. “It is meant to inform people.”
But Apple wasn’t done with this particular category of apps. As 404 Media reports, the company also removed Eyes Up, an app that preserved videos documenting ICE abuses from social media platforms and news reports. Unlike real-time tracking apps, Eyes Up was purely about creating an archive of publicly available information.
Apple removed an app for preserving TikToks, Instagram reels, news reports, and videos documenting abuses by ICE, 404 Media has learned. The app, called Eyes Up, differs from other banned apps such as ICEBlock which were designed to report sightings of ICE officials in real-time to warn local communities. Eyes Up, meanwhile, was more of an aggregation service pooling together information to preserve evidence in case the material is needed in the future in court.
As the Eyes Up administrator told 404 Media:
“Our goal is government accountability, we aren’t even doing real-time tracking…. I think the [Trump] admin is just embarrassed by how many incriminating videos we have.”
This is part of a broader pattern we’ve seen recently. Just last week, we covered how the Department of Justice explicitly demanded Apple remove the ICEBlock app, with Attorney General Pam Bondi bragging that “We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.”
But now we’re seeing Apple go even further, expanding this logic to proactively shield law enforcement from accountability tools—whether as a retroactive justification for caving to government demands or as anticipatory compliance with future pressure.
With DeICER, Concepcion appealed to Apple, pointing out that this wasn’t a tool for “targeting or tracking of law enforcement.” And yet, Apple rejected the appeal, even as courts have made it clear that you absolutely can videotape law enforcement actions in public.
Apple didn’t care:
But Apple rejected that reasoning. In its final ruling, the company’s App Review Board upheld the removal, stating: “Information provided to Apple by law enforcement shows that your app violates Guideline 1.1.1 … because its purpose is to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”
So we’re right back to the Guideline 1.1.1 bit, where we see that Apple is clearly defining “law enforcement” as a “protected class,” which seems difficult to justify when law enforcement, far from being oppressed, appears to be the oppressor here.
And, yes, I’ll be the first to tell you that content moderation at scale is impossible to do well, and that applies to app stores as well. But when you see a pattern this consistent—and this convenient for state power—pointing to scale problems feels inadequate. This looks less like algorithmic confusion and more like Apple systematically bending its policies to accommodate government preferences while trying to maintain plausible deniability.
This reasoning is deeply problematic on multiple levels. First, it treats documentation of public officials’ public actions as equivalent to hate speech against marginalized groups. Second, it accepts law enforcement’s own assessment of what constitutes “harm” to them without any independent review. Third, it creates a precedent where any app that allows citizens to track government activity could be banned as “discriminatory” against public officials.
Apple’s anti-harassment guidelines were created to protect actually vulnerable groups from genuinely harmful content. Now those same guidelines are being weaponized to protect one of the most powerful arms of the federal government from public scrutiny. And that’s happening at a time when actual marginalized groups are the direct targets of this out of control law enforcement agency that has been “unleashed” by the likes of Donald Trump, Kristi Noem, Tom Homan, and Pam Bondi.
This isn’t about protecting people from discrimination—it’s about protecting power from accountability. ICE agents aren’t a marginalized group facing systemic oppression. They’re federal law enforcement officers wielding enormous government power, often with minimal oversight. The idea that documenting their public activities constitutes “discrimination” turns the concept of civil rights on its head.
At a moment when ICE is conducting mass deportation operations with documented civil liberties abuses, Apple has decided that the real problem is citizens having tools to document and preserve evidence of those abuses. It’s hard to imagine a more backwards approach to civil liberties.
What’s next? Will reporting on police misconduct be considered “discriminatory” against law enforcement? Will documenting government corruption be classified as “hate speech” against public officials? Apple’s logic here opens the door to treating any form of government accountability as harassment of a “protected class.”
This is exactly the kind of corporate deference to state power that should alarm anyone who cares about democratic accountability. When private companies start treating the documentation of government actions as equivalent to hate speech against minorities, we’ve crossed that dangerous line from content moderation into state protection.
The most charitable interpretation is that Apple simply misunderstood what these apps do and mechanically applied policies designed for very different situations. But given the pattern of removals and the company’s apparent willingness to accept law enforcement’s framing without question, it looks more like a deliberate choice to prioritize government preferences over civil liberties.
If documenting the actions of federal agents is now “hate speech,” then we’ve fundamentally lost the plot on what civil rights protections are supposed to accomplish. They’re meant to protect the powerless from the powerful, not the other way around.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Thomas Hughes, CEO of Appeals Centre Europe and former Director at the Oversight Board.
I have a simple question for Senator Ted Cruz: Who was president in 2018? How about 2020?
I ask because Cruz just released a “bombshell” report claiming that the Biden administration “converted” CISA into “the Thought Police.” There’s just one tiny problem with this narrative: Cruz’s own report shows that everything he’s mad about started under Donald Trump, under whose leadership CISA was created. And also that Cruz’s researchers think responding to false information is censorship. Also, studying disinformation is, somehow, censorship.
But, most importantly, apparently Ted Cruz doesn’t seem to know how time works.
Look, we’ve been through this dance before. The Supreme Court, in a decision written by Justice Amy Coney Barrett, already examined these exact claims about government “censorship” and found them to be bullshit. Barrett’s decision mentions “no evidence” at least five times and includes a devastating footnote explaining how the “evidence” was “clearly erroneous.”
The Fifth Circuit relied on the District Court’s factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.” Missouri v. Biden, 680 F. Supp. 3d 630, 715 (WD La. 2023). But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. Ibid., n. 662 (internal quotation marks omitted). The record it cites says nothing about “censorship requests.” See App. 639–642. Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. Ibid. This has nothing to do with COVID–19 misinformation. The court also found that “[a] drastic increase in censorship . . . directly coincided with Defendants’ public calls for censorship and private demands for censorship.” 680 F. Supp. 3d, at 715. As to the “calls for censorship,” the court’s proof included statements from Members of Congress, who are not parties to this suit. Ibid., and n. 658. Some of the evidence of the “increase in censorship” reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency “demand[ed]” that it do so. Ibid. Finally, the court, echoing the plaintiffs’ proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Id., at 714, n. 655. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.
Cruz and his team apparently missed all that. Or they know about it and have decided to misrepresent it. I’m not sure which is worse.
The centerpiece of Cruz’s report is CISA—the Cybersecurity & Infrastructure Security Agency. According to Cruz, this agency was created with pure intentions under Trump but was then “converted” by Biden into a censorship machine.
Cruz’s report repeatedly undermines its own thesis. He writes:
“Beginning in 2018, CISA organized and attended regular meetings with industry and government officials to push its censorship agenda”
2018, Ted. Who was president then, Ted? Do you know? I’ll give you a hint: it wasn’t Joe Biden.
Oh, and remember how Cruz claimed that dealing with misinformation wasn’t part of CISA’s original plan? Well, CISA was created on November 16, 2018. So if they started these “regular meetings” in 2018, that means… they started immediately. Under Trump. As part of the original plan.
Also, Cruz keeps calling this a “censorship agenda,” but his own report shows that CISA’s role was coordination and information sharing. You know, the thing they were explicitly created to do.
The supposed smoking gun in Cruz’s report is something called “switchboarding”—where CISA would pass along reports from state election officials to social media companies. Cruz presents this as evidence of censorship.
But here’s what actually happened: Election officials would flag potential election misinformation to CISA, specifically where such misinformation might undermine the integrity of the election system (i.e., telling people to vote in the wrong place or on the wrong day). CISA would forward it to platforms with a clear disclaimer that this was not a demand. Platforms would then review the content against their own policies.
Every single message CISA sent included this disclaimer:
The U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) is not the originator of this information. CISA is forwarding this information, unedited, from its originating source - this information has not been originated or generated by CISA. This information may also be shared with law enforcement or intelligence agencies.
In the event that CISA follows up to request further information, such a request is not a requirement or demand. Responding to this request is voluntary and CISA will not take any action, favorable or unfavorable, based on decisions about whether or not to respond to this follow-up request for information.
Throughout the report, Cruz makes baseless claims that his own sources immediately contradict. He says CISA “directly instructed social media companies to moderate specific content.” Then, in the very same paragraph, he admits that what actually happened was CISA forwarding content with disclaimers, and platforms reviewing it “based on their policies.”
Take the following for example:
During the 2020 election, CISA directed state and local election officials to report supposed election-related MDM to CISA. CISA would then review the reports and forward them to social media companies so they could remove the content. This process is referred to as “switchboarding.” As Mr. Scully, who led the CISA team performing this work, explained, switchboarding “was essentially an [election] official…identify[ing] something on social media they deemed to be disinformation aimed at their jurisdiction. They could forward that to CISA, and CISA would share that with the appropriate social media companies.”
The emails below between Scully, the Maryland State Board of Electors, and Twitter illustrate how the switchboarding process worked. Step One: the Maryland Official emailed Scully a few tweets regarding mail-in ballots. Step Two: Scully forwarded that email to Twitter. Step Three: Twitter immediately responded that it would “escalate” the tweets and later confirmed that the “[t]weets have been actioned for violations of our policies.”
We’ve covered this before, but let’s cover it again. Anyone could (and still can!) flag any content on Twitter, claiming that it violates the site’s policies. It is true that most sites also set up separate portals to handle such flags from government actors, in part because those might require extra legal scrutiny.
And note what happened: Twitter reviewed the tweets to see if they violated its policies. It did not take them down because the government requested it; it took them down because they violated those policies.
And we know, for a fact, that Twitter (and other social media sites) actually rejected the vast majority of such flags. Hell, in Cruz’s own report he includes a quote from a CISA employee, Brian Scully, noting that they knew the companies would review it against their own policies:
According to Scully, CISA knew social media companies would apply their content moderation policies to “disinformation” if CISA alerted them to it. “The idea was,” he explained, that social media companies “would make [a] decision on the content that [CISA] forward[ed] to them based on their policies.” He acknowledged that if the content had not been brought to social media companies’ attention, the platforms would not have otherwise moderated it.
Also, Scully couldn’t possibly know that last line is true, because he has no idea what sort of monitoring the companies would do otherwise, or who else might flag the same information to them. And the fact that Cruz’s report doesn’t quote Scully directly, but only summarizes that he “acknowledged” such a claim, is itself suspect.
So, I did what Cruz didn’t do for the readers of the report and looked up Scully’s full deposition to see how that conversation actually went down. And… Cruz is totally misrepresenting what was said. Scully notes that they only shared information for the companies to decide what to do with it.
Q. Switchboard work, what does that mean?
A. It was essentially an audit official to identify something on social media they deemed to be disinformation aimed at their jurisdiction. They could forward that to CISA and CISA would share that with the appropriate social media companies.
Q. And what was the purpose of sharing it with social media companies?
A. Mostly for informational awareness purposes, just to make sure that the social media companies were aware of potential disinformation.
Q. Was there an understanding that if the social media platforms were aware of disinformation that they might apply their content moderation policies to it?
A. Yes. So the idea was that they would make decision on the content that was forwarded to them based on their policies.
Q. Whereas, if it hadn’t been brought to their attention then they obviously wouldn’t have moderated it as content; correct?
A. Yeah, I suppose that’s true, as far as I’m aware of it.
Note the full consistency all along here. At no point was the idea here about censorship. It was always flagging content for the platforms to decide what to do with it (and later reports showed they took no action on over 60% of the URLs reported).
There’s also a subtle, but very important, nuance in that final question. Scully was not asked whether the moderation would have happened had CISA specifically not flagged the content. The question only asks what would have happened “if it hadn’t been brought to their attention” at all.
Cruz’s team pretends Scully was asked whether action would have occurred only because CISA flagged the content, and that he thereby “acknowledged” the takedowns happened solely because of CISA. That is not what Scully actually said.
It’s not until halfway through the report that Cruz tries to tie any of this to the Biden administration, by suggesting that something changed under Biden. But, as per usual, he takes things out of context and presents them in the most misleading light possible.
Cruz claims that CISA ramped up its “speech policing” efforts under Biden, while his own report shows they actually stopped switchboarding in 2022:
CISA told the Committee that it stopped switchboarding in 2022. Brian Scully testified that former CISA Director Jen Easterly apparently made the decision to forgo this work… as Scully explained, switchboarding “was not a role [CISA] necessarily wanted to play” any longer “because it is very resource intensive.”
So let me get this straight, Ted: Biden supposedly ramped up the censorship operation… by shutting it down? That’s some 4D chess there.
The most absurd part of Cruz’s report comes when he tries to explain how CISA supposedly “groomed” private organizations to continue the work after they stopped switchboarding. His smoking gun? CISA introduced two organizations doing similar work so they wouldn’t duplicate efforts.
That’s it. That’s the conspiracy.
“There was a point where one of the platforms was concerned about too much kind of duplicate reporting coming in, and so we did have some conversations with EIP and CIS on how to kind of better manage that activity to make sure we weren’t overwhelming the platforms.” Scully further testified that CISA “facilitated some meetings between Stanford folks, the Center for Internet Security, and election officials, where they had discussions about how they would work together.”
That doesn’t sound like “grooming” organizations for censorship. It sounds like CISA seeing that multiple private groups were duplicating efforts and living up to its coordination and information sharing mandate by… connecting them, so they could coordinate and share information.
Beyond not understanding linear time, Ted Cruz doesn’t appear to understand what words mean.
Throughout this trainwreck of a report, Cruz consistently conflates “monitoring and responding to misinformation” with “censorship.” He quotes the Election Integrity Partnership saying that no government agency has the explicit mandate “to monitor and correct” election misinformation, then claims this proves they didn’t have authority “to censor.”
But “monitor and correct” is not “censor.” Correcting misinformation means responding to it with accurate information—you know, counter-speech. The thing the First Amendment actually protects.
The entire report boils down to this: Ted Cruz thinks that studying misinformation, sharing information about it, and responding to it with factual corrections constitutes “censorship.” By that logic, every fact-checker, every news organization, and every person who’s ever said “actually, that’s not true” is engaged in censorship.
Like here, Ted, I’m correcting your bullshit. Is that censorship?
Again, I have to remind you, because it’s important: anyone can flag any content on any social media website, and that website will review it against its policies. If the site finds that the content violates those policies, it will take action.
And yet, Ted Cruz pretends that’s censorship:
The Committee found evidence indicating that CISA directly instructed social media companies to moderate specific content. For instance, in one document the Committee reviewed, a lawyer hired by Twitter reviewed Twitter’s communications with government entities and summarized the instances in which CISA had either raised its “direct concerns” with Twitter or forwarded an email from an election official about “inaccurate” information on the platform, and Twitter “took action.” Documents like these reinforced the Committee’s suspicion that CISA was hiding the true extent of its relationship with social media companies and its content moderation pressure campaign.
The first sentence claims that CISA “directly instructed social media companies to moderate specific content.” So you would think there would be evidence of that. Instead, what the rest of the paragraph shows is that, as described above (and reported publicly throughout the past five years), CISA would pass along content—with a clear statement that it wasn’t from CISA and wasn’t a demand—and platforms would independently review it to see if it violated their policies. And if it did violate the policies, they would take action.
Okay, but what about CISA’s work on “mis- and disinformation” through its “MDM subcommittee”? Again, it’s not clear Ted Cruz understands English, because the report notes that this was a key recommendation of that group:
“[R]apidly respond—through transparency and communication—to emergent informational threats to critical infrastructure. . . . These response efforts can be actor-agnostic, but special attention should be paid to countering foreign threats.”
Yes. Rapidly responding, through transparency and communication.
Does that sound like “censorship” to anyone other than Ted Cruz?
Up is down, black is white, day is night. Ted Cruz is either a mendacious liar. Or an idiot.
Let’s be clear about what actually happened here. CISA was created by Donald Trump in November 2018. According to Cruz’s own timeline, it immediately began the work he’s now calling “censorship” or “speech policing,” though anyone looking at the details would realize it was no such thing. That work continued through 2020—still under Trump. Biden took office, and then CISA scaled back these activities in 2022.
So Cruz is literally blaming Biden for things that didn’t happen, while the activities he is misinterpreting started under Trump and were scaled back under Biden.
This isn’t just wrong—it’s historically illiterate and spectacularly, embarrassingly so. And it’s part of a broader pattern of MAGA lies about government “censorship” that the Supreme Court has already debunked.
The goal here isn’t accuracy. It’s creating a false narrative to justify actual retaliation against platforms that don’t toe the line. Cruz knows that most people won’t read the actual documents or check his timeline. They’ll just see “Biden censorship” in the headlines and accept it as fact.
But the documents don’t lie, even when Ted Cruz does. His own report proves that everything he’s (misleadingly) mad about started under Trump, operated under the legal authorities Trump granted, was never actually about speech policing, and was scaled back under Biden.
Ted Cruz either doesn’t know who was president when, or he’s counting on you not knowing. He also either doesn’t know what actual censorship is, or he’s… counting on you not knowing. Either way, it’s a pretty damning indictment of a sitting U.S. Senator.
The entire 22-page report boils down to this: “How dare the agency Donald Trump created to coordinate information sharing… coordinate information sharing.”
And sadly, because Cruz put “censorship” and “Biden” in the same sentence, many people will now treat this nonsense as gospel truth.
For years now, the MAGA crowd has been absolutely convinced that the Biden administration engaged in the most egregious censorship campaign in American history. They’ve waved around the Murthy v. Missouri case as proof that Biden officials illegally pressured tech companies to remove content (even as the Supreme Court concluded there wasn’t even enough evidence of any coercion to give any of the plaintiffs standing). Just last week, Rep. Jim Jordan was wildly celebrating what he claimed was Google’s admission that the Biden administration forced YouTube to censor people (which wasn’t actually what Google said at all, but reading comprehension has never been Jordan’s strong suit).
But now we have an actual, crystal-clear example of government officials using direct threats to pressure a tech company into removing disfavored speech—and suddenly, the free speech warriors have gone mysteriously quiet.
404 Media has the story of Apple removing the ICEBlock app from its App Store on Thursday after direct pressure from Department of Justice officials acting at the direction of Attorney General Pam Bondi. The app, which allows people to crowdsource sightings of ICE officials, was pulled following what Fox News described as the DOJ “reaching out” to Apple and “demanding” the removal.
Aaron provided 404 Media with a copy of the email he received from Apple regarding the removal. It says “Upon re-evaluation, we found that your app is not in compliance with the App Review Guidelines.” It then points to parts of those guidelines around “Objectionable Content,” and specifically “Defamatory, discriminatory, or mean-spirited content, including references or commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups, particularly if the app is likely to humiliate, intimidate, or harm a targeted individual or group.”
The email then says “Information provided to Apple by law enforcement shows that your app violates Guideline 1.1.1 because its purpose is to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”
And Bondi herself was quite explicit about the government’s role in this censorship:
Bondi told Fox “ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed. This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe.”
“We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so,” Bondi added, according to the Fox report.
Now, some will inevitably argue that Apple made an independent decision based on its own guidelines. But the MAGA crowd refused to accept that exact same argument when it was made in defense of what happened during the Biden administration. When companies explained that their content moderation decisions were based on their own policies, not government pressure, the MAGA crowd dismissed those explanations as irrelevant. They’ve spent years refusing to acknowledge the difference between government persuasion and government coercion.
In all of the communications from the Biden administration that were revealed in Murthy v. Missouri, officials never demanded removal of content. They did request reviews against existing policies (which is why companies rejected over 60% of flagged content) and occasionally suggested policy changes (which were mostly ignored). Even when companies did take action, they consistently maintained it was based on their own policy determinations.
But here? Bondi explicitly states she demanded Apple remove the app. There’s no ambiguity, no gentle suggestion, no “request for review.” It’s a direct government demand for censorship that was immediately complied with.
So let’s be clear about what happened here: A government official made a demand to a private tech company to remove an app based on the content of that app, and the company complied. This is exactly—and I mean exactly—what Jordan, Trump, and the entire MAGA ecosystem have been claiming (falsely) was the greatest violation of the First Amendment in modern history when they imagined Biden officials did it.
But somehow, I doubt we’ll see Jordan holding hearings about this. I doubt we’ll see breathless segments about government censorship. I doubt we’ll see any of the usual suspects who spent years screaming about the Biden administration’s supposed “jawboning” saying a single word about this actual, documented case of government officials pressuring a tech company to remove content.
Now, to be fair, ICEBlock has legitimate issues that have been well-documented. Security researcher Micah Lee has written extensively about how the app is “activism theater” that wasn’t developed with input from actual immigrant defense groups and spreads unverified information that can cause panic rather than provide useful protection. He also documented serious security vulnerabilities in the app’s infrastructure that the developer ignored for weeks. These are legitimate concerns about the app’s effectiveness and security.
But here’s the thing: the quality or effectiveness of the app is irrelevant to the First Amendment question. The government cannot pressure private companies to remove apps based on the content of those apps, regardless of whether that content is high-quality, low-quality, effective, or ineffective. As we documented earlier this year, ICEBlock and similar apps serve a purpose that many people find valuable—providing early warning systems for ICE activities in local communities at a time when people (for good reasons!) are quite concerned about ICE’s abusive tactics.
The Supreme Court made this distinction crystal clear in both the Murthy and Vullo cases. In Vullo, the Court explicitly stated:
A government official can share her views freely and criticize particular beliefs, and she can do so forcefully in the hopes of persuading others to follow her lead. In doing so, she can rely on the merits and force of her ideas, the strength of her convictions, and her ability to inspire others. What she cannot do, however, is use the power of the State to punish or suppress disfavored expression….
Bondi didn’t just share her views or criticize the app. She explicitly used the power of the state by “demanding” Apple remove it, and Apple complied within hours. This is textbook government coercion of the type that the Supreme Court has repeatedly said violates the First Amendment.
Just last week, we had Trump supporters lying about Biden “censorship” to justify FCC Chair Brendan Carr’s explicit threats against Disney over Jimmy Kimmel’s speech. They keep pointing to Murthy v. Missouri as if it blessed government pressure on tech companies, when it actually said the opposite—that such pressure would violate the First Amendment if there was evidence it occurred.
But, as we discussed, in Murthy, the Supreme Court made it clear that explicit threats would, in fact, cross the First Amendment line. The problem in Murthy was the lack of evidence of “coercion” or “significant encouragement” to suppress speech—the Court specifically looked for explicit demands or threats and found none (while it did find such explicit demands in the Vullo case, which they heard the same day). The majority ruling states that the conduct needs to involve coercion and “not mere communication.”
Well, here’s your coercion. Here’s your “significant encouragement.” Here’s your smoking gun in the form of the Attorney General literally telling the media she demanded the removal of an app.
Here’s the actual government censorship that Jordan and company have been claiming to fight against for years.
Where are they now?
The silence reveals something fundamental about the entire “censorship” crusade: It was never about protecting free speech or preventing government overreach. It was about creating a permission structure for their own authoritarian impulses while weaponizing victimhood narratives against their political opponents.
When faced with actual, explicit, documented government censorship—the kind they’ve been breathlessly warning about for years—they have nothing to say. Because this censorship serves their agenda, targets their enemies, and advances their political goals.
The mask has slipped completely. The “free speech” warriors have shown themselves to be exactly what critics always said they were: not principled defenders of civil liberties, but partisan actors who only care about speech when it benefits them.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by Dave Willner, founder of Zentropi, and long-time trust & safety expert who worked at Facebook, AirBnB, and OpenAI in Trust & Safety roles. Together they discuss: