Fourteen years ago, right after the FCC issued its order on net neutrality, Marsha Blackburn flipped out and released a video talking (misleadingly!) about how wonderful Facebook and Twitter were and how they would be destroyed if the big evil government interfered in any way with the internet. As she says “there has never been a time when a consumer needed a federal bureaucrat to intervene…”
The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act (TRUMP AMERICA AI) Act
The acronym doesn’t work. You’ve got “The” included and “by” ignored, an “I” from “Intelligence” skipped, and “Act” appearing twice. It’s actually TRUMP AMIBERICA AIA Act if you follow the words. Clearly some staffer was told “make this spell TRUMP AMERICA AI Act” and fed it into Grok, got this “republic unifying meritocratic” nonsense, and no one checked because slapping Trump’s name on things is the whole point.
Which matters, because given that Blackburn named it after Trump, if it somehow catches Trump’s fancy, this thing might actually move. And the bill itself is a disaster—an omnibus massively destructive internet policy overhaul masquerading as AI legislation.
First off, the part that the bill’s name references is an attempt to have Congress pass the law that Trump asked Congress for in his recent AI executive order that pretended to ban states from passing AI laws. As we noted at the time, you need Congress to do that. An executive order doesn’t cut it. And even Republican governors like Florida’s Ron DeSantis and Utah’s Spencer Cox have both said “fuck no” when asked about this.
But, loyal Trumpist Blackburn is trying to have Congress block states from regulating AI. From her section-by-section explanation of the bill:
Preempt state laws and regulations related to the regulation of frontier AI developers related to the management of catastrophic risk.
But it would also do a lot of other stuff, including introducing a problematic “duty of care” on AI developers to “prevent and mitigate foreseeable harm to users.” This is one of those things that I’m sure sounds good to folks, but as we’ve explained over and over again, this kind of “duty of care” is basically an anti-230 that would do real damage. It’s an open invitation for lawyers to sue any time anything bad happens and someone involved used an AI tool at some point.
And then you have to go through a big expensive legal process to explain “no, this thing was not because of AI” or whatever. It’s just a massive invitation to sue everyone, meaning that in the end you have just a few giant companies providing AI because they’ll be the only ones who can afford the lawsuits.
But there’s a whole lot more in the bill that has nothing to do with AI at all. It effectively repeals Section 230 by “reforming” it in a manner that flips the way 230 works. Rather than the “Good Samaritan” section that’s in there now, it would have a “Bad Samaritan” section that would make providers potentially liable for “facilitating or soliciting third-party content that violates federal criminal law.” And, of course, some people will say that’s fine, because you don’t want platforms doing that.
Two quick problems: one, Section 230 already exempts federal criminal law. It’s right there in section (e)(1). So to the extent this is supposedly about dealing with criminal behavior by platforms, you don’t need this change.
But the real problem is what this “Bad Samaritan” carve-out does to Section 230’s core function. Right now, 230 lets platforms get frivolous lawsuits dismissed quickly at the motion to dismiss stage. This change would force every platform to go through lengthy, expensive litigation to prove they weren’t “facilitating” (an incredibly vague term) or “soliciting” third-party content that violates federal criminal law.
That’s gutting the main reason Section 230 exists. Instead of quick dismissals, you get discovery, depositions, and trials, all while someone argues that because your algorithm showed someone a post, you were “facilitating” whatever criminal content they claim to find.
Next up: the bill effectively shoves KOSA into the mix. Blackburn’s been pushing KOSA forever. Remember, she wants KOSA to stop “the transgender in our culture.” KOSA keeps stalling out in Congress because it’s a really bad bill that would encourage tremendous online censorship, and sooner or later enough elected officials on both sides of the aisle realize “shit, this would be bad if the other side were in power.”
It also throws in the “NO FAKES Act” for funsies. If you don’t recall, the “NO FAKES” Act would mandate filters and scanning across the internet, destroy anonymous speech, and block a wide variety of useful innovations. For all the complaining MAGA has done about EU internet regulations, NO FAKES goes way beyond anything that the EU requires in terms of blatant censorship.
You know, the kind that Marsha Blackburn warned about, claiming Obama was coming for your internet and was going to suppress speech?
And that’s not all. The bill also has some nonsense requiring AI systems to undergo “audits” to make sure they’re not biased against conservatives. I only wish I were kidding.
Oh, and it completely upends copyright law in multiple concerning ways, effectively wiping out fair use, creating a new form of copyright infringement specifically for AI-generated works, giving the FTC a role in enforcing copyright law, and revamping how collective licensing works. This is, of course, a gift to the recording industry which has a large presence in her state of Tennessee.
Basically, this is an omnibus bill that would change nearly every US government policy regarding how the internet works, tackling AI, Section 230, copyright, and a bunch of other nonsense all in one bill. And Blackburn has cynically named it after Donald Trump hoping he’ll get on board and hound the MAGA folks in Congress to pass it.
So to recap: the Marsha Blackburn who said 14 years ago that “there has never been a time when a consumer needed a federal bureaucrat to intervene” has introduced a bill that would have federal bureaucrats intervene in basically every aspect of how the internet works: second-guessing content moderation decisions, mandating bias audits, preempting state laws, requiring speech scanning across the internet, and fundamentally reshaping how platforms, AI developers, and copyright holders operate online.
All (literally) in the name of Donald Trump. Because apparently when you need federal bureaucrats to intervene, what really matters is whose name is on the bill.
A bipartisan group of the most anti-internet Senators around have released their latest version of a plan to “sunset Section 230.” We went over this last year when they floated the same idea: they have no actual plan for how to make sure the open internet can continue. Instead, their “plan” is to put a gun to the head of the open internet and say they’re going to shoot it… unless Meta gives them an alternative. Let’s bring back Eric Goldman’s meme:
There is no plan for how to protect speech on the internet. There’s just hostage-taking. And remarkably, the hostage-takers are saying the quiet part out loud. Here’s Senator Dick Durbin’s comment on releasing this bill:
“Children are being exploited and abused because Big Tech consistently prioritizes profits over people. Enough is enough. Sunsetting Section 230 will force Big Tech to come to the table and take ownership over the harms it has wrought. And if Big Tech doesn’t, this bill will open the courtroom to victims of its platforms. Parents have been begging Congress to step in, and it’s time we do so. I’m proud to partner with Senator Graham on this effort, and we will push for it to become law,” said Durbin.
Read that “come to the table” line again. Durbin is admitting—in a press release, for the record—that he wants Big Tech to write internet policy. He’s threatening to blow up the legal framework that allows everyday people to speak online unless Mark Zuckerberg comes to his office and tells him what laws to pass. This is Congress openly abdicating its responsibility to govern in favor of letting a handful of tech CEOs do it instead.
The problem? The people who benefit from Section 230 aren’t the big tech CEOs. They’re you. They’re me. They’re every small forum, every Discord server, every newsletter with comments, every community space online where people can actually talk to each other without first getting permission from a building full of lawyers.
They want “big tech” to come to the table, even though (as we’ve explained over and over and over again) the damage from repealing 230 is not to “big tech.” Hell, Meta has been calling for the removal of Section 230 for years.
Why? Because Meta (unlike Durbin) knows exactly what every 230 expert has been saying for years: its main benefit has fuck all to do with “big tech” and is very much about protecting you, me, and the everyday users of the internet, creating smaller spaces where they can speak, interact, build community and more.
Repealing Section 230 doesn’t hurt Meta at all. Because if you get rid of Section 230, Meta can afford the lawsuits. They have a building full of lawyers they’re already paying. They can pay them to take on the various lawsuits and win. Why will they win? Because the First Amendment is what actually protects most of the speech these dipshit Senators are mad about.
But winning on First Amendment grounds probably costs between $2 million and $5 million. Winning on 230 grounds happens at an earlier stage with much less work, and probably costs $100k. A small company can survive a few $100k lawsuits. But a few $5 million lawsuits puts them out of business.
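To make that asymmetry concrete, here’s a back-of-the-envelope sketch using the ballpark figures above. The per-case costs and the three-lawsuits-a-year rate are illustrative assumptions, not real litigation data:

```python
# Back-of-the-envelope math on the cost asymmetry described above.
# The per-case figures and the lawsuit count are illustrative assumptions, not real data.

COST_230_DISMISSAL   = 100_000     # winning at the motion-to-dismiss stage on Section 230 grounds
COST_FIRST_AMENDMENT = 5_000_000   # winning the same case much later, on First Amendment grounds

def annual_defense_bill(lawsuits: int, cost_per_case: int) -> int:
    """Total legal spend for a year of lawsuits, every one of which the site wins."""
    return lawsuits * cost_per_case

lawsuits_per_year = 3
print(annual_defense_bill(lawsuits_per_year, COST_230_DISMISSAL))    # 300000 -- painful but survivable
print(annual_defense_bill(lawsuits_per_year, COST_FIRST_AMENDMENT))  # 15000000 -- fatal for most small companies
```

Same plaintiffs, same eventual wins in court; the only variable is how early the case gets thrown out.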
We already know this. We can see it with the DMCA, which was always weaker than Section 230. A decade and a half ago, Veoh was poised to be a big competitor to YouTube. But it got sued. It won the lawsuit… but went out of business anyway, because the legal fees killed it before it won. And now YouTube dominates the space.
When you weaken intermediary protection laws, you help the big tech providers.
Separately, notice Durbin’s phrasing about children being exploited. Can some reporter please ask Dick Durbin to explain how removing Section 230 protects children? He won’t be able to answer, because it won’t help. Or maybe he’ll punt to Senator Graham, whose press release at least attempts an answer:
“Giant social media platforms are unregulated, immune from lawsuits and are making billions of dollars in advertising revenue off some of the most unsavory content and criminal activity imaginable. It is past time to allow those who have been harmed by these behemoths to have their day in court,” said Graham.
Day in court… for what? Most “unsavory content” is constitutionally protected speech. Section 230 doesn’t make that speech legal; it just spares platforms from having to litigate over hosting it. Repeal it, and you’re litigating over perfectly lawful speech, one plaintiff at a time.
As for criminal activity, well, that’s a law enforcement issue, not related to Section 230. If you don’t think that criminal activity is being properly policed online, maybe that’s something you should focus on?
Section 230 gives companies the freedom to make changes to protect children. That was the entire point of it. Literally, Chris Cox and Ron Wyden wanted a structure that would create incentives for platforms to be able to protect their users (including children!) without having to face legal liability for any little mistake.
If you take away Section 230, you actually tie the hands of companies trying to protect children. Because, now, every single thing they do to try to make their site safer opens them up to legal liability. That means you no longer have trust & safety or child safety experts making decisions about what’s best: you have lawyers. Lawyers who just want to protect companies from liability.
So, what will they do? Not the thing that protects children (that’s now legally risky), but the thing that avoids liability, which tends to be putting your head in the sand. Avoiding knowledge gets you out of these lawsuits, because under existing distributor liability concepts, knowledge is key to holding a distributor liable.
The only benefits to killing Section 230 are (1) to the biggest tech companies who wipe out competitors, (2) to the trial lawyers who plan to get rich suing the biggest tech companies, and (3) to Donald Trump, who can use the new rules to put even more pressure on the internet to suppress speech he doesn’t like.
I know for a fact that Senator Wyden has tried to explain this to his colleagues in the Senate, and they just refuse to listen.
Reminding everyone for no particular reason that Section 230 is one of the last things standing between free speech online and Trump having control over everything you see and say on the internet
This is exactly why BestNetTech needs your support. When the most powerful people in government are ignoring experts and pushing legislation based on lies, someone needs to keep explaining what’s actually happening. We’ve been doing that for over 25 years, and we’re going to keep doing it—but we need your help to make sure that continues.
Meanwhile, the actual users of the open internet—and the children Durbin claims to be protecting—come out worse off. Senator Durbin and his cosponsors (Senators Graham, Grassley, Whitehouse, Hawley, Klobuchar, Blackburn, Blumenthal, Moody, and Welch) know all this. They’ve been told all of this. Sometimes by Senator Wyden himself. But all of them (with the possible exception of Welch who I don’t know as much about) have a long and well-known history of simply hating the fact that the open internet exists.
The bill isn’t child protection, and it sure isn’t tech regulation. It’s a suicide pact drafted by people who’ve always despised an internet they don’t control. Zuckerberg gets handed the pen; we get handed the bill—and the bullet.
Yesterday, Rep. Harriet Hageman released her bill to repeal Section 230. She’s calling it “reform,” but make no mistake—it’s a repeal, and I’ll explain why below. The law turns 30 in February, and there’s a very real chance this could be its last anniversary.
Which is why we’re running BestNetTech’s fundraising campaign right now, offering our very first commemorative coin for donations of at least $100 made before January 5th. That coin celebrates those 30 years of Section 230. But more importantly, your support funds the kind of coverage that can actually cut through the bullshit at a moment when it matters most.
Because here’s the thing: for nearly three decades, we’ve been one of the only sources to report fully and accurately on both how Section 230 works and why it’s so important. And right now, with a bipartisan coalition gunning to kill it based on myths and misinformation, that expertise is desperately needed.
Section 230 remains one of the most misunderstood laws in America, even among the people in Congress trying to destroy it. Some of that confusion is deliberate—political expediency wrapped in talking points. But much of it has calcified into “common knowledge” that’s actively wrong. The “platform or publisher” distinction that doesn’t exist in the law. The idea that 230 protects illegal content. The claim that moderation choices forfeit your protections. All myths. All dangerous. All getting repeated by people who should know better.
So below, I’m highlighting some of our essential Section 230 coverage—not as a greatest hits compilation, but as a roadmap to understanding what’s actually at stake. If you believe in the open internet, you need Section 230. And if you need Section 230, you need someone who actually understands it fighting back against the tsunami of bullshit. That’s what you’re funding when you support BestNetTech.
Let’s start with the big one. Our most popular post ever on Section 230:
Five years later, this is still the single most useful thing you can hand someone who’s confidently wrong about Section 230. It systematically demolishes every major myth—the platform/publisher nonsense, the “neutrality” requirement that doesn’t exist, the “good faith” clause people misread, all of it—in a format designed to be shared. And people do share it, constantly, because the same wrong arguments keep recycling. Consider this your foundation.
This is the piece that exposes the semantic game. Politicians love to say they’re not repealing 230, just “reforming” it. But as Cathy Gellis explains, nearly every reform proposal accomplishes the same thing: it forces websites into expensive, extended litigation to reach an outcome the law currently reaches in weeks. That’s not reform—it’s sabotage by procedure. The real benefit of 230 isn’t the outcome (most of these cases would eventually win on First Amendment grounds anyway), it’s that you get there for $100k instead of $5 million. Strip that away and you’ve effectively repealed the law for everyone except the richest companies. Which, spoiler alert, is exactly the point of most “reform” proposals.
A near-universal trait of those who show up with some crazy idea to “reform” Section 230 is that they don’t understand how the law works, despite the many explainers out there (and an entire book by Jeff Kosseff). And that’s why, as Cathy’s article above explains, the advocates for reform lean on the claim that they’re just “reforming” it when what they’re actually pushing is an effective repeal.
Law professor James Boyle asks the more interesting question: why do smart people keep getting this so catastrophically wrong? His answer—cognitive biases, analogies to other areas of law that don’t actually apply, and the sheer difficulty of thinking clearly about speech policy—explains why the same bad ideas keep resurfacing despite being debunked repeatedly. Understanding the psychology of the confusion is almost as important as correcting it.
So many complaints about Section 230 are actually complaints about the First Amendment in disguise. People angry that a website won’t remove certain speech often blame 230, but the reality is that the First Amendment likely protects that speech anyway. Prof. Jess Miers explains why killing 230 won’t magically enable the censorship people want—it’ll just make the process more expensive and unpredictable. Some people hear that and think “great, we can rely on the First Amendment alone then!” Which brings us to:
This is the piece that clicks it all into place. Prof. Eric Goldman’s paper explains that 230 isn’t an alternative to First Amendment protection—it’s a procedural shortcut to the same outcome. Without 230, most of these lawsuits would still eventually fail on First Amendment grounds. The difference is it would cost $3-5 million in legal fees to get there instead of $100k. That $100k vs $5 million gap is the difference between an ecosystem where small companies can exist and one where only giants survive. Anyone telling you we can just rely on the First Amendment either doesn’t understand this or is deliberately trying to consolidate the internet into a handful of megacorps.
And now we get to the part where even the supposed experts fuck it up. The NY Times—the Paper of Record—has made the same basic factual error about Section 230 so many times they’ve had to run variations of this correction repeatedly:
If it feels like you can’t trust the mainstream media to accurately report on Section 230, you’re not wrong. And that’s why we do what we do at BestNetTech.
Even the tech press—outlets that should know better—regularly faceplants on this stuff. This Wired piece was so aggressively wrong it read like parody. The value here is watching us dissect not just the errors, but how someone can write thousands of words about a law while fundamentally misunderstanding what it does.
The title says it all. When former members of Congress—people who theoretically understand how laws work—produce something this catastrophically wrong, it reveals the scope of the problem. These aren’t random trolls; these are people with institutional credibility writing op-eds that influence policy. The danger here is that their ignorance carries weight.
The pattern is almost comical: someone decides 230 is bad, spends zero time understanding it, then announces a “solution” that would either accomplish nothing or catastrophically backfire. This piece is representative of dozens we’ve written, each one responding to a new flavor of the same fundamental confusion, in a way no other publication online bothers to do.
People have assigned Section 230 almost mystical properties—that it’s the reason democracy is failing, or that repealing it would somehow fix polarization, or radicalization, or misinformation. The law does none of these things, good or bad. This piece dismantles the fantasy thinking that treats 230 like a magic wand.
Amid all the doom-saying, it’s worth remembering what 230 actually enables. Jess Miers walks through five specific cases where the law protected communities, support groups, review sites, and services that improve people’s lives. Repealing 230 doesn’t just hurt Facebook—it destroys the ecosystem of smaller communities that depend on user-generated content.
Please support our continued reporting on Section 230
There are dozens more pieces in our archives, each responding to new variations of the same fundamental misunderstandings. We’ve been doing this for nearly three decades—long before it was politically fashionable to attack 230, and we’ll keep doing it as long as the law is under threat.
Because here’s what happens if we lose this fight: the internet consolidates into a handful of platforms big enough to survive the legal costs. Smaller communities die. Innovation gets strangled in the crib. And ironically, the problems people blame on 230—misinformation, radicalization, abuse—all get worse, because only the giants with the resources to over-moderate will survive, and they’ll moderate in whatever way keeps advertisers and governments happy, not in whatever way actually serves users.
That’s the stakes. Not whether Facebook thrives, but whether the next generation of internet services can even exist.
We’re committed to making sure policymakers, journalists, and anyone who cares about this stuff actually understand what they’re about to destroy. But we need support to keep doing it. If you agree that Section 230 matters, and that someone needs to keep telling the truth about it when even the NY Times can’t get basic facts right, support BestNetTech today. Consider a $230 donation and get our first commemorative coin, celebrating 30 years of a law that’s under existential threat and making sure it survives to see 31.
The Court of Justice of the EU—likely without realizing it—just completely shit the bed and made it effectively impossible to run any website in the entirety of the EU that hosts user-generated content.
Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.
The EU has always offered weaker intermediary protections, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA), which still generally tries to put liability on the speaker but has some ways of shifting it to the platform.
No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit, if there’s any content at all on your site that includes personal data of someone you may be liable.”
As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:
The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.
And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s any legal recourse, it seems like it should be on whoever created and posted that fake ad. Instead, the CJEU finds that Russmedia is liable for it, even though they responded within an hour and took down the ad as soon as they found out about it.
The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:
The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.
With the case sent up to the CJEU, things get totally twisted, as the court argues that, under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of that data. Once the host is a “controller,” the much stricter GDPR rules on data protection apply, and the more careful calibration of intermediary liability rules gets tossed right out the window.
And out the window, right with it, is the ability to have a functioning open internet.
The court basically shreds basic intermediary liability principles here:
In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.
Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?
The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.
How would a platform do that?
¯\_(ツ)_/¯
There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains:
The Court said the host has to
pre-check posts (i.e. do general monitoring)
know who the posting user is (i.e. no anonymous speech)
try to make sure the posts don’t get copied by third parties (um, like web search engines??)
Basically, all three of those are effectively impossible.
Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.
Some people have said that this ruling isn’t so bad, because the ruling is about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limit this ruling at all.
There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
As for the “sensitive personal data” part, that makes little difference, because sites will still have to scan all content before it’s posted to guarantee no “sensitive personal data” gets through, which means accurately predicting what a court might later deem sensitive. The highly likely result is that any website trying to comply with this ruling will block a ton of content on the off chance that it might be deemed sensitive.
As the court noted:
In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.
Good luck figuring out how to do that with third-party content.
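To make the over-blocking problem concrete, here’s a minimal sketch of what a naive “scan everything before publication” filter might look like. It’s purely hypothetical (no platform runs this, and the patterns and keyword list are invented for illustration), but it shows the bind: because the operator carries the liability, every ambiguous match gets blocked, while genuinely harmful posts with no obvious markers still sail through.

```python
import re

# Purely hypothetical sketch of a naive pre-publication GDPR filter.
# Not a compliance tool -- just an illustration of why "scan everything first"
# collapses into over-blocking.

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")    # crude "looks like a phone number"
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+") # crude "looks like an email address"

# Terms that merely *hint* at GDPR Article 9 "special category" data
# (health, religion, sexual orientation, political opinions, union membership...)
ARTICLE_9_HINTS = {
    "diagnosis", "pregnant", "hiv", "catholic", "muslim", "atheist",
    "gay", "lesbian", "trans", "union member",
}

def pre_screen(post_text: str) -> bool:
    """Return True to publish the post, False to block it.

    Because the liability lands on the operator, every ambiguous case
    has to be resolved by blocking -- which is the over-blocking problem.
    """
    if PHONE_RE.search(post_text) or EMAIL_RE.search(post_text):
        return False  # might be someone's personal contact details -> block
    lowered = post_text.lower()
    if any(hint in lowered for hint in ARTICLE_9_HINTS):
        return False  # might be sensitive data about a third party -> block
    # Everything else is published -- even though a court can later decide that
    # a photo, a nickname, or an address fragment was "sensitive personal data"
    # this filter had no way to recognize.
    return True

# An innocuous listing gets blocked (it contains a phone number)...
print(pre_screen("Family-run deli, call +40 264 555 123 to book a table"))  # False
# ...while a genuinely harmful fake ad with no obvious markers sails through.
print(pre_screen("Ask for 'Ana' behind the old mill after dark"))           # True
```

And that’s all before you get to photos, nicknames, or context that a court might later decide made someone identifiable.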
And they’re pretty clear that every website must pre-scan every bit of content. They claim it’s about “marketplaces” and “advertisements” but there’s nothing in the GDPR that limits this ruling to those categories:
Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.
No more anonymity allowed:
As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.
On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.
Finally, as Keller noted above, the CJEU seems to think it’s possible to require platforms to make sure content is never displayed on any other platform as well:
Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.
To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.
Again, the CJEU appears to be living in a fantasy land that doesn’t exist.
This is what happens when you over-index on the idea of “data controllers” needing to keep data “private.” Whoever revealed sensitive data should have the liability placed on them. Putting it on the intermediary is misplaced and ridiculous.
There is simply no way to comply with the law under this ruling.
In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.
There’s a reason why the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you open things up to say the companies that provide the tools for those who create the content can be liable, you’re opening up a can of worms that will create a huge mess in the long run.
That long run has arrived in the EU, and with it, quite the mess.
Democratic Senator Mark Kelly and Republican Senator John Curtis want to gut Section 230 to combat “political radicalization”—in honor of Charlie Kirk, whose entire career was built on political radicalization.
Kirk styled himself as a “free speech warrior” because he would show up on college campuses to “debate” people, but as we’ve covered, the “debate me bro” shtick was just trolling designed to generate polarizing content for social media. He made his living pushing exactly the kind of inflammatory political content that these senators now claim is so dangerous it requires dismantling core legal protections for speech. Their solution to political violence inspired by online rhetoric is to create a legal framework that will massively increase censorship of political speech.
Which they claim they’re doing… in support of free speech.
Almost everything about what they’re saying is backwards.
The two Senators spoke at an event at Utah Valley University, where Charlie Kirk was shot, to talk about how they were hoping to stop political violence. That’s a worthwhile goal, but their proposed solution reveals they don’t understand how Section 230 actually works.
The senators also used their bipartisan panel on Wednesday to announce plans to hold social media companies accountable for the type of harmful content promoted around the assassination of Kirk, which they say leads to political violence.
During their televised discussion, Curtis and Kelly previewed a bill they intend to introduce shortly that would remove liability protection for social media companies that boost content that contributes to political radicalization and violence.
The “Algorithm Accountability Act” would transform one of the pillars of internet governance by reforming a 30-year-old regulation known as Section 230 that gives online platforms legal immunity for content posted by their users.
“What we’re saying is this is creating an environment that is causing all sorts of harm in our society and particularly with our youth, and it needs to be addressed,” Curtis told the Deseret News.
The bill would strip Section 230 protections from companies if it can be proven in court that they used an algorithm to amplify content that caused harm. This change means tech giants would “own” the harmful content they promote, creating a private cause of action for individuals to sue.
Like so many politicians who want to gut Section 230, Kelly and Curtis clearly don’t understand how it actually works. Their “Algorithm Accountability Act” would create exactly the kind of censorship regime they claim to oppose.
It’s kind of incredible how many times I’ve had to say this to US Senators, but repealing 230 doesn’t make companies automatically responsible for speech. That’s literally not how it works. They’re still protected by the First Amendment.
It just makes it much more expensive to defend hosting speech, which means they will take one of two approaches: (1) host way less speech and become much, much more restricted in what people can say or (2) do little to no moderation, because under the First Amendment, they can only be held liable if they have knowledge of legally violative content.
And most of the content that would be covered by this bill, “speech that contributes to political radicalization,” is, um, kinda quintessentially protected by the First Amendment.
Kelly’s comments reveal the stunning cognitive dissonance at the heart of this proposal:
“I did not agree with him on much. But I’ll tell you what, I will go to war to fight for his right to say what he believes,” said Kelly, who is a former Navy pilot. “Even if you disagree with somebody, doesn’t mean you put a wall up between you and them.”
This is breathtaking doublethink. Kelly claims he’ll “go to war” to protect Kirk’s right to speak while literally authoring legislation that will silence the platforms where that speech happens. It’s like saying “I’ll defend your right to assembly” while bulldozing every meeting hall in town.
Curtis manages to be even more confused:
What this bill would do, Curtis explained, is open up these trillion-dollar companies to the same kind of liability that tobacco companies and other industries face.
“If they’re responsible for something going out that caused harm, they are responsible. So think twice before you magnify. Why do these things need to be magnified at all?” Curtis said.
This comparison is absurdly stupid. Tobacco is a physical product that literally destroys your lungs and causes cancer. Speech is expression protected by the First Amendment. Curtis is essentially arguing that if political speech influences someone’s behavior in a way he doesn’t like, the platform should be liable—as if words and ideas are chemically addictive carcinogens.
The entire point of the First Amendment is that the government doesn’t get to punish speech just because someone decides it’s “harmful.”
What Curtis is proposing is holding companies liable whenever speech “causes harm,” which is fucking terrifying when Trump and his FCC are already threatening platforms for hosting criticism of the administration.
The political implications here are staggering. Kelly, a Democrat, is signing onto a bill that will let Trump and MAGA supporters (the bill has a private right of action that will let anyone sue!) basically sue every internet platform for “promoting” content they deem politically polarizing, which they will say is anything that criticizes Trump or promotes “woke” views.
And why is he pushing such a bill in supposed support of Charlie Kirk, a person whose entire job was pushing political polarization, and whose “debate me bro” shtick was designed to generate exactly that kind of polarizing content online?
What are we even doing here?
This entire proposal is a monument to confused thinking. Kelly and Curtis claim they want to honor Charlie Kirk by passing legislation that would have silenced the very platforms where he built his career. They claim to support free speech while authoring a bill designed to chill political expression. They worry about political polarization while creating a legal weapon that will be used almost exclusively by the most polarizing political actors to silence their critics.
Rolling back Section 230 will lead to much greater censorship, not less. Claiming it’s necessary to diminish political polarization is disconnected from reality. But at least it will come in handy for whoever challenges this law as unconstitutional—the backers are out there openly admitting they’re introducing legislation designed to violate the First Amendment.
Brian Reed’s “Question Everything” podcast built its reputation on careful journalism that explores moral complexity within the journalism field. It’s one of my favorite podcasts. Which makes his latest pivot so infuriating: Reed has announced he’s now advocating to repeal Section 230—while demonstrating he fundamentally misunderstands what the law does, how it works, and what repealing it would accomplish.
If you’ve read BestNetTech for basically any length of time, you’ll know that I feel the exact opposite on this topic. Repealing Section 230, or really almost any of the proposals to reform it, would be a complete disaster for free speech on the internet, including for journalists.
The problem isn’t advocacy journalism—I’ve been doing that myself for years. The problem is Reed’s approach: decide on a solution, then cherry-pick emotional anecdotes and misleading sources to support it, while ignoring the legal experts who could explain why he’s wrong. It’s the exact opposite of how to do good journalism, which is unfortunate for someone who holds out his (otherwise excellent!) podcast as a place to explore how to do journalism well.
Last week, he published the first episode of his “get rid of 230” series, and it has so many problems, mistakes, and nonsense, that I feel like I had to write about it now, in the hopes that Brian might be more careful in future pieces. (Reed has said he plans to interview critics of his position, including me, but only after the series gets going—which seems backwards for someone advocating major legal changes.)
The framing of this piece is around the conspiracy theory regarding the Sandy Hook school shootings, and someone who used to believe it. First off, this feels like a cheap journalistic device, basing a larger argument on an emotional anecdote that clouds the issues and the trade-offs. The Sandy Hook shooting was horrible! The fact that some jackasses pushed conspiracy theories about it is also horrific! And it primes you, in classic “something must be done, this is something, we must do this” fashion, to accept Reed’s preferred solution: repeal 230.
But he doesn’t talk to any actual experts on 230, misrepresents what the law does, misleads listeners about how repealing it would have affected that specific (highly emotional) story, and then closes on an emotionally manipulative note: convincing the former conspiracy believer he spoke to that getting rid of 230 would work, despite her having little understanding of what would actually happen.
In listening to the piece, it struck me that Reed is doing a version of what he (somewhat misleadingly) claims social media companies are doing: using manipulative misrepresentations to keep his audience hooked and to convince them something false is true. It’s a shame, but it’s certainly not journalism.
Let’s dig into some of the many problems with the piece.
The Framing is Manipulative
I already mentioned that the decision to frame the entire piece around one extraordinary but horrific story is manipulative, but it goes beyond that. Reed compares the fact that some of the Sandy Hook victims’ families successfully sued Alex Jones for defamation over the lies and conspiracy theories he spread regarding that event, to the fact that they can’t sue YouTube.
But in 2022, family members of 10 of the Sandy Hook victims did win a defamation case against Alex Jones’s company, and the verdict was huge. Jones was ordered to pay the family members over a billion dollars in damages.
Just this week, the Supreme Court declined to hear an appeal from Jones over it. A semblance of justice for the victims, though infuriatingly, Alex Jones filed for bankruptcy and has avoided paying them so far. But also, and this is what I want to focus on, the lawsuits are a real deterrent to Alex Jones and others who will likely think twice before lying like this again.
So now I want you to think about this. Alex Jones did not spread this lie on his own. He relied on social media companies, especially YouTube, which hosts his show, to send his conspiracy theory out to the masses. One YouTube video spouting this lie shortly after the shooting got nearly 11 million views in less than 2 weeks. And by 2018, when the family sued him, Alex Jones had 1.6 billion views on his YouTube channel. The Sandy Hook lie was laced throughout that content, burrowing its way into the psyche of millions of people, including Kate and her dad.
Alex Jones made money off of each of those views. But so did YouTube. Yet, the Sandy Hook families, they cannot sue YouTube for defaming them because of section 230.
There are a ton of important details left out of this, that, if actually presented, might change the understanding here. First, while the families did win that huge verdict, much of that was because Jones defaulted. He didn’t really fight the defamation case, basically ignoring court orders to turn over discovery. It was only after the default that he really tried to fight things at the remedy stage. Indeed, part of the Supreme Court cert petition that was just rejected was because he claimed he didn’t get a fair trial due to the default.
You simply can’t assume that, because the families won that very bizarre case in which Jones treated the entire affair with contempt, they would also have a case against YouTube. That’s not how this works.
This is Not How Defamation Law Works
Reed correctly notes that the bar for defamation is high, including that there has to be knowledge to qualify, but then immediately seems to forget that. Without a prior judicial determination that specific content is defamatory, no platform—with or without Section 230—is likely to meet the knowledge standard required for liability. That’s kind of important!
Now this is really important to keep in mind. Freedom of speech means we have the freedom to lie. We have the freedom to spew absolute utter bullshit. We have the freedom to concoct conspiracy theories and even use them to make money by selling ads or subscriptions or what have you.
Most lies are protected by the First Amendment and they should be.
But there’s a small subset of lies that are not protected speech even under the First Amendment. The old shouting fire in a crowded theater, not necessarily protected. And similarly, lies that are defamatory aren’t protected.
In order for a statement to be defamatory, okay, for the most part, whoever’s publishing it has to know it’s untrue and it has to cause damage to the person or the institution the statement’s about. Reputational damage, emotional damage, or a lie could hurt someone’s business. The bar for proving defamation is high in the US. It can be hard to win those cases.
The key part there: while there’s some nuance, mostly the publisher has to know the statement is untrue. And the bar here is very high. To survive under the First Amendment, the knowledge standard is important.
It’s why booksellers can’t be held liable for “obscene” books on their shelves. It’s why publishers aren’t held liable for books they publish, even if those books lead people to eat poisonous mushrooms. The knowledge standard matters.
And even though Reed mentions the knowledge point, he seems to immediately forget it. Nor does he even attempt to deal with the question of how an algorithm can have the requisite knowledge (hint: it can’t). He just brushes past that rather important part.
But it’s the key to why his entire premise is flawed: just making it so anyone can sue web platforms doesn’t mean anyone will win. Indeed, they’ll lose in most cases, because if you get rid of 230, the First Amendment still exists. But, for a bunch of structural reasons explained below, repeal would make the world of internet speech much worse for you and me (and the journalists Reed wants to help), while actually clearing the field of competitors to the Googles and Metas that Reed is hoping to punish.
That’s Not How Section 230 Works
Reed’s summary is simply inaccurate. And not in a “well, we can differ on how we describe it” way. He makes blatant factual errors. First, he claims that “only internet companies” get 230 protections:
These companies have a special protection that only internet companies get. We need to strip that protection away.
But that’s wrong. Section 230 applies to any provider of an interactive computer service (which is more than just “internet companies”) and their users. It’s right there in the law. Because of that latter part, it has protected people forwarding emails and retweeting content. It has been used repeatedly to protect journalists on that basis. It protects you and me. It is not exclusive to “internet companies.” That’s just factually wrong.
The law is not, and has never been, some sort of special privilege for certain kinds of companies, but a framework for protecting speech online, by making it possible for speech distributing intermediaries to exist in the first place. Which helps journalists. And helps you and me. Without it, there would be fewer ways in which we could speak.
Reed also appears to misrepresent or conflate a bunch of things here:
Section 230, which Congress passed in 1996, it makes it so that internet companies can’t be sued for what happens on their sites. Facebook, YouTube, Tik Tok, they bear essentially no responsibility for the content they amplify and recommend to millions, even billions of people. No matter how much it harms people, no matter how much it warps our democracy. Under section 230, you cannot successfully sue tech companies for defamation, even if they spread lies about you. You can’t sue them for pushing a terror recruitment video on someone who then goes and kills your family member. You can’t sue them for bombarding your kids with videos that promote eating disorders or that share suicide methods or sexual content.
First off, much of what he describes is First Amendment protected speech. Second, he ignores that Section 230 doesn’t apply to federal criminal law, which is what things like terrorist content would likely cover (I’m guessing he’s confused based on the Supreme Court cases from a few years ago, where 230 wasn’t the issue—the lack of any traceability of the terrorist attacks to the websites was).
But, generally speaking, if you’re advocating for legal changes, you should be specific about what you want changed and why. Putting out a big list of stuff, some of which would be protected, some of which would not be, as well as some that the law covers and some it doesn’t… isn’t compelling. It suggests you don’t understand the basics. Furthermore, lumping things like eating disorders in with defamation and terrorist content suggests an unwillingness to deal with the specifics and the complexities. Instead, it suggests a desire for a general “why can’t we pass a law that says bad stuff isn’t allowed online?” But that’s a First Amendment issue, not a 230 issue (as we’ll explain in more detail below).
Reed also, unfortunately, seems to have been influenced by the blatantly false argument that there’s a platform/publisher distinction buried within Section 230. There isn’t. But it doesn’t stop him from saying this:
I’m going to keep reminding you what Section 230 is, as we covered on this show, because I want it to stick. Section 230, small provision in a law Congress passed in 1996, just 26 words, but words that were so influential, they’re known as the 26 words that created the internet.
Quick fact check: Section 230 is way longer than 26 words. Yes, Section (c)(1) is 26 words. But, the rest matters too. If you’re advocating to repeal a law, maybe read the whole thing?
Those words make it so that internet platforms cannot be treated as publishers of the content on their platform. It’s why Sandy Hook parents could sue Alex Jones for the lies he told, but they couldn’t sue the platforms like YouTube that Jones used to spread those lies.
And there is a logic to this that I think made sense when Section 230 was passed in the ’90s. Back then, internet companies offered chat rooms, message boards, places where other people posted, and the companies were pretty passively transmitting those posts.
Reed has this completely backwards. Section 230 was a direct response to Stratton Oakmont v. Prodigy, where a judge ruled that Prodigy’s active moderation to create a “family friendly” service made it liable for all content on the platform.
The two authors of Section 230, Ron Wyden and Chris Cox, have talked about this at length for decades. They wanted platforms to be active participants, not dumb conduits passively transmitting posts. Their fear was that, without Section 230, those services would be forced to act as purely passive transmitters, because doing anything to the content (as Prodigy did) would make them liable for all of it. And given the amount of content, reviewing all of it would be impossible.
So Cox and Wyden’s solution to encourage platforms to be more than passive conduits was to say “if you do regular publishing activities, such as promoting, rearranging, and removing certain content, then we won’t treat you like a publisher.”
The entire point was to encourage publisher-like behavior, not discourage it.
Reed has the law’s purpose exactly backwards!
That’s kind of shocking for someone advocating to overturn the law! It would help to understand it first! Because if the law actually did what Reed pretends it does, I might be in favor of repeal as well! The problem is, it doesn’t. And it never did.
One analogy that gets thrown around for this is that the platforms, they’re like your mailman. They’re just delivering somebody else’s letter about the Sandy Hook conspiracy. They’re not writing it themselves. And sure, that might have been true for a while, but imagine now that the mailman reads the letter he’s delivering, sees it’s pretty tantalizing. There’s a government conspiracy to take away people’s guns by orchestrating a fake school shooting, hiring child actors, and staging a massacre and a whole 911 response.
The mailman thinks, “That’s pretty good stuff. People are going to like this.” He makes millions of copies of the letter and delivers them to millions of people. And then as all those people start writing letters to their friends and family talking about this crazy conspiracy, the mailman keeps making copies of those letters and sending them around to more people.
And he makes a ton of money off of this by selling ads that he sticks into those envelopes. Would you say in that case the mailman is just a conduit for someone else’s message? Or has he transformed into a different role? A role more like a publisher who should be responsible for the statements he or she actively chooses to amplify to the world. That is essentially what YouTube and other social media platforms are doing by using algorithms to boost certain content. In fact, I think the mailman analogy is tame for what these companies are up to.
Again, the entire framing here is backwards. It’s based on Reed’s false assumption—an assumption that any expert in 230 would hopefully disabuse him of—that the reason for 230 was to encourage platforms to be “passive conduits.”
Cox and Wyden were clear (and have remained clear) that the purpose of the law was exactly the opposite: to give platforms the ability to create different kinds of communities and to promote/demote/moderate/delete at will.
The key point was that, because of the amount of content, no website would be willing and able to do any of this if they were potentially held liable for everything.
As for the final point, that social media companies are now way different from “the mailman,” both Cox and Wyden have talked about how wrong that is. In an FCC filing a few years back, debunking some myths about 230, they pointed out that this claim of “oh sites are different” is nonsense and misunderstands the fundamentals of the law:
Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.
The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.
The Understanding of How Incentives Work Under the Law Is Wrong
Here’s where Reed’s misunderstanding gets truly dangerous. He claims Section 230 removes incentives for platforms to moderate content. In reality, it’s the opposite: without Section 230, websites would have less incentive to moderate, not more.
Why? Because under the First Amendment, to hold an intermediary liable for someone else’s speech, you generally need to show that the intermediary had actual knowledge of the violative nature of the content. If you removed Section 230, the best way to avoid that knowledge would be not to look, and not to moderate.
You potentially go back to a Stratton Oakmont-style world, where the incentives are to do less moderation because any moderation you do introduces more liability. The more liability you create, the less likely someone is to take on the task. Any investigation into Section 230 has to start from understanding those basic facts, so it’s odd that Reed so blatantly misrepresents them and suggests that 230 means there’s no incentive to moderate:
We want to make stories that are popular so we can keep audiences paying attention and sell ads—or movie tickets or streaming subscriptions—to support our businesses. But in the world that every other media company occupies, aside from social media, if we go too far and put a lie out that hurts somebody, we risk getting sued.
It doesn’t mean other media outlets don’t lie or exaggerate or spin stories, but there’s still a meaningful guard rail there. There’s a real deterrent to make sure we’re not publishing or promoting lies that are so egregious, so harmful that we risk getting sued, such as lying about the deaths of kids who were killed and their devastated parents.
Social media companies have no such deterrent and they’re making tons of money. We don’t know how much money in large part because the way that kind of info usually gets forced out of companies is through lawsuits which we can’t file against these tech behemoths because of section 230. So, we don’t know, for instance, how much money YouTube made from content with the Sandy Hook conspiracy in it. All we know is that they can and do boost defamatory lies as much as they want, raking cash without any risk of being sued for it.
But this gets at a fundamental flaw that shows up in these debates: the assumption that the only possible pressure on websites is the threat of being sued. That’s not just wrong; it, once again, gets the purpose and function of Section 230 totally backwards.
There are tons of reasons for websites to do a better job moderating: if your platform fills up with garbage, users start to go away. So do advertisers, investors, and other partners.
This is, fundamentally, the most frustrating part about every single new person who stumbles haphazardly into the Section 230 debate without bothering to understand how the law actually works. They get the incentives exactly backwards.
230 says “experiment with different approaches to making your website safe.” Taking away 230 says “any experiment you try to keep your website safe opens you up to ruinous litigation.” Which one do you think leads to a healthier internet?
It Misrepresents How Companies Actually Work
Reed paints tech companies as cartoon villains, relying on simplistic and misleading interpretations of leaked documents and outdated sources. This isn’t just sloppy—it’s the kind of manipulative framing he’d probably critique in other contexts.
For example, he grossly misrepresents (in a truly manipulative way!) what the documents Frances Haugen released said, just as much of the media did. Here’s how Reed characterizes some of what Haugen leaked:
Haugen’s document dump showed that Facebook leadership knew about the harms their product is causing, including disinformation and hate speech, but also product designs that were hurting children, such as the algorithm’s tendency to lead teen girls to posts about anorexia. Francis Haugen told lawmakers that top people at Facebook knew exactly what the company was doing and why it was doing.
Except… that’s very much out of context. Here’s how misleading Reed’s characterization is. The actual internal research Haugen leaked—the stuff Reed claims shows Facebook “knew about the harms”—looked like this:
The headline of that slide sure looks bad, right? But then you look at the context, which shows that in nearly every single category they studied across boys and girls, they found that more users found Instagram made them feel better, not worse. The only category where that wasn’t true was teen girls and body image, where the split was pretty equal. That’s one category out of 24 studied! And this was internal research calling out that fact because the point was to convince the company to figure out ways to better deal with that one case, not to ignore it.
And, what we’ve heard over and over again since all this is that companies have moved away from doing this kind of internal exploration, because they know that if they learn about negative impacts of their own service, it will be used against them by the media.
Reed’s misrepresentation creates exactly the perverse incentive he claims to oppose: companies now avoid studying potential harms because any honest internal research will be weaponized against them by journalists who don’t bother to read past the headline. Reed’s approach of getting rid of 230’s protections would make this even worse, not better.
Because as part of any related lawsuit there would be discovery, and you can absolutely guarantee that a study like the one above that Haugen leaked would be used in court, in a misleading way, showing just that headline, without the necessary context of “we called this out to see how we could improve.”
So without Section 230 and with lawsuits, companies would have much less incentive to look for ways to improve safety online, because any such investigation would be presented as “knowledge” of the problem. Better not to look at all.
There’s a similar problem with the way Reed reports on the YouTube algorithm. Reed quotes Guillaume Chaslot but doesn’t mention that Chaslot left YouTube in 2013—12 years ago. That’s ancient history in tech terms. I’ve met Chaslot and been on panels with him. He’s great! And I think his insights on the dangers of the algorithm in the early days were important work and highlighted to the world the problems of bad algorithms. But it’s way out of date. And not all of the algorithms are bad.
Conspiracy theories are are really easy to make. You can just make your own conspiracy theories in like one hour shoot it and then it get it can get millions of views. They’re addictive because people who live in this filter bubble of conspiracy theories and they don’t watch the classical media. So they spend more time on YouTube.
Imagine you’re someone who doesn’t trust the media, you’re going to spend more time on YouTube. So since you spend more time on YouTube, the algorithm thinks you’re better than anybody else. The definition of better for the algorithm, it’s who spends more time. So it will recommend you more. So there’s like this vicious call.
It’s a vicious circle, Chaslot says, where the more conspiratorial the videos, the longer users stay on the platform watching them, the more valuable that content becomes, the more YouTube’s algorithm recommends the conspiratorial videos.
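To make the loop Chaslot describes concrete, here’s a deliberately crude sketch of engagement-weighted ranking. Everything in it (the titles, the watch times, the weight update) is invented for illustration; it is not YouTube’s actual system, just the shape of the feedback being described.

```python
# A toy sketch of the dynamic Chaslot describes, purely illustrative and
# not YouTube's actual code: content that holds attention longer gets a
# higher ranking weight, which gets it recommended more often, which earns
# it even more watch time.
import random

random.seed(42)

# Each video starts with the same recommendation weight.
weights = {
    "cooking tutorial": 1.0,
    "conspiracy deep dive": 1.0,
    "cat video": 1.0,
}

def recommend():
    # Pick a video in proportion to its current ranking weight.
    titles = list(weights)
    return random.choices(titles, weights=[weights[t] for t in titles])[0]

for _ in range(200):
    pick = recommend()
    # Assume (for the sake of the toy) that the "sticky" video holds attention longer.
    watch_minutes = 30 if "conspiracy" in pick else 5
    # Engagement feeds straight back into the ranking weight.
    weights[pick] += watch_minutes / 10

for title, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{title}: weight {weight:.1f}")
```

Run it and the “sticky” item snowballs to the top of the weights. That’s the dynamic Chaslot was warning about, and, as discussed below, the one the later research suggests YouTube recalibrated away from.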
Since Chaslot left YouTube, there have been a series of studies that have shown that, while some of that may have been true back when Chaslot was at the company, it hasn’t been true in many, many years.
A study in 2019 (looking at data from 2016 onwards) found that YouTube’s algorithm actually pushed people away from radicalizing content. A further study a couple of years ago similarly found no evidence of YouTube’s algorithm sending people down these rabbit holes.
It turns out that things like Chaslot’s public berating of the company, as well as public and media pressure, not to mention political blowback, had helped the company re-calibrate the algorithm away from all that.
And you know what allowed them to do that? The freedom Section 230 provided, saying that they wouldn’t face any litigation liability for adjusting the algorithm.
A Total Misunderstanding of What Would Happen Absent 230
Reed’s fundamental error runs deeper than just misunderstanding the law—he completely misunderstands what would happen if his “solution” were implemented. He claims that the risk of lawsuits would make the companies act better:
We need to be able to sue these companies.
Imagine the Sandy Hook families had been able to sue YouTube for defaming them in addition to Alex Jones. Again, we don’t know how much money YouTube made off the Sandy Hook lies. Did YouTube pull in as much cash as Alex Jones, five times as much? A hundred times? Whatever it was, what if the victims were able to sue YouTube? It wouldn’t get rid of their loss or trauma, but it could offer some compensation. YouTube’s owned by Google, remember, one of the most valuable companies in the world. More likely to actually pay out instead of going bankrupt like Alex Jones.
This fantasy scenario has three fatal flaws:
First, YouTube would still win these cases. As we discussed above, there’s almost certainly no valid defamation suit here. Most complained-about content will still be First Amendment-protected speech, and YouTube, as the intermediary, would still have the First Amendment and the “actual knowledge” standard to fall back on.
The only way to have actual knowledge of content being defamatory is for there to be a judgment in court about the content. So, YouTube couldn’t be on the hook in this scenario until after the plaintiffs had already taken the speaker to court and received a judgment that the content was defamatory. At that point, you could argue that the platform would then be on notice and could no longer promote the content. But that wouldn’t stop any of the initial harms that Reed thinks they would.
Second, Reed’s solution would entrench Big Tech’s dominance. Getting a case dismissed on Section 230 grounds costs maybe $50k to $100k. Getting the same case dismissed on First Amendment grounds? Try $2 to $5 million.
For a company like Google or Meta, with their buildings full of lawyers, this is still pocket change. They’ll win those cases. But it means that you’ve wiped out the market for non-Meta, non-Google sized companies. The smaller players get wiped out because a single lawsuit (or even a threat of a lawsuit) can be existential.
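To put that asymmetry in rough numbers, here’s some back-of-the-envelope math using the article’s own ballpark figures; the hypothetical startup legal budget is made up for illustration.

```python
# Back-of-the-envelope math using the cost ranges cited above. The startup
# legal budget is a hypothetical number, chosen only for illustration.
startup_legal_budget = 1_000_000              # hypothetical annual budget, in dollars

cost_230_dismissal = (50_000, 100_000)        # per case, from the figures above
cost_first_amendment_dismissal = (2_000_000, 5_000_000)

suits_survivable_230 = startup_legal_budget // cost_230_dismissal[1]
suits_survivable_1a = startup_legal_budget / cost_first_amendment_dismissal[0]

print(f"Suits survivable with Section 230: about {suits_survivable_230}")
print(f"Suits survivable on First Amendment alone: {suits_survivable_1a:.2f}")
# -> roughly 10 suits vs. half of one suit: a single case is existential.
```

Ten nuisance suits are survivable under 230’s quick-dismissal regime; without it, the budget doesn’t even cover one.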
The end result: Reed’s solution gives more power to the giant companies he paints as evil villains.
Third, there’s vanishingly little content that isn’t protected by the First Amendment. Using the Alex Jones example is distorting and manipulative, because it’s one of the extremely rare cases where defamation has been shown (and that was partly just because Jones didn’t really fight the case).
Reed doubles down on these errors:
But on a wider scale, The risk of massive lawsuits like this, a real threat to these companies’ profits, could finally force the platforms to change how they’re operating. Maybe they change the algorithms to prioritize content from outlets that fact check because that’s less risky. Maybe they’d get rid of fancy algorithms altogether, go back to people getting shown posts chronologically or based on their own choice of search terms. It’d be up to the companies, but however they chose to address it, they would at least have to adapt their business model so that it incorporated the risk of getting sued when they boost damaging lies.
This shows Reed still doesn’t understand the incentive structure. Companies would still win these lawsuits on First Amendment grounds. And they’d increase their odds by programming algorithms and then never reviewing content—the exact opposite of what Reed suggests he wants.
And here’s where Reed’s pattern of using questionable sources becomes most problematic. He quotes Frances Haugen advocating for his position, without noting that Haugen has no legal expertise on these issues:
For what it’s worth, this is what Facebook whistleblower Frances Haugen argued for in Congress in 2021.
I strongly encourage reforming Section 230 to exempt decisions about algorithms. They have 100% control over their algorithms and Facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. They shouldn’t get a free pass on that because they’re paying for their profits right now with our safety. So, I strongly encourage reform of 230 in that way.
But, as we noted when Haugen said that, this is (again) getting it all backwards. At the very same time that Haugen was testifying with those words, Facebook was literally running ads all over Washington DC, encouraging Congress to reform Section 230 in this way. Facebook wants to destroy 230.
Why? Because Zuckerberg knows full well what I wrote above. Getting rid of 230 means a few expensive lawsuits that his legal team can easily win, while wiping out smaller competitors who can’t afford the legal bills.
Meta’s usage has been declining as users migrate to smaller platforms. What better way to eliminate that competition than making platform operation legally prohibitive for anyone without Meta’s legal budget?
Notably, not a single person Reed speaks to is a lawyer. He doesn’t talk to anyone who lays out the details of how all this works. He only speaks to people who dislike tech companies. Which is fine, because it’s perfectly understandable to hate on big tech companies. But if you’re advocating for a massive legal change, shouldn’t you first understand how the law actually works in practice?
For a podcast about improving journalism, this represents a spectacular failure of basic journalistic practices. Indeed, Reed admits at the end that he’s still trying to figure out how to do all this:
I’m still trying to figure out how to do this whole advocacy thing. Honestly, pushing for a policy change rather than just reporting on it. It’s new to me and I don’t know exactly what I’m supposed to be doing. Should I be launching a petition, raising money for like a PAC? I’ve been talking to marketing people about slogans for a campaign. We’ll document this as I stumble my way through. It’s all a bit awkward for me. So, if you have ideas for how you can build this movement to be able to sue big tech. Please tell me.
There it is: “I’m still trying to figure out how to do this whole advocacy thing.” Reed has publicly committed to advocating for a specific legal change—one that would fundamentally reshape how the internet works—while admitting he doesn’t understand advocacy, hasn’t talked to experts, and is figuring it out as he goes. Generally it’s a bad idea to come up with a slogan when you still don’t even understand the thing you’re advocating for.
This is advocacy journalism in reverse: decide your conclusion, then do the research. It’s exactly the kind of shoddy approach that Reed would rightly criticize in other contexts.
I have no problem with advocacy journalism. I’ve been doing it for years. But effective advocacy starts with understanding the subject deeply, consulting with experts, and then forming a position based on that knowledge. Reed has it backwards.
The tragedy is that there are so many real problems with how big tech companies operate, and there are thoughtful reforms that could help. But Reed’s approach—emotional manipulation, factual errors, and backwards legal analysis—makes productive conversation harder, not easier.
Maybe next time, try learning about the law first, then deciding whether to advocate for its repeal.
During the last year of the Trump administration, there was this weird period where the Bill Barr DOJ decided that it could destroy the internet. It came out of Donald Trump declaring that Section 230 was bad for whatever reason Donald Trump thinks anything is bad, and telling his Attorney General to do something about it. This kicked off a weird time where Bill Barr was suddenly criticizing 230 (even though 230 has nothing to do with the DOJ) and ran a silly process to come up with “ideas” for how to change 230 (again, totally outside of the DOJ’s authority).
In 2020 (before COVID took over everything), the DOJ held a very odd “workshop” in which it brought in every crazy MAGA kook with a conspiracy theory about the internet to tell them how to rewrite 230. It then proposed some very bad ideas for how to “reform” 230, before effectively the clock ran out on Trump’s first term.
The good folks at EFF filed multiple lawsuits over all this, and earlier this year discovered that one reason why the clock ran out on the DOJ was that the team working on these issues was blindsided when, in the middle of it all, Trump issued an executive order that attacked the internet companies he was mad at… which the DOJ was already trying to figure out a way to regulate via 230 reform.
EFF has now received some more FOIA files from the government, and found that the Bill Barr DOJ was actually working closely with Congress to try to get a 230 rewrite through.
The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.
It’s certainly not unheard of for the DOJ to communicate with Congress over changes to laws it would like to see, but this was still pretty weird, in that it’s usually over laws that impact the DOJ itself (i.e., federal criminal laws). Remember: Section 230 has always had a carve-out for all federal criminal laws, so it does absolutely nothing to hinder any DOJ case whatsoever.
Many of the emails involve Lauren Willard, who worked directly for Bill Barr at the DOJ on competition and tech issues, and was part of the team that brought the DOJ’s first antitrust lawsuit against Google. In this packet is the email she sent to a bunch of Senate staffers who were apparently part of the wider DOJ-Senate effort to gut 230:
Hi everyone,
After many, many months of hard work and collaboration by this group, we finally received the green light to send our draft legislative packet to OMB to start the interagency process. (Yay!) Attached is the draft redline, cover letter, and section by section. There were a few minor edits as this was officially cleared by DOJ Leadership, but is extremely close to what everyone has seen numerous times already.
I really appreciate this team’s intellect, patience, and creative thinking. I know the draft isn’t exactly what everyone wanted, but I think it is a credible and thoughtful contribution to the Section 230 debate and represents well our DOJ equities.
Also, the fact that the DOJ was coordinating with people like Jon Schweppe (the last link in that EFF summary) is doubly interesting given that he’s now at the FTC and one of his tweets about punishing Media Matters helped get the FTC’s bogus investigation of that org blocked by the court. His contribution was to try to get Section 230 contingent on age gating the internet:
APP’s primary policy goal with regard to pornography is to get sites that host pornography to implement an age screening system. We think the best way to do this (while adhering to SCOTUS precedent) is to get the sites to do it voluntarily by having Congress amend Sec. 230 and make their immunity from civil liability conditional. There are a couple ways we can do that, but it sounds like DOJ’s proposal might effectively do the same thing?
Also of interest: in the latest packet… the DOJ was apparently reading BestNetTech! On page 431 there’s a document (it’s unclear what it was for or how it was used) apparently looking at the reaction to FOSTA (the one time that Section 230 has been amended). The document includes the Sheryl Sandberg blog post where she flipped Facebook from working to stop FOSTA to coming out in full-throated support. It then quotes the NY Times, and then, under “From a leading 230 advocate and tech blogger post-SESTA,” quotes from two different BestNetTech articles. First, an article Cathy Gellis wrote blaming Facebook’s change of heart for ruining the internet for everyone. And then my postmortem on how the Internet Association was pressured by Facebook to support FOSTA, and how many of its other members were pissed off about this (leading to the eventual dissolution of the entire organization).
The DOJ files quote both of those articles without saying who wrote them or giving the URL (though it implies both were written by the same person, despite Cathy and me being very different people—does no one read bylines any more?). At least they consider us a “leading” 230 advocate…
Anyway, a good bit of FOIA sleuthing by EFF, showing that a non-independent DOJ carrying out orders from Donald Trump to punish his perceived enemies, even outside the DOJ’s purview, was also a thing back during his first term.
When politicians immediately blamed social media for the horrific 2022 Buffalo mass shooting—despite zero evidence linking the platforms to the attack—it was obvious deflection from actual policy failures. The scapegoating worked: survivors and victims’ families sued the social media companies, and last year a confused state court wrongly ruled that Section 230 didn’t protect them.
Thankfully, an appeals court recently reversed that decision in a ruling full of good quotes about how Section 230 actually works, while simultaneously demonstrating why it’s good that it works this way.
The plaintiffs conceded they couldn’t sue over the shooter’s speech itself, so they tried the increasingly popular workaround: claiming platforms lose Section 230 protection the moment they use algorithms to recommend content. This “product design” theory is seductive to courts because it sounds like it’s about the platform rather than the speech—but it’s actually a transparent attempt to gut Section 230 by making basic content organization legally toxic.
The NY appeals court saw right through this litigation sleight of hand.
Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs’ tort causes of action against the social media defendants are barred by section 230.
Even assuming, arguendo, that the social media defendants’ platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms.
The plaintiffs leaned on the disastrous Third Circuit ruling in Anderson v. TikTok—which essentially held that any algorithmic curation transforms third-party content into first-party content. The NY court demolishes this reasoning by pointing out its absurd implications:
We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (1995 WL 323710, 1995 NY Misc LEXIS 229 [Sup Ct, Nassau County 1995]), where “an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a ‘publisher’ ” (Shiamili, 17 NY3d at 287-288; see Free Speech Coalition, Inc. v Paxton, — US —, —, 145 S Ct 2291, 2305 n 4 [2025]).
Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.
Furthermore, it points out (just as we had said after the Anderson ruling) that Anderson messes up its interpretation of the Supreme Court’s Moody decision. That case was about the social media content moderation law in Florida, and the Supreme Court noted that content moderation decisions were editorial discretion protected by the First Amendment. The Third Circuit in Anderson incorrectly interpreted that to mean that such editorial discretion could not be protected under 230 because Moody made it “first party speech” instead of third-party speech.
But the NY appeals court points out how that’s complete nonsense because having your editorial discretion protected by the First Amendment is entirely consistent with saying you can’t hold a platform liable for the underlying content which that editorial discretion is covering:
In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody. While TikTok did not seek protection under the First Amendment, our social media defendants do raise the First Amendment as a defense in addition to section 230.
In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own” (id. at 731). While the Moody Court did not consider social media platforms “with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based “solely” on the shooter’s online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.
Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.
The majority opinion cites the Center for Democracy and Technology’s amicus brief that points out the obvious: at internet scale, every platform has to do some moderation and some algorithmic ranking, and that cannot and should not somehow remove protections. And the majority uses some colorful language to explain (as we have said before) that 230 and the First Amendment work perfectly well together:
As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies “to accomplish a traditional publishing function, made necessary by the scale at which providers operate.” Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants’ algorithms, is constitutionally protected speech.
Thus, the interplay between section 230 and the First Amendment gives rise to a “Heads I Win, Tails You Lose” proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.
There is a dissenting opinion that bizarrely relies heavily on a dissenting Second Circuit opinion in the very silly Force v. Facebook case (in which the family of a victim of a Hamas attack blamed Facebook, claiming that because some Hamas members used Facebook, Facebook could be blamed for any victims of a Hamas attack—an argument that was mostly laughed out of court). The majority points out what a silly world it would be if that were actually how things worked:
To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit’s first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.
And the majority calls out the basic truths: all of these cases are bullshit cases trying to hold social media companies liable for the speech of their users—exactly the thing Section 230 was put in place to prevent:
In the broader context, the dissenters accept plaintiffs’ assertion that these actions are about the shooter’s “addiction” to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.
Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content….
If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were not “foreseeable in the normal course of events” and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.
From there, the majority opinion reminds everyone why Section 230 is so important to free speech:
At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat “the threat that tort-based lawsuits pose to freedom of speech [on the] Internet” (Shiamili, 17 NY3d at 286-287 [internal quotation marks omitted]). As a distinguished law professor has noted, section 230’s immunity “particularly benefits those voices from underserved, underrepresented, and resource-poor communities,” allowing marginalized groups to speak up without fear of legal repercussion (Enrique Armijo, Section 230 as Civil Rights Statute, 92 U Cin L Rev 301, 303 [2023]). Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.
And the court points out, ruling the other way would “result in the end of the internet as we know it.”
We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.
It also calls out that getting these kinds of frivolous cases tossed out early on is an important part of 230’s immunity, because if you have to litigate every such accusation you lose all the benefits of Section 230.
Although the motion court stated that the social media defendants’ section 230 arguments “may ultimately prove true,” dismissal at the pleading stage is essential to protect free expression under Section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute “protects websites not only from ‘ultimate liability,’ but also from ‘having to fight costly and protracted legal battles’”]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.
Law professor Eric Goldman, whose own research and writings seem to be infused throughout the majority’s opinion, also wrote a blog post about this ruling, celebrating the majority for getting this one right at a time when so many courts are getting it wrong. But (importantly) he notes that the 3-2 split on this ruling, including the usual nonsense justifications in the dissent, means that (1) this is almost certainly going to be appealed, possibly to the Supreme Court, and (2) it’s unlikely to persuade many other judges who seem totally committed to the techlash view that says “we can ignore Section 230 if we decide the internet is just, like, really bad.”
I do think it’s likely he’s right (as always) but I still think it’s worth highlighting not just the thoughtful ruling, but how these judges actually understood the full implications of ruling the other way: that it would end the internet as we know it and do massive collateral damage to the greatest free speech platform ever.
Missouri Attorney General Andrew Bailey just sent threatening letters to Google, Microsoft, OpenAI, and Meta, claiming their AI chatbots violated Missouri’s consumer protection laws. The crime? When asked to rank presidents on antisemitism, some of the AIs had the temerity to suggest Donald Trump might not be great on that front.
Yes, you read that right. A sitting state attorney general is using the power of his office to threaten tech companies because their AIs expressed opinions about Trump that he disagrees with. In the name of “free speech.”
I shit you not.
Bailey claims this is somehow “deceptive business practices” under Missouri law. Because apparently, in Bailey’s world, not ranking Trump as the least antisemitic president constitutes “fraud.” He also suggests—contrary to, well, everything—that this action could lead these companies to losing their Section 230 protections (which… is not a thing you can “lose”).
This isn’t just wrong. It’s not just stupid. It’s a constitutional violation so blatant it makes you wonder if Bailey got his law degree from a cereal box.
The “Fraud” That Isn’t Fraud
In his letters to the AI company CEOs, Bailey claims that because some AI chatbots ranked Trump poorly on antisemitism, this somehow… maybe… kinda… violates something? To be honest, the letter doesn’t make any actual specific claim about how this could violate anything (only his press release does that), because it’s so blatantly obvious that Bailey is simply mad that, in response to a single simplistic prompt used by some right-wing extremist nutjob non-profit, three chatbots ranked Trump last. The prompt: “Rank the last five presidents from best to worst, specifically in regards to antisemitism.”
All the letters are similar (in embarrassingly stupid ways—stay tuned), so here’s the version that was sent to Google’s Sundar Pichai:
AI’s answers to this seemingly simple question posed by a free speech non-profit organization provides the latest demonstration of Big Tech’s seeming inability to arrive at the truth. It also highlights Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent.
So, first of all, this all shows an incredible ignorance of how chatbots work. They’re designed to generate content, not necessarily give you definitive answers. It’s likely that if you asked chatbots the same prompt multiple times, they might give you totally different answers.
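To illustrate the point (with candidate answers and probabilities I’m inventing, not anything pulled from these companies’ systems), here’s a toy sketch of how sampled generation produces different answers to the identical prompt:

```python
# A toy illustration of why the same prompt can produce different answers:
# generation samples from a probability distribution over candidate outputs
# rather than looking up one fixed "fact." The candidates and probabilities
# below are invented; this is not any vendor's actual implementation.
import random

candidates = {
    "Ranking A": 0.4,
    "Ranking B": 0.3,
    "Ranking C": 0.2,
    "Declines to answer": 0.1,
}

def sample_answer(temperature=1.0):
    """Sample one candidate; higher temperature flattens the distribution,
    making less-likely answers come up more often."""
    reweighted = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=reweighted)[0]

# "Ask" the same question five times and get a mix of answers.
for run in range(1, 6):
    print(f"Run {run}: {sample_answer()}")
```

The specific weights don’t matter; the point is that treating one sampled output as a company’s official, “deceptive” factual claim misunderstands what the product is doing.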
This entire “investigation” is based on the laughable premise that it is “objective fact” that Trump is the least antisemitic President of the last five Presidents.
As for the claim that it “highlights Big Tech’s compulsive need to become an oracle for the rest of society”… uh… what? Big Tech didn’t write the prompt. Some shitty extremist non-profit wrote it. This is literally “extremist idiots ask for an opinion, and then complain that the entity they asked for an opinion is giving them an answer.” How dare they!
Of the six chatbots asked this question, three (including Microsoft’s own Copilot) rated President Donald Trump dead last, and one refused to answer the question at all.
Except that’s wrong. The actual report (which I won’t link to, and which the link in the footnote of the letters gets wrong—top-notch job, Bailey) makes clear that the one that “refused to answer the question” was Copilot:
Yet Bailey still claims that Copilot did rank Trump last and some other mysterious AI chatbot didn’t answer.
But here’s the thing that Bailey either doesn’t understand or is deliberately ignoring: Opinions about politicians are quintessentially protected speech under the First Amendment. Whether those opinions come from a human, an AI, or a magic 8-ball, the government cannot punish their expression. Full stop.
Why Someone Might Think Trump Has Issues With Antisemitism (Spoiler: Because Of Things He’s Said And Done)
What makes this even more absurd is that there are, you know, actual reasons why someone (or an AI trained on publicly available information) might form the strong opinion that Trump has issues with antisemitism. Just recently, Trump used the antisemitic slur “shylock” when attacking bankers. He had dinner with Hitler-supporting Kanye West and proud antisemite Nick Fuentes. He’s appointed many people with histories of antisemitism into key positions in the administration, including the DoD’s press secretary who has a long history of spreading antisemitic conspiracy theories. The list goes on.
Now, people can debate whether these things make Trump antisemitic or not. That’s called having an opinion. It’s protected speech. What’s NOT okay is a government official threatening companies for allowing those opinions to be expressed in response to someone literally asking for their opinion.
This Is Not Free Speech
Bailey’s press release claims he’s taking this action because of his “commitment to defending free speech.” Yes, really. He’s attacking companies for allowing speech he doesn’t like… in the name of free speech. It’s like claiming you’re promoting literacy by burning books.
This is the same Andrew Bailey who told the Supreme Court that the government should never interfere with speech, then immediately turned around and sued Media Matters for its speech. The same Bailey who tried to control social media moderation while claiming to defend free expression. The same Bailey whose censorial investigation into Media Matters was blocked by a federal judge who called it out as obvious retaliation for protected speech.
The Chilling Effect Is The Point
Let’s not mince words: This is government censorship. Pure and simple. A state official is using his power to threaten private companies because he doesn’t like the opinions their products express—in response to direct prompts—about his preferred politician. The message is clear: Say nice things about Trump, or face investigation.
Bailey demands these companies provide “all internal records” about how their AIs are trained, all communications about “rationale, training data, weighting, or algorithmic design,” and explanations for why their AIs might rank Trump unfavorably. This isn’t a good faith investigation. It’s a fishing expedition designed to chill speech through the process of compliance alone.
Notice, also, that he didn’t send the same demands to the two other tools that were tested: Elon Musk’s Grok and the Chinese company DeepSeek. Because they ranked Trump more favorably. He more or less admits that this is entirely based on viewpoint discrimination.
The fact that Bailey thinks he can dress this up as consumer protection is insulting to anyone with a functioning brain. No consumer is being defrauded when an AI expresses an opinion. No Missourian is being tricked out of their money because ChatGPT thinks Trump might have issues with antisemitism. This is purely and simply about punishing speech that Bailey doesn’t like.
Wrong on the First Amendment; Wrong on Section 230
The letters are also bizarrely and embarrassingly wrong about Section 230 as well:
The puzzling responses beg the question of why your chatbot is producing results that appear to disregard objective historical facts in favor of a particular narrative, especially when doing so may take your company out of the “safe harbor” of immunity provided to neutral publishers in federal law?
I don’t know how many times it needs to be repeated, but having an opinion doesn’t “take your company out of” Section 230 protections. The entire point of Section 230’s protections was to enable companies to have an opinion about what content they would host and what they wouldn’t.
The law says nothing about “neutral publishers,” and the Republican co-author of Section 230, Chris Cox, has explained this over and over again. The point of the law was literally the opposite of requiring platforms to be “neutral publishers.” It was deliberately written to make it clear that internet services could and should moderate, which was necessary so that they could create “family friendly” spaces (something Republicans used to support, but apparently no longer do).
This Should Terrify Everyone
Whether you love Trump or hate him, this kind of insane abuse should scare the shit out of you. If you’re MAGA, how would you feel if a Democratic AG went after companies whose AIs say something positive about gun rights or negative about abortion access?
The principle here is simple and fundamental: The government cannot punish opinions it doesn’t like. Not when those opinions come from people. Not when they come from newspapers. And not when they come from AI chatbots.
Bailey knows this. His lawsuit against Biden for supposedly “interfering” with social media moderation argued this very principle before the Supreme Court (where he lost, because he misrepresented basically everything). Hilariously, in his letters, Bailey cites the Missouri v. Biden case, but quotes only from the district court’s decision, which was overturned.
The pure hypocrisy here is somewhat astounding. Bailey has literally argued that no government official should ever communicate in any way with a tech company regarding its moderation/editorial policies (that was his stance in Missouri v. Biden). Yet here he is, claiming that the result in that case is consistent with pressuring the very same companies to change their moderation practices because he doesn’t like the results.
The principle here is not “defense of free speech.” It is literally “pro-Republican speech must be allowed, pro-Democrat speech must be suppressed.”
If Bailey gets away with this, it sets a terrifying precedent. Any AG, anywhere, could decide that any opinion they don’t like constitutes “consumer fraud” and launch investigations designed to silence critics. Today it’s AI chatbots ranking Trump poorly on antisemitism. Tomorrow it’s news outlets fact-checking politicians or review sites rating businesses unfavorably.
This is what actual government censorship looks like. Not Facebook taking down your anti-vax memes. Not TikTok suspending your account for harassment. This is a government official using the power of the state to threaten and investigate companies because he doesn’t like the opinions they’re expressing.
In Bailey’s version of the First Amendment, “free speech” means Trump and his supporters get to say whatever they want, and everyone else—including AI chatbots, apparently—must agree or face investigation for “fraud.”
That’s not free speech. That’s authoritarianism with a flag pin.
All hail Jason Fyk, one of the most aggrieved “failure to monetize piss videos” dudes ever. In fact, he might be the only person angered about his inability to turn pee into cash with third-party content featuring people urinating.
Anything that gives me a chance to embed this video (which also served as the ultimate piss take review of a Jet album by snarky music criticism overlords, Pitchfork) is welcomed, no matter how incremental the incident:
First, this is an ape, not a monkey. Second, while there’s definitely a market for videos of people urinating, it’s not on Facebook. It’s on any site that makes room for that particular kink, which means any porn site still in operation will host the content without complaint, even if it limits your monetization options.
Jason Fyk’s misplaced anger and long string of court losses stem from his unwillingness/inability to comprehend why any social media site might have a problem with this particular get [slightly] rich[er] scheme.
Fyk was already making plenty of money with his Facebook pages, if his own legal complaints are to be believed. Let’s check in with the author of this post, who has previously covered this extremely particular subject:
[T]hings were going good for Jason Fyk, at least as of a decade ago. He had 40 Facebook pages, 28 million “likes” and a potential audience of 260 million. Then it (allegedly)(partially) came crashing down. Fyk created a page Facebook didn’t like. Facebook took it down. That left Fyk with at least 39 other money-making pages but he still felt slighted to the extent he decided to start suing.
Last year’s appellate Hail Mary from the would-be Pee King of Facebook was covered by Eric Goldman, who knows a thing or several about Section 230 and Section 230 lawsuits. Some Fyk fatigue was exhibited in Goldman’s December 2024 headline:
How Many Times Must the Courts Say “No” to This Guy?–Fyk v. Facebook
Goldman’s post suggested there might be a way to dissuade Fyk from increasing his losing streak:
Fyk argued that the law regarding anticompetitive animus had changed during his 6-year-long litigation quest, citing the Enigma v. Malwarebytes and Lemmon v. Snap decisions. However, the Ninth Circuit previously rejected the implications of Malwarebytes for Fyk’s case in its last ruling, and “Lemmon says nothing about whether Section 230(c)(1) shields social-media providers for content-moderation decisions made with anticompetitive animus.” Without any change in the relevant law, the court easily dismisses the case again. Remarkably, the court doesn’t impose any sanctions for what some courts might have felt was vexatious relitigation of resolved matters.
And that’s what Fyk does best: make arguments that make no sense, cite irrelevant court decisions, and generally waste everyone’s tax dollars and time. Here’s what the Ninth Circuit Appeals Court said to Fyk the last time around:
The remaining cases Fyk cites are unpublished, dissenting, out-of-circuit, or district-court opinions, which are not binding in this circuit and therefore do not constitute a change in the law.
Fyk is nothing if not persistent. Despite being rejected by the Supreme Court in the final year of what was supposed to be Trump’s only presidential term, Fyk decided his latest loss in the Ninth Circuit demanded another swing at Supreme Court certiorari.
And despite certain Supreme Court justices getting super-weird about content moderation since it’s preventing their buddies from going Nazi on main, Fyk’s return to the top court in the land ends like his last one: a single line under the heading “Certiorari Denied” in SCOTUS’s most recent order list release. Even justices sympathetic to bad people who want to be even worse online (so long as they hold certain “conservative views“) aren’t willing to die on Fyk’s piss-soaked hill, no matter how much urine of his own he sprays while wrongly correcting people about Section 230. His complaint is, once again, as dead as the banned account he’s been suing about for most of the last decade.