The Third Circuit’s Section 230 Decision In Anderson v. TikTok Is Pure Poppycock.
from the that's-not-how-any-of-this-works dept
Last week, the U.S. Court of Appeals for the Third Circuit concluded, in Anderson v. TikTok, that algorithmic recommendations aren’t protected by Section 230. Because they’re the platforms’ First Amendment-protected expression, the court reasoned, algorithms are the platforms’ “own first-party speech,” and thus fall outside Section 230’s liability shield for the publication of third-party speech.
Of course, a platform’s decision to host a third party’s speech at all is also First Amendment-protected expression. By the Third Circuit’s logic, then, such hosting decisions, too, are a platform’s “own first-party speech” unprotected by Section 230.
We’ve already hit (and not for the last time) the key problem with the Third Circuit’s analysis. “Given … that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court declared, “it follows that doing so amounts to first-party speech under [Section] 230, too.” No, it does not. Assuming a lack of overlap between First Amendment protection and Section 230 protection is a basic mistake.
Section 230(c)(1) says that a website shall not be “treated as the publisher” of most third-party content it hosts and spreads. Under the ordinary meaning of the word, a “publisher” prepares information for distribution and disseminates it to the public. Under Section 230, therefore, a website is protected from liability for posting, removing, arranging, and otherwise organizing third-party content. In other words, Section 230 protects a website as it fulfills a publisher’s traditional role. And one of Section 230’s stated purposes is to “promote the continued development of the Internet”—so the statute plainly envisions the protection of new, technology-driven publishing tools as well.
The plaintiffs in Anderson are not the first to contend that websites lose Section 230 protection when they use fancy algorithms to make publishing decisions. Several notable court rulings (all of them unceremoniously brushed aside by the Third Circuit, as we shall see) reject the notion that algorithms are special.
The Second Circuit’s 2019 decision in Force v. Facebook is especially instructive. The plaintiffs there argued that “Facebook’s algorithms make … content more ‘visible,’ ‘available,’ and ‘usable.’” They asserted that “Facebook’s algorithms suggest third-party content to users ‘based on what Facebook believes will cause the user to use Facebook as much as possible,’” and that “Facebook intends to ‘influence’ consumers’ responses to that content.” As in Anderson, the plaintiffs insisted that algorithms are a distinct form of speech, belonging to the platform and unprotected by Section 230.
The Second Circuit was unpersuaded. Nothing in the text of Section 230, it observed, suggests that a website “is not the ‘publisher’ of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer’s interests.” In fact, it noted, the use of such tools promotes Congress’s express policy “to promote the continued development of the Internet.”
By “making information more available,” the Second Circuit wrote, Facebook was engaging in “an essential part of traditional publishing.” It was doing what websites have done “on the Internet since its beginning”—“arranging and distributing third-party information” in a manner that “forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content.” It “would turn Section 230(c)(1) upside down,” the court concluded, to hold that Congress intended to revoke Section 230 protection from websites that, whether through algorithms or otherwise, “become especially adept at performing the functions of publishers.” The Second Circuit had no authority, in short, to curtail Section 230 on the ground that by deploying algorithms, Facebook had “fulfill[ed] its role as a publisher” too “vigorously.”
As the Second Circuit recognized, it would be exceedingly difficult, if not impossible, to draw logical lines, rooted in law, around how a website arranges third-party content. What in Section 230 would enable a court to distinguish between content placed in a “for you” box, content that pops up in a newsfeed, content that appears at the top of a homepage, and content that’s permitted to exist in the bowels of a site? Nothing. It’s the wrong question. The question is not how the website serves up the content; it’s what makes the content problematic. When, under Section 230, is third-party content also a website’s first-party content? Only, the Second Circuit explained, when the website “directly and materially contributed to what made the content itself unlawful.” This is the “crucial distinction”—presenting unlawful content (protected) versus creating unlawful content (unprotected).
Perhaps you think the problem of drawing non-arbitrary lines around different forms of presentation could be solved, if only we could get the best and brightest judges working on it? Well, the Supreme Court recently tried its luck, and it failed miserably. To understand the difficulties with excluding algorithmic recommendations from Section 230, all the Third Circuit had to do was meditate on the oral argument in Gonzalez v. Google. It was widely assumed that the justices took that case because at least some of them wanted to carve algorithms out of Section 230. How hard could it be? But once the rubber hit the road, once they had to look at the matter closely, the justices had not the faintest idea how to do that. They threw up their hands, remanding the case without reaching the merits.
The lesson here is that creating an “algorithm” rule would be rash and wrong—not least because it would involve butchering Section 230 itself—and that opinions such as Force v. Facebook are correct. But instead of taking its cues from the Gonzalez non-decision, the Third Circuit looked to the Supreme Court’s newly released decision in Moody v. NetChoice.
Moody confirms (albeit, alas, in dicta) that social media platforms have a First Amendment right to editorial control over their newsfeeds. The right to editorial control is the right to decide what material to host or block or suppress or promote, including by algorithm. These are all expressive choices. But the Third Circuit homed in on the algorithm piece alone. Because Moody declares algorithms a platform’s protected expression, the Third Circuit claims, a platform does not enjoy Section 230 protection when using an algorithm to recommend third-party content.
The Supreme Court couldn’t coherently separate algorithms from other forms of presentation, and the distinguishing feature of the Third Circuit’s decision is that it never even tries to do so. Moody confirms that choosing to host or block third-party content, too, is a platform’s protected expression. Are those choices “first-party speech” unprotected by Section 230? If so—and the Third Circuit’s logic requires that result—Section 230(c)(1) is a nullity.
This is nonsense. And it’s lazy nonsense to boot. Having treated Moody’s stray lines about algorithms like live hand grenades, the Third Circuit packs up and goes home. Moody doesn’t break new ground; it merely reiterates existing First Amendment principles. Yet the Third Circuit uses Moody as one neat trick to ignore the universe of Section 230 precedent. In a footnote (for some reason, almost all the decision’s analysis appears in footnotes), the court dismisses eight appellate rulings, including Force v. Facebook, that conflict with its ruling. It doesn’t contest the reasoning of these opinions; it just announces that they all “pre-dated [Moody v.] NetChoice.”
Moody roundly rejects the Fifth Circuit’s (bananas) First Amendment analysis in NetChoice v. Paxton. In that faulty decision, the Fifth Circuit wrote that Section 230 “reflects Congress’s factual determination that Platforms are not ‘publishers,’” and that they “are not ‘speaking’ when they host other people’s speech.” Here again is the basic mistake of seeing the First Amendment and Section 230 as mutually exclusive, rather than mutually reinforcing, mechanisms. The Fifth Circuit conflated not treating a platform as a publisher, for purposes of liability, with a platform’s not being a publisher, for purposes of the First Amendment. In reality, websites that disseminate third-party content both exercise First Amendment-protected editorial control and enjoy Section 230 protection from publisher liability.
The Third Circuit fell into this same mode of woolly thinking. The Fifth Circuit concluded that because the platforms enjoy Section 230 protection, they lack First Amendment rights. Wrong. The Supreme Court having now confirmed that the platforms have First Amendment rights, the Third Circuit concluded that they lack Section 230 protection. Wrong again. Congress could not revoke First Amendment rights wherever Section 230 protection exists, and Section 230 would serve no purpose if it did not apply wherever First Amendment rights exist.
Many on the right think, quite irrationally, that narrowing Section 230 would strike a blow against the bogeyman of online “censorship.” Anderson, meanwhile, involved the shocking death of a ten-year-old girl. (A sign, in the view of one conservative judge on the Anderson panel, that social media platforms are dens of iniquity. For a wild ride, check out his concurring opinion.) So there are distorting factors at play. There are forces—a desire to stick it to Big Tech; the urge to find a remedy in a tragic case—pressing judges to misapply the law. Judges engaging in motivated reasoning is bad in itself. But it is especially alarming here, where judges are waging a frontal assault on the great bulwark of the modern internet. These judges seem oblivious to how much damage their attacks, if successful, are likely to cause. They don’t know what they’re doing.
Corbin Barthold is internet policy counsel at TechFreedom.
Filed Under: 1st amendment, 3rd circuit, anderson v. tiktok, free speech, section 230
Companies: tiktok


Comments on “The Third Circuit’s Section 230 Decision In Anderson v. TikTok Is Pure Poppycock.”
In before the “but it’s bad for Big Tech, so it must be good for the Internet!” comments. Do schools not require reading “The Monkey’s Paw” any more?
So, if the kid had gone to a link farm, clicked on a link that led to a “blackout challenge” video, and attempted the trick because of that, this appeals court would find the link farm responsible?
Re:
No, because a link farm isn’t “Big Tech,” so there is no motivation for them to come to that opinion, even though the basic facts are the same.
Section 230 was not designed for moderation by Al Gore.
Re:
Or AlGoreRhythms.
Re: Re:
At least that you can dance to.
Next up, if someone tries to emulate something they see in a movie or TV show and is hurt, the TV networks and movie theaters will be held responsible. After all, isn’t the very act of making the show available on their platform akin to stating “you should watch this?” /rollseyes
I mean, I get it: a kid died and the parents want someone to blame (that is, besides whatever blame they might bear themselves for not being aware of what their 10-year-old kid was watching online, and for not impressing upon their child that they should not try to repeat stupid things they see on TV). But I wonder: is there no attempt by these parents to hold accountable the persons who actually uploaded the challenge video?
Not sure I’d hold up SCOTUS as our best and brightest, but hey.
Re:
They’ve managed to understand how section 230 works so far, at least.
(Or at least understood enough around it to not rule against it, so far.)
It has been said enough times on Bluesky that it bears repeating: making websites liable for the existence of content based on the general knowledge that harmful content exists somewhere, which the court explicitly does when it treats knowledge of the blackout challenge generally as red-flag knowledge for the purpose of concocting a negligence theory of liability, only results in social media not looking at the content.

Whatever content you think is bad and wrong and should be banned from the internet? Websites cannot look for it, because to start looking is to invite knowledge, and therefore to invite liability. They cannot proactively check for CSAM. They cannot proactively screen for groomers. They cannot remove extreme leftist calls for a violent revolution, nor neo-Nazi calls to purge “those people”. None of it. The capitalist fiscal incentive will be to not moderate at all.
If something is protected by the First Amendment, that means you can’t be penalised for it, not that you can be. The court here says that up is down, black is white, and left is right.
Re:
“Section 230 applies to 1A conduct”
Incompetent court: “Because 1A applies, Section 230 doesn’t!”
Re: Re:
If 1A applies, then doesn’t that mean the government can’t regulate it (no need for Section 230)?
Re: Re: Re:
They can’t, yes. 230 just makes the process a whole lot less messy and expensive.
Here’s to hoping this one gets overturned soon.
I yearn for the days where the internet isn’t constantly under threat in some form.
Re:
…It IS gonna be challenged, right?
Re: Re:
Well, granted, it HAS survived like three decades so far, so that must surely mean something.
Re: Re: Re:
If this ruling somehow kills it in the end, then I can’t believe this of all things would be what ended the internet. Or at least neutered it pretty heavily for… however long it takes them to realize their mistake.
Cops arrest a Tesla vehicle,
are they going to charge it?
Section 230 causes third parties to be defrauded, misled, and to make decisions based on lies they read online. Censorship causes people to not be able to hear all sides of a debate. Like in the film “Unlawful Entry,” where the target was the husband, but the primary victim was the wife. An internet built on defamation and censorship is going to harm far more than just its targets.
Re:
I’m not sure you know what section 230 is actually intended for.
Re:
Section 230 allows for platforms to host third party content without direct liability for that content. It doesn’t cause or prevent fraud. People do. And 230 just means you can’t hold a host responsible for the fraud of a person who chose to use their platform. It’s the same as if you went into a Starbucks and posted a fraudulent message on a community bulletin board. Starbucks can’t and doesn’t have to vet your claims or make sure you’re not committing fraud before you can post a message.
The internet is built on the freedom of everyone to post and exchange messages. Without 230, only the wealthy get the freedom to do that because otherwise every host is liable for everything and won’t allow third party content. It would become television all over again.
Your comment here would not be hosted, because without 230 Mike would become liable for your bullshit takes. You’re shitting on your own freedoms.
Re:
…hallucinates nobody mentally competent, ever.
Nobody in the real world has ever been harmed by Section 230, Jhon.
Re:
“Section 230 causes third parties to be defrauded, misled, and to make decisions based on lies they read online.”
The third-party problems you refer to are “caused” by the entity that left the comment, not by the legislation put in place to mitigate the potential for lawsuit madness.
Should a business owner be liable for graffiti left upon the side of their building? Would you make business investments based upon what you read on a public restroom wall?
Just a reminder
After seeing the show that was the comment section on the previous article on this subject, and seeing how many people were beginning to spiral into anxiety: these articles are important for staying informed, but if they’re beginning to take a toll on your mental well-being, take a break. I know it’s unrelated, but I feel like it needs to be said anyway.
Re:
As someone who was amongst the panicked doomsayers in that article’s comments, you are correct.
It’s a bit of a vicious cycle. You get riled up, you can only see the worst case scenarios, you keep checking the news for something to counteract the doomsday mindset you’ve put yourself in, repeat.
Believe me, it’s so, so exhausting to go through. Better to try and just stop looking if it gets to you this bad.
Re: Re:
Update: jfc, I need to take my own advice. I’ve been stuck in a spiral for days thinking about this case.
Re: Re: Re:
Gotten so bad that I’m dreading the end of this next week, thinking it’ll mean the ruling will stick and the internet will just be doomed from then on.
I’m taking a break from this site.
History
If I remember my history correctly, Section 230 was created by Congress as a direct response to the Prodigy decision: https://en.m.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.
The Prodigy decision basically said that since Prodigy moderated the content, and was therefore aware of it, they were responsible for it.
Section 230 was created to make sure moderation mistakes did not make a company liable. Back then, Congress realized that without moderation everything would turn into Xitter. I don’t see how a court could read a law intended to protect moderation as not protecting moderation if it’s done by computer.
Re:
History repeats itself in the strangest of ways.
Serious question: what are the actual odds of this ruling being overturned? And, if not, of it rendering all forms of messengers, social media, etc. effectively dead?
I apologize for the dramatic wording, but it IS how I’m seeing news articles report on it.
I’m so scared of section 230 getting repealed.