When a school district sues social media companies claiming they can’t educate kids because Instagram filters exist, that district is announcing to the world that it has fundamentally failed at its core mission. That’s exactly what New York City just did with its latest lawsuit against Meta, TikTok, and other platforms.
The message is unmistakable: “We run the largest school system in America with nearly a million students, but we’re unable to teach children that filtered photos aren’t real or help them develop the critical thinking skills needed to navigate the modern world. So we’re suing someone else to fix our incompetence.”
This is what institutional failure looks like in 2025.
NYC first got taken in by this nonsense last year, when Mayor Adams declared all social media a health hazard and toxic waste. However, that lawsuit was rolled into the crazy, almost impossible to follow, consolidated case in California that currently has over 2,300 filings on the docket. So, apparently, NYC dropped that version and has elected to sue, sue again. With the same damn law firm, Keller Rohrback, that kicked off this trend and whose lawyers are behind a big chunk of these lawsuits.
The actual complaint is bad, and everyone behind it should feel bad. It’s also 327 pages, and there’s no fucking way I’m going to waste my time going through all of it, watching my blood pressure rise as I have to keep yelling at my screen “that’s not how any of this works.”
The complaint leads with what should be Exhibit A for why NYC schools are failing their students—a detailed explanation of adolescent brain development that perfectly illustrates why education matters:
Children and adolescents are especially vulnerable to developing harmful behaviors because their prefrontal cortex is not fully developed. Indeed, it is one of the last regions of the brain to mature. In the images below, the blue color depicts brain development.
Because the prefrontal cortex develops later than other areas of the brain, children and adolescents, as compared with adults, have less impulse control and less ability to evaluate risks, regulate emotions and regulate their responses to social rewards.
Stop right there. NYC just laid out the neurological case for why education exists. Kids have underdeveloped prefrontal cortexes? They struggle with impulse control, risk evaluation, and emotional regulation? THAT’S LITERALLY WHY WE HAVE SCHOOLS.
The entire premise of public education is that we can help children develop these exact cognitive and social skills. We teach them math because their brains can learn mathematical reasoning. We teach them history so they can evaluate evidence and understand cause and effect. We teach them literature so they can develop empathy and critical thinking.
But apparently, when it comes to digital literacy—arguably one of the most important skills for navigating modern life—NYC throws up its hands and sues instead of teaches.
This lawsuit is a 327-page confession of educational malpractice.
The crux of the lawsuit is, effectively, “kids like social media, and teachers just can’t compete with that shit.”
In short, children find it particularly difficult to exercise the self-control required to regulate their use of Defendants’ platforms, given the stimuli and rewards embedded in those platforms, and as a foreseeable and probable consequence of Defendants’ design choices tend to engage in addictive and compulsive use. Defendants engaged in this conduct even though they knew or should have known that their design choices would have a detrimental effect on youth, including those in NYC Plaintiffs’ community, leading to serious problems in schools and the community.
By this logic, basically any products that children like are somehow a public nuisance.
This lawsuit is embarrassing to the lawyers who brought it and to the NYC school system.
Take the complaint’s hysterical reaction to Instagram filters, which perfectly captures the educational opportunity NYC is missing:
Defendants’ image-altering filters cause mental health harms in multiple ways. First, because of the popularity of these editing tools, many of the images teenagers see have been edited by filters, and it can be difficult for teenagers to remain cognizant of the use of filters. This creates a false reality wherein all other users on the platforms appear better looking than they actually are, often in an artificial way. As children and teens compare their actual appearances to the edited appearances of themselves and others online, their perception of their own physical features grows increasingly negative. Second, Defendants’ platforms tend to reward edited photos, through an increase in interaction and positive responses, causing young users to prefer the way they look using filters. Many young users believe they are only attractive when their images are edited, not as they appear naturally. Third, the specific changes filters make to individuals’ appearances can cause negative obsession or self-hatred surrounding particular aspects of their appearance. The filters alter specific facial features such as eyes, lips, jaw, face shape, and face slimness—features that often require medical intervention to alter in real life
Read that again. The complaint admits that “it can be difficult for teenagers to remain cognizant of the use of filters” and that kids struggle to distinguish between edited and authentic images.
That’s not a legal problem. That’s a curriculum problem.
A competent school system would read that paragraph and immediately start developing age-appropriate digital literacy programs. Media literacy classes. Critical thinking exercises about online authenticity. Discussions about self-image and social comparison that have been relevant since long before Instagram existed.
Instead, NYC read that paragraph and decided the solution is to sue the companies rather than teach the kids.
This is educational malpractice masquerading as child protection. If you run a million-student school system and your response to kids struggling with digital literacy is litigation rather than education, you should resign and let someone competent take over.
They’re also getting sued for… not providing certain features, like age verification. Even though, as we keep pointing out, age verification is (1) likely unconstitutional outside of the narrow realm of pornographic content, and (2) a privacy and security nightmare for kids.
The broader tragedy here extends beyond one terrible lawsuit. NYC is participating in a nationwide trend of school districts abandoning their educational mission in favor of legal buck-passing. These districts, often working with the same handful of contingency-fee law firms, have decided it’s easier to blame social media companies than to do the hard work of preparing students for digital citizenship.
This represents a fundamental misunderstanding of what schools are supposed to do. We don’t shut down the world to protect children from it—we prepare children to navigate the world as it exists. That means teaching them to think critically about online content, understand privacy and security, develop healthy relationships with technology, and build the cognitive skills to resist manipulation.
Every generation gets a moral panic or two, and apparently “social media is destroying kids’ brains” is our version of moral panics of years past. We’ve seen this movie before: the waltz would corrupt young women’s morals, chess would stop kids from going outdoors, novels would rot their brains on useless fiction, bicycles would cause moral decay, radio would destroy family conversation, pinball machines would turn kids into delinquents, television would make them violent, comic books would corrupt their minds, and Dungeons & Dragons would lead them to Satan worship.
Society eventually calmed down after each of those, and we now look back on those moral panics as silly, hysterical overreactions. You would hope that a modern education system would take note and treat each new form of media as a learning opportunity.
But faced with social media, America’s school districts have largely given up on education and embraced litigation. That should terrify every parent more than any Instagram filter ever could.
The real scandal isn’t that social media exists. It’s that our schools have become so risk-averse and educationally bankrupt that they’ve forgotten their core purpose: preparing young people to be thoughtful, capable adults in the world they’ll actually inherit.
Look, if you want to cut to the chase: the lawyers working for Google and Meta know that the MAGA world is very, very stupid and very, very gullible, and it’s very, very easy to tell them something that they know will be interpreted as a “victory” while actually signaling something very, very different. You could just reread my analysis of Meta and Mark Zuckerberg’s silly misleading caving to Rep. Jim Jordan last year, because this is more of the same.
This time it’s Google doing the caving, in a manner it absolutely knows doesn’t admit the things Jordan and the MAGAverse will insist it admits. If anything, it admits the reverse. Specifically, Google sent a letter replying to some Jim Jordan subpoenas, which Jordan is claiming as a victory for free speech because the letter says things he can misrepresent as such.
Lots of very silly people (including Jordan) have been running around all week falsely claiming that Google has “admitted” that the Biden administration illegally censored people, and that, in response, it’s now reinstating accounts of people who were “unfairly censored.”
To be fair, this is what Google wants Jim Jordan and MAGA people to believe because it feeds into their pathetic victim narrative.
But it’s not what Google actually said for people who can read (and comprehend basic English). I won’t go through the entire letter, but let’s cover the supposed admission of censorship from the Biden admin:
Senior Biden Administration officials, including White House officials, conducted repeated and sustained outreach to Alphabet and pressed the Company regarding certain user-generated content related to the COVID-19 pandemic that did not violate its policies. While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the Company to remove non-violative user-generated content.
It is not new, nor is it all that controversial, that the Biden administration did some outreach regarding COVID-19 content. But note what Google says here: “the Company continued to develop and enforce its policies independently.” In other words, Biden folks reached out, Google said “thanks, but that doesn’t violate our policies, so we’re not doing anything about it.”
Now, we can say that the government shouldn’t be in the business of telling private companies anything at all, but that’s a bit rich coming from the MAGA world that spent the last week focused on getting Disney to “moderate” Jimmy Kimmel out of a fucking job with actual threats of punishment if they failed to do so.
And that, once again, is the key issue: as the Supreme Court has long held, government officials are allowed to use “the bully pulpit” to try to persuade companies as long as there is no implicit or explicit threat. Some will argue that the message here must have come with an implicit threat, and that’s an area where people can debate and differ, though the fact that Google flat out admits it basically told the Biden admin “no” seems to undermine the claim that any threat was involved.
As online platforms, including Alphabet, grappled with these decisions, the Administration’s officials, including President Biden, created a political atmosphere that sought to influence the actions of platforms based on their concerns regarding misinformation.
Again, this is not new. The Biden admin did this publicly and many of us called them out for it. The question is whether or not they reached the level of coercion.
Meanwhile, this is either accidental irony, or Google’s lawyers know that Jim Jordan would totally miss the sarcasm included in this next bit:
It is unacceptable and wrong when any government, including the Biden Administration, attempts to dictate how the Company moderates content, and the Company has consistently fought against those efforts on First Amendment grounds.
Why do I say it’s ironic? Because Jim Jordan’s subpoenas and demands to Google are very much a government official attempting to dictate how Google moderates content (in that he wants them to not moderate content he favors).
Indeed, right after this, Google starts groveling about how it’s so, so sorry that YouTube took moderation actions on conspiracy theory and nonsense peddler accounts that Jordan likes and thus will begin to reinstate them.
Yes, in the very letter where Google tells Jim Jordan “it’s wrong for the government to tell us how to moderate,” it also says “thank you for telling us how to moderate, we are following your demands.” Absolutely incredible.
Perhaps even more incredible is the discussion of fact checking. The company mentions that it doesn’t employ third-party fact checkers for YouTube to review content for moderation purposes:
In contrast to other large platforms, YouTube has not operated a fact-checking program that identifies and compensates fact-checking partners to produce content to support moderation. YouTube has not and will not empower fact-checkers to take action on or label content across the Company’s services.
Which in turn led Jordan to crow about how this was a huge success, posting:
But that’s not all. YouTube is making changes to its platform to prevent future censorship. YouTube is committing to the American people that it will NEVER use outside so-called “fact-checkers” to censor speech. No more telling Americans what to believe and not believe.
But fact checking is not “censorship.” It’s literally “more speech.” It’s not telling anyone what to believe or what not to believe. It’s providing additional information. You know, that whole “marketplace of ideas” that they keep telling us is so important.
Then, Jordan crowed directly about how his own efforts caused YouTube to reinstate people. In other words, in the same letter that he insists supports him and which says it is “unacceptable and wrong” for government officials “to dictate how the Company moderates content” he excitedly claims credit for dictating how YouTube should moderate content:
“Because of our work.” So you are flat out admitting that you have told Google how to moderate, and it is complying by reinstating accounts that you wanted them to reinstate.
That certainly would raise questions about unconstitutional jawboning if we didn’t live in a world in which it has been decided “it’s okay when Republicans do it” but not okay when Democrats do something much less direct or egregious.
It’s almost like there’s a double standard, and it’s very much like Google is willing to suck up to MAGA folks to take advantage of that double standard… just as Mark Zuckerberg did.
It’s always fascinating to watch supposed “free speech warriors” reveal their true colors the moment they get a tiny bit of power. We’ve been covering the ongoing saga of various COVID contrarians who spent years falsely claiming they were “censored” by the Biden administration, only to see the Supreme Court definitively reject those claims in Murthy v. Missouri.
Now that some of these same people are running health agencies under Trump, we’re getting to see what actual censorship looks like—and surprise, surprise, it’s coming from the very people who complained the loudest about being silenced.
The latest example comes courtesy of Dr. Vinay Prasad, now the FDA’s top vaccine regulator, who used copyright claims to shut down a YouTube channel run by Dr. Jonathan Howard, a neurologist and psychiatrist who has been documenting and critiquing the statements of what he calls our “current Medical Establishment.”
Howard’s channel served an important function: documenting the public statements of people who are now in positions of significant power over American health policy. As Howard explains in a detailed blog post on Science-Based Medicine:
A core goal of my work has been to preserve the words of our current Medical Establishment. While accurately remembering the past is valuable in its own right, we need to remember their prior pronouncements to judge their current credibility, even if they don’t want us to do that. A doctor who fluffed RFK Jr. or spread blatant disinformation about a deadly virus is unlikely to be trustworthy about anything.
To that end, I started a YouTube channel last year, which served as a repository of what our current Medical Establishment said. I had accumulated about 350 videos, almost all of which were short clips of famous doctors saying absurd things- that herd immunity had arrived in the spring of 2021 and that RFK Jr. was an honest broker about vaccines, for example. I appeared in just a handful of the videos, as a small face offering commentary in the corner, though I hadn’t made a new video all year. I never promoted the channel and made no money from it.
The channel was small—just 256 subscribers—but the videos were primarily clips of public statements, interviews, and social media posts that Howard used to support his critiques and articles.
Jonathan Howard, a neurologist and psychiatrist in New York City, received an email from YouTube on Friday night, which stated that Vinay Prasad, who is the FDA’s top vaccine regulator, had demanded the removal of six videos of himself from Howard’s YouTube channel.
Howard’s entire channel has now been deleted by YouTube, which cited copyright infringement.
Here’s where the hypocrisy becomes unmistakable. This is the same Vinay Prasad who has spent years positioning himself as a victim of censorship—someone who built his brand on being a “free speech” advocate. Howard notes with some irony that he won’t do all the “I am being censored!” nonsense that Prasad and Jay Bhattacharya, now his colleague as a government health official, have spent years doing:
To be clear, the loss of my YouTube channel is a trivial thing, and I promise not to make it the center of my identity. I won’t make a Supreme Court case out of it or record lengthy videos about it – Prasad’s Lecture is Cancelled from ACCP Conference b/c Online Haters. I won’t sit down for self-pitying interviews with Bari Weiss, the Wall Street Journal, and Reason Magazine. I’ve been censored before, and I am not dramatic about such things. After all, my YouTube channel had 256 subscribers and its videos were typically seen by dozens of people, DOZENS! Its loss is a speck of dust compared to what RFK Jr. is destroying, and on one level, it is both really funny and pathetic that Dr. Prasad would care so much about it.
Howard also notes that Prasad seems to have zero problem with anti-vaxxers using the very same videos of Prasad, showing how this is clearly selective enforcement (i.e., a government official engaging in viewpoint discrimination to shut down Howard’s attempt to call out Prasad’s nonsense):
From his podcast Gmail account, Dr. Prasad filed a formal complaint that video clips he had posted to Twitter had been uploaded to YouTube. YouTube agreed and killed my channel. Dr. Prasad was not bothered by someone sharing his videos on principle. For years, he’s been happy to let anti-vaxx disinformation accounts share countless clips of his vulgar revenge fantasies. Dr. Prasad only objected when I, someone who exposed his disinformation, sought to preserve and share these exact same videos.
This is textbook censorial behavior disguised as copyright enforcement. Copyright has long been the tool of choice for those looking to silence critics, and this appears to be a classic example. Howard’s use of these clips—for commentary, criticism, and documentation—would almost certainly qualify as fair use under copyright law. As Howard points out, he’s now reuploading videos with his own commentary included, which makes the fair use case even stronger.
But the copyright angle is really beside the point here. The real story is the breathtaking hypocrisy of someone who built his brand on being a “free speech” advocate suddenly using legal threats to silence a critic the moment he gets into government.
This hypocrisy extends throughout the Trump health apparatus. As I detailed last year, Jay Bhattacharya—now head of the NIH—spent years falsely claiming he was “censored” by social media platforms, when the reality was that his content was simply being fact-checked or receiving less algorithmic promotion. Meanwhile, RFK Jr., now Secretary of Health and Human Services, has filed numerous (failed) lawsuits making similarly bogus censorship claims.
It’s almost as if all these “health” professionals who spent years falsely claiming they were censored because they received some (well deserved) pushback and criticism for their highly questionable arguments were really just itching to censor their critics all along.
Howard captures this perfectly:
It’s no secret that the administration in which Dr. Prasad proudly serves – Vinay was not anti-Trump – is censoring scientists and dissident voices. The termination of my channel is a small part of that process, and so it’s OK to be clear about what seems to have happened here.
The pattern is clear: these individuals spent the Biden administration crying about imaginary censorship while building their brands as free speech martyrs. Now that they’re in power, they’re showing us what actual censorship looks like—using copyright claims, legal threats, and government pressure to silence critics and preserve their own narratives.
It’s worth noting that Howard isn’t giving up. As he explains:
Unlike Dr. Prasad, I have no problem with anyone sharing my work. Feel free! Much of it merely collects and curates what Dr. Prasad said the past 5 years, including in the erased videos. None of this is lost, and I think it’s very important that we don’t forget it. And even though my YouTube channel (RIP, 2024-2025) was assassinated by my own government, there are many ways to remind the world of what he said. I’ve already created another YouTube channel, and this time every video will come with my commentary. It will be much better than the one that got erased, and hopefully it will be more widely viewed.
This is exactly the right response. When would-be censors try to use copyright as a cudgel, the answer isn’t to be silenced—it’s to make the fair use case even stronger by adding more commentary and criticism.
The broader lesson here is one we’ve seen repeatedly: the loudest voices falsely complaining about “censorship” are often the first to engage in actual censorship when given the opportunity. These COVID contrarians built their entire brands on being silenced martyrs, but the moment they gained real power, they immediately started trying to silence their critics.
Howard puts it best:
Americans do not need our government’s permission to remember their words and inform the world about what our public officials said. I refuse to let Dr. Prasad be silenced or censored. He should extend me the same courtesy. He is a powerful government official. I am just a private citizen seeking to hold my government accountable.
That’s the real issue here: a government official using legal threats to try to silence a private citizen who is documenting his public statements. It’s exactly the kind of behavior that Prasad and his colleagues claimed to oppose when it was happening to them (even though it wasn’t actually happening to them).
The fact that this is being done under the guise of copyright law doesn’t make it any less censorial—it just makes it more cowardly. At least when governments engage in direct censorship, they’re being honest about what they’re doing. Using copyright claims to silence critics is censorship with a fig leaf, and it’s particularly galling when it comes from people who built their reputations complaining about being censored.
The hypocrisy here is undeniable. These are the same people who spent years claiming that any fact-checking or algorithmic demotion was “censorship,” now using actual legal threats to silence critics. They demanded that social media platforms give them unlimited reach and immunity from criticism, while simultaneously working to eliminate criticism of their own statements.
Howard’s documentation project is more important than ever, precisely because people like Prasad are now in positions of significant power over American health policy. The public has a right to know what these officials said before they gained power, and they have a right to hold them accountable for those statements.
The fact that Prasad is trying to memory-hole his own public statements should tell us everything we need to know about how confident he is in defending them. If your public statements can’t withstand scrutiny, perhaps the problem isn’t with the people scrutinizing them.
One of the more frustrating things about content streaming has been how quickly we went from having a conversation about cord-cutting to the realization that all of the streaming services that enabled said cord-cutting have morphed into the very cable providers that people wanted to escape. You can see this in a variety of ways. More packaged bundles that include content people don’t actually want. Stupid local blackouts of content, particularly when it comes to live sports. Subscription fees that rapidly shift higher with no value add for the customer. And, of course, carriage disputes.
I could write up an explanation of what these kinds of disputes are, but Karl Bode put it together so beautifully that I’ll just borrow his words instead.
For years cable TV has been plagued by retrans feuds and carriage disputes that routinely end with users losing access to TV programming they pay for. Basically, broadcasters will demand a rate hike in new content negotiations, the cable TV provider will balk, and then each side blames the other for failing to strike a new agreement on time like reasonable adults. That repeatedly results in content being blacked out for months, without consumers ever getting a refund. After a few months, the two sides strike a new confidential deal, your bill goes up, and nobody much cares how that impacts the end user. Rinse, wash, repeat.
The only thing I’d really want to add to that is how the blame game played by both sides is typically directed at the actual customer. The goal is usually to damage the other side’s goodwill with the public by calling them greedy or whatever, or sometimes to get the public to join the pressure campaign themselves by calling one side or the other to complain. It’s a rather remarkable thing to watch two wealthy entities use their own customers as pawns in a chess battle with one another over just how much money each side will make from those same pawns.
Well, we’re at it again, it seems, this time with YouTube TV and the Fox network at odds over carriage fees. And the timing, on the eve of the NFL season, isn’t lost on anyone.
YouTube TV could soon lose access to Fox channels, it announced on its official blog, mere days before the 2025 NFL season begins. It warned users that it’s actively negotiating with Fox now that the renewal date for their partnership is approaching, but Fox is allegedly asking for an amount “far higher than what partners with comparable content offerings receive.” YouTube TV says it’s aiming to reach an agreement that “reflects the value of their content and is fair for both sides” without the service having to raise its prices to be able to offer Fox channels.
If both sides aren’t able to come to an agreement by 5PM Eastern time on August 27, subscribers will no longer be able to access all Fox news and business programs, as well as all sporting events (like NFL games) broadcast on Fox channels. The content from the channels saved in their library will also disappear. In case YouTube TV fails to reach a deal with Fox and the network’s channels become unavailable for “an extended period of time,” it will give subscribers a $10 credit.
Who knows what an “extended period of time” means, but I’ll say that the offer of any kind of credit is better than what usually occurs. As for how out of whack Fox’s ask is, I don’t have those details, but I’m not terribly surprised that it’s unpalatable to YouTube. Between the leverage the network has as football season is about to start, the stranglehold Fox News has on about a third of the country’s cable news viewership, and the fact that Fox is probably still feeling the pain of a nearly $800 million settlement over its defamatory news content, well, I imagine the ask is quite large.
But not so large that YouTube couldn’t absorb it if it wanted to. Instead, both sides are doing some mild public sniping and PR campaigning against each other, while the customer is left to await their fate.
If we were going to keep doing this sort of thing, what was the point of cutting the cord to begin with?
When politicians immediately blamed social media for the horrific 2022 Buffalo mass shooting—despite zero evidence linking the platforms to the attack—it was obvious deflection from actual policy failures. The scapegoating worked: survivors and victims’ families sued the social media companies, and last year a confused state court wrongly ruled that Section 230 didn’t protect them.
Thankfully, an appeals court recently reversed that decision in a ruling full of good quotes about how Section 230 actually works, while simultaneously demonstrating why it’s good that it works this way.
The plaintiffs conceded they couldn’t sue over the shooter’s speech itself, so they tried the increasingly popular workaround: claiming platforms lose Section 230 protection the moment they use algorithms to recommend content. This “product design” theory is seductive to courts because it sounds like it’s about the platform rather than the speech—but it’s actually a transparent attempt to gut Section 230 by making basic content organization legally toxic.
The NY appeals court saw right through this litigation sleight of hand.
Here, it is undisputed that the social media defendants qualify as providers of interactive computer services. The dispositive question is whether plaintiffs seek to hold the social media defendants liable as publishers or speakers of information provided by other content providers. Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs’ tort causes of action against the social media defendants are barred by section 230.
Even assuming, arguendo, that the social media defendants’ platforms are products (as opposed to services), and further assuming that they are inherently dangerous, which is a rather large assumption indeed, we conclude that plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms.
The plaintiffs leaned on the disastrous Third Circuit ruling in Anderson v. TikTok—which essentially held that any algorithmic curation transforms third-party content into first-party content. The NY court demolishes this reasoning by pointing out its absurd implications:
We do not find Anderson to be persuasive authority. If content-recommendation algorithms transform third-party content into first-party content, as the Anderson court determined, then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (1995 WL 323710, 1995 NY Misc LEXIS 229 [Sup Ct, Nassau County 1995]), where “an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a ‘publisher’ ” (Shiamili, 17 NY3d at 287-288; see Free Speech Coalition, Inc. v Paxton, — US —, —, 145 S Ct 2291, 2305 n 4 [2025]).
Although Anderson was not a defamation case, its reasoning applies with equal force to all tort causes of action, including defamation. One cannot plausibly conclude that section 230 provides immunity for some tort claims but not others based on the same underlying factual allegations. There is no strict products liability exception to section 230.
Furthermore, the court points out (just as we said after the Anderson ruling) that Anderson misreads the Supreme Court’s decision in the Moody case. That case was about the social media content moderation law in Florida, and the Supreme Court noted that content moderation decisions are editorial discretion protected by the First Amendment. The Third Circuit in Anderson incorrectly interpreted that to mean such editorial discretion could not be protected under 230, because Moody made it “first party speech” instead of third party.
But the NY appeals court points out how that’s complete nonsense because having your editorial discretion protected by the First Amendment is entirely consistent with saying you can’t hold a platform liable for the underlying content which that editorial discretion is covering:
In any event, even if we were to follow Anderson and conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech (“expressive activity” as described by the Third Circuit) is protected by the First Amendment under Moody. While TikTok did not seek protection under the First Amendment, our social media defendants do raise the First Amendment as a defense in addition to section 230.
In Moody, the Supreme Court determined that content-moderation algorithms result in expressive activity protected by the First Amendment (see 603 US at 744). Writing for the majority, Justice Kagan explained that “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own” (id. at 731). While the Moody Court did not consider social media platforms “with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (id. at 736 n 5 [emphasis added]), our plaintiffs do not allege that the algorithms of the social media defendants are based “solely” on the shooter’s online actions. To the contrary, the complaints here allege that the social media defendants served the shooter material that they chose for him for the purpose of maximizing his engagement with their platforms. Thus, per Moody, the social media defendants are entitled to First Amendment protection for third-party content recommended to the shooter by algorithms.
Although it is true, as plaintiffs point out, that the First Amendment views expressed in Moody are nonbinding dicta, it is recent dicta from a supermajority of Justices of the United States Supreme Court, which has final say on how the First Amendment is interpreted. That is not the type of dicta we are inclined to ignore even if we were to disagree with its reasoning, which we do not.
The majority opinion cites the Center for Democracy and Technology’s amicus brief, which points out the obvious: at internet scale, every platform has to do some moderation and some algorithmic ranking, and that cannot and should not somehow remove protections. And the majority uses some colorful language to explain (as we have said before) that 230 and the First Amendment work perfectly well together:
As the Center for Democracy and Technology explains in its amicus brief, content-recommendation algorithms are simply tools used by social media companies “to accomplish a traditional publishing function, made necessary by the scale at which providers operate.” Every method of displaying content involves editorial judgments regarding which content to display and where on the platforms. Given the immense volume of content on the Internet, it is virtually impossible to display content without ranking it in some fashion, and the ranking represents an editorial judgment of which content a user may wish to see first. All of this editorial activity, accomplished by the social media defendants’ algorithms, is constitutionally protected speech.
Thus, the interplay between section 230 and the First Amendment gives rise to a “Heads I Win, Tails You Lose” proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, per Force and M.P., or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson. Of course, section 230 immunity and First Amendment protection are not mutually exclusive, and in our view the social media defendants are protected by both. Under no circumstances are they protected by neither.
There is a dissenting opinion that bizarrely relies heavily on a dissenting Second Circuit opinion in the very silly Force v. Facebook case (in which the family of a victim of a Hamas attack blamed Facebook, claiming that because some Hamas members used Facebook, Facebook could be blamed for any victims of a Hamas attack—an argument that was mostly laughed out of court). The majority points out what a silly world it would be if that were actually how things worked:
To the extent that Chief Judge Katzmann concluded that Facebook’s content-recommendation algorithms similarly deprived Facebook of its status as a publisher of third-party content within the meaning of section 230, we believe that his analysis, if applied here, would ipso facto expose most social media companies to unlimited liability in defamation cases. That is the same problem inherent in the Third Circuit’s first-party/third-party speech analysis in Anderson. Again, a social media company using content-recommendation algorithms cannot be deemed a publisher of third-party content for purposes of libel and slander claims (thus triggering section 230 immunity) and not at the same time a publisher of third-party content for strict products liability claims.
And the majority calls out the basic truth: all of these cases are bullshit cases trying to hold social media companies liable for the speech of their users—exactly the thing Section 230 was put in place to prevent:
In the broader context, the dissenters accept plaintiffs’ assertion that these actions are about the shooter’s “addiction” to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos.
Instead, plaintiffs’ theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter’s radicalization. Given that plaintiffs’ allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are “inextricably intertwined” with the social media defendants’ role as publishers of third-party content….
If plaintiffs’ causes of action were based merely on the shooter’s addiction to social media, which they are not, they would fail on causation grounds. It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were “not foreseeable in the normal course of events” and therefore broke the causal chain (Tennant v Lascelle, 161 AD3d 1565, 1566 [4th Dept 2018]; see Turturro v City of New York, 28 NY3d 469, 484 [2016]). It was the shooter’s addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent.
From there, the majority opinion reminds everyone why Section 230 is so important to free speech:
At stake in these appeals is the scope of protection afforded by section 230, which Congress enacted to combat “the threat that tort-based lawsuits pose to freedom of speech [on the] Internet” (Shiamili, 17 NY3d at 286-287 [internal quotation marks omitted]). As a distinguished law professor has noted, section 230’s immunity “particularly benefits those voices from underserved, underrepresented, and resource-poor communities,” allowing marginalized groups to speak up without fear of legal repercussion (Enrique Armijo, Section 230 as Civil Rights Statute, 92 U Cin L Rev 301, 303 [2023]). Without section 230, the diversity of information and viewpoints accessible through the Internet would be significantly limited.
And the court points out, ruling the other way would “result in the end of the internet as we know it.”
We believe that the motion court’s ruling, if allowed to stand, would gut the immunity provisions of section 230 and result in the end of the Internet as we know it. This is so because Internet service providers who use algorithms on their platforms would be subject to liability for all tort causes of action, including defamation. Because social media companies that sort and display content would be subject to liability for every untruthful statement made on their platforms, the Internet would over time devolve into mere message boards.
It also calls out how getting these kinds of frivolous cases tossed out early on is an important part of Section 230’s immunity, because if you have to litigate every such accusation, you lose all the benefits of Section 230.
Although the motion court stated that the social media defendants’ section 230 arguments “may ultimately prove true,” dismissal at the pleading stage is essential to protect free expression under Section 230 (see Nemet Chevrolet, Ltd., 591 F3d at 255 [the statute “protects websites not only from ‘ultimate liability,’ but also from ‘having to fight costly and protracted legal battles’ ”]). Dismissal after years of discovery and litigation (with ever mounting legal fees) would thwart the purpose of section 230.
Law professor Eric Goldman, whose own research and writings seem to be infused throughout the majority’s opinion, also wrote a blog post about this ruling, celebrating the majority for getting this one right at a time when so many courts are getting it wrong. But (importantly) he notes that the 3-2 split on this ruling, along with the usual nonsense justifications in the dissent, means that (1) this is almost certainly going to be appealed, possibly to the Supreme Court, and (2) it’s unlikely to persuade many other judges who seem totally committed to the techlash view that says “we can ignore Section 230 if we decide the internet is just, like, really bad.”
I do think it’s likely he’s right (as always) but I still think it’s worth highlighting not just the thoughtful ruling, but how these judges actually understood the full implications of ruling the other way: that it would end the internet as we know it and do massive collateral damage to the greatest free speech platform ever.
It’s funny just how often the actions of so-called “strong men” actually show just how scared and fragile they are. And if you want a more specific example of what I’m talking about, you can typically tell when a national government is feeling scared or weak, because that usually comes along with restrictions on a free and open internet. Worried that your horror-show of a government from a human rights standpoint might generate pushback and protests? Construct the Great Firewall of China. Concerned that social media sites might serve as rallying points against your election in Brazil? Have the actual police actually police the internet for anything you don’t like.
Launched a war of aggression against your neighbor because you thought you could annex an entire country in a few months, only to find out that you’re in a prolonged war of attrition that your own people might get severely tired of? Well, then you do what the Kremlin did, and iteratively crack down on internet access and freedoms over several years. And Putin isn’t stopping.
Russia appears to be degrading performance or access to some targeted internet sites, such as YouTube, while also building out state-controlled alternatives to Western technology, which will inevitably be banned.
YouTube videos that won’t load. A visit to a popular independent media website that produces only a blank page. Cellphone internet connections that are down for hours or days. While it’s still possible to circumvent restrictions by using virtual private network apps, those are routinely blocked, too.
Authorities further restricted internet access this summer with widespread shutdowns of cellphone internet connections and adopting a law punishing users for searching for content they deem illicit.
They also are threatening to go after the popular WhatsApp platform while rolling out a new “national” messaging app that’s widely expected to be heavily monitored.
So it’s a three-pronged approach, designed purely to silence dissent and prevent the distribution of anti-government speech online, as well as any coordination from opposition groups that could occur there. The banning or degradation of websites controls the information Russian citizens will see, the attacks on VPNs prevent them from getting around that control, and the mandated use of state-controlled messaging apps ensures that Russians won’t try to coordinate dissent online or, if they mistakenly do, provides the Russian government with a way to monitor that activity.
Starting next month, all new smartphones sold in Russia will come pre-installed with MAX, a government-developed messaging and services app. Officials describe MAX as Russia’s answer to China’s WeChat: an all-in-one platform for chatting, posting updates, making payments, and accessing government services.
The Kremlin has already begun testing MAX in schools, with authorities hinting it could soon become mandatory for teachers, parents, and even students. Experts warn that this level of integration will make MAX unavoidable in everyday Russian life.
Deputy head of Russia’s IT committee, Anton Gorelkin, recently warned WhatsApp to “prepare to leave the Russian market.” With nearly 100 million Russian users, losing WhatsApp would mark a massive shift in how people communicate.
Make no mistake, this is all a symptom of fear. Russia wouldn’t try to silence online information unless Russians were hungry for it and the Russian government wanted to keep it from them. Ditto when it comes to the use of VPNs. And the Kremlin wouldn’t be trying to control how Russian citizens communicate online unless it feared that communication would be used to threaten government control. None of this represents strength or confidence. It’s fear.
Maybe this will work if given a long enough timeline to play out, though I doubt it. The internet tends to route around censorship, as the saying goes, and I’m not sure the Russian people have been conditioned to accept the government’s word in the same way the Chinese people may have been.
But what I do know is that Putin and his government aren’t taking these actions because everything is going so well for them.
Last fall, heavily influenced by Jonathan Haidt’s extremely problematic book, Australia announced that it was banning social media for everyone under the age of 16. This was already a horrifically stupid idea—the kind of policy that sounds reasonable in a tabloid headline but crumbles under any serious scrutiny. Over and over again, studies have found that social media is neither good nor bad for most teens. It’s actively good for some—especially those in need of finding community or like-minded individuals. And it’s not so great for a small group of kids, though the evidence there suggests it’s worst for those dealing with untreated mental health issues, who end up using social media as a substitute for getting help.
There remains little to no actual evidence that an outright ban will be helpful, and plenty to suggest it will be actively harmful to many.
But now Australia has decided to double down on the stupid, announcing that YouTube will be included in the ban. This escalation reveals just how disconnected from reality this entire policy framework has become. We’ve gone from “maybe we should protect kids from social media” to “let’s ban children from accessing one of the world’s largest repositories of educational content.”
Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.
The decision came after the internet regulator urged the government last week to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site.
This is painfully stupid and ignorant. The claim that 37% of minors reported seeing harmful content is also… meaningless without a lot more context and details. What counts as “harmful”? A swear word? Political content their parents disagree with? A video explaining evolution? What was the impact? Is this entirely self-reported? What controls were there? Just saying 37% is kind of meaningless without the details.
This is vibes-based policymaking dressed up in statistics. You could probably get 37% of kids to report “harmful content” on PBS Kids if you asked them vaguely enough. The fact that Australia’s internet regulator is using this kind of methodological garbage to reshape internet policy tells you everything you need to know about how seriously they’ve thought this through.
But also, YouTube is not just effectively the equivalent of television for teens today—it’s often far superior to traditional television because it’s not gatekept by media conglomerates with their own agendas. The idea that you should need to be 16 years old to watch some YouTube programs is beyond laughable, especially given the amount of useful educational content on YouTube. These days there are things like Complexly, Khan Academy, Mark Rober, and plenty of other educational content that kids love and which lives on YouTube. Kids are learning calculus from 3Blue1Brown, exploring history through Crash Course, and getting better science education from YouTube creators than from most traditional textbooks. This isn’t just entertainment—it’s democratized education that bypasses the gatekeeping of traditional media entirely.
This isn’t just unworkable—it’s the construction of a massive censorship infrastructure that will inevitably be used for purposes far beyond “protecting children.” Once you’ve built the system to block kids from YouTube, you’ve built the system to block anyone from anything. And that system will be irresistible to future governments with different ideas about what content people need to be “protected” from.
And the Australian government already knows that age verification tech is a privacy and security nightmare. They admitted as much two years ago.
Of course, kids will figure out ways around it anyway. VPNs exist. Older friends exist. Parents who aren’t idiots exist—and they’ll help their kids break this law. The only thing this accomplishes is teaching an entire generation that their government’s laws are arbitrary, unenforceable, and fundamentally disconnected from reality. It’s teaching kids to have less respect for government.
This isn’t happening in a vacuum, either. Australia is part of a broader global trend of governments using “protect the children” rhetoric as cover for internet control. The UK’s porn age verification disaster, the US Kids Online Safety Act, similar proposals across Europe—they all follow the same playbook. Identify a genuine concern (kids sometimes see stuff online that isn’t great for them), propose a solution that sounds reasonable in a headline (age limits!), then implement it through surveillance and censorship infrastructure that can be repurposed for whatever moral panic comes next.
The end result will be that Australia has basically taught a generation of teenagers not to trust the government, that their internet regulators are completely out of touch, and that laws are stupid. But it goes deeper than that. This kind of blatantly unworkable policy doesn’t just breed contempt for specific laws—it undermines the entire concept of legitimate governance. When laws are this obviously disconnected from technological and social reality, it signals that the people making them either don’t understand what they’re regulating or don’t care about whether their policies actually work. It’s difficult to see how that benefits anyone at all.
Microsoft-owned LinkedIn has quietly joined the parade of tech giants rolling back basic protections for transgender users, removing explicit prohibitions against deadnaming and misgendering from its hate speech policies this week. The change, first spotted by the nonprofit Open Terms Archive, eliminates language that previously listed “misgendering or deadnaming of transgender individuals” as examples of prohibited hateful content.
LinkedIn removed transgender-related protections from its policy on hateful and derogatory content. The platform no longer lists “misgendering or deadnaming of transgender individuals” as examples of prohibited conduct. While “content that attacks, denigrates, intimidates, dehumanizes, incites or threatens hatred, violence, prejudicial or discriminatory action” is still considered hateful, addressing a person by a gender and name they ask not be designated by is not anymore.
Similarly, the platform removed “race or gender identity” from its examples of inherent traits for which negative comments are considered harassment. That qualification of harassment is now kept only for behaviour that is actively “disparaging another member’s […] perceived gender”, not mentioning race or gender identity anymore.
The move is particularly cowardly because LinkedIn made the change with zero public announcement or explanation. When pressed by a reporter at The Advocate, the company offered the classic corporate non-answer: “We regularly update our policies” and insisted that “personal attacks or intimidation toward anyone based on their identity, including misgendering, violates our harassment policy.”
But here’s the thing: if your policies haven’t actually changed, why remove the explicit protections? Why make it harder for users and moderators to understand what’s prohibited? The answer is as obvious as it is pathetic: LinkedIn is preemptively capitulating to political pressure in this era of MAGA culture war.
This follows the now-familiar playbook we’ve seen from Meta, YouTube, and others. Meta rewrote its policies in January to allow content calling LGBTQ+ people “mentally ill” and portraying trans identities as “abnormal.” YouTube quietly scrubbed “gender identity” from its hate speech policies, then had the audacity to call it “regular copy edits.” Now LinkedIn is doing the same cowardly dance.
What makes this particularly infuriating is the timing. These companies aren’t even waiting for actual government threats. They’re just assuming that sucking up to the Trump administration’s anti-trans agenda will somehow protect them from regulatory scrutiny. It’s the corporate equivalent of rolling over and showing your belly before anyone even raises their voice.
And it won’t help. The Trump administration will still target them and demand more and more, knowing that these companies will just roll over again.
And let’s be clear about what deadnaming and misgendering actually are: they’re deliberate acts of dehumanization designed to erase transgender people’s identities and make them feel unwelcome in public spaces. When platforms explicitly protect against these behaviors, it sends a message that trans people belong in these spaces. When they quietly remove those protections, they’re sending the opposite message. They’re saying “we don’t care about your humanity, and we will let people attack you for your identity.”
LinkedIn’s decision is especially disappointing because professional networking platforms should be spaces where people can present their authentic selves without fear of purely hateful harassment. Trans professionals already face discrimination in hiring and workplace environments. The last thing they need is for LinkedIn to signal that it’s open season for harassment on its platform.
The company is trying to argue that it still prohibits harassment and hate speech generally. But vague, general policies are much harder to enforce consistently than specific examples. When you remove explicit guidance about what constitutes anti-trans harassment, you make it easier for bad actors to push boundaries and harder for moderators to draw clear lines.
This is exactly the wrong moment for tech companies to be weakening protections for vulnerable communities. Anti-trans rhetoric and legislation have reached fever pitch, with the Trump administration making attacks on transgender rights a central part of its agenda. This is when platforms should be strengthening their commitment to protecting people from harassment, not quietly rolling back safeguards.
Sure, standing up for what’s right when there’s political pressure to do otherwise is hard. But that’s exactly when it matters most. These companies have billions in revenue and armies of lawyers. If anyone can afford to take a principled stand, it’s them.
Instead, we’re watching them fold like cheap suits at the first sign of political headwinds. They’re prioritizing their relationships with authoritarian politicians over the safety of their users. And they’re doing it in the most cowardly way possible: quietly, without explanation, hoping no one will notice.
The message this sends to transgender users is clear: you’re expendable. Your safety and dignity are less important than our political calculations. And that message isn’t just coming from fringe platforms or obvious bad actors—it’s coming from mainstream services owned by some of the world’s largest companies.
This isn’t just bad for transgender users. It’s bad for everyone who believes that online spaces should be governed by consistent principles rather than political opportunism. When platforms start making policy decisions based on which way the political winds are blowing, they undermine their own credibility and the trust users place in them.
Hell, for years, all we heard from the MAGA world was how supposedly awful it is when platforms make moderation decisions based on political pressure.
Where are all of those people now?
The irony is that these companies are probably making themselves less safe, not more. By signaling that they’ll cave to political pressure, they’re inviting more of it. Authoritarians don’t respect weakness—they exploit it.
LinkedIn, Meta, YouTube, and the rest need to understand: there’s no appeasing the anti-trans mob. No matter how many protections you strip away, it will never be enough. Stick to your principles and protect your users regardless of political pressure.
But instead of showing backbone, these companies are racing to see who can capitulate fastest. It’s a disgraceful display of corporate cowardice at exactly the moment when courage is most needed.
We all deserve better than watching supposedly values-driven companies abandon their principles the moment it becomes politically inconvenient to maintain them.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Hank Green, popular YouTube creator and educator. After spending some time talking about being a creator at the whims of platforms, they cover: