Five Section 230 Cases That Made Online Communities Better
from the you-need-230-on-that-wall dept
The House Energy and Commerce Committee is holding a hearing tomorrow on “sunsetting” Section 230.
Despite facing criticism, Section 230 has undeniably been a cornerstone in the architecture of the modern web, fostering a robust market for new services, and enabling a rich diversity of ideas and expressions to flourish. Crucially, Section 230 empowers platforms to maintain community integrity through the moderation of harmful content.
With that, it’s somewhat surprising that the proposal to sunset Section 230 has garnered Democratic support, given that Section 230 has historically empowered social media services to actively remove content that perpetuates racism and bigotry, thus protecting marginalized communities, including individuals identifying as LGBTQ+ and people of color.
As the hearing approaches, I wanted to highlight five instances where Section 230 swiftly and effectively shielded social media platforms from lawsuits that demanded they host harmful content contrary to their community standards. Without Section 230, online services would face prolonged and costlier legal battles to uphold their right to moderate content — a right guaranteed by the First Amendment.
Section 230 Empowered Vimeo to Remove ‘Conversion Therapy’ Content
Christian Pastor James Domen and Church United sued Vimeo after the platform terminated their account for posting videos promoting Sexual Orientation Change Efforts (SOCE) (i.e. ‘conversion therapy’), which Vimeo argued violated its content policies.
Plaintiffs argued that Vimeo’s actions were not in good faith and discriminated based on sexual orientation and religion. However, the court found that the plaintiffs failed to demonstrate Vimeo acted in bad faith or targeted them discriminatorily.
The District Court initially dismissed the lawsuit, ruling that Vimeo was protected under Section 230 for its content moderation decisions. On appeal, the Second Circuit upheld the lower court’s dismissal, emphasizing that Vimeo’s actions fell within the protections of Section 230 and noting in particular that content moderation decisions are at the platform’s discretion when conducted in good faith. [Note: a third revision of the Court’s opinion omitted Section 230; however, the case remains a prominent example of how Section 230 ensures the early dismissal of content removal cases.]
In upholding Vimeo’s decision to remove content promoting conversion therapy, the Court reinforced that Section 230 protects platforms when they choose to enforce community standards that aim to maintain a safe and inclusive environment for all users, including individuals who identify with LGBTQ+ communities.
Notably, the case also illustrates how platforms can be safeguarded against lawsuits that may attempt to reinforce the privilege of majority groups under the guise of discrimination claims.
Case: Domen v. Vimeo, Inc., No. 20-616-cv (2d Cir. Sept. 24, 2021).
Section 230 Empowered Twitter to Remove Intentional Dead-Naming & Mis-Gendering
Meghan Murphy, a self-proclaimed feminist writer from Vancouver, ignited controversy with a series of tweets in January 2018 targeting Hailey Heartless, a transgender woman. Murphy’s posts, which included referring to Heartless as a “white man” and labeling her a “trans-identified male/misogynist,” clearly violated Twitter’s guidelines at the time by using male pronouns and mis-gendering Heartless.
Twitter responded by temporarily suspending Murphy’s account, citing violations of its Hateful Conduct Policy. Despite this, Murphy persisted in her discriminatory rhetoric, posting additional tweets that challenged and mocked transgender identities. This pattern of behavior led to a permanent ban in November 2018, after Murphy repeatedly engaged in what Twitter identified as hateful conduct, including dead-naming and mis-gendering other transgender individuals.
In response, Murphy sued Twitter alleging, among other claims, that Twitter had engaged in viewpoint discrimination. Both the district and appellate courts held that the actions taken by Twitter to enforce its policies against hateful conduct were consistent with Section 230.
The case of Meghan Murphy underscores the pivotal role of Section 230 in empowering platforms like Twitter to maintain safe and inclusive environments for all users, including those identifying as LGBTQ+.
Case: Murphy v. Twitter, Inc., 2021 WL 221489 (Cal. App. Ct. Jan. 22, 2021).
Section 230 Empowered Twitter to Remove Hateful & Derogatory Content
In 2018, Robert M. Cox tweeted a highly controversial statement criticizing Islam, which led to Twitter suspending his account.
“Islam is a Philosophy of Conquests wrapped in Religious Fantasy & uses Racism, Misogyny, Pedophilia, Mutilation, Torture, Authoritarianism, Homicide, Rape . . . Peaceful Muslims are Marginal Muslims who are Heretics & Hypocrites to Islam. Islam is . . .”
To regain access, Cox was required to delete the offending tweet and others similar in nature. Cox then sued Twitter, seeking reinstatement and damages, claiming that Twitter had unfairly targeted his speech. The South Carolina District Court, however, upheld the suspension, citing Section 230:
“the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Therefore, to the extent Plaintiff seeks to hold the Defendant liable for exercising its editorial judgment to delete or suspend his account as a publisher, his claims are barred by § 230(c) of the CDA.”
In other words, actions taken upon third-party content, such as content removal and account termination, are wholly within the scope of Section 230 protection.
Like the Murphy case, Cox v. Twitter emphasizes the importance of Section 230 in empowering platforms like Twitter to decisively and swiftly remove hateful content, maintaining a healthier online environment without getting bogged down in lengthy legal disputes.
Case: Cox v. Twitter, Inc., No. 2:18-2573-DCN-BM (D.S.C.).
Section 230 Empowered Facebook to Remove Election Disinformation
In April 2018, Facebook took action against the Federal Agency of News (FAN) by shutting down their Facebook account and page. Facebook cited violations of its community guidelines, emphasizing that the closures were part of a broader initiative against accounts controlled by the Internet Research Agency (IRA), a group accused of manipulating public discourse during the 2016 U.S. presidential elections. This action was part of Facebook’s ongoing efforts to enhance its security protocols to prevent similar types of interference in the future.
In response, FAN filed a lawsuit against Facebook, leading to a legal battle centered on whether Facebook’s actions violated the First Amendment or other legal rights of FAN. The Court, however, determined that Facebook was not a state actor and had not engaged in any joint action with the government that would subject it to First Amendment constraints. The Court also dismissed FAN’s claims for damages, citing Section 230.
In an attempt to avoid Section 230, FAN argued that Facebook’s promotion of FAN’s content via its recommendation algorithms converted FAN’s content into Facebook’s own content. The Court didn’t buy it:
Plaintiffs make a similar argument — that recommending FAN’s content to Facebook users through advertisements makes Facebook a provider of that content. The Ninth Circuit, however, held that such actions do not create “content in and of themselves.”
The FAN case illustrates the critical role Section 230 plays in empowering platforms like Facebook to decisively address and mitigate election-related disinformation. By shielding platforms that act swiftly against entities that violate their terms of service, particularly those involved in spreading divisive or manipulative content, Section 230 ensures that social media services can remain vigilant guardians against the corruption of public discourse.
Case: Federal Agency of News LLC v. Facebook, Inc., 2020 WL 137154 (N.D. Cal. Jan. 13, 2020).
Section 230 Empowered Facebook to Ban Hateful Content
Laura Loomer, an alt-right activist, filed lawsuits against Facebook (and Twitter) after her account was permanently banned. Facebook labeled Loomer as “dangerous,” a designation she argued was both wrongful and harmful to her professional and personal reputation. Facebook based this classification on its assessment that her activities and statements online aligned with behaviors that promote or engage in violence and hate:
“To the extent she alleges Facebook called her “dangerous” by removing her accounts pursuant to its DIO policy and describing its policy generally in the press, the law is clear that calling someone “dangerous” — or saying that she “promoted” or “engaged” in “hate” — is a protected statement of opinion. Even if it were not, Ms. Loomer cannot possibly meet her burden to prove that it would be objectively false to describe her as “dangerous” or promoting or engaging in “hate” given her widely reported controversial public statements. To the extent Ms. Loomer is claiming, in the guise of a claim for “defamation by implication,” that Facebook branded her a “terrorist” or accused her of conduct that would also violate the DIO policy, Ms. Loomer offers no basis to suggest (as she must) that Facebook ever intended or endorsed that implication.”
Loomer challenged Facebook’s decision on the grounds of censorship and discrimination against her political viewpoints. However, the Court ruled in favor of Facebook, citing Section 230 among other reasons. The Court’s decision emphasized that as a private company, Facebook has the right to enforce its community standards and policies, including the removal of users it deems as violating these policies.
Case: Loomer v. Zuckerberg, 2023 WL 6464133 (N.D. Cal. Sept. 30, 2023).
Jess Miers is Senior Counsel to the Chamber of Progress and a Section 230 expert. This post originally appeared on Medium and is republished here with permission.
Filed Under: content moderation, conversion therapy, disinformation, election disinformation, hate speech, lgbtq, section 230, site integrity

Comments on “Five Section 230 Cases That Made Online Communities Better”
Three reminders for the inevitable trolls:
At least it isn’t a markup.
You can tell a lot about a person by who they'll throw under the bus
With that, it’s somewhat surprising that the proposal to sunset Section 230 has garnered Democratic support, given that Section 230 has historically empowered social media services to actively remove content that perpetuates racism and bigotry, thus protecting marginalized communities, including individuals identifying as LGBTQ+ and people of color.
Less surprising and more telling, specifically that the Democrats involved don’t actually support those marginalized communities and are at best willing to see them silenced online if that’s what it takes to get some good ‘Look at me sticking it to Big Tech!’ soundbites. And that assumes that silencing minorities isn’t the goal rather than ‘just’ acceptable collateral damage.
Re:
that assumes that silencing minorities isn’t the goal rather than ‘just’ acceptable collateral damage.
I think they’re pulling a Batman gambit, mainly because they’re using minorities & small websites as pawns. If what Leif K-Brooks said is true: “and while some of them are much larger companies with much greater resources, they all have their breaking point somewhere,”
then it may mean they’re relying on acceptable collateral damage.
Re: Re: 'Sticking it to Big Tech' by giving them an unassailable market position
It’s possible they’re that stupid but I wouldn’t put money on it, because of all the groups that will come out ahead should 230 be killed, the major tech companies top the list: they will have just had their market dominance locked in due to all their competitors being killed off and the legal landscape shaped such that no future competitor could possibly become big enough to challenge them.
Similar to how the biggest beneficiaries of crippling encryption would be criminals, the ultimate irony of the ‘we must kill 230 to rein in Big Tech’ subsection of anti-230 arguments is that Big Tech would be the biggest beneficiary of the law’s removal.
Re: Re: Re:
It’s possible they’re that stupid
You never know, really.
The idea that I, as a private citizen, or a private company, can’t discriminate based on political beliefs is so incredibly fascist. The GQP has lost its ever-loving mind, and somehow Trump and Biden are neck and neck in the polls.
Re:
This is the fake moral outrage of all the so-called “free speech enthusiasts.” They think they’re being clever by using the morality of tolerance against the left, but they don’t understand that tolerance includes the paradox of tolerance, and tolerating fascists goes against actual tolerance.
They pretend free speech doesn’t involve filtering out the people you’ve already listened to and dismissed as trolls, bigots, and assholes.
Re:
some of the polls are paid for by trump so i don’t trust polls
Re:
The idea that I, as a private citizen, or a private company, can’t discriminate based on political beliefs is so incredibly fascist.
It is that, but it’s primarily a smokescreen: ‘political beliefs’ is merely the dogwhistle they use for racist, sexist, and/or other toxic and abhorrent speech, since those gutless cowards are too dishonest to actually own their own words and instead try to frame what they say as ‘political speech’ so they don’t have to be specific.
Or as the classic tweet put it…
Conservative: I have been censored for my conservative views
Me: Holy shit! You were censored for wanting lower taxes?
Con: LOL no…no not those views
Me: So…deregulation?
Con: Haha no not those views either
Me: Which views, exactly?
Con: Oh, you know the ones
(All credit to Twitter user @ndrew_lawrence.)
Re:
“The idea that I, as a private citizen …
can’t discriminate based on political beliefs …”
Freedom of Association has entered the chat.
Ok Loomer
If 230 is gutted, the left should have lawsuits ready to sue ex-Twitter, Truth Social, Fox News, Breitbart, and so on for all the user-generated content they would then be liable for.
While the article seems fine,
the author works for, and the original article was published by, the Chamber of Progress, a trade group of big tech companies that hides the amount of money it gets from its donors.
Re:
While your post seems fine,
we don’t know who you work for and how much money you get paid because you hid your identity.
Re: Re:
no one (or at least that’s what i want you to think)
i also tried to clarify that the article seems correct
Re: Re: Re:
When you call into question the credibility of the author, you do the same for the article.
Re: Re: Re:
whoops misread your post
Re:
Quick question: Even if what you say is true, how does that affect the credibility of the article?
It needs an update
When 230 was put in place, things like Facebook, Twitter, and TikTok didn’t exist. They didn’t know that in the not-so-distant future, most of the population would be on just a handful of sites, getting all their info and news from them, and that this would allow them to control public opinion in a lot of ways such as censoring certain people or a certain side of a political issue. There’s been plenty of evidence of this happening and they hide behind section 230. I believe it should still exist, but it definitely needs to be revised imho. There shouldn’t be just a handful of people deciding what the public should and shouldn’t see.
Re:
Always interesting reading someone’s take on Section 230 and how they think the authors reasoned, especially when it’s mostly divorced from what the authors actually reasoned and intended.
When you say things like “…such as censoring certain people or a certain side of a political issue. There’s been plenty of evidence of this happening and they hide behind section 230“, it only shows that you don’t know what you are talking about one bit, plus it proves you don’t even know about something called the First Amendment. I would also like a list of all this “plenty of evidence” and what was said.
But it’s okay for media in general to decide what the public should and shouldn’t see, or do you want to force them to carry other people’s speech against their wishes too? And here I would also like examples of what “they” have decided the public shouldn’t see.
You are entirely free to voice your opinion, but when it is built on assumptions, faulty understanding, zero knowledge, and make-believe, don’t be surprised when people think you are full of shit.
Re:
Huh. Never realized that Section 230 applies to traditional print media. Who knew?
Re:
Irrelevant. Congress passed 230 to let the services that were (and weren’t) precursors to Facebook, Twitter, and TikTok moderate speech as they so pleased.
Three things.
Yes or no: Do you believe the government should have the right to compel any interactive web service into hosting any third-party speech that it would otherwise refuse to host?
Also yes or no: Do you believe the government should have the right to compel any interactive web service that hosts “unorthodox”-yet-legal speech into refusing to host that speech?
No one who says this about 230 has ever offered any potential revision to 230 that wouldn’t harm the Internet as a whole and/or present a massive loophole with which bad-faith actors can attack legal speech because “it’s on the Internet”.
Complain to Congress about monopolization and anti-trust issues. Start a blog that gets out the info you think is being censored. Do literally anything else that addresses or routes around the problem but doesn’t go after 47 U.S.C. § 230—because you’d only be helping that handful of people you’re complaining about if you succeed in killing that law.
Re: Re:
Incorrect. Congress passed 230 to let online services, both existing then and existing in the future, including Facebook, Twitter, and TikTok, moderate speech as they so pleased.
Fucking what? And what’s wrong with the accurate and accepting phrase “people who are members of LGBTQ+ communities”, subversive phobe?
Re:
This. Not all such people actually identify with the groups they’re members of (not all trans people identify as trans, for example), but they nevertheless remain members of these groups, as stated.
IMHO, Meghan Murphy is correct that men aren’t women, so why she insists that individuals like Thomas Beatie, Elliot Page, etc., are women…
Re:
And any interactive service that would prefer to create a space for trans people to exist without facing endless harassment and derision has every legal right to tell Murphy and every other transphobe to fuck off. Prove they don’t.
Re: Re:
Again you miss the detail and criticize what you think was said. I thought you were supposed to be autistic?
Re: Re: Re:
I sincerely apologize for the error in my reading comprehension. That said:
I have never been formally diagnosed with autism of any kind. You and the trolls keep making the mistake that my saying “I might have a little autism” with relative and sincere uncertainty is the same thing as saying “I am actually autistic, no doubt about it” with the absolute and unyielding certainty of God.
Re: Re: Re:2
Here’s the thing: there’s no such thing as ‘a little autism’; you’re either on the autism spectrum or you’re not, and the fact that you lack empathy, lack theory of mind, miss details, etc. indicates allism rather than autism. I’m with AC on this, and you’re the troll rather than them.
Re: Re: Re:3
🤣
🤔
🤫
Re: Re: Re:3
I’m on the spectrum and I literally said, “I think I might be a little autistic” to the doctor who diagnosed me as exhibiting a significant number of indicative behaviors and symptoms. As Stephen said, he hasn’t been formally diagnosed, so using loose terminology to describe it is not just understandable but even common. I’ve heard the same from others who have been officially diagnosed. We’ve literally joked that anyone who suggests they “might be a little autistic” or “might have a bit of autism” likely has more indicators and they’re just not aware of what all the indicators are. I thought a lot of my indicators were just me being “weird” when I was a kid. Telling someone they lack empathy because they didn’t use terminology the way you expect isn’t useful or compassionate. And diagnosing someone over the internet based on limited conversation is “a little” useless too.
Re: Re:
Reading comprehension, ch… Oh, wait. No.
Re: Re:
Autistic attention to detail, ch… Oh wait, no.