230 Protects Users, Not Big Tech

from the it-protects-you dept

Once again, several Senators appear poised to gut one of the most important laws protecting internet users – Section 230 (47 U.S.C. § 230).

Don’t be fooled – many of Section 230’s detractors claim that this critical law only protects big tech. The reality is that Section 230 provides limited protection for all platforms, though the biggest beneficiaries are small platforms and users. Why else would some of the biggest platforms be willing to endorse a bill that guts the law? In fact, repealing Section 230 would only cement the status of Big Tech monopolies.

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion of what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it must spend time listening to the small services and everyday users who would be harmed should Section 230 be repealed.

Section 230 Protects Everyday Internet Users 

There’s another glaring omission in the arguments to end Section 230: how central the law is to ensuring that every person can speak online, and that Congress or the Administration does not get to define what speech is “good” and “bad”.   

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That’s in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of others.   

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services.

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. Even entirely meritless lawsuits cost thousands of dollars to fight, and often tens or hundreds of thousands of dollars.

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

Section 230’s detractors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

These arguments fundamentally misunderstand Section 230. The law lets platforms decide, largely for themselves, what kind of speech they want to host, and to remove speech that doesn’t fit their own standards without penalty. 

If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and, in cases of illegal behavior, to work with law enforcement to hold those users responsible.

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That older legal regime strongly discourages services from moderating, or even knowing about, user-generated content, because the more a service moderates user content, the more likely it is to be held liable for that content. Under those rules, online services would have a huge incentive to simply not moderate and not look for bad behavior. That is the exact opposite of lawmakers’ stated goal of protecting children and adults from harmful content online.

Republished from the EFF’s Deeplinks blog.


Comments on “230 Protects Users, Not Big Tech”

21 Comments
Anonymous Coward says:

many startups and small services would be inundated with costly litigation that could drive them offline. Even entirely meritless lawsuits cost thousands of dollars to fight, and often tens or hundreds of thousands of dollars.

It’s also worth keeping in mind that, in many cases (probably most), users have agreed to indemnify the service providers, without limit, for these legal fees. So, if you post a message that gets a company sued, you could end up owing the company millions of dollars.

Arianity (profile) says:

The law is not a shield for Big Tech.

It is, and you said so yourself earlier. It also is a shield for small sites, and users. It can (and does) do both. Big Tech is a part of “all platforms”.

That’s in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of others.

I mean, if that were true, publisher (or distributor) liability wouldn’t exist. There’s a reason so many people misunderstand 230, and it’s because it doesn’t always exactly line up with expectations for what qualifies as “their own speech”.

Section 230’s detractors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,”

These arguments fundamentally misunderstand Section 230. The law lets platforms decide, largely for themselves, what kind of speech they want to host,

That’s saying the same thing. The ability to decide what kind of speech they want to host is being able to decide if you want to allow, say, cyberbullying, or how many (or how few) resources you want to dedicate to it. (Exceptions apply to federal crimes, so it’s not great to mix those examples.)

The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content,

It doesn’t. It just doesn’t disincentivize them from doing those things, if they want to. There’s a distinction there.

Stephen T. Stone (profile) says:

Re:

It doesn’t.

It does, though. If websites weren’t incentivized to moderate speech, they’d likely turn into something worse than 8kun…until, y’know, the law decided to do something about that. A site whose admins know for sure that a bunch of criminal stuff happens on their servers and don’t do anything about that because “we don’t believe in moderation” is a site destined to die a quick and messy death. Even the admins at a shitpit like 4chan moderate speech to some extent⁠—even if their incentive is mostly “keep the site’s metaphorical ass out of the fire into which federal authorities would dump it”.

The First Amendment allows moderation to happen. 230 short-circuits lawsuits designed to route around the First Amendment and place liability for speech/actions on people/entities that don’t deserve to hold such liability. Any change to 230, including its repeal, would wreck that delicate state of affairs and bring about a massive sea change to how the Internet works. And that’s why there is no good-faith argument against 230.

Arianity (profile) says:

Re: Re:

It does, though. If websites weren’t incentivized to moderate speech, they’d likely turn into something worse than 8kun…

That isn’t because of 230 though, that’s because of other incentives (notably, the market). Most sites don’t turn into 8kun because most people don’t want to be on 8kun (but if a site really wants to be 8kun, it can). That isn’t because of 230; 230 just allows that incentive to kick in, without the disincentive of getting fucked by liability. There are incentives to not run a shitpit, but it’s because any sane person (and even most of the insane ones) doesn’t like a shitpit. “230 doesn’t provide the incentive” != “there are no incentives”.

A site whose admins know for sure that a bunch of criminal stuff

230 doesn’t affect criminal stuff, so that’s a bit of a separate thing.

The First Amendment allows moderation to happen.

Yes and no. 1A always allows moderation, but it doesn’t always guarantee no publisher liability tied to that moderation, even if it’s third party content. That’s why Stratton Oakmont v Prodigy freaked people out, and we got 230 – because Prodigy lost and was held to have publisher liability, despite the 1A defense. It’s not because it won but spent too much money to prove it (although that is also a problem in other cases, one that 230 fixes).

The reason 230 allows the shortcut is because the site is always protected from liability, as long as it’s third party content. No need to do discovery if nothing in discovery can pierce it. 1A can’t skip that, because there are some rare cases where that defense can be pierced (like Prodigy, Barnes v Yahoo, Blockowicz v Williams, etc).

Any change to 230, including its repeal, would wreck that delicate state of affairs

I wouldn’t go so far as to say any change. For instance, that one change you thought could be done without changing it, would probably be fine. (This also happens to be one of the few examples where 1A and 230 differ.)

But we’re not getting any well considered change here, so it’s moot. Anything realistically coming out of this is going to be trash.

But I don’t think it’s particularly helpful to say stuff like “230 protects users not Big Tech” to stop it. That’s not going to convince those Dem Senators not to fuck it up. You’re much better off sticking to the argument that it protects Big Tech, but it does so for good and necessary reasons (which the article does do, later on, just not consistently).

Stephen T. Stone (profile) says:

Re: Re: Re:

That isn’t because of 230 though, that’s because of other incentives (notably, the market).

Interactive web services that moderate very little of the speech that goes through them don’t give a shit about “the market”. If they did, they would moderate more of that speech.

230 just allows that incentive to kick in, without the disincentive of getting fucked by liability.

Thank you for restating my point.

230 doesn’t affect criminal stuff, so that’s a bit of a separate thing.

It isn’t. 230 creates a shield for interactive web services that protects them from liability for unlawful speech they didn’t make/directly publish. Putting even the tiniest dent in that shield makes it far less effective.

The reason 230 allows the shortcut is because the site is always protected from liability

No, it isn’t. 230 is a defense, not a guaranteed right, and that defense can only be raised in a court of law to short-circuit lawsuits that target a service over speech for which the service carries no liability. The plaintiffs in such a lawsuit have the burden of proving the service should hold liability; if they can’t jump that hurdle, that’s their problem. They shouldn’t get to drain a service dry with an expensive and time-consuming lawsuit that has no chance of winning. 230 exists to short-circuit those lawsuits before that outcome can happen.

that one change you thought could be done without changing it, would probably be fine

And if you look in the reply right beneath mine in that thread, you’ll see issues raised by that change.

we’re not getting any well considered change here, so it’s moot. Anything realistically coming out of this is going to be trash.

And yet, you’ll still support the idea that 230 must somehow be changed to magically make it better, even though there is no change that could be made to 230⁠—including the one you linked to⁠—that wouldn’t tank 230’s effectiveness within the broader context of the U.S. legal system.

I don’t think it’s particularly helpful to say stuff like “230 protects users not Big Tech” to stop it.

The point of such phrasing is to remind everyone that 230 isn’t only a(n alleged) “gift to Big Tech” or was made only with “Big Tech” in mind⁠—it also protects smaller companies and even users of services both “Big” and small. Maybe there’s a more elegant way of phrasing it that includes the stuff you’re talking about, sure. But that phrasing isn’t wrong.

Anonymous Coward says:

Re: Re: Re:2

Interactive web services that moderate very little of the speech that goes through them don’t give a shit about “the market”.

That’s fair – “market” probably isn’t the right word, because there are nonmarket motivations (Truth Social isn’t the way it is for market purposes, and FB’s changes are political, etc.). But the rest about shitpits still applies; those motivations aren’t coming from 230.

Thank you for restating my point.

Yes, and that point is different than 230 creating the incentive. “230 removes the massive liability disincentive to allow other incentives to matter” and “230 creates an incentive to moderate” are not synonyms. That doesn’t mean the former isn’t important, but they are different things. And the distinction matters a lot, if for nothing else than understanding what those incentives are, and how/why sites respond to them.

It isn’t. 230 creates a shield for interactive web services that protects them from liability for unlawful speech they didn’t make/directly publish

Civil yes, criminal no.

No effect on criminal law
Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title, chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute. link

You know this, because you’ve correctly made this distinction yourself before. Maybe you didn’t mean criminal originally?

that defense can only be raised in a court of law to short-circuit lawsuits that target a service over speech for which the service carries no liability.

No, it doesn’t. Prodigy is an example of this. It lost and had publisher liability pre-230, and after 230 passed the case was dismissed on appeal because 230 applied to it.

Eric Goldman’s excellent article goes into this in detail:

This Essay explains how Section 230 provides significant and irreplaceable substantive and procedural benefits beyond the First Amendment’s free speech protections.

Some of these claims have strong First Amendment defenses, analogous to the defamation jurisprudence. However, for other claims, First Amendment defenses have little or no effect. Section 230 equally immunizes all of these claims, so it clearly provides more protection for those claims with limited or weak First Amendment defenses.

The procedural benefit is the more important part (and in most cases, is what matters, because most cases are 1A protected anyway), but it’s not the only part.

The plaintiffs in such a lawsuit have the burden of proving the service should hold liability

There is no way to show that for third-party content under 230 (with a few explicit exceptions, e.g. criminal law and FOSTA). Again, quoting Goldman: “Indeed, courts routinely interpret Section 230 to immunize all claims based on third-party content (other than those referenced in Section 230’s statutory exclusions), regardless of what causes of action the plaintiff actually alleges.”

And if you look in the reply right beneath mine in that thread, you’ll see issues raised by that change.

Those replies don’t say that it would be impossible, just that there are potential issues that would need to be addressed first. They don’t say those issues are unaddressable.

And yet, you’ll still support the idea that 230 must somehow be changed to magically make it better, even though there is no change that could be made to 230⁠—including the one you linked to⁠—that wouldn’t tank 230’s effectiveness

I don’t see how you can argue that something you thought could already happen would tank its effectiveness. You thought it already worked that way. And the comments afterwards don’t say that it would necessarily tank it, either.

The point of such phrasing is to remind everyone that 230 isn’t only a(n alleged) “gift to Big Tech” or was made only with “Big Tech” in mind… But that phrasing isn’t wrong.

I mean, the “only” part is pretty important. Leaving it out changes the meaning entirely. “230 isn’t a shield for Big Tech” and “230 isn’t only a shield for Big Tech” have very different meanings. And I think that’s going to matter when you’re talking to a Dem Senator who already doesn’t understand 230 and is thinking about repeal – they’re not going to give the benefit of the doubt; they’re going to think it’s lying to them.

Arianity (profile) says:

Re: Re: Re:2

Section 230 not applying in cases of federal crimes is a lot different to it not applying to petty offenses, which it still does,

So ignoring that this has nothing to do with what we were talking about (he was explicitly talking about “criminal stuff”, and specifically incentives that still exist under 230; even in the hypothetical world where this is true, it would not be what he was referring to), this is still also just wrong. Petty offenses in federal law are still federal crimes, and are governed by federal crime statutes. A petty offense just denotes the magnitude. See for instance here.

Being charitable, you may be confusing it with how 230 can preempt state laws. That does not have to do with being a petty offense or not.

Feel free to link a reputable source or a court case showing otherwise. You won’t because you pulled this out of your ass. If you’re going to try to pedantically gotcha something irrelevant, at least have some clue about what the fuck you’re talking about.

Rocky (profile) says:

Re:

It is, and you said so yourself earlier. It also is a shield for small sites, and users. It can (and does) do both. Big Tech is a part of “all platforms”.

Which he actually said if you had paid attention.

I mean, if that were true, publisher (or distributor) liability wouldn’t exist.

Why are you conflating two different things? In one instance you have a publisher who decides beforehand to accept any liability that may come from speech they decide to publish; in the other instance they may be saddled with liability for 3rd-party speech they didn’t know of beforehand. This isn’t difficult to distinguish between, so your “if that were true” is based on your own misunderstanding.

That’s saying the same thing.

Not really, unless you want to reduce the argument down to a simplistic yes/no that has no bearing on reality.

The ability to decide what kind of speech they want to host is being able to decide if you want to allow, say, cyberbullying, or how many (or how few) resources you want to dedicate to it.

Define cyberbullying in such a way that anyone can clearly state that “this is cyberbullying which is a crime”; I’ll wait. And that is exactly the point, because the definition boils down to “I know it when I see it”, which is very subjective, and every site has its own rules for what is allowed or not based on the preferences of those running it.

It doesn’t. It just doesn’t disincentivize them from doing those things, if they want to. There’s a distinction there.

But it does. Just because you don’t agree with how some sites moderate their site, it doesn’t mean they aren’t incentivized to moderate it as they wish. If they weren’t incentivized, why even moderate?

The problem with your argument is that you have turned it on its head to prove a non-existent point; it’s like saying going to the beach doesn’t disincentivize people from lazing in the sun or going swimming.

Arianity (profile) says:

Re: Re:

Which he actually said if you had paid attention.

They did in some places and not in others, and I explicitly acknowledged that (hence “and you said so yourself earlier”). The reason I’m mentioning it is that it’s inconsistent, and it’s inconsistent in a way that tries to downplay the fact that it does shield Big Tech.

Saying it’s not a shield for Big Tech, and then later admitting it is a shield for Big Tech but also does good things, is not a good defense. Just stick with the latter. Being consistent with “yes, it is a shield for Big Tech, but it also has to be this way for users/small platforms” is a way better pitch.

Why are you conflating two different things?

Because those two different things are not different in the way the article is comparing them (which is just about whether it’s 3rd party or not).

On top of that, those differences are not present in all cases:

In one instance you have a publisher who decides beforehand to accept any liability that may come from speech they decide to publish; in the other instance they may be saddled with liability for 3rd-party speech they didn’t know of beforehand.

Even if a site proactively decides to publish something, it still has 230 protection. This is explicitly stated in e.g. Zeran v AOL: “Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”

230 protects both if they don’t know, and also if they did know beforehand. The comparison to other forms of publishing liability is only fair in the latter case, that’s true.

Not really, unless you want to reduce the argument down to a simplistic yes/no that has no bearing on reality.

What, specifically, is the difference, then? It’s not boiling it down; they are fundamentally two sides of the same coin.

Define cyberbullying in such a way that anyone can clearly state that “this is cyberbullying which is a crime”; I’ll wait.

Why? I’m not claiming it’s a crime, or even that it should be a crime.

And that is exactly the point, because the definition boils down to “I know it when I see it”, which is very subjective

Yes, and that freedom means you’re free to say “I never see it” if you want. I’m not saying it’s easy, or even that the freedom shouldn’t exist. It does exist, and for good reason. The author is saying that it doesn’t exist, which is not the same thing as saying it exists for a reason. If you’re trying to save 230, I don’t think it is very helpful to not acknowledge the trade-off, even if the trade-off is absolutely worth it.

But it does. Just because you don’t agree with how some sites moderate their site, it doesn’t mean they aren’t incentivized to moderate it as they wish. If they weren’t incentivized, why even moderate?

Because there are other incentives to moderate; they just don’t come from 230. Unmoderated cesspools suck, and basically no one likes them. A site can be a cesspool if it wants, and 230 is neutral on that. That doesn’t mean people will like it, though. There’s functionally almost no market for it. “230 doesn’t provide the incentive” != “there are no incentives”.

This comment has been flagged by the community.

Anonymous Coward says:

Congress or the Administration does not get to define what speech is “good” and “bad”.

We’re quickly approaching that point, with Apartheid Clyde’s ability to literally bribe people to vote a specific way not even being questioned by the judiciary.

It’s happening. And there isn’t a damn thing anyone’s doing about it.
