NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment

from the that's-not-how-it-works dept

The NY Times has real difficulty not misrepresenting Section 230. Over and over and over and over and over again it has misrepresented how Section 230 works, even having to once run this astounding correction (to an article that had a half-page headline saying Section 230 was at fault):

A day later, it had to run another correction on a different article also misrepresenting Section 230:

You would think with all these mistakes and corrections that the editors at the NY Times might take things a bit more slowly when either a reporter or a columnist submits a piece purportedly about Section 230.

Apparently not.

Julia Angwin has done some amazing reporting on privacy issues in the past and has exposed plenty of legitimately bad behavior by big tech companies. But, unfortunately, she appears to have been sucked into nonsense about Section 230.

She recently wrote a terribly misleading opinion piece, bemoaning social media algorithms and blaming Section 230 for their existence. The piece is problematic and wrong on multiple levels. It’s disappointing that it ever saw the light of day without someone pointing out its many flaws.

A history lesson:

Before we get to the details of the article, let’s take a history lesson on recommendation algorithms, because it seems that many people have very short memories.

The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop. There were attempts to organize that information and make it useful. Things like Yahoo became popular not because they had a search engine (that came later!) but because they were an attempt to “organize” the internet (Yahoo originally stood for “Yet Another Hierarchical Officious Oracle”, recognizing that there were lots of attempts to “organize” the internet at that time).

After that, search and search algorithms became a central way of finding stuff online. In its simplest form, search is a recommendation algorithm: the keywords you provide are run against an index, and the engine returns what it thinks are the best matches. In the early days, Google cracked the code on making that recommendation algorithm work for content across the wider internet.
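
To make that concrete, here is a minimal sketch of what "keywords run against an index" means. Everything in it is made up for illustration (the documents, the scoring), and it is nothing like a production engine, but it shows the core idea: the ranking a search engine returns is just its opinion of relevance.

```python
from collections import Counter

# A toy "index": document id -> word counts. The documents are made up.
documents = {
    "doc1": "wild mushroom foraging guide",
    "doc2": "mushroom risotto recipe",
    "doc3": "guide to safe foraging",
}
index = {doc_id: Counter(text.split()) for doc_id, text in documents.items()}

def search(query, top_n=2):
    """Score each document by keyword overlap with the query.
    The ranking is the engine's opinion of relevance, nothing more."""
    keywords = query.lower().split()
    scores = {
        doc_id: sum(words[kw] for kw in keywords)
        for doc_id, words in index.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_n]

print(search("mushroom foraging"))  # [('doc1', 2), ('doc2', 1)]
```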

The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”

The next generation of the internet put content into various silos. Some of those were silos of user-generated content, such as Facebook and YouTube. And some of them were silos of professional content, like Netflix or iTunes. But, once again, it wasn’t long before users felt overwhelmed by the sheer amount of content at their fingertips. Again, they sought out recommendation algorithms to help them find the relevant or “good” content, and to avoid the less relevant “bad” content. Netflix’s algorithm isn’t very different from Google’s recommendation engine. It’s just that, rather than “here’s what’s most relevant for your search keywords,” it’s “here’s what’s most relevant based on your past viewing history.”
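
To illustrate the viewing-history version, here is a toy sketch along the same lines. The titles and genre tags are hypothetical, and this is emphatically not Netflix’s actual system, just the simplest possible “more like what you watched” recommender:

```python
# A hypothetical catalog of titles -> genre tags. Not real Netflix data.
catalog = {
    "Space Saga":   {"sci-fi", "adventure"},
    "Robot Dreams": {"sci-fi", "drama"},
    "Baking Wars":  {"reality", "food"},
    "Star Quest":   {"sci-fi", "adventure"},
}

def recommend(watched):
    """Rank unwatched titles by genre overlap with the viewing history."""
    seen_genres = set().union(*(catalog[title] for title in watched))
    unwatched = [title for title in catalog if title not in watched]
    return sorted(unwatched,
                  key=lambda title: len(catalog[title] & seen_genres),
                  reverse=True)

print(recommend({"Space Saga"}))  # ['Star Quest', 'Robot Dreams', 'Baking Wars']
```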

Indeed, Netflix somewhat famously perfected the content recommendation algorithm in those years, even offering up a $1 million prize to anyone who could build a better version. Years later, a team of researchers won the award, but Netflix never implemented it, saying that the marginal gains in quality were not worth the expense.

Either way, though, it was clearly established that the benefit and the curse of the larger internet is that, in enabling anyone to create and access content, too much content is created for anyone to deal with. Thus, curation and recommendation are absolutely necessary. And handling both at scale requires some sort of algorithm. Yes, some personal curation is great, but it does not scale well, and the internet is all about scale.

People also seem to forget that recommendation algorithms aren’t just telling you what content they think you’ll want to see. They’re also helping to minimize the content you probably don’t want to see. Search engines choosing which links show up first are also choosing which links they won’t show you. My email is only readable because of the recommendation engines I run against it (more than just a spam filter, I also run algorithms that automatically put emails into different folders based on likely importance and priority).
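
As an example of that kind of folder-sorting, here is a hypothetical rule-based sketch. The rules and addresses are invented for illustration, not my actual setup:

```python
# Hypothetical rules: each is a predicate plus a destination folder,
# checked in order; the first match wins. None of this is my real setup.
RULES = [
    (lambda msg: "unsubscribe" in msg["body"].lower(), "Newsletters"),
    (lambda msg: msg["sender"].endswith("@work.example.com"), "Priority"),
    (lambda msg: "invoice" in msg["subject"].lower(), "Finance"),
]

def sort_email(msg):
    """Return the destination folder for a message."""
    for predicate, folder in RULES:
        if predicate(msg):
            return folder
    return "Inbox"  # default when no rule matches

msg = {"sender": "boss@work.example.com", "subject": "Q3 plan", "body": "see attached"}
print(sort_email(msg))  # Priority
```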

Algorithms aren’t just a necessary part of making the internet usable today. They’re a key part of improving our experiences.

Yes, sometimes algorithms get things wrong. They could recommend something you don’t want. Or demote something you do. Or maybe they recommend some problematic information. But sometimes people get things wrong too. Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion and not wholly outsourcing your brain to the algorithm. If the problem is people outsourcing their brain to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.

That it’s just a suggestion or a recommendation is also important from a legal standpoint, because recommendation algorithms are simply opinions: they are an algorithm’s opinion of what content is most relevant to you, based on the information it has at the time.

And opinions are protected free speech under the First Amendment.

If we held anyone liable for opinions or recommendations, we’d have a massive speech problem on our hands. If I go into a bookstore and the guy behind the counter recommends a book that makes me sad, I have no legal recourse, because no law has been broken. If we say that tech company algorithms mean they should be liable for their recommendations, we’ll create a huge mess: spammers will be able to sue if their email is filtered to spam. Terrible websites will be able to sue search engines for downranking their nonsense.

On top of that, First Amendment precedent has long been clear that the only way a distributor can be held liable for even a harmful recommendation is if the distributor had actual knowledge of the law-violating nature of the recommendation.

I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushrooms were, in fact, poisonous.

We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender unless they have actual knowledge of the illegal nature of the content. Absent that, there is nothing to actually sue over.

And, that’s good. Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.

Note that Section 230 does not come up even once in this history lesson. All Section 230 does is say that websites and users (that’s important!) are immune from liability for their editorial choices regarding third-party content. That doesn’t change the underlying First Amendment protections for their editorial discretion; it just allows them to get cases tossed out earlier (at the very earliest stage, on a motion to dismiss) rather than having to go through expensive discovery and summary judgment, and possibly even all the way to trial.

Section 230 isn’t the issue here:

Now back to Angwin’s piece. She starts out by complaining about Mark Zuckerberg talking up Meta’s supposedly improved algorithms. Then she takes the trite and easy route of dunking on that by pointing out that Facebook is full of AI slop and clickbait. That’s true! But… that’s got nothing to do with legal liability. That simply has to do with… how Facebook works and how you use Facebook? My Facebook feed has no AI slop or clickbait, perhaps because I don’t click on that stuff (and I barely use Facebook). If there were no 230 and Facebook were somehow incentivized to do less algorithmic recommendation, feeds would still be full of nonsense. That’s why the algorithms were created in the first place. Indeed, studies have shown that when you remove algorithms, feeds are filled with more nonsense, because the algorithms don’t filter out the crap anymore.

But Angwin is sure that Section 230 is to blame and thinks that if we change it, it will magically make the algorithms better.

Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.

Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.

So, again, this is wrong. From the earliest days of the internet, we always relied on recommendation systems and moderation, as noted above. And “social media” didn’t even come into existence until years after Section 230 was created. So, it’s not just wrong to say that Section 230’s protections made sense for early social media, it’s backwards.

Also, it is somewhat misleading to call Section 230 “a snippet of law embedded in the 1996 Communications Decency Act.” Section 230 was an entirely different law, designed to be a replacement for the CDA. It was the Internet Freedom and Family Empowerment Act and was put forth by then-Reps. Cox and Wyden as an alternative to the CDA. Then, Congress, in its infinite stupidity, took both bills and merged them.

But it was also intended to help protect companies from being sued for recommendations. Indeed, two years ago, Cox and Wyden explained this to the Supreme Court in a case about recommendations:

At the same time, Congress drafted Section 230 in a technology-neutral manner that would enable the provision to apply to subsequently developed methods of presenting and moderating user-generated content. The targeted recommendations at issue in this case are an example of a more contemporary method of content presentation. Those recommendations, according to the parties, involve the display of certain videos based on the output of an algorithm designed and trained to analyze data about users and present content that may be of interest to them. Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230. And because Section 230 is agnostic as to the underlying technology used by the online platform, a platform is eligible for immunity under Section 230 for its targeted recommendations to the same extent as any other content presentation or moderation activities.

So the idea that 230 wasn’t meant for recommendation systems is wrong and ahistorical. It’s strange that Angwin would just claim otherwise, without backing up that statement.

Then, Angwin presents a very misleading history of court cases around 230, pointing out cases where Section 230 has been successful in getting bad cases dismissed at an early stage, but in a way that makes it sound like the cases would have succeeded absent 230:

Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.

But again, these links misrepresent and misunderstand how Section 230 functions under the umbrella of the First Amendment. None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable. All Section 230 did was speed up the resolution of those cases, without stopping the plaintiffs from taking legal action against those actually responsible for the harms.

And, similarly, we could point to another list of cases where Section 230 “shielded tech firms from consequences” for things we want them shielded from consequences on, like spam filters, kicking Nazis off your platform, fact-checking vaccine misinformation and election denial disinformation, removing hateful content and much much more. Remove 230 and you lose that ability as well. And those two functions are joined at the hip. You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for things we want to protect. At least not without violating the First Amendment.

This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo become a much bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).

Angwin’s issue (as is the issue with so many Section 230 haters) is that she wants to blame tech companies for harms created by users of those technologies. At its simplest level, Section 230 is just putting the liability on the party actually responsible. Angwin’s mad because she’d rather blame tech companies than the people actually selling drugs, sexually harassing people, selling illegal arms or engaging in human trafficking. And I get the instinct. Big tech companies suck. But pinning liability on them won’t fix that. It’ll just allow them to get out of having important editorial discretion (making everything worse) while simultaneously building up a bigger legal team, making sure competitors can never enter the space.

That’s the underlying issue.

Because if you blame the tech companies, you don’t get less of those underlying activities. You get companies who won’t even look to moderate such content, because that would be used in lawsuits against them as a sign of “knowledge.” Or if the companies do decide to more aggressively moderate, you would get any attempt to speak out about sexual harassment blocked (goodbye to the #MeToo movement… is that what Angwin really wants?)

Changing 230 would make things worse, not better:

From there, Angwin presents the absolutely batshit crazy 3rd Circuit opinion in Anderson v. TikTok (which explicitly ignored a long list of other cases based on a misreading of a non-binding throwaway line in a Supreme Court ruling, and gave no other justification for its holding) as a good thing:

If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies, including TikTok, to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.

Except it won’t do that. Because of the First Amendment, it does the opposite. The First Amendment requires actual knowledge of the violative actions and content, so doing this will mean one of two things: companies taking a much less proactive stance, or companies becoming much quicker to remove any controversial content (so goodbye #MeToo, #BlackLivesMatter, or protests against the political class).

Even worse, Angwin seems to have spoken to no one with actual expertise on this if she thinks this is the end result:

My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

As someone who is actively working to help create systems that give control back to users, I will say flat out that Angwin gets this backwards. Without Section 230 it becomes way more difficult to do so. Because the users themselves would now face much greater liability, and unlike the big companies, the users wouldn’t have buildings full of lawyers willing and able to fight such bogus legal threats.

If you face liability for giving users more control, users get less control.

And, I mean, it’s incredible to say we need legal guardrails and less 230 and then say this:

In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends.

Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.

As a board member of Bluesky, I can say that those content moderation settings, and the ability for others to make feeds that Angwin can choose from, are possible in large part due to Section 230. Without Section 230 protecting both Bluesky and its users, defending lawsuits over those feeds becomes much more difficult.

Angwin literally has this backwards. Without Section 230, is Bluesky as open to offering up third-party feeds? Are they as open to allowing users to create their own feeds? Under the world that Angwin claims to want, where platforms have to crack down on “bad” content, it would be a lot more legally risky to allow user control and third-party feeds. Not because providing the feeds would lead to legal losses, but because without 230 there would be more bogus lawsuits, and it would cost way more to get them tossed out under the First Amendment.

Bluesky doesn’t have a building full of lawyers like Meta has. If Angwin got her way, Bluesky would need that if it wanted to continue offering the features Angwin claims she finds so encouraging.

This is certainly not the first time that the NY Times has directly misled the public about how Section 230 works. But Angwin surely knows many of the 230 experts in the field. It appears she spoke to none of them and wrote a piece that gets almost everything backwards. Angwin is a powerful and important voice for fixing many of the downstream problems of tech companies. I just wish she would spend some time understanding the nuances of 230 and the First Amendment so that her recommendations could be more accurate.

I’m quite happy that Angwin likes Bluesky’s approach to giving power to end users. I only wish she wasn’t advocating for something that would make that way more difficult.



Comments on “NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment”

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Once more, with feeling: No one can oppose 230 without lying about it.

Along those lines: Those who claim “230 needs reform” love to evade questions about what exact reforms are needed and how those specific reforms would affect the broader Internet, including smaller interactive web services.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re:

Nobody is forcing you to read these articles. Nobody is forcing you to worry all the live-long day about 230 and the Internet. You’re doing that to yourself.

Is the death of 230 possible? Yes. Can you do anything about it? Maybe not. Should you worry so much about a future that isn’t written⁠—one that you may not be able to affect⁠—that you wreck yourself mentally and/or physically? Hell no. Besides, if you spend all day speculating about the future in a negative way, of course you’re going to feel like shit.

I’m worried about 230 getting the axe, too. But as someone with no actual sociopolitical power, I couldn’t stop that from happening even if I tried. Whatever happens, happens; we’ll all have to adapt somehow. And if you refuse to adapt? That’s your problem.

This comment has been flagged by the community.

Stephen T. Stone (profile) says:

Re: Re: Re:

I’m waiting for

Stop doing that to yourself. If 230 is axed, you’re not going to get a cash prize for calling it. All you’re doing is driving yourself insane with worry over a situation in which you have little-to-no actual control or influence.

I think it’s safe to say that most of BestNetTech’s regular commentators would love to see 230 left alone. If it gets the axe, we’re going to mourn the loss of 230 (or whatever parts of it get axed away). Then we’re going to look for ways to mitigate the effects of 230 getting axed⁠—if we’re not already doing that right now. (Hope for the best and plan for the worst, as the saying goes.) We’ll be prepared to adapt. You’ll apparently be pissing your pants and yelling “I told you so”.

Worry, but don’t worry so much that you become numb to reality⁠—or positive outcomes. As it stands, despite the attacks against 230, it remains intact. Celebrate that fact even while you plan for a future without it. And if the death of 230 never comes to pass, hey, no big deal that you spent time prepping. Well, unless you become one of those weird-ass “I live in an underground bunker” apocalypse preppers or some shit…in which case, you’ve got a different problem.

This comment has been deemed insightful by the community.
Anonymous Coward says:

“The early internet was both great and a mess. It was great because anyone could create anything and communicate with anyone. But it was a mess because that came with a ton of garbage and slop.”

Yes…and no. First, let’s note that the early Internet existed well before Yahoo (referenced later in that paragraph) came along.

Second, it came with some garbage and slop, but not a lot — because people who ran things actually cared, actually paid attention, actually responded to comments and complaints, actually shut down idiots and abusers, actually took some pride in their operations and tried hard to make sure that they were a positive for the entire rest of the Internet.

Compare and contrast with today.

Third, and to the larger point: we were making and implementing policy decisions (like moderation) well before Section 230 came along because we realized, somewhere in the early 1980s, that we had to. We did it reluctantly because we’re techies and we’d rather be doing other things, but we realized that it was necessary…so we did what we had to. Nothing has changed in the interim, but those who would gut Section 230 don’t seem to realize that. They would blow up the dam that’s holding back a flood because they don’t like the architectural style of the abutments.

This comment has been flagged by the community.

Anonymous Coward says:

Re:

AI!?!?!?!?!?!?!?!?
AI tools like ChatGPT are a menace to society! They churn out responses without any real understanding or empathy, leading to shallow and often misleading interactions. These tools are built on vast amounts of data, but they lack the nuance and context that human intelligence brings to the table. They can perpetuate biases, spread misinformation, and even manipulate users by exploiting their data. The worst part? They’re taking over jobs that require genuine human touch and creativity, leaving people out of work and feeling devalued. It’s a dystopian nightmare where machines are prioritized over human connection and authenticity.

This comment has been flagged by the community.

Arianity says:

If the problem is people outsourcing their brain to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.

Eh, yes and no. It doesn’t fix the underlying problem, but it can improve the results they get. That’s not nothing. And that’s not getting into how hard it can be to get around suggestions like Google’s. As it enshittifies, it’s not as simple as just scrolling down. There’s a discussion orthogonal to liability about forcing choice in algorithms (which, yes, would have speech implications).

Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad.

You still get full 230 protections even if you’re fully certain it’s bad. For example, the article says: “None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable.” In the case of Herrick at least (2nd link), they had actual knowledge of the underlying issue.

You can’t get rid of the protections for the stuff Julia Angwin says is bad without also losing the protections for things we want to protect. At least not without violating the First Amendment.

Existing publisher liability already literally does this, without violating the First Amendment, to some extent. Publishers are free to not publish Nazis etc. I’m not saying it’s easily ported over, but neither are they inherently connected at the hip.

At its simplest level, Section 230 is just putting the liability on the party actually responsible.

Except, the publisher is (sometimes!) partially “actually responsible” for things that are published. That’s why we have publisher liability in the first place. If a publisher isn’t responsible, publisher liability as a concept shouldn’t exist in print at all.

The First Amendment requires actual knowledge of the violative actions and content

230 doesn’t. (That said, this also assumes that this is a correct interpretation of the First Amendment, which is not a given. The Armslist case would be a good example)

It’s not hard to transpose this to the internet. If Google recommends a link that causes someone to poison themselves, precedent says we can hold the author liable, but not the distributor/recommender unless they have actual knowledge of the illegal nature of the content.

It’s worth noting, if you’re transposing it, 230 collapses distributors and publishers. It explicitly treats all sites as not publishers, regardless of actual knowledge. Fixing the actual knowledge part wouldn’t fix everything (most cases they don’t have it), but it does address some.

The First Amendment requires actual knowledge of the violative actions and content, so doing this will mean one of two things: companies taking a much less proactive stance, or companies becoming much quicker to remove any controversial content

This depends on what it’s replaced with. With what Angwin is suggesting, it’d be a problem, but speaking generally, it’s a false dichotomy.

And even then it’s questionable. 230 protections already don’t extend to e.g. copyright/criminal liability, and we don’t see them ignoring or instantly removing those things as you’re claiming they would.

As a board member of Bluesky, I can say that those content moderation settings, and the ability for others to make feeds that Angwin can choose from, are possible in large part due to Section 230.

It might be worth thinking about how we can make other platforms provide that, while also keeping the benefits of 230. It doesn’t have to be all or nothing.

Because the users themselves would now face much greater liability,

It seems clear from Angwin’s article that when she’s talking about users, she means the user using the service itself, not other users providing the filters (she may think Bluesky’s options are all generated by Bluesky). You can’t really sue yourself.

That said, there are creative schemes you could come up with, with people opting into something and giving up liability.

since it will cost much more to defend (even if you’d win on First Amendment grounds years later).

Again, I’ll note that this is something that should be addressed in its own right. It doesn’t have to be that bad.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

Regardless of whether things would be fine on the liability end with or without Section 230, fact is that without it, a lot of smaller websites, messengers, forums, etc. online will be forced to close their doors. Either due to getting sucked dry by legal fees from constant bogus or moral-panic lawsuits, or preemptively to avoid exactly that.

Arianity says:

Re: Re:

Regardless of whether things would be fine on the liability end with or without Section 230, fact is that without it, a lot of smaller websites, messengers, forums, etc. online will be forced to close their doors. Either due to getting sucked dry by legal fees from constant bogus or moral-panic lawsuits, or preemptively to avoid exactly that.

Yeah, any changes are predicated on also keeping a similar system for fast-tracking the dismissal of bogus lawsuits, like we have now. Otherwise it’s a nonstarter if you can’t keep that part. But I don’t see a reason why we wouldn’t be able to do that.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: Re: Re:

But I don’t see a reason why we wouldn’t be able to do that.

This kind of admission indicates you probably shouldn’t have posted such a lengthy response with unearned confidence. Many of the attempts to repeal 230 have involved not putting anything substantial in place to protect websites from death by lawsuit flood. It’s a very real danger if anyone actually gains traction with a repeal effort.

Arianity says:

Re: Re: Re:4

And I don’t exactly see the way lawsuits work getting changed to be less costly to combat as a thing that’ll happen anytime soon either, y’know?

To be clear, I don’t see it happening anytime soon, either. But I don’t think that means it’s not worth pushing towards (or at the very least, just talking about). Politics is a strong and slow boring of hard boards, and all that. Change takes time. I’m a progressive, so there’s a lot of things in politics that I’d like, be it universal healthcare or free college, that aren’t going to happen anytime soon. But you gotta start somewhere, right?

In the same way, we’re not going to get people to stop attacking 230 for bad reasons anytime soon, either. It’s still a fight worth fighting, though.

Arianity says:

Re: Re: Re:4

Just wait until someone like Mark Zuckerberg happens to say something Mike agrees with. Then Arianity really loses his mind.

Only when Mike is deceptive or inconsistent about it. Agreeing with Zuck is fine, lying about it to make it sound better is something Mike should know better than to do.

Arianity says:

Re: Re: Re:6

You really want us to link back to the article do you?

Please do, because I know you’re deliberately misquoting my actual argument.

People aren’t dumb, Arianity.

Then why do you keep acting like it?

We can read.

Apparently not, considering how much of this thread has been about misquoting me or complaining about how long replies are.

Arianity says:

Re: Re: Re:2

This kind of admission indicates you probably shouldn’t have posted such a lengthy response with unearned confidence.

Why not?

Many of the attempts to repeal 230 have involved not putting anything substantial in place to protect websites from death by lawsuit flood.

I’m aware. The fact that it hasn’t been done does not mean that it can’t be done. Those are two different things.

It’s a very real danger if anyone actually gains traction with a repeal effort.

It’s one I’ve repeatedly acknowledged, and agree with.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:3

The fact that it hasn’t been done does not mean that it can’t be done.

It kinda does, though. No one who suggests any sort of reform or repeal of 230 has ever offered any ideas on how to protect smaller services from a “death by a thousand lawsuits” situation that larger services can generally avoid. If you could do it, you would be the first.

Anonymous Coward says:

Re: Re: Re:5

So your solution amounts to basically “just be a billionaire, lol”? Since that is the kind of power it would take to survive the death by a thousand lawsuits, and billionaires are the only candidates other than politicians I could see to blame for “propping up Section 230”.

I shouldn’t even have to say why converting the internet into a de facto oligarchy where being massive is required to even exist as an interactive website is a bad idea.

Arianity says:

Re: Re: Re:4

No one who suggests any sort of reform or repeal of 230 has ever offered any ideas on how to protect smaller services from a “death by a thousand lawsuits” situation that larger services can generally avoid.

The post I gave last time would do that, I think. I specifically addressed that point. The way to do that is to include a path to quick/summary judgement, similar to how 230 does. When 230 gets used as a defense, judges already have to do analyses on e.g. whether something is first party/third party speech, when they evaluate if 230 applies. You can basically copy that process verbatim.

We also have analogues we can compare to, like anti-SLAPP laws. Those apply only to first party speech, of course. But they are still instructive: despite anti-SLAPP laws having exceptions, smaller services aren’t dead to frivolous suits over first party speech. Why can’t we port that model over? Seems like it works.

I don’t really get this idea that somehow 230 is the only way to do this. It’s one way, sure, and it’s harder to work around by being broad (which is a big plus), but it does so at the expense of being a bit overbroad (which is a downside). There’s nothing particularly magic about it, though. Any aspect of 230 you want you could recreate in a similar law, with the caveat that it’s carefully written.

I definitely respect the argument that that’s not going to happen anytime soon, and that it’s very hard, but that’s different than impossible. Hell, people can’t even agree that there are tradeoffs in the first place.

If you could do it, you would be the first.

Is that because it can’t be done, though? In my experience, basically every article about repealing/replacing 230 is like the NYT article. That doesn’t tell us much about what is possible, it just tells us the author is a bit of an idiot. 230 seems to have the unfortunate effect of its critics falling for dumb ideas like repealing it.

It may be impossible. But if so, I’d like to know why, beyond just “good laws are hard to write”.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re: Re: Re:5

The way to do that is to include a path to quick/summary judgement, similar to how 230 does.

230 already does this. For what reason does 230 need to be replaced or “fixed” to make it do what it already does?

it does so at the expense of being a bit overbroad

I don’t see how it’s overbroad in any sense. But hey, let’s say it is. In that case, the question I’d ask is this: How would you narrow down 230 without stripping away any of the protections that smaller services rely on to avoid a “death by a thousand lawsuits” situation?

Is that because it can’t be done, though?

Possibly. 230 has become such a load-bearing law that destroying it would basically wreck the Internet as we know it. No one who opposes 230 can do so honestly⁠—and those who offer reforms or “fixes” to 230 can’t explain how those “fixes” would still protect the broader Internet. That should tell you how strong 230 is as a law…and how dishonest those who oppose it really are.

Anonymous Coward says:

Re: Re: Re:6

How would you narrow down 230 without stripping away any of the protections that smaller services rely on to avoid a “death by a thousand lawsuits” situation?

Since Arianity doesn’t seem to have answered this one, I will. I would like to see Section 230 protections not apply to third party content that becomes first party content in the following way. If you pre-approve any content before publishing it, whether that be articles/blogposts commissioned from people who are not employees of your website or comments that are pre-screened before posting, then you are deemed to have been the publisher rather than the third party who originally created it, and are thus liable for its content because you approved it.

Arianity says:

Re: Re: Re:9

There is a subtle distinction that you’re missing out on. The text of 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

A provider can never be treated as a publisher (for things 230 applies to; it exempts copyright/federal criminal law). There are no listed exceptions. That means things where you do act as a traditional publisher are still protected, regardless of things like actual knowledge. It doesn’t cover stuff where the provider acts as a first party speaker and injects its own speech, but that’s not quite the same thing as publisher. It’s a subtle distinction, but it’s very important.

However, what counts as ‘first party’ vs ‘third party’ isn’t set out in the text of 230 itself. That comes from court cases like Zeran. Stuff you listed like pre-screening/approval currently falls under publisher, not first party, for instance. Zeran: “Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.” Technically not because of the text of 230, but because of cases like Zeran interpreting it that way.

Anon is partially right that some of this comes down to cases like Zeran, and how broadly they’ve interpreted what counts as third party vs first party speech. You could in theory reach a different interpretation. That would fix things specifically related to being first party that you listed, but it wouldn’t affect third party issues like actual knowledge.

But even if that were just interpretation issues, it’d still be moot. Even if it could/should be read that way, it hasn’t been. And generally the way to fix a judicial misreading is passing a law to clarify.

Stephen T. Stone (profile) says:

Re: Re: Re:10

A provider can never be treated as a publisher (for things 230 applies to; it exempts copyright/federal criminal law). There are no listed exceptions.

BestNetTech’s automated filters often hold comments back for moderation purposes. Yes or no: Should BestNetTech be treated as the legally liable publisher of any comment that its moderation system stops from being posted⁠—and should it be treated that way only and specifically because that comment got held back by automated filters and the admins later let it through?

Anonymous Coward says:

Re: Re: Re:11

Again, not Arianity, but I will say that the case you outline should be the one exception. Or, rather, where some comments are held back by an automated filter, that should count as ordinary moderation and no liability should attach. I was speaking about the case I have often seen where all comments are held back for pre-screening, making the comments section more like the ‘Letters to the Editor’ page in a traditional newspaper. Print newspapers can be sued for defamation over the contents of the letters they choose to print, so why not websites if they choose which comments to publish in the same way?

Stephen T. Stone (profile) says:

Re: Re: Re:12

where some comments are held back by an automated filter, that should count as ordinary moderation and no liability should attach.

And yet, to let a filtered comment through, someone working on that service must personally review the comment before deciding to approve it. You speak of a “Letters to the Editor” situation, but for what reason does the “automatically filtered comments” situation deserve an exemption when both situations require someone to approve of a given bit of third-party speech before it becomes “published” speech?

Anonymous Coward says:

Re: Re: Re:13

Because a filter doesn’t hold back all comments unless set to do so, only those deemed by the software to be potentially problematic, hence why that case would fall under ordinary moderation. Where all comments are held back, that is more like the letters page of a newspaper, as I have already stated. Please stop attempting to make me move the goalposts as doing so is you moving them by proxy.

Stephen T. Stone (profile) says:

Re: Re: Re:14

Because a filter doesn’t hold back all comments unless set to do so, only those deemed by the software to be potentially problematic, hence why that case would fall under ordinary moderation.

Human editors act as filters. A “letters to the editor” page at a large newspaper like, say, the New York Times probably has several lesser editors who review letters before asking a senior editor for approval to print them. In what way is that different from an automated moderation filter in regards to the final publication of a comment, and for what reason should the automated filter process receive any more legal protections than the human filter process when they both have the same fundamental result of “a person saw this speech before publication and approved of it being published”?

Anonymous Coward says:

Re: Re: Re:15

According to the answer given by Barnaby Page on this page, newspapers, unlike some other companies such as printers, web hosting companies, and retailers, cannot realistically argue the defense of “innocent dissemination” – in which the defendant claims they were unaware of the libel and not under any obligation to check for potential libel. This is because they pre-screen all content they prepare for publication, as does anyone who holds back all comments for pre-screening. So if I was to make the claim that “Stephen T. Stone is a bigot that attacks minority groups he doesn’t like,” on a website that pre-screens all its comments, then the webmaster is very much aware of that statement and should be just as much on the hook for it as myself in the same way that a newspaper would be.

Arianity says:

Re: Re: Re:11

Yes or no: Should BestNetTech be treated as the legally liable publisher of any comment that its moderation system stops from being posted⁠—and should it be treated that way only and specifically because that comment got held back by automated filters and the admins later let it through?

Generally speaking, no. I don’t really see that as publishing in a meaningful sense. It technically fits under the current definition under Zeran, but I don’t think it should. When you’re moderating something (especially at scale), you’re generally not really actively engaging with it the way a publisher would. If I had to draw an analogy (which isn’t always perfect, but I think it fits in this case), it’s closer to being a distributor.

That said, I could see exceptions, for example if something is known harassment (thinking similar to the Grindr v Herrick case), and they’re getting approved anyway. Or similar to the TikTok death challenge case (where the plaintiff is alleging TikTok knew; that may not be true). Then I think a platform should be liable.

Again, this is one reason why I lean towards a negligence standard. I don’t think it’s negligent if you accidentally approve a harassing comment. If there’s a pattern of behavior that shows it’s negligent/intentional, yeah you should be partially liable.

This isn’t really 230 specific, though. I’d say the same thing for things that aren’t 230 protected, like copyright/federal criminal law. You shouldn’t be liable if you accidentally approve a copyrighted work. If you continue to leave it up after the copyright owner informs you it’s copyright infringing, well that’s a different story.

Stephen T. Stone (profile) says:

Re: Re: Re:12

When you’re moderating something (especially at scale), you’re generally not really actively engaging with it the way a publisher would.

I’ll repeat an earlier comment here: Human editors act as filters. A “letters to the editor” page at a large newspaper like, say, the New York Times probably has several lesser editors who review letters before asking a senior editor for approval to print them. In what way is that different from an automated moderation filter in regards to the final publication of a comment, and for what reason should the automated filter process receive any more legal protections than the human filter process when they both have the same fundamental result of “a person saw this speech before publication and approved of it being published”?

I don’t think it’s negligent if you accidentally approve a harassing comment. If there’s a pattern of behavior that shows it’s negligent/intentional, yeah you should be partially liable.

Two things.

  1. In case you forgot: 230 doesn’t apply to anything that violates federal law, so if that pattern of behavior results in criminal liability, 230 wouldn’t apply.
  2. Approving comments from a moderation queue is rarely an “accident”⁠—and as I said above, approval from such a queue is analogous to a newspaper editor giving final approval to print a letter to the editor, so if you want a negligence standard, it would likely apply to moderation queue approvals.

This isn’t really 230 specific, though.

You’ve been arguing in favor of altering 230. Saying this now really doesn’t soften the position you’ve taken.

Arianity says:

Re: Re: Re:6

230 already does this. For what reason does 230 need to be replaced or “fixed” to make it do what it already does?

230 does this regardless of things like actual knowledge, like I mentioned above. This would not. The process is largely the same, but what is protected is not the same.

We’re not trying to change the process (because that part is good and necessary for protecting against frivolous suits), but what is covered for protection. As is, 230 covers you no matter what: it applies regardless of actual knowledge, regardless of whether you act as a traditional publisher, etc. That’s the part that changes.

How would you narrow down 230 without stripping away any of the protections that smaller services rely on to avoid a “death by a thousand lawsuits” situation?

See the above linked post. It narrows it (things like actual knowledge would no longer be covered), but it keeps all the protections against frivolous cases that would target smaller services.

and those who offer reforms or “fixes” to 230 can’t explain how those “fixes” would still protect the broader Internet

What part haven’t I explained?

Anonymous Coward says:

Re: Re: Re:3

Why not?

Because it means you don’t understand what you’re talking about (again). You chronically post long responses as if anyone cares about your confused takes that exhibit low comprehension of the issues you address. It’s usually not worth responding to because you’re usually all over the place with unrelated claims or ones that miss the point of the sentences you’re quoting.

You’re like a dude in a 100 level philosophy class thinking he can beat everyone else in an argument even though he hasn’t studied the subject matter much.

Arianity says:

Re: Re: Re:4

Because it means you don’t understand what you’re talking about (again)

And how does it do that, exactly? Go ahead, I’ll wait.

It’s usually not worth responding to because you’re usually all over the place with unrelated claims

Every response I write goes through the article, and addresses it line by line when I think it makes a mistake. That’s not “all over the place”, it follows the article exactly (barring some out of order copy/pastes that I try to avoid).

Sometimes they can be long, but in my opinion that is better than letting something incorrect slip by just because some people have a short attention span. You can handle reading a couple paragraphs.

You’re like a dude in a 100 level philosophy class thinking he can beat everyone else in an argument even though he hasn’t studied the subject matter much.

Bit ironic coming from someone whose replies haven’t touched the topic at all. If I were that wrong, it should be very easy to show why, instead of wasting your time on personal insults. If you prefer, feel free to stick to a single claim.

Anonymous Coward says:

Re: Re: Re:5

Stephen’s already done a good job of explaining how your perspective is naive. I don’t need to repeat his statements. You demonstrated your ignorance of the previous attempts at repealing 230 and what they involved. You think it’s easy to include protections (basically replacing the protections that 230 already provides, and thus why it doesn’t need to be replaced) yet no one who has tried to repeal 230 is actually interested in preserving those protections.

I don’t know whether I should give you the benefit of the doubt that you’re just naive or the benefit of the doubt that you’re actually smart and just disingenuous. Neither is a good look.

Every response I write goes through the article, and addresses it line by line when I think it makes a mistake.

Going through the article line by line is obnoxious. Try being succinct. You’re not saying more by using more words.

That’s not “all over the place”, it follows the article exactly (barring some out of order copy/pastes that I try to avoid).

Your comments are all over the place, not the quotes you’re purporting to address. You’ve brought up non sequiturs in the past that didn’t actually have to do with the topic because you didn’t understand the topic. I’m not going into detail because I specifically stopped reading your long diatribes: there’s nothing useful there, and I’m not going to read through a bunch of them to provide you with examples you think you can argue with, when that’s part of the process I’m suggesting you should stop perpetuating. It’s not useful.

You can handle reading a couple paragraphs.

Sure, a few paragraphs here and there, but you post ad nauseum on just about every comment thread you post to. It’s just one eye roll after another.

Bit ironic coming from someone whose replies haven’t touched the topic at all.

You’re wrong enough that trying to tell you how wrong you are, or at least how off you are in your understanding, would be very time-consuming and my purpose in this is to get you to stop wasting your time and ours. And Stephen’s doing a good job as usual, so there’s no reason to be redundant.

If I were that wrong, it should be very easy to show why

Holy fuck no! This statement alone is all kinds of ignorant. You can be wrong in multiple ways such that a single sentence could warrant any number of different points of correction that could take paragraphs to elaborate on. And that would especially be a waste of time because you’re not going to learn from it. You’ll get defensive like you’re doing now and give a line-by-line response as if this is debate club and your honor has been sullied and must be avenged.

Arianity says:

Re: Re: Re:6

Stephen’s already done a good job of explaining how your perspective is naive. I don’t need to repeat his statements.

I’ve covered the things he’s brought up, and they were things I’ve already explicitly factored into my proposal (and had, before he brought them up, because those concerns are ones I share). So I’d be curious to know how that supposedly shows naiveté.

You demonstrated your ignorance of the previous attempts at repealing 230 and what they involved.

No, I didn’t. I’m aware of those previous attempts. This is an assumption on your part looking for a gotcha, and an unfounded one.

You think it’s easy to include protections (basically replacing the protections that 230 already provides, and thus why it doesn’t need to be replaced)

(Again) I’ve never said it’s easy. I just don’t think it’s impossible. And “basically” isn’t “the same”: I was very explicit about what parts of 230 it keeps, and which it doesn’t. It keeps those protections for things we want to keep protecting; it does not keep those protections for everything that 230 currently covers. They are not interchangeable policies. This is a reading comprehension issue on your part.

yet no one who has tried to repeal 230 is actually interested in preserving those protections.

I’m aware, and have acknowledged this in the past. I don’t see how other people screwing up is an issue on my part.

Going through the article line by line is obnoxious. Try being succinct.

I disagree. I find going line by line is the best way to reply to something and keep it coherent, especially for an article that makes multiple claims. If you have a better suggestion, I’d love to hear it. Especially considering you’re using the exact same style in your replies.

Try being succinct.

There is no way to be succinct that covers every issue being brought up (when there are multiple), as you literally said yourself in that same post. That is a function of how many different issues there are, and how complex the topic is. I’m not interested in trying to score stupid zingers. You can’t have it both ways. Either you want a fully fleshed out opinion, or you’re going to get a succinct one that skims over details (which will get complained about as “ignorant”). Pick one.

I’m not going into detail

Or you’re not going into detail because you can’t, and you’re dodging. There’s fundamentally no way to resolve that claim unless you give actual details, so you can put up or shut up.

but you post ad nauseum on just about every comment thread you post to.

There are plenty of posts I’ve done that are shorter. I do tend to post more on things I disagree with, and those topics tend to be longer/more complex, yes. Feel free to ignore them if you prefer pithy catch phrases.

You can be wrong in multiple ways such that a single sentence could warrant any number of different points of correction that could take paragraphs to elaborate on.

Weren’t you just telling me to be succinct? Thank you for literally explaining why some of my posts are longer. If you expect me to be succinct, you should be able to take your own advice. That goes both ways, unless you’re a hypocrite.

Or I could be right, and your only way to get around that is pissing contests that don’t actually ever address any points. Because that would give something that could be argued with.

I don’t always agree with Stephen (and I’ve explained, in detail, why I think he’s incorrect), but at least he tries in good faith. And as a side note, I will note that I’ve admitted being wrong before (and learned things), in the TD comments no less.

And that would especially be a waste of time because you’re not going to learn from it.

As opposed to the time you wasted writing a longer post lacking any details and full of ad hominem? There’s nothing to learn from that, so that excuse is pretty clearly bullshit. So yes, the evidence is that you can’t justify your argument.

You’re complaining about my being a “100 philosophy student”; meanwhile, you’re dodging having to justify any claim by saying “no, you’re wrong, and you’re so wrong I won’t even justify my position.” This is exactly the behavior you’re supposedly complaining about, and a garbage excuse to boot.

Anonymous Coward says:

Re: Re: Re:7

This is an example of being somewhat succinct, relative to the length of the comment I’m responding to. I’m not going to quote you. I’m going to ignore irrelevant points that you’ve made. Not everything needs to be addressed. Not every point that you’re wrong about must be corrected. Consider the cost of time and effort versus what effect you think it will have. If you’re just trying for a dopamine hit from being right, you don’t even have to hit send after writing ad nauseam. If you’re trying to get people to understand your thoughts, not alienating them with long, tedious bullshit is a good first step.

You’re less coherent the more you say, because you try to hit so many different points with divided attention. You don’t need to address every claim an article makes. You don’t need to address it line by line. You don’t need to fully flesh out an opinion (start a blog somewhere else for that!). A) No one else is as fascinated with your thoughts as you are, and B) you make your points less interesting the more investment a reader sees they must make to get through the slog of your diatribes. And not every succinct opinion will seem ignorant if you learn to express yourself articulately.

I’m not going into detail because the point wasn’t to address your incorrectness line by line. I’m not trying to prove the claim that you’re wrong. I was investing some effort now, in the hope of never having to invest more later, by pointing out the meta issue: you’re playing the wrong game in the comments. Your fixation on treating the comments like debate club is undermining your desire to be articulate. You seem to think saying more will make you seem more right, and that people will understand your thoughts if you can just explain them more, but it makes you look more desperate, like you’re trying to cover for not being articulate.

Arianity says:

Re: Re: Re:8

You don’t need to address every claim an article makes.

You’re not wrong there, but it’s a balance. I do try to reduce length (long posts stretch a reader’s attention, especially in a comments section; no one wants to read a rando, and I’m aware it’s kind of annoying, so it’s a huge cost). But at the same time, not commenting on something incorrect is essentially conceding that it’s correct and letting it spread. If the reply is longer, at least it’s there for the reader to find. It’s a trade-off: accessibility or completeness. I tend to weigh the latter heavily (arguably too much; it’s subjective).

In my recent comment history, I’ve actually been intentionally experimenting with comments that hit just a few high notes, particularly when the topic is repetitive (with a few exceptions, mainly 230/link tax). I’m not sure how I feel about it, on net. It’s definitely more accessible.

If you’re just trying for a dopamine hit from being right,

Not quite. The goal with those types of comments is to inform a reader. I don’t really care about being right when responding to an article (I’ll rub it in if a commenter is a dick, but it’s not a goal). The pithier responses would actually be better for the dopamine. I’ve considered a blog; it’s more suited to long-form content, but that has issues in terms of being available to a reader (and self-promoting would feel ick).

You’ll see hints of this: my comments to commenters, rather than to articles, are much shorter; I don’t tend to waste time responding to posts I agree with (despite agreeing with TD ~90% of the time); etc.

Anonymous Coward says:

Re: Re: Re:9

Not correcting what you perceive to be wrong isn’t conceding that it’s correct. It’s not your responsibility to correct the record. And more people are reading the article than your comments; they’re not coming here to read your take. So even if that’s your motive, you’re going to fail by virtue of it not being your blog and your comments being long and tedious. You’re not informing readers, because your reasoning isn’t very coherent in many of your comments. There’s a lot of self-importance in your response. You’re the only savior available to correct the record! Readers wouldn’t know otherwise!

Arianity says:

Re: Re: Re:10

It’s not your responsibility to correct the record.

I disagree. I think it’s something worth doing, and it’s something people should try to do when they can.

They’re not coming here to read your take.

Well, then they’re free not to read it. Problem solved! Never said anyone was forced to read it.

You’re the only savior available to correct the record!

No, you’re all free to do it, too.

Anonymous Coward says:

Re: Re: Re:11

You get really defensive and verbose when people correct your errors; seems like you’d prefer they didn’t. You got defensive when I pointed out your contradictions, and you keep thinking that adding more words and arguing with everyone who disagrees with you will make it better, while you’re just filling up the comments with noise.

This comment has been flagged by the community.

Anonymous Coward says:

Yeah, I’m pretty sure the NYT doesn’t have anybody proofreading their articles. The NYT is a has-been news agency. They’re no longer the gold standard for news. They’ve been captured and are now nothing more than a propaganda machine, just like so many of the others. It’s why a lot of the country sees them as a joke now.

This comment has been deemed insightful by the community.
mick says:

Re:

This wasn’t news, genius; it was an opinion piece. Not understanding the difference is how people form dumb opinions like yours.

The Times is the gold standard for news, even if it’s imperfect. Its opinions are another matter, particularly for tech.

In fact, most of their tech coverage has been garbage for (at least) two decades, because people who are competent in tech make far too much money to become journalists. That’s how you wind up with people like David Pogue and Kara Swisher writing about technology (and why Swisher is great at writing about the tech personalities, and terrible when it comes to the actual technology).

This comment has been flagged by the community.

RD says:

TL;DR

So in other words:

  • No free speech. At all. Anything is subject to censoring/curating/managing.
  • There is no limit or de minimis in the censoring, all speech is subject to removal
  • Zero repercussions for taking these limiting actions. The only repercussions are always justified against the speaker and they have zero expectation of any freedom of speech.

So sad to see TD fall down the fascist well. Aren’t you guys liberals? What is liberal about empowering big business against individuals to limit speech? Quite the “Radicals” you are.

Stephen T. Stone (profile) says:

Re:

No free speech. At all. Anything is subject to censoring/curating/managing.

Every website that you don’t own or operate can moderate your speech however it sees fit. That goes as much for Twitter and BestNetTech as it does for, say, 4chan.

There is no limit or de minimis in the censoring, all speech is subject to removal

The privilege to moderate speech extends precisely that far. No site is obligated, legally, morally, or ethically, to host any third-party speech. What that speech says or who says it doesn’t matter.

Zero repercussions for taking these limiting actions. The only repercussions are always justified against the speaker and they have zero expectation of any freedom of speech.

The use of a platform you don’t own is a privilege. That privilege can be revoked by the owner(s) of that platform. Barring some sort of contractual obligations between you and the owner(s), you can’t do shit to stop that from happening.

What is liberal about empowering big business against individuals to limit speech?

Nothing, really. But we live in a reality where “big business” controls many of our online outlets for speech. You can either learn to live with that or you can stop using those outlets. Just remember: Smaller platforms might not be owned by “big business”, but they’re not going to have the same reach as Twitter or Instagram.

Quite the “Radicals” you are.

It’s funny that you think BestNetTech is radicalized into any political ideology, let alone a leftist one.

This comment has been flagged by the community.

This comment has been deemed insightful by the community.
MrWilson (profile) says:

Re:

and really should stop giving unlicensed legal advice.

Have you considered that you don’t understand what “unlicensed legal advice” entails? Mike isn’t giving legal advice. You don’t have to be a lawyer to speak on legal matters. You do have to be a licensed lawyer to give specific legal advice to a person once you’ve established an attorney-client relationship with them. Mike has not done that at all. Talking about the law in general is nothing like that.

The irony is that you’d understand why you’re wrong if you were a lawyer or just a person who knew how to google simple concepts.

That One Guy (profile) says:

Re:

As far as journalists go, some of it might be a matter of trying to destroy the competition, since sites that allow third-party content to be posted in real time can pose a threat to more ‘traditional’ news outlets. As for the rest, and I imagine the majority…

A mix of those trying to destroy it because they hate how the law allows platforms to moderate ‘incorrectly’ (whether that means too much or too little depends on who you ask), armed with the (frankly, blindingly stupid) idea that getting rid of 230 will force platforms to moderate ‘correctly’, and then the gullible dupes who’ve been told so often, by liars from the first group, that 230 is a blight on all that is good that they just take it as a given at this point.

This comment has been flagged by the community.

Anonymous Coward says:

Re:

Censorship harms third parties who never hear the message.

You couldn’t argue this universally. Some messages aren’t useful. Some messages are just more noise, which could distract third parties from useful messages. Some messages could be scams. Some messages could just be lies.

The 1st Amendment recognizes your right to speak. It doesn’t recognize your right to an audience or to force anyone to listen to your message or to force anyone to host your message.

Internet search results are provided by services that aren’t obligated to share every link or message. Start your own “uncensored” search engine if that’s what you want.
