If you’re wondering what independent journalism that won’t bend to White House pressure looks like, you’re looking at it.
On Sunday night, CBS News’ newly imported “editor in chief,” Bari Weiss, killed a 60 Minutes story about Trump’s illegal deportations to a Salvadoran concentration camp—hours before it was set to air. Why? Because it might upset the White House. And because Weiss apparently doesn’t understand how television production works, she waited so long to kill it that it still got sent to foreign partners, meaning the story she tried to bury spread all over the internet anyway.
A perfect Streisand Effect, and a perfect illustration of what happens when you hand editorial control to someone more interested in protecting power than challenging it.
Supporting BestNetTech means supporting a news organization that won’t kill stories to please anyone in power. Not now, not ever.
The story itself was pretty typical 60 Minutes fare, and in some ways quite similar to a PBS Frontline piece, Surviving CECOT, that was released a few weeks earlier. The main new ground in the 60 Minutes story was that only about 3% of those Venezuelans illegally sent to CECOT actually had violent criminal records (contrary to what the administration claimed). There was also some further evidence showing that CECOT almost certainly violated the human rights of everyone in the concentration camp.
Here at BestNetTech, we’ve been covering the Trump administration’s illegal and unconscionable decision to ship these men to a concentration camp in El Salvador from the beginning.
While some have demanded that we “stick to tech,” when an administration ships human beings to a modern torture camp based on nothing more than having tattoos, that’s everyone’s story to cover. If you want to “stick to tech,” feel free to go elsewhere. And if you want White House-approved talking points and “view from nowhere” reporting, apparently CBS News is now there for you.
But if you want to know what all of this actually means and why it’s important, stick around.
Here’s the difference between us and CBS News: BestNetTech has been around for nearly 30 years precisely because we don’t have a Bari Weiss. We don’t have millions in venture capital or billionaire backers telling us what we can and can’t say. We’re nimble, we’re independent, and we answer to our readers—not to power.
But that independence comes at a cost. The price of a single 30-second TV commercial on 60 Minutes could fund BestNetTech for months. And right now, organizations that used to sponsor our work are backing away—not because they disagree with our reporting, but because they’re afraid Trump will come after them for supporting it.
So if you want independent reporting that won’t bend to White House pressure, we need your support. Back us at $100 or more between now and January 5th, and we’ll send you BestNetTech’s first-ever challenge coin—commemorating 30 years of Section 230, the law that makes comment sections and social media sharing possible, and which is under constant attack from the very people we’re covering.
Or hell, do it to spite the people who think journalism should serve power instead of challenging it. Either way works for us.
We promise we’ll put it to better use than any of the billionaire-owned and controlled media orgs out there.
Yesterday, Rep. Harriet Hageman released her bill to repeal Section 230. She’s calling it “reform,” but make no mistake—it’s a repeal, and I’ll explain why below. The law turns 30 in February, and there’s a very real chance this could be its last anniversary.
Which is why we’re running BestNetTech’s fundraising campaign right now, offering our very first commemorative coin for donations of at least $100 made before January 5th. That coin celebrates those 30 years of Section 230. But more importantly, your support funds the kind of coverage that can actually cut through the bullshit at a moment when it matters most.
Because here’s the thing: for nearly three decades, we’ve been one of the only sources to report fully and accurately on both how Section 230 works and why it’s so important. And right now, with a bipartisan coalition gunning to kill it based on myths and misinformation, that expertise is desperately needed.
Section 230 remains one of the most misunderstood laws in America, even among the people in Congress trying to destroy it. Some of that confusion is deliberate—political expediency wrapped in talking points. But much of it has calcified into “common knowledge” that’s actively wrong. The “platform or publisher” distinction that doesn’t exist in the law. The idea that 230 protects illegal content. The claim that moderation choices forfeit your protections. All myths. All dangerous. All getting repeated by people who should know better.
So below, I’m highlighting some of our essential Section 230 coverage—not as a greatest hits compilation, but as a roadmap to understanding what’s actually at stake. If you believe in the open internet, you need Section 230. And if you need Section 230, you need someone who actually understands it fighting back against the tsunami of bullshit. That’s what you’re funding when you support BestNetTech.
Let’s start with the big one. Our most popular post ever on Section 230:
Five years later, this is still the single most useful thing you can hand someone who’s confidently wrong about Section 230. It systematically demolishes every major myth—the platform/publisher nonsense, the “neutrality” requirement that doesn’t exist, the “good faith” clause people misread, all of it—in a format designed to be shared. And people do share it, constantly, because the same wrong arguments keep recycling. Consider this your foundation.
This is the piece that exposes the semantic game. Politicians love to say they’re not repealing 230, just “reforming” it. But as Cathy Gellis explains, nearly every reform proposal accomplishes the same thing: it forces websites into expensive, extended litigation to reach an outcome the law currently reaches in weeks. That’s not reform—it’s sabotage by procedure. The real benefit of 230 isn’t the outcome (most of these cases would eventually win on First Amendment grounds anyway), it’s that you get there for $100k instead of $5 million. Strip that away and you’ve effectively repealed the law for everyone except the richest companies. Which, spoiler alert, is exactly the point of most “reform” proposals.
A near-universal trait of those who show up with some crazy idea to “reform” Section 230 is that they don’t understand how the law works, despite the many explainers out there (and an entire book by Jeff Kosseff). That’s why, as Cathy’s article above details, advocates lean on the claim that they’re just “reforming” the law when their proposals would amount to an effective repeal.
Law professor James Boyle asks the more interesting question: why do smart people keep getting this so catastrophically wrong? His answer—cognitive biases, analogies to other areas of law that don’t actually apply, and the sheer difficulty of thinking clearly about speech policy—explains why the same bad ideas keep resurfacing despite being debunked repeatedly. Understanding the psychology of the confusion is almost as important as correcting it.
So many complaints about Section 230 are actually complaints about the First Amendment in disguise. People angry that a website won’t remove certain speech often blame 230, but the reality is that the First Amendment likely protects that speech anyway. Prof. Jess Miers explains why killing 230 won’t magically enable the censorship people want—it’ll just make the process more expensive and unpredictable. Some people hear that and think “great, we can rely on the First Amendment alone then!” Which brings us to:
This is the piece that clicks it all into place. Prof. Eric Goldman’s paper explains that 230 isn’t an alternative to First Amendment protection—it’s a procedural shortcut to the same outcome. Without 230, most of these lawsuits would still eventually fail on First Amendment grounds. The difference is it would cost $3-5 million in legal fees to get there instead of $100k. That $100k vs $5 million gap is the difference between an ecosystem where small companies can exist and one where only giants survive. Anyone telling you we can just rely on the First Amendment either doesn’t understand this or is deliberately trying to consolidate the internet into a handful of megacorps.
And now we get to the part where even the supposed experts fuck it up. The NY Times—the Paper of Record—has made the same basic factual error about Section 230 so many times they’ve had to run variations of this correction repeatedly:
If it feels like you can’t trust the mainstream media to accurately report on Section 230, you’re not wrong. And that’s why we do what we do at BestNetTech.
Even the tech press—outlets that should know better—regularly faceplants on this stuff. This Wired piece was so aggressively wrong it read like parody. The value here is watching us dissect not just the errors, but how someone can write thousands of words about a law while fundamentally misunderstanding what it does.
The title says it all. When former members of Congress—people who theoretically understand how laws work—produce something this catastrophically wrong, it reveals the scope of the problem. These aren’t random trolls; these are people with institutional credibility writing op-eds that influence policy. The danger here is that their ignorance carries weight.
The pattern is almost comical: someone decides 230 is bad, spends zero time understanding it, then announces a “solution” that would either accomplish nothing or catastrophically backfire. This piece is representative of dozens we’ve written, each one responding to a new flavor of the same fundamental confusion, in a way no other publication online does.
People have assigned Section 230 almost mystical properties—that it’s the reason democracy is failing, or that repealing it would somehow fix polarization, or radicalization, or misinformation. The law does none of these things, good or bad. This piece dismantles the fantasy thinking that treats 230 like a magic wand.
Amid all the doom-saying, it’s worth remembering what 230 actually enables. Jess Miers walks through five specific cases where the law protected communities, support groups, review sites, and services that improve people’s lives. Repealing 230 doesn’t just hurt Facebook—it destroys the ecosystem of smaller communities that depend on user-generated content.
Please support our continued reporting on Section 230
There are dozens more pieces in our archives, each responding to new variations of the same fundamental misunderstandings. We’ve been doing this for nearly three decades—long before it was politically fashionable to attack 230, and we’ll keep doing it as long as the law is under threat.
Because here’s what happens if we lose this fight: the internet consolidates into a handful of platforms big enough to survive the legal costs. Smaller communities die. Innovation gets strangled in the crib. And ironically, the problems people blame on 230—misinformation, radicalization, abuse—all get worse, because only the giants with the resources to over-moderate will survive, and they’ll moderate in whatever way keeps advertisers and governments happy, not in whatever way actually serves users.
Those are the stakes. Not whether Facebook thrives, but whether the next generation of internet services can even exist.
We’re committed to making sure policymakers, journalists, and anyone who cares about this stuff actually understand what they’re about to destroy. But we need support to keep doing it. If you agree that Section 230 matters, and that someone needs to keep telling the truth about it when even the NY Times can’t get basic facts right, support BestNetTech today. Consider a $230 donation and get our first commemorative coin, celebrating 30 years of a law that’s under existential threat, and help make sure it survives to see 31.
We’re still far below our goal of making BestNetTech funded primarily by reader donations. That matters because it’s the only way we can keep doing the kind of coverage that doesn’t exist anywhere else—coverage that refuses to treat this moment as normal, that digs into the details other outlets skip, and that actually understands how technology, policy, and democracy intersect.
(Quick admin notes: yes, we added a $230 donation level after someone pointed out the obvious oversight. Also, a few people have donated $99 because they don’t want a coin — we appreciate any donation of any amount, but we’ll ask everyone before shipping if they actually want the coin and you can just say no, so you can still donate any amount you want!)
For those still thinking about it, here’s what you’re actually supporting — our most important work over the last year:
This became the most-forwarded piece we published all year. It put a stake in the ground: we’re not going to pretend this is a normal administration making normal policy decisions. While mainstream outlets sanewash every lawless act as just another day in politics, we said it clearly: this is an attack on the institutions that make tech innovation and free speech possible. Democracy isn’t just background context for tech policy. It’s the foundational layer. Without it, nothing else matters.
Everyone’s stuck “working the refs” — begging governments, platforms, or billionaires to fix things. That’s learned helplessness, and it’s exactly what concentrates power in their hands. This piece broke down why decentralized tools and protocols actually matter: they’re not just technical curiosities, they’re how users take back agency. You don’t need permission from Mark Zuckerberg or Elon Musk to control your own digital life.
Tim Cushing took on critics who claimed we’d gotten “too political” by pointing out the obvious: when an administration is openly breaking laws and attacking institutions, pretending it’s normal is taking a political stance. Refusing the frame others try to impose on you is part of the job. This was Tim explaining why we won’t play along with manufactured both-sidesism when one side is actively dismantling rule of law.
A practical follow-up on taking back agency: how vibe coding tools are making it possible to build your own small, personal tools instead of waiting for some platform to maybe, possibly do what you need. Not giant apps — just software that solves your problem, built by you. Another way to make the internet work for you, rather than the other way around.
A modern update to Dorothy Thompson’s classic 1941 Harper’s essay “Who Goes Nazi?”, our “Who Goes MAGA” version seems to get discovered by some new pocket of the internet every few weeks and go viral all over again. It has certainly led to a bunch of discussions about why some people think that making life worse for a huge percentage of the population is a worthwhile price to pay in exchange for not having someone tell them they used an incorrect pronoun.
We called this before he even took the job: Brendan Carr would be the most censorial FCC chair in modern history. Turns out that was exactly right. He’s attacked comedians, threatened broadcasters, and openly weaponized government power against speech he dislikes. The mainstream press still covers him like a normal regulator making policy arguments. We don’t, because he’s not.
Cathy Gellis, one of our contributors, is fighting cancer. RFK Jr., now in charge of health agencies, is fighting cancer research. This isn’t abstract policy analysis — it’s the direct, personal cost of putting conspiracy theorists in charge of public health. Sometimes the stakes aren’t just democratic principles, they’re whether people live or die.
Bad-faith actors weaponize “free speech” rhetoric to demand debates that legitimize their nonsense. We broke down the actual grift: these aren’t genuine marketplace-of-ideas participants, they’re trolls gaming the system for attention and legitimacy. Real free speech principles don’t require you to platform and respond to every jackass who demands it.
Silicon Valley’s MAGA converts thought authoritarianism would be good for business. They’re learning the hard way that you can’t have a thriving innovation economy when you’re dismantling the rule of law and institutional stability that makes it possible. The AI bubble is hiding the rot, but the foundation is crumbling. These founders are about to get a very expensive education in why liberal democracy actually matters.
Someone walked into the CDC and opened fire while spouting the same conspiracy theories RFK Jr. spreads daily. That story got memory-holed fast, even as RFK Jr. keeps pushing the same dangerous rhetoric. Inflammatory lies have consequences. Most outlets moved on. We haven’t.
That first post is over 10,000 words breaking down how Zuckerberg fed Rogan a misleading narrative about Biden admin “pressure” while admitting repeatedly that Meta said no and felt no coercion. Meanwhile, the Trump administration has been making constant threats and demands of Zuck and he’s folded on nearly every one. We called out how the original story was nonsense, and followed up with details of just how willing Zuckerberg is to cave to Trump’s demands, while admitting he never felt compelled to do so under Biden. Seems like a big story that the mainstream media just skipped right over.
Everyone worries about AI hallucinations, but the President does the exact same thing — generating confident bullshit that sounds plausible without any regard for truth. Wind him up and he’ll fabricate entire realities as long as they make him look good. Stop analyzing his words like they contain coherent and consistent policy positions. He’s just probabilistically generating whatever sounds good in the moment.
Because we’d been tracking Carr’s censorial pattern, we immediately recognized his attempt to get Jimmy Kimmel fired for what it was: actual government censorship of speech critical of Trump. Not the imaginary censorship MAGA types complain about — real, unconstitutional use of state power to silence critics. Most outlets covered it as just another political spat.
DOGE was never about efficiency. It was about destruction. While mainstream outlets credulously covered Musk’s efficiency theater, we’ve been tracking the actual impacts and exploring whether or not anyone will ever be held accountable for the damage. We’ve been making clear what happens when you treat government like a tech startup you can “disrupt” without consequences. Most media treated it as a legitimate policy experiment.
The big-picture argument: democracy and digital infrastructure are inseparable now. If we let tech oligarchs control all our communication tools, we’ve already lost. But we haven’t lost — there are still paths to reclaiming control through decentralized systems. This post connected the dots between technical architecture and democratic survival in ways most political coverage completely misses.
Democrats aren’t just failing to oppose Trump’s authoritarianism — they’re actively collaborating on internet censorship bills that hand more power to the executive branch. It’s political malpractice dressed up as “protecting children” or whatever the excuse du jour is. Someone needs to point out that the opposition party is helping build the surveillance and censorship infrastructure they’ll inevitably face themselves.
Notice a pattern? These aren’t stories you could get anywhere else, because most outlets either don’t understand the technical details, don’t grasp the institutional stakes, or are too busy both-sidesing actual authoritarianism to call it what it is.
We understand how content moderation actually works, so we can explain why Zuck’s narrative to Rogan was bullshit. We’ve been covering the faux “censorship industrial complex” debate for years, so we recognized Carr’s threats immediately. We know how digital infrastructure shapes democracy, so we can connect those dots while political reporters are still figuring out what a protocol is.
That’s what you’re funding when you support BestNetTech.
But if you want more of these kinds of stories, we need your ongoing support. We need to prove that BestNetTech can stand alone as an independent publication. And that requires users to back it. And through January 5th, if you back us at $100 or more, you’ll get the very first (hopefully of many) commemorative coin from BestNetTech, this one honoring Section 230, which is rapidly approaching its 30th anniversary.
If you’ve been following along, you know why independent voices matter right now. The administration has been attacking institutions left and right. News orgs, law firms, philanthropic funders, universities, you name it. Many are capitulating. But not us. Someone needs to step up and keep doing this work without flinching and without compromising.
That’s where you come in.
We need your support to keep going. And because we wanted to try something different—and because we’re genuinely grateful—we’re offering everyone who backs us at $100 or more by January 5th our very first commemorative coin celebrating 30 years of Section 230.
This is an experiment inspired by Hank Green’s brilliant Crash Course yearly fundraiser, where they sell a limited edition coin. For years here at BestNetTech, supporters at any level have gotten an “Insider” badge on their BestNetTech account. Think of this coin as that badge’s physical form. If it goes well, we’ll make this an annual thing with a new design each year—so don’t miss the first edition.
Look, if you’ve been reading BestNetTech for any length of time, you know we basically never do this kind of direct ask. Most sites—even good independent ones—are drowning you in ads, paywalls, registration walls, or those guilt-trip popups that make you feel like you’re personally bankrupting a journalist every time you open an article. We’ve avoided all that because those tactics are fundamentally hostile to the community we’re trying to build. We want you here because our work matters to you, not because we made reading anything else impossible.
But we also know that independent news sites often feel they need those tactics just to build enough support to survive. We’d really prefer to stay away from those kinds of anti-community gimmicks. We want people to read and share our articles. We want you to be able to read our site without feeling like we’re just looking to make money off your attention.
The trade-off: if we’re not going to manipulate you into paying, we need to actually convince you that this work is worth supporting. And this year, that case basically makes itself.
This year has been the toughest in BestNetTech’s history. We’ve taken an uncompromising position on democracy, and we think that’s the right stance. But it’s certainly harmed the bottom line. Historically, BestNetTech itself has been a bit of a loss leader, with events, research, games, and other projects, funded by grants and sponsors, supporting our ongoing work.
This year, we had multiple organizations pull back on planned work. They were candid about why: the administration’s focus on punishing dissent meant they couldn’t risk putting a target on their backs by sponsoring us. Which, frankly, tells you everything you need to know about the current environment—and exactly why voices that won’t cower are critical right now. We made a choice to cover what matters over what’s safe. That choice has real costs.
And so we need your help. BestNetTech has long had some really cool ways to support us, even if we’re not in-your-face about promoting them. Donating any amount via the Friend of BestNetTech donation page gets you an Insider badge on your profile and comments, and if you give $100 or more before January 5th, we’ll send you this amazing coin next year!
Our Insider Shop also offers subscription options from the Crystal Ball (which gets you early access to some of our articles) to the Behind the Curtain option, our premium package of BestNetTech perks. And we’ve got a bunch of other ideas lined up for next year to bring supporters even more value that isn’t about walling off our key stories that you come to BestNetTech for.
The path forward is clear: we need to build direct support from readers to a level that makes us less dependent on sponsors who can be pressured into silence. Other independent sites have proven this model works. We just need to get there.
If you think this kind of coverage matters—especially right now—back us. Get the coin if you want. Or don’t. But help us keep doing this work without compromise.
On Tuesday, I wrote about how we were upgrading our daily email newsletter—the one we’ve had for decades but never actually promoted. Thousands of you had signed up just by spotting the little email icon. We figured more might be interested if we actually talked about it. We’d upgraded the tech, written a whole post about it, and figured people would start signing up.
And then… crickets. For two days straight, the only “new” signup was a test I’d run with my own email address. Which was, you know, not ideal. It was possible that no more people wanted to sign up and we’d maxed out on subscribers already. But… that seemed unlikely.
Then we got a few reports from people saying they tried to sign up but got error messages. Which is, generally speaking, not what you want.
It turns out that we had a little bug: users who were signed into their BestNetTech account could sign up for the newsletter just fine. But if you were signed out (as most readers are), well… you got the error. There’s some sort of QA lesson in that, and yes, we should have tested it logged out as well, but there’s always something you miss.
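For what it’s worth, the lesson generalizes. Here’s a minimal sketch (a toy example, not our actual code) of the kind of regression test that would have caught this, exercising the signup flow in both authentication states rather than only the one the developer happens to be testing from:

```python
# A self-contained toy version of the QA lesson; the handler below is a
# stand-in, not BestNetTech's actual signup code.
import pytest

def signup(email: str, logged_in: bool) -> int:
    """Toy newsletter-signup handler returning an HTTP-ish status code.

    The real bug lived in a branch like this one: a code path that only
    worked when logged_in was True. The fixed version works either way.
    """
    if not email or "@" not in email:
        return 400  # bad input
    return 200      # success, regardless of auth state

# Parametrizing over both authentication states means a logged-in-only
# code path can't quietly slip through testing again.
@pytest.mark.parametrize("logged_in", [True, False])
def test_signup_works_for_everyone(logged_in):
    assert signup("reader@example.com", logged_in=logged_in) == 200
```

The point isn’t this particular test; it’s that the test matrix should include the state most of your visitors are actually in.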
All that is to say, we’ve now fixed this, and ever since we did, the signups have been flowing in. So I thought I’d do another quick post and say that, no, really, you can sign up for the emailed daily newsletter if you want it!
Also worth noting: a bunch of people said they prefer RSS or just visiting the site directly, and that’s great too. We’ve had full-text RSS feeds for over two decades—long before most sites even understood what they were—and the site itself is always there. The point isn’t to force you into one distribution channel. If anything, we’re doing the opposite: giving you the option to consume BestNetTech however you actually want to, rather than locking you into whatever method happens to be most fashionable… or profitable. That’s increasingly rare, and it’s not an accident.
We just want you to be able to enjoy BestNetTech whichever way works best for you.
Look, we get it. Your inbox is probably drowning in newsletters right now. Every publication, influencer, and their cousin’s dog walker has suddenly discovered the revolutionary concept of… sending you emails with stuff to read. Who could have predicted that people might want content delivered directly to them?
Well, actually, we could have. Because we’ve been doing this since 1997.
Here’s the thing that’s particularly amusing about the great newsletter “revolution” of the past few years: it’s being hailed as some brilliant innovation that will save media from the tyranny of social media algorithms and platform dependency. Meanwhile, we’ve been quietly proving that exact point for almost three decades.
Back when BestNetTech started, it literally was a newsletter. Email was the primary way we distributed things for the first couple of years. But somewhere along the way, we kind of forgot to mention that we still send out a daily email with the full text of every single post. We just had a tiny email logo in the upper right-hand corner, and many thousands of you actually subscribed to get those full-text daily newsletters.
Not excerpts. Not teasers designed to drive clicks. The entire damn thing, delivered to your inbox every day.
While everyone else spent the last few years “discovering” that newsletters are the future of media (again), we just kept quietly sending ours out to all of you who had subscribed, without once mentioning its existence over the past couple of decades.
We’ve finally updated the tools we use to manage and send the newsletter, which means we now have actual flexibility to do more interesting things with it. Previously, our newsletter was essentially “here’s today’s posts in email form”—which, to be clear, is still exactly what it is today. We made sure that step one was just recreating what we already had been sending, because why fix what isn’t broken?
But now we have the infrastructure to potentially experiment with different formats, frequencies, or focus areas if that’s what you want.
The core offering remains the same: subscribe, and every day you’ll get the full text of everything we published, delivered to your inbox.
Now that we have better tools, we’re curious about what else you might want to see from our newsletter. Weekly roundups? Deep dives into specific topics? Digest emails instead of full text?
We’ve got some ideas, but we’d rather hear from you. Drop a comment below and let us know what would make a BestNetTech newsletter more valuable to you. Do you want more analysis, different formatting, or just more reminders of all the crazy stories we cover?
We’d like to hear from people who receive the current email with all our posts (are there other supplementary newsletters you’d want to sign up for as well?) and from those who aren’t interested in the current email (is there something else you would want to receive?).
For now, though, the main thing is this: if you want BestNetTech delivered to your inbox every day, you can do that now, and it’s easier than before when you had to hunt around the site for that tiny email icon.
You can subscribe from this page, or by using the widget at the bottom of this post, or via the signup form in the right-hand navigation bar at the top of any page. It’s free, it’s daily, and it’s the full text of everything we publish.
And yes, we realize the irony of writing a blog post to promote our newsletter that will then be included in our newsletter. But let’s not get too deep in the weeds on that.
Now, what other newsletter features would actually be useful to you?
FTC Chair Andrew Ferguson begged Donald Trump for his job by promising he would “end Lina Khan’s politically motivated investigations.” And, yet, one of his first orders of business upon getting the job was to… kick off a politically motivated investigation regarding “big tech censorship,” which he (falsely) claimed was potentially illegally targeting conservative speech and violating the policies and promises of these platforms.
It was an odd decision for many reasons, not the least of which is that it seemed to be discussing not just a fantasy world scenario that never existed, but even if it had ever existed, it certainly no longer did. The biggest social media platforms of the day are now all controlled by the ultra-rich who lined up (literally) behind Donald Trump and have agreed to do his bidding. ExTwitter is owned by Elon Musk, Donald Trump’s largest donor and his right-hand man in destroying the government. Mark Zuckerberg is now running content policy changes by Trump’s top advisor Stephen Miller.
If there is any “bias” in content moderation, it is very much in favor of MAGA Trump views. Which, to be clear, is their right to do under the First Amendment.
But the entire premise of the inquiry seemed to simply misunderstand nearly everything about content moderation. So, yesterday, the Copia Institute filed our comment with the FTC highlighting the myriad problems and misunderstandings that the FTC seemed to embrace with this inquiry.
The crux of our argument:
The FTC’s inquiry into “platform censorship” fundamentally misunderstands three critical realities about online expression:
First, as the Supreme Court recently affirmed in Moody v. NetChoice, government scrutiny of platform moderation decisions directly violates First Amendment protections of private editorial discretion. It would violate it even if any platform were a legitimate chokepoint for information, but such is far from the case. We live in an era of unprecedented speech abundance, where anyone can reach global audiences through countless online channels, and anyone can consume information through countless online channels. The premise of investigating “censorship” ignores this surfeit of options in how we communicate, where we’ve moved away from a world of gatekeepers who limit speech to one of intermediaries who enable it, and indeed threatens to reverse that important, speech-fostering progress.
Second, content moderation ultimately enables, rather than constrains, more speech. For all the talk of certain websites being “the modern public square,” it is the wider open internet itself that should be seen as that public square. The metaphor only works in so much as the internet can facilitate such a wide variety of online expression through differentiated and competing offerings and communities. The multitude of platforms built upon that open internet make all that possible, so long as they are free to serve as private venues that cultivate distinct communities through their editorial choices. These choices are constitutionally protected editorial judgments that allow different platforms to serve different needs and communities.
Which is why, third, government interference with platform moderation would paradoxically reduce speech opportunities by threatening the entire ecosystem of services that make online expression possible. From content hosts to payment processors to infrastructure providers, countless specialized intermediaries enable platforms like ours to serve an ever growing and changing set of communities. Regulatory scrutiny of editorial decisions would force many of these services to refuse to facilitate all sorts of lawful speech, if not shut down or stop supporting user content entirely.
As both a content creator and platform operator who relies on this complex web of intermediary services to advance our own speech interests, we see this inquiry as a threat to our own expressive freedom as well as that of countless others. It is fundamentally misguided and we urge the FTC to terminate it immediately before damaging the very same speech interests it ostensibly claims to protect.
We then go into much greater detail on all three points. You can read the whole thing if you want, but I wanted to call out a few key things. Lots of comments addressed — as ours did — the obvious First Amendment problems, but there were a few points we made that we thought were unique.
For example, the entire premise that there’s a “censorship” problem is bizarre, given just how much the internet — through its variety of private platforms — now enables and encourages speech. We’re in a golden age of speech, not some censorial hellhole:
Historically, if you wanted to express yourself beyond those in the narrow geographical vicinity around you, you were dependent on gatekeepers and had to hope that some publisher, printer, editor, record label, studio, or other media middleman would be willing to distribute your expression, promote it, and help you monetize it. Those gatekeepers ultimately allowed only a minuscule percentage of expression to reach public audiences, and an even smaller percentage of that content was successfully promoted and monetized.
The rise of the internet changed the role of intermediaries from being mostly about gatekeeping expression to being mostly about enabling it, and as a result expression has on the whole proliferated, even though the intermediaries still have the right and ability to filter what messages they facilitate. As the Supreme Court noted in the Moody majority, the fact that the new platforms “convey the lion’s share of posts” does not change their rights under the First Amendment.
It remains bizarre to me that, in this much more expansive speech universe, so many people act as though their speech is restricted. To highlight this absurdity, we point to how ridiculous it would be if this same inquiry were directed at traditional media:
This notion misunderstands the nature of content moderation and how it is no different than editorial discretion, which is constitutionally incapable of being policed, no matter how it is marketed. For instance, when Fox News used to claim that its coverage is “Fair & Balanced” everyone recognized that it would be an absurd abuse of the First Amendment for the FTC to investigate whether or not that coverage is either “fair” or “balanced” as a potential “unfair practice” because of how inherently subjective such editorial discretion is.
Consider a more direct parallel: if the New York Times decides to reject an op-ed submission, it would be constitutionally farcical for the FTC to investigate whether their editorial decisions properly align with their stated mission of “all the news that’s fit to print.” These decisions are inherently subjective editorial judgments protected by the First Amendment and not for the government to interfere with.
Also, we highlight that content moderation rules are inherently subjective and can’t be any other way. Ask multiple people how to handle specific content moderation decisions and they will all give you different answers. So many of the misunderstandings around content moderation are based on the myth that there is a single right answer to questions regarding moderation.
The same is true of content moderation. It is no different than the practices of any news media organization, in which editorial policies may be put in place, but where subjective editorial judgment calls are made every day. Online platforms must make these decisions on a scale far beyond what any traditional media outlet experiences. We have coined the eponymous “Masnick’s Impossibility Theorem” in recognition that there is never going to be an objectively “correct” way to moderate content. No matter how moderation may be intended, it simply cannot translate to perfect practice, let alone one all would agree is “perfect,” which is why the freedom to decide needs to be out of the government’s hands entirely.
We have empirically demonstrated the inherent subjectivity that inevitably informs moderation decisions through our “You Make the Call” event, where we challenged policy experts, regulators, and industry professionals to apply the same content moderation policy to multiple examples. The results of the exercise were telling: even with clearly articulated policies, experienced professionals consistently reached different conclusions about appropriate moderation actions. In every single case we presented, participants split their votes across all available options, highlighting the impossibility of “objective” content moderation.
Every person may also evaluate content against a policy differently. We have further demonstrated this tendency with two interactive online games the Copia Institute has created, allowing people to test their own abilities to do content moderation, both at the moderator level and at the level of running a trust & safety team.
We probably should have pointed out that even the FTC inherently recognizes this. After all, it was moderating and restricting access to many of the comments that came in, claiming they were “inappropriate.”
And finally, as a service that regularly relies on a large number of third-party intermediaries to host, distribute, promote, and monetize our speech, we wanted to make clear that these efforts would inevitably limit ours (and others’) ability to speak, by destroying the intermediary services we rely on.
As both a content creator and platform operator, we rely on dozens of specialized intermediary services to reach our audience: social media for community engagement, podcast and video hosts for content distribution, chat services for communication, crowdfunding for monetization, and cloud services for infrastructure. Each of these services maintains their own editorial policies that align with their unique communities and business goals.
If government agencies could second-guess these editorial decisions, the impact would be severe and immediate:
Service differentiation would become impossible. Communities focused on specific interests — from knitting to weightlifting — could no longer maintain their distinct character through specialized content policies.
Compliance costs would force smaller platforms to shut down. Even basic content hosting would require extensive legal review and documentation of every moderation decision. Not only would the direct compliance costs be ruinous for many smaller services, the uncertainty and risk of liability would lead many to decide it would not be worth the hassle to facilitate anyone’s online speech at all.
Innovation would stagnate. Entrepreneurs who might launch new specialized platforms would be deterred by the inability to shape their services around their communities’, and customers’, needs.
The result? A dramatic reduction in online speech options. Content creators like us would face fewer channels for distribution and engagement. Communities would lose their specialized spaces. And the vibrant ecosystem of online expression would collapse into a handful of generic, risk-averse platforms.
In short, it would be a disaster for speech, and lead to an information environment significantly more censorial than the world we currently live in where a private company can freely choose to enforce its own rules as makes the most sense for it.
Thousands of comments were submitted to the FTC (though, admittedly, many of them are angry screeds from people about how their conspiracy theories and threats of violence were moderated and just how unfair it all is). I have little faith that anyone at the FTC will take our comment seriously.
But they should. What they are looking to do would be an outright disaster for free speech. And, yes, that might be Ferguson’s real goal. Just like FCC Chair Brendan Carr, he may wish to use the language and trappings of “free speech advocacy” to make himself a government censor. But, we should use the tools at our disposal today to call that out, and try to prevent that kind of actual censorship from being allowed.
Here on BestNetTech, we write a lot about content moderation and even did a whole big series of content moderation case studies. However, here’s an interesting one that involves BestNetTech itself from a couple weeks ago. It’s also a perfect example of Masnick’s Impossibility Theorem in action and a reminder of how the never-ending flood of spam and scams provides cover for bad actors to sneak through abusive reports.
This case should also be a giant red flag to policymakers working on content moderation laws. If your policy assumes everyone reporting content has pure motives, it’s not just naive, it’s negligent. Bad actors will exploit any system that gives them power to take down content, full stop.
Here’s what happened:
We were off on the Friday after Thanksgiving, and I went for a nice hike away from the internet. After getting home that evening, I saw an email saying that when the sender had tried to visit BestNetTech, they received a warning from Cloudflare that the site had been designated a “phishing” site.
I logged into our Cloudflare account and found that we had been blocked for phishing.
I did have the ability to request a review:
But, this all seemed pretty damn silly. Then I remembered that a couple days earlier, I had received a very odd email from another security provider, Palo Alto Networks, telling me that it had rejected my request to reclassify BestNetTech as a phishing site. Somewhat hilariously, it said that the “previous” category was “computer and internet info” and that I had requested it be reclassified as phishing (I had not…) and instead they had “reclassified” it back to computer-and-internet info.
It seemed fairly obvious that some jackass was going around to security companies trying to get BestNetTech reclassified as a phishing site. It didn’t work with Palo Alto Networks, but somehow it did with Cloudflare. It’s unclear whether it was tried anywhere else, or how well it worked if so.
Thankfully, Cloudflare was quick to respond and to fix the issue. On top of that, the company was completely open and apologetic about how this happened. There was no hiding the ball at all. In fact, Cloudflare’s CEO Matthew Prince noted to me that this kind of thing might be worth writing about, given that it was a different kind of attack (though one he admitted the company never should have fallen for).
So how did this happen? According to Cloudflare, their trust & safety team was working through a backlog of phishing reports and bulk-processed them without realizing there was a bogus one (for BestNetTech!) in the middle.
I understand that some people in my shoes would be pretty mad about this. However, I’ve spent enough time with trust & safety folks to know that this kind of shit happens all the time. And it kind of has to. The vast, vast majority of trust & safety work is processing obvious bad stuff: spam and scams. If you’re dealing with hundreds or thousands of those at once, it’s entirely possible for a bogus report against a legitimate site to slip through the cracks. If a company actually hand-reviewed every single report, the backlog would grow larger and larger, leaving actual spam and scam sites online.
This is the impossible bind that trust & safety teams find themselves in. They obviously feel compelled to remove actual spam and scams quickly to protect users. But moving quickly sometimes means making mistakes.
We were just caught in the crossfire on this one. That’s not to say this kind of nonsense would work against everyone else: Cloudflare tries to review such reports, but sometimes mistakes happen. We see the same thing (on a smaller scale) with our spam filter here at BestNetTech. If we get 2,000 spam comments a day (which happens most days) and one legitimate comment gets caught as a false positive, we might not spot it. We actually have a separate system that tries to catch those mistakes and shunt them to a separate queue, so I think we still find the vast majority of falsely flagged comments, but I’m sure we miss some.
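To make that “separate queue” idea concrete, here’s a rough sketch of the general pattern, with made-up names and thresholds (an illustration, not our actual filter code): auto-publish the obvious ham, auto-block the obvious spam, and route the uncertain middle to a human-review queue instead of silently discarding it.

```python
# Illustrative two-stage comment triage: confident calls are automated,
# borderline cases go to humans. Names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    spam_score: float  # 0.0 = clearly ham, 1.0 = clearly spam

@dataclass
class ModerationQueues:
    published: list = field(default_factory=list)
    review: list = field(default_factory=list)    # possible false positives
    rejected: list = field(default_factory=list)

def triage(comment: Comment, queues: ModerationQueues,
           block_at: float = 0.9, review_at: float = 0.6) -> None:
    """Route a comment based on how confident the spam filter is."""
    if comment.spam_score >= block_at:
        queues.rejected.append(comment)   # confident: drop it
    elif comment.spam_score >= review_at:
        queues.review.append(comment)     # uncertain: a human decides
    else:
        queues.published.append(comment)  # confident: let it through

# With thousands of spam attempts a day, only the middle band costs
# human time, which is what makes reviewing the backlog tractable.
queues = ModerationQueues()
for c in [Comment("Buy pills now!!!", 0.97),
          Comment("Great post, thanks", 0.05),
          Comment("Check out my site...", 0.70)]:
    triage(c, queues)
print(len(queues.published), len(queues.review), len(queues.rejected))  # 1 1 1
```

The tradeoff is exactly the one described above: the wider you make the review band, the fewer mistakes you make, and the bigger the backlog grows.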
This is always going to be a challenge for trust & safety teams, and not something that some new regulation can realistically help with. If the law mandated a human review, you’d get problematic results with that too. Backlogs would grow. And even with a human, there’s no guarantee they’d have spotted this bogus request, since they’d probably be rapidly reading through hundreds of other similar reports, without the time or the capacity to go check each site carefully.
Cloudflare told me that the message they received was obvious bullshit. Someone sent them a report about BestNetTech, saying “There is malware that they spread to their visitors.” The problem was just that, in this case, no human read it. We got bulk-processed with a bunch of other reports, most of which, I’m sure, really were pushing malware or phishing.
Yes, it may be mildly annoying that visitors were warned away from BestNetTech for a few hours. But to me, it’s even more fascinating to see someone trying this attack vector and having it work, if only briefly.
It’s a reminder that bad actors will try basically anything to find weaknesses in a system. So many of the laws around content moderation around the globe, such as the DSA, seem to assume that basically everyone is an honest broker and well-meaning when it comes to moderation decisions. But, as we see here, that assumption can help bad actors wreak havoc.
As they consider new content moderation laws, policymakers need to start from the premise that some people will abuse any system that lets them take down content. Laws that assume good faith are doomed. There are inherent tradeoffs in any approach, and even with the best system, mistakes are inevitable. The DMCA teaches us that any system that enables content removal will be abused. Policymakers must factor that in from the start, and yet they almost never acknowledge this.
Anyway, I appreciate Cloudflare’s quick response, apology, and willingness to be quite open about how this happened. And thanks for giving us another interesting content moderation case study at the same time.
Just a quick post to note an amazing (to me!) milestone. At some point last week (on Wednesday basically), this site passed over two million comments. That is since the site’s commenting feature launched in 1999. If you want the quick history: we started a newsletter 27 years ago on August 23, 1997, and it became a website in the spring of 1998, but we didn’t shift to the blog format with comments until March of 1999.
So, that’s basically two million comments across 25 years. Holy shit, that’s a lot of comments. Thank you to all of you who participate, especially the ones who add value with thoughtful, insightful, and funny comments. That’s what we’re always looking for.
It’s kind of incredible to me that this has lasted this long, especially given just how much the web has changed over this time, including how commenting has changed. People don’t remember this at all, but when BestNetTech launched in the blog format (using Slashcode 0.3), it publicly posted the email address of any user who entered one in the comment form. Because in those days, that’s what people expected. The idea that people might want to keep their email addresses private, or that spam would be a problem, wasn’t even part of the thought process!
How far we’ve come.
I had thought about figuring out which comment was the actual two millionth, but that’s complicated by lots of factors, including that there is still plenty of comment spam that we miss. Just a few days before we hit the two million comment mark, I happened across an article from years ago that had about 50 comment spam messages that we had missed at the time, but which I promptly deleted. So what was the actual two millionth comment isn’t really definable, as I could very well find another cache of old spam on another day and delete them as well.
And, of course, we get somewhere on the order of 5,000 attempts at comment spam a day which are blocked before they ever get on the site. Only a very small percentage of spam gets through (though it’s still frustrating). If we were counting the number of attempted comments, including spam, then we’d be many millions higher.
Still, thank you to the community of (mostly) productive commenters who keep things interesting and keep us on our toes. The community aspects of this site are always what make it the best.
Let’s start off this post by noting that I know that some people hate anything and everything having to do with generative AI and insist that there are no acceptable uses of it. If that describes you, just skip this article. It’s not for you. Ditto for those who insist (incorrectly) that AI is nothing but a “plagiarism machine” or that training of AI systems is nothing but mass copyright infringement. I’ve discussed why all of that is wrong elsewhere.
Separately, I will agree that most uses of generative AI are absolute shit, and many are problematic. Almost every case I’ve heard of a journalistic outfit using AI is an example of the dumbest fucking way to use the technology. That’s because addle-brained finance and tech bros think that AI is a tool to replace journalists. And every time you do that, it’s going to flop, often in embarrassing ways.
However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles.
It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.
As a bit of background, let me explain how we work on articles at BestNetTech. We try to make sure that no article goes out into the world until it’s been reviewed by someone other than myself. Most of the reviews are for grammar and typos, but they also include other important editorial checks along the lines of “does everything I say actually make sense?” and “what might people get mad about?”
A while back, I started using Lex.page. Some of what I’m going to describe below is available on free accounts, and some only on the paid “Pro” accounts. I don’t know the current limits on free accounts, as I’m paying for a Pro account, and what’s included where may have changed.
Lex is an AI tool built with writers in mind. It looks kind of like a nice Google Docs. While it does have the power to do some AI-generated writing for you, almost all of its tools are designed to assist actual writers, rather than do away with their work. You can ask it to write the next paragraph for you, but I’ve never used that tool. Indeed, for the first few months I barely used any of the AI tools at all. I just like the environment as a standard writing tool.
The one feature I did use occasionally was a tool to suggest headlines for articles. If I thought my own headline ideas could be stronger, I would have it generate 10 to 15 suggestions. The tool rarely came up with one that was good enough to use directly, but it would sometimes give me an idea that I could take and adjust, which was better than my initial idea.
However, I started using the AI more often a couple of months ago. There’s a tool called “Ask Lex” where you can chat with the AI (on a Pro account, you can choose from a list of AI models to use, and I’ve found that Claude Opus seems to work the best). I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
I changed around the scorecard for my own purposes (and I keep fiddling with it, so it will likely change more soon), but the current version of the scorecard I use is as follows:
This is an article scorecard:
Does this article:
#1 have a clear opening that grabs the reader score from 0 to 3
#2 clearly explain what is happening from 0 to 3
#3 clearly address the complexities from 0 to 3
#4 lay out the strongest possible argument 0 to 3
#5 have the potential to be virally shared 0 to 3
#6 is there enough humor included in the article 0 to 3
Given these details, could you score this article and provide suggestions on how to improve ratings of 0 or 1?
I created a macro on my computer, so with a few keyboard taps, I can pop that whole thing up in the Ask Lex box and have it respond.
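(My actual macro is just a shortcut in a text-expansion tool, but if you’re curious, the same idea in Python, assuming the pyperclip clipboard library, would look something like the sketch below; the prompt text is the scorecard above, verbatim.)

```python
# Minimal clipboard "macro": store the scorecard prompt once and copy it
# on demand, ready to paste into the Ask Lex box. Illustrative only.
import pyperclip  # pip install pyperclip

SCORECARD_PROMPT = """This is an article scorecard:
Does this article:
#1 have a clear opening that grabs the reader score from 0 to 3
#2 clearly explain what is happening from 0 to 3
#3 clearly address the complexities from 0 to 3
#4 lay out the strongest possible argument 0 to 3
#5 have the potential to be virally shared 0 to 3
#6 is there enough humor included in the article 0 to 3
Given these details, could you score this article and provide suggestions on how to improve ratings of 0 or 1?"""

if __name__ == "__main__":
    pyperclip.copy(SCORECARD_PROMPT)
    print("Scorecard prompt copied; paste it into Ask Lex.")
```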
I’ll note that I don’t really care that much about the last two items on the list, but I have them in there for two reasons. First, as a kind of Van Halen brown M&M check, to make sure the AI isn’t just blowing smoke at me, but knows when to give me low ratings. Second, somewhat astoundingly, there are times (not always, but more frequently than I would have thought) when it gives really good suggestions to insert a funny line somewhere.
I’m going to demonstrate some of how it works, using the article I wrote last week about the legal disclaimer on the parody mashup of the Beach Boys singing Jay-Z’s 99 Problems. Here’s what it looked like when I ran my first draft against the scorecard:
The responses here are fairly generic, but I can dig deeper. While it said my opening was good, I wondered if it could be better, so I asked it for suggestions on a better opening. And its suggestions were good enough that I actually did rewrite much of my opening. My original opening had jumped right into talking about “There I Ruined It,” and Lex suggested some opening framing that I liked better. Of course, it also suggested a terrible headline, which I ignored. It’s rare that I take any suggestion verbatim, but this time the suggested opening was good enough that I used a pretty close version (again, this is rare, but the exercise does often make me think of better ways to rewrite the opening).
Now, I know I said above that I don’t much care about the humor, but since this story involved a funny video, I did ask if it had any suggestions on ways to make the article funnier. And… these were not good. Not good at all. So I basically ignored them all. However, sometimes it does come up with suggestions that, again, at least get me to add an amusing line or two into a piece. Even though they weren’t good for this article, I figured I should share them here so you get a sense of how it doesn’t always work well, but at least gets me to think about things.
Somewhat amusingly, when I ran this very article through the same process I’m discussing here, it suggested adding “more personality” to the piece. I asked it if it had suggestions on where, and its top suggestion was to “lean into the absurdity of some of the AI suggestions” in this part, but then concluded with an awful joke.
So, yeah, it’s suggesting I joke about how shit its jokes are. Great work, AI buddy.
I also will sometimes ask it for better headlines (as mentioned above). Lex has a built-in headline generator tool, but I’ve found that doing it as part of the “Ask Lex” conversation makes it much stronger. On this article we’re discussing, it didn’t generate any good suggestions, so I ignored them. However, I will admit that it came up with the title of the follow-up article: Universal Music’s Copyright Claim: 99 Problems And Fair Use Ain’t One. That was all Lex. My original was something much more boring.
Also, just this weekend, I added a brand new macro, which I like so far, in which I ask it to generate other headline ideas based on some criteria, and then compare those to the headline I came up with myself. I’ve only been using this one for a day or two, and didn’t use it on the fair use article last week, but here’s what it said about this very article you’re reading now:
Then my next step is to input another macro I created as a kind of gut check. I ask it to help me critique the article, highlighting which points are the weakest and could be made stronger, which points are the strongest and could be emphasized more, and which points readers might get upset about, so I can improve them. Finally, I ask it if anything is missing from the article.
Again, I don’t always agree with its suggestions (including some of the ones here), but it often makes me think carefully about the arguments I’m making and how well they stand up. I have strengthened many of the things I say based on responses from Lex that simply got me to think more carefully about what I’d written.
Occasionally I’ll ask it for other suggestions, such as a better metaphor for something. When I wrote about Allison Stanger’s bonkers congressional testimony a couple weeks ago, I was trying to think of a good example to show how silly it was that she thought Decentralized Autonomous Organizations (DAOs) were the same thing as decentralized social media. I asked Lex for suggestions on what would highlight how absurd that mistake is, and it gave me a long list of suggestions, including the one I eventually used: “saying ‘social security benefits’ when you mean ‘social media influencers’.”
Finally, after I go through all of that, I also use it for some basic editing help. Recently, Lex introduced a nice feature called “checks,” which will “check” your writing and suggest edits on a variety of factors. Personally, the only ones I’ve found useful so far are the “Grammar” check and the “Readability” check.
I’ve tried all the rest, and don’t currently find them that useful for my style of writing. The grammar check is good at catching typos and extra commas, and the readability check is pretty good at getting me to chop up some of the run-on sentences that my human editors get frustrated with.
I do want to play more with the “Audience” one, but my attempts to explain to it who the BestNetTech audience is haven’t quite worked yet. The team at Lex tells me they’re working to improve it.
There are a few more things, but that’s basically it. For me, it’s a brainstorming tool and a kind of gut check that helps me review my work and make it as strong as it can be before I hand it off to my human editors for review. I feel like I’m saving them time and effort by giving them a more complete version of each story I submit (and hopefully leaving them less frustrated about having to break up my run-on sentences).
The important parts are these: I’m not trying to replace anyone. I’m certainly not relying on it to actually write very much. And I know that I’m going to reject many of the things it suggests. It’s basically just another set of eyeballs willing to look over my work and give me feedback. And it does so quickly, and is less sick of my writing quirks.
It’s not revolutionary. It’s not changing the world. But, for me, personally, it’s been pretty powerful, just in helping me to be a better writer.
And yes, this article was reviewed with the same tools, which obviously prompted me to include one of its suggestions in that screenshot above. I’ll leave the other suggestions that it made, and I took, up to your imagination.