You want to see actual government censorship in action? And have it done by people claiming they’re doing it to stop censorship? Check out last week’s revelation (originally reported by Reuters) that the US State Department will now start denying H-1B visas for anyone who has anything to do with trust & safety, fact checking, content moderation, or mis- or disinformation research. The government is now punishing people for speech—specifically, punishing them based on its own false belief that their work constitutes censorship.
The cable, sent to all U.S. missions on December 2, orders U.S. consular officers to review resumes or LinkedIn profiles of H-1B applicants – and family members who would be traveling with them – to see if they have worked in areas that include activities such as misinformation, disinformation, content moderation, fact-checking, compliance and online safety, among others.
“If you uncover evidence an applicant was responsible for, or complicit in, censorship or attempted censorship of protected expression in the United States, you should pursue a finding that the applicant is ineligible,” under a specific article of the Immigration and Nationality Act, the cable said.
It’s like JD Vance’s “the rules were you weren’t going to fact check me” taken to a new level.
This policy punishes non-censors for supposedly doing the very thing that the White House and MAGA folks are actively doing every day. MAGA knows content moderation is necessary—they’re super eager to have it applied when it’s speech they don’t like. As we’ve recently discussed, they’ve suddenly been demanding social media companies stop foreign influence campaigns and remove anything mean about Charlie Kirk. At the same time, the White House itself is engaged in a twisted version of what it claims is fact checking and demanding that media orgs hire MAGA-friendly censors.
The hypocrisy is the point. But it’s also blatantly unconstitutional. As Carrie DeCell, senior staff attorney at the Knight First Amendment Institute at Columbia University, said in response to this news:
People who study misinformation and work on content-moderation teams aren’t engaged in ‘censorship’ — they’re engaged in activities that the First Amendment was designed to protect. This policy is incoherent and unconstitutional.
Incoherent and unconstitutional is being too kind.
The real work that trust & safety professionals do makes this policy even more perverse. As trust & safety expert (and occasional Ctrl-Alt-Speech guest host) Alice Hunsberger told (the recently defunded) NPR:
“Trust and safety is a broad practice which includes critical and life-saving work to protect children and stop CSAM [child sexual abuse material], as well as preventing fraud, scams, and sextortion. T&S workers are focused on making the internet a safer and better place, not censoring just for the sake of it,” she said. “Bad actors that target Americans come from all over the world and it’s so important to have people who understand different languages and cultures on trust and safety teams — having global workers at tech companies in [trust and safety] absolutely keeps Americans safer.”
So the administration is now barring entry to people whose work includes stopping child sexual abuse material and protecting Americans from foreign bad actors—all while claiming to oppose censorship and demanding platforms remove content about Charlie Kirk. The only way this makes sense is if you understand what the actual principle at work is: we get to control all speech, and anyone who might interfere with that control must be punished.
There are no fundamental values at work here beyond “we have power, and we’re going to abuse it to silence anyone who stands in our way.”
The rushed integration of half-cooked automation into the already broken U.S. journalism industry simply isn’t going very well. There have been countless examples of affluent media owners rushing to embrace automation and LLMs (usually to cut corners and undermine labor) with disastrous results: plenty of plagiarism, completely false headlines, and a giant, entirely avoidable mess.
As U.S. news outlets fire staffers and editors, cut corners, and endlessly compromise integrity and standards, they’re also apparently being increasingly duped by people using AI to generate bogus stories and reporting. Like this freelancer for Business Insider and Wired, who apparently tricked editors at both publications into publishing several completely fabricated stories written mostly by LLMs.
The freelancer, who called herself Margaux Blanchard, apparently doesn’t exist. She pitched both outlets on a story about a town called Gravemont, “a decommissioned mining town in rural Colorado” that was purportedly repurposed into “one of the world’s most secretive training grounds for death investigation.” Except the town in question, like the author, apparently doesn’t exist.
The Press Gazette did a little digging and found that “at least” six publications published various articles by the fake person using AI, which all kind of piggybacked on each other to give the fake journalist credibility to get future stuff published. Including one article about a couple who met in Roblox, fell in love, and got married. But neither the couple nor anyone else in the article appears to exist:
“The interviewees in the article do not seem to match up to any people about whom information is publicly available on the internet. For example the article cites “Jessica Hu, 34, an ordained officiant based in Chicago” who it says “has made a name for herself as a ‘digital celebrant,’ specialising in ceremonies across Twitch, Discord, and VRChat”. However, no such officiant appears to exist.”
This is less surprising for Business Insider (which increasingly traffics in clickbait and recently fired 25% of its staff) and more surprising for Wired, which has been doing a lot of great journalism during the second Trump term. It’s particularly embarrassing given the parade of extremely talented writers and editors who have repeatedly been shitcanned by many of these same outlets over the last decade.
Wired was at least transparent about the fuck up, publishing an article explaining how they were tricked, noting they only figured things out when the freelancer refused payment via traditional systems. But they acknowledge they didn’t adhere to traditional standards for fact checking (who has the time, apparently):
“We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor. First-time contributors to WIRED should generally get both, and editors should always have full confidence that writers are who they say they are.”
This country has taken an absolute hatchet to quality journalism, which in turn has done irreparable harm to any effort to reach reality-based consensus or have an informed electorate. The rushed integration of “AI,” usually by media owners who largely only see it as a way to cut corners and undermine labor, certainly isn’t helping. Add in the twisted financial incentives of an ad-based engagement infotainment economy, and you get exactly the sort of journalistic outcomes academics long predicted.
That, in turn, creates an environment rich for exploitation by the shittiest people imaginable, including random fraudsters, and the weird extremist zealots currently running what’s left of the United States.
In a world awash with misinformation and disinformation, those who spread and benefit from the chaos have worked hard to brand fact-checking and counterspeech as a form of censorship — and it’s a worryingly effective tactic. But there’s one type of counterspeech that is very hard to evade: mockery and satire. One person who knows that very well is Ben Collins, CEO of Global Tetrahedron, which purchased The Onion last year. This week, Ben joins us on the podcast to talk about the incredible power of mockery in the social and political landscape.
Here’s a story about being wrong. Not just regular wrong — we’re all wrong sometimes! — but spectacularly, publicly, “I’m going to double down again and again and again on this obviously false thing even after being corrected” wrong.
This week, Elon Musk stood in the Oval Office at the White House and was finally challenged about one of the Musk/Trump administration’s more creative (i.e., made up) claims: that the US was sending $50 million worth of condoms to Hamas in Gaza. (Sometimes it was $100 million. The details are flexible when you’re making things up.)
When confronted with the claim that the USAID grant in question was for a different Gaza, one in Mozambique, for anti-HIV and anti-TB programs rather than condoms, Musk responded by admitting that maybe “some of the things that I say will be incorrect and should be corrected.”
Now, for many people, this would be the moment to say “Ah yes, my mistake about that whole condom thing, I will try not to let that happen again.” But no! Musk immediately went on to insist that the $50 million was still too much for condoms. (It was not for condoms.) The funding from the US to Mozambique was actually a grant to the Elizabeth Glaser Pediatric AIDS Foundation, which has told reporters that none of it went to condoms.
BBC Verify contacted the aid agency that granted the funding – the Elizabeth Glaser Pediatric AIDS Foundation (EGPAF) – who told us that no money has been used to procure condoms.
You could claim — as Musk implies — that this is just one random mistake, but no. This is part of a pattern that has become Musk’s signature move: Find some anonymous ExTwitter troll’s creative interpretation of a government document they don’t even remotely understand, amplify it as absolute truth, and then — this is the important part — keep insisting it’s true even after actual experts explain why it’s so far beyond reality that no adult human being should believe it.
We saw this the other week with the false claims that pundit Bill Kristol was supposedly getting funds from USAID, when the reality was that he basically just used the same bank as a USAID recipient. (Yes, really. That’s the whole story. Same bank = SCANDAL!)
The admission that he “might get stuff wrong” would be commendable if we were talking about minor errors. But Musk isn’t making small, easily correctible mistakes. He’s consistently wrong about massive, consequential issues in ways that cause real, irreversible damage. It’s gotten so bad that people have started keeping a grim tally: the Elon Musk death toll. (When people start tracking the body count from your “mistakes,” maybe it’s time to reconsider your information-sharing strategy.)
But here’s the thing that makes this situation truly dangerous: It’s not just that Musk is wrong. Being wrong is, after all, a deeply human trait. We all suffer from confirmation bias — that tendency to believe things that confirm what we already think. I do it, you do it, everyone does it.
But Musk? Well, Musk has turned confirmation bias into an extreme sport, with a playing field littered with bodies. I would say it’s confirmation-bias-on-steroids, but given recent revelations, perhaps confirmation-bias-on-ketamine is more apt.
We’ve talked before about two important and related concepts that seem to bedevil Elon: the idea of Chesterton’s Fence (taking time to understand why something is where it is before you rip it out) and the concept that everything is a conspiracy theory if you don’t understand how anything works. Both of these seem to be at work here. He’s ripping out important systems without understanding why they’re there, and then — because he doesn’t understand them, nor even wish to try — insisting that they must be part of some grand conspiracy for fraud.
But here’s the incomprehensible bit that makes this whole situation move from merely absurd to genuinely tragic: There is literally no human being on Earth better positioned to get actual, detailed, thorough explanations for how things really work than Elon Musk.
He could easily get an actual briefing on literally anything. From many actual experts. Want to know how USAID actually works? He could have a dozen top experts in his office within the hour. Need to know how congressional appropriations work under the Constitution? I’m sure the foremost experts in the world would be at his door. He has unique access to all of the expertise in the world, not to mention the bank account to help pay for such expertise.
And yet, he consistently chooses to rely on sketchy anonymous accounts with names like TeslaFucks420 pushing transparently false narratives that any expert could debunk in minutes.
So here’s what someone should have asked as a follow-up to the admission about possibly being incorrect about the condoms: “Mr. Musk, you have unprecedented access to expertise and information. You could literally summon Nobel laureates to explain how government funding works. And yet you’re getting your information from anonymous ExTwitter accounts with anime avatars. Why?”
But wait, it gets better! (Or worse, depending on your perspective.) While Musk wants forgiveness for his “incorrect” statements, he’s showing absolutely zero mercy to others. Remember, this is the same Musk who, along with DOGE, is eagerly firing thousands upon thousands of federal employees for the grave sin of… [checks notes]… attending voluntary training sessions about being respectful to people from different backgrounds, as encouraged by Trump’s last education secretary, Betsy DeVos.
That’s right: Spreading demonstrably false information about government funding that affects people’s lives? “Oops, my bad, everyone makes mistakes!” Attending a workplace diversity seminar? “YOU’RE FIRED!”
But perhaps most telling is what happened after the press conference. Even after being directly corrected about the Mozambique HIV program funding, Musk doubled down. He not only continued to insist the money was for condoms but took to ExTwitter to amplify the false narrative, mockingly posting about “a LOT of condoms.”
It would indeed be a lot of condoms. If, you know, any of the money had actually gone to condoms. Which it didn’t.
This isn’t just about being wrong anymore. It’s about choosing to be wrong, repeatedly and destructively, when you have every possible resource to be right. So, it seems like the next time reporters have an opportunity to interview Elon, they should be asking things like: “Why do you keep falling for the most blatantly bullshit nonsense around? Why are you ‘incorrect’ so often about such basic stuff? Why should someone who consistently chooses conspiracy theories over readily available facts have the power to impact millions of lives?”
These seem like relevant questions. Though given the current political climate, I wouldn’t hold my breath waiting for anyone to ask them.
If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.
Indeed, this very fact check may be a good example of what I talked about regarding Zuckerberg’s decision to end fact-checking: these things are rarely as straightforward as some people think. Layers of bullshit may be presented misleadingly around a kernel of truth, and peeling back those layers is important for understanding what actually happened.
Indeed, this is my second attempt at writing this article. I killed the first version soon after it hit 10,000 words and I realized no one was going to read all that. So this is a more simplified version of what happened, which can be summarized as: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.
All the rest is noise.
I know that people who dislike Rogan dismiss him out of hand, but I actually think he’s often a good interviewer for certain kinds of conversations. He’s willing to speak to all sorts of people and even ask dumb questions, taking on the role of listeners/viewers. And that’s actually really useful (and enlightening) in certain circumstances.
Where it goes off the rails, as it did here, is when (1) nuance and detail matter, and (2) the person he is interviewing has an agenda to push, with a message he knows Rogan will eat up, knowing Rogan does not understand enough to pick apart what really happened.
This is not the first time that Zuckerberg has gone on Rogan and launched a narrative by saying things that are technically true in a manner that is misleading, likely knowing that Rogan and his fans wouldn’t understand the nuances, and would run with a misleading story.
Two and a half years ago, he went on Joe Rogan and said that the FBI had warned the company about the potential for hack-and-leak efforts by the Russians, which Rogan and a whole bunch of people, including the mainstream media, falsely interpreted as “the FBI told us to block the Hunter Biden laptop story.”
Except that’s not what he said. He was asked about the NY Post story (which Facebook never actually blocked, they only — briefly — blocked it from “trending”), and Zuckerberg very carefully worded his answer to say something that was already known, but which people not listening carefully might think revealed something new:
The background here is that the FBI came to us – some folks on our team – and was like ‘hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there’s about to be some kind of dump that’s similar to that’.
But the fact that the FBI had sent out a general warning to all of social media to be on the lookout for disinfo campaigns like that was widely known and reported on way earlier. The FBI did not comment specifically on the Hunter Biden laptop story, nor did they tell Facebook (or anyone) to take anything down.
Still, that turned into a big thing, and a bunch of folks thought it was a big revelation. In part because when Zuck told that story to Rogan, Rogan acted like it was a big reveal, because Rogan doesn’t know the background or the details or the fact that this had been widely reported. He also doesn’t realize there’s a huge difference between a general “be on the lookout” warning and a “hey, take this down!” demand, with the former being standard and the latter being likely unconstitutional.
In other words, Zuck has a history of using Rogan’s platform to spread dubious narratives, knowing that Rogan lacks the background knowledge to push back in the moment.
After that happened, I was at least open to the idea that Zuck just spoke in generalities and didn’t realize how Rogan and his audience would take what he said and run with it, believing a very misleading story. But now that he’s done it again, it seems quite likely that this is deliberate. When Zuckerberg wants to get a misleading story out to a MAGA-friendly audience, he can reliably dupe Rogan’s listeners.
Indeed, this interview was, in many ways, similar to what happened two years ago. He was relating things that were already widely known in a misleading way, and Rogan was reacting like something big was being revealed. And then the media ran with it, because they don’t know the details and nuances either.
This time, Zuckerberg talks about the supposed pressure from the Biden administration as a reason for his problematic announcement last week:
Rogan: What do you think started the pathway towards increasing censorship? Because clearly we were going in that direction for the last few years. It seemed like uh we really found out about it when Elon bought Twitter and we got the Twitter Files and when you came on here and when you were explaining the relationship with FBI where they were trying to get you to take down certain things that were true and real and certain things they tried to get you to limit the exposure to them. So it’s these kind of conversations. Like when did all that start?
So first off, note the framing of this question. It’s not accurate at all. Social media websites have always had content moderation/content policy efforts. Indeed, Facebook was historically way more aggressive than most. If you don’t moderate, your platform fills up with spam, scams, abuse, and porn.
That’s just how it works. And, indeed, Facebook in the early days was aggressively paternalistic about what was — and what was not — allowed on its site. Remember its famously prudish “no nudity” policy? Hell, there was an entire Radiolab podcast about how difficult that was to implement in practice.
So, first, calling it “censorship” is misleading, because it’s just how you handle violations of your rules, which is why moderation is always a better term for it. Rogan has never invited me on his podcast. Is that censorship? Of course not. He has rules (and standards!) for who he platforms. So does Meta. Rejecting some speech is not “censorship”, it’s just enforcing your own rules on your own private property.
Second, Rogan himself is already misrepresenting what Zuckerberg told him two years ago about the FBI. Zuck did not say that the FBI was trying to get Facebook to “take down certain things that were true and real” and “limit the exposure to them.” They only said to be on the lookout for potential attempts by foreign governments to interfere with an election, leaving it up to the platforms to decide how to handle that.
On top of that, the idea that the simple fact of how content moderation works only became public with the Twitter Files is false. The Twitter Files revealed… a whole bunch of nothing interesting that idiots have misinterpreted badly. Indeed we know this because (1) we paid attention, and (2) Elon’s own legal team admitted in court that what people were misleadingly claiming about the Twitter Files wasn’t what was actually said.
From there, Zuck starts his misleading but technically accurate-ish response:
Zuck: Yeah, well, look, I think going back to the beginning, or like I was saying, I think you start one of these if you care about giving people a voice, you know? I wasn’t too deep on our content policies for like the first 10 years of the company. It was just kind of well known across the company that, um, we were trying to give people the ability to share as much as possible.
And, issues would come up, practical issues, right? So if someone’s getting bullied, for example, we deal with that, right? We put in place systems to fight bullying, you know? If someone is saying hey um you know someone’s pirating copyrighted content on on the service, it’s like okay we’ll build controls to make it so we’ll find IP protected content.
But it was really in the last 10 years that people started pushing for like ideological-based censorship and I think it was two main events that really triggered this. In 2016 there was the election of President Trump, also coincided with basically Brexit in the EU and sort of the fragmentation of the EU. And then you know in 2020 there was COVID. And I think that those were basically these two events where for the first time we just faced this massive massive institutional pressure to basically start censoring content on ideological grounds….
So this part is fundamentally, sorta, kinda accurate, which sets up the kernel of truth around which much bullshit will be built. It’s true that Zuck didn’t pay much attention to content policies on the site early on, but it’s nonsense that it was about “giving people a voice.” That’s Zuck retconning the history of Facebook. Remember, they only added things like the Newsfeed (which was more about letting people talk) when Twitter came about and Zuck freaked out that Twitter would destroy Facebook.
Second, he then admits that the company has always moderated, though he’s wrong that it was so reactive. From quite early on (as mentioned above) the company had decently strict content policies regarding how the site was moderated. And, really, much of that was based around wanting to make sure that users had a good experience on the site. So yes, things like bullying were blocked.
But what counts as bullying is a very subjective thing, and so much of content moderation is just teams trying to tell you to stop being such a jackass.
It is true that there was pressure on Facebook to take moderation challenges more seriously starting in 2016, and (perhaps?!?) if he had actually spent more time understanding trust & safety at that time, he would have a better understanding of the issues. But he didn’t, which meant that he made a mess of things, and then tried to “fix it” with weird programs like the Oversight Board.
But it also meant that he’s never, ever been good at explaining the inherent tradeoffs in trust & safety, and how some people are always going to dislike the choices you make. A good leader of a social network understands and can explain those tradeoffs. But that’s not Zuck.
Also, and this is important, Zuckerberg’s claims about pressure to moderate on “ideological” grounds are incredibly misleading. Yes, I’m sure some people were putting pressure on him around that, but it was far from mainstream and easy to ignore. People were asking him to stop potentially dangerous misinformation that was causing harm. For example, the genocide in Myanmar. Or information around COVID that was potentially legitimately dangerous.
In other words, it was really (like so much of trust & safety) an extension of the “no bullying” rule. The same was true of protecting marginalized groups like LGBTQ+ users or on issues like Black Lives Matter. The demands from users (not the government in those cases) were about protecting more marginalized communities from harassment and bullying.
I’m going to jump ahead because Zuck and Rogan say a lot of stupid shit here, but this article will get too long if I go through all of it. So let’s jump forward a couple of minutes, to where Zuckerberg really flubs his First Amendment 101 in embarrassing ways while trying to describe how Meta chose to handle moderation of COVID misinformation.
Zuckerberg: Covid was the other big one. Where that was also very tricky because you know at the beginning it was, you know, it’s like a legitimate “public health crisis,” you know, in the beginning.
And it’s… even people who are like the most ardent First Amendment defenders… that the Supreme Court has this clear precedent, that’s like all right, you can’t yell fire in a crowded theater. There are times when if there’s an emergency your ability to speak can temporarily be curtailed in order to get an emergency under control.
So I was sympathetic to that at the beginning of Covid, it seemed like, okay you have this virus, seems like it’s killing a lot of people. I don’t know like we didn’t know at the time how dangerous it was going to be. So, at the beginning, it kind of seemed like okay we should give a little bit of deference to the government and the health authorities on how we should play this.
But when it went from, you know, two weeks to flatten the curve to… in like in the beginning it was like okay there aren’t enough masks, masks aren’t that important to, then, it’s like oh no you have to wear a mask. And you know all the, like everything, was shifting around. It just became very difficult to kind of follow.
In trying to defend Meta’s approach to COVID misinformation, Zuck manages to mangle First Amendment law in a way that’s both legally inaccurate and irrelevant to the actual issues at play.
There’s so much to unpack here. First off, he totally should have someone explain the First Amendment to him. He not only got it wrong, he even got it wrong in a way that is different than how most people get it wrong. We’ve covered the whole “fire in a crowded theater” thing so many times here on BestNetTech, so we’ll do the abbreviated version:
It’s not a “clear precedent.” It’s not a precedent at all. It was an offhand comment (in legal terms: dicta, so not precedential) in a case about jailing someone for handing out anti-war literature (something most people today would recognize as pretty clearly a First Amendment problem).
The Justice who said it, Oliver Wendell Holmes, appeared to regret it almost immediately, and in a similar case very shortly thereafter changed his tune and became a much more “ardent First Amendment defender.”
Most courts and lawyers (though there are a few holdouts) insist that whatever precedent there was in Schenck (which again, did not include that line) was effectively overruled a half century later in a different case that rejected the test in Schenck and moved to the “incitement to imminent lawless action” test.
So, quoting “fire in a crowded theater” these days is generally used as a (very bad, misguided) defense of saying “well, there’s some speech that’s so bad it’s obviously unprotected,” but without being able to explain why this particular speech is unprotected.
But Zuck isn’t even using it in that way. He seems to have missed that the whole point of the Holmes dicta (again, not precedent) was to talk about falsely yelling fire. Zuck implies that the (not actual) test is “can we restrict speech if there’s an actual fire, an actual emergency.” And, that’s also wrong.
But, the wrongness goes one layer deeper as well, because the First Amendment only applies to restrictions the government can put on speakers, not what a private entity like Meta (or the Joe Rogan Experience) can do on their own private property.
And then, even once you get past that, Zuck isn’t wrong that there was a lot of confusion about COVID and health in the early days, including lots of false information that came under the imprimatur of “official” sources, but… dude, Meta deliberately made the decision to effectively let the CDC decide what was acceptable even after many people (us included!) pointed out how stupid it was for platforms to outsource their decisions on “COVID misinfo” to government agencies which almost certainly would get stuff wrong as the science was still unclear.
But it wasn’t the White House that pressured Zuck into following the CDC position. Meta (alone among the major tech platforms) publicly declared early in the pandemic (for what it’s worth, when Trump was still President) that its approach to handling COVID misinformation would be based on “guidance” from official authorities like the CDC and WHO. Many of us felt that this was actually Meta abdicating its role and giving way too much power to government entities in the midst of an unclear scientific environment.
But for him to now blame the Biden admin is just blatantly ahistorical.
And from there, it gets worse:
Zuckerberg: This really hit… the most extreme, I’d say, during it was during the Biden Administration, when they were trying to roll out um the vaccine program and… Now I’m generally, like, pretty pro rolling out vaccines. I think on balance the vaccines are more positive than negative.
But I think that while they’re trying to push that program, they also tried to censor anyone who was basically arguing against it. And they pushed us super hard to take down things that were honestly were true. Right, I mean they they basically pushed us and and said, you know, anything that says that vaccines might have side effects, you basically need to take down.
And I was just like, well we’re not going to do that. Like, we’re clearly not going to do that.
Rogan then jumps in here to ask “who is they” but this is where he’s showing his own ignorance. The key point is the last line. Zuckerberg says he told them “we’re not going to do that… we’re clearly not going to do that.”
That’s it. That’s the ballgame.
The case law on this issue is clear: the government is allowed to try to persuade companies to do something. That’s known as using the bully pulpit. What it cannot do is coerce a company into taking action on speech. And if Zuckerberg and Meta felt totally comfortable saying “we’re not going to do that, we’re clearly not going to do that,” then end of story. They didn’t feel coerced.
Indeed, this is partly what the Murthy case last year was about. And during oral arguments, Justices Kavanaugh and Kagan (both of whom had been lawyers in the White House in previous lives) completely laughed off the idea that White House officials couldn’t call up media entities and try to convince them to do stuff, even with mean language.
Here was Justice Kavanaugh:
JUSTICE KAVANAUGH: Do you think on the anger point, I guess I had assumed, thought, experienced government press people throughout the federal government who regularly call up the media and — and berate them. Is that — I mean, is that not —
MR. FLETCHER: I — I — I don’t want
JUSTICE KAVANAUGH: — your understanding? You said the anger here was unusual. I guess I wasn’t —
MR. FLETCHER: So that —
JUSTICE KAVANAUGH: — wasn’t entirely clear on that from my own experience.
Later on, he said more:
JUSTICE KAVANAUGH: You’re speaking on behalf of the United States. Again, my experience is the United States, in all its manifestations, has regular communications with the media to talk about things they don’t like or don’t want to see or are complaining about factual inaccuracies.
Justice Kagan felt similarly:
JUSTICE KAGAN: I mean, can I just understand because it seems like an extremely expansive argument, I must say, encouraging people basically to suppress their own speech. So, like Justice Kavanaugh, I’ve had some experience encouraging press to suppress their own speech.
You just wrote a bad editorial. Here are the five reasons you shouldn’t write another one. You just wrote a story that’s filled with factual errors. Here are the 10 reasons why you shouldn’t do that again.
I mean, this happens literally thousands of times a day in the federal government.
“Literally thousands of times a day in the federal government.” What happened was not even that interesting or unique. The only issue, and the only time it creates a potential First Amendment problem, is if there is coercion.
This is why the Supreme Court rejected the argument in the Murthy case that this kind of activity was coercive and violated the First Amendment. The opinion, written by Justice Amy Coney Barrett, makes it pretty clear that the White House didn’t even apply that much pressure towards Facebook on COVID info beyond some public statements, and instead most of the communication was Facebook sending info to the government (both admin officials and the CDC) and asking for feedback.
The Supreme Court notes that Facebook changed its policies to restrict more COVID info before it had even spoken to people in the White House.
In fact, the platforms, acting independently, had strengthened their pre-existing content moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts.
All of this info is public. It was in the court case. It’s in the Supreme Court transcript of oral arguments. It’s in the ruling in the Supreme Court.
Yet Rogan acts like this is some giant bombshell story. And Zuckerberg just lets him run with it. And then, the media ran with it as well, even though it’s a total non-story. As Kagan said, attempts to persuade the media happen literally thousands of times a day.
It only violates the First Amendment if they move over into coercion, threatening retaliation for not listening. And the fact that Meta felt free to say no and didn’t change its policies makes it pretty clear this wasn’t coercion.
But, Zuckerberg now knows he’s got Rogan caught on his line and starts to play it up. Rogan first asks who was “telling you to take down things” and Zuckerberg then admits that he wasn’t actually involved in any of this:
Rogan: Who is they? Who’s telling you to take down things that talk about vaccine side effects?
Zuckerberg: It was people in the um in the Biden Administration I think it was um… you know I wasn’t involved in those conversations directly…
Ah, so you’re just relaying the information that was publicly available all along and which we already know about.
Rogan then does a pretty good job of basically explaining my Impossibility Theorem (he doesn’t call it that, of course), noting the sheer scale of Meta properties, and how most people can’t even comprehend the scale, and that mistakes are obviously going to happen. Honestly, it’s one of the better “mainstream” explanations of the impossibility of content moderation at scale.
Rogan: You’re moderating at scale that’s beyond the imagination. The number of human beings you’re moderating is fucking insane. Like what is… what’s Facebook… what how many people use it on a daily basis? Forget about how many overall. Like how many people use it regularly?
Zuck: It’s 3.2 billion people use one of our services every day
Rogan: (rolls around) That’s…!
Zuck: Yeah, it’s, no, it’s wild
Rogan: That’s more than a third of the planet! That’s so crazy and it’s almost half of Earth!
Zuck: Well on a monthly basis it is probably.
Rogan: UGGH!
But just I want I want to say that though for there’s a lot of like hypercritical people that are conspiracy theorists and think that everybody is a part of some cabal to control them. I want you to understand that, whether it’s YouTube or all these and whatever place that you think is doing something that’s awful, it’s good that you speak because this is how things get changed and this is how people find out that people are upset about content moderation and and censorship.
But moderating at scale is insane. It’s insane. What we were talking the other day about the number of videos that go up every hour on YouTube and it’s banana. It’s bananas. That’s like to try to get a human being that is reasonable, logical and objective, that’s going to analyze every video? It’s virtually impossible. It’s not possible. So you got to use a bunch of tools. You got to get a bunch of things wrong.
And you have also people reporting things. And how how much is that going to affect things there. You could have mass reporting because you have bad actors. You have some corporation that decides we’re going to attack this video cuz it’s bad for us. Get it taken down.
There’s so much going on. I just want to put that in people’s heads before we go on. Like understand the kind of numbers that we’re talking about here.
Like… that’s a decent enough explanation of the impossibility of moderating content at scale. If Zuckerberg wanted to lean into that, and point out that this impossibility and the tradeoffs it creates make all of this a subjective guessing game, where mistakes often get made and everyone has opinions, that would have been interesting.
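To put some rough numbers on that impossibility, here’s a quick back-of-the-envelope sketch. The per-user volume and the accuracy figure are purely my own illustrative assumptions (the only real number is the 3.2 billion daily users Zuckerberg cites above), but the point stands: even an automated system that’s right 99% of the time produces tens of millions of wrong calls every single day at that scale.

```python
# Back-of-the-envelope math on moderation at scale.
# Everything below is an illustrative assumption except the 3.2 billion
# daily users figure Zuckerberg cites in the interview above.

daily_users = 3.2e9     # "3.2 billion people use one of our services every day"
posts_per_user = 1      # assume, very conservatively, one moderated item per user per day
accuracy = 0.99         # assume the automated systems get 99% of decisions right

daily_decisions = daily_users * posts_per_user
daily_errors = daily_decisions * (1 - accuracy)

print(f"{daily_decisions:,.0f} moderation decisions per day")
print(f"{daily_errors:,.0f} wrong calls per day at {accuracy:.0%} accuracy")
# -> 3,200,000,000 decisions per day; 32,000,000 wrong calls per day
```

Thirty-two million mistakes a day, under generous assumptions. That is the baseline reality any honest conversation about “censorship” on these platforms has to start from.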
But he’s tossed out the line where he wants to blame the Biden administration (even though the evidence on this has already been deemed unproblematic by the Supreme Court just months ago) and he’s going to feed Rogan some more chum to create a misleading picture:
Zuckerberg: So I mean like you’re saying I mean this is… it’s so complicated this system that I could spend every minute of all of my time doing this and not actually focused on building any of the things that we’re trying to do. AI glasses, like the future of social media, all that stuff.
So I get involved in this stuff, but in general we we have a policy team. There are people who I trust there. The people are kind of working on this on a day-to-day basis. And the interactions that um that I was just referring to, I mean a lot of this is documented… I mean because uh you know Jim Jordan and the the House had this whole investigation and committee into into the the kind of government censorship around stuff like this and we produced all these documents and it’s all in the public domain…
I mean basically these people from the Biden Administration would call up our team and like scream at them and curse. And it’s like these documents are… it’s all kind of out there!
Rogan: Gah! Did you record any of those phone calls? God!
Zuckerberg: I don’t no… I don’t think… I don’t think we… but but… I think… I want listen… I mean, there are emails. The emails are published. It’s all… it’s all kind of out there and um and they’re like… and basically it just got to this point where we were like, no we’re not going to. We’re not going to take down things that are true. That’s ridiculous…
Parsing what he’s saying here is important. Again, we already established above a few important facts that Rogan doesn’t understand, and either Zuck doesn’t understand or is deliberately being coy in his explanation: (1) government actors are constantly trying to persuade media companies regarding their editorial discretion and that’s not against the law in any way, unless it crosses the line into coercion, and Zuck is (once again) admitting there was no coercion and they had no problem saying no. (2) He’s basing this not on actual firsthand knowledge but on stuff that is “all kind of out there” because “the emails are published” and “it’s all in the public domain.”
Now, because I’m not that busy creating AI glasses (though I am perhaps working on the future of social media), I actually did pay pretty close attention to what happened with those published emails and the documents in the public domain, and Zuckerberg is misrepresenting things, either on purpose or because the false narrative filtered back to him.
The reason I followed it closely is because I was worried that the Biden administration might cross the First Amendment line. This is not the case of me being a fan of the Biden administration, whose tech policies I thought were pretty bad almost across the board. The public statements that the White House made, whether from then press secretary Jen Psaki or Joe Biden himself, struck me as stupid things to say, but they did not appear to cross the First Amendment line, though they came uncomfortably close.
So I followed this case closely, in part, because if there was evidence that they crossed the line, I would be screaming from the BestNetTech rooftops about it.
But, over and over again, it became clear that while they may have walked up to the line, they didn’t seem to cross it. That’s also what the Supreme Court found in the Murthy case.
So when Zuckerberg says that there are published emails, referencing the “screaming and cursing,” I know exactly what he’s talking about. Because it was a highlight of the district court ruling that claimed the White House had violated the First Amendment (which was later overturned by the Supreme Court).
Indeed, in my write-up of that District Court ruling, I even called out the “cursing” email as an example that struck me as one of the only things that might actually be a pretty clear violation of the First Amendment. Here’s what I wrote two years ago when that ruling came out:
Most of the worst emails seemed to come from one guy, Rob Flaherty, the former “Director of Digital Strategy,” who seemed to believe his job in the White House made it fine for him to be a total jackass to the companies, constantly berating them for moderation choices he disliked.
I mean, this is just totally inappropriate for a government official to say to a private company:
Things apparently became tense between the White House and Facebook after that, culminating in Flaherty’s July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
But then I dug deeper and saw the filing where that quote actually comes from, realizing that the judge in the district court was taking it totally out of context. The ruling made it sound like Flaherty’s cursing outburst was in response to Facebook/Zuck refusing to go along with a content moderation demand.
If that were actually the case, then that would absolutely violate the First Amendment. The problem is that it’s not what happened. It was still inappropriate in general, but not an unconstitutional attack on speech.
What had happened was that Instagram had a bug that prevented the Biden account from getting more followers, and the White House was annoyed by that. Someone from Meta responded to a query, saying basically “oops, it was a bug, our bad, but it’s fixed now” and that response was forwarded to Flaherty, who acted like a total power-mad jackass with the “Are you guys fucking serious? I want an answer on what happened here and I want it today” response.
So here’s the key thing: that heated exchange had absolutely nothing to do with pressuring Facebook on its content moderation policies. That “public domain” “cursing” email is entirely about a bug that prevented the Biden account from getting more followers, and Rob throwing a bit of a shit fit about it.
As Zuck says (but notably no one on the Rogan team actually looks up), this is all “out there” in “the public domain.” Rogan didn’t look it up. It’s unclear if Zuckerberg looked it up.
But I did.
We can still find that response wholly inappropriate and asshole-ish. But it’s not because Facebook refused to take down information on vaccine side effects, as is clearly implied (and how Rogan takes it).
Indeed, Zuckerberg (again!) points out that the company’s response to requests to remove anti-vax memes was to tell the White House no:
Zuck: They wanted us to take down this meme of Leonardo DiCaprio looking at a TV talking about how 10 years from now or something um you know you’re going to see an ad that says okay if you took a Covid vaccine you’re um eligible you you know like uh for for this kind of payment like this sort of like class action lawsuit type meme.
And they’re like, “No, you have to take that down.” We just said, “No, we’re not going to take down humor and satire. We’re not going to take down things that are true.”
He then does talk about the stupid Biden “they’re killing people” comment, but leaves out the fact that Biden walked that back days later, admitting “Facebook isn’t killing people” and instead blaming people on the platform spreading misinformation and saying “that’s what I meant.”
But it didn’t change the fact that Facebook refused to take action on those accounts.
So even after he’s said multiple times that Facebook’s response to whatever comments came in from the White House was to tell them “no,” which is exactly what the Supreme Court made clear showed there was no coercion, Rogan goes on a rant as if Zuckerberg had just told him that they did, in fact, suppress the content the White House requested (something Zuck directly denied to Rogan multiple times, even right before this rant):
Rogan: Wow. [sigh] Yeah, it’s just a massive overstepping. Also, you weren’t killing people. This is the thing about all of this. It’s like they suppressed so much information about things that people should be doing regardless of whether or not you believe in the vaccine, regardless… put that aside. Metabolic health is of the utmost importance in your everyday life whether there’s a pandemic or there’s not and there’s a lot of things that you can do that can help you recover from illness.
It prevents illnesses. It makes your body more robust and healthy. It strengthens your immune system. And they were suppressing all that information and that’s just crazy. You can’t say you’re one of the good guys if you’re suppressing information that would help people recover from all kinds of diseases. Not just Covid. The flu, common cold, all sorts of different things. High doses of Vitamin C, D3 with K2 and magnesium. They were suppressing this stuff because they didn’t want people to think that you could get away with not taking a vaccine.
Dude, Zuck literally told you over and over again that they said no to the White House and didn’t suppress that content.
But Zuck doesn’t step in to correct Rogan’s misrepresentations, because he’s not here for that. He’s here to get this narrative out, and Rogan is biting hard on the narrative. Hilariously, Rogan then follows it up by claiming that the very thing Zuck just said didn’t happen (but which Rogan is chortling along about as if it did) proves the evils of “distortion of facts” and… where the hell is my irony font?
Rogan: This is a crazy overstep, but scared the shit out of a lot of people… redpilled as it were. A lot of people, because they realized like, oh, 1984 is like an instruction manual…
Zuck: Yeah, yeah.
Rogan: It’s like this is it shows you how things can go that way with wrong speak and with bizarre distortion of facts.
I mean, you would know, wouldn’t you, Joe?
From there, they pivot to a different discussion, though again, it’s Zuckerberg feeding Rogan lines about how the US ought to “protect” the US tech industry from foreign governments, rather than trying to regulate them.
A bit later on, there actually is a good discussion about the kinds of errors that are made in content moderation and why. Rogan (after spending so much time whining about the evils of censorship) suddenly turns around and says that, well, of course, Facebook should be blocking “misinformation” and “outright lies” and “propaganda”:
Rogan: But you do have to be careful about misinformation! And you have to be careful about just outright lies and propaganda complaints, or propaganda campaigns rather. And how do you differentiate?
Dude, like that’s the whole point of the challenge here. You yourself talked about the billions of people and how mistakes are made because so much of this is automated. But then you were misleadingly claiming that this info was taken down over demands from the government (which Zuckerberg clearly denied multiple times), and for you to then wrap back around to “but you gotta take down misinformation and lies and propaganda campaigns” is one hell of a swing.
But, as I said, it does lead to Zuck explaining how confidence levels matter, and how where you set those levels determines how much “bad” content gets removed, how much gets left up, and how much innocent content gets accidentally caught:
Zuck: Okay, you have some classifier that’s it’s trying to find say like drug content, right? People decide okay, it’s like the opioid epidemic is a big deal, we need to do a better job of cracking down on drugs and drug sales. Right, I don’t I don’t want people dealing drugs on our networks.
So we build a bunch of systems that basically go out and try to automate finding people who are who are dealing with dealing drugs. And then you basically have this question, which is how precise do you want to set the classifier? So do you want to make it so that the system needs to be 99% sure that someone is dealing drugs before taking them down? Do you want to to be 90% confident? 80% confident?
And then those correspond to amounts of… I guess the the statistics term would be “recall.” What percent of the bad stuff are you finding? So if you require 99% confidence then maybe you only actually end up taking down 20% of the bad content. Whereas if you reduce it and you say, okay, we’re only going to require 90% confidence now maybe you can take down 60% of the bad content.
But let’s say you say, no we really need to find everyone who’s doing this bad thing… and it doesn’t need to be as as severe as as dealing drugs. It could just be um I mean it could be any any kind of content of uh any kind of category of harmful content. You start getting to some of these classifiers might have you know 80, 85% Precision in order to get 90% of the bad stuff down.
But the problem is if you’re at, you know, 90% precision that means one out of 10 things that the classifier takes down is not actually problematic. And if you filter… if you if you kind of multiply that across the billions of people who use our services every day that is millions and millions of posts that are basically being taken down that are innocent.
And upon review we’re going to look at and be like this is ridiculous that this thing got taken down. Which, I mean, I think you’ve had that experience and we’ve talked about this for for a bunch of stuff over time.
But it really just comes down to this question of where do you want to set the classifiers so one of the things that we’re going to do is basically set them to… require more confidence. Which is this trade-off.
It’s going to mean that we will maybe take down a smaller amount of the harmful content. But it will also mean that we’ll dramatically reduce the amount of people who whose accounts were taken off for a mistake, which is just a terrible experience.
And that’s all a good and fascinating fundamental explanation of why the Masnick Impossibility Theorem remains in effect. There are always going to be different kinds of false positives and false negatives, and that’s going to always happen because of how you set the confidence levels of the classifiers.
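To make the tradeoff Zuck describes concrete, here’s a minimal sketch with made-up confidence scores and labels (nothing here reflects Meta’s actual systems): at a strict threshold you rarely take down anything innocent but miss most of the bad content; loosen the threshold and you catch more of the bad stuff while sweeping up innocent posts along the way.

```python
# Minimal sketch of the confidence-threshold tradeoff Zuck describes.
# Scores and labels are made up for illustration; a real classifier
# outputs a confidence score per post, and a threshold decides takedowns.

def precision_recall(scores, labels, threshold):
    """precision: share of takedowns that were actually bad.
    recall: share of all bad content that got taken down."""
    takedowns = [label for score, label in zip(scores, labels) if score >= threshold]
    true_positives = sum(takedowns)
    precision = true_positives / len(takedowns) if takedowns else 1.0
    recall = true_positives / sum(labels)
    return precision, recall

# 1 = actually violating content, 0 = innocent post
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
# classifier confidence that each post is violating
scores = [0.99, 0.95, 0.91, 0.70, 0.55, 0.92, 0.60, 0.30, 0.20, 0.10]

for threshold in (0.99, 0.90, 0.50):
    p, r = precision_recall(scores, labels, threshold)
    print(f"threshold {threshold:.2f}: precision {p:.0%}, recall {r:.0%}")

# threshold 0.99: precision 100%, recall 20%  (strict: misses most bad content)
# threshold 0.90: precision  75%, recall 60%  (looser: catches more, hits an innocent post)
# threshold 0.50: precision  71%, recall 100% (loosest: catches everything, more collateral)
```

Note how even these toy numbers roughly track the ones Zuck throws out: requiring 99% confidence only catches 20% of the bad content, while dropping to 90% catches 60% but starts taking down innocent posts. There is no threshold that avoids both kinds of mistakes.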
Zuck could have explained that many of the other things Rogan was whining about regarding the “suppression” of content around COVID (which, again, everyone but Rogan has admitted was based on Facebook’s own decision-making, not the US government) were quite often a similar sort of situation: the confidence levels on the classifiers may have caught information they shouldn’t have, but the company (at the time) felt the levels had to be set that way to make sure enough of the “bad” content (which Rogan himself says they should take down) got caught.
But there is no recognition of how this part of the conversation impacts the earlier conversation at all.
There’s more in there, but this post is already insanely long, so I’ll close out with this: as mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.
None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.
So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.
And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.
The real story here is that Zuckerberg caved to Trump’s threats and felt fine pushing back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” That seems particularly ironic, given the real story.
Zuckerberg knew how this would play to Rogan and Rogan’s audience, and he got exactly what he needed out of it. But the reality is that all of this is Zuck caving to threats from Trump and Trump officials, while feeling no coercion from the Biden admin. As social media continues to grapple with content moderation challenges, it would be nice if leaders like Zuckerberg were actually transparent about the real pressures they face, rather than fueling misleading narratives.
But that’s not the world we live in.
Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.
This was inevitable, ever since Donald Trump and the MAGA world freaked out when social media’s attempts to fact-check the President were deemed “censorship.” After all, how dare anyone question Dear Leader’s proclamations, even if they are demonstrably false? The reaction was swift and entirely predictable: opinion pieces from MAGA folks breathlessly declaring that “fact-checking private speech is outrageous,” and even politicians proposing laws to ban fact-checking.
In their view, the best way to protect free speech is apparently (?!?) to outlaw speech you don’t like.
With last week’s announcement by Mark Zuckerberg that Meta was ending its fact-checking program, the anti-fact-checking rhetoric hasn’t slowed down one bit.
So let’s be clear here: fact-checking is speech. Fact-checking is not censorship. It is protected by the First Amendment. Indeed, in olden times, when free speech supporters talked about the “marketplace of ideas” and how “the best response to bad speech is more speech,” they meant things like fact-checking. They meant that if someone were blathering on about utter nonsense, a regime that enabled more speech would let others come along and fact-check them.
There is no “censorship” involved in fact-checking. There is only a question of how others respond to the fact checks.
What the MAGA world is upset about is that, in some cases, private entities (who have every right to do this) would look at some fact checks and decide “maybe we shouldn’t promote utter fucking nonsense (or in some cases, potentially dangerous nonsense!) and spread it further”.
This is all still free speech. Some of it is speech about other speech and some of it is consequences from that speech.
But not one lick of it is “censorship.”
Yet this narrative has become so embedded in the MAGA world that the NY Post can write an entire article claiming that “fact-checking censors” exist without ever giving a single actual example of it happening.
There’s a really fun game that the Post Editorial Board is playing here, pretending that they’re just fine with fact-checking, unless it leads to “silencing.”
The real issue, that is, isn’t the checking, it’s the silencing.
But what “silencing” ever actually happened due to fact-checking? And when was it caused by the government (which would be necessary for it to violate the First Amendment)? The answer is none.
The piece whines about a few NY Post articles that had limited reach on Facebook, but that’s Facebook’s own free speech as well, not censorship. Also, it’s not at all clear that any of those issues had anything to do with “fact checking,” rather than a determination that the Post may have violated Facebook’s rules.
It does cite the supposed “censorship” of Trump’s NIH nominee Jay Bhattacharya for the Great Barrington Declaration:
Most notably, Dr. Jay Bhattacharya of Stanford and his colleagues from Harvard and Oxford got silenced for recommending against mass lockdowns and instead for a focus on protecting only the elderly and other highly vulnerable populations.
Except, as we called out just recently, even Bhattacharya’s colleague who helped put together the Great Barrington Declaration (and who hosted the website) has said flat out that the Facebook page was taken down not because of any judgment call by Facebook itself, but because anti-vaxxers brigaded the reporting system, claiming the Great Barrington Declaration was actually a pro-vaccination plot.
The Post goes on with this fun set of words:
Yes, the internet is packed with lies, misrepresentations and half-truths: So is all human conversation.
The only practical answer to false speech is and always has been true speech; it doesn’t stop the liars or protect all the suckers, but most people figure it out well enough.
Shutting down debate in the name of “countering disinformation” only serves the liars with power or prestige or at least the right connections.
First off, the standard saying is that the response to false speech should be “more speech,” not necessarily “true speech.” But more to the point, uh, how do you get that “true speech”? Isn’t it… fact checking? And if, as the NY Post suggests, the problem here is false speech in the fact checks, then shouldn’t the answer be more speech in response, rather than silencing the fact checkers?
I mean, their own argument isn’t even internally consistent.
They’re literally saying that we need more “truthful speech” and less “silencing of speech” while cheering on the silencing of organizations who try to provide more truthful speech. It’s a blatant contradiction.
The piece concludes with this bit of nonsense:
PolitiFact and all the rest are welcome to keep going, as long as they’re just equal voices in the conversation; we certainly mean to go on calling out what we see as lies.
Check all the facts you want, as long as you don’t get to silence anyone else.
But… that’s always been the case. Fact checkers have never had the power to “silence anyone else.” They just did their fact checking, provided more speech, and let others decide how to deal with that speech. The Post’s argument is a strawman, railing against a problem that doesn’t actually exist.
In the end, the Post’s piece inadvertently makes the case for more fact-checking, not less. In a world awash with misinformation, we need credible voices providing additional context and correcting the record. That’s the very essence of the free marketplace of ideas.
The Post seems to want a “free marketplace of ideas” where only ideas they agree with are allowed to be expressed. That’s not how free speech works.
Trying to silence voices calling out misinformation in the name of free speech is the height of hypocrisy. The Post should take its own advice – if you disagree with a fact check, respond with more speech, not by celebrating the active silencing of fact checkers you disagree with.
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” the framing was obviously utter nonsense. Zuckerberg was quite clearly in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. Trump’s threat to jail Zuckerberg over the made-up myth that Zuckerberg helped get Biden elected only seemed to cement how much the GOP’s non-stop scapegoating had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
Some of the negative reactions to Zuckerberg’s announcement video are a bit crazy, as I doubt the changes are going to have that big of an impact. Some of the changes may even be sensible. But let’s break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all, for me, a great demonstration of Masnick’s Impossibility Theorem of content moderation at scale, and of how mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years it felt that Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead responding reactively to the controversy of the day.
And, in doing so, it’s no surprise that as the complexity of its setup got worse and worse, its systems kept banning people for very stupid reasons.
It actually is a good idea to try to fix that. And if part of the plan is to be more cautious in issuing bans, that seems fairly reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
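For what it’s worth, here is how I read the two-tier policy Zuckerberg describes, sketched as code. To be clear, this is my speculative reading, not Meta’s actual pipeline; the category names, threshold, and routing function are all hypothetical:

```python
from dataclasses import dataclass

HIGH_SEVERITY = {"csam", "terrorism", "fraud"}  # assumed examples of "high-severity"
PROACTIVE_THRESHOLD = 0.95  # "much higher confidence" before any automatic action

@dataclass
class Post:
    text: str
    predicted_category: str  # the classifier's label for the post
    confidence: float        # the classifier's confidence in that label
    user_reported: bool = False

def route_post(post):
    """Route a post under the announced policy, as I read it."""
    # Tier 1: proactive filters keep running, but only for high-severity
    # categories and only when the classifier is very confident.
    if post.predicted_category in HIGH_SEVERITY and post.confidence >= PROACTIVE_THRESHOLD:
        return "auto-action"
    # Tier 2: lower-severity violations wait for someone to report them.
    if post.user_reported:
        return "human-review queue"
    # Everything else stays up, including real violations the old filters
    # would have caught proactively.
    return "leave up"

posts = [
    Post("...", "fraud", 0.98),                          # confident and high severity
    Post("...", "fraud", 0.80),                          # high severity, not confident enough
    Post("...", "harassment", 0.99),                     # confident, low severity, unreported
    Post("...", "harassment", 0.60, user_reported=True), # reported, so a human looks
]
for p in posts:
    print(f"{p.predicted_category} @ {p.confidence}: {route_post(p)}")
```

The “leave up” branch is where the extra “bad stuff” Zuckerberg admits they’ll now catch less of ends up, and the higher threshold is what shrinks the pile of wrongly removed posts and accounts.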
Some of the other policy changes also don’t seem all that bad. We’ve been mocking Meta’s “we’re downplaying political content” stance from the last few years as inherently stupid, so it’s nice in some ways to see them backing off of that (though we’ll get to the timing and framing of this decision in the later sections of this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan”, because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is rarely a clean “this is true” or “this is false”; it is a matter of degrees, of claims that are accurate as far as they go but don’t cover all of the issues or deal with the overall reality.
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased.”)
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid such that it also gains a little political capital with the incoming GOP team.
On top of that, adding in a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it’s never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually fairly reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is running the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday really does much towards aiding free speech.
I guess some people forget this, but in the earlier days, Facebook was way more aggressive than sites like Twitter in terms of what it would not allow. It very famously had a no nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, the announcement is an act of Meta censoring its own speech. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” Literally all fact-checking does is add context and additional information, not remove content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech”— exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using censorship as a description for private property owners enforcing their own rules completely devalues the actual issue with censorship, in which it is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking their rules, nor should it be called censorship when a private company chooses to block or ban your content for violating its rules (even if you argue the rules are bad or were improperly enforced.)
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction:
Q: Do you think Zuckerberg is responding to the threats you've made to him in the past?
TRUMP: Probably. Yeah. Probably.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this, quoted straight from Zuck’s Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta employees who work in trust & safety and who want access to certain forms of healthcare, or net neutrality, or other policies that are super popular among voters across the political spectrum, but which Texas has decided are inherently not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It is about the one spot in the world where there are no rules at all over what can be said. Why include that? Things could be said on the floor of Congress that would violate the law if posted on Meta’s platforms.
Also, TV stations literally have restrictions, known as “standards and practices,” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these is a relevant benchmark for social media. What jackass thought that pointing to (1) the least restricted venue for speech and (2) a far more restrictive one made this a reasonable argument?
In the end, the reality here is that nothing announced this week will really change all that much for most users. Most users don’t run into content moderation all that often. Fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very much engaged in politics and a completely spineless pushover for modern Trumpist politicians.
When Donald Trump announced that he was appointing current FCC Commissioner Brendan Carr to be the next chair of the FCC, it was no surprise. Nor was it a surprise that Trump tried to play up that Carr was a “warrior for free speech.”
Commissioner Carr is a warrior for Free Speech, and has fought against the regulatory Lawfare that has stifled Americans’ Freedoms, and held back our Economy.
However, this is all projection, as with so much in the upcoming Trump administration. In reality, Brendan Carr may be the biggest threat to free speech in our government in a long while. And he’s not being shy about it.
Carr is abusing the power of his position to pressure companies to censor speech he disagrees with, all while cloaking it in the language of “free speech.” As an FCC commissioner, he has significant regulatory authority over broadcasters, and he’s wielding that power to push his preferred political agenda. He has no real authority over internet companies, but he’s pretending he does. He’s threatening broadcasters and social media companies alike, telling them there will be consequences if they don’t toe his line.
In this post, we’ll expose the details of Carr’s censorial agenda and the deceptive tactics he’s using to achieve it. Carr may claim to be a “free speech warrior,” but his actions show him to be the exact opposite. He is, as the Verge’s Nilay Patel aptly put it, “the most direct and sustained threat to the First Amendment and the freedom of the press any of us will ever experience.”
Threats to pull licenses and the “equal time rule”
First, we’ll detail an “easy” example around broadcast licenses, before getting into the much more thorny areas around content moderation and fact-checking. Carr has repeatedly claimed that he supports investigating and potentially pulling NBC’s “license” for having Kamala Harris show up on Saturday Night Live the weekend before the election. He claims that this violates the FCC’s “equal time rule.”
What he’s really doing: Telling broadcast channels not to platform candidates he doesn’t like or they will face expensive “investigations” and threats.
What things are factually wrong: NBC has no broadcast license to pull. Broadcast licenses are held by local affiliates who contract with NBC. NBC does own twelve affiliates, but the vast majority of NBC affiliates (223 of them) are not owned by NBC. Carr knows this. But he’s seen Donald Trump argue that NBC, CBS, and ABC should all have their (non-existent) licenses pulled at various times, and so he’s claiming the same thing.
Also, NBC did not violate the equal time rule, because it gave Donald Trump an equivalent amount of free time on its affiliates following a NASCAR race the next day. It also gave free time to Virginia Senate candidate Hung Cao, because his opponent Tim Kaine also appeared on SNL that night (though only to mock how forgettable Tim Kaine is).
Notably, in both cases, Trump and Cao got to deliver their own words in the form of ads to audiences. In contrast, both Harris and Kaine delivered lines scripted for them by SNL’s writers to be a part of a joke. So even if you want to be specific, it sounds like the GOP candidates got a much better deal.
Why this is all nonsense: First off, Republicans like Carr historically have loathed the equal time rule. It’s an offshoot of the Fairness Doctrine, a problematic concept that Republicans have long complained was unconstitutional, and which they supported killing when President Reagan effectively did so.
The equal time rule was created to ensure fair treatment of political candidates, not as a tool for government officials to bully the media. Carr is twisting a narrow regulation far beyond its intended purpose.
There is a strong belief, pushed hardest in the GOP circles Carr inhabits, that the equal time rule would be found unconstitutional should it be challenged again and reach the Supreme Court. In the Red Lion case, the Supreme Court blessed such restrictions only for broadcast spectrum, and solely because of its scarcity.
Where there are substantially more individuals who want to broadcast than there are frequencies to allocate, it is idle to posit an unabridgeable First Amendment right to broadcast comparable to the right of every individual to speak, write, or publish. If 100 persons want broadcast licenses but there are only 10 frequencies to allocate, all of them may have the same “right” to a license; but if there is to be any effective communication by radio, only a few can be licensed and the rest must be barred from the airwaves. It would be strange if the First Amendment, aimed at protecting and furthering communications, prevented the Government from making radio communication possible by requiring licenses to broadcast and by limiting the number of licenses so as not to overcrowd the spectrum.
But in a world where everyone can reach anyone via the internet, this argument is likely to hold a lot less weight.
Threatening to revoke broadcast licenses over unfavorable coverage is a blatant First Amendment violation. The government cannot use its licensing power to control or punish the speech of private actors. Carr surely knows this but doesn’t seem to care.
His threats are likely to have a chilling effect, with broadcasters self-censoring to avoid his ire. This is textbook government overreach and abuse of power to restrict free speech.
The bottom line: The clear message from Carr here is that if any TV station platforms speech he disagrees with, he will abuse his power as FCC chair to demand costly concessions from them in order to help those he supports. What this will likely mean in reality is self-censorship by broadcasters, avoiding platforming anyone Carr deems to be a problem to avoid having to deal with threats.
So, without doing anything directly (and he has little real power here at all), Carr gets to use a rule at the FCC which he knows is likely unconstitutional to get broadcast TV networks to choose to avoid platforming Democrats.
It’s pure censorship.
Threats to social media companies for fact-checking and attacks on Section 230
This one is even more complicated, but also even more dangerous. Right around the time when Trump announced Carr, Carr was crowing about a letter he had sent to Meta, Google, Apple, and Microsoft accusing them of “censorship” for partnering with NewsGuard, a company that gives its opinion about the trustworthiness of various news organizations.
In the letter, he argues that any content moderation is a form of “censorship” that violates the First Amendment rights of Americans. He claims that partnering with NewsGuard is evidence of such censorship, that Section 230 requires moderation be done “in good faith,” and that using NewsGuard somehow fails that good faith requirement.
Recently, he resummarized these points (in an even more misleading fashion) in a reply tweet to RFK Jr’s former running mate, Nicole Shanahan.
What he’s really doing: Telling internet companies that if they moderate things in a way he doesn’t like, he will use the power of the state to punish them. This includes fact-checking things in a way he dislikes, or calling out problematic sources in a way he dislikes.
What things are factually wrong: Oh so much. First off, the FCC has no authority over Section 230. He is pretending it does because one of his staffers during the last Trump administration conspired with some other Section 230 haters to get the admin to “ask” the FCC to see if it could do a rulemaking on 230.
Congress was pretty clear when it passed Section 230 that its direct intent was that the FCC not have authority over internet companies. Indeed, when Rep. Chris Cox introduced what became Section 230, it explicitly called out that the FCC shall not be authorized to regulate internet content services:
Declares that nothing in this Act shall be construed to authorize Federal Communications Commission regulation of the content of such services.
Cox made this even clearer during the floor debate on the bill, saying:
It will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government
The legislative history makes Congress’ intent crystal clear — they did not want the government regulating online speech. Yet that’s exactly what Carr is trying to do, in direct contradiction of the law.
Second, Section 230 does not require “good faith,” as Carr claims. The important parts of Section 230 are (c)(1), which establishes that no internet service or user of a service can be held liable as the publisher or speaker of someone else’s content, and (c)(2), which removes liability for the good faith moderation of content that the sites (not the government) find “otherwise objectionable.”
Courts have long established that (c)(1), which contains no “good faith” language at all, is the operative clause protecting moderation decisions. Carr’s misleading quoting of (c)(2) ignores that (c)(1) already protects most moderation without any “good faith” requirement.
And even if “good faith” did somehow apply to moderation efforts, the rare cases where it has actually mattered, like the Malwarebytes case, involved a court saying that moderation done for anti-competitive purposes might not be in “good faith.”
Also, the idea that relying on NewsGuard “puts your 230 protections in jeopardy” is nonsense. Even ignoring everything above, there is no transitive property here: you can’t plausibly argue that NewsGuard’s opinions are “in bad faith,” and even if you could, that bad faith would not transfer to the platforms’ own moderation, let alone somehow “remove” their 230 protections on top of that. That’s literally not how any of this works.
Finally, and perhaps most importantly, private companies making editorial decisions about what content they allow on their own private property are not (and cannot be!) taking away anyone’s First Amendment rights. The First Amendment restricts the government; it does not stop private property owners from making their own editorial decisions.
Why this is all nonsense: Earlier this year, we discussed the GOP’s weird infatuation with NewsGuard. Remember, NewsGuard was started by former Wall Street Journal publisher L. Gordon Crovitz, who is a well-known conservative voice. I repeat: he was the WSJ’s publisher for many years and wrote column after column in support of standard GOP talking points.
But, more importantly, all that NewsGuard does is give its opinion. It is literally using its free speech rights to express an opinion on the quality and trustworthiness of various news organizations. What the GOP is mad about is that sometimes (though not always!) it has rated some of the GOP’s preferred news sources as untrustworthy.
And apparently that kind of speech must be punished.
But anyone is free to agree or disagree with NewsGuard’s ranking system (I did so quite a lot in my last post on them, and some people at NewsGuard got upset with me about it, but that’s just everyone expressing their opinions).
You know? The marketplace of ideas.
What Carr is arguing here is (1) that NewsGuard’s opinions are somehow illegal, (2) that relying on them violates the free speech of Americans and (3) that companies that do so could then lose their Section 230 protections.
All of that is bullshit. NewsGuard’s opinions are opinions. They are speech. Whether or not social media companies (or anyone else) rely on them is also their free speech. I think Carr’s opinions are utter nonsense, and I can back that up with an explanation of why. And that’s all free speech.
But Carr is the one arguing that NewsGuard’s speech is somehow illegal because it sometimes calls out news orgs he likes as being full of shit. He’s literally trying to either destroy NewsGuard for expressing an opinion (which raises First Amendment questions on its own) or pressuring big tech companies to stop using NewsGuard as part of their processes for determining how trustworthy certain news is.
But, again, companies get to use their own First Amendment rights of association to determine if they wish to use NewsGuard as part of their editorial discretion or not.
Threatening to strip tech companies of their Section 230 protections for using NewsGuard is an attempt to step in and punish them for exercising their own editorial discretion.
Indeed, in the Murthy v. Missouri case (where Carr was very much on the side of Missouri), the states directly claimed that President Biden threatening to remove Section 230 protections was evidence of government coercion which violated the First Amendment. Yet, here, Carr sees no problem doing the exact same thing.
And that’s not even getting into how little authority the FCC actually has here. I pointed out above that with the history of 230, it was clear that it was intended to make sure the FCC had no authority over the internet (which is also supported by the Supreme Court’s Red Lion ruling regarding scarcity and abundance). But also just this year, the Supreme Court’s decision in Loper Bright made it even more abundantly clear that the FCC has no authority to issue rulemakings on things not explicitly given to them by Congress.
Last week, even the folks at the Federalist Society called out Carr’s nonsense on this point. In a piece, Lawrence Spiwak explains that after the Loper Bright ruling (which took away Chevron deference), the FCC clearly has no authority at all to rule on Section 230.
Perhaps the biggest impediment to any effort to have the FCC write definitive rules about the meaning of Section 230 is the Supreme Court’s rejection of Chevron last term in Loper Bright Enterprises v. Raimondo. There, the Court made it crystal clear that it is the exclusive role of the courts—and not the administrative state—to interpret statutes. As the Court observed, “even when an ambiguity happens to implicate a technical matter, it does not follow that Congress has taken the power to authoritatively interpret the statute from the courts and given it to [an administrative] agency. Congress expects courts to handle technical statutory questions . . . .” The Court’s rationale was straightforward:
Courts interpret statutes, no matter the context, based on the traditional tools of statutory construction, not individual policy preferences. Indeed, the Framers crafted the Constitution to ensure that federal judges could exercise judgment free from the influence of the political branches. They were to construe the law with “[c]lear heads . . . and honest hearts,” not with an eye to policy preferences that had not made it into the statute.
Thus, the message of Loper Bright to the FCC is clear: regardless of your political desires, interpreting Section 230 is not your job. Loper Bright, in plain terms, put the kibosh on Johnson’s argument that it is the FCC’s job “to determine whether courts have appropriately interpreted its proper scope.”
There’s even more here, but this piece is getting long enough.
The bottom line: Again, Carr is misleading people with layer upon layer of nonsense. But all he’s really doing is threatening to use the power of the government to punish companies for First Amendment-protected expression he dislikes.
The goal, again, is to get these companies to censor in advance. It’s to get them to agree not to moderate or even fact-check content he supports, taking away the free speech rights of those who would do so.
Carr is smart and he knows exactly what he’s doing here. He is couching his extreme censorial desires in the language of free speech, knowing that most people won’t know enough or understand the details and nuances to recognize what he’s doing.
But he is rushing in to be America’s top censor, and he’s the biggest threat to the First Amendment we’ve seen in quite a long time.
In the bizarro world of MAGA politics, up is down, black is white, and apparently, fact-checking is now a form of election interference.
It is no secret that people across the political spectrum have a very warped view of what free speech or the First Amendment means. But I am particularly perplexed by the increasingly common view (which, tragically, also runs across the spectrum) that fact-checking is an attack on free speech and should be punished. It feels ridiculous to even have to say this, but fact-checking is not just protected speech, it is the proverbial “more speech” that pretend defenders of the First Amendment always claim is the only possible answer to speech you disagree with.
Anyway, last week you might have heard there was a Presidential debate between Kamala Harris and Donald Trump held on ABC. The CNN debate earlier this year between Trump and Biden included a vow from the moderators that they would do no fact-checking, which resulted in those moderators being roundly criticized.
On the other hand, ABC chose a few narrow points, where the lies were incredibly egregious, to offer simple fact-checks of blatantly false claims. I believe the moderators stepped in just three times with factual corrections, even though the former President told an astounding number of blatant, outright lies (not just exaggerations, but fully invented, made-up bullshit).
This has set Republicans off on a ridiculous crusade, claiming that ABC was actively working with the Harris campaign to support it, which is not how any of this actually works. Then, Trump himself claimed that the debate was “rigged” (of course) and told Fox & Friends that (1) you “have to be licensed to” be a news organization and that (2) “they ought to take away their license for the way they did that” (i.e., fact-checked the debate).
Others in Trump’s circles claimed that the fact-checking was a form of “in-kind contribution” to the Harris campaign worth millions of dollars.
All of this is nonsense. First off, one of the complaints was that the moderators fact-checked Trump but didn’t fact-check Harris. There are a few responses to that. One is that, even setting aside the three times they did fact-check Trump, the moderators still declined to fact-check many, many more of his false claims and egregious lies. Another is that the fact-checking complaints about Harris involve omitted context or slight exaggerations, while with Trump it was literal made-up nonsense, such as the false, bigoted claims about immigrants eating cats and dogs, or the idea that Democrats support killing babies after birth. Just out-and-out fearmongering bullshit.
But, again, fact-checking is free speech. The party that claims to be such a big believer in free speech should also support that.
However, even dumber is Trump’s false claim that ABC has to be licensed. That’s not how this works. It’s yet another false statement from the mind of a man who seems to deal only in false statements. Individual affiliates need licenses for the spectrum they broadcast on, but ABC the network is not something that needs a license. You don’t need to be licensed to be a news organization.
Just ask Fox News.
Of course, we’ve been through this before with Trump, who has sued many news organizations he’s disliked (without much success) and has made this same bogus threat before. In 2017, he said that NBC should lose its (non-existent) license for reporting on former Secretary of State Rex Tillerson calling Trump a “moron.” A year later, he threatened to pull NBC’s (still non-existent) license over its reporting on Harvey Weinstein.
Earlier this year, he said both NBC and CNN should have their “licenses or whatever” taken away for not giving him free airtime by showing his victory speech following the Iowa caucuses.
All of this is ridiculous. It’s an attempt at intimidation. It’s an attempt to threaten and cajole news organizations to not speak, to not use their First Amendment rights, and to not fact check when the former President spews absolute fucking nonsense.
But, because MAGA world is making a big deal of this, even the FCC Chair, Jessica Rosenworcel, had to put out a statement on the very basics of the First Amendment:
“The FCC does not revoke licenses for broadcast stations simply because a political candidate disagrees with or dislikes content or coverage.”
It is true that there are some very, very, very limited and narrow circumstances under which the FCC can pull a local affiliate’s spectrum license (not the larger network). However, not liking how fact-checking happens is not even in the same zip code as those.
Indeed, if MAGA world is getting into the business of pulling affiliate licenses, they might not like where things end up. There has been an ongoing effort to pull the license from a Fox affiliate in Philadelphia, based specifically on Fox News admitting that it broadcast false information about the 2020 election.
I don’t support such efforts, which likely violate the First Amendment, even if it’s a closer call when you’re dealing with a network that has effectively admitted to deliberately spreading information it knew was false. But here, Trump’s call to remove the license came simply because of a fact check. It was because they told the truth, not because they lied.
When that effort to remove the Fox affiliate’s license came about, MAGA world was furious. Senator Ted Cruz went on a rant about how “the job of policing so-called ‘misinformation’ belongs to the American people—not the federal government” and complained about how “the left” “want the FCC to be a truth commission & censor political discourse—a prospect that is unconstitutional.”
Hey Ted, care to comment on the claims from last week?
I see no similar statement from him about Trump and the MAGA world now demanding the same thing (for much more ridiculous reasons). I combed through his ExTwitter feed and surprisingly (well, not really) he seems to have no issue with his side calling to pull licenses. How typically hypocritical.
Tragically, this has become the modern Republican Party. They are total hypocrites on free speech. When they want to protect their own speech, they wrap themselves up in the cape of the First Amendment, but when someone who disagrees with them speaks up to contradict them with facts, they’re happy to push for censorship and punishment over speech.