Over the last few months, Elon Musk’s handpicked journalists have continued revealing less and less with each new edition of the “Twitter Files,” to the point that even those of us who write about this area have mostly been skimming each new release, confirming that yet again these reporters have no idea what they’re talking about, are cherry picking misleading examples, and then misrepresenting basically everything.
It’s difficult to decide whether it’s even worth lending these releases credibility by doing the actual work of debunking them, but sometimes a few out-of-context snippets from the Twitter Files, mostly from Matt Taibbi, get picked up by others, and it becomes necessary to dive back into the muck and clean up the mess Matt has made yet again.
Unfortunately, this seems like one of those times.
Over the last few “Twitter Files” releases, Taibbi has been pushing hard on the false claim that, okay, maybe he can’t find any actual evidence that the government tried to force Twitter to remove content, but he can find… information about how certain university programs and non-governmental organizations received government grants… and they set up “censorship programs.”
It’s “censorship by proxy!” Or so the claim goes.
Except, it’s not even remotely accurate. The issue, again, goes back to some pretty fundamental concepts that Taibbi seems unable to grasp. Let’s go through them.
Point number one: misinformation and disinformation are a worthwhile field of study. That’s not saying that we should silence such things, or that we need an “arbiter of truth.” But the simple fact remains that some have sought to use misinformation and disinformation to try to influence people, and studying and understanding how and why that happens is valuable.
Indeed, I personally tend to lean towards the view that most discussions regarding mis- and disinformation are overly exaggerated moral panics. I think the terms are overused, and often misused (frequently just to attack factual news that people dislike). But, in part, that’s why it’s important to study this stuff. And part of studying it is to actually understand how such information is spread, which includes across social media.
Point number two: it’s not just an academic field of interest. For fairly obvious reasons, the companies that get used to spread such information have a vested interest in understanding it as well. To date, it’s mostly been the social media companies that have shown that interest, rather than, say, cable news, even as some of the evidence suggests cable news is a bigger vector for spreading this stuff than social media.
Still, the companies have an interest in understanding this stuff, and sometimes that includes these research organizations flagging content they find and sharing it with the companies for the sole purpose of letting those companies evaluate whether the content violates existing policies. And, once again, the companies regularly did nothing after determining that the flagged accounts didn’t violate any policies.
Point number three: governments also have an interest in understanding how such information flows, in part to help combat foreign influence campaigns designed to cause strife and even violence.
Note what none of these three points says: that censorship is necessary or even desired. But it’s not surprising that the US government has funded some programs to better understand these things, including by bringing in a variety of experts from academia, civil society, and NGOs. It’s also no surprise that some of the social media companies are interested in what these research efforts find, because it might be useful.
And, really, that’s basically everything that Taibbi has found in his research. There are academic centers and NGOs that have received grants from various government agencies to study mis- and disinformation flows, and sometimes Twitter communicated with those organizations. Notably, many of his findings actually show that Twitter employees flatly disagreed with the conclusions of those research efforts. Indeed, some of the revealed emails show Twitter employees being somewhat dismissive of the quality of the research.
What none of this shows is a grand censorship operation.
However, that’s what Taibbi and various gullible culture warriors in Congress are arguing, because why not?
So, some of the organizations in question have decided they finally need to do some debunking of their own. I especially appreciate the University of Washington (UW), which put out a step-by-step debunker that, in any reasonable world, would completely embarrass Matt Taibbi for the very obvious, fundamental mistakes he made:
False impression: The EIP orchestrated a massive “censorship” effort. In a recent tweet thread, Matt Taibbi, one of the authors of the “Twitter Files” claimed: “According to the EIP’s own data, it succeeded in getting nearly 22 million tweets labeled in the runup to the 2020 vote.” That’s a lot of labeled tweets! It’s also not even remotely true. Taibbi seems to be conflating our team’s post-hoc research mapping tweets to misleading claims about election processes and procedures with the EIP’s real-time efforts to alert platforms to misleading posts that violated their policies. The EIP’s research team consisted mainly of non-expert students conducting manual work without the assistance of advanced AI technology. The actual scale of the EIP’s real-time efforts to alert platforms was about 0.01% of the alleged size.
Now, that’s embarrassing.
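To make the scale of that correction concrete, here is the back-of-envelope arithmetic implied by UW’s “about 0.01%” figure (my own rough calculation from the numbers in the quote, not a figure UW itself states):

$$
22{,}000{,}000 \times 0.0001 \approx 2{,}200 \text{ tweets}
$$

In other words, a real-time flagging effort on the order of a couple thousand tweets, versus the 22 million tweets Taibbi claimed the EIP “succeeded in getting… labeled.”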
There’s a lot more that Taibbi misunderstands as well. For example, the freak-out over CISA:
False impression: The EIP operated as a government cut-out, funneling censorship requests from federal agencies to platforms. This impression is built around falsely framing the following facts: the founders of the EIP consulted with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) office prior to our launch, CISA was a “partner” of the EIP, and the EIP alerted social media platforms to content EIP researchers analyzed and found to be in violation of the platforms’ stated policies. These are all true claims — and in fact, we reported them ourselves in the EIP’s March 2021 final report. But the false impression relies on the omission of other key facts. CISA did not found, fund, or otherwise control the EIP. CISA did not send content to the EIP to analyze, and the EIP did not flag content to social media platforms on behalf of CISA.
There are multiple other false claims that UW debunks as well, including that it was a partisan effort, that it happened in secret, or that it did anything related to content moderation. None of those are true.
The Stanford Internet Observatory (SIO), which works with UW on some of these programs, ended up putting out a similar debunking statement as well. For whatever reason, the SIO seems to play a central role in Taibbi’s fever dream of “government-driven censorship.” He focuses on projects like the Election Integrity Partnership and the Virality Project, both of which were focused on studying the flows of viral misinformation.
In Taibbi’s world, these were really government censorship programs. Except, as SIO points out, they weren’t funded by the government:
Does the SIO or EIP receive funding from the federal government?
As part of Stanford University, the SIO receives gift and grant funding to support its work. In 2021, the SIO received a five-year grant from the National Science Foundation, an independent government agency, awarding a total of $748,437 over a five-year period to support research into the spread of misinformation on the internet during real-time events. SIO applied for and received the grant after the 2020 election. None of the NSF funds, or any other government funding, was used to study the 2020 election or to support the Virality Project. The NSF is the SIO’s sole source of government funding.
They also highlight how the Virality Project’s work on vaccine disinformation was never about “censorship.”
Did the SIO’s Virality Project censor social media content regarding coronavirus vaccine side-effects?
No. The VP did not censor or ask social media platforms to remove any social media content regarding coronavirus vaccine side effects. Theories stating otherwise are inaccurate and based on distortions of email exchanges in the Twitter Files. The Project’s engagement with government agencies at the local, state, or federal level consisted of factual briefings about commentary about the vaccine circulating on social media.
The VP’s work centered on identification and analysis of social media commentary relating to the COVID-19 vaccine, including emerging rumors about the vaccine where the truth of the issue discussed could not yet be determined. The VP provided public information about observed social media trends that could be used by social media platforms and public health communicators to inform their responses and further public dialogue. Rather than attempting to censor speech, the VP’s goal was to share its analysis of social media trends so that social media platforms and public health officials were prepared to respond to widely shared narratives. In its work, the Project identified several categories of allegations on Twitter relating to coronavirus vaccines, and asked platforms, including Twitter, which categories were of interest to them. Decisions to remove or flag tweets were made by Twitter.
In other words, as was obvious to anyone who actually had followed any of this while these projects were up and running, these are not examples of “censorship” regimes. Nor are they efforts to silence anyone. They’re research programs on information flows. That’s also clear if you don’t read Taibbi’s bizarrely disjointed commentary and just look at the actual things he presents.
In a normal world, the level of just outright nonsense and mistakes in Taibbi’s work would render his credibility completely shot going forward. Instead, he’s become a hero to a certain brand of clueless troll. It’s the kind of transformation that would be interesting to study and understand, but I assume Taibbi would just build a grand conspiracy theory about how doing that was just an attempt by the illuminati to silence him.
Back in the fall we were among the first to highlight that Elon Musk might face a pretty big FTC problem. Twitter, of course, is under a 20-year FTC consent decree over some of its privacy failings. And, less than a year ago (while still under old management), Twitter was hit with a $150 million fine and a revised consent decree. Both specifically concern how it handles users’ private data. Musk has made it abundantly clear that he doesn’t care about the FTC, but that seems like a risky move. While I think this FTC has made some serious strategic mistakes in the antitrust world, the FTC tends not to fuck around with privacy consent decrees.
However, now the Wall Street Journal has a big article with some details about the FTC’s ongoing investigation into Elon’s Twitter (based on a now-released report from the Republican-led House Judiciary Committee, which frames the whole thing as a political battle by the FTC to attack a company Democrats don’t like, despite the evidence included not really showing anything to support that narrative).
The Federal Trade Commission has demanded Twitter Inc. turn over internal communications related to owner Elon Musk, as well as detailed information about layoffs—citing concerns that staff reductions could compromise the company’s ability to protect users, documents viewed by the Wall Street Journal show.
In 12 letters sent to Twitter and its lawyers since Mr. Musk’s Oct. 27 takeover, the FTC also asked the company to “identify all journalists” granted access to company records and to provide information about the launch of the revamped Twitter Blue subscription service, the documents show.
The FTC is also seeking to depose Mr. Musk in connection with the probe.
I will say that some of the demands from the FTC appear to potentially be overbroad, which should be a concern:
The FTC also asked for all internal Twitter communications “related to Elon Musk,” or sent “at the direction of, or received by” Mr. Musk.
I mean… that seems to be asking for way more than is reasonable. I’ve heard some discussion that these requests are an attempt to figure out who Musk is delegating to handle privacy issues at the company (as required in the consent decree), but it seems that such a request can (and should) be more tailored to that point. Otherwise, it appears (and will be spun, as the House Judiciary Committee is doing…) as an overly broad fishing expedition.
Either way, as we predicted in our earlier posts, the FTC seems quite concerned about whether or not Twitter is conducting required privacy reviews before releasing new features.
The FTC also pressed Twitter on whether it was conducting in-depth privacy reviews before implementing product changes such as the new version of Twitter Blue, as required under the 2022 order. The agency sought detailed records on how product changes were communicated to Twitter users.
It asked Twitter to explain how it handled a recently reported leak of Twitter user-profile data, to account for changes made to the way users authenticate their accounts, and to describe how it scrubbed sensitive data from sold office equipment.
Another area that is bound to be controversial (and Matt Taibbi is, in his usual fashion, misleadingly misrepresenting things and whining about it) is that the FTC asked to find out which outside “journalists” had been granted access to Twitter systems:
On Dec. 13, the FTC asked about Twitter’s decision to give journalists access to internal company communications, a project Mr. Musk has dubbed the “Twitter Files” and that he says sheds light on controversial decisions by previous management.
The agency asked Twitter to describe the “nature of access granted each person” and how allowing that access “is consistent with your privacy and information security obligations under the Order.” It asked if Twitter conducted background checks on the journalists, and whether the journalists could access Twitter users’ personal messages.
Given the context, this request actually seems reasonable. The consent decree is pretty explicit about how Twitter needs to place controls on access to private information, and the possibility that Musk gave outside journalists access to private info was a concern that many people raised. Since then, Twitter folks have claimed that it never gave outside journalists full access to internal private information, but rather tasked employees with sharing requested files (this might still raise some questions about private data, but it’s not as freewheeling as some worried initially). If Twitter really did not provide access to internal private data to journalists, then it can respond to that request by showing what kind of access it did provide.
But, Taibbi is living down to his reputation and pretending it’s something different:
At best, Taibbi seems to be conflating two separate requests here. The request for all of Musk’s communications definitely does seem too broad, and it seems like Twitter’s lawyers (assuming any remain, or outside counsel that is still having its bills paid) could easily respond and push back on the extensiveness of the request to narrow it down to communications relevant to the consent decree. That’s… how this process normally works.
As for the claim that which journalists an executive talks to is not the government’s business, that is correct, but it lacks context. It becomes the government’s business if part of what happened with those journalists violated the law. And… that is the point the FTC is trying to determine. If they didn’t violate the consent decree, then, problem solved.
Thus, the request regarding how much access Musk gave to journalists seems like a legitimate question to determine if the access violated the consent decree. One hopes that Twitter was careful enough in how this was set up that the answer is “no, it did not violate the consent decree, and all access was limited and carefully monitored to protect user data,” but that’s kinda the reason that the investigation is happening in the first place.
Indeed, the House Judiciary Committee report, which tries to turn this into a much bigger deal, reveals a small snippet of the FTC’s requests to Twitter on this topic that suggests Taibbi is (yet again) totally misrepresenting things (it’s crazy how often that’s the case with that guy), and that the FTC’s concern goes to the single point implicated by the consent decree: did Twitter give outside journalists access to internal Twitter systems that might have revealed private data:
I would be concerned if the request actually were (as Taibbi falsely implies) for Musk to reveal every journalist he’s talking to. But the request (as revealed by the Committee) appears to only be about “journalists and other members of the media to whom” Elon has “granted any type of access to the Companies internal communications.” And, given that the entire consent decree is about restricting access to internal systems and others’ communications, that seems directly on point and not, as the Judiciary Committee and Taibbi complain, about an attack on the 1st Amendment.
It remains entirely possible that the FTC finds nothing at all here. Or that, if it does file claims against Twitter, Twitter wins. Unlike some people, I am not rushing to assume that the FTC is going to bring Twitter to account. But there are some pretty serious questions about whether or not Musk is abiding by the consent decree, and violating a consent decree is practically begging the FTC to make an expensive example of you.
Look. I want to stop writing about Twitter. I want to write about lots of other stuff. I have a huge list of other stories that I’m trying to get through, but then Elon Musk does something dumb again, and people run wild with it, and (for reasons that perplex me) much of the media either run with what Musk said, or just ignore it completely. But Musk is either deliberately lying about stuff or too ignorant to understand what he’s talking about, and I don’t know which is worse, though neither is a good look.
Today, his argument is that “the FBI has been paying Twitter to censor,” and he suggests this is a big scandal.
This would be a big scandal if true. But, it’s not. It’s just flat out wrong.
As with pretty much every one of these misleading statements regarding the very Twitter that he runs (where people, or I guess maybe just former people, could explain to him why he’s wrong), it takes far more time and detail to debunk the claim than it takes him to push out a misleading line that will now be taken as fact.
But, since at least some of us still believe in facts and truth, let’s walk through this.
First up, we already did a huge, long debunker on the idea that the FBI (or any government entity) was in any way involved in the Twitter decision to block links to the Hunter Biden laptop story. Most of the people who believed that have either ignored that there was no evidence to support it, or have simply moved on to this new lie, suggesting that “the FBI” was “sending lists” to Twitter of people to censor.
The problem is that, once again, that’s not what “the Twitter Files” show, even as the reporters working on it — Matt Taibbi, Bari Weiss, and Michael Shellenberger — either don’t understand what they’re looking at or are deliberately misrepresenting it. I’m no fan of the FBI, and have spent much of the two and a half decades here at BestNetTech criticizing it. But… there’s literally no scandal here (or if there is one, it’s something entirely different, which we’ll get to at the end of the article).
What the files show is that the FBI would occasionally (not very often, frankly) use reporting tools to alert Twitter to accounts that potentially violated Twitter’s rules. When the FBI did so, it was pretty clear that it was just flagging these accounts for Twitter to review, and had no expectation that the company would or would not do anything about it. In fact, they are explicit in their email that the accounts “may potentially constitute violations of Twitter’s Terms of Service” and that Twitter can take “any action or inaction deemed appropriate within Twitter policy.”
That is not a demand. There is no coercion associated with the email, and it certainly appears that Twitter frequently rejected these flags from the US government. Twitter’s most recent transparency report lists all of the “legal demands” the company received for content removals in the US, and its compliance rate is 40.6%. In other words, it complied with well under half of the content removal demands it received from the government.
Indeed, even though Taibbi and Shellenberger repeatedly present this material as proof that Twitter closely cooperated with the FBI, if you read the actual screenshots, they show Twitter (rightly!) pushing back on the FBI over and over again. Here, for example, Michael Shellenberger shows Twitter’s Yoel Roth rejecting a request from the FBI to share information, saying the agency needs to take the proper legal steps to request that info (depending on the situation, likely getting a judge to approve the request):
Now, we could have an interesting discussion (and I actually do think it’s an interesting discussion) about whether or not the government should be flagging accounts for review as terms of service violations. Right now, anyone can do this. You or I can go on Twitter and, if we see something that we think violates a content policy, flag it for Twitter to review. Twitter will then review the content and determine whether or not it’s violative, and then decide what the remedy should be if it is.
That opens up an interesting question in general: should government officials and entities also be allowed to do the same type of flagging? Considering that anyone else can do it, and the company still reviews against its own terms of service and (importantly) feels free to reject those requests when they do not appear to violate the terms, I’m hard pressed to see the problem here on its own.
If there were evidence that there was some pressure, coercion, or compulsion for the company to comply with the government requests, that would be a different story. But, to date, there remains none (at least in the US).
As for the accounts that were flagged, from everything revealed to date in the Twitter Files, it mostly appears to be accounts that were telling a certain segment of the population (sometimes Republicans, sometimes Democrats) to vote on Wednesday, the day after Election Day, rather than Tuesday. Twitter had announced long before the election that any such tweets would violate policy. It does appear that a number of those tweets were meant as jokes, but as is the nature of content moderation, it’s difficult to tell what’s a joke from what’s not a joke, and quite frequently malicious actors will try to hide behind “but I was only joking…” when fighting back against an enforcement action. So, under that context, a flat “do not suggest people vote the day after Election Day” rule seems reasonable.
Given all that, to date, the only “evidence” that people can look at regarding “the FBI sent a list to censor” is that the FBI flagged (just as you or I could flag) accounts that were pretty clearly violating Twitter policies in a way that could undermine the US election, and left it entirely up to Twitter to decide what to do about it — and Twitter chose to listen to some requests and ignore others.
That doesn’t seem so bad in context, does it? It actually kinda seems like the sort of thing people would want the FBI to do to support election integrity.
But the payments!
So, there’s no evidence of censorship. But what about these payments? Well, that’s Musk’s hand-chosen reporters, Musk himself, and his fans totally misunderstanding some very basic stuff that any serious reporter with knowledge of the law would not mess up. Here’s Shellenberger’s tweet from yesterday that has spun up this new false argument:
That’s Shellenberger saying:
The FBI’s influence campaign may have been helped by the fact that it was paying Twitter millions of dollars for its staff time.
“I am happy to report we have collected $3,415,323 since October 2019!” reports an associate of Jim Baker in early 2021.
But this is a misreading/misunderstanding of how things work. This had nothing to do with any “influence campaign.” The law already says that if the FBI is legally requesting information for an investigation under a number of different legal authorities, the companies receiving those requests can be reimbursed for fulfilling them.
(a) Payment.—
Except as otherwise provided in subsection (c), a governmental entity obtaining the contents of communications, records, or other information under section 2702, 2703, or 2704 of this title shall pay to the person or entity assembling or providing such information a fee for reimbursement for such costs as are reasonably necessary and which have been directly incurred in searching for, assembling, reproducing, or otherwise providing such information. Such reimbursable costs shall include any costs due to necessary disruption of normal operations of any electronic communication service or remote computing service in which such information may be stored.
But note what this is limited to. These are investigatory requests for information, so-called 2703(d) requests, which require a court order.
Now, there are reasons to be concerned about the 2703(d) program. I mean, going back to 2013, when it was revealed that the 2703(d) program was abused as part of an interpretation of the Patriot Act to allow the DOJ/NSA to collect data secretly from companies, we’ve highlighted the many problems with the program.
So, by the way, did old Twitter. More than a decade ago, Twitter went to court to challenge the claim that a Twitter user had no standing to challenge a 2703(d) order. Unfortunately, Twitter lost and the feds are still allowed to use these orders (which, again, require a judge to sign off on them).
I do think it remains a scandal the way that 2703(d) orders work, and the inability of users to push back on them. But that is the law. And it has literally nothing whatsoever to do with “censorship” requests. It is entirely about investigations by the FBI into Twitter users based on evidence of a crime. If you want, you can read the DOJ’s own guidelines regarding what they can request under 2703(d).
Looking at that, you can see that if they can get a 2703(d) order (again, signed by a judge) they can seek to obtain subscriber info, transaction records, retrieved communications, and unretrieved communications stored for more than 180 days (in the past, we’ve long complained about the whole 180 days thing, but that’s another issue).
You know what’s not on that list? “Censoring people.” It’s just not a thing. The reimbursement that is talked about in that email is about complying with these information production orders that have been reviewed and signed by a judge.
It’s got nothing at all to do with “censorship demands.” And yet Musk and friends are going hog wild pushing this utter nonsense.
Meanwhile, Twitter’s own transparency report again already reveals data on these orders as part of its “data information requests” list, where it shows that in the latest period reported (second half of 2021) it received 2.3k requests specifying 11.3k accounts, and complied with 69% of the requests.
This was actually down a bit from 2020. But since the period the email covers runs from October 2019 into early 2021, you can see that there were a fair number of information requests from the FBI:
Given all that, it looks like there were probably in the range of 8,000 requests for information, covering who knows how many accounts, that Twitter had to comply with. And so the $3 million reimbursement seems pretty reasonable, assuming you would need a decent sized skilled team to review the orders, collect the information, and respond appropriately.
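For a rough sanity check on that “pretty reasonable” claim, here is the back-of-envelope math (my own estimate, using the ~8,000-request figure above and the $3,415,323 total from the quoted email):

$$
\frac{\$3{,}415{,}323}{\approx 8{,}000 \ \text{requests}} \approx \$427 \ \text{per request}
$$

A few hundred dollars per court-ordered production, covering staff time for legal review and data collection, looks like routine cost reimbursement under the statute quoted above, not any sort of payment for “censorship.”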
If there’s any scandal at all, it remains the lack of more detailed transparency about the (d) orders, or the ability of companies like Twitter to have standing to challenge them on behalf of users. Also, there are reasonable arguments for why judges are too quick to approve (d) orders as valid under the 4th Amendment.
But literally none of that is “the FBI paid Twitter to censor people.”
Hello! Someone has referred you to this post because you’ve said something quite wrong about Twitter and how it handled something to do with Hunter Biden’s laptop. If you’re new here, you may not know that I’ve written a similar post for people who are wrong about Section 230. If you’re being wrong about Twitter and the Hunter Biden laptop, there’s a decent chance that you’re also wrong about Section 230, so you might want to read that too! Also, these posts are using a format blatantly swiped from lawyer Ken “Popehat” White, who wrote one about the 1st Amendment. Honestly, you should probably read that one too, because there’s some overlap.
Now, to be clear, I’ve explained many times before, in other posts, why people who freaked out about how Twitter handled the Hunter Biden laptop story are getting confused, but it’s usually been a bit buried. I had already started a version of this post last week, since people keep bringing up Twitter and the laptop, but then on Friday, Elon (sorta) helped me out by giving a bunch of documents to reporter Matt Taibbi.
So, let’s review some basics before we respond to the various wrong statements people have been making. Since 2016, there have been concerns raised about how foreign nation states might seek to interfere with elections, often via the release of hacked or faked materials. It’s no secret that websites have been warned to be on the lookout for such content in the leadup to the election — not with demands to suppress it, but just to consider how to handle it.
Partly in response to that, social media companies put in place various policies on how they were going to handle such material. Facebook set up a policy to limit certain content from trending in its algorithm until it had been reviewed by fact-checkers. Twitter put in place a “hacked materials” policy, which forbade the sharing of leaked or hacked materials. There were — clearly! — some potential issues with that policy. In fact, in September of 2020 (a month before the NY Post story) we highlighted the problems of this very policy, including somewhat presciently noting the fear that it would be used to block the sharing of content in the public interest and could be used against journalistic organizations (indeed, that case study highlights how the policy was enforced to ban DDOSecrets for leaking police chat logs).
The morning the NY Post story came out, there was a lot of concern about the validity of the story. Other news organizations, including Fox News, had refused to touch it. NY Post reporters refused to put their names on it. There were other oddities, including the provenance of the hard drive data, which apparently had been in Rudy Giuliani’s hands for months. There were concerns about how the data was presented (specifically how the emails were converted into images and PDFs, losing their header info and metadata).
The fact that, much later on, many elements of the laptop’s history and provenance were confirmed as legitimate (with some open questions) is important, but it does not change the simple fact that, the morning the NY Post story came out, the truth of the matter was extremely unclear (in either direction) to everyone except extreme partisans in both camps.
Based on that, both Twitter and Facebook reacted somewhat quickly. Twitter implemented its hacked materials policy in exactly the manner that we had warned might happen a month earlier: blocking the sharing of the NY Post link. Facebook implemented other protocols, “reducing its distribution” until it had gone through a fact check. Facebook didn’t ban the sharing of the link (like Twitter did), but rather limited the ability for it to “trend” and get recommended by the algorithm until fact checkers had reviewed it.
To be clear, the decision by Twitter to do this was, in our estimation, pretty stupid. It was exactly what we had warned about just a month earlier regarding this exact policy. But this is the nature of trust & safety. People need to make very rapid decisions with very incomplete information. That’s why I’ve argued ever since then that while the policy was stupid, it was no giant scandal that it happened, and given everything, it was not a stretch to understand how it played out.
Also, importantly, the very next day Twitter realized it fucked up, admitted so publicly, and changed the hacked materials policy saying that it would no longer block links to news sources based on this policy (though it might add a label to such stories). The next month, Jack Dorsey, in testifying before Congress, was pretty transparent about how all of this went down.
All of this seemed pretty typical for any kind of trust & safety operation. As I’ve explained for years, mistakes in content moderation (especially at scale) are inevitable. And, often, the biggest reason for those mistakes is the lack of context. That was certainly true here.
Yet, for some reason, the story has persisted for years now that Twitter did something nefarious, engaging in election interference that was possibly at the behest of “the deep state” or the Biden campaign. For years, as I’ve reported on this, I’ve noted that there was literally zero evidence to back any of that up. So, my ears certainly perked up last Friday when Elon Musk said that he was about to reveal “what really happened with the Hunter Biden story suppression.”
Certainly, if there were evidence of something nefarious behind closed doors, that would be important and worth covering. If it were true that the dozens of Twitter employees I’ve spoken with over the past few years all lied to me about what happened, well, that would also be useful for me to know.
And then Taibbi revealed… basically nothing of interest. He revealed a few internal communications that… simply confirmed everything that was already public in statements made by Twitter, Jack Dorsey’s Congressional testimony, and in declarations made as part of a Federal Elections Commission investigation into Twitter’s actions. There were general concerns about foreign state influence campaigns, including “hack and leak” in the lead up to the election, and there were questions about the provenance of this particular data, so Twitter made a quick (cautious) judgment call and implemented a (bad) policy. Then it admitted it fucked up and changed things a day later. That’s… basically it.
And, yet, the story has persisted over and over and over again. Incredibly, even after the details of Taibbi’s Twitter thread revealed nothing new, many people started pretending that it had revealed something major, with even Elon Musk insisting that this was proof of some massive 1st Amendment violation:
Now, apparently more files are going to be published, so something may change, but so far it’s been a whole lot of utter nonsense. But when I say that both here on BestNetTech and on Twitter, I keep seeing a few very, very wrong arguments being made. So, let’s get to the debunking:
1. If you said Twitter’s decision to block links to the NY Post was election interference…
You’re wrong. Very much so. First off, there was, in fact, a complaint to the FEC about this very point, and the FEC investigated and found no election interference at all. It didn’t even find evidence of it being an “in-kind” contribution. It found no evidence that Twitter engaged in politically motivated decision making, but rather handled this in a non-partisan manner consistent with its business objectives:
Twitter acknowledges that, following the October 2020 publication of the New York Post articles at issue, Twitter blocked users from sharing links to the articles. But Twitter states that this was because its Site Integrity Team assessed that the New York Post articles likely contained hacked and personal information, the sharing of which violated both Twitter’s Distribution of Hacked Materials and Private Information Policies. Twitter points out that although sharing links to the articles was blocked, users were still permitted to otherwise discuss the content of the New York Post articles because doing so did not directly involve spreading any hacked or personal information. Based on the information available to Twitter at the time, these actions appear to reflect Twitter’s stated commercial purpose of removing misinformation and other abusive content from its platform, not a purpose of influencing an election.
All of this is actually confirmed by the Twitter Files from Taibbi/Musk, even as both seem to pretend otherwise. Taibbi revealed some internal emails in which various employees (going increasingly up the chain) discussed how to handle the story. Not once does anyone in what Taibbi revealed suggest anything even remotely politically motivated. There was legitimate concern internally about whether or not it was correct to block the NY Post story, which makes sense, because they were (correctly) concerned about making a decision that went too far. I mean, honestly, the discussion is not only without political motive, but shows that the trust & safety apparatus at Twitter was concerned with getting this correct, including employees questioning whether or not these were legitimately “hacked materials” and questioning whether other news stories on the hard drive should get the same treatment.
There are more discussions of this nature, with people questioning whether or not the material was really “hacked” and initially deciding on taking the more cautious approach until they knew more. Twitter’s Yoel Roth notes that “this is an emerging situation where the facts remain unclear. Given the SEVERE risks here and lessons of 2016, we’re erring on the side of including a warning and preventing this content from being amplified.”
Again, exactly as has been noted, given the lack of clarity, Twitter reasonably decided to pump the brakes until more was known. There was some useful back-and-forth among employees, the kind that happens in any company around major trust & safety decisions, in which Twitter’s then VP of comms questioned whether this was the right decision. This shows a productive discussion, not anything along the lines of pushing for a politically motivated outcome.
And then deputy General Counsel Jim Baker (more on him later, trust me…) chimes in to again highlight exactly what everyone has been saying: that this is a rapidly evolving situation, and it makes sense to be cautious until more is known. Baker’s message is important:
I support the conclusion that we need more facts to assess whether the materials were hacked. At this stage, however, it is reasonable for us to assume that they may have been and that caution is warranted. There are some facts that indicate that the materials may have been hacked, while there are others indicating that the computer was either abandoned and/or the owner consented to allow the repair shop to access it for at least some purposes. We simply need more information.
Again, all of this is… exactly what everyone has said ever since the day after it happened. This was an emerging story. The provenance was unclear. There were some sketchy things about it, and so Twitter enacted the policy because they just weren’t sure and didn’t have enough info yet. It turned out to be a bad call, but in content moderation, you’re going to make some bad calls.
What is missing entirely is any evidence that politics entered this discussion at all. Not even once.
2. But Twitter’s decision to “suppress” the story was a big deal and may have swung the election to Biden!
I’m sorry, but there remains no evidence to support that silly claim either. First off, Twitter’s decision actually seemed to get the story a hell of a lot more attention. Again, as noted above, Twitter did nothing to stop discussion of the story. It only blocked links to one story in the NY Post, and only for that one day. And the very fact that Twitter did this (and Facebook took other action) caused a bit of a Streisand Effect (hey!) which got the underlying story a lot more attention because of the decisions by those two companies.
The reality, though, is that the story just wasn’t that big of a deal for voters. Hunter Biden wasn’t the candidate. His father was. Everyone already pretty much knew that Hunter was a bit of a fuckup who was clearly profiting personally off of the situation, but there was no actual big story in the revelations (I mean, yeah, there are still some people who insist there is, but they’re the same people who misunderstand the things we’re debunking here today). And, if we’re going to talk about kids of presidents profiting off of their last name, well, there’s a pretty long list to go down….
But don’t take my word for it, let’s look at the evidence. As reporter Philip Bump recently noted, there’s actual evidence in Google search trends that Twitter and Facebook’s decision really did generate a lot more interest in the story. It was well after both companies took action that searches on Google for Hunter Biden shot upward:
Also, soon after, Twitter reversed its policy, and there was widespread discussion of the laptop in the next three weeks leading up to the election. The brief blip in time in which Twitter and Facebook limited the story seemed to have only fueled much more interest in it, rather than “suppressing” it.
Indeed, another document in the “Twitter Files” highlights how a Democratic member of the House, Ro Khanna, actually reached out to Twitter to point this out and to question Twitter’s decision (if this was really a big Democratic conspiracy, you’d think he’d be supportive of the move, rather than critical of it, but the reverse was true.) Rep. Khanna’s email to Twitter noted:
I say this as a total Biden partisan and convinced he didn’t do anything wrong. But the story has now become more about censorship than relatively innocuous emails and it’s become a bigger deal than it would have been.
So again, the evidence actually suggests that the story wasn’t suppressed at all. It got more attention. It didn’t swing the election, because most people didn’t find the story particularly revealing.
3. The government pressured Twitter/Facebook to block this story, and that’s a huge 1st Amendment violation / treason / crime of the century / etc.
Yeah, so, that’s just not true. I’ve spent years calling out government pressure on speech, from Democrats (and more Democrats) to Republicans (and more Republicans). So I’m pretty focused on watching when the government goes over the line — and quick to call it out. And there remains no evidence at all of that happening here. At all. Taibbi admits this flat out:
Incredibly, I keep seeing people on Twitter claim that Taibbi said the exact opposite. And you have people like Glenn Greenwald who insist that Taibbi only meant “foreign” governments here, despite all the evidence to the contrary. If he had found evidence that there was US government pressure here… why didn’t he post it? The answer: because it almost certainly does not exist.
Some people point to Mark Zuckerberg’s appearance over the summer on Joe Rogan’s podcast as “proof” that the FBI directed both companies to suppress the story, but that’s not at all what Zuckerberg said if you listened to his actual comments. Zuckerberg admits that they make mistakes, and that it feels terrible when they do. He goes into a pretty detailed explanation of some of how trust & safety works in determining whether or not a user is authentic. Then Rogan asks about the laptop story, and Zuckerberg says:
So, basically, the background here, is the FBI basically came to us, some folks on our team, and were like “just so you know, you should be on high alert, we thought there was a lot of Russian propaganda in the 2016 election, we have it on notice, basically, that there’s about to be some kind of dump that’s similar to that. So just be vigilant.”
This does not say that the FBI came to Facebook and said “suppress the Hunter Biden laptop story.” It was just a general warning that the FBI had intelligence that there might be some foreign influence operations, and to “be vigilant.”
This is nearly identical to what Twitter’s then head of “site integrity,” Yoel Roth, noted in his declaration in the FEC case discussed above:
“[F]ederal law enforcement agencies communicated that they expected ‘hack-and-leak operations’ by state actors might occur in the period shortly before the 2020 presidential election . . . . I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.”
Basically the FBI is saying, in general, they have some intelligence that this kind of attack may happen, so be careful. It did not say to censor the info. It didn’t involve any threats. It wasn’t specifically about the laptop story.
And, in fact, as of earlier this week, we now have the FBI’s version of these events as well! That’s because of the somewhat silly lawsuit that Missouri and Louisiana filed against the Biden administration over Twitter’s decision to block the NY Post story. Just this week, Missouri released the deposition of FBI agent Elvis Chan, who is often found at the center of conspiracy theories regarding “government censorship.”
And Chan tells basically the same story, with a few slight differences, mostly in terms of framing. Specifically, Chan says that he never told the companies to “expect” a hack-and-leak attack, but rather to be aware of the possibility, slightly contradicting Roth’s declaration:
Yeah, I don’t know what Mr. Roth meant or meant, but what I’m letting you know is that from my recollection — I don’t believe we would have worded it so strongly to say that we expected there to be hacks. I would have worded it to say that there was the potential for hacks, and I believe that is how anyone from our side would have framed the comment.
And the reason I believe that is because I and the FBI, for that matter the U.S. intelligence community, was not aware of any successful hacks against political organizations or political campaigns.
You don’t think that intelligence officials described it in the way that Mr. Roth does here in this sentence in the affidavit?
Yeah, I would not have — I do not believe that the intelligence community would have expected it. I said that they would have been concerned about the potential for it.
In the deposition, Chan repeats (many, many times) that he wouldn’t have used the language saying such an effort would be “expected” but that it was something to look out for.
He also doesn’t recall Hunter Biden’s name even coming up, though he does say they warned them to be on the lookout for discussions on “hot button” issues, and notes that the companies themselves would often ask about certain scenarios:
So from my recollection, the social media companies, who include Twitter, would regularly ask us, “Hey, what kind of content do you think the nation state actors, the Russians would post,” and then they would provide examples. Like, “Would it be X” or “Would it be Y” or “Would it be Z.” And then we — I and then the other FBI officials would say, “We believe that the Russians will take advantage of any hot-button issue.” And we — I do not remember us specifically saying “Hunter Biden” in any meeting with Twitter.
Later on he says:
Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter. And so the way I read that is that there are hack-and-leak operations, and then at the time — at the time I believe he flagged one of the potential current events that were happening ahead of the elections.
You believe that he, Yoel Roth, flagged Hunter Biden in one of these meetings?
No. I believe — I don’t believe he flagged it during one of the meetings. I just think that — so I don’t know. I cannot read his mind, but my assessment is because I don’t remember discussing Hunter Biden at any of the meetings with Twitter, that we didn’t discuss it.
So this would have been something that he would have just thought of as a hot-button issue on his own that happened in October.
He goes into great detail about meeting with tons of companies, but notes that mostly he’d talk to them about cybersecurity threats, not disinformation. He talks a bit about Russian disinformation campaigns, highlighting the well-known Internet Research Agency, which specialized in pushing divisive messaging on US social media platforms. However, he basically confirms that he never discussed the laptop with anyone at any of these companies, and the deposition makes it pretty clear that if anyone at the FBI had done so, it would have been either Chan himself or someone acting with Chan’s knowledge.
As for the NY Post story, and the laptop itself, he notes he found out about it through the media, just like everyone else. And then he says that he didn’t talk with anyone at Twitter or Facebook about it, despite being their main contact on these kinds of issues.
Q. It’s your testimony that those news articles are the first time that you became aware that — you became aware of Hunter Biden’s laptop in any connection?
Yes. I don’t remember if it was a New York Post article or if it was another media outlet, but it was on multiple media outlets, and I can’t remember which article I read.
And before that day, October 14th, 2020, were you aware — were you aware of Hunter Biden — had anyone ever mentioned Hunter Biden’s laptop to you?
No.
[….]
Do you know if anyone at Twitter reached out to anyone at the FBI to check or verify anything about the Hunter Biden story?
I am not aware of any communications between Yoel Roth and the FBI about this topic.
Are you aware of any communications between anyone at Twitter and anyone in the federal government about the decision to suppress content relating to the Hunter Biden laptop story once the story had broken?
I am not aware of Mr. Roth’s discussions with any other federal agency. As I mentioned, I am not aware of any discussions with any FBI employees about this topic as well. But I only know who I know. So I don’t — he may have had these conversations, but I was not aware of it.
You mentioned Mr. Roth. How about anyone else at Twitter, did anyone else at Twitter reach out, to your knowledge, to anyone else in the federal government?
So I can only answer for the FBI. To my knowledge, I am not aware of any Twitter employee reaching out to any FBI employee regarding this topic.
[….]
How about Facebook, other than that meeting you referred to where an analyst asked the FBI to comment on the Hunter Biden investigation, are you aware of any communications between anyone at Facebook and anyone at the FBI related to the Hunter Biden laptop story?
No.
How about any other social media platform?
No.
How about Apple or Microsoft?
No.
Basically, the exact same story emerges no matter how you look at it. The FBI, along with CISA, would have various meetings with internet companies mainly to warn them about cybersecurity (i.e., hacking) threats, but also generally mentioned the possibility of hack and leak attempts with a general warning to be on the lookout for such things, and that they may touch on “hot button” social and news topics. Nowhere is there any indication of pressure or attempts to tell the companies what to do, or how they should handle it. Just straight up information sharing.
When you look at all three statements — Zuckerberg’s, Roth’s, and Chan’s — basically the same not-very-interesting story emerges. The US government held some general meetings, of the kind it holds with lots of big companies, to warn them about various potential cybersecurity threats, and the issue of hack-and-leak campaigns came up as a general possibility, with no real specifics and no instructions about what to do.
And no one communicated with the companies directly about the NY Post story.
Given all that, I honestly don’t see how there’s any reasonable concern here. There’s certainly no clear 1st Amendment concern. There appears to be zero in the way of government involvement or pressure. There’s no coercion or even implied threats. There’s literally nothing at all (no matter how Missouri’s Attorney General completely misrepresents it).
Indeed, the only thing revealed so far that might be concerning regarding the 1st Amendment is that Taibbi claimed that the Trump administration allegedly made demands of Twitter.
If the Trump administration actually had sent requests to “remove” tweets (as Taibbi claims in an earlier tweet) that would most likely be a 1st Amendment issue. However, Taibbi reveals no such requests, which is really quite remarkable. It is also possible that Taibbi is overselling these claims, because this is a part of a discussion that we’ll get to in the next section, regarding Twitter’s flagging tools, which anyone (including you or me) can use to flag content for Twitter to review to see if it violates the company’s terms of service. While there are certainly some concerns about the government’s use of such tools, unless there’s some sort of threat or coercion, and as long as Twitter is free to judge the content for itself and determine how to handle it under its own terms, there’s probably no 1st Amendment issue.
Indeed, some people have highlighted the fact that the government gets “special treatment” in having its flags reviewed. But, from people I’ve spoken to, that actually goes against the “1st Amendment violation!” argument, because many social media companies set up special systems for government agents not to enable “moar censorship!” but because they know they have to be extra vigilant in reviewing those requests so as not to take down content mistakenly based on a government request.
So, sorry, so far there appears to be no government intrusion, and certainly no 1st Amendment violation.
4. The Biden campaign / Democrats demanded Twitter censor the NY Post! And that’s a 1st Amendment violation / treason / the crime of the century / etc.
So, again, the only way that there’s a 1st Amendment violation is if the government issued the demand. And in October of 2020, the Biden campaign and the Democratic National Committee… were not the government. The 1st Amendment does not restrict their ability, as private citizens (even while campaigning for public office), to flag content for Twitter to review against its policies. Hilariously, Elon Musk seems kinda confused about how time works. That tweet that we screenshotted above about the “1st Amendment” violation is in response to an internal email that Taibbi revealed about what Taibbi (misleadingly) says are “requests from connected actors to delete tweets,” followed by a screenshot of Twitter employees listing out some tweets saying “more to review from the Biden team” and someone responding “handled these.”
Then there was the next tweet in Taibbi’s thread, showing a similar set of two tweets sent over from the Democratic National Committee (as compared to the Biden campaign in the first one). This set includes a tweet from the actor James Woods, which the Twitter team calls special attention to for being “high profile.”
Except, as a few enterprising folks discovered when looking up those tweets listed, they were… basically Hunter Biden nude images that were found on the laptop hard drive, which clearly violated Twitter’s terms of service (and likely violated multiple state laws regarding the sharing of nonconsensual nude images). This includes the James Woods tweet, which included a fake Biden campaign ad that showed a naked picture of Hunter Biden lying on a bed with his (only slightly blurred) penis quite visible. I’m not going to share a link to the image.
A good investigative reporter might have looked up what was in those tweets before posting a conspiratorial post implying that these were attempts by the campaign to remove the NY Post story or some other important information. But Taibbi did not. Nor has he commented on it since.
On top of that, while Taibbi claims that these were “requests to delete,” as the Twitter email quite clearly says, they were sent for Twitter to “review.” In other words, the tweets were flagged for Twitter to review against its policies, which the naked images clearly violated.
So, there’s clearly no 1st Amendment concern here. First, despite Musk’s understanding of the space-time continuum, the Biden administration was not in the White House in October of 2020. Second, even if we’re concerned about political campaigns asking for content to be deleted, flagging content for companies to review to see if it violates policies is not (in any way) the same as demanding it be deleted. Anyone can flag content. And then the company reviews it and makes a determination.
Even more importantly, nothing revealed so far suggests that the campaign had anything to say to Twitter regarding the NY Post story or any story regarding the laptop. Literally the only concerns raised were about the naked pictures.
Finally, as noted above, the only other Democrat mentioned so far in the Twitter files is Rep. Ro Khanna, who told Twitter it was wrong to stop the links to the NY Post article, and urged them to rescind the decision in the name of free speech. That does not sound like the Democrats secretly pressuring the company to block the story. It kinda sounds like the exact opposite.
So despite what everyone keeps yelling on Twitter (including Elon Musk) this still doesn’t appear to be evidence of “censorship” or even “suppression of the Hunter Biden laptop story.” It’s just focused on the nonconsensual sharing of Hunter’s naked images.
As a side note, Woods has now said he’s going to sue over this, though for the life of me I have no idea what sort of claim he thinks he has, or how it’s going to go over in court when he claims his rights were violated when he was unable to share Hunter’s dick pic.
5. But Jim Baker! He worked for the FBI! And he was in charge of the Twitter files! Clearly he’s covering up stuff!
Here we are, ripping from the stupidity headlines. This one came out just last night, as Taibbi added a “supplement” to the Twitter files, again seemingly confused about how basically anything works. According to Taibbi, in a very unclear and awkwardly worded thread, he and Bari Weiss (another opinion columnist with whom Musk has decided to share the files) were having some sort of “complication” in accessing the files. Taibbi claims that Twitter’s Deputy General Counsel, Jim Baker, was reviewing the files, and somehow this was a problem (he does not explain why or how, though there’s a lot of conjecture).
Baker is, in fact, the former General Counsel at the FBI. It made news when he was hired.
Baker was the subject of a bunch of conspiracy theory stuff a few years ago regarding the FBI and some of the sillier theories about the Trump campaign, including the Steele Dossier and the even sillier “Alfa Bank” story (which had always been silly, and which lots of people, including us, had mocked when it came out).
But despite all that, there’s really little evidence that Baker has done anything particularly noteworthy here. The stuff about his actions while at the FBI is totally overblown partisan hackery. People talk about the so-called “criminal investigation” he faced for his work looking into Russian interference in the 2016 election, but that appears to be something mostly cooked up by extreme Trumpists in the House, and it appears to have gone nowhere. And, yes, he was a witness at the Michael Sussman trial, which was sorta connected to the Alfa Bank stuff, but his testimony supported John Durham, not Michael Sussman, in that he claimed that Sussman made a false statement to him, which the entire case hinged on (and, for what it’s worth, the trial ended in acquittal).
In other words, almost all of the FBI-related accusations against Baker are entirely “guilt by association” type claims, with nothing at all legitimate to back them up.
As for Twitter, we already highlighted Baker’s email that Taibbi revealed, which shows a normal, thoughtful, cautious discussion of a normal trust & safety debate, with nothing even remotely political.
The latest claims from Taibbi and Weiss also don’t make much sense. Elon Musk has told his company to hand over a bunch of internal documents to reporters. Any corporate lawyer would naturally do a fairly standard document review before doing so to make sure that they’re not handing over any private information or something else that might create legal issues for Musk. And since a large chunk of the legal team has left the company, it wouldn’t be all that surprising if the task ended up on Baker’s desk.
Now, you can argue (as Taibbi and others now imply) that there’s some massive conflict of interest here, but, uh… that’s not at all clear, and not really how conflict of interest works. And, again, there’s little indication that Baker had a major role here at all, beyond being one of many who weighed in on this matter (and did so in a perfectly reasonable manner).
Honestly, if Baker had not reviewed the documents first, that would potentially have put him in legal jeopardy for failing to do a very basic function of his job: making sure the company he worked for didn’t expose itself to serious liability by revealing things that could create huge problems for Musk and the company.
Either way, late Tuesday, Musk announced that Baker had “exited” from the company, and when asked by a random Twitter user if he had been “asked to explain himself first” Musk claimed that Baker’s “explanation was… unconvincing.”
And perhaps there’s something more here that will be revealed by Weiss now that the shackles have been removed. But, based on what’s been stated so far, a perfectly plausible explanation is that Musk confronted Baker wanting to know why he was holding back the files and what his role was in “suppressing” the NY Post story. And Baker told him, truthfully, that his role was exactly as was revealed in the email (giving his general thoughts on the proper approach to handling the story) and that he was reviewing documents because that’s his job, and Musk got mad and fired him.
Somewhat incredibly, Musk also seemed to imply he only learned of Baker’s involvement on Sunday.
Some people are claiming that Musk is saying he only discovered that Baker worked for him on Sunday, which is possible but seems unlikely. Conspiracy theorists had pointed out Baker’s role at the company to Musk as far back as April. A more charitable explanation is that Musk only discovered that Baker was handling the document review on Sunday. And I guess that’s plausible but, again, really only reflects extremely poorly on Musk.
If he’s going to reveal internal documents to reporters, especially ones that Musk himself keeps claiming implicate him in potential criminal liability (yes, it happened before his time, but Musk purchased the liabilities of the company as well), it’s not just perfectly normal, but kinda necessary to have lawyers do some document review. Again, as a more charitable explanation, perhaps Musk just wanted a different lawyer to do the review, and my only answer there is maybe he shouldn’t have gotten rid of so many lawyers from the legal team. Might have helped.
So, look, there could be a possible issue here, but given how much has been totally misrepresented throughout this whole process, without any actual evidence to support the “Jim Baker mastermind” theory, it’s difficult to take it even remotely seriously when there’s a perfectly normal, non-nefarious explanation to how all of this went down.
The absence of evidence is not evidence that there’s a coverup. It might just be evidence that you’re prone to believing in unsubstantiated conspiracy theories, though.
6. Still, all this proved that Twitter is “illegally” biased towards Democrats!
Taibbi made a big deal out of the fact that Twitter employees overwhelmingly donated to Democrats in their political contributions, which is not exactly new or surprising. Musk commented on this as well, sarcastically suggesting it was proof of bias at Twitter, but he left out that the chart he was commenting on also included… Tesla, where over 90% of employee donations went to Democrats.
But, more importantly, it’s not surprising in the least. Employees of many companies lean left. Executives (who donate way more money) tend to lean right. I mean, you can look at a similar chart of executive donations that shows they overwhelmingly go to Republicans. Neither is illegal, or even a problem. It’s just reality.
And companies making editorial decisions are… in fact… allowed to have bias in their political viewpoints. I would bet that if you looked at donations by employees at the NY Post or Fox News, they would generally favor Republicans. Indeed, imagine what would happen if someone took over Fox News and suddenly started revealing (1) communications between Fox News execs and Republican politicians and campaigns and (2) internal editorial meeting notes regarding what to promote. Don’t you think it would be way more biased than what the Twitter files revealed?
Here’s the important point on that: Fox News’ clear bias is not illegal either. And, indeed, if Democrats in Congress held hearings on “Fox News’ bias” and demanded that its top executives appear and explain their editorial decision making in promoting GOP talking points, people should be outraged over the clear intimidation factor, which would obviously be problematic from a 1st Amendment angle. Yet I don’t expect people to get all that worked up about the same thing happening to Twitter, even though it’s actually the same issue.
Companies are allowed to be biased. But the amazing thing revealed in the Twitter files is just how little evidence there is that any bias was a part of the debate on how to handle this stuff. Everything appeared to be about perfectly reasonable business decisions.
And… that’s it. I fear that this story is going to live on for years and years and years. And the narrative full of nonsense is already taking shape. However, I like to work off of actual facts and evidence, rather than fever dreams and misinterpretations. And I hope that you’ll read this and start doing the same.