When Elon Musk took over Twitter, one of his primary stated goals was to “bring back free speech” to the platform. He was particularly critical of how Twitter had briefly blocked links to a New York Post story about Hunter Biden’s laptop in 2020. But now, the self-proclaimed “free speech absolutist” is doing the very thing he criticized: banning links and suspending journalists.
We’ve discussed at great length how almost everyone misremembers and misunderstands the whole “Twitter suppressed the Hunter Biden laptop story” saga.
But the key facts are these: Twitter chose to block the sharing of that link for 24 hours, claiming it violated its “hacked materials” policy. Many, many people (including us) called this out as bullshit, and Twitter backed down the very next day, admitted it was a mistake, and said it was clarifying the policy so that it would not cover news reporting. The NY Post account wasn’t allowed to post for a couple of weeks, because it was told it had to delete the offending tweet first.
Elon has expressed his anger at Twitter for doing this a few times. When he took over the company, one of the first things he did was to give Matt Taibbi access to internal communications about that (which showed… not much of interest beyond standard discussions within the company about how to handle potentially sensitive materials). He was very excited to reveal this:
He has also suggested that those involved deserved prison time:
(FWIW, it’s absolutely false that Twitter’s actions had any impact on the election. The Federal Election Commission investigated it and found nothing. Multiple reports show that the story gained more traction after Twitter blocked links to it. The block only lasted 24 hours, and no other site blocked links. Twitter didn’t block any of the links to any other story about the laptop).
Either way, last year we called out Elon’s hypocrisy when he did the same damn thing regarding the JD Vance hacked dossier. And yet, that story disappeared after a few days, even though it was arguably worse. That also involved secretive materials that recipients weren’t supposed to have related to a Presidential campaign. And in that case, Elon had no problem blocking links and suspending the reporter.
The only real difference was that it was done under the “doxing” policy rather than under the “hacked materials” policy. Elon has a history of stretching the definition of the “doxing” policy, and ignoring it when the doxing happens to people he dislikes.
And now it’s happened again.
For a while now, a bunch of people have insisted that a huge Elon stan on ExTwitter, named Adrian Dittman, was really an Elon-alt account. The main “clue” was that Dittman sounds eerily similar to Musk. They even did a Spaces together, though many people argued that it was just Musk talking to himself.
Dittman and Musk have occasionally joked about it, but whenever anyone tried to call them out directly on it, they tended to just play coy.
Over the weekend, the Spectator published a pretty compelling argument that Dittman really is not Elon, but rather a German guy living in Fiji who is a huge Elon fan and just happens to sound an awful lot like him.
Elon even responded to a tweet about the piece (jokingly) claiming to reveal that he really is Dittman. Except, you may notice something odd here:
Yeah, the tweet Elon is responding to is not available, saying it “violated the X Rules.” The company has banned all links to the Spectator article and suspended the author, Jacqueline Sweet, for 30 days, claiming that the article violated its doxing policy.
Yup, just like the NY Post with the Hunter Biden laptop story, where Twitter told them they had to first delete the offending tweet, the new ExTwitter also says the offending tweet must be deleted to start the countdown.
And, just as with the NY Post story, anyone trying to share the link is blocked from doing so:
In no world does this violate any actual “doxxing” policy. Dittman was posting under his own name, and the reporting simply confirmed that he is who he said he was. How is that doxxing? Revealing that someone is who they claim to be is not doxxing by any reasonable definition. Nor did the article reveal his location beyond “Fiji,” a country with about a million inhabitants.
But, either way, this is again Elon doing exactly the same thing that he loudly proclaimed was so horrible before, a supposedly egregious suppression of free speech that apparently required a takeover of Twitter and a public airing of the internal discussions that resulted in that decision.
Of course, as with the JD Vance story, these actions will quickly be forgotten, while we’ll undoubtedly keep hearing the misleading (or downright false) claim that Twitter illegally suppressed the story of the Hunter Biden laptop.
Yes, Elon is free to manage ExTwitter however he wants since it’s his property. But it would be nice if some people (including Elon!) could at least have the intellectual honesty to admit (1) that he’s doing the same damn thing that he got upset at Twitter for doing and (2) that this completely undermines his claims about why he had to take over the site.
There is a widely believed but totally false claim by Trumpists that the “Biden administration” told Twitter to “censor” the NY Post article about the contents of Hunter Biden’s laptop four years ago. Indeed, just a few weeks ago, Donald Trump himself accused the Biden administration of doing as much:
Of course, in 2020 Donald Trump was in the White House. So either he’s a very confused old man or he’s accusing himself of rigging the election against himself.
Of course, none of this is actually true. As we’ve covered extensively, there remains zero evidence that the Trump White House or even the Biden campaign put pressure on social media to block that story. What did happen was (1) the FBI sent generic warnings to social media to be on the lookout for foreign adversaries conducting hack-and-leak operations to impact the election, but took no stance on the Hunter Biden story, and (2) the Biden campaign made a few requests to social media sites solely asking the companies to remove nude selfies that Hunter had on his laptop (i.e., nothing political).
Still, if just a month ago Trump was claiming that any effort to ask Twitter to limit access is proof of an election being “rigged,” you’d have to imagine that he’s furious that [checks notes] his own campaign reached out directly to Elon Musk to get him to remove any link to the hacked-and-leaked internal Trump campaign dossier on JD Vance.
You will recall, of course, that hackers from Iran supposedly social engineered their way into the Trump campaign system and have been trying to get the media to share the docs. They finally found a rando Substacker willing to do so, and then he was quickly banned, as were all links to the document on ExTwitter. As we noted at the time, basically everyone seemed to switch their positions on whether or not this was okay.
Supporters of Elon and Trump insisted that this banning and blocking was absolutely aboveboard and necessary. Supporters of Harris insisted that this was awful, terrible censorship and election interference. Neither seemed willing to recognize that the scenario was effectively identical to four years ago.
Except, it was even more extreme. The NY Times reported late last week that — unlike four years ago — the Trump campaign actually did call Elon to get the content removed.
The relationship has proved significant in other ways. After a reporter’s publication of hacked Trump campaign information last month, the campaign connected with X to prevent the circulation of links to the material on the platform, according to two people with knowledge of the events. X eventually blocked links to the material and suspended the reporter’s account.
So, if you’re playing along at home: the Biden campaign did not do anything to get Twitter to block the original story, yet to Trumpists, it proves that they “rigged the election” against Trump. Meanwhile, the same platform is now controlled by someone who has explicitly come out in favor of Donald Trump and revamped the site to basically push pro-Trump material over and over again.
And when a similar situation developed, this time the Trump campaign did reach out to ExTwitter and got them to put in place much more restrictive blocks on the content (old Twitter only blocked one link for 24 hours before reversing course).
Once again, we’re seeing how this works: if Trump does it, it’s perfectly reasonable and no problem. If anyone else is accused of doing it (misleadingly) in favor of a Democrat, it’s treason and election interference.
What a fucked up, stupid situation.
Even worse, as the Washington Post’s Philip Bump has noted, when JD Vance is asked whether or not Trump won the 2020 election, he now points to the government “censorship” (that did not happen) of the NY Post story as a retort.
“Did Donald Trump lose the 2020 election?” he was asked.
“Did big technology companies censor a story that independent studies have suggested would have cost Trump millions of votes?” he replied.
This is his parry, the idea that one couldn’t say the 2020 election was fair because there was an effort to censor this determinative story. It is, as we’ve noted in the past, a way for people unwilling to echo Trump’s wilder election-fraud claims to instead point to something less easily falsifiable, this idea that anti-Trump forces put their thumbs on the scales.
But what Vance says here is falsifiable. It is not the case that tech companies censoring a story — specifically, a New York Post story about an email attributed to a laptop owned by Joe Biden’s son Hunter — cost Trump the election.
This is a seven-layer cake of lies. Vance is lying about almost everything here to paint a totally false picture and to avoid admitting what he knows is true: that Donald Trump lost the election in 2020.
It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.
A week and a half ago, the Senate Intelligence Committee held a hearing about the threat of foreign intelligence efforts to interfere with US elections. Senator Tom Cotton, who believes in using the US military to suppress American protests, used the opportunity to berate Meta and Google for supposedly (but not really) “suppressing” the Hunter Biden laptop story:
In that session — which I feel the need to remind you was just held on September 18th — both Nick Clegg from Meta and Kent Walker from Google were made to promise that they would never, ever engage in anything like the suppression of the Hunter Biden laptop story (Walker noted that Google had made no effort to suppress it when that happened in the first place).
Clegg explicitly said that a similar demotion “would not take place today.”
Take a wild guess where this is going?
Exactly one week and one day after that hearing, Ken Klippenstein released the Trump campaign’s internal vetting dossier on JD Vance. It’s pretty widely accepted that the document was obtained via hacking by Iranian agents and had been shopped around to US news sites for months. Klippenstein, who will do pretty much anything for attention, finally bit.
In response, Elon immediately banned Ken’s ExTwitter account and blocked any and all links to not just the document, but to Ken’s Substack. He went way further than anyone ever did regarding the original Hunter Biden laptop story and the content revealed from that laptop. We noted the irony of how the scenario is nearly identical to the Hunter Biden laptop story, but everyone wants to flip sides in their opinion of it.
Elon being a complete fucking hypocrite is hardly new. It’s almost to be expected. That combined with his public endorsement (and massive funding) of the Trump/Vance campaign means it’s noteworthy, but not surprising, that he’d do much more to seek to suppress the Vance dossier than old Twitter ever did about the Hunter laptop story.
So, what about Meta and Google? After all, literally a week earlier, top execs from each company said in a Senate hearing under oath that they would never seek to suppress similar content this year.
And yet…
That’s the link to the dossier on Threads with a message saying “This link can’t be opened from Threads. It might contain harmful content or be designed to steal personal information.”
Ah. And remember, while Twitter did restrict links to the NY Post article for about 24 hours, Meta never restricted the links. It merely kept the Facebook algorithm from promoting the story until it had checked and made sure the story was legit. But here, they’re blocking all links to the Vance dossier on all their properties. When asked, a Meta spokesperson told the Verge:
“Our policies do not allow content from hacked sources or content leaked as part of a foreign government operation to influence US elections. We will be blocking such materials from being shared on our apps under our Community Standards.”
Yeah, but again, literally a week ago, Nick Clegg said under oath that they wouldn’t do this. The “hacked sources” policy was the excuse Twitter had used to block the NY Post story.
Does anyone realize how ridiculous all of this looks?
And remember how Zuckerberg was just saying he regrets “censoring” political content? Just last week, there was a big NY Times piece arguing, ridiculously, that Zuck was done with politics. Apparently it’s only Democrat-politics that he’s done with.
As for Google, well, Walker told Senator Cotton that the Biden laptop story didn’t meet their standards to have it blocked or removed. But apparently the Vance dossier does. NY Times reporter Aric Toler found that you can’t store the document in your Google Drive: Drive flags it as violating Google’s policies against “personal and confidential information”:
As we’ve said over and over again, neither of these things should have been blocked. The NY Post story shouldn’t have been blocked, and the Vance dossier shouldn’t have been blocked. Yes, there are reasons to be concerned about foreign interference in elections, but if something is newsworthy, it’s newsworthy. It’s not for these companies to determine what’s newsworthy at all.
It was understandable why, in the fog of the Hunter Biden story’s release, both Twitter and Meta said “let’s pump the brakes and see…” But given how much attention has been paid to all that since, including literally one week before this, immediately moving to block the Vance dossier certainly raises a ton of questions.
Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of “big tech bias” (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise.
And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.
It’s been all of [checks calendar] one freaking day since we wrote about Elon Musk’s hypocrisy on free speech compared to the old Twitter regime, and he has to go and make another example.
Twitter, under old management: Briefly limits sharing of (at the time) unverified Hunter Biden laptop story. Elon: “Outrageous censorship!” and possibly a “First Amendment violation!”
ExTwitter, under Elon: Blocks links to leaked JD Vance dossier. Also Elon: “Most egregious doxxing ever!” Hmm…
As we’ve discussed for years now, very few people fully understand what happened four years ago with Twitter and the NY Post’s story about the content of Hunter Biden’s laptop. Two years ago, we pieced together what actually happened based on information from lawsuits, but also from what Elon released after taking over Twitter (though he did so misleadingly).
In short, Twitter had a very, very broad policy (too broad!) regarding “hacked materials.” We had criticized how that policy had been used to hide news reports before the whole Hunter Biden laptop story came out, warning that the policy was too broad and resulted in blocking legitimate news based on leaks.
At the same time, there were widespread (legitimate) concerns that foreign entities might engage in “hack and dump” efforts to leak critical information, as had happened in 2016. The folks who had access to the details of the laptop had shopped the contents around to multiple news sources, including Fox News, all of which refused to publish them. Eventually, the NY Post bit on the story, though even its main author was so unsure of it that he asked for his name to be taken off the byline. The actual content revealed in the story was… not really particularly interesting or revelatory.
Given the general concerns about amplifying a “hack and dump” campaign perhaps by a foreign adversary, and with no direct communication by the government, Twitter had a quick internal discussion. Then, they decided to limit access to the NY Post’s story under the “hacked materials” policy (as they had done before) until they knew more about the provenance of the laptop content. At that point, users were unable to share the link to just that story.
The internal leaks from the company showed that the decision makers inside the company struggled with how to deal with this, but politics did not come into play. Instead, they noted that given it “is an emerging situation where the facts remain unclear” and the risks, they decided to err on the side of caution and limit the distribution.
This did not actually limit interest in the article (hello Streisand Effect), which got way more traffic once Twitter made that decision.
Just one day later, Twitter admitted it had made a mistake, changed the policy, and again began allowing users to share that story.
Following that, there have been years of nonsense. This includes a firm (false) belief that Twitter actively tried to stifle the story for political reasons, that it blocked the story for months, that it knew the story was real, and that the FBI and/or the non-existent Biden administration (remember, Trump was the President at the time) had ordered Twitter to suppress the story.
An election interference lawsuit was filed… and rejected. There were Congressional investigations from Jim Jordan, which turned up nothing (but which he still spun as exposing conspiratorial actions).
But to many, including Elon Musk and many of his most vocal fans, it is taken as fact that old evil Twitter deliberately censored that story for political reasons, possibly changing the course of the 2020 election (even though literally none of that is accurate).
When his own company released the fact that the Biden campaign (not administration) asked Twitter if it might remove five tweets that showed Hunter Biden dick pics that were revealed as a part of the leak, Elon claimed that this story was a quintessential “violation of the Constitution’s First Amendment,” even as the tweets clearly violated Twitter’s policy against the sharing of non-consensual nude images.
Indeed, many people cite that false narrative as a reason they’re happy that “free speech absolutist” Elon took over to make sure such a thing would never happen again.
Fast forward to yesterday…
Hold onto your hats, folks. This year, there are widespread (legitimate) concerns about foreign interference in the election, including “hack and dump” efforts. Over the last month, there have been tons of stories regarding how Iran had hacked Trump officials, obtained a bunch of material, and shopped it around to a variety of media sources, who all refused to publish it.
Eventually, one dipshit decided to publish at least some of it: the Trump internal dossier on JD Vance. In this case, the dipshit was Ken Klippenstein, an independent reporter, known for his terrible reporting as well as his willingness to beg for attention on social media.
The actual content revealed in the story was… not really particularly interesting or revelatory. It’s a dossier of all the reasons why Vance might be a bad VP choice. There’s little that’s surprising in there.
So, the scenario has an awful lot of similarities to the Hunter Biden laptop story, right? Almost eerily so. But this time, Elon Musk is in charge, right? And so, obviously, he left this up, right? And he let people share it, right? Free speech absolutism, right? Right? Elon?
Hahaha, of course not.
And if you try to share the link to Ken’s article? According to multiple people who have tried, it does not work. Here’s one screenshot of a few that I saw showing what happens if you try:
You also can’t share the link via DMs.
Another user on Twitter notes that their own account was temporarily suspended not even for tweeting out a link to the Vance dossier story, but for tweeting a link to Ken’s post about getting suspended!
ExTwitter Safety claims Ken’s is a “temporary” suspension (just like Twitter’s temporary limit on the NY Post — though in that case they didn’t suspend the account as they did here). And the reason given is that the dossier supposedly revealed Vance’s physical addresses and “the majority of his Social Security number.”
As opposed to, say, Hunter Biden’s dick pics.
That said, the link posted to ExTwitter did not, in fact, reveal the addresses or partial SSN. It linked to an article that Ken wrote about the dossier, which then did include a link to the file, but it’s still two clicks away from ExTwitter.
Ken points out that this particular info (Vance’s addresses and partial SSN) is widely available online or via data brokers. That still seems a bit iffy, and it feels like he could have easily redacted that info, but chose not to. There are plenty of cases that many people consider to be “doxxing” that are little more than getting info from a data broker.
Elon, though, is insisting that this was “one of the most egregious, evil doxxing actions we’ve ever seen.” Which is laughably untrue.
And, of course, unlike the old Twitter regime, which made no public displays of support for presidential candidates, Elon has publicly endorsed Donald Trump, become one of the largest donors to his campaign, and turned ExTwitter into a non-stop pro-Trump promotional media site. So, unlike the old Twitter regime, Elon has made it clear that he absolutely wants to use the site to elect his preferred candidate and would have political reasons for trying to suppress this marginally embarrassing dossier.
So… is Jim Jordan going to launch an investigation and hold hearings, like he did about Twitter and the NY Post over Hunter Biden’s laptop? Is he going to haul Elon before Congress and demand he explain what happened? Will Elon release the “X-Files” revealing the internal discussions he and his employees had over banning Ken and blocking the sharing of the link?
Or nah?
Already we’re seeing Musk’s biggest fans trying to come up with justifications for how these stories are totally different. But they’re literally not. On basically all important details they’re effectively identical.
Again, I said at the time (and even before the Biden laptop story came out) that I thought Twitter’s policy was bad and they were wrong to temporarily block the sharing of the link. I also think that Elon is wrong to suspend Ken and block the sharing of the links as well.
But watch the rank hypocrisy fly. The old Twitter regime at least struggled with this decision internally (as later revealed by Elon) and recognized that they were making a quick call based on imperfect information, a call they quickly reversed and apologized for.
I have a confession. While yesterday the House Oversight Committee took up six hours (sorta, as there was a big power outage in the middle) wasting everyone’s time with a hearing on “Twitter’s Role in Suppressing the Biden Laptop Story,” I chose not to watch it in real-time. Instead, afterwards I went back and watched the video at 3x speed (and skipped over the giant power outage part), meaning I was able to watch the whole thing in less than two hours. If you, too, wish to subject yourself to this abject nonsense, I highly recommend doing something similar. Though, a better option would be just not to waste your time.
Unfortunately, the panelists — four former Twitter employees — had neither option at hand and had to sit through all of the craziness. By this point, I’m kind of used to absolutely ridiculous hearings in Congress trying to “grill” tech execs over things. They have a familiar pattern: the elected officials engage in pure grandstanding, ironically designed to make clips of them go viral on the very social media they’re criticizing. But this one was even worse. Honestly, the four witnesses — former deputy general counsel James Baker, former legal chief Vijaya Gadde, former head of trust & safety Yoel Roth, and a former member of the safety policy team, Anika Collier Navaroli — barely had time to say anything. Almost all of the politicians used up most of their 5 minutes on their own grandstanding.
To the extent that they asked any questions (and this was, tragically, mostly true on both sides of the aisle, with only a few limited exceptions), they asked misleading, confused questions, and when any of the witnesses tried to clarify, or to express anything even remotely approaching nuance, the elected officials would steamroll over them and move on.
Nothing in the hearing was about finding out anything.
Nothing in the hearing was about exploring the actual issues and tradeoffs around content moderation.
Many of the Republicans wanted to just complain that their own tweets weren’t given enough prominence on Twitter. It was embarrassing. On the Democratic side, many of the Representatives (rightly) called out that the whole hearing was stupid nonsense, but that didn’t stop a few of them from pushing their own questionable theories, including the suggestion from Rep. Raskin (whose comments were mostly good, including calling out how obviously ridiculous the same panel would be if they called Fox News to explain its editorial choices) that Twitter’s failure to stop January 6th from happening was illegal or Rep. Bush’s suggestion that social media should be nationalized. On the GOP side, you had Rep. Boebert suggest that the panelists had broken the law in exercising their 1st Amendment rights, and multiple other Reps. insist over and over again — even as the panelists highlighted the contention was blatantly false — that Twitter deliberately suppressed the Biden laptop story.
Of course, if you’ve read BestNetTech, you already know what the Twitter files actually showed, which was that the decision to block the links to that one story for one day was a mistake, but had nothing to do with politics, or pressure from Joe Biden or the FBI. But the hearing was extremely short on facts from the Representatives, who just kept repeating false claim after false claim.
But… the biggest reveal was actually that the Donald Trump White House demanded that Twitter remove a tweet from Chrissy Teigen that Trump felt insulted by. Remember, in the original Twitter Files, Matt Taibbi had insisted that the Trump White House sent takedown demands to Twitter, but in all of the Twitter Files since then, no one (not Taibbi or any of the others who got access) has said anything about what Trump wanted taken down. Instead, it was Navaroli who talked about how the Trump White House had complained about this tweet and demanded Twitter take it down.
That tweet was in response to Trump whining that he didn’t get enough credit after he signed a Criminal Justice Reform bill. In the short four-tweet rant, Trump mentions “musician @johnlegend, and his filthy mouthed wife, are talking now about how great it is – but I didn’t see them around when we needed help getting it passed.” Teigen then responded as seen above.
And it actually sounds like Twitter did the same thing it does with every note from anyone — government official or other — and reviewed the tweet against its policies. Apparently, there was some sort of policy that would take down tweets if there were three insults in a tweet, and so they had to analyze if “pussy ass bitch” was three insults or one giant insult (or two? I dunno). Either way, it was determined that it didn’t meet the three insult threshold and remained on the site.
Still, this certainly raises the question: in all of the “Twitter Files,” where is the release of the details about Trump getting his panties in a bunch and demanding that Teigen’s tweet get taken down?
Now, I’m expecting that all the people in our comments who have insisted that the FBI highlighting tweets that might violate actual policies is a Constitutional violation will now admit that the former President they worship also violated the Constitution under their understanding of it… or, nah?
Speaking of the former President, Navaroli also revealed yet another way in which Twitter bent over backwards to protect Trump and other Republicans. She relayed the discussion over a tweet by Trump, in which he suggested that Congressional Representatives of color, with whom he had policy disagreements, should “go back and help fix the totally broken and crime infested places from which they came.”
At the time, Twitter’s policies had a rule against attacking immigrants, and even called out the specific phrase “go back to where you came from,” as violating that policy. Navaroli discussed how she flagged that tweet as violating the policy, but was overruled by people higher up on the team. And, soon after that, the policy was changed to remove that phrase as an example of a violation.
Now, there are arguments that could be made for why that particular tweet, in context, might not have truly violated the policy. There are also pretty strong arguments for why it did. Reasonable people can disagree, and I would imagine that there was some level of debate within Twitter. But to make that call and then soon after delete the phrase from the policy certainly suggests going the extra step not to “censor conservatives” but to give them extra leeway even as they violated the site’s policies repeatedly.
The whole thing was a parade of nonsense, and I even heard from a Republican Congressional staffer afterwards complaining about how the whole thing completely backfired on Republicans. They set out to “prove” that Twitter conspired with the US deep state to censor the Hunter Biden laptop story. And, in the end, the witnesses quite effectively debunked each point of that, while the key takeaway was instead that Trump demanded a tweet insulting him be taken down, and that Twitter explicitly changed its rules to protect Trump after he violated them.
Hello! Someone has referred you to this post because you’ve said something quite wrong about Twitter and how it handled something to do with Hunter Biden’s laptop. If you’re new here, you may not know that I’ve written a similar post for people who are wrong about Section 230. If you’re being wrong about Twitter and the Hunter Biden laptop, there’s a decent chance that you’re also wrong about Section 230, so you might want to read that too! Also, these posts are using a format blatantly swiped from lawyer Ken “Popehat” White, who wrote one about the 1st Amendment. Honestly, you should probably read that one too, because there’s some overlap.
Now, to be clear, I’ve explained many times before, in other posts, why people who freaked out about how Twitter handled the Hunter Biden laptop story are getting confused, but it’s usually been a bit buried. I had already started a version of this post last week, since people keep bringing up Twitter and the laptop, but then on Friday, Elon (sorta) helped me out by giving a bunch of documents to reporter Matt Taibbi.
So, let’s review some basics before we respond to the various wrong statements people have been making. Since 2016, there have been concerns raised about how foreign nation states might seek to interfere with elections, often via the release of hacked or faked materials. It’s no secret that websites have been warned to be on the lookout for such content in the leadup to the election — not with demands to suppress it, but just to consider how to handle it.
Partly in response to that, social media companies put in place various policies on how they were going to handle such material. Facebook set up a policy to limit certain content from trending in its algorithm until it had been reviewed by fact-checkers. Twitter put in place a “hacked materials” policy, which forbade the sharing of leaked or hacked materials. There were — clearly! — some potential issues with that policy. In fact, in September of 2020 (a month before the NY Post story) we highlighted the problems of this very policy, including somewhat presciently noting the fear that it would be used to block the sharing of content in the public interest and could be used against journalistic organizations (indeed, that case study highlights how the policy was enforced to ban DDOSecrets for leaking police chat logs).
The morning the NY Post story came out there was a lot of concern about the validity of the story. Other news organizations, including Fox News, had refused to touch it. NY Post reporters refused to put their name on it. There were other oddities, including the provenance of the hard drive data, which apparently had been in Rudy Giuliani’s hands for months. There were concerns about how the data was presented (specifically how the emails were converted into images and PDFs, losing their header info and metadata).
The fact that, much later on, many elements of the laptop’s history and provenance were confirmed as legitimate (with some open questions) is important, but it does not change the simple fact that on the morning the NY Post story came out, the story’s validity was extremely unclear (in either direction) except to extreme partisans in both camps.
Based on that, both Twitter and Facebook reacted somewhat quickly. Twitter implemented its hacked materials policy in exactly the manner that we had warned might happen a month earlier: blocking the sharing of the NY Post link. Facebook implemented other protocols, “reducing its distribution” until it had gone through a fact check. Facebook didn’t ban the sharing of the link (like Twitter did), but rather limited the ability for it to “trend” and get recommended by the algorithm until fact checkers had reviewed it.
To be clear, the decision by Twitter to do this was, in our estimation, pretty stupid. It was exactly what we had warned about just a month earlier regarding this exact policy. But this is the nature of trust & safety. People need to make very rapid decisions with very incomplete information. That’s why I’ve argued ever since then that while the policy was stupid, it was no giant scandal that it happened, and given everything, it was not a stretch to understand how it played out.
Also, importantly, the very next day Twitter realized it fucked up, admitted so publicly, and changed the hacked materials policy saying that it would no longer block links to news sources based on this policy (though it might add a label to such stories). The next month, Jack Dorsey, in testifying before Congress, was pretty transparent about how all of this went down.
All of this seemed pretty typical for any kind of trust & safety operation. As I’ve explained for years, mistakes in content moderation (especially at scale) are inevitable. And, often, the biggest reason for those mistakes is the lack of context. That was certainly true here.
Yet, for some reason, the story has persisted for years now that Twitter did something nefarious, engaging in election interference that was possibly at the behest of “the deep state” or the Biden campaign. For years, as I’ve reported on this, I’ve noted that there was literally zero evidence to back any of that up. So, my ears certainly perked up last Friday when Elon Musk said that he was about to reveal “what really happened with the Hunter Biden story suppression.”
Certainly, if there was evidence of something nefarious behind closed doors, that would be important and worth covering. And if it were true that every single one of the dozens of Twitter employees I’ve spoken with over the past few years had lied to me about what happened, well, that would also be useful to know.
And then Taibbi revealed… basically nothing of interest. He revealed a few internal communications that… simply confirmed everything that was already public in statements made by Twitter, Jack Dorsey’s Congressional testimony, and in declarations made as part of a Federal Elections Commission investigation into Twitter’s actions. There were general concerns about foreign state influence campaigns, including “hack and leak” in the lead up to the election, and there were questions about the provenance of this particular data, so Twitter made a quick (cautious) judgment call and implemented a (bad) policy. Then it admitted it fucked up and changed things a day later. That’s… basically it.
And, yet, the story has persisted over and over and over again. Incredibly, even after the details of Taibbi’s Twitter thread revealed nothing new, many people started pretending that it had revealed something major, with even Elon Musk insisting that this was proof of some massive 1st Amendment violation:
Now, apparently more files are going to be published, so something may change, but so far it’s been a whole lot of utter nonsense. But when I say that both here on BestNetTech and on Twitter, I keep seeing a few very, very wrong arguments being made. So, let’s get to the debunking:
1. If you said Twitter’s decision to block links to the NY Post was election interference…
You’re wrong. Very much so. First off, there was, in fact, a complaint to the FEC about this very point, and the FEC investigated and found no election interference at all. It didn’t even find evidence of it being an “in-kind” contribution. It found no evidence that Twitter engaged in politically motivated decision making, but rather handled this in a non-partisan manner consistent with its business objectives:
Twitter acknowledges that, following the October 2020 publication of the New York Post articles at issue, Twitter blocked users from sharing links to the articles. But Twitter states that this was because its Site Integrity Team assessed that the New York Post articles likely contained hacked and personal information, the sharing of which violated both Twitter’s Distribution of Hacked Materials and Private Information Policies. Twitter points out that although sharing links to the articles was blocked, users were still permitted to otherwise discuss the content of the New York Post articles because doing so did not directly involve spreading any hacked or personal information. Based on the information available to Twitter at the time, these actions appear to reflect Twitter’s stated commercial purpose of removing misinformation and other abusive content from its platform, not a purpose of influencing an election.
All of this is actually confirmed by the Twitter Files from Taibbi/Musk, even as both seem to pretend otherwise. Taibbi revealed some internal emails in which various employees (going increasingly up the chain) discussed how to handle the story. Not once does anyone in what Taibbi revealed suggest anything even remotely politically motivated. There was legitimate concern internally about whether or not it was correct to block the NY Post story, which makes sense, because they were (correctly) concerned about making a decision that went too far. I mean, honestly, the discussion is not only without political motive, but shows that the trust & safety apparatus at Twitter was concerned with getting this correct, including employees questioning whether or not these were legitimately “hacked materials” and questioning whether other news stories on the hard drive should get the same treatment.
There are more discussions of this nature, with people questioning whether or not the material was really “hacked” and initially deciding on taking the more cautious approach until they knew more. Twitter’s Yoel Roth notes that “this is an emerging situation where the facts remain unclear. Given the SEVERE risks here and lessons of 2016, we’re erring on the side of including a warning and preventing this content from being amplified.”
Again, exactly as has been noted, given the lack of clarity Twitter reasonably decided to pump the brakes until more was known. There was some useful back-and-forth among employees, the kind that happens in any company over major trust & safety decisions, in which Twitter’s then VP of comms questioned whether or not this was the right decision. This shows a productive discussion — not anything along the lines of pushing for any sort of politically motivated outcome.
And then deputy General Counsel Jim Baker (more on him later, trust me…) chimes in to again highlight exactly what everyone has been saying: that this is a rapidly evolving situation, and it makes sense to be cautious until more is known. Baker’s message is important:
I support the conclusion that we need more facts to assess whether the materials were hacked. At this stage, however, it is reasonable for us to assume that they may have been and that caution is warranted. There are some facts that indicate that the materials may have been hacked, while there are others indicating that the computer was either abandoned and/or the owner consented to allow the repair shop to access it for at least some purposes. We simply need more information.
Again, all of this is… exactly what everyone has said ever since the day after it happened. This was an emerging story. The provenance was unclear. There were some sketchy things about it, and so Twitter enacted the policy because they just weren’t sure and didn’t have enough info yet. It turned out to be a bad call, but in content moderation, you’re going to make some bad calls.
What is missing entirely is any evidence that politics entered this discussion at all. Not even once.
2. But Twitter’s decision to “suppress” the story was a big deal and may have swung the election to Biden!
I’m sorry, but there remains no evidence to support that silly claim either. First off, Twitter’s decision actually seemed to get the story a hell of a lot more attention. Again, as noted above, Twitter did nothing to stop discussion of the story. It only blocked links to one story in the NY Post, and only for that one day. And the very fact that Twitter did this (and Facebook took other action) caused a bit of a Streisand Effect (hey!), which got the underlying story a lot more attention.
The reality, though, is that the story just wasn’t that big of a deal for voters. Hunter Biden wasn’t the candidate. His father was. Everyone already pretty much knew that Hunter is a bit of a fuckup who was clearly profiting off of the situation personally, but there was no actual big story in the revelations (I mean, yeah, there are still some people who insist there are, but they’re the same people who misunderstood the things we’re debunking here today). And, if we’re going to talk about kids of Presidents profiting off of their last name, well, there’s a pretty long list to go down….
But don’t take my word for it, let’s look at the evidence. As reporter Philip Bump recently noted, there’s actual evidence in Google search trends that Twitter and Facebook’s decision really did generate a lot more interest in the story. It was well after both companies took action that searches on Google for Hunter Biden shot upward:
Also, soon after, Twitter reversed its policy, and there was widespread discussion of the laptop in the next three weeks leading up to the election. The brief blip in time in which Twitter and Facebook limited the story seemed to have only fueled much more interest in it, rather than “suppressing” it.
Indeed, another document in the “Twitter Files” highlights how a Democratic member of the House, Ro Khanna, actually reached out to Twitter to point this out and to question Twitter’s decision (if this was really a big Democratic conspiracy, you’d think he’d be supportive of the move, rather than critical of it, but the reverse was true.) Rep. Khanna’s email to Twitter noted:
I say this as a total Biden partisan and convinced he didn’t do anything wrong. But the story has now become more about censorship than relatively innocuous emails and it’s become a bigger deal than it would have been.
So again, the evidence actually suggests that the story wasn’t suppressed at all. It got more attention. It didn’t swing the election, because most people didn’t find the story particularly revealing.
3. The government pressured Twitter/Facebook to block this story, and that’s a huge 1st Amendment violation / treason / crime of the century / etc.
Yeah, so, that’s just not true. I’ve spent years calling out government pressure on speech, from Democrats (and more Democrats) to Republicans (and more Republicans). So I’m pretty focused on watching when the government goes over the line — and quick to call it out. And there remains no evidence at all of that happening here. At all. Taibbi admits this flat out:
Incredibly, I keep seeing people on Twitter claim that Taibbi said the exact opposite. And you have people like Glenn Greenwald who insist that Taibbi only meant “foreign” governments here, despite all the evidence to the contrary. If he had found evidence that there was US government pressure here… why didn’t he post it? The answer: because it almost certainly does not exist.
Some people point to Mark Zuckerberg’s appearance over the summer on Joe Rogan’s podcast as “proof” that the FBI directed both companies to suppress the story, but that’s not at all what Zuckerberg said if you listened to his actual comments. Zuckerberg admits that they make mistakes, and that it feels terrible when they do. He goes into a pretty detailed explanation of some of how trust & safety works in determining whether or not a user is authentic. Then Rogan asks about the laptop story, and Zuckerberg says:
So, basically, the background here, is the FBI basically came to us, some folks on our team, and were like “just so you know, you should be on high alert, we thought there was a lot of Russian propaganda in the 2016 election, we have it on notice, basically, that there’s about to be some kind of dump that’s similar to that. So just be vigilant.”
This does not say that the FBI came to Facebook and said “suppress the Hunter Biden laptop story.” It was just a general warning that the FBI had intelligence that there might be some foreign influence operations, and to “be vigilant.”
This is nearly identical to what Twitter’s then head of “site integrity,” Yoel Roth, noted in his declaration in the FEC case discussed above:
“[F]ederal law enforcement agencies communicated that they expected ‘hack-and-leak operations’ by state actors might occur in the period shortly before the 2020 presidential election . . . . I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.”
Basically the FBI is saying, in general, they have some intelligence that this kind of attack may happen, so be careful. It did not say to censor the info. It didn’t involve any threats. It wasn’t specifically about the laptop story.
And, in fact, as of earlier this week, we now have the FBI’s version of these events as well! That’s because of the somewhat silly lawsuit that Missouri and Louisiana filed against the Biden administration over Twitter’s decision to block the NY Post story. Just this week, Missouri released the deposition of FBI agent Elvis Chan, who is often found at the center of conspiracy theories regarding “government censorship.”
And Chan tells basically the same story with a few slight differences, mostly in terms of framing. Specifically, Chan says that he never told the companies to “expect” a hack and leak attack, but rather to be aware of the possibility, slightly contradicting Roth’s declaration:
Yeah, I don’t know what Mr. Roth meant, but what I’m letting you know is that from my recollection — I don’t believe we would have worded it so strongly to say that we expected there to be hacks. I would have worded it to say that there was the potential for hacks, and I believe that is how anyone from our side would have framed the comment.
And the reason I believe that is because I and the FBI, for that matter the U.S. intelligence community, was not aware of any successful hacks against political organizations or political campaigns.
You don’t think that intelligence officials described it in the way that Mr. Roth does here in this sentence in the affidavit?
Yeah, I would not have — I do not believe that the intelligence community would have expected it. I said that they would have been concerned about the potential for it.
In the deposition, Chan repeats (many, many times) that he wouldn’t have used the language saying such an effort would be “expected” but that it was something to look out for.
He also doesn’t recall Hunter Biden’s name even coming up, though he does say they warned them to be on the lookout for discussions on “hot button” issues, and notes that the companies themselves would often ask about certain scenarios:
So from my recollection, the social media companies, who include Twitter, would regularly ask us, “Hey, what kind of content do you think the nation state actors, the Russians would post,” and then they would provide examples. Like, “Would it be X” or “Would it be Y” or “Would it be Z.” And then we — I and then the other FBI officials would say, “We believe that the Russians will take advantage of any hot-button issue.” And we — I do not remember us specifically saying “Hunter Biden” in any meeting with Twitter.
Later on he says:
Yeah, in my estimation, we never discussed Hunter Biden specifically with Twitter. And so the way I read that is that there are hack-and-leak operations, and then at the time — at the time I believe he flagged one of the potential current events that were happening ahead of the elections.
You believe that he, Yoel Roth, flagged Hunter Biden in one of these meetings?
No. I believe — I don’t believe he flagged it during one of the meetings. I just think that — so I don’t know. I cannot read his mind, but my assessment is because I don’t remember discussing Hunter Biden at any of the meetings with Twitter, that we didn’t discuss it.
So this would have been something that he would have just thought of as a hot-button issue on his own that happened in October.
He goes into great detail about meeting with tons of companies, but notes that mostly he’d talk to them about cybersecurity threats, not disinformation. He talks a bit about Russian disinformation campaigns, highlighting the well known Internet Research Agency, which specialized in pushing divisive messaging on US social media platforms. However, he basically confirms that he never discussed the laptop with anyone at any of these companies, and the deposition makes it pretty clear that if anyone at the FBI would have done so, it either would have been Chan himself or done with Chan’s knowledge.
As for the NY Post story, and the laptop itself, he notes he found out about it through the media, just like everyone else. And then he says that he didn’t talk with anyone at Twitter or Facebook about it, despite being their main contact on these kinds of issues.
Q. It’s your testimony that those news articles are the first time that you became aware that — you became aware of Hunter Biden’s laptop in any connection?
Yes. I don’t remember if it was a New York Post article or if it was another media outlet, but it was on multiple media outlets, and I can’t remember which article I read.
And before that day, October 14th, 2020, were you aware — were you aware of Hunter Biden — had anyone ever mentioned Hunter Biden’s laptop to you?
No.
[….]
Do you know if anyone at Twitter reached out to anyone at the FBI to check or verify anything about the Hunter Biden story?
I am not aware of any communications between Yoel Roth and the FBI about this topic.
Are you aware of any communications between anyone at Twitter and anyone in the federal government about the decision to suppress content relating to the Hunter Biden laptop story once the story had broken?
I am not aware of Mr. Roth’s discussions with any other federal agency. As I mentioned, I am not aware of any discussions with any FBI employees about this topic as well. But I only know who I know. So I don’t — he may have had these conversations, but I was not aware of it.
You mentioned Mr. Roth. How about anyone else at Twitter, did anyone else at Twitter reach out, to your knowledge, to anyone else in the federal government?
So I can only answer for the FBI. To my knowledge, I am not aware of any Twitter employee reaching out to any FBI employee regarding this topic.
[….]
How about Facebook, other than that meeting you referred to where an analyst asked the FBI to comment on the Hunter Biden investigation, are you aware of any communications between anyone at Facebook and anyone at the FBI related to the Hunter Biden laptop story?
No.
How about any other social media platform?
No.
How about Apple or Microsoft?
No.
Basically, the exact same story emerges no matter how you look at it. The FBI, along with CISA, would have various meetings with internet companies mainly to warn them about cybersecurity (i.e., hacking) threats, but also generally mentioned the possibility of hack and leak attempts with a general warning to be on the lookout for such things, and that they may touch on “hot button” social and news topics. Nowhere is there any indication of pressure or attempts to tell the companies what to do, or how they should handle it. Just straight up information sharing.
When you look at all three statements — Zuckerberg’s, Roth’s, and Chan’s — basically the same not-very-interesting story emerges. The US government had some general meetings that happen with lots of big companies to warn them about various potential cybersecurity threats, and the issue of hack-and-leak campaigns as a general possibility came up with no real specifics and no warnings.
And no one communicated with the companies directly about the NY Post story.
Given all that, I honestly don’t see how there’s any reasonable concern here. There’s certainly no clear 1st Amendment concern. There appears to be zero in the way of government involvement or pressure. There’s no coercion or even implied threats. There’s literally nothing at all (no matter how Missouri’s Attorney General completely misrepresents it).
Indeed, the only thing revealed so far that might be concerning regarding the 1st Amendment is that Taibbi claimed that the Trump administration allegedly made demands of Twitter.
If the Trump administration actually had sent requests to “remove” tweets (as Taibbi claims in an earlier tweet) that would most likely be a 1st Amendment issue. However, Taibbi reveals no such requests, which is really quite remarkable. It is also possible that Taibbi is overselling these claims, because this is a part of a discussion that we’ll get to in the next section, regarding Twitter’s flagging tools, which anyone (including you or me) can use to flag content for Twitter to review to see if it violates the company’s terms of service. While there are certainly some concerns about the government’s use of such tools, unless there’s some sort of threat or coercion, and as long as Twitter is free to judge the content for itself and determine how to handle it under its own terms, there’s probably no 1st Amendment issue.
Indeed, some people have highlighted the fact that the government gets “special treatment” in having its flags reviewed. But, from people I’ve spoken to, that actually goes against the “1st Amendment violation!” argument, because many social media companies set up special systems for government agents not to enable “moar censorship!” but because they know they have to be extra vigilant in reviewing those requests so as not to take down content mistakenly based on a government request.
So, sorry, so far there appears to be no government intrusion, and certainly no 1st Amendment violation.
4. The Biden campaign / Democrats demanded Twitter censor the NY Post! And that’s a 1st Amendment violation / treason / the crime of the century / etc.
So, again, the only way that there’s a 1st Amendment violation is if the government issued the demand. And in October of 2020, the Biden campaign and the Democratic National Committee… were not the government. The 1st Amendment does not restrict their ability, as private citizens (even while campaigning for public office), to flag content for Twitter to review against its policies. Hilariously, Elon Musk seems kinda confused about how time works. That tweet that we screenshotted above about the “1st Amendment” violation is in response to an internal email that Taibbi revealed about what Taibbi (misleadingly) says are “requests from connected actors to delete tweets” followed by a screenshot of Twitter employees listing out some tweets saying “more to review from the Biden team” and someone responding “handled these.”
Taibbi’s next tweet showed a similar set of requests, this time sent over from the Democratic National Committee (rather than the Biden campaign, as in the first one). This includes a tweet from the actor James Woods, which the Twitter team calls special attention to for being “high profile.”
Except, as a few enterprising folks discovered when looking up those tweets listed, they were… basically Hunter Biden nude images that were found on the laptop hard drive, which clearly violated Twitter’s terms of service (and likely violated multiple state laws regarding the sharing of nonconsensual nude images). This includes the James Woods tweet, which included a fake Biden campaign ad that showed a naked picture of Hunter Biden lying on a bed with his (only slightly blurred) penis quite visible. I’m not going to share a link to the image.
A good investigative reporter might have looked up what was in those tweets before posting a conspiratorial post implying that these were attempts by the campaign to remove the NY Post story or some other important information. But Taibbi did not. Nor has he commented on it since.
On top of that, while Taibbi claims that these were “requests to delete,” as the Twitter email quite clearly says, these are for Twitter to “review.” In other words, these were flagged for Twitter to review and determine whether they violated its policies, as the naked images clearly did.
So, there’s clearly no 1st Amendment concern here. First, despite Musk’s understanding of the space-time continuum, the Biden administration was not in the White House in October of 2020. Second, even if we’re concerned about political campaigns asking for content to be deleted, flagging content for companies to review against their policies is not (in any way) the same as demanding it be deleted. Anyone can flag content. And then the company reviews it and makes a determination.
Even more importantly, nothing revealed so far suggests that the campaign had anything to say to Twitter regarding the NY Post story or any story regarding the laptop. Literally the only concerns raised were about the naked pictures.
Finally, as noted above, the only other Democrat mentioned so far in the Twitter files is Rep. Ro Khanna, who told Twitter it was wrong to block the links to the NY Post article, and urged it to rescind the decision in the name of free speech. That does not sound like the Democrats secretly pressuring the company to block the story. It kinda sounds like the exact opposite.
So despite what everyone keeps yelling on Twitter (including Elon Musk) this still doesn’t appear to be evidence of “censorship” or even “suppression of the Hunter Biden laptop story.” It’s just focused on the nonconsensual sharing of Hunter’s naked images.
As a side note, Woods has now said he’s going to sue over this, though for the life of me I have no idea what sort of claim he thinks he has, or how it’s going to go over in court when he claims his rights were violated when he was unable to share Hunter’s dick pic.
5. But Jim Baker! He worked for the FBI! And he was in charge of the Twitter files! Clearly he’s covering up stuff!
Here we are ripping from the stupidity headlines. This one came out just last night as Taibbi added a “supplement” to the Twitter files, again seemingly confused about how basically anything works. According to Taibbi in a very unclear and awkwardly worded thread, he and Bari Weiss (another opinion columnist Musk has decided to share the files with) were having some sort of “complication” in accessing the files. Taibbi claims that Twitter’s Deputy General Counsel, Jim Baker, was reviewing the files, and somehow this was a problem (he does not explain why or how, though there’s a lot of conjecture).
Baker is, in fact, the former General Counsel at the FBI. It made news when he was hired.
Baker was the subject of a bunch of conspiracy theory stuff a few years ago regarding the FBI and some of the sillier theories about the Trump campaign, including the Steele Dossier and the even sillier “Alfa Bank” story (which had always been silly, and which lots of people, including us, mocked when it came out).
But despite all that, there’s really little evidence that Baker has done anything particularly noteworthy here. The stuff about his actions while at the FBI is totally overblown partisan hackery. People talk about the so-called “criminal investigation” he faced for his work looking into Russian interference in the 2016 election, but that appears to be something mostly cooked up by extreme Trumpists in the House, and it appears to have gone nowhere. And, yes, he was a witness at the Michael Sussmann trial, which was sorta connected to the Alfa Bank stuff, but his testimony supported John Durham, not Sussmann: he claimed that Sussmann made a false statement to him, the claim on which the entire case hinged (and, for what it’s worth, the trial ended in acquittal).
In other words, almost all of the FBI-related accusations against Baker are entirely “guilt by association” type claims, with nothing at all legitimate to back them up.
As for Twitter, we already highlighted Baker’s email that Taibbi revealed, which shows a normal, thoughtful, cautious discussion of a normal trust & safety debate, with nothing even remotely political.
The latest claims from Taibbi and Weiss also don’t make much sense. Elon Musk has told his company to hand over a bunch of internal documents to reporters. Any corporate lawyer would naturally do a fairly standard document review before doing so to make sure that they’re not handing over any private information or something else that might create legal issues for Musk. And since a large chunk of the legal team has left the company, it wouldn’t be all that surprising if the task ended up on Baker’s desk.
Now, you can argue (as Taibbi and others now imply) that there’s some massive conflict of interest here, but, uh… that’s not at all clear, and not really how conflict of interest works. And, again, there’s little indication that Baker had a major role here at all, beyond being one of many who weighed in on this matter (and did so in a perfectly reasonable manner).
Honestly, Baker not reviewing the documents first could itself have put him in jeopardy, for failing at a very basic function of his job: making sure the company he worked for didn’t expose itself to serious legal risk by revealing things that might create huge liabilities for Musk and the company.
Either way, late Tuesday, Musk announced that Baker had “exited” from the company, and when asked by a random Twitter user if he had been “asked to explain himself first” Musk claimed that Baker’s “explanation was… unconvincing.”
And perhaps there’s something more here that will be revealed by Weiss now that the shackles have been removed. But, based on what’s been stated so far, a perfectly plausible explanation is that Musk confronted Baker wanting to know why he was holding back the files and what his role was in “suppressing” the NY Post story. And Baker told him, truthfully, that his role was exactly as was revealed in the email (giving his general thoughts on the proper approach to handling the story) and that he was reviewing documents because that’s his job, and Musk got mad and fired him.
Somewhat incredibly, Musk also seemed to imply he only learned of Baker’s involvement on Sunday.
Some people are claiming that Musk is saying he only discovered that Baker worked for him on Sunday, which is possible but seems unlikely. Conspiracy theorists had pointed out Baker’s role at the company to Musk as far back as April. A more charitable explanation is that Musk only discovered that Baker was handling the document review on Sunday. And I guess that’s plausible but, again, really only reflects extremely poorly on Musk.
If he’s going to reveal internal documents to reporters, especially ones that Musk himself keeps claiming implicate him in potential criminal liability (yes, it happened before his time, but Musk purchased the liabilities of the company as well), it’s not just perfectly normal, but kinda necessary to have lawyers do some document review. Again, as a more charitable explanation, perhaps Musk just wanted a different lawyer to do the review, and my only answer there is maybe he shouldn’t have gotten rid of so many lawyers from the legal team. Might have helped.
So, look, there could be a possible issue here, but given how much has been totally misrepresented throughout this whole process, without any actual evidence to support the “Jim Baker mastermind” theory, it’s difficult to take it even remotely seriously when there’s a perfectly normal, non-nefarious explanation to how all of this went down.
The absence of evidence is not evidence that there’s a coverup. It might just be evidence that you’re prone to believing in unsubstantiated conspiracy theories, though.
6. Still, all this proved that Twitter is “illegally” biased towards Democrats!
Taibbi made a big deal out of the fact that Twitter employees overwhelmingly donated to Democrats in their political contributions, which is not exactly new or surprising. Musk commented on this as well, suggesting sarcastically it was proof of bias at Twitter, but left out that among the companies in the chart he was commenting on… was also Tesla, where over 90% of employee donations went to Democrats.
But, more importantly, it’s not surprising in the least. Employees of many companies lean left. Executives (who donate way more money) tend to lean right. I mean, you can look at a similar chart of executive donations that shows they overwhelmingly go to Republicans. Neither is illegal, or even a problem. It’s just reality.
And companies making editorial decisions are… in fact… allowed to have bias in their political viewpoints. I would bet that if you looked at donations by employees at the NY Post or Fox News, they would generally favor Republicans. Indeed, imagine what would happen if someone took over Fox News and suddenly started revealing (1) communications between Fox News execs and Republican politicians and campaigns and (2) internal editorial meeting notes regarding what to promote. Don’t you think it would be way more biased than what the Twitter files revealed?
Here’s the important point on that: Fox News’ clear bias is not illegal either. And, indeed, if Democrats in Congress held hearings on “Fox News’ bias” and demanded that its top executives appear and explain their editorial decision making in promoting GOP talking points, people should be outraged over the clear intimidation factor, which would obviously be problematic from a 1st Amendment angle. Yet I don’t expect people to get all that worked up about the same thing happening to Twitter, even though it’s actually the same issue.
Companies are allowed to be biased. But the amazing thing revealed in the Twitter files is just how little evidence there is that any bias was a part of the debate on how to handle this stuff. Everything appeared to be about perfectly reasonable business decisions.
And… that’s it. I fear that this story is going to live on for years and years and years. And the narrative full of nonsense is already taking shape. However, I like to work off of actual facts and evidence, rather than fever dreams and misinterpretations. And I hope that you’ll read this and start doing the same.
Do not believe everything you read. Even if it comes from more “respectable” publications. The Intercept had a big story this week that is making the rounds, suggesting that “leaked” documents prove the DHS has been coordinating with tech companies to suppress information. The story has been immediately picked up by the usual suspects, claiming it reveals the “smoking gun” of how the Biden administration was abusing government power to censor them on social media.
The only problem? It shows nothing of the sort.
The article is garbage. It not only misreads things, it is confused about what the documents the reporters have actually say, and presents widely available, widely known things as if they were secret and hidden when they were not.
The entire article is a complete nothingburger, and is fueling a new round of lies and nonsense from people who find it useful to misrepresent reality. If the Intercept had any credibility at all it would retract the article and examine whatever processes failed in leading to the article getting published.
Let’s dig in. Back in 2018, then President Donald Trump signed the Cybersecurity and Infrastructure Security Agency Act into law, creating the Cybersecurity and Infrastructure Security Agency as a separate agency in the Department of Homeland Security. While there are always reasons to be concerned about government interference in various aspects of life, CISA was pretty uncontroversial (perhaps with the exception of when Trump freaked out and fired the first CISA director, Chris Krebs, for pointing out that the election was safe and there was no evidence of manipulation or foul play).
While CISA has a variety of things under its purview, one thing that it is focused on is general information sharing between the government and private entities. This has actually been really useful for everyone, even though the tech companies have been (quite reasonably!) cautious about how closely they’ll work with the government (because they’ve been burned before). Indeed, as you may recall, one of the big revelations from the Snowden documents was about the PRISM program, which turned out to be oversold by the media reporting on it, but was still problematic in many ways. Since then, the tech companies have been even more careful about working with government, knowing that too much government involvement will eventually come out and get everyone burned.
With that in mind, CISA’s role has been pretty widely respected by almost everyone I’ve spoken to, both in government and at various companies. It provides information regarding actual threats, which has been useful to companies, and they seem to appreciate it. Given their historical distrust of government intrusion and their understanding of the limits of government authority here, the companies have been pretty attuned to any attempt at coercion, and I’ve heard of no such complaints regarding CISA at all.
That’s why the story seemed like such a big deal when I first read the headline and some of the summaries. But then I read the article… and the supporting documents… and there’s no there there. There’s nothing. There’s… the information sharing that everyone already knew was happening and that has been widely discussed in the past.
Let’s go through the supposed “bombshells”:
Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.
This sounds all scary and stuff, but most of those “meeting minutes” are from the already very, very public Misinformation & Disinformation Subcommittee that was part of an effort to counter foreign influence campaigns. As is clear on their website, their focus is very much on information sharing, with an eye towards protecting privacy and civil liberties, not suppressing speech.
The MDM team’s guiding principle is the protection of privacy, free speech, and civil liberties. To that end, the MDM team closely consults with the DHS Privacy Office and DHS Office for Civil Rights and Civil Liberties on all activities.
The MDM team is also committed to collaboration with partners and stakeholders. In addition to civil society groups, researchers, and state and local government officials, the MDM team works in close collaboration with the FBI’s Foreign Influence Task Force, the U.S. Department of State, the U.S. Department of Defense, and other agencies across the federal government. Federal Agencies respective roles in recognizing, understanding, and helping manage the threat and dangers of MDM and foreign influence on the American people are mutually supportive, and it is essential that we remain coordinated and cohesive when we engage stakeholders.
As professor Kate Starbird notes, the Intercept article makes out like this was some nefarious secret meeting when it was actually a publicly announced meeting with public minutes, and part of the discussion was even on where the guardrails should be for the government so that it doesn’t go too far. Indeed, even though the public output of this meeting is available directly on the CISA website for anyone to download, The Intercept published a blurry draft version, making it seem more secret and nefarious. (Updated: to note that not all of the meeting minutes published by The Intercept were public: they include a couple of extra subcommittee minutes that are not on the CISA website, but which have nothing particularly of substance, and certainly nothing that supports the claims in the article. And all of the claims here stand: the committee is public, their meeting minutes are public, including summaries of the subcommittee efforts, even if not all the full subcommittee meeting minutes are public).
And if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:
Currently, many election officials across the country are struggling to conduct their critical work of administering our elections while responding to an overwhelming amount of inquiries, including false and misleading allegations. Some elections officials are even experiencing physical threats. Based on briefings to this subcommittee by an election official, CISA should be providing support — through education, collaboration, and funding — for election officials to pre-empt and respond to MD
It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.
There’s literally nothing nefarious at all.
The next paragraph in the Intercept piece then provides a text message that kinda debunks the entire framing of the article:
“Platforms have got to get comfortable with gov’t. It’s really interesting how hesitant they remain,” Microsoft executive Matt Masterson, a former DHS official, texted Jen Easterly, a DHS director, in February.
Masterson had worked in DHS on these kinds of programs and then moved over to Microsoft. But here he’s literally pointing out that the companies remain hesitant to work too closely with government, which is exactly what we’ve been saying all along, and completely undermines the narrative people have taken out of this article that it proves that the government was too chummy with the companies.
(Also updating to note that the original Intercept story falsely claimed that Masterson was working for DHS at the time of the text, which makes it sound more nefarious. They later quietly changed it, and only added a correction days later when people called them out on it).
Also, this text message is completely out of context, but hold on for that, because it comes up again later in the article.
Next up, the article takes a single quote out of context from an FBI official.
In a March meeting, Laura Dehmlow, an FBI official, warned that the threat of subversive information on social media could undermine support for the U.S. government. Dehmlow, according to notes of the discussion attended by senior executives from Twitter and JPMorgan Chase, stressed that “we need a media infrastructure that is held accountable.”
First off, this is generally no different than the nonsense the FBI says publicly, and there’s nothing in the linked document that suggests the companies were in agreement that anyone should be “held accountable.” But even if we look at what Dehmlow actually said, in context, while she did talk about accountability, she mostly focused on education.
Ms. Dehmlow was asked to provide her thoughts or to define a goal for approaching MDM and she mentioned “resiliency”. She stated we need a media infrastructure that is held accountable; we need to early educate the populace; and that today, critical thinking seems a problem currently, [REDACTED] Senior Advisor for Homeland Security and Director of Defending Democratic Institutions Center for Strategic and International Studies (CSIS) stated that civics education should be provided at all ages.
Read in context, it sure looks like Dehmlow means that the media should be “held accountable” by an educated public. I mean, there’s some notable irony in all of this, where Dehmlow is talking about better educating people on critical thinking, and that’s been turned into pure nonsense and misinformation.
From there, the misleading article jumps randomly to Meta’s interface for the government to submit reports, again implying that this is somehow connected to everything above (it’s not, it’s something totally different):
There is also a formalized process for government officials to directly flag content on Facebook or Instagram and request that it be throttled or suppressed through a special Facebook portal that requires a government or law enforcement email to use. At the time of writing, the “content request system” at facebook.com/xtakedowns/login is still live. DHS and Meta, the parent company of Facebook, did not respond to a request for comment. The FBI declined to comment.
Again, this is wholly unrelated to the paragraphs above it. The article is just randomly trying to tie this to it. Every company has systems for anyone to report information for the companies to review. But the big companies, for fairly obvious and sensible reasons, also set up specialized versions of that reporting system for government officials so that reports don’t get lost in the flow. Nothing in that system is about demanding or suppressing information, and it’s basically misinformation for the Intercept to imply otherwise. It’s just the standard reporting tool. The presentation that the Intercept links to is just about how government officials can log into the system because it has multiple layers of security to make sure that you’re actually a government official.
It remains difficult to see (1) how this is connected to the CISA discussion, and (2) how this is even remotely new, interesting or relevant. Indeed, you can find out more about this system on Facebook’s “information for law enforcement authorities” page, and the nefarious sounding “Content Request System (CRS)” highlighted in the document the Intercept shows appears to just be the system for law enforcement agents to request information regarding an investigation. That is, a system for submitting a subpoena, court order, search warrant, or national security letter.
Update: Now there is also a part of the system that enables governments to report potential misinformation and disinformation, though again that appears to be the same kind of reporting that anyone can do, because such information breaks Facebook’s rules. The actual document this comes from again does not seem nefarious at all. It literally is just saying the government can alert Facebook to content that violates its existing rules.
So, it allows law enforcement to report the content, but it shows with it the relevant rules. This is the same kind of reporting that any regular user can do, it’s just that law enforcement is viewed as a “trusted” flagger, so their flags get more attention. It does not mean that the government is censoring content, and Facebook’s ongoing transparency reports show that they often reject these requests.
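To make that distinction concrete, here’s a minimal, purely hypothetical sketch (none of this is Facebook’s or Twitter’s actual system; the function names and policy categories are invented for illustration) of how a “trusted flagger” queue can prioritize review without predetermining the outcome: a government report gets looked at sooner, but it goes through the same policy check as anyone else’s report, and can still come back “no action.”

```python
from dataclasses import dataclass, field
import heapq

# Hypothetical policy check: stands in for whatever rules a platform actually
# applies (nonconsensual imagery, impersonation, phishing, etc.).
def violates_policy(content: str) -> bool:
    banned_categories = ("nonconsensual imagery", "impersonation", "phishing")
    return any(tag in content for tag in banned_categories)

@dataclass(order=True)
class Report:
    priority: int                      # lower number = reviewed sooner
    content: str = field(compare=False)
    source: str = field(compare=False)

def enqueue(queue, content, source):
    # "Trusted" flaggers (e.g., verified government accounts) jump the line,
    # but that only affects *when* a report is reviewed, not *how*.
    priority = 0 if source == "trusted_flagger" else 1
    heapq.heappush(queue, Report(priority, content, source))

def review_all(queue):
    decisions = []
    while queue:
        report = heapq.heappop(queue)
        action = "remove" if violates_policy(report.content) else "no action"
        decisions.append((report.source, report.content, action))
    return decisions

queue = []
enqueue(queue, "tweet flagged as impersonation of an election office", "trusted_flagger")
enqueue(queue, "tweet someone just doesn't like", "trusted_flagger")
enqueue(queue, "ordinary user report: phishing link", "regular_user")
print(review_all(queue))
# The report that doesn't actually break a rule comes back "no action",
# trusted flagger or not: priority review is not the same as removal.
```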
After tossing in that misleading and unrelated point, the article takes another big shift, jumping to a separate DHS “Homeland Security Review” in which DHS warns about the problem of “inaccurate information,” which, you know, is a legitimate thing for DHS to be concerned about, because it can impact security. It’s certainly quite reasonable to be worried about DHS overreach. We’ve screamed about DHS overreach for years.
But I keep reading through the article and the documents, and there’s nothing here.
The report notes that there’s a lot of misinformation, and there is, including on the withdrawal of US troops from Afghanistan. That’s true, and it seems like a reasonable concern for DHS… but the Intercept then throws in a random quote about how Republicans (who have been one source of misinformation about the withdrawal) are planning to investigate if they retake the House.
The inclusion of the 2021 U.S. withdrawal from Afghanistan is particularly noteworthy, given that House Republicans, should they take the majority in the midterms, have vowed to investigate. “This makes Benghazi look like a much smaller issue,” said Rep. Mike Johnson, R-La., a member of the Armed Services Committee, adding that finding answers “will be a top priority.”
But how is that relevant to the rest of the article and what does it have to do with the government supposedly suppressing information or working with the companies? The answer is absolutely nothing at all, but I guess it’s the sort of bullshit you throw in to make things sound scary when your “secret” (not actually secret) documents don’t actually reveal anything.
There’s also a random non sequitur about DHS in 2004 ramping up the national threat level for terrorism. What’s that got to do with anything? ¯\_(ツ)_/¯
The article keeps pinballing around to random anecdotes like that, which are totally disconnected and have nothing to do with one another. For example:
That track record has not prevented the U.S. government from seeking to become arbiters of what constitutes false or dangerous information on inherently political topics. Earlier this year, Republican Gov. Ron DeSantis signed a law known by supporters as the “Stop WOKE Act,” which bans private employers from workplace trainings asserting an individual’s moral character is privileged or oppressed based on his or her race, color, sex, or national origin. The law, critics charged, amounted to a broad suppression of speech deemed offensive. The Foundation for Individual Rights and Expression, or FIRE, has since filed a lawsuit against DeSantis, alleging “unconstitutional censorship.” A federal judge temporarily blocked parts of the Stop WOKE Act, ruling that the law had violated workers’ First Amendment rights.
I keep rereading that, and the paragraph before and after it, trying to figure out if they were working on a different article and accidentally slipped it into this one. It has nothing whatsoever to do with the rest of the article. And Ron DeSantis is not in “the U.S. government.” While he may want to be president, right now he’s governor of Florida, which is a state, not the federal government. It’s just… weird?
Then, finally, after these random tangents, with zero effort to thread them into any kind of coherent narrative, the article veers back to DHS and social media by saying it’s not actually clear if DHS is doing anything.
The extent to which the DHS initiatives affect Americans’ daily social feeds is unclear. During the 2020 election, the government flagged numerous posts as suspicious, many of which were then taken down, documents cited in the Missouri attorney general’s lawsuit disclosed. And a 2021 report by the Election Integrity Partnership at Stanford University found that of nearly 4,800 flagged items, technology platforms took action on 35 percent — either removing, labeling, or soft-blocking speech, meaning the users were only able to view content after bypassing a warning screen. The research was done “in consultation with CISA,” the Cybersecurity and Infrastructure Security Agency.
Again, this is extremely weak sauce. People “report” content that violates social media platform rules all the time. You and I can do it. The very fact that the article admits the companies only “took action” on 35% of reports (and again, only a subset of that was removing) shows that this is not about the government demanding action and the companies complying.
In fact, if you actually read the Stanford report (which it’s unclear if these reporters did), the flagged items they’re talking about are ones that the Election Integrity Project flagged, not the government. And, even then, the 35% number is incredibly misleading. Here’s the paragraph from the report:
We find, overall, that platforms took action on 35% of URLs that we reported to them. 21% of URLs were labeled, 13% were removed, and 1% were soft blocked. No action was taken on 65%. TikTok had the highest action rate: actioning (in their case, their only action was removing) 64% of URLs that the EIP reported to their team.
So the most active in removals was TikTok, which people already think is problematic, but the big American companies were even less involved. Second, only 13% of the reports resulted in removing the content, and the EIP report actually breaks down what kinds of content were removed vs. labeled, and it’s a bit eye-opening (and again destroys the Intercept’s narrative):
If you look, the only cases where the majority of reported content was removed, rather than just “labeled” (i.e., providing more information), were phishing attempts and fake official accounts. Those seem like the sorts of things where it makes sense for the platforms to take down that content, and I’m curious if the reporters at the Intercept think we’d be better off if the platforms ignored phishing attempts.
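For a rough sense of scale, here’s a quick back-of-the-envelope tally using only the figures already quoted above (the roughly 4,800 flagged items and the EIP report’s percentages); the arithmetic just makes plain that removals were a small slice of an already-minority “action” rate:

```python
# Figures quoted above: ~4,800 flagged URLs; the EIP report's breakdown of outcomes.
total_flagged = 4800
rates = {"labeled": 0.21, "removed": 0.13, "soft_blocked": 0.01, "no_action": 0.65}

for outcome, rate in rates.items():
    print(f"{outcome:>12}: {rate:>4.0%}  (~{round(total_flagged * rate)} URLs)")

actioned = rates["labeled"] + rates["removed"] + rates["soft_blocked"]
print(f"any action : {actioned:.0%} of reports")                    # ~35%
print(f"removed    : {rates['removed'] / actioned:.0%} of actions") # ~37% of actions, 13% overall
```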
The article then pinballs back to talking about DHS and CISA, how it was set up, and concerns about elections. Again, none of that is weird or secret or problematic. Finally, it gets to another bit that, when read in the article, sounds questionable and certainly concerning:
Emails between DHS officials, Twitter, and the Center for Internet Security outline the process for such takedown requests during the period leading up to November 2020. Meeting notes show that the tech platforms would be called upon to “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.”
Except if you look at the actual documents, again, they’re taking things incredibly out of context and turning nothing into something that sounds scary. The first link — supposedly the one that “outlines the process for such takedown requests” — does no such thing. It’s literally CISA passing information on to Twitter from the Colorado government, highlighting accounts that officials were worried were impersonating Colorado state official Twitter accounts.
The email flat out says that CISA “is not the originator of this information. CISA is forwarding this information, unedited, from its originating source.” And the “information” is literally accounts that Colorado officials are worried are pretending to be Colorado state official government accounts.
Now, it does look like at least some of those accounts may be parody accounts (at least one says so in its bio). But there’s no evidence that Twitter actually took them down. And nowhere in that document is there an outline of a process for a takedown.
The second document also does not seem to show what the Intercept claims. It shows some emails, where CISA was trying to set up a reporting portal to make all of this easier (state officials seeing something questionable and passing it on to the companies via CISA). What the email actually shows is that whoever is responding to CISA from Twitter has a whole bunch of questions about the portal before they’re willing to sign on to it. And those concerns include things like “how long will reported information be retained?” and “what is the criteria used to determine who has access to the portal?”
These are the questions you ask when you are making sure that this kind of thing is not government coercion, but is a limited purpose tool for a specific situation. The response from a CISA official does say that their hope is the social media companies will (as the Intercept notes) “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.” But in context, again, that makes sense. This portal is for election officials to report problematic accounts, and part of the point of the portal is that if the platforms agree that the content or accounts break their rules they will report back to the election officials.
And, again, this is not all that different from how things work for everyday users. If I report a spam account on Twitter, Twitter later sends me back a notification on the resolution of what I reported. This sounds like the same thing, but perhaps with a slightly more rapid response so that election officials know what’s happening.
Again, I’m having difficulty finding anything nefarious here at all, and certainly no evidence of coercion or the companies agreeing to every government request. In fact, it’s quite the opposite.
Then the article pinballs again, back around to the (again, very public) MDM team. And, again, it tries to spin what is clearly reasonable information sharing into something more nefarious:
CISA has defended its burgeoning social media monitoring authorities, stating that “once CISA notified a social media platform of disinformation, the social media platform could independently decide whether to remove or modify the post.”
And, again, as the documents (but not the article!) demonstrate, the companies are often resistant to these government requests.
Then suddenly we come back around to the Easterly / Masterson text messages. The texts are informal, which is not a surprise. They work in similar circles, and both have been at CISA (though not at the same time). The Intercept presents this text exchange in a nefarious manner, even as Masterson is making it clear that the companies are resistant. But the Intercept reporters leave out exactly what Masterson is saying they’re resistant to. Here’s what the Intercept says:
In late February, Easterly texted with Matthew Masterson, a representative at Microsoft who formerly worked at CISA, that she is “trying to get us in a place where Fed can work with platforms to better understand mis/dis trends so relevant agencies can try to prebunk/debunk as useful.”
Here’s the full exchange:
If you can’t read that, Easterly texts:
Thx so much! Really appreciate it. And sorry I didn’t ring last week… think you were on the call this week? Just trying to get us in a place where Fed can work with platforms to better understand the mis/dis trends so relevant agencies can try to prebunk/debunk as useful…
Not our mission but was looking to play a coord role so not every D/A is independently reaching out to platforms which could cause a lot of chaos.
And Masterson replies:
Was on the call. The coordination is greatly appreciated. Was disappointed that platforms including us didn’t offer more (we’ll get there) and sector leadership had 0 questions.
We’ll get there and that kind leadership really helps. Platforms have got to get more comfortable with gov’t. It’s really interesting how hesitant they remain.
Again Microsoft included.
This shows that the platforms are treading very carefully in working with government, even around this request which seems pretty innocuous. CISA is trying to help coordinate so that when local officials have issues they have a path to reach out to the platforms, rather than just reaching out willy-nilly.
We’re now deep, deep in this article, and despite all these hints of nefariousness, and people insisting that it shows how the government is collaborating with social media, all the underlying documents suggest the exact opposite.
Then the article pinballs back to the MDM meeting (whose recommendations are and have been publicly available on the CISA website), and notes that Twitter’s former head of legal, Vijaya Gadde, took part in one of the meetings. And, um, yeah? Again, the entire point of the MDM board is to figure out how to understand the information ecosystem and, as we noted up top, to do what they can to provide additional information, education and context.
There is literally nothing about suppression.
But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:
The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”
Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.
As for “taking steps to halt the spread” it also does not even remotely say that. If you look for the word “spread” it appears in the report seven times. Not once does it discuss anything about trying to halt the spread. It talks about teaching people how not to accidentally spread misinformation, about how the spread of misinformation can create a risk to critical functions like public health and financial services, how foreign adversaries abuse it, and how election officials lack the tools to identify it.
Honestly, the only point where “spread” appears in a proactive sense is where it says that they should measure “the spread” of CISA’s own information and messages.
The Intercept article is journalistic malpractice.
It then pinballs yet again, jumping to the whole DHS Disinformation Governance Board, which we criticized, mainly because of the near total lack of clarity around its rollout, and how the naming of it (idiotic) and the secrecy seemed primed to fuel conspiracy theories, as it did. But that’s unrelated to the CISA stuff. The conspiracy theories around the DGB (which was announced and disbanded within weeks) only help to fuel more nonsense in this article.
The article continues to pinball around, basically pulling random examples of questionable government behavior, but never tying it to anything related to the actual subject. I mean, yes, the FBI does bad stuff in spying on people. We know that. But that’s got fuck all to do with CISA, and yet the article spends paragraphs on it.
And then, I can’t even believe we need to go here, but it brings up the whole stupid nonsense about Twitter and the Hunter Biden laptop story. As we’ve explained at great length, Twitter blocked links to one article (not others) by the NY Post because they feared that the article included documents that violated its hacked materials policy, a policy that had been in place since 2019 and had been used before (equally questionably, but it gets no attention) on things like leaked documents of police chatter. We had called out that policy at the time, noting how it could potentially limit reporting, and right after there was the outcry about the NY Post story, Twitter changed the policy.
Yet this story remains the bogeyman for nonsense grifters who claim it’s proof that Twitter acted to swing the election. Leaving aside that (1) there’s nothing in that article that would swing the election, since Hunter Biden wasn’t running for president, and (2) the story got a ton of coverage elsewhere, and Twitter’s dumb policy enforcement actually ended up giving it more attention, this story is one about the trickiness in crafting reasonable trust & safety policies, not of any sort of nefariousness.
Yet the Intercept takes up the false narrative and somehow makes it even dumber:
In retrospect, the New York Post reporting on the contents of Hunter Biden’s laptop ahead of the 2020 election provides an elucidating case study of how this works in an increasingly partisan environment.
Much of the public ignored the reporting or assumed it was false, as over 50 former intelligence officials charged that the laptop story was a creation of a “Russian disinformation” campaign. The mainstream media was primed by allegations of election interference in 2016 — and, to be sure, Trump did attempt to use the laptop to disrupt the Biden campaign. Twitter ended up banning links to the New York Post’s report on the contents of the laptop during the crucial weeks leading up to the election. Facebook also throttled users’ ability to view the story.
In recent months, a clearer picture of the government’s influence has emerged.
In an appearance on Joe Rogan’s podcast in August, Meta CEO Mark Zuckerberg revealed that Facebook had limited sharing of the New York Post’s reporting after a conversation with the FBI. “The background here is that the FBI came to us — some folks on our team — and was like, ‘Hey, just so you know, you should be on high alert that there was a lot of Russian propaganda in the 2016 election,’” Zuckerberg told Rogan. The FBI told them, Zuckerberg said, that “‘We have it on notice that basically there’s about to be some kind of dump.’” When the Post’s story came out in October 2020, Facebook thought it “fit that pattern” the FBI had told them to look out for.
Zuckerberg said he regretted the decision, as did Jack Dorsey, the CEO of Twitter at the time. Despite claims that the laptop’s contents were forged, the Washington Post confirmed that at least some of the emails on the laptop were authentic. The New York Times authenticated emails from the laptop — many of which were cited in the original New York Post reporting from October 2020 — that prosecutors have examined as part of the Justice Department’s probe into whether the president’s son violated the law on a range of issues, including money laundering, tax-related offenses, and foreign lobbying registration.
The Zuckerberg/Rogan podcast thing has also been taken out of context by the same people. As he notes, the FBI gave a general warning to be on the lookout for false material, which was a perfectly reasonable thing for them to do. And, in response Facebook did not actually block links to the article. It just limited how widely the algorithm would share it until the article had gone through a fact check process. This is a reasonable way to handle information when there are questions about its authenticity.
But neither Twitter nor Facebook suggests that the government told them to suppress the story, because it didn’t. It told them generally to be on the lookout, and both companies did what they do when faced with similar info.
From there, the Intercept turns to a nonsense frivolous lawsuit filed by Missouri’s Attorney General and takes a laughable claim at face value:
Documents filed in federal court as part of a lawsuit by the attorneys general of Missouri and Louisiana add a layer of new detail to Zuckerberg’s anecdote, revealing that officials leading the push to expand the government’s reach into disinformation also played a quiet role in shaping the decisions of social media giants around the New York Post story.
According to records filed in federal court, two previously unnamed FBI agents — Elvis Chan, an FBI special agent in the San Francisco field office, and Dehmlow, the section chief of the FBI’s Foreign Influence Task Force — were involved in high-level communications that allegedly “led to Facebook’s suppression” of the Post’s reporting.
Now here, you can note that Dehmlow was the person mentioned way above who talked about platforms and responsibility, but as we noted, in context, she was talking about better education of the public. The section quoted in Missouri’s litigation is laughable. It’s telling a narrative for fan service to Trumpist voters. We already know that the FBI told Facebook to be on the lookout for fake information. The legal complaint just makes up the idea that Dehmlow tells them what to censor. That’s bullshit without evidence, and there’s nothing to back it up beyond a highly fanciful and politicized narrative.
But from there, the Intercept says this:
The Hunter Biden laptop story was only the most high-profile example of law enforcement agencies pressuring technology firms.
Except… it wasn’t. Literally nothing anywhere in this story shows law enforcement “pressuring technology firms” about the Hunter Biden laptop story.
The article then goes on at length about the silly politicized lawsuit, quoting two highly partisan commentators with axes to grind, before quoting former ACLU president Nadine Strossen claiming:
“If a foreign authoritarian government sent these messages,” noted Nadine Strossen, the former president of the American Civil Liberties Union, “there is no doubt we would call it censorship.”
Because of the horrible way the article is written, it’s not even clear which “messages” she’s talking about, but I’ve gone through every underlying document in the entire article and none of them involve anything remotely close to censorship. Given the selective quoting and misrepresentation in the rest of the article, it makes me wonder what was actually shown to Strossen.
As far as I can tell, the emails they’re discussing (again, this is not at all clear from the article) are the ones discussed earlier in which Colorado officials (not DHS) were concerned that some new accounts were attempting to impersonate Colorado officials. They sent a note to CISA, which auto-forwarded it to the companies. Yes, some of the accounts may have been parodies, but there’s no evidence that Twitter actually took action on the accounts, and the fact is that the accounts did make some effort to at least partially appear as Colorado official state accounts. All the government officials did was flag it.
I think Strossen is a great defender of free speech, but I honestly can’t see how anyone thinks that’s “censorship.”
Anyway, that’s where the article ends. There’s no smoking gun. There’s nothing. There are a lot of random disconnected anecdotes, misreading and misrepresenting documents, and taking publicly available documents and pretending they’re secret.
If you look at the actual details it shows… some fairly basic and innocuous information sharing, with nothing even remotely looking like pressure on the companies to take down information. We also see pushback from the companies, which are being extremely careful not to get too close to the government and to keep it at arm’s length.
But, of course, a bunch of nonsense peddlers are turning the story into a big deal. And other media is picking up on it and turning it into nonsense.
None of those headlines are accurate if you actually look at the details. But all are getting tremendous play all over the place.
And, of course, the reporters on the story rushed to appear on Tucker Carlson:
Except that’s not at all what the “docs show.” At no point do they talk about “monitoring disinformation.” And there is nothing about them “working together” on this beyond basic information sharing.
In fact, just after this story came out, ProPublica released a much more interesting (and better reported) article that basically talks about how the Biden administration gave up on fighting disinformation because Republicans completely weaponized it by misrepresenting perfectly reasonable activity as nefarious.
Instead, a ProPublica review found, the Biden administration has backed away from a comprehensive effort to address disinformation after accusations from Republicans and right-wing influencers that the administration was trying to stifle dissent.
Incredibly, that ProPublica piece quotes Colorado officials (you know, like the ones who emailed CISA their concern, which got forwarded to Twitter, about fake accounts) noting how they really could use some help from the government and they’re not getting it:
“States need more support. It is clear that threats to election officials and workers are not dissipating and may only escalate around the 2022 and 2024 elections,” Colorado Secretary of State Jena Griswold, a Democrat, said in an email to ProPublica. “Election offices need immediate meaningful support from federal partners.”
I had tremendous respect for The Intercept, which I think has done some great work in the past, but this article is so bad, so misleading, and just so full of shit that it should be retracted. A credible news organization would not put out this kind of pure bullshit.