When Elon Musk took over Twitter, one of his primary stated goals was to “bring back free speech” to the platform. He was particularly critical of how Twitter had briefly blocked links to a New York Post story about Hunter Biden’s laptop in 2020. But now, the self-proclaimed “free speech absolutist” is doing the very thing he criticized: banning links and suspending journalists.
We’ve discussed at great length how almost everyone misremembers and misunderstands the whole “Twitter suppressed the Hunter Biden laptop story” narrative.
But the key facts are these: Twitter chose to block the sharing of that link for 24 hours, claiming it violated its “hacked materials” policy. Many, many people (including us) called this out as bullshit, and Twitter backed down the very next day, admitted the block was a mistake, and said it was clarifying the policy to exclude news reporting. The NY Post account still wasn’t allowed to post for a couple of weeks after that, because it was told it had to delete the offending tweet first.
Elon has expressed his anger at Twitter for doing this a few times. When he took over the company, one of the first things he did was to give Matt Taibbi access to internal communications about that (which showed… not much of interest beyond standard discussions within the company about how to handle potentially sensitive materials). He was very excited to reveal this:
He has also suggested that those involved deserved prison time:
(FWIW, it’s absolutely false that Twitter’s actions had any impact on the election. The Federal Election Commission investigated it and found nothing. Multiple reports show that the story gained more traction after Twitter blocked links to it. The block only lasted 24 hours, no other site blocked links, and Twitter didn’t block links to any other story about the laptop.)
Either way, last year we called out Elon’s hypocrisy when he did the same damn thing with the hacked JD Vance dossier. Yet that story disappeared after a few days, even though it was arguably worse: it also involved secretive materials, related to a Presidential campaign, that the recipients weren’t supposed to have. And in that case, Elon had no problem blocking links and suspending the reporter.
The only real difference was that it was done under the “doxing” policy rather than under the “hacked materials” policy. Elon has a history of stretching the definition of the “doxing” policy, and ignoring it when the doxing happens to people he dislikes.
And now it’s happened again.
For a while now, a bunch of people have insisted that a huge Elon stan on ExTwitter, named Adrian Dittman, was really an Elon-alt account. The main “clue” was that Dittman sounds eerily similar to Musk. They even did a Spaces together, though many people argued that it was just Musk talking to himself.
Dittman and Musk have occasionally joked about it, but whenever anyone tried to call them out directly on it, they tended to just play coy.
Over the weekend, the Spectator published a pretty compelling argument that Dittman really is not Elon, but is instead just a German dude living in Fiji: a huge Elon fan who coincidentally sounds an awful lot like him.
Elon even responded to a tweet about the piece (jokingly) claiming to reveal that he really is Dittman. Except, you may notice something odd here:
Yeah, the tweet Elon is responding to is no longer available, with a notice saying it “violated the X Rules.” The company has banned all links to the Spectator article and suspended the author, Jacqueline Sweet, for 30 days, claiming that the article violated its doxing policy.
Yup, just like the NY Post with the Hunter Biden laptop story, where Twitter said the offending tweet had to be deleted first, the new ExTwitter also says the offending tweet must be deleted before the 30-day suspension clock even starts.
And, just as with the NY Post story, anyone trying to share the link is blocked from doing so:
In no world does this violate any actual “doxing” policy. Dittman was posting under his own name, and the reporting simply confirmed that he is who he said he was. Revealing that someone is exactly who they claim to be is not doxing by any reasonable definition. Nor did the article reveal his location beyond “Fiji,” a country with about a million inhabitants.
But, either way, this is again Elon doing exactly the same thing that he loudly proclaimed was so horrible before, a supposedly egregious suppression of free speech that apparently required a takeover of Twitter and a public airing of the internal discussions that resulted in that decision.
Of course, as with the JD Vance story, these actions will quickly be forgotten, while we’ll undoubtedly keep hearing the misleading (or downright false) claim that Twitter illegally suppressed the story of the Hunter Biden laptop.
Yes, Elon is free to manage ExTwitter however he wants since it’s his property. But it would be nice if some people (including Elon!) could at least have the intellectual honesty to admit (1) that he’s doing the same damn thing that he got upset at Twitter for doing and (2) that this completely undermines his claims about why he had to take over the site.
The basic idea is that no one actually wants to be on a platform that doesn’t enforce some fairly basic rules. But the actual rules can vary a lot.
And while Elon Musk continues to insist to this day that he wants his site to be the platform for all “legal” speech, he has an odd way of showing it, regularly banning or suppressing perfectly legal speech that he just doesn’t like.
The latest one, though, is particularly telling. For years, an extraordinarily awful (in all the most awful ways) comic called “Stonetoss” has made the rounds among all the worst people. It had some similarities to an earlier, equally awful cartoon called Red Panels. Both comics focused on the most bigoted, awful shit: anti-LGBTQ, antisemitic, racist nonsense. Some researchers figured out that the person behind Stonetoss (who went by the name “Stonetoss” online) was also behind Red Panels, and was actually a dude in Texas named Hans Kristian Graebener. And, not surprisingly, he’s expressed support for literal neo-Nazis (so, no, this isn’t one of those cases where people overreact in calling someone a supporter of neo-Nazis).
And Elon decided to suspend from ExTwitter anyone who mentioned Hans Kristian Graebener’s name. This came after Graebener himself appealed to Musk, asking for the thread revealing his name to be deleted in the name of “free speech” (lol).
On Thursday, the Stonetoss account appealed to X users who have “a direct line” to Musk, X’s owner, to help to get the thread deleted. Musk has, in the past, shared an altered version of a Stonetoss cartoon about the collapse of society. “If Elon’s idea of a ‘free speech’ website is one where people can be intimidated into silence, the outcome will be a site where the Stasi will drive out all dissent,” Stonetoss wrote. The account also tagged Musk and offered to share a list of people to target.
In a subsequent post, Stonetoss said this appeal was not about him but about other “artists.”
“This is about others I know personally,” Stonetoss wrote. “There is a whole ecosystem of artists out there who cannot (or have stopped) making art because of people on twitter organized to punish them IRL for doing so.” The cartoonist also added that sales of his plush toy were “going gangbusters” since his alleged identity was revealed.
Won’t somebody please think of the poor, oppressed neo-Nazi cartoonists?
Now, this is kind of funny, because as many people have pointed out over the years, the reason sites have rules and trust & safety teams is not “political censorship,” but preventing the harassment and abuse that drives people away. So, now, suddenly, this reactionary creep, who has regularly mocked others for wanting safe spaces and for the horrible “woke” view that people shouldn’t be harassed, is worried about people who will stop “making art” because “people on twitter organized to punish them.” Oh, really?
And, of course, Musk took it further and started banning anyone mentioning Graebener.
Hours later, the account associated with the Anonymous Comrades Collective that posted the thread was deleted, and the account was suspended. On Friday, dozens of users, including a number of researchers and journalists, began discussing the incident and posting some of the details of the research, including Graebener’s name.
X locked down many of these accounts and ordered them to delete the offending tweet to get full access to their accounts back. Among those targeted were Jared Holt, a senior research analyst at the Institute for Strategic Dialogue, who covers right-wing extremism; Hannah Gais, a senior research analyst at Southern Poverty Law Center; and Steven Monacelli, an investigative journalist for the Texas Observer. (WIRED has also published Monacelli’s work.)
X also imposed a ban on sharing the link to the Anonymous Comrades Collective blog detailing its research. WIRED verified this on Monday morning by attempting to post the link, only to be met with a pop-up message that read: “We can’t complete this request because this link has been identified by X or our partners as being potentially harmful.”
Free speech!
The whole thing quickly turned into a Streisanding for Graebener with tons of people posting about him on social media, and articles like this:
But, at least in that case, you could argue that there was some information (the location of a plane) that some might consider kinda intrusive, even though it’s public information, required by law to be available. But here, we’re just talking about someone’s name. And, yes, X’s own privacy policy does list the following as a violation:
the identity of an anonymous user, such as their name or media depicting them
Of course, if you go just a few lines below that, to the section “What is not a violation of this policy?”, you find that the policy explicitly says the following is not a violation:
sharing information that we don’t consider to be private, including:
names
Ah. Well.
As I noted on our recent Ctrl-Alt-Speech episode, we now have Schrödinger’s privacy policy. Mentioning names both is and isn’t allowed. It’s a quantum superposition of content moderation that only collapses when observed by Musk himself.
And, of course, while he insisted these kinds of things were a problem of “the woke mind virus” that had infected Twitter’s old employees, it seems only fair to point out that Elon himself appears to be infected by racist brainworms, leading him to protect this bigoted asshole from the clearly legal speech of revealing the dude’s name.
That shouldn’t be much of a surprise, though. As Greg Sargent at the New Republic recently highlighted, Elon keeps diving deeper and deeper into the extremist conspiracy nonsense of “the great replacement theory,” which has resulted in real-world violence.
It shouldn’t be any real surprise, then, that as Musk has embraced the kinds of Nazi-adjacent ideas that Stonetoss has also been promoting for years, he would use his understanding of content moderation to not actually protect marginalized groups, but to protect those pushing for further marginalization and harm.
Musk is turning his platform into a cozy nook for neo-Nazis. He’s rolled out the welcome mat and fluffed the pillows, while making it clear that those who might want to push back on fascism and bigotry are not welcome at all. His moderation practices appear even more biased and arbitrary than the old Twitter’s; it’s just that the bias runs in favor of the worst fucking people in the world. But sure, tell us more about how you’re a “free speech absolutist,” Elon. We’re all ears.
The last few days on Twitter have been, well, chaotic, I guess? Beyond the blocking of the ElonJet account, followed by the blocking of the @JoinMastodon account, then the blocking of journalists asking about all this and the silly made-up defense of it, over the weekend, Twitter announced a new policy banning linking to or even displaying usernames on a whole host of other social media platforms:
The new “promotion of alternative social platforms policy,” which was quite obviously hastily crafted, said that “Twitter will no longer allow free promotion of specific social media platforms on Twitter.” It said that “at both the Tweet level and the account level, we will remove any free promotion of prohibited 3rd-party social media platforms, such as linking out … to any of the below platforms on Twitter, or providing your handle without a URL.”
The “prohibited platforms” list had some odd inclusions, and even odder exclusions:
Facebook, Instagram, Mastodon, Truth Social, Tribel, Post and Nostr
3rd-party social media link aggregators such as linktr.ee, lnk.bio
This is… desperate? Silly?
But it also raised questions. Where was TikTok? Or YouTube? Or Gab? Or Parler? Or a bunch of other small new wannabes? You could say they’re too small, but then again, he included Nostr, a social media protocol that is brand new and has basically zero features. I have personally been playing with it, but I think only about 500 people are currently using it. Maybe. Probably fewer.
Of course, as usual, Musk’s biggest fans immediately started crafting silly breathless defenses of how this was totally consistent with Musk’s claims of bringing his “free speech absolutism” to the platform. Most of these defenses were pathetic. Perhaps none more so than his mother’s.
That’s Elon’s mom saying that his new proposal “makes absolute sense” because “when I give a talk for a corporation, I don’t promote other corporations. If I did, I would be fired on the spot and never booked again? Is that hard to understand?”
I mean, that is not hard to understand, but it’s also not an accurate description of the scenario. The people using Twitter are not paid to give talks “for Twitter.” And, if that were the standard, then, um, that wouldn’t just justify Twitter’s old practices of banning accounts for lots of things that any company would fire you for saying during a “company talk,” but actually make you wonder why Twitter didn’t ban a hell of a lot more people.
But, of course, that’s not the standard. Or the scenario.
And then, of course, a few hours later, Musk (facing pretty loud criticism of this latest policy change) appeared to do an about-face, though you’d have to be following him closely to actually realize it. First he defended it, saying “Twitter should be easy to use, but no more relentless free advertising of competitors. No traditional publisher allows this and neither will Twitter.”
Except that’s also not true. First of all, every other social media platform absolutely allows accounts to link to alternative social media. Second, even “traditional publishers” will frequently link to accounts on alternative social media, and will also (not always, but increasingly) acknowledge competing media providers.
Then he made it vaguer, saying “casually sharing occasional links is fine, but no more relentless advertising of competitors for free, which is absurd in the extreme.”
Which is not a reasonable policy. Because how does anyone know when they’ve crossed that line? Either way, as anyone who works in this space knows, if you have a vague policy like “casually sharing occasional links is fine” while the written policy says no links, you’re going to end up in ridiculous situations, such as when famed startup investor/Musk fan/pontificator Paul Graham pointed out that the policy was so dumb he was leaving for Mastodon… and promptly got banned, leading Musk to promise to have the account restored.
Eventually, in a reply to an account known for posting nonsense conspiracy theories, Musk said that the “policy will be adjusted to suspending accounts only when that account’s *primary* purpose is promotion of competitors, which essentially falls under the no spam rule.”
After that, he posted a poll asking whether he should step down as CEO of Twitter. He lost, 57.5% to 42.5% (though as I’m writing, he’s not said anything further on the results, but I fully expect that he’s going to shove someone else into the role while still owning and controlling the company).
The TwitterSafety account also ran a poll asking “should we have a policy preventing the creation of or use of existing accounts for the main purpose of advertising other social media platforms”, and while the poll still has a few hours left as I write this, it seems people are almost universally against it:
So, despite Elon arguing that not having such a policy is “absurd in the extreme” and his mother insisting that such a policy “makes absolute sense,” the “vox populi” on Twitter disagrees.
Why is he doing all this? What is going on?
I have a bit of experience watching how new social media CEOs, who come in on a wave of “bringing free speech back!” promises, end up running the social media content moderation learning curve. So I thought it might be useful to explain the basic thought process one normally goes through here, which likely produced each of these results. It’s basically the same as how Parler’s then-CEO John Matze went from “our content is moderated based off the FCC and the Supreme Court” to “posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED” in a matter of days.
Basically, it’s exactly what I wrote in my speed run article. These naive social media CEOs come in, thinking that the thing “missing” from social media is “free speech.” But they’re wrong. Even if you strongly believe in “free speech” (as I do), that doesn’t mean you want to allow crazy assholes screaming insults at guests in your house. You ask those people to leave, so that your guests can feel welcome. That doesn’t mean you’re against free speech; you’re just saying “go be a crazy asshole somewhere else.”
Every “free speech” CEO eventually realizes this in some form or another. In Musk’s somewhat selfish view of the world, he only seems to notice the concerns when it comes to himself. While he’s had no problem encouraging brigading and harassing of those he dislikes, when a random crazy person showed up near a car with his child in it, he insisted (falsely, as we now know) that it was an account on his website that put him in danger, and banned it.
But, of course, reporters are going to report on it, and in that frenzied state of “this is bad, must be stopped,” he immediately jumped to “well, anyone talking about that account must also be bad, and obviously should also be stopped.”
The “links to other social media” freakout was likely related to all of this as well. First, people were linking to the ElonJet account on other social media (which Musk referred to, incorrectly, as “ban evasion”), and so he saw other social media as a sneaky tool for getting around his idealized view of how Twitter should work. Also, while there’s no confirmation on this point from Twitter’s numbers, it sure feels like these other social media sites are getting a nice inflow of users giving up on (or at least decreasing their usage of) Twitter.
The biggest beneficiary (by far) seems to be Mastodon, so Musk could view this as a “kill two birds with one stone” move: trying to blunt Mastodon’s growth while also (in his mind) stopping people from visiting the “dangerous” ElonJet account on Mastodon. Except, of course, the opposite of that occurred, and he created a sort of Streisand Effect bump for Mastodon users:
See those bumps in new signups? Those are Elon bumps. Each time he does something crazy, more people sign up.
So, based on that, Elon quickly started banning reporters he disliked, who were asking questions he saw as sketchy, and then tried to retcon policies to justify those bans. First it was the nonsense about “assassination coordinates,” and then it became about links to other social media. Reporter Taylor Lorenz got accused of both. Elon first claimed that her account was suspended for doxing someone “previously” in her reporting (something Lorenz-haters have falsely insisted she did). But Twitter directly told Lorenz she was banned for a tweet showing her accounts on other sites:
This is how tyrants rule when they want to pretend they’re ruling by principles. Punish those who oppose you, and then retcon in some kind of policy later, which you insist is an “obviously” good policy, to justify the bans.
Of course, in the old days, when Twitter had a thoughtful trust & safety team, at least they’d make some effort to game out new policies. They’d discuss how those policies might lead to bad outcomes, or how they might be confusing, or how they might be abused. But Elon and friends have no time for that. They need to ban people who upset him, and come up with the policies to justify it later.
That’s how you end up with the stupidly broad “no doxing” policy and the even dumber “no other social media” policy — and only then do they discover the problems of the policies, and try to adjust them on the fly.
There are two other facts worth noting here, and both fit a very typical pattern seen when authoritarians take over governments while preaching about how they’re “bringing freedom back.”
First, they often will lie about the oppression that they claim happened under the last regime. That’s absolutely been the case here. As the Twitter files actually showed, Twitter’s former regime was not a bunch of “woke radicals censoring conservatives.” They were a thoughtful group of people doing an impossible task with not nearly enough resources, time, or information. As such, sometimes they made mistakes. But on the whole they were trying to create reasonable policies. This is why all evidence, across multiple studies, showed that Twitter actually bent over backwards to not be biased against conservatives, but Trumpists still insisted it was “obvious” that they were moderating based on bias.
The usefulness for the people now in charge, though, is that they feel they have free rein to do what they (falsely) insisted the previous regime was doing. You see it among many Musk fans now (including some high profile ones who should know better *cough* Marc Andreessen *cough*), who are mocking anyone pointing out the nonsense justifications and hypocrisy of Musk’s new policies, which clearly violate his old stated plans for the site. The people justifying this say, mockingly, “oooooooh, look who’s suddenly supportive of free speech.” The more vile version of this is “oh, well how does it feel now that you’re on the other end?” The more direct version is just “well, you did it to us.”
Except all of that is bullshit. Because people talking about it aren’t screaming about “free speech,” so much as pointing out how Musk is going back on his word. A thoughtful commentator might realize that maybe there were good reasons for older decisions, and it wasn’t just “woke suppression of free speech.” But, instead, they justify their new actions based on it being okay because of the falsely believed cruelty of the previous regime.
Second, this is pretty common with “revolutionaries” promising freedom. When they discover that freedom also allows people to oppose the new leader, those “disloyal” to the new regime need to be put down and silenced. In their minds, they justify it, because the ends (“eventual freedom”) justify the means of getting there. So, yes, the king must kill the protestors, but it’s only because those protestors might ruin this finely planned journey to more freedom.
So, in the mind of the despot who wants to believe they’re bringing a “better world of freedom” to the public, it’s okay to deny that freedom to the agitators and troublemakers, because they’re the ones “standing in the way” of freedom to the wider populace.
It seems like some of both of those factors are showing up here.
Last month, at the COMO Content Moderation Summit in Washington DC, I co-ran a “You Make the Call” session with Emma Llanso from CDT. The idea was to turn the audience into a content moderation/trust & safety team of a fictionalized social media platform. We showed numerous examples of content or accounts that were “flagged” and then showed the associated terms of service, and had the entire audience vote on what to do. One of the fictional examples involved someone posting a link to a third-party website “contactinfo.com” claiming to have the personal phone and email contact info of Harvey Weinstein and urging people “you know what to do!” with a hashtag. The relevant terms of service included this: “You may not post personal information about others without their consent.”
The audience voting was pretty mixed on this. 47% of the audience punted on the question, choosing to escalate it to a supervisor as they felt they couldn’t decide whether to leave the content up or take it down. 32% felt it should just be taken down. 10% said to just leave it up and another 10% said to put a content warning flag on the content. We joked a bit during the session that some of these examples were “ripped from the headlines” but apparently we predicted the headlines in this case, because there are two stories this week that touch on exactly this kind of thing.
The first: Splinternews decided to publish White House advisor Stephen Miller’s phone number after multiple news reports attributed to Miller the inhumane* decision to separate the children of asylum seekers from their parents, a plan Miller has defended. Other reports noted that Miller is enjoying all of the controversy over this policy. Splinternews, citing Donald Trump’s own history of giving out the phone numbers of people who anger him, thought it was only fair that people be able to reach out to Miller.
This is — for fairly obvious reasons — a controversial decision. I think most news organizations would never do such a thing. Not surprisingly, the number spread rapidly on Twitter, and Twitter started locking all of those accounts until the tweets were removed. That seems at least well within reason under Twitter’s rules that explicitly state:
You may not publish or post other people’s private information without their express authorization and permission.
But that question gets a lot sketchier when it comes to locking the accounts of people who merely linked to the Splinternews article. À la our fictionalized example, those people are not actually publishing or posting anyone’s private info. They are posting a link to a third party that purports to have that information. And, of course, in this case, the situation is complicated even more than in our fictionalized example, because Splinternews is a news organization (owned by Univision), and Twitter has also said that it has a “newsworthy” exception to its rules.
Personally, I think it’s the wrong call to lock the accounts of those linking to the news story, but… as we discovered with our own sample version, it’s not an easy call, and lots of people have strong opinions one way or the other. Indeed, part of the reason Twitter may have decided to do this was that supporters of Trump/Miller started calling out the article as an example of doxxing, claiming that leaving it up showed Twitter was unfairly biased against them. It is a no-win situation.
And, of course, it wouldn’t take long before people started coming up with clever workarounds. Parker Higgins, citing the infamous 09F9 controversy (in which the MPAA tried to censor the revelation of a cryptographic key that broke its preferred DRM, and people responded by posting variations on the code, including a color chart in which the hex codes of the colors were the key), posted Miller’s number as nothing more than a two-color image.
Would Twitter lock his account for posting a two-color image? At some point, the whole thing gets… crazy. That’s not to argue that revealing someone’s private cell phone number is a good thing, no matter how you feel about Miller or the border policy. But just on the content moderation side, it puts Twitter in a no-win situation in which people are going to be pissed off no matter what it does. Oh, and of course, it also helped create something of a Streisand Effect. I certainly hadn’t heard about the Splinternews article, or that people were passing around Miller’s phone number, until the story broke about Twitter whacking at moles on its site.
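To see why this kind of workaround is such a headache for moderators, here’s a minimal sketch (in Python, my choice, and using a made-up 555 number rather than anyone’s real one; I obviously can’t know Higgins’ exact encoding) of the basic trick: a ten-digit phone number needs at most nine hex digits, which pads out to exactly two 24-bit RGB color codes, so it round-trips losslessly through two innocent-looking swatches.

```python
# A sketch of the "colors as data" trick: encode a numeric string as
# ordinary #RRGGBB hex colors, and decode it back. The number used
# below is fictional (a 555 example, not anyone's real number).

def digits_to_colors(number: str) -> list[str]:
    """Encode a numeric string as a list of #RRGGBB hex colors."""
    hex_form = format(int(number), "x")
    # Pad up to a multiple of 6 hex digits (one color = 24 bits).
    width = -(-len(hex_form) // 6) * 6   # ceiling division
    padded = hex_form.zfill(width)
    return ["#" + padded[i:i + 6] for i in range(0, len(padded), 6)]

def colors_to_digits(colors: list[str]) -> str:
    """Decode the colors back into the original number."""
    # (A number with leading zeros would need fixed-width handling.)
    hex_form = "".join(c.lstrip("#") for c in colors)
    return str(int(hex_form, 16))

if __name__ == "__main__":
    phone = "2025550123"                 # fictional 555 number
    swatches = digits_to_colors(phone)
    print(swatches)                      # ['#000078', '#bb712b']
    assert colors_to_digits(swatches) == phone
```

A keyword or regex filter hunting for the digits themselves will sail right past two hex color codes, which is exactly why this kind of enforcement turns into whack-a-mole.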
And that takes us to the second example, which happened a day earlier — and was also in response to people’s quite reasonable* anger about the border policy. Sam Lavigne decided to make something of a public statement about how he felt about ICE by scraping** LinkedIn for profile information on everyone who works at ICE (and who has a LinkedIn public profile). His database included 1595 ICE employees. He wrote a Medium blog post about this, posted the repository to Github and another user, Russel Neiss, created a Twitter account (@iceHRgov) that tweeted out info about each of those employees from that database. Notice that none of those are linked. That’s because all three companies took them down (though you can still find archives of the Medium post). There was also an archive of the Github repository, but it has since been memory-holed as well.
Again… this raises a lot of questions. Github claimed that it removed the page for “violating community guidelines” — specifically around “doxxing and harassment, and violating a third party’s privacy.” Medium claimed that the post violated rules against “doxxing” and specifically the “aggregation of publicly available information to target, shame, blackmail, harass, intimidate, threaten or endanger.” Twitter, in Twitter’s usual way, is not commenting. LinkedIn put out a statement saying: “We do not support or condone what immigration authorities are doing at the border, but we can’t allow the illegal use of our member data. We will take appropriate action to ensure our members’ data is protected and used properly.”
Many people point out that all of this feels kind of ridiculous, seeing as this is all public info that the individuals chose to reveal about themselves on a public website. While Medium’s expansive definition of doxxing makes things interesting by including an intent standard for releasing the info, even if it is publicly available, the whole thing, again, demonstrates how complex this is. I know that some people will claim that these are easy calls — but, just for fun, try flipping the equation a bit. If you’re anti-Trump, how would you feel if a prominent alt-right person compiled and posted your info — even if publicly available — on a site where alt-right folks gather, with the clear intent of having hordes of Trump trolls harass you? Be careful of the precedent you set.
If it were up to me, I think I would have come down differently than Medium, Github and Twitter in this case. My rationale: (1) all of this info was public information, (2) those individuals chose to place it on a public website, knowing it was public, (3) they are all employed by the federal government, meaning they are public servants, and (4) while the compilation was done by someone who is clearly against the border policy, Lavigne never encouraged or suggested harassment of ICE agents. Instead, he wrote: “While I don’t have a precise idea of what should be done with this data set, I leave it here with the hope that researchers, journalists and activists will find it useful.” He separately noted that he believed “it’s important to document what’s happening, and by whom.” That seems to actually make a strong point in favor of leaving the data up, as there is value in documenting what’s happening.
That said, reasonable people can disagree on this question (even if there should be no disagreement about how inhumane the policy at the border has been*) of what is the appropriate way for different platforms to handle these situations — taking into account that this situation could play out with very different players in the future, and there is value in being consistent.
This is the very point that we were demonstrating with that game that we ran at COMO. Many people seem to think that content moderation decisions are easy: you just take down the content that is bad, and leave up the content that is good. But it’s pretty rare that the content is easily classified in one of those categories. There is an enormous gray area — and much of it involves nuance and context, which is not always easy to come by — and which may look incredibly different depending on where you sit and what kind of world you think we live in. I still think there are strong arguments that the platforms should have left much of the content discussed in this post up, but I’m not the one making that call.
When we ran that game in DC last month, it was notable that on every single example we used — even the ones we thought were “easy calls” — there were some audience members who selected every option in the game. That is, there was not a single situation in our examples in which everyone agreed what should be done. Indeed, since there were four options, and all four were chosen by at least one person in every single example, it shows just how difficult it really is to make these calls. They are subjective. And what plays into that subjective decision making includes your own views, your own perspective, your own reading of the content and the rules — and sometimes third party factors, including how people are reacting and what public pressure you’re getting (in both directions). It is an impossible situation.
This is also why the various calls to mandate that platforms do this or face legal liability are even more ridiculous and dangerous. There are no “right” answers to these decisions. There are solutions that seem better to lots of people, but plenty of others will disagree. If you think you know the “right” way all of these questions should be handled, I guarantee you’re wrong, and if you were in charge of these platforms, you’d end up feeling just as conflicted.
This is why it’s really time to start thinking about and talking about better solutions. Simply calling on platforms to be the final arbiters of what goes online and what stays offline is not a workable solution.
* Just a side note: if you are among the small minority of ethically-challenged individuals who get upset that I describe the policy as inhumane: fuck off. The policy is inhumane and if you’re defending it, you should seriously take time to re-evaluate your ethics and your life choices. On a separate note, if you are among the people who are then going to try to justify this policy as “but Obama/others did it too,” the same applies. Whataboutism is no argument here. The policy is inhumane no matter who did it, and pointing out that others did it too doesn’t change that. And, as inhumane as it may have been in the past, it has been severely ramped up. There is no defense for it. Attempting to defend it only serves to out yourself as a horrible person who has issues. Seriously: get help.
** This doesn’t even really fit in with this story, but scraping LinkedIn is (stupidly) incredibly dangerous. LinkedIn has a history of suing people for scraping public info off of its site. And even if it’s lost some of those cases, the company appears to take a pretty aggressive stance toward scrapers. We can argue about how ridiculous this is, but, dammit, this post is already too long talking about other stuff, so we’ll have to discuss it separately.
Utah Representative David E. Lifferth (R) has filed House Bill 225, which modifies the existing criminal code to cover cyber crimes such as doxing, swatting, and DoS (denial of service) attacks. According to the amendments, these crimes can now range anywhere from misdemeanors to second-degree felonies.
As is often the case when (relatively) new unpleasantness is greeted with new legislation, the initial move is an awkward attempt to bend the transgressions around existing laws, or vice versa. Lifferth’s is no exception. As GamePolitics points out, only one of the new crimes is specifically referred to by its given name: DoS attacks. The other two can only be inferred by the wording, which is unfortunately broad.
Swatting becomes:
[making] a false report to an emergency response service, including a law enforcement dispatcher or a 911 emergency response service, or intentionally aids, abets, or causes a third party to make the false report, and the false report describes an ongoing emergency situation that as reported is causing or poses an imminent threat of causing serious bodily injury, serious physical injury, or death; and states that the emergency situation is occurring at a specified location.
It’s the stab at doxing that fares the worst. In its present form, the wording would implicate a great deal of protected speech. This is the wording Lifferth would like to add to the “Electronic communication harassment” section:
electronically publishes, posts, or otherwise makes available personal identifying information in a public online site or forum.
Considering it’s tied to “intent to annoy, alarm, intimidate, offend, abuse, threaten, harass, frighten, or disrupt the electronic communications of another,” the amended statute could be read as making the publication of personal information by news outlets a criminal activity — if the person whose information is exposed feels “offended” or “annoyed.” Having your criminal activities detailed alongside personally identifiable information would certainly fall under these definitions, which could lead to the censorship (self- or otherwise) of police blotter postings, mugshot publication, or the identification of parties engaged in civil or criminal court proceedings.
It also would make “outing” an anonymous commenter/forum member/etc. a criminal act, even if the amount of information exposed never reaches the level of what one would commonly consider “doxing.” Would simply exposing the name behind the avatar be enough to trigger possible criminal charges?
While it’s inevitable that lawmakers will have to tangle with these issues eventually, it’s disheartening to see initial efforts being routinely delivered in terrible — and usually unconstitutional — shape. We expect our legislators to be better than this. After all, it’s their job to craft laws and to do so with some semblance of skill and common sense. If nothing else, we expect them to learn something from previous failures to pass bad laws, whether theirs or someone else’s.