Welcome back to BestNetTech’s favorite faux game show, Playing Semantics! This week, we’re diving back into the semantics of moderation, discretion, and censorship. As a reminder, this bit is what we were arguing about last time:
Moderation is a platform operator saying “we don’t do that here.” Discretion is you saying “I won’t do that there.” Censorship is someone saying “you can’t do that anywhere” before or after threats of either violence or government intervention.
Now, if we’re all caught up, let’s get back into the game!
A Few Nits to Pick
In my prior column, I overlooked a few things that I shouldn’t have. I’ll go over them here to help everyone get on the same page as me.
- anywhere: In re “you can’t do that anywhere”, this refers to the confines of a given authority or government. It also refers to the Internet in general. Censors work to suppress speech where it matters the most (e.g., within a given country). Such censors often carry the authority necessary to censor (e.g., they work in the government).
- violence: “Violence” refers to physical violence. I hope I don’t have to explain how someone threatening to harm a journalist is a form of censorship.
- government: This refers to any branch of any level of government within a given country. And anyone who uses the legal system in an attempt to suppress speech becomes a censor as well. (That person need not be an agent of the government, either.)
Censorship or Editorial Discretion?
From here on out, I’ll be addressing specific comments, some of which I replied to and some of which I didn’t.
One such comment brought up the idea of a headmaster as a censor. Lexico defines “headmaster” as “(especially in private schools) the man in charge of a school.” We can assume a headmaster is the highest authority of the school.
In a reply to that comment, I said the following:
If the headmaster is a government employee, they’re a censor. If they’re the head of a private institution, they’re a “censor” in a merely colloquial sense. The privately owned and operated Liberty University (henceforth Liberty U), for example, has engaged in what I’d normally call “moderation” vis-à-vis its campus newspaper, which, despite being a frankly immoral and unethical decision, Liberty U has every right to do as a private institution. (Frankly, I’d be tempted to call such people censors outright, but that would kinda go against my whole bit.)
But the example I used gave me pause to reconsider. Jerry Falwell Jr. (the “headmaster” of Liberty U) and free speech have often come to metaphorical blows. I noted this through a link to an article from the blog Friendly Atheist. The article has a quote from a former editor for Liberty U’s school newspaper, who describes how Falwell’s regime ran the paper:
[W]e encountered an “oversight” system (read: a censorship regime) that required us to send every story to Falwell’s assistant for review. Any administrator or professor who appeared in an article had editing authority over any part of the article; they added and deleted whatever they wanted.
That raises the important question: Is that censorship or editorial discretion?
After reading the Washington Post article from which that quote comes, I would call this censorship. I’ll get into why soon enough. But suffice it to say, “editorial discretion” doesn’t often involve editors threatening writers with lawsuits or violence.
Though I call that censorship, some people might call it “moderation” or “editorial discretion”. Falwell is, after all, exercising his right of association on his private property. What makes it “censorship” are the at-least-veiled threats against “dissenters”.
Censorship Via Threats
Speaking of threats! Another comment took issue with how I defined censorship:
Why should it be “censorship” to threaten someone with a small financial loss (enforced by a court), but not to kick them off the platform they use to make the bulk of their income (independent of the government)? Is “you can speak on some other platform” fundamentally less offensive than “you can speak from another country”, or is that merely a side-effect of the difficulty of physical movement?
To answer this as briefly as I can: A person can find a new platform with relative ease and little-to-no cost. No one can say the same for finding their way out of a lawsuit.
But that raises another important question: Does any kind of threat of personal or financial ruin count as censorship?
As I said above, the Liberty U example counts as censorship. As for the why? The following quotes from that WaPo article should help explain:
Student journalists must now sign a nondisclosure agreement that forbids them from talking publicly about “editorial or managerial direction, oversight decisions or information designated as privileged or confidential.” […] Faculty, staff and students on the Lynchburg, Va., campus have learned that it’s a sin to challenge the sacrosanct status of the school or its leaders, who mete out punishments for dissenting opinions (from stripping people of their positions to banning them from the school).
School leaders don’t have the power of government to back their decisions. But they can still use their power and authority to coerce other people into silence. (“Stop writing stories like this or I’ll kick you out of this school, and then what will you do?”) Even if someone can move to another platform and speak, a looming threat could stop them from wanting to do that.
And the threat need not be one of financial or personal ruin. Someone who holds a journalist at knife point and says “shut up about the president or else” is a censor. The violent person doesn’t need government power; their knife and the fear it can cause are all they need.
Money and Speech
A comment I made about companies such as Mastercard and Visa elicited a reply that pointed out how they, too, are complicit in censorship:
I cited Visa and Mastercard specifically because they are at the top of the chain and it’s effectively impossible to create a competitor. If they say something’s not allowed it isn’t unless you want to lose funding. Paypal has been notoriously bad about banning people for innocuous speech over the years, but there are other downstream providers that aren’t Paypal (although if all of them throw someone off, it still erases the speech). I am of the opinion that high-level banks should be held to neutrality standards like ISPs should due to their position of power. Competitors would be preferable, but the lack of either is frightening.
They make a good point. Companies like Visa can legally refuse to do business with, say, an adult film studio. So can banks. This becomes censorship when all such companies cut off access to their services. An artist who creates and sells adult art can end up in a bad place if PayPal cuts the artist off from online payments.
As the comment said, creating a competitor to these services is nigh impossible. Get booted from Twitter and you can open a Mastodon account, for instance; get booted from PayPal and you’re fucked. That Sword of Damocles-esque threat of financial ruin could be (and often is) enough to keep some artists from creating adult works.
It’s-A Me, Censorship!
Ah, Nintendo and its overzealous need to have a “family-friendly” reputation. Whatever would we do without it~?
Remember when Nintendo of America removed, or otherwise didn’t allow, objectionable material in their video games until Mortal Kombat came about and there were Congressional hearings and then the ESRB was formed?
Would you call what Nintendo did censorship or moderation? There’s an argument for moderation in that it was only within their purview and only on their video game systems, but there’s also an argument for censorship in that once the video games went outside of the bounds set by Nintendo of America, they were subpoenaed by the government with threats of punishment. The ESRB made their censorship/moderation policies moot, but it’s an interesting question. What do you think, Stephen?
This example leads to another good question: Do Nintendo, Sony, etc. engage in censorship when they ask a publisher to remove “problematic” material?
Nintendo can allow or deny any game a spot in the Switch library for any reason. If the company had wanted to deny the publication of Mortal Kombat 11 because of its excessive violence, it could’ve done so without question. To say otherwise would upend the law. But when Nintendo asks publishers to edit out certain content? I’d call that a mix of “editorial discretion” and “moderation”.
Nintendo has the right to decide what speech its systems are associated with. Any publisher that wants an association with Nintendo must play by Nintendo’s rules. Enforcing a “right to publication” would be akin to the government compelling speech. We shouldn’t want the law to compel Nintendo into allowing (or refusing!) the publication of Doom Eternal on the Switch. That way lies madness.
Oh, and the ESRB didn’t give Nintendo the “right” to allow a blood-filled Mortal Kombat II on the SNES. Nintendo already had that right. Besides, Mortal Kombat II came out on home consoles one week before the official launch of the ESRB. (The first game to receive the “M” rating was the Sega 32X release of DOOM.) The company allowed blood to stay because the Genesis version of the first game (which had a “blood” code) sold better.
That’s All, Folks!
And thus ends another episode of Playing Semantics! I’d like to thank everyone at home for playing, and if you have any questions or comments, please offer them below. So until next time(?), remember:
Moderation is a platform/service owner or operator saying “we don’t do that here.” Personal discretion is an individual telling themselves “I won’t do that here.” Editorial discretion is an editor saying “we won’t print that here,” either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.