I guess I’m a masochist, so here we go. In my recent post about Let It Die: Inferno and the game developer’s fairly minimal use of AI and machine learning platforms, I attempted to make the point that wildly stratified opinions on the use or non-use of AI were making actual nuanced conversation quite difficult. As much as I love our community and comments section — it’s where my path to writing for this site began, after all — it really did look like some folks were going to try as hard as possible to prove me right. Some commenters treated the use of AI as essentially no big deal, while others were “Never AI-ers,” indicating that any use, any at all, made a product a non-starter for them.
Still other comments pointed out that this studio and game are relatively unknown. The game was reviewed poorly for reasons that have nothing to do with the use of AI, as I myself pointed out in the post. One commenter even suggested that this might all be an attention-grabbing move to propel the studio and game into the news, small and unknown as they are.
Larian Studios is not unknown. They don’t need any hype. Larian is the studio that produces the Divinity series, not to mention the team that made Baldur’s Gate 3, one of the most awarded and best-selling games of 2023. And the studio’s next Divinity game will also make some limited use of AI and machine learning, prompting a backlash from some.
Larian Studios is experimenting with generative AI and fans aren’t too happy. The head of the Baldur’s Gate 3 maker, Swen Vincke, released a new statement to try to explain the studio’s stance in more detail and make clear the controversial tech isn’t being used to cut jobs. “Any [Machine Learning] tool used well is additive to a creative team or individual’s workflow, not a replacement for their skill or craft,” he said.
He was responding to a backlash that arose earlier today from a Bloomberg interview which reported that Larian was moving forward with gen AI despite some internal concerns among staff. Vincke made clear the tech was only being used for things like placeholder text, PowerPoint presentations, and early concept art experiments and that nothing AI-generated would be included in Larian’s upcoming RPG, Divinity.
Alright, I want to be fair to the side of this that takes an anti-AI stance. Vincke is being disingenuous at best here. Whatever use is made of AI technology, even limited use, still replaces work that would be done by some other human being. Even if you’re committed to not losing any current staff through the use of AI, you’re still getting work product from AI that would otherwise require you to hire and expand your team. There is obviously a serious emotional response to that concept, one that is entirely understandable.
But some limited use of AI like this can also have other effects on the industry. It can lower the barrier to starting new studios, which will then hire more people to do the things that AI sucks at, or to do the things where we really don’t want AI involved. It can make indie studios faster and more productive, allowing them to compete all the more with the big publishers and studios out there. It can create faster output, meaning industries adjacent to developers and publishers might have to hire and expand to accommodate the additional output.
All of this, all of it, relies on AI being used in narrow areas where it can be useful, on real human beings working with its output to make it actual art rather than slop, and on the end product being a good product. Absent those three things, the Anti-AI-ers are absolutely right and this will suck.
But the lashing that Larian has been getting is divorced from any of that nuance.
Vincke followed up with a separate statement on X rejecting the idea that the company is “pushing hard” on AI.
“Holy fuck guys we’re not ‘pushing hard’ for or replacing concept artists with AI.
We have a team of 72 artists of which 23 are concept artists and we are hiring more. The art they create is original and I’m very proud of what they do. I was asked explicitly about concept art and our use of Gen AI. I answered that we use it to explore things. I didn’t say we use it to develop concept art. The artists do that. And they are indeed world class artists.
We use AI tools to explore references, just like we use google and art books. At the very early ideation stages we use it as a rough outline for composition which we replace with original concept art. There is no comparison.”
Yes, exactly. There are uses for this technology in the gaming industry. Pretending otherwise is silly. There will be implications for jobs at existing studios due to its use. Pretending otherwise is silly. AI use can also have positive effects on the industry and workers within it overall. Pretending otherwise is silly and ignores all the technological progress that came before we started putting these two particular letters together (AI).
And, ultimately, this technology simply isn’t going away. You can rage against this literal machine all you like; it will be in use. We might as well make the project one of influencing how it’s used, rather than whether it’s used.
On the topic of artificial intelligence, like far too many topics these days, it seems that the vast majority of opinions out there are highly polarized. Either you’re all about making fun of AI not living up to the hype surrounding it, and there are admittedly a zillion examples of this, or you’re an AI “doomer,” believing that AI is so powerful that it’s a threat to all of our jobs, and potentially to our very existence. The latter version of that can get really, really dangerous and isn’t to be taken lightly.
Stratified opinions also exist in smaller, more focused spaces when it comes to the use of AI. Take the video game industry, for example. In many cases, gamers learn about the use of AI in a game or its promotional materials and lose their minds over it. They will often tell you they’re angry about the “slop” that AI produces and that isn’t caught and corrected by the humans overseeing it… but that doesn’t tell the full story. Some just have a knee-jerk response to any use of AI at all and rail against it. Others, including industry insiders, see AI as no big deal: just another tool in a game developer’s tool belt, helping to do things faster than could be done before. That too isn’t the entire story; certainly there will be some job loss, or lack of job expansion, associated with the use of AI as a tool.
Somewhere in the middle is likely the correct answer. And what developer Supertrick has done in being transparent about the use of AI in Let It Die: Inferno is something of an interesting trial balloon for gauging public sentiment. PC Gamer tells the story of how an AI disclosure notice got added to the game’s Steam page, noting that voices, graphics, and music were all generated within the game in some part by AI. The notice is completely without nuance or detail, leading to a fairly wide backlash from the public.
No one liked that, and in response to no one liking that, Supertrick has come out with a news post to clarify exactly what materials in the game have AI’s tendrils around them. Fair’s fair: it’s a pretty limited pool of stuff. So limited, in fact, that it makes me wonder why use AI for it in the first place.
Supertrick attempted to explain why. The use of AI-generated assets breaks down mostly like this:
Graphics/art: AI generated basic images based entirely on human-generated concept art and text, and human beings then used those basic images as starting points, fleshing them out with further art over the top of them. Most of the assets in question here are background images for the settings of the game.
Voice: AI was used for only three characters, none of which were human characters. One character was itself a fictional AI machine, and the developers used an AI for its voice because they thought that just made sense and provided some realism. The other two characters were also non-human lifeforms, and so the developer used AI voices following that same logic, to make them sound not-human.
Music: Exactly one track was generated using AI, though an AI editing tool was minimally involved in editing some of the other tracks.
And that’s it. Are the explanations above all that good? Nah, not all of them, in my opinion. Actors have been portraying computers, robots, and even AI for many years. Successfully in many cases, I would say. Even iconically at times. But using AI to create some base images and then layering human expression on top of them to create a final product? That seems perfectly reasonable to me. As does the use of AI for some music creation and editing in specific cases.
Overall, the use here isn’t extensive, though, nor particularly crazy. And I very much like that Supertrick is going for a transparency play with this. The public’s reaction to that transparency is going to be very, very interesting. Even if you don’t like Supertrick’s use of AI as outlined above, it’s not extensive and that use certainly hasn’t done away with tens or hundreds of jobs. Continued public backlash would come off as kind of silly, I think.
Though the game’s overall reception isn’t particularly helpful, either.
Regardless, Let It Die: Inferno released yesterday, and so far has met a rocky reception. At the time of writing, the game has a Mostly Negative user-review score on Steam, with only 39% positive reviews.
Scanning those reviews, there doesn’t seem to be a ton in there about AI usage. So perhaps the backlash has moved on to the game just not being very good.
A quarter of a century ago, I wrote a book called “Rebel Code”. It was the first – and is still the only – detailed history of the origins and rise of free software and open source, based on interviews with the gifted and generous hackers who took part. Back then, it was clear that open source represented a powerful alternative to the traditional proprietary approach to software development and distribution. But few could have predicted how completely open source would come to dominate computing. Alongside its role in running every aspect of the Internet, and powering most mobile phones in the form of Android, it has been embraced by startups for its unbeatable combination of power, reliability and low cost. It’s also a natural fit for cloud computing because of its ability to scale. It is no coincidence that for the last ten years, pretty much 100% of the world’s top 500 supercomputers have run an operating system based on the open source Linux.
More recently, many leading AI systems have been released as open source. That raises the important question of what exactly “open source” means in the context of generative AI software, which involves much more than just code. The Open Source Initiative, which drew up the original definition of open source, has extended this work with its Open Source AI Definition. It is noteworthy that the EU has explicitly recognized the special role of open source in the field of AI. In the EU’s recent Artificial Intelligence Act, open source AI systems are exempt from the potentially onerous obligation to draw up a range of documentation that is generally required.
That could provide a major incentive for AI developers in the EU to take the open source route. European academic researchers working in this area are probably already doing that, not least for reasons of cost. Paul Keller points out in a blog post that another piece of EU legislation, the 2019 Copyright in the Digital Single Market Directive (CDSM), offers a further reason for research institutions to release their work as open source:
Article 3 of the CDSM Directive enables these institutions to text and data-mine all “works or other subject matter to which they have lawful access” for scientific research purposes. Text and data mining is understood to cover “any automated analytical technique aimed at analysing text and data in digital form in order to generate information, which includes but is not limited to patterns, trends and correlations,” which clearly covers the development of AI models (see here or, more recently, here).
Keller’s post goes through the details of how that feeds into AI research, but the end-result is the following:
as long as the model is made available in line with the public-interest research missions of the organisations undertaking the training (for example, by releasing the model, including its weights, under an open-source licence) and is not commercialised by these organisations, this also does not affect the status of the reproductions and extractions made during the training process.
This means that Article 3 does cover the full model-development pathway (from data acquisition to model publication under an open source license) that most non-commercial Public AI model developers pursue.
As that indicates, the use of open source licensing is critical to this application of Article 3 of EU copyright legislation for the purpose of AI research.
What’s noteworthy here is how two different pieces of EU legislation, passed some years apart, work together to create a special category of open source AI systems that avoid most of the legal problems of training AI systems on copyright materials, as well as the bureaucratic overhead imposed by the EU AI Act on commercial systems. Keller calls these “public AI”, which he defines as:
AI systems that are built by organizations acting in the public interest and that focus on creating public value rather than extracting as much value from the information commons as possible.
Public AI systems are important for at least two reasons. First, their mission is to serve the public interest, rather than focusing on profit maximization. That’s obviously crucial at a time when today’s AI giants are intent on making as much money as possible, presumably in the hope that they can do so before the AI bubble bursts.
Secondly, public AI systems provide a way for the EU to compete with both US and Chinese AI companies – by not competing with them. It is naive to think that Europe can ever match the levels of venture capital investment that big-name US AI startups currently enjoy, or that the EU is prepared and able to support local industries for as long and as deeply as the Chinese government evidently plans to do for its home-grown AI firms. But public AI systems, which are fully open source, and which take advantage of the EU right of research institutions to carry out text and data mining, offer a uniquely European take on generative AI that might even make such systems acceptable to those who worry about how they are built, and how they are used.
A cofounder of a Bay Area “Stop AI” activist group abandoned its commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements that he made about how almost any actions he took were justifiable, since he believed OpenAI was going to “kill everyone and every living thing on earth.” Those are detailed below.
I think it’s worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.
OpenAI’s San Francisco Offices Lockdown
On November 21, 2025, Wired reported that OpenAI’s San Francisco offices went into lockdown after an internal alert about a “Stop AI” activist. The activist allegedly expressed interest in “causing physical harm to OpenAI employees” and may have tried to acquire weapons.
The article did not mention his name but hinted that, before his disappearance, he had stated he was “no longer part of Stop AI.”1 On November 22, 2025, the activist group’s Twitter account posted that it was Sam Kirchner, the cofounder of “Stop AI.”
According to Wired’s reporting:
A high-ranking member of the global security team said [in OpenAI Slack] “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.
“Stop AI” provided more details on the events leading to OpenAI’s lockdown:
Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His volatile, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.
We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.
Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.
However, on the morning of Friday Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.
We have taken steps to notify security at the major US corporations developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.
A “Stop AI” activist named Remmelt Ellen wrote that Sam Kirchner “left both his laptop and phone behind and the door unlocked.” “I hope he’s alive,” he added.
In early December, the SF Standard reported that the “cops [are] still searching for ‘volatile’ activist whose death threats shut down OpenAI office.” Per this coverage, the San Francisco police are warning that he could be armed and dangerous. “He threatened to go to several OpenAI offices in San Francisco to ‘murder people,’ according to callers who notified police that day.”
A Bench Warrant for Kirchner’s Arrest
When I searched for any information that had not been reported before, I found a revealing press release. It invited the press to a press conference on the morning of Kirchner’s disappearance:
“Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI.”
When: November 21, 2025, 8:00 AM.
Where: Steps in front of the courthouse (San Francisco Superior Court).
Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.
Sam Kirchner is quoted as saying, “We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth.”
Needless to say, things didn’t go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.
Later, the SF Standard confirmed the trial angle of this story: “Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest.”
“Stop AI” – a Bay Area-Centered “Civil Resistance” Group
“Stop AI” calls itself a “non-violent civil resistance group” or a “non-violent activist organization.” The group’s focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and “Superintelligence.” Their worldview is extremely doom-heavy, and their slogans include: “AI Will Kill Us All,” “Stop AI or We’re All Gonna Die,” and “Close OpenAI or We’re All Gonna Die!”
According to a “Why Stop AI is barricading OpenAI” post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on “AI extinction risk,” or in their words, “risk of extinction.” Sam Kirchner explained in an interview: “Our primary concern is extinction. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying.”
Unlike the rest of the “AI existential risk” ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this specific group is not a formal nonprofit or funded NGO, but rather a loosely organized, volunteer-run grassroots group. They made their financial situation pretty clear when the “Stop AI” Twitter account replied to a question with: “We are fucking poor, you dumb bitch.”2
According to The Register, “STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time.”
Since its inception, “Stop AI” has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years. He has an undergraduate degree in physics and math. Reichstadter’s prior actions include climate change and abortion-rights activism.
In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court’s decision overturning Roe v. Wade. Per the news coverage, he said, “It’s time to stop the machine.” “Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court’s ruling.”
Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via “Stop AI.” Recently, he undertook a hunger strike outside Anthropic’s San Francisco office for 30 days.
Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded “Stop AI,” and “stayed in a homeless shelter for four months.”
AI Doomerism’s Rhetoric
The group’s rationale included this claim (published on their account on August 29, 2025): “Humanity is walking off a cliff,” with AGI leading to “ASI covering the earth in datacenters.”
As 1a3orn pointed out, the original “Stop AI” website said we risked “recursive self-improvement” and doom from any AI models trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) “Stop AI Proposal,” the group asked to “Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap will be punishable by life in prison.”
To be clear, dozens of current AI models were trained with over 10^25 FLOPs.
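For a rough sense of scale: training compute for a dense transformer is commonly approximated as about 6 × parameters × training tokens. Here is a minimal back-of-the-envelope sketch of that approximation; the parameter counts and token counts below are illustrative round numbers, not official figures for any particular model.

```python
# Back-of-the-envelope training-compute estimate using the common
# ~6 * N * D approximation (N = parameter count, D = training tokens).
# The figures below are illustrative round numbers, not official disclosures.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

THRESHOLD = 1e25  # the compute cap the group proposed

examples = {
    "~70B params on ~15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "~400B params on ~15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
}

for label, flops in examples.items():
    side = "over" if flops > THRESHOLD else "under"
    print(f"{label}: ~{flops:.1e} FLOPs ({side} the 1e25 cap)")
```

By that yardstick, frontier-scale training runs land comfortably above the proposed 10^25 cap, while plenty of smaller open-weight models sit below it.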
In a “For Humanity” podcast episode with Sam Kirchner, “Go to Jail to Stop AI” (episode #49, October 14, 2024), he said: “We don’t really care about our criminal records because if we’re going to be dead here pretty soon or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn’t matter.”
The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: “I’m willing to DIE for this.” “I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Literally, why not? Yeah, straight up. Straight up. What I want to do is get on the news.”
After Kirchner’s disappearance, the podcast host and founder of “GuardRailNow” and the “AI Risk Network,” John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).
Sherman also produced an emotional documentary with “Stop AI” titled “Near Midnight in Suicide City” (December 5, 2024, episode #55. See its trailer and promotion on the Effective Altruism Forum). It’s now removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It gathered 60k views before its removal.
The group’s radical rhetoric was out in the open. “If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head,” wrote Guido Reichstadter in September 2024.
That statement, captured in a screenshot, appeared in a BestNetTech piece, “2024: AI Panic Flooded the Zone Leading to a Backlash.” The warning signs were there:
Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
In early December 2024, I expressed my concern on Twitter: “Is the StopAI movement creating the next Unabomber?” The screenshot of “Getting arrested is nothing if we’re all gonna die” was taken from Sam Kirchner.
Targeting OpenAI
The main target of their civil-disobedience-style actions was OpenAI. The group explained that their “actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.” In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: “These people want to see you dead.”
“My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly,” said Remmelt Ellen. “We are that serious about stopping AI development.”
The “Stop AI” event page on Luma lists further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they had a protest against Waymo.
On February 22, 2025, three “Stop AI” protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company’s property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, “Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All,” Kirchner is quoted as saying, “The work of the scientists present” is “putting my family at risk.”
October 20, 2025 was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.
On November 3, 2025, “Stop AI”’s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman “about the extinction threat that AI poses to humanity.”
Public Messages to Sam Kirchner
“Stop AI” stated it is “deeply committed to nonviolence” and that “We wish no harm on anyone, including the people developing artificial superintelligence.” In a separate tweet, “Stop AI” wrote to Sam: “Please let us know you’re okay. As far as we know, you haven’t yet crossed a line you can’t come back from.”
John Sherman, the “AI Risk Network” CEO, pleaded, “Sam, do not do anything violent. Please. You know this is not the way […] Please do not, for any reason, try to use violence to try to make the world safer from AI risk. It would fail miserably, with terrible consequences for the movement.”
Rhetoric’s Ramifications
Taken together, the “imminent doom” rhetoric fosters conditions in which vulnerable individuals could be dangerously radicalized, echoing the dynamics seen in past apocalyptic movements.
In “A Cofounder’s Disappearance—and the Warning Signs of Radicalization”, City Journal summarized: “We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes.”
“The Rationality Trap – Why Are There So Many Rationalist Cults?” described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky)3 set the stakes and tone: “Apocalyptic consequentialism, pushing the community to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the ‘ends-justify-the-means’ reasoning.”
We already have a Doomer “murder cult,” the Zizians, and their story is way more bizarre than anything you’ve read here. Far more extreme. Hopefully, such things will remain rare.
What we should discuss is the dangers of such an extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, based on the Doomers’ logic, all their suggestions are “extremely small sacrifices to make.” Unfortunately, the situation we’re in is: “Imagined dystopian fears have turned into real dystopian ‘solutions.’”
This is still an evolving situation. As of this writing, Kirchner’s whereabouts remain unknown.
—————————
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.
—————————
Endnotes
Don’t confuse StopAI with other activist groups, such as PauseAI or ControlAI. Please see this brief guide on the Transformer Substack. ↩︎
This type of rhetoric wasn’t a one-off. Stop AI’s account also wrote, “Fuck CAIS and @DrTechlash” (CAIS is the Center for AI Safety, and @DrTechlash is, well, yours truly). Another target was Oliver Habryka, the CEO at Lightcone Infrastructure/LessWrong, whom they told, “Eat a pile of shit, you pro-extinction murderer.” ↩︎
Eliezer Yudkowsky, cofounder of the Machine Intelligence Research Institute (MIRI), recently published a book titled “If Anyone Builds It, Everyone Dies. Why Superhuman AI Would Kill Us All.” It had heavy promotion, but you can read here “Why The ‘Doom Bible’ Left Many Reviewers Unconvinced.” ↩︎
A federal judge just ruled that computer-generated summaries of novels are “very likely infringing,” which would effectively outlaw many book reports. That seems like a problem.
This isn’t just about AI—it’s about fundamentally redefining what copyright protects. And once again, something that should be perfectly fine is being treated as an evil that must be punished, all because some new machine did it.
But, I guess elementary school kids can rejoice that they now have an excuse not to do a book report.
To be clear, I doubt publishers are going to head into elementary school classrooms to sue students, but you never know with the copyright maximalists.
Copyright expert Sag highlights how the ruling could have a much more dangerous impact beyond getting kids out of their homework: making much of Wikipedia infringing.
A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).
This is a fundamental assault on the idea/expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.
Short summaries of copyrighted works should not raise copyright concerns at all. Yes, as Sag points out, “fair use” can come to the rescue in some cases, but the old saw remains that “fair use is just the right to hire a lawyer.” And when the process is the punishment, saying that fair use will save you in these cases is of little comfort. Getting a ruling on fair use will run you hundreds of thousands of dollars at least.
Copyright is supposed to stop the outright copying of the copyright-protected expression. A summary is not that. It should not implicate the copyright in any form, and it shouldn’t require fair use to come to the rescue.
Sag lays out the details of what happened in this case:
Judge Stein then went on to evaluate one of the more detailed chat-GPT generated summaries relating to A Game of Thrones, the 694 page novel by George R. R. Martin which eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:
“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”
The judge described the ChatGPT summaries as:
“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”
He saw them as:
“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).
To say that the less than 580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.
Yikes. I’m sorry, but if you think that a 580-word computer-generated summary of a massive book is infringing, then we’ve lost the plot when it comes to copyright law. If it were, then copyright itself would need to be radically changed to allow for basic forms of human speech. If I see a movie and tell my friend what it was about, that shouldn’t implicate copyright law, even if it summarizes “the plot, characters, and themes of the original work.”
Sag then ties this to what you can find for countless creative works on Wikipedia:
To see why the latest OpenAI ruling is so surprising, it helps to compare the ChatGPT summary of A Game of Thrones to the equivalent Wikipedia plot summary. I read them both so you don’t have to.
The ChatGPT summary of a Game of Thrones is about 580 words long and captures the essential narrative arc of the novel. It covers all three major storylines: the political intrigue in King’s Landing culminating in Ned Stark’s execution (spoiler alert), Jon Snow’s journey with the Night’s Watch at the Wall, and Daenerys Targaryen’s transformation from fearful bride (more on this shortly) to dragon mother across the Narrow Sea. In this regard, it is very much like the 800 word Wikipedia plot summary. Each summary presents the central conflict between the Starks and Lannisters, the revelation of Cersei and Jaime’s incestuous relationship, and the key plot points that set the larger series in motion.
And, look, if you want to see the chilling effects on speech created by overly expansive copyright law, well:
I could say more about their similarities, but I’m concerned that if I explored the summaries in any greater detail, the Authors Guild might think that I am also infringing George R. R. Martin’s copyright, so I’ll move on to the minor differences.
You can argue that Sag, an expert on copyright law, is kind of making a joke here, but it’s no actual joke. Just the fact that someone even needs to consider this shows how bonkers and problematic this ruling is.
As Sag makes clear, there are few people out there who would legitimately think that the Wikipedia summary should be deemed infringing, which is why this ruling is notable. It again highlights how lots of people, including the media, lawmakers, and now (apparently) judges, get so distracted by “but this new machine is bad!” when looking at LLM technology that they seem to completely lose the plot.
And that’s dangerous for the future of speech in general. We shouldn’t be tossing out fundamental key concepts in speech (“you can summarize a work of art without fear”) just because some new kind of summarization tool exists.
Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking revealing chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.
After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.
At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.
When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails create issues that have led us deep into documentation rabbit holes and still fail to clarify the privacy practices as much as we’d like.
We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:
Control AI Access to Secure Chat on Android and iOS
Here are some steps you can take to control access if you want nothing to do with the device-level AI features’ integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.
How to Check and Limit What Gemini Can Access
If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:
Disable Gemini Apps Activity: Gemini Apps Activity is a history Google stores of all your interactions with Gemini. It’s enabled by default. To disable it, open Gemini (depending on your phone model, you may not even have the Google Gemini app installed; if you don’t, you don’t really need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off” or “Turn off and delete activity” if you want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off.
Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete.
How to Check and Limit What Apple Intelligence and Siri Can Access
Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do:
Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues, your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence; if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like “writing tools,” using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” so that you can’t speak to it.
For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.
Why It Matters
Sending Messages Has Different Privacy Concerns than Receiving Them
Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.
Google Gemini and WhatsApp
On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.
By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, are subject to human review, and are used to train Google’s products. So, unless you change that setting, when you use Gemini to compose and send a message in WhatsApp, the message you composed is visible to Google.
If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren’t reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.
The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.
For comparison’s sake, let’s see how this works with Apple devices.
According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.
In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.
Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.
The ambiguity around where data goes makes it overly difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.
How Receiving Messages Works
Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too.
We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” In other words, Google has no technical limitation preventing it from accessing the text of notifications if you’ve enabled the feature in the Utilities app. This could leave any notifications routed through the Utilities app to the Gemini app open to being accessed internally or by third parties. Google needs to make its data handling explicit in its public documentation.
If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.
Apple Intelligence
Apple is clearer about how it handles this sort of notification access.
Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”
Apple Intelligence can summarize notifications from any app that you’ve enabled notifications on. Apple is clear that these summaries are generated on your device, “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal get sent to Apple’s servers just to summarize them.
New AI Features Must Come With Strong User Controls
The more device-makers cram AI features into their devices, the more necessary it is for us to have clear and simple controls over what personal data these features can access on our devices. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.
Per-app AI Permissions
Google, Apple, and other device makers should add a device-level AI permission to their phones, just like they do for other potentially invasive privacy features, like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.
Offer On-Device-Only Modes
Device-makers should offer an “on-device only” mode for those interested in using some features without having to figure out what happens on-device versus in the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.
Improve Documentation
Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.
The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.
When Reddit sued “data scraper” companies and AI firm Perplexity earlier this week, I assumed it was another predictable skirmish over AI training data—the kind of case we’ve been tracking as companies try to wall off the open internet and set up toll booths. But reading the actual complaint made it clear this is something far more dangerous: Reddit isn’t just going after scrapers. It’s mounting a fundamental attack on the very concept of an open internet, using a twisted reading of copyright law that—if it succeeds—would break how search engines, archives, and the web itself operate.
Even if you love Reddit and hate AI, you should be worried about this lawsuit. If it succeeds, it would fundamentally close off most of the open internet.
Most reporting on this is not actually explaining the nuances, which require a deeper understanding of the law, but fundamentally, Reddit is NOT arguing that these companies are illegally scraping Reddit, but rather that they are illegally scraping… Google (which is not a party to the lawsuit) and in doing so violating the DMCA’s anti-circumvention clause, over content Reddit holds no copyright over. And, then, Perplexity is effectively being sued for linking to Reddit.
This is… bonkers on so many levels. And, incredibly, within their lawsuit, Reddit defends its arguments by claiming it’s filing this lawsuit to protect the open internet. It is not. It is doing the exact opposite.
The Background
It is totally reasonable to be concerned about the burden that data scrapers put on websites, and to talk about ways to deal with them. But that’s not what this lawsuit really is. It’s mostly focused on some companies that effectively have built unofficial APIs for getting search results data out of Google. That can be quite useful in some cases! But also, some of the companies in this space can be fairly sketchy. Reddit leans heavily on the sketchiness of the companies to imply “they’re bad.”
But, an open web must mean a programmable web of some sort. Building on other services is a fundamental part of the open web and has always been there. If the building becomes abusive, then there are often technical ways of dealing with it. But here, the “abuse” seems to be that Reddit signed a $60 million scraping deal with Google, which was already kinda sketchy.
After all, Reddit has a license to the content users post in order to operate the service, but they don’t hold the copyright on it. Indeed, Reddit’s terms state clearly that users retain “any ownership rights you have in Your content.” Because of Reddit’s agreement that it can license content, the deal with Google could sorta squeeze under that term, but that doesn’t give Reddit the right to then sue over users’ copyrights (as it’s doing in this case).
Either way, there’s an indication that Reddit has gotten greedy. It’s apparently reopened negotiations with Google recently, seeking more money and traffic. But it also wants money from other AI providers. Apparently, that includes Perplexity, which is a pretty useful AI “answer engine” that lets users select from a variety of underlying LLMs. (Perplexity has released its own LLMs, but they were modifications of open source models, including Meta’s Llama and Mistral, a popular open source LLM from France. Thus, while Perplexity has offered its own models, it didn’t train them from scratch.)
Because Perplexity is much more focused on being an alternative to a search engine than a traditional “chat bot,” its focus in answering your questions is to actually provide links as sources for the answers it gives. In effect, it combines a traditional search engine with an LLM and it did this before many other chatbot LLMs added web search capabilities (though most now have them).
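As a rough illustration of that “search engine plus LLM” pattern, here is a minimal, hypothetical sketch of a retrieve-then-cite answer loop. None of this is Perplexity’s actual code; every function, type, and URL below is a placeholder for whatever search backend and underlying model a real system would plug in.

```python
# A minimal, hypothetical sketch of the "search engine + LLM" answer-engine
# pattern described above. All names, data, and URLs are placeholders.

from dataclasses import dataclass


@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str


def web_search(query: str) -> list[SearchResult]:
    """Stand-in for a search backend (an index, a SERP API, etc.)."""
    return [
        SearchResult(
            title="Example Reddit thread",
            url="https://www.reddit.com/r/example/comments/abc123/",
            snippet="Users discuss the question at length...",
        )
    ]


def call_llm(prompt: str) -> str:
    """Stand-in for whichever underlying LLM the user has selected."""
    return "According to the discussion in [1], ..."


def answer(query: str) -> str:
    # Retrieve candidate sources, number them so the model can cite them
    # inline, then ask the model to answer using only those sources.
    results = web_search(query)
    sources = "\n".join(
        f"[{i}] {r.title} ({r.url}): {r.snippet}"
        for i, r in enumerate(results, 1)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        f"citing them as [n].\n\nQuestion: {query}\n\nSources:\n{sources}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("What do Reddit users say about this game?"))
```

The point of the pattern is that the links travel with the answer, which is exactly why a system like this wants to be able to find, and point to, Reddit threads in the first place.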
But that means, if an “answer” to a question from a user comes from a Reddit post, Perplexity is likely to link to it, just like a regular search engine. But, Reddit wants to get paid. And because Reddit has become so closed and persnickety about things, it looks like Perplexity may have chosen to use these other data scraping firms’ unofficial Google search results APIs to find Reddit posts and link to them.
This is… how the open internet is supposed to work, actually. But Reddit presents it as a sneaky “circumvention.”
Recognizing that Reddit denies scrapers like them access to its site, Defendants SerpApi, Oxylabs, and AWMProxy scrape the data from Google’s search results instead. They do so by masking their identities, hiding their locations, and disguising their web scrapers as regular people (among other techniques) to circumvent or bypass the security restrictions meant to stop them. For example, during a two-week span in July 2025, Defendants SerpApi, Oxylabs, and AWMProxy circumvented Google’s technological control measures and automatedly accessed, without authorization, almost three billion search engine results pages (“SERPs”) containing Reddit text, URLs, images, and videos.
That’s Not How Circumvention Works
So you might notice something weird in the paragraph above: namely, the claim that the API/scraping companies “circumvented Google’s technological control measures.”
The fundamental issue is that Section 1201 of the DMCA says any attempt to “circumvent a technological measure” that protects a copyrighted work is, itself, a violation of copyright law. And that’s even if the goal of the circumvention is not to infringe on the underlying copyright at all. That’s why we’ve seen attempts by companies to use 1201 to, say, block people from using cheaper inkjet cartridges or getting a cheaper garage door opener. Neither of those sounds like a copyright issue (because they aren’t), but companies tried to abuse 1201 by claiming they put “technological control measures” on those devices, and that any “circumvention” should then be seen as infringement.
But here, Reddit is doing something even crazier. Because it’s saying that since these companies (allegedly) get around Google’s technological measures, then somehow Reddit can accuse them of violating 1201.
Reddit and Google have implemented technological measures that effectively control access to Reddit content. Both companies use advanced technological techniques, as described above, to control unauthorized, automated access to their server systems. These measures, in the ordinary course of their operation, limit the freedom and ability of users to access Reddit content, including by prohibiting automated entities from accessing search engine result pages and scraping search engine results that include Reddit content.
Defendants’ actions violate 17 U.S.C. § 1201(a)(1)(A), under which no person shall circumvent a technological measure that effectively controls access to a copyrighted work. Defendants have circumvented these measures in one or more ways, including:
a. Avoiding or bypassing Reddit’s measures entirely in order to obtain Reddit’s content and services, and the content authored by its users, that appear in Google search results; and
b. Avoiding, removing, deactivating, impairing, and/or bypassing SearchGuard and Google’s other technological control measures by using devices, systems, processes, and/or protocols, including large-volume proxy networks, to improperly gain access to Google Search results.
Let’s break this down, because we have to look at how crazy this is.
They’re saying that these companies are “avoiding or bypassing” Reddit’s TCMs. But, the way they’re doing that is by not scraping Reddit. You cannot claim that it is “circumventing a TCM” to get the same content… from Google. That’s crazy.
Even crazier is that they’re arguing that the defendants are circumventing Google’s TCM, even though Google isn’t even a party.
They’re making this claim over content that Reddit holds no copyright over. The copyright remains with the original creator. Reddit holds a license, but a license does not grant Reddit the right to sue over that copyright.
Each one of these ideas is crazy. All three of them together is ludicrous. Reddit is claiming that these companies violated copyright law by (1) avoiding Reddit and (2) getting the content from publicly available Google searches over (3) content that Reddit has no copyright over.
And somehow that’s supposed to be copyright infringement.
This Is Not Protecting the Open Internet
Even more obnoxiously, Reddit crowns itself a protector of the open internet with this nonsense:
Because Reddit has always believed in the open internet, it takes its role as a steward of its users’ communities, discussions, and authentic human discourse seriously.
Elsewhere in the lawsuit, it says:
As articulated in its Public Content Policy, Reddit believes in an open internet, but it “do[es] not believe that third parties have a right to misuse public content just because it’s public.”
If that’s the case, then… you don’t believe in an open internet. Text and data mining is a part of the open internet. Building on the work of others is part of the open internet. You can’t just claim “we support the open internet, but not if we say you’re misusing it.” It’s not your call.
Yes, there are copyright restrictions on what you can do with others’ content, but (again) Reddit has no copyright interest here. And it can’t even legitimately claim a “circumvention” of a TCM just because these companies got the same data elsewhere.
This Isn’t Even About Training
Some people will still insist this is bad because they hate all AI training based on scraping, but that’s not even what’s happening here. We discussed this a bit in our last piece on cutting off the open internet. It’s one thing to argue that you want to block your content from being trained upon, but it’s a wholly different thing to say “you can’t retrieve this page based on a user search.” That latter scenario describes how search engines have always worked, and search engines are fundamental to an open web.
But, as Perplexity notes in its response to the lawsuit (ironically, in the Perplexity subreddit on Reddit), that’s exactly what Reddit is looking to block:
What does Perplexity actually do with Reddit content? We summarize Reddit discussions, and we cite Reddit threads in answers, just like people share links to posts here all the time. Perplexity invented citations in AI for two reasons: so that you can verify the accuracy of the AI-generated answers, and so you can follow the citation to learn more and expand your journey of curiosity.
And that’s what people use Perplexity for: journeys of curiosity and learning. When they visit Reddit to read your content it’s because they want to read it, and they read more than they would have from a Google search.
The company also notes that Reddit demanded Perplexity license its data, but Perplexity explained (as mentioned above) that it doesn’t train its own LLM, so it has no need to license data for training.
Here’s where we push back. Reddit told the press we ignored them when they asked about licensing. Untrue. Whenever anyone asks us about content licensing, we explain that Perplexity, as an application-layer company, does not train AI models on content. Never has. So it is impossible for us to sign a license agreement to do so.
A year ago, after explaining this, Reddit insisted we pay anyway, despite lawfully accessing Reddit data. Bowing to strong arm tactics just isn’t how we do business.
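To make that “application layer” distinction concrete, here is a minimal sketch of a retrieve-summarize-and-cite flow. It is purely illustrative (the Source class, the truncation-as-summary step, and the demo URL are all hypothetical, and this is not a description of Perplexity’s actual system); the point is simply that content is fetched and cited at answer time, and nothing is ever used to train model weights.

```python
# Minimal, illustrative sketch of an "application layer" retrieve-and-cite flow.
# Nothing here trains a model; the sources are fetched at answer time and
# cited back to where they live. All names and data are hypothetical.

from dataclasses import dataclass

@dataclass
class Source:
    url: str      # where the content actually lives (e.g., a Reddit thread)
    snippet: str  # text retrieved at query time

def answer_with_citations(query: str, sources: list[Source]) -> str:
    """Build an answer that summarizes retrieved snippets and cites each one."""
    lines = [f"Answer to: {query}"]
    for i, src in enumerate(sources, start=1):
        # A real system would have an LLM paraphrase/summarize here; this
        # sketch just truncates the snippet so the example stays runnable.
        summary = src.snippet[:120]
        lines.append(f"{i}. {summary}... [{i}]")
    lines.append("")
    lines.append("Citations:")
    lines.extend(f"[{i}] {src.url}" for i, src in enumerate(sources, start=1))
    return "\n".join(lines)

if __name__ == "__main__":
    demo_sources = [
        Source("https://www.reddit.com/r/example/comments/abc123/",
               "Users in this thread compare two budget mechanical keyboards..."),
    ]
    print(answer_with_citations("best budget mechanical keyboard?", demo_sources))
```

Whatever you think of that business model, it is structurally the same thing a search engine does: retrieve, summarize, and point the reader back to the original.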
For what it’s worth, Perplexity also claims that this is part of Reddit’s plan to “extort” more money from Google.
This Is an Anti-Open Internet Lawsuit
If this lawsuit succeeds, it would do enormous damage to the open internet. It would effectively make it impossible for search engines to work without licensing all content, closing off huge parts of the open internet to only those with the largest wallets.
Beyond that, it would stretch Section 1201’s anti-circumvention provisions to the point of absurdity. Saying that not scraping your site is circumvention? Crazy. Saying that (allegedly) “bypassing” someone else’s technological measures lets you sue? Absurd. And saying that you can do all that over content you don’t even hold the copyright on? Preposterously stupid.
A win for Reddit here would open up a cottage industry of frivolous lawsuits while greatly diminishing the open web.
I’ve long considered Reddit one of the “good” examples of how narrower, more focused communities can operate. On our latest Ctrl-Alt-Speech, we talked about how it’s one of the examples of the “good” parts of the internet. I know and respect many people at Reddit, including on their legal team.
But I just don’t get this lawsuit. It seems massively destructive to the open internet in what appears to be a very misguided and mis-targeted attempt to shake down extra licensing revenue. There are better ways to do this, and I hope that Reddit reconsiders its approach.
Earlier this month, we noted how Wired and Business Insider were among a half-dozen or so major news organizations that were busted publishing fake journalism by fake journalists using AI to make up completely bogus people, narratives, and stories. The Press Gazette found that at least six outlets were conned by a fraudster going by the name of “Margaux Blanchard.”
A week later, the scandal is much bigger than originally reported.
Business Insider has had to pull upward of 40 stories offline for being fabricated. The Washington Post and The Daily Beast have found that “Margaux Blanchard” appears to be part of a much larger operation using “AI” to defraud news outlets and mislead the public. Most of the pieces were personal-essay-style writing about experiences that were completely made up, filed under a rotating cast of fake authors.
And most of this stuff should have been caught by any competent editor before publication:
“The Beast’s review found several red flags within the since-deleted essays that suggest the writing did not reflect the authors’ lived experiences. This included contradictory information in separate essays by the same author, such as changing the gender and ages of their supposed children, and author-contributed photos that reverse-image searches confirm were pulled from elsewhere online.”
Recall that back in May, Business Insider executives celebrated the fact that they had laid off another 21 percent of their workforce as part of a rushed pivot toward automation. But not only does that automation have problems with basic things (avoiding plagiarism, writing accurate headlines, handling citations), it has opened up new problems involving propaganda and fraud.
Again, early LLM automation has some potential. But the kind of folks who own (or fail upward into positions of management at) major corporate media outlets primarily see AI as a way to lazily cut corners and undermine already underpaid and mistreated labor. As you can see at places like Business Insider and Politico, these folks don’t appear to genuinely care whether AI works or makes their product better, in large part because they’re exceptionally terrible at their jobs.
There’s automation and what it can actually do. And then there’s a deep layer of fatty fraud and misrepresentation by hucksters cashing in on the front end of the AI hype cycle. That latter part is expected to have a very ugly collision with reality over the next year or so (it’s something research firms like Gartner call the “trough of disillusionment”). Others might call it a bubble preparing to pop.
Most extraction-class media owners have completely bought into the hype, in part because they desperately want to believe in a future where they can eliminate huge swaths of their payroll with computers. But they’re apparently not bright enough to see the limitations of the tech through the haze of hype, despite no shortage of examples of the hazards of rushing to adopt undercooked tech.
The rushed integration of half-cooked automation into the already broken U.S. journalism industry simply isn’t going very well. There have been countless examples of affluent media owners rushing to embrace automation and LLMs (usually to cut corners and undermine labor) with disastrous impact, resulting in lots of plagiarism, completely false headlines, and a giant, completely avoidable mess.
Earlier this month, we noted how Politico is among the major media companies rushing to embrace AI without really thinking things through or ensuring the technology actually works first. They’ve implemented “AI” systems (without transparently informing staff) that generate articles rife with all sorts of gibberish and falsehoods (this Brian Merchant post is a must-read to understand the scope).
Politico management also recently introduced another AI “report builder” for premium Politico PRO subscribers that’s supposed to offer a breakdown of existing Politico reporter analysis of complicated topics. But here too the automation constantly screws up, conflating politicians and generating all sorts of errors, and for some incoherent reason its output isn’t reviewed by Politico editors.
Actual human Politico journalists are understandably not pleased with any of this, especially because the nontransparent introduction of the new automation was in direct violation of the editorial union’s contract struck just last year. So unionized Politico employees have since been battling with Politico via arbitration.
On July 11, the PEN Guild (which represents about 250 Politico union members) and Politico held an arbitration hearing to determine whether the publication had broken its collective bargaining agreement. Nieman Lab obtained the transcript of the hearing, at which senior Politico editors tried to claim that automation shouldn’t be held to the same editorial standards as humans.
Asked specifically about the problems with Report Builder, deputy editor-in-chief Joe Schatz insisted that because Report Builder was technically built by coders, and its output isn’t reviewed by professional editors (which is insane), it shouldn’t have to adhere to the site’s broader editorial standards:
“He went on to argue that Report Builder sits “outside the newsroom,” since Politico’s product and engineering teams built the tool and editorial workers don’t review its outputs. As a result, he said, the AI-generated reports should not be held to the newsroom’s editorial standards.”
That’s… incoherent. LLMs are tools; they’re not exempt from editorial standards and material reality just because management is bullish on AI. The CEO of Politico owner Axel Springer, Mathias Döpfner, recently introduced a company-wide mandate that every single employee in the organization not only has to use AI, but has to consistently file reports justifying any instance where they don’t. It’s rather… cultish.
This tap dancing around what constitutes “newsgathering” is effectively a way for Politico management to try to sidestep its contract with union employees, since said contract plainly states:
“If AI technology is used by Politico or its employees to supplement or assist in their newsgathering, such as the collection, organization, recording or maintenance of information, it must be done in compliance with Politico’s standards of journalistic ethics and involve human oversight.”
Again, most U.S. media is owned by affluent older, white, conservative men who generally see AI not as a way to make their products or employees’ lives better or more efficient, but as a way to cut corners and undermine already underpaid labor. Men like Döpfner, who are fond of our authoritarian President, and whose editorial standards and relationships with labor were pretty fucking shaky to begin with.
These men want to create a fully automated ad engagement ouroboros that effectively shits money without having to pay humans a living wage, and that goal is evident everywhere you look.
In an ideal world this would result in surging demand for intelligent, savvy journalism and analysis by competent, experienced people who actually have something to say. But this isn’t an ideal world, and increasingly the kind of folks dictating the trajectory of U.S. media (and automation) are routinely demonstrating they lack any sort of ethical competency for the honor.
A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.
The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.
The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.
The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance (meaning: adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters); c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”
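To make the mechanics of (a) and (b) concrete, here is a rough sketch, under stated assumptions, of the kind of notice-and-keep-down pipeline a host would have to build. Everything here is hypothetical (the KeepDownFilter class, the similarity threshold, the example “fingerprints” are all invented for illustration); the point is that a bare notice creates a permanent blocklist entry, and anything that merely resembles it never gets online.

```python
# Hypothetical sketch of a "takedown + keep-down" pipeline of the sort the bill
# would effectively mandate. Names are invented; no real platform's system is shown.

from difflib import SequenceMatcher

class KeepDownFilter:
    def __init__(self, threshold: float = 0.6):
        self.noticed: list[str] = []   # descriptions/fingerprints from takedown notices
        self.threshold = threshold     # how "similar" is similar enough to block

    def register_notice(self, fingerprint: str) -> None:
        # A bare allegation is enough to add an entry; nothing here verifies it.
        self.noticed.append(fingerprint)

    def allow_upload(self, fingerprint: str) -> bool:
        # Block anything that merely resembles a previously noticed item,
        # with no room for context like parody, satire, or commentary.
        return all(
            SequenceMatcher(None, fingerprint, n).ratio() < self.threshold
            for n in self.noticed
        )

f = KeepDownFilter()
f.register_notice("video: celebrity X endorsing product Y")
print(f.allow_upload("parody video: celebrity X 'endorsing' product Y"))  # False: blocked
```

Note that the parody in the example is blocked automatically: once the filter exists, it has no way to know whether a resembling upload is lawful speech.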
This bill would be a disaster for internet speech and innovation.
Targeting Tools
The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, by anyone who owns the rights in that individual’s image, or by law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits (the tools must be primarily designed for making unauthorized images, or have only limited commercial uses beyond that), but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power over innovation that they’ve long sought in the copyright wars, based on the same tech panics.
Takedown Notices and Filter Mandate
The first version of NO FAKES set up a notice-and-takedown system patterned on the DMCA, with even fewer safeguards. The new version expands it to cover more service providers and requires those providers not only to take down targeted materials (or tools) but to keep them from being uploaded in the future. In other words: adopt broad filters or lose the safe harbor.
Filters are already a huge problem when it comes to copyright, and at most a filter should be flagging an upload for human review when it appears to be a wholesale copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag uploads as infringing based on mere seconds of a match, and they frequently fail to take into account context that would make the use authorized by law.
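As a toy illustration of why “seconds of a match” filters over-flag (this is not a real fingerprinting system, and every name and number below is made up for the sketch), consider a naive matcher that hashes short windows of coarsely quantized audio and flags an upload if even a few windows match a reference:

```python
# Toy illustration of why short-window matching over-flags: two different
# performances of the same public-domain melody share short stretches of
# near-identical values, so a naive matcher flags the second as "infringing."

def fingerprint(samples: list[int], window: int = 4) -> set[tuple[int, ...]]:
    # Hash short overlapping windows of (coarsely quantized) audio.
    return {tuple(samples[i:i + window]) for i in range(len(samples) - window + 1)}

def flags_as_match(upload: list[int], reference: list[int], min_hits: int = 3) -> bool:
    # Flag if even a handful of short windows match; context is never considered.
    shared = fingerprint(upload) & fingerprint(reference)
    return len(shared) >= min_hits

# Two pianists playing the same public-domain melody quantize to similar values.
performance_a = [3, 5, 5, 6, 8, 8, 6, 5, 3, 2, 3, 5]
performance_b = [3, 5, 5, 6, 8, 8, 6, 5, 3, 3, 2, 3]  # different player, same tune

print(flags_as_match(performance_b, performance_a))  # True: flagged despite being lawful
```

The matcher has no concept of “this is a different performance of an uncopyrighted work,” let alone of fair use; it only knows that a few windows lined up.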
But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.
The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.
Threats to Anonymous Speech
As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.
We’ve already seen abuse of a similar system in action. In copyright cases, those unhappy with criticism made against them get such subpoenas to silence the critics. Often the criticism includes the complainant’s own words as proof of the point, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.
Not only does this chill further speech, the unmasking itself can harm users, whether reputationally or in their personal lives.
Threats to Innovation
Most of us are very unhappy with the state of Big Tech. It seems like not only are we increasingly forced to use the tech giants, but the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need in order to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.
Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity. For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?
This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. If Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before piling a new one on top. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.
NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.
Originally posted to the EFF’s Deeplinks blog, with a link to EFF’s Take Action page on the NO FAKES bill, which helps you tell your elected officials not to support this bill.