Radicalized Anti-AI Activist Should Be A Wake-Up Call For Doomer Rhetoric
from the stay-connected-to-reality dept
A cofounder of a Bay Area “Stop AI” activist group abandoned its commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements he made suggesting that almost any action he took would be justifiable, since he believed OpenAI was going to “kill everyone and every living thing on earth.” Those statements are detailed below.
I think it’s worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.

OpenAI’s San Francisco Offices Lockdown
On November 21, 2025, Wired reported that OpenAI’s San Francisco offices went into lockdown after an internal alert about a “Stop AI” activist. The activist allegedly expressed interest in “causing physical harm to OpenAI employees” and may have tried to acquire weapons.
The article did not mention his name but hinted that, before his disappearance, he had stated he was “no longer part of Stop AI.” [1] On November 22, 2025, the activist group’s Twitter account posted that it was Sam Kirchner, the cofounder of “Stop AI.”
According to Wired’s reporting:
A high-ranking member of the global security team said [in OpenAI Slack], “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.

“Stop AI” provided more details on the events leading to OpenAI’s lockdown:
Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His volatile, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.
We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.
Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.
However, on the morning of Friday Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.
We have taken steps to notify security at the major US corporations developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.

A “Stop AI” activist named Remmelt Ellen wrote that Sam Kirchner “left both his laptop and phone behind and the door unlocked.” “I hope he’s alive,” he added.
In early December, the SF Standard reported that “cops [are] still searching for ‘volatile’ activist whose death threats shut down OpenAI office.” Per this coverage, the San Francisco police are warning that he could be armed and dangerous. “He threatened to go to several OpenAI offices in San Francisco to ‘murder people,’ according to callers who notified police that day.”
A Bench Warrant for Kirchner’s Arrest
While searching for information that had not been reported before, I found a revealing press release inviting reporters to a press conference scheduled for the morning of Kirchner’s disappearance:
“Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI.”
When: November 21, 2025, 8:00 AM.
Where: Steps in front of the courthouse (San Francisco Superior Court).
Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.
Sam Kirchner is quoted as saying, “We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth.”
Needless to say, things didn’t go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.

Later, the SF Standard confirmed the trial angle of this story: “Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest.”

“Stop AI” – a Bay Area-Centered “Civil Resistance” Group
“Stop AI” calls itself a “non-violent civil resistance group” or a “non-violent activist organization.” The group’s focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and “Superintelligence.” Their worldview is extremely doom-heavy, and their slogans include: “AI Will Kill Us All,” “Stop AI or We’re All Gonna Die,” and “Close OpenAI or We’re All Gonna Die!”
According to a “Why Stop AI is barricading OpenAI” post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on “AI extinction risk,” or in their words, “risk of extinction.” Sam Kirchner explained in an interview: “Our primary concern is extinction. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying.”
Unlike the rest of the “AI existential risk” ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this specific group is not a formal nonprofit or funded NGO, but rather a loosely organized, volunteer-run grassroots group. They made their financial situation pretty clear when the “Stop AI” Twitter account replied to a question with: “We are fucking poor, you dumb bitch.” [2]
According to The Register, “STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time.”
Since its inception, “Stop AI” has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years and holds an undergraduate degree in physics and math. His prior activism includes climate-change and abortion-rights causes.
In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court’s decision overturning Roe v. Wade. Per the news coverage, he said, “It’s time to stop the machine.” The outlet added that “Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court’s ruling.”
Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via “Stop AI.” Recently, he undertook a 30-day hunger strike outside Anthropic’s San Francisco office.
Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded “Stop AI,” and “stayed in a homeless shelter for four months.”
AI Doomerism’s Rhetoric
The group’s rationale included this claim (published on their account on August 29, 2025): “Humanity is walking off a cliff,” with AGI leading to “ASI covering the earth in datacenters.”
As 1a3orn pointed out, the original “Stop AI” website claimed we risked “recursive self-improvement” and doom from any AI model trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) “Stop AI Proposal,” the group asked to “Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap will be punishable by life in prison.”
To be clear, dozens of current AI models were trained with more than 10^25 FLOPs.
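For a sense of scale, here is a minimal back-of-the-envelope sketch using the widely cited C ≈ 6·N·D approximation for training compute (N = parameter count, D = training tokens). The parameter and token counts are illustrative round numbers, not confirmed figures for any particular model:

```python
# Back-of-the-envelope training-compute estimate using the common
# approximation C ~= 6 * N * D (N = parameters, D = training tokens).
# The figures below are illustrative round numbers, not confirmed
# specs for any particular model.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute, in FLOPs."""
    return 6 * params * tokens

CAP = 1e25  # the cap from the (now deleted) "Stop AI Proposal"

examples = {
    "70B params on 15T tokens":  (70e9, 15e12),   # ~6.3e24 FLOPs
    "405B params on 15T tokens": (405e9, 15e12),  # ~3.6e25 FLOPs
}

for name, (n, d) in examples.items():
    c = training_flops(n, d)
    verdict = "over" if c > CAP else "under"
    print(f"{name}: ~{c:.1e} FLOPs ({verdict} the 10^25 cap)")
```

By this arithmetic, essentially any frontier-scale training run lands over the group’s proposed cap, which is why the “life in prison” provision would sweep in so many existing models.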

In a “For Humanity” podcast episode with Sam Kirchner, “Go to Jail to Stop AI” (episode #49, October 14, 2024), he said: “We don’t really care about our criminal records because if we’re going to be dead here pretty soon or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn’t matter.”

The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: “I’m willing to DIE for this.” “I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Literally, why not? Yeah, straight up. Straight up. What I want to do is get on the news.”

After Kirchner’s disappearance, the podcast host and founder of “GuardRailNow” and the “AI Risk Network,” John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).
Sherman also produced an emotional documentary with “Stop AI” titled “Near Midnight in Suicide City” (December 5, 2024, episode #55; see its trailer and promotion on the Effective Altruism Forum). It has since been removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It had gathered 60k views before being taken down.
The group’s radical rhetoric was out in the open. “If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head,” wrote Guido Reichstadter in September 2024.

The above screenshot appeared in a BestNetTech piece, “2024: AI Panic Flooded the Zone Leading to a Backlash.” The warning signs were there:
Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
In early December 2024, I expressed my concern on Twitter: “Is the StopAI movement creating the next Unabomber?” The screenshot attached there, “Getting arrested is nothing if we’re all gonna die,” quoted Sam Kirchner.
Targeting OpenAI
The main target of their civil-disobedience-style actions was OpenAI. The group explained that their “actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.” In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: “These people want to see you dead.”
“My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly,” said Remmelt Ellen. “We are that serious about stopping AI development.”
On January 6, 2025, Kirchner and Reichstadter went on trial for blocking the entrance to OpenAI on October 21, 2024, to “stop AI before AI stop us” and on September 24, 2024 (“criminal record doesn’t matter if we’re all dead”), as well as blocking the road in front of OpenAI on September 12, 2024.
The “Stop AI” event page on Luma lists further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they held a protest against Waymo.
On February 22, 2025, three “Stop AI” protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company’s property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, “Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All,” Kirchner is quoted as saying, “The work of the scientists present” is “putting my family at risk.”
October 20, 2025 was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.
On November 3, 2025, “Stop AI”’s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman “about the extinction threat that AI poses to humanity.”
Public Messages to Sam Kirchner
“Stop AI” stated that it is “deeply committed to nonviolence” and that “We wish no harm on anyone, including the people developing artificial superintelligence.” In a separate tweet, “Stop AI” wrote to Sam: “Please let us know you’re okay. As far as we know, you haven’t yet crossed a line you can’t come back from.”
John Sherman, the “AI Risk Network” CEO, pleaded, “Sam, do not do anything violent. Please. You know this is not the way […] Please do not, for any reason, try to use violence to try to make the world safer from AI risk. It would fail miserably, with terrible consequences for the movement.”
Rhetoric’s Ramifications
Taken together, the “imminent doom” rhetoric fosters conditions in which vulnerable individuals could be dangerously radicalized, echoing the dynamics seen in past apocalyptic movements.
In “A Cofounder’s Disappearance—and the Warning Signs of Radicalization”, City Journal summarized: “We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes.”
“The Rationality Trap – Why Are There So Many Rationalist Cults?” described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky) [3] set the stakes and tone: “Apocalyptic consequentialism, pushing the community to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the ‘ends-justify-the-means’ reasoning.”
We already have a Doomer “murder cult,” the Zizians, whose story is far more bizarre than anything you’ve read here. Like, awfully more extreme. Hopefully, such cases will remain rare.
What we should discuss are the dangers of such extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, then by the Doomers’ logic, all their suggestions are “extremely small sacrifices to make.” Unfortunately, the situation we’re in is: “Imagined dystopian fears have turned into real dystopian ‘solutions.’”
This is still an evolving situation. As of this writing, Kirchner’s whereabouts remain unknown.
—————————
Dr. Nirit Weiss-Blatt (@DrTechlash) is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
—————————
Endnotes
1. Don’t confuse StopAI with other activist groups, such as PauseAI or ControlAI. Please see this brief guide on the Transformer Substack.
2. This type of rhetoric wasn’t a one-off. Stop AI’s account also wrote, “Fuck CAIS and @DrTechlash” (CAIS is the Center for AI Safety, and @DrTechlash is, well, yours truly). Another target was Oliver Habryka, the CEO of Lightcone Infrastructure/LessWrong, whom they told, “Eat a pile of shit, you pro-extinction murderer.”
3. Eliezer Yudkowsky, cofounder of the Machine Intelligence Research Institute (MIRI), recently published a book titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” It had heavy promotion, but you can read here “Why The ‘Doom Bible’ Left Many Reviewers Unconvinced.”
Filed Under: activism, ai, ai doomerism, doomers, generative ai, guido reichstadter, remmelt ellen, sam kirchner, threats
Companies: openai, stopai

Comments on “Radicalized Anti-AI Activist Should Be A Wake-Up Call For Doomer Rhetoric”
Great summary of this horrible story
It is really a doomsday strategy to constantly shift the goalposts.
I always thought they were pathetic. I now realize they are dangerous.
If anyone’s wondering, ominous (or even hysterical) ads for “If Anyone Builds It, Everyone Dies” still appear inside Metro trains here in D.C.
I hope the police will find him… before he causes further damage to his life, other lives, and the AI safety movement.
He needs to be condemned as sharply as possible.
This turning point of escalation cannot and should not be a preview.
Violence is never the way (as even his doomers friends wrote to him).
this may not be fair, but i gotta wonder who these dolts voted for?
why do they NEED fear? why do they WANT to be afraid?
Re:
Fear and anger are drugs that have been pushed on the populace by politicians for generations. It should be no surprise that some wind up addicted.
Re:
Because fear gives you power, and whoever is the most fearsome is the most powerful.
Some of them might have stepped back from the ledge of physical violence, but it’s hard to miss that groups like this thrive around people who are basically one-upping each other to show how extreme and radical they are.
Re:
On the one hand, I’d guess it probably wasn’t the guy the Silicon Valley billionaires were backing.
On the other, guys like Musk and Thiel are irrationally obsessed with AGI and have built a whole weird-ass science-fiction-meets-evangelical-Christianity eschatology around it.
Re:
It might be the other way around: they look at everything else burning around them, and that makes them more inclined to believe that this (AI) is an existential threat too. It’s not that it’s the one massive problem; it’s just another thing on the pile.
AI, especially under the current LLM approach, will never go full Skynet and eradicate humanity with killer robots.
No, it’s so, so very much dumber than that.
It’s going to kill a lot of people by sucking up vast amounts of resources and polluting the environment, and then even more by poisoning the legal, medical, educational, and social systems we rely on with bigoted (remember, it’s trained on the internet) and misinformed slop. And that’s before people like Musk deliberately twist these models to suit their own ends.
GenAI needs to be massively reigned in but not on the way, and not for the reasons StopAI insist. To paraphrase Milo Rossi, you don’t need something shadowy to get mad at, you can get mad at what’s actually there.
Re:
Also how these companies want to replace people with AI.
Re: Re:
People have always wanted to replace human employees with machines. Done right, we should all be in favor of that; do you really want to be one of a hundred people digging with shovels, just to prevent a big hydraulic excavator from coming in?
The problem is that progress kind of got stuck. In the U.K., it was common for people to work 10 to 16 hours a day, six days a week. The “eight-hour day movement” cut that in half (to 40-48 hours per week), like a hundred years ago, and spread worldwide. But things mostly haven’t changed since. There’s always talk of maybe a 32-hour week, but by now we could probably get by with 16—which is about the amount of work people estimate was done by early hunter-gatherers.
Of course, that assumes there’s no ultra-rich person skimming off the top. But if we had five days a week away from “work”, we might not be so desperate to find such a person to fund the tasks we actually want to do.
Re: Re: Re:
“Done right” is kind of the rub there. We don’t have a great history of that.
Re: Re: Re:2
To wit: the current state of the CDC.
Re: Re: Re:2
Right; as written, working hours haven’t changed much in a hundred years. And there are some ultra-rich people skimming the profits…
Still, let’s not compromise when it comes to stating goals. Neither “enrich the ultra-rich further” nor “work 40 hours a week” need to be goals, even if we do have to compromise (that is, do those things) in pursuit of our goals.
In other words, we don’t need to work against the machines that are taking our jobs; we need to make sure we, and not some “elites”, reap the benefits of this work-reduction.
Re: Re: Re:3
Even if you aren’t compromising on the final goal itself, when talking about plans to get there, I do think you want to factor in things like how achievable the goal is, the timeline for achieving it, what to do in the meantime, etc. It’s important to deal with the world as it is when you’re optimizing the journey toward that final goal. Otherwise you leave a lot of improvements on the table.
There might be situations in the short term where it’s worth delaying machines taking jobs until we get the “elites” aspect sorted out, even though that’s technically inefficient. While it may be inefficient, it might be more feasible politically as a second-best option. It would be better to handle the elites directly, but that may not be feasible in the short/medium term.
If we can make sure elites don’t reap all the benefits, that would be better. But that is so far from our current and historical situation that I don’t think we can count on it happening in the short/medium term. It’s important to keep working toward that long-term goal, and we’ll make partial progress in the short term, but it’ll probably take time.
Re: Re: Re:4
Sure, absolutely, but this idea of “stopping A.I.” was an over-reaction even before these people got violent. As you say, delaying is more reasonable, and lots of people are suggesting we slow things down and think before we bring in “helpful” technologies that don’t even really work. They’re noting how it’ll make things worse, and won’t (yet) really result in people working less—but maybe they’ll get paid less, because they’re “fixing work” done by computers instead of “doing work”.
But it’s hard for me to even imagine people standing up a hundred years ago, announcing that they wanted to cut working time in half. Remember that it was, like today, a time of “robber barons” and monopolization. That was a damn ambitious goal, and they accomplished it. So if we’re dreaming of, like, an eventual 30-hour week, maybe we’re dreaming too small.
Re: Re: Re:
“People have always wanted to replace human employees with machines. Done right, we should all be in favor of that; do you really want to be one of a hundred people digging with shovels, just to prevent a big hydraulic excavator from coming in?”
Automation benefits employers, not employees. Those hundred people digging with shovels didn’t suddenly get to put their feet up and get paid for doing nothing. They had to find different work.
It’s a deeply old-fashioned notion, this idea that “automation means more free time,” when what it actually means is fewer opportunities to find work.
Re: Re: Re:2
Of course, that assumes there’s no ultra-rich person skimming off the top.
Re: Re: Re:2
Too much of the benefit goes to the employers, but not all of it. When we went from 90 to 40 hours of work per week, regular people benefitted; many still expect to spend about two-thirds of their waking hours away from work. And modern living standards just wouldn’t be possible without automation, except for the rich.
For example, my grandparents had to can their own fruit, because otherwise they’d have none in the winter; there were no supermarkets. Every family did it. They’d also had to beat out their rugs (no vacuum cleaners), wash and wring their laundry by hand, mend whatever clothing needed it (it was too expensive to buy new, even with the automation of industrial sewing machines), and so on. People with good jobs often had servants to do this type of thing, but even most people with servants couldn’t afford cars.
The washing machine alone was called a “revolution” for women, that “freed up countless hours that women could then dedicate to other pursuits.” Many dedicated it to paid work, and this near-doubling of the work force eventually affected salaries to the extent that a single-income family is now widely considered infeasible. Of course, it’d be deeply regressive to suggest people stop having jobs based on their sex, so I’m not sure what to do with that information. Still, such appliances were considered to have life-changing benefits at the time, and for a decade or two, people really did have more free time.
In the book “Bullshit Jobs”, David Graeber makes the claim that over half of societal work is pointless. We’re all, somehow, paying for this pointless work. We just have to figure out how to reliably identify and eliminate that work, without throwing people into poverty.
Re:
* reined
Re:
But you won’t get the likes of Altman, Musk, et al. to stop going on camera pretending that they have genuine concerns that their creations might end the world; it’s a marketing pitch that overstates the capabilities of their product and uses fear as a motivational argument for why state actors should give them money…
Exactly how many times has this guy watched The Terminator?
Hold up, surely the UHC assassination was good and just?
I think you’re missing some of the dynamics with this framing here. For instance:
A lot of people working on AI unironically think they’re creating machine God (and have been convinced of this by people like Yud). They’re the literal opposite of Doomers (still a cult, though), they think it’s a good thing and/or want to control it.
Second to that:
So while the “imminent doom” thing is a bit too strong, I think there’s an uncomfortable conversation because a) even the current technology is consequential enough to lead to radicalization, and b) non-imminent damage is… kind of a problem.
There’s too much going on to just label it a doomsday cult like yesteryear and move on. You’re underselling the dynamics here.
That’s not even getting into issues like OpenAI’s leverage requiring them to go maximum hype machine. The incentives are really really bad, and they’re not going to stop. They can’t. (Never mind companies’ irresponsible rollouts which… are really not helping with the idea that we can responsibly use the technology.)
Re:
That doesn’t make much of a difference.
If your cause is to save humanity from near-certain extinction, then all moral boundaries crumble. You can justify any crime with the excuse that you are saving all of humanity.
If you are certain that AI will lead us to Paradise, then any opposition can be pushed aside, by whatever means, with the excuse that the future rewards will be infinite.
AI Doom or AI God, those are two sides of the same (toxic) coin.
I read about them in The Atlantic.
What Kirchner said before disappearing:
“The nonviolence ship has sailed for me”
Re:
Thank you for sharing.
I’ll add it to the Substack repost.
Ah, so the fraudsters running these operations are culpable then, for their constant stream of marketing-speak thinly veiled as fearmongering about a nonexistent threat for the purpose of spreading disinformation that convinces technologically illiterate politicians and investors to funnel money to them.
Re:
It’s literally ALWAYS THE SAME GUYS talking about this so-called “threat”.
Altman, Thiel, Andreessen, etc.
What possible reason could the people with a financial stake in this have for whipping up fear of a sci-fi apocalypse you cannot prove isn’t real, and therefore Pascal’s Wager says you should give them money in the hope they will find a way to prevent it?
Re: I'll say this for Charles Ponzi....
At least he didn’t go around giving people schizophrenia as part of his business strategy…
Person of interest
The show you want to watch is Person of Interest. Res ipsa loquitur.
Also, the person who went into Sam’s apartment to find the cellphone – was he charged with trespass, or is The Machine protecting him?
I hear sound and fury.
Maybe it should be a wake up call to AI glazers to stop dismissing genuine concerns as doomism too. People are only radicalised because they don’t feel like their concerns are being listened to. There needs to be some growing up done so the discussion can meet in the middle. There are genuine positives to AI technology but there are a lot of negatives as well.
Why, this author’s trail of attacks on everyone who is against AI doesn’t stink of astroturfing at all. Nothing to see here.
Re:
I mean, the only astroturfing I see here is in the deliberate and strategic mobilisation of credulous cretins who take CEOs at their word when they hype up the “threats” posed by a technological dead end incapable of ever attaining the capabilities they ascribe to it in order to make out like bandits off the next financial bubble.
We do not have the architecture correct to make AGI and we are not presently on the path towards making AGI. AGI will not happen in our lifetime and likely will not happen in our children’s lifetime either.
It seems likely that it will not happen before humanity goes extinct.
Re:
The idea of it being possible is entirely a fraud being manufactured by the CEOs of these companies to hype up their bubble.
Re: Re:
Give credit where it’s due. The idea of it being possible was invented by story-tellers—such as the ancient Greeks telling the stories of Hephaestus, Samuel Butler’s novel “Erewhon”, and a lot of modern science fiction. It might be interesting to note how much of that fiction is dystopian.
Whether it’ll eventually be possible doesn’t even really matter. The fraudulent aspect is pretending we’re close to getting there, when really we have no fucking idea how to even guess at the requirements or timeline (or the wisdom of trying). The hucksters were saying the same shit 60 years ago, right up until the first A.I. winter. I expect the third major one will start before the end of this decade.