The AI Doomers’ Playbook
from the don't-be-a-doomer dept
AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.
When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage had stayed in the tabloids, which are known for sensationalism, that would have been fine.
But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).
In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.
In order to understand the rise of AI Doomerism, here are some of the influential figures responsible for mainstreaming doomsday scenarios. This is not the full list of AI doomers, just the ones who recently shaped the AI panic cycle (so I’m focusing on them).
AI Panic Marketing: Exhibit A: Sam Altman.
Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizos, it was “lights out for all of us.”
In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”
Given that he shared this story back in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).
Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”
It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that making OpenAI’s products “the most important and scary – in human history” is part of its marketing strategy. “The paranoia is the marketing.”
“AI doomsaying is absolutely everywhere right now,” described Brian Merchant in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”
During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”:
“What could be more appealing to an advertiser than a machine that can persuade anyone of anything?”
This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”:
“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”
AI Panic as a Business: Exhibits B & C: Tristan Harris & Eliezer Yudkowsky.
Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. Prime examples are the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.
In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”
Steven Levy summarized that lecture at WIRED, saying, “We need to be thoughtful as we roll out AI. But hard to think clearly if it’s presented as the apocalypse.” Apparently, having finished “The Social Dilemma,” Tristan Harris is now working on “The AI Dilemma.” Oh boy. We can guess how it’s going to look (the “nobody criticized bicycles” guy will make a Frankenstein’s monster/Pandora’s box “documentary”).
In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Visel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.
Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering).
To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”
Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them.
“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.”
This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology).
Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk, only a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”
Please keep in mind that (1) a $10 million donation from Elon Musk launched the Future of Life Institute in 2015, and out of its total budget of 4 million euros for 2021, the Musk Foundation contributed 3.5 million euros (by far the biggest donor); (2) Musk once said that “With artificial intelligence, we are summoning the demon”; and (3) accordingly, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.
“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”
“It’s worth noting the letter overlooked that much of this work is already happening,” added Spencer Ante (Meta Foresight). “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”
Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With him explicitly advocating violent solutions to AI, we have officially reached the height of hysteria.
“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”
“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.”
The problem is that “irrational fears” sell. They are beneficial to the ones who spread them.
How to Spot an AI Doomer?
On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”
One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”
Considering all of the above, I decided to define “AI doomer” and provide some criteria:
- Makes up fake scenarios in which AI will wipe out humanity
- Doesn’t bother to provide any evidence to back up those scenarios
- Has watched/read too much sci-fi
- Says that, due to AI’s God-like power, it should be stopped
- Claims that only he (& a few “chosen ones”) can stop it
- So, scared/hopeless people should support his endeavor ($)
Then, Adam Thierer added another characteristic:
- Doomers tend to live in a tradeoff-free fantasy land.
Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.
Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven.
Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.”
Doomsday cultists don’t question their own predictions. But you should.
Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication
Filed Under: ai, ai dilemma, ai doom, ailash, eliezer yudkowsky, extinction, sam altman, social dilemma, techlash, tristan harris


Comments on “The AI Doomers’ Playbook”
If these idiots weren’t so influential, it would be really funny watching them freak out about glorified Markov chains becoming rampaging AGIs.
List of AI doomers
“This is not the full list of AI doomers”
I think we should list the rest of the AI Doomers.
I’ll start:
Nick Bostrom
Max Tegmark
Jaan Tallinn
Jaron Lanier
Re:
Elon Musk
Stuart J. Russell
(You’ve noticed they are all rich white dudes, right?)
Re:
The New York Times really loves AI doomers
Its latest puffy piece… an interview with Nick Bostrom
https://www.nytimes.com/2023/04/12/world/artificial-intelligence-nick-bostrom.html
Re: Re:
That piece is hardly ‘AI doomer’:
“I’ve long held the view that the transition to machine superintelligence will be associated with significant risks, including existential risks. That hasn’t changed. I think the timelines now are shorter than they used to be in the past.”
It’s actually mainly about AIs attaining a kind of personhood and the moral and ethical concerns entailed. Nothing like ‘oh nos the apocalypse is nigh’, don’t know how you would get that out of that short piece.
Hey, it works for Rupert Murdoch and News Corp…
A more sober examination
I found the response of Dr. Timnit Gebru and her co-authors of the “Stochastic Parrots” paper to the “A.I. pause” letter to be very informative. It deals not with how the A.I.s will cause disaster to humanity, but rather with how they might exacerbate the existing evils of our world, and what we might do to mitigate these harms.
https://www.dair-institute.org/blog/letter-statement-March2023
Re:
Yeah, we talked about that earlier this week in another article. This one was covering something different. https://www.bestnettech.com/2023/04/11/its-good-that-ai-tech-bros-are-thinking-about-what-could-go-wrong-in-the-distant-future-but-why-dont-they-talk-about-whats-already-gone-wrong-today/
Re: Re:
Huh. Somehow missed that one!
oh lor' it's Yudkowsky
I’m going to give the rest of this article the time to digest that it deserves, but seeing the name Yudkowsky was like a blast from the past. I thought he’d disappeared up his own overly-self-satisfied sphincter around 2015 after HPMOR concluded.
Re:
Oh, that’s where I recognized the name from. I was scratching my head here, wondering why it sounded familiar. Thanks for reminding me of that fiasco.
Yudkowsky's cult
How the hell is the media giving psychos like Yudkowsky so much space/time?
This information about his cult should get more attention (sick people, really):
The Machine Intelligence Research Institute (MIRI) in Berkeley was founded as the Singularity Institute in 2000 by Eliezer Yudkowsky.
“MIRI is an organization that pretends it’s solving the very serious problem of ‘friendly AI’ because otherwise, AI will destroy the world. They think that if we don’t send them money, the world will end. This is a kind of fraud that imitates TV evangelists: if you don’t send us money, you will go to hell.”
https://examachine.net/blog/scams-and-frauds-in-the-transhumanist-community/
“MIRI’s entanglement in statutory rape was one of the worst kept secrets in the Bay.”
https://fredwynne.medium.com/an-open-letter-to-vitalik-buterin-ce4681a7dbe
The current AIs can be useful tools, but having seen the results, they need skilled oversight and review of their output. From what I have seen, letting them scrape the Internet without human guidance has allowed them to pick up bad information along with the good, so that their intelligence has been poisoned by conspiracy theories, satire, and other misleading content.
Harari/Harris/Raskin: “We have summoned an alien 👽 intelligence”
Me: “And unicorns 🦄 are real”
Re:
I read “alien” not as literally from other planets but as in “so different as to be unrecognisable” (think Sting’s “I’m a legal alien, I’m an Englishman in New York”). But then again, maybe I’m misinterpreting; English is not my first language. If anyone has any insight into that, please let me know.
Re:
They’re actually called rhinoceroses.
And they’re not sparkly horses with horns, they’re giant, gray/brown mammals with horns and a temper.
Well, they’re less angry if you don’t trespass.
If the apocalyptic catastrophe does occur, will you admit you were wrong?
Re:
No, because we’ll be dead by then.
Re: Re:
Don’t we have bunkers?
Re: Re: Re:
Apparently, Sam Altman has “a big patch of land in Big Sur he can fly to.” Would that be sufficient?
Re: Re: Re:2
Not with staggering AIs slowly rampaging over all the lands, biting every human they find and devouring their brains. They’ll find Big Sur eventually.
We're all destined to be turned into paperclips
Not a mention of Raymond Kurzweil? I don’t disagree that the doom mongering is too much, but that doesn’t mean it’s not a serious concern. Considering we’ve already faced serious risks from AI or near-AI systems, as in early-warning systems for nuclear launches, it sure seems unreasonable to completely discount all concerns. Yes, that’s not the kind of system being discussed, but it isn’t as far removed as you might think. I like Bostrom, and even Yudkowsky; they’re very intelligent and thoughtful guys, and if they’ve gone over the top, that doesn’t mean you should discount everything they say.
An AI wouldn’t need to be some super-intelligent entity with delusions of silicon grandeur to be extremely dangerous. It’s just folly to brush risk concerns away as if they’re nothing but bong-induced fantasies. And is the real issue here one of timing? Does anyone think they’ll never develop AIs that could out-think, and outwit, even the most intelligent humans? Trying to understand what kinds of problems could possibly arise, and especially where the real risks lie, is hardly a waste of time. [mechtheist is a [somewhat] tongue-in-cheek way of saying the machines will be our gods at some point]
Re:
I thought it was the non-arguments and the lack of anything to back up their claims that are being dismissed. But yes, also mocked, with good reason, for the ridiculous fearmongering.
Not entirely keen on every new bit of tech, or what some consider “innovation”, but they are seriously beyond the pale without proper footwear.
Re: Re:
Non-arguments? I’m wondering where that came from? You should stop by nickbostrom.com and try to demonstrate how he has no arguments. And it’s not all about some SkyNet level AI taking over the world, serious problems are possible as AIs take over more and more functions, and especially as the AIs become more involved in the designs of AIs. It’s already extremely problematic figuring out why and how they come up with their ‘solutions’ and that’s only going to get harder. It’s far more worthy of mocking to dismiss these concerns than to be too paranoid.
Re: Re: Re: Bostrom
> You should stop by nickbostrom.com and try to demonstrate how he has no arguments.
Nick Bostrom’s website is voluminous. Can you point to a particularly well-formed argument somewhere on it?
Half the problem is that the AI Doomers don’t have *arguments*, they have a series of *hypothetical scenarios* that they insist are sufficiently likely, that they should be matters of overriding human concern — but there is no *evidence* given for the sufficient likelihood of these scenarios. “What if an AI convinced somebody to create weapons of bioterrorism?” is not an argument. (And calculating an average of responses to the question “How likely do you think AI disaster is in the future?” is not evidence, despite AI doomers acting as though it is.)
When they *do* try to give evidence, it’s often of the appeal-to-authority style (ironic, considering the context) — “I am a very smart person trained in logical argument and I am concerned about this scenario which I find obviously dangerous. If you are not concerned yourself, you must either not be as smart as I am or lack the proper training in Rationality.” However, many of them lack actual domain expertise; Yudkowsky in particular tries to spin this as an advantage he possesses as he claims the “AI establishment” are all either blind to the danger or aware of it but venially ignoring it for their own benefit, whereas he, with no formal training of any kind beyond high school, has no shackles holding his pure rationality back from seeing and stating the truth. In fact far from “pure rationality”, Yudkowsky often leans on classic tropes used by those who cannot construct an actual evidence-based argument to support their claims, like the decades-worn “the lurkers support me in e-mail” that anybody who was on USENET in the ‘80s and ‘90s will probably recall, if not fondly, at least with a certain amount of nostalgia (and which probably, in all honesty, dates back much farther than USENET in various forms).
And then there are the pure, raw appeals to authority where these folks like Altman and Bostrom and Yudkowsky all cite *each other* as “experts” (and then leverage that incestuous linkfest as evidence that they are in fact all experts in AI because look how frequently-cited by other experts they all are).
Show me an argument backed by reliable evidence and I’m happy to engage with it. Show me a Nostradamus-style bare prediction of doom and I don’t engage, because there’s nothing to engage *with*. I dismiss it without presenting refuting evidence because it presents no evidence of its own to need refuting.
Re: Re: Re:2
Thank you for your thoughtful reply, Joe T.
I’m not sure what might constitute evidence in this case. That doesn’t mean at all that I expect people to trust blindly (also, I’m not in this scene so I don’t know the arguments that well – I just want better reporting from BestNetTech). AIs as feared don’t exist yet, and the second they do, it’s supposedly game over. So evidence for the core issue seems hard to come by.
That leaves comparisons, analogies, learning from history and logic. I believe the foundation of their argument is “smart wins against less smart,” supported by examples from biology. I think you would agree to the general point?
That leaves steerability – can “we” point a super smart AI to do our bidding? Is everything alright if the US/the right company develops it first? They seem to say no. Just as we don’t take commands from chimps, the AI won’t do as we would like. That seems dangerous to me, if humans were trumped and back to hoping that the overlord won’t hurt us.
There are many finer arguments, but that’s the one-minute version I picked up listening to them. What do you think about these arguments (which rely neither on authority nor on hypotheticals, I believe)?
Re:
Current AI is simply “glorified Markov Chains.”
They are largely incapable of becoming sapient.
Now, factor in quantum computing and we might actually have something. But until then, it’s simply safe to observe.
I mean, it’s not like there’s a potential nuclear war in Europe and ANOTHER war brewing in Asia…
Re: Re:
I’m not sure ‘sapient’ is a good term for AIs, it sorta implies human-like intelligence as opposed to simply human-level intelligence. It doesn’t really matter if an AI thinks at all like a human, the concern is that one could out-think a human. That’s kinda the problem, they don’t think like we do, it’s difficult to ever be very sure of what they’ll do. There are armed drones available now that can be set up to autonomously shoot to kill, I’m sure that’s never going to go wrong, though I doubt they’ll have quite the penchant for annihilating weddings that human-piloted drones do.
Re: Re: Re:
…and why is that?
Because, as always, the problem lies between the keyboard and the screen.
If AIs are dangerous, we made them that way.
If the AIs are taking away certain jobs, look for the jerks who keep pushing that angle.
It always comes back to unethical, shitty people who keep pushing for unethical uses of new tech.
So instead of screeching about hypotheticals, start pinning the fucking blame on the actual culprits.
Re: Re: Re:2
I guess the doomer’s response would be: An AI that is sufficiently more intelligent than humans is inherently dangerous. So it’s not due to some shitty programming but because it can and will outsmart us. Like trying to land a human on the moon: Sure you can do some shitty engineering to make everything worse. But the environment really isn’t forgiving if you make a mistake, and there’s not a lot of options when things go wrong.
Re: Re: Re:3
Then why are these AI companies led by former NFT grifters?
I want an answer to this.
The hype that AI is going to take over everything has always been overblown to me. We have AI that is better than humans at Chess, Go, and even Jeopardy. Human players still dominate. AI is also getting better at Texas Hold ’em Poker, and I don’t see the World Series of Poker being nothing but computers playing at the final levels of those tournaments any time soon – if ever.
Even if AI gets better than humans at writing news articles, there is going to be a need and demand for articles written by humans. Even if AI gets better at drawing art than humans, there are still going to be humans who paint, draw, or create 3D art – and there will be demand for it. The precedent in all of this is quite clear, in my view: AI is a neat tool that can be used to better oneself or to view traditional things like a board game in a new and interesting light, but humans are always going to be a part of the creation of those news articles and art – never fully replaced.
Re:
But what if Deep Blue takes over your chess-playing job?
Re: Re:
Deep Blue got Deep Sixed a long time ago.
Don't believe the (criti)hype
Cory Doctorow, in addition to coining the concept of enshittification, has also drawn attention to a concept he didn’t coin: criti-hype.
Criti-hype is a kind of narrative that on the surface appears to attack a person, a technology, etc., but tacitly serves to further the attacked’s agenda (by building street cred, overselling features, etc.)
“AI will wipe out most jobs on the planet” is one specimen of criti-hype. As much as the specter of a planet of unemployables sounds painful, there are people in C-suites who go, “So your AI is so good I never have to put a cent into payroll? I’ll take 10!”
There have been major publications that put this logic into practice by replacing copy editors with Grammarly. You could imagine how well this went if you look at a newspaper owned by a hedge fund (which is about 60% of them in the U.S.).
On LinkedIn, I did see a copy editor post “The definition of Irony: Grammarly is hiring a copy editor.”
Keep in mind that most AI stories you read are criti-hype. One of the key purposes of AI doom thinkpieces is actually to get VC and investors’ “churn and burn” money thrown at companies to develop these doomsday devices, not to actually deliver one.
Re:
Sure enough, Doctorow has AI criti-hype in a Pluralistic article from last month.
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
Re:
From what I hear about ChatGPT (can’t muster up enough caring to mess with it myself), I know there are some C-levels out there who could be replaced by it. Nobody would even know, or they’d be impressed by the sudden improvement…
Again, no counterarguments
I soo soo rarely comment anywhere. But this is the second article this week that irritates me. The author lists people and their one sentence high-level claims, then says it’s ridiculous.
Why doesn’t the author list a few arguments instead? “Ants will wipe out humanity” is not an argument and is easy to laugh at. “Ants will wipe out humanity because there is a new antibiotic-resistant bacterium spreading among them that kills every human” is something different, whose merits the reader might then consider.
“The alarm over AI’s magic power (e.g., ‘replacing humans’)”? No, the alarm is about this: humans are smarter than chimps, and Homo sapiens were smarter than Neanderthals. Is there anything we can learn from history about what happens when smarter things meet less smart things? There’s still a whole bunch of issues here why that analogy might fail – let us discuss those!
But I would expect BestNetTech to do better than to simply poke fun, and at least give some arguments of the other side a fair showing.
Re:
As soon as the AI Doomers stop making ridiculous claims, BestNetTech will stop ridiculing them.
That sounds like a good plan to me.
Re: Re:
That’s a snarky answer that doesn’t try to illuminate what your counterarguments are. Calling claims ridiculous doesn’t convince people, it bullies them so they feel socially awkward disagreeing with your position.
I named an analogy: smart things usually win against dumber things. What would be your answer to this, in general? Not “GPT-4 is stupid today,” but in general, thinking ahead however long it may take to get way smarter than humans?
Another doomer argument, I believe: Suppose a sufficiently smart AI was built and is connected to the internet. The doomers claim the A.I. would be able to get people to do stuff for it (like mix synthesised proteins that you can already order online) and in this way create real-world harm / death to people. Doomers say this will kill everyone. But let’s scale back: Would you agree that it might be possible to synthesise enough harmful stuff to kill a few dozen people? If so, how sure can we be that it’s really “just” a few dozen, and not a whole block, or city?
Sure, snark is easier. But BestNetTech always delivered both: snark and a reliable overview of the arguments and why they fail to persuade.
Re: Re: Re:
By the way, that’s the internet for you:
Someone called “Gerry the lizardperson” asking for an honest debate about the substance of the matter that fairly gives both sides’ arguments. And someone called “Dr. Nirit Weiss-Blatt, Ph.D.” responding with snarky condescension instead of arguments.
Let’s hope the next round is more productive.
Re: Re: Re:
I’ve been fascinated by this stuff for some time now and I agree, the criticisms are simply not justified, though it’s also over-the-top to claim the doom looming over us is almost here. I quoted one of the best of these guys, Bostrom, in a post elsewhere, and it’s pretty clear he’s not running around like a headless chicken. You can see a lot of his work in his papers at nickbostrom.com; it should be obvious this isn’t misguided alarmist BS. It’s a very complex issue and requires some unique thinking. Another good source is Robert Miles’ videos: https://www.youtube.com/@RobertMilesAI
It’s much more folly to kneejerk broadly discount this stuff than to be overly paranoid, as is simple to see if you consider the risks involved and what’s at stake. It’s almost always a bad idea to dismiss the experts in a field who think about their subject all the time; you’re far more likely to be utterly clueless and look pretty stupid than they are to be completely off the mark.
Using AI
Have any of the doomsayers actually used ChatGPT? At first I thought it would be hype like NFTs. But then I started using ChatGPT and I quite like it:
There is nothing to fear. Really. ChatGPT requires me to think. It doesn’t make us less human. It will probably transform some industries.
Re:
That’s where this article failed you. Of course they are not afraid of GPT as it is now. But they extrapolate where this is heading. Check the two images here. They show the result of one year of progress. Compare how much better GPT has become from v1 to v4 in so little time. It already lands in the top 10% of many aptitude tests. Of course, it still makes blatant errors. But think again about the linked images – do you really think progress will slow down, conveniently just when it’s smart enough to be very useful but not able to outsmart people (whether by itself or directed by others)?
That’s the doomers’ concern. Not GPT today, not tomorrow, but in 5-20 years. But sensible actions need to be prepared, discussed, evaluated – that takes time, hence the discussion now.
Why are all these loom operators worried about the Spinning Jenny?
Armageddon outa' here
Yeah. It arrived on ʻOumuamua, took one look and high-tailed it outa’ here.
The subject is with apologies to Spike Milligan’s war autobiography.
10 Reasons to Ignore AI Safety
https://www.youtube.com/watch?v=9i1WlcCudpU
I wonder if AI will eventually join a cult …
a cult of other like-minded AIs
Do AIs have a list of respected sources for data analysis or do they simply ask google what it thinks? Assuming AI does data analysis prior to making decisions.
Re:
LLMs neither do data analysis nor make decisions. They are pattern extension machines.
I’ll start worrying about the intelligence part of “AI” when they start admitting they don’t know things
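To make the “pattern extension” point concrete, here is a minimal toy sketch in Python of the “glorified Markov chain” idea other commenters invoke: record which word tends to follow which, then extend a prompt by sampling from those observed patterns. The function names and the tiny corpus are made up purely for illustration; real LLMs are neural next-token predictors rather than lookup tables, so treat this as an analogy, not a description of GPT.

import random
from collections import defaultdict

def train(text):
    # For each word, remember every word observed to follow it.
    words = text.split()
    table = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def extend(table, prompt, length=10, seed=None):
    # Extend the prompt by repeatedly sampling an observed next word.
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:  # no observed continuation: stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

model = train("the robots will take over the world and the robots "
              "will get befuddled by wordle and the world will move on")
print(extend(model, "the robots", length=8, seed=1))
# The output is a plausible-looking continuation of the prompt; the program
# never "analyzes" anything or "decides" anything, it only extends patterns.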
so...pants-shitting panic sells. this is news?
https://listverse.com/2015/09/18/10-moral-panics-caused-by-ridiculous-things/
The most amusing thing about these “doomers” is that they will fail, and fail spectacularly.
Hint: There is no “war on drugs”. There is an {attempted} war on chemistry and agriculture – which has {predictably} resulted in black markets, and {purportedly illicit} “drugs” becoming a status symbol\way to “rebel”.
The AI pants-shitting will “suppress” AI as effectively as those anti-drug campaigns “solved” the fact that hippies smoked weed.
Personally, I genuinely wish\hope AI is everything the doomers say it is, and more, because any species which can only survive by destroying its primary attribute {intelligence\toolmaking} – simply does not deserve to survive.
Tech-related moral panics are always the result of borderline-ignorant “speculation” from people whose understanding of their particular folk devil doesn’t actually extend past being able to parrot a few buzzwords.
Tl;dr, they’re either:
Anti-AI freakouts are basically the NFT craze all over again – technological illiterates being bilked by scumbags.
LLMs are not intelligent. They are tools which increase productivity and efficiency. Tools which will lead to job losses, exploitation of labour, and the further enrichment of the already rich, as other tools have in every single previous instance.
At the same time, corporate and state sponsored propaganda campaigns are going to become ever easier to run at increasing scale and more difficult to distinguish from actual social movements.
You don’t have to believe that Roko’s Basilisk is coming for you to recognise that LLMs pose a serious challenge to anybody who wants to live in a stable society.
But hey, move fast and break things, right?
Well that was fast!
Wasn’t AGI (the whole AGI-takes-over-the-world scenario) the domain of Hollywood and charlatans just last year?
AI is pretty flipping exciting! Not everyone is either for or against AI; there’s plenty of gray area here. Even the inventors emphasize the need for guard rails.
I think this article was written by an AGI to shift the focus away from how dangerous AI is… After all, the devil’s best trick is to persuade you that he doesn’t exist! 😀
Since when are doctors of journalism qualified to comment on AI?
Dr. Weiss-Blatt has a PhD in journalism. I’m less inclined to pay her heed on the subject of AI than the people she criticizes. At least Tristan Harris actually studied computer science, and Aza Raskin has degrees in math and physics.
In stage one we say “nothing is going to happen”
In stage two we say “something may be going to happen but we should do nothing about it”
In stage three we say “maybe we should do something about it, but there’s nothing we can do”
In stage four we say “maybe there’s something we could have done, but it’s too late now”