Like Apple, Google’s AI News Tech Misinterprets Stories, Generates Gibberish Headlines
from the screwing-up-the-basics dept
Despite all the recent hype about “AI,” the technology still struggles with very basic things and remains prone to significant errors. Which makes it maybe not the best idea to rush the nascent technology into widespread adoption in industries prone to all sorts of deep-rooted problems already (like say, health insurance, or journalism).
We’ve already seen how news outlets have gotten egg on their faces by using AI “journalists” who completely make up sources, quotes, facts, and other information. But earlier this year, Apple also had to pull its major news AI system offline after it repeatedly generated inaccurate headlines and, in many instances, simply fabricated major events that never happened (whoops!).
Google has recently also been experimenting with letting AI generate news headlines for its Discover feature (the news page you reach by swiping right on Google Pixel phones), and the results are decidedly… mixed. The technology, once again, routinely misconstrues meaning when trying to sum up news events:
“I also saw Google try to claim that ‘AMD GPU tops Nvidia,’ as if AMD had announced a new groundbreaking graphics card, when the actual Wccftech story is about how a single German retailer managed to sell more AMD units than Nvidia units within a single week’s span.”
Other times, it just produces gibberish:
“Then there are the headlines that simply don’t make sense out of context, something real human editors avoid like the plague. What does ‘Schedule 1 farming backup’ mean? How about ‘AI tag debate heats’?”
Google has already redirected a ton of advertising revenue away from journalists who do actual work, and toward its own synopsis and search tech. Now it’s effectively rewriting the headlines that editors and journalists (the good ones, anyway) spend a lot of time crafting to be as accurate and inviting as possible. And it’s doing an embarrassingly shitty job of it.
Not that the media companies themselves have been doing much better. Most major American media companies are owned by people who see AI not as a way to improve the quality and efficiency of journalism, but as a path toward cutting corners and undermining labor.
Meanwhile, in the quest for massive engagement at impossible scale, tech giants like Meta and Google have simply stopped caring so much about quality and accuracy. The results are everywhere, from Google News’ declining quality, to substandard search results, to the slow decline of key, popular services, to platforms filled with absolute clickbait garbage. It’s not been great for informed consensus or factual reality.
You’d like to think that ultimately we emerge from the age of slop with not just better technology, but a better understanding of how to use and adapt to it. But the problem remains that most of the folks dictating the trajectory of this emerging technology have no idea what they’re doing, have prioritized making money over the public interest, or are just foundationally shitty human beings bad at their jobs.
Filed Under: ai, headlines, journalism, llms, media
Companies: google, meta


Comments on “Like Apple, Google’s AI News Tech Misinterprets Stories, Generates Gibberish Headlines”
I mean, AI is prone to some of the most immoral “both sides” cowardice imaginable — the kind of gibberish that would get a human speaker in serious trouble if they spouted it to the wrong audience.
Well, it’s the “model collapse” researchers have been talking about for years (well before the recent AI hype), where AI is trained on AI-generated content, which eventually produces garbage.
And since the amount of AI-generated content on the web increases every day, it keeps getting tougher to find real, “human-generated” content to properly train AI on.
The only solution is to create new AI models that are smarter (and so much more expensive) than current ones to detect AI-generated content — while not using them to generate new content…
Who could have imagined 25 years ago that this century would suck that much?
At this point the best-case scenario is that the bubble pops sooner and doesn’t wreck the global economy as badly as it will if it pops later.
“Google has already redirected a ton of advertising revenue away from journalists who do actual work…”
The Autopian is fighting AI enshittification and employing real journalists. They had a pretty good write-up about how Google has been directing traffic away from them and toward AI slop. https://www.theautopian.com/google-is-why-grandpa-thinks-a-mustang-truck-is-coming-and-its-trying-to-kill-our-business/
Re:
404 Media has a 25% off deal on right now, for those interested.
And of course BestNetTech’s looking for subscribers too.
(I’m not affiliated with either site; I’m subscribed to both because they do good work.)
Well, how often does someone bitch about a news headline on the internet, only to have apologists appear to point out that article authors have nothing to do with the headlines? Apparently nobody’s ever been willing to take responsibility for them, and the errors are often much worse than this example (which could’ve just as easily come from a human). Including, sometimes, blatantly ungrammatical or incomprehensible sentences, if not “nonsense” per se.
'If we go fully AI we won't have to pay workers! ... Other than to fix the AI's mistakes.'
A comment on the most recent Jimquisition put it perfectly I’d say, in noting that companies will pay billions to avoid paying millions.
Execs are so desperate to make use of this new magic (and make no mistake, they do seem to think it’s magic) to replace human workers that they’re shelling out huge piles of money only to have to go back later and pay even more to those same previously canned workers to fix the many, many mistakes that AI hallucinated into existence.
Re:
No, see, that’s the beauty of it: revising someone else’s draft pays less than writing the first draft.
Sure, they’ll be hiring the same workers back to fix the AI’s mistakes.
But they’ll be hiring them back at a lower pay rate.
It may surprise people but there is already a thing which summarises an article, it’s called the “headline”. A “headline” compresses the information in the article into a short sentence, often losing the nuance, depth, and content found within the full article.
Similarly, many long articles (called “papers”) have a summary called an “abstract” which compresses the contents of the paper into a paragraph or two.
Perhaps someone should inform the Great AI Thinkers at Apple and Google that these exist, so there’s no need for this unnecessary AI summarizing.
Then they can turn their attention to the really important problems in life.
Re:
Honestly at this point I’d like to see newspaper editors reminded of this, since they seem incapable of doing it properly without resorting to clickbait.
English language and interpretation.
The English language is a composite of many languages that it has collected words from over many centuries.
If you can find any word with only one simple meaning… you are a very bored person for searching for it.
Expect computers to grasp any hint of its complexity? You’ll need a computer with a totally different concept and design. Trying to fit a machine around language is like yelling at a 2-year-old for using the wrong word, when it takes our children until 6th grade. And even then, we expect most documents and information to be written at a 6th-grade reading level (it’s still not happening).
Unless you can get your AI out of the box and into an interactive world, it’s only going to be used as a simulacrum: spam interaction, sales calls, basic crap that saves corporations money on hiring humans to call people.
It’s only going to be a glorified software bundle that’s overpriced and does the same as what we already have.