
Stanford Study: ‘AI’ Generated ‘Workslop’ Actually Making Productivity Worse

from the I'm-sorry-I-can't-do-that,-Dave dept

Automation undeniably has some useful applications. But the folks hyping modern “AI” have not only dramatically overstated its capabilities; many of them view these tools primarily as a way to lazily cut corners or undermine labor. There’s also a weird innovation cult that has arisen around managers and LLM use, resulting in the mandatory adoption of tools that may not be helping anybody, just because.

The result is often a hot mess, as we’ve seen in journalism. The AI hype simply doesn’t match the technology’s actual performance, and a lot of the underlying financial numbers being tossed around aren’t based in reality, which is very likely going to result in a massive bubble deflation as the hype cycle and reality collide (Gartner calls this the “trough of disillusionment,” and expects it to arrive next year).

One recent study out of the MIT Media Lab found that 95% of organizations see no measurable return on their investment in AI (yet). One of the many reasons for this, as noted in a different recent Stanford survey (hat tip: 404 Media), is that the mass influx of AI “workslop” requires colleagues to spend additional time trying to decipher the genuine meaning and intent buried in a sharp spike of lazy, automated garbage.

The survey defines workslop as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” Somewhat reflective of America’s broader obsession with artifice. And the survey found that as the use of ChatGPT and other tools has risen in the workplace, it has created a lot of garbage that takes time to decipher:

“When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues.”

Confusing or inaccurate emails that require time to decipher. Lazy or incorrect research that requires endless additional meetings to correct. Writing so full of errors that supervisors have to edit or redo it themselves:

“A director in retail said: ‘I had to waste more time following up on the information and checking it with my own research. I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself.’”

In this way, a technology deemed a massive time saver winds up creating all manner of additional downstream productivity costs. This is made worse by the fact that a lot of these technologies are being rushed into mass adoption in business and academia before they’re fully cooked. And by the fact the real-world capabilities of the products are being wildly overstated by both companies and a lazy media.

This isn’t inherently the fault of the AI; it’s the fault of the reckless, greedy, and often incompetent people high in the extraction class dictating the technology’s implementation, and of the people so desperate to be innovation-smacked that they’re simply not thinking things through. “AI” will get better; though any claim of HAL-9000 type sentience will remain mythology for the foreseeable future.

Obviously measuring the impact of this workplace workslop is an imprecise science, but the researchers at the Stanford Social Media Lab try:

“Each incidence of workslop carries real costs for companies. Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants’ estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41%), this yields over $9 million per year in lost productivity.”
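
The study’s headline figure is easy to reproduce. Here’s a minimal sketch of that back-of-the-envelope math, assuming (as the quote implies) that the $186 monthly “invisible tax” applies to each of the 41% of workers who report encountering workslop:

```python
# Reproduces the Stanford study's back-of-the-envelope estimate.
# Assumption (ours, not the study's exact methodology): the $186/month
# "invisible tax" applies to every affected worker.

workers = 10_000        # organization size used in the study
prevalence = 0.41       # reported prevalence of workslop
monthly_tax = 186       # dollars lost per affected worker per month

affected = workers * prevalence               # 4,100 workers
annual_cost = affected * monthly_tax * 12     # dollars per year

print(f"{affected:,.0f} affected workers")
print(f"${annual_cost:,.0f} per year")        # -> $9,151,200, i.e. "over $9 million"
```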

The workplace isn’t the only place where the rushed application of a broadly misrepresented and painfully under-cooked technology is making unproductive waves. When media outlets rushed to adopt AI for journalism and headlines (like at CNET), they, too, found that the human editorial cost of correcting all the plagiarism, false claims, and errors didn’t make the value equation worth their time. Apple found that LLMs couldn’t even summarize basic headlines with any accuracy.

Elsewhere in media you have folks building giant (badly) automated aggregation and bullshit machines, devoid of any ethical guardrails, in a bid to hoover up ad engagement. That’s not only repurposing the work of real journalists, it’s redirecting an already dwindling pool of ad revenue away from their work. And it’s undermining any sort of ethical quest for real, informed consensus in the authoritarian age.

This is all before you even get to the environmental and energy costs of AI slop.

Some of this is just the ordinary growing pains of a new technology. But a ton of it is the direct result of poor management, bad institutional leadership, irresponsible tech journalism, and intentional product misrepresentation. And next year is likely going to be a major reckoning and inflection point as markets (and people in the real world) finally begin to separate fact from fiction.



Comments on “Stanford Study: ‘AI’ Generated ‘Workslop’ Actually Making Productivity Worse”

Anonymous Coward says:

This confirms many things about LLMs. They’re nifty tools, but:
* They spread truth and garbage with the same confidence,
* Work as black boxes with many unverifiable abstract layers,
* Are extremely expensive to train (and the more they “think,” the more funds they burn),
* Don’t make money (around $60B in revenue for more than $500B invested in 2025),
* And are mostly developed by only a few companies that have scammers as CEOs.

AlmostAnonymous says:

Experience this regularly

Ask someone a technical question, get an obviously AI-generated response in reply, all too often either completely wrong or wrong in critical ways. AI has basically become “Let Me Google That For You,” which is even more ironic now that Google includes AI “summaries” for almost every query, and those are also almost always incorrect in some way.

db says:

Just saw my first case of this

I work in a company that does Agile and one day I noticed this extra junk in our tickets that added no value to the job. It turns out the scrum leader thought it would be a good idea to add this in so the tickets would look more impressive. I pushed back hard against this and also posted a link to that Stanford article in our team chat and made it clear that we should not allow this kind of workslop.
Fortunately, a few days later I noticed that the workslop started disappearing from the tickets.

Anonymous Coward says:

The survey defines workslop as “… content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”

So it has enough buzzwords to make it sound intelligent, but it’s really nothing more than a bunch of disconnected words. Sounds like every corporate press release from the last 15+ years. No wonder c-suite types fall for that garbage.

Anonymous Coward says:

I think it’s worse than that.

I’ve seen two issues.

  1. Workslop, but worse than described: people telling me how to do my own area of expertise based on what AI gave them. Creating extra work by putting in requests based on what AI told them they needed to do, which then have to be talked over to figure out what they actually want. People trying to solve problems outside their job or role, with no understanding, using AI garbage.
  2. Instead of attempting to solve the absolute basics (simple, repetitive tasks), I see other software engineers trying to use AI to solve the hard and difficult tasks, burning money without getting much useful code generated.
