Why Centralized AI Is Not Our Inevitable Future

from the response-to-the-gentle-singularity dept

Sam Altman’s vision of a “gentle singularity,” in which AI gradually transforms society, presents an alluring future of abundance and human flourishing. His optimism about AI’s potential to solve humanity’s greatest challenges is compelling, and his call for thoughtful deployment resonates. Altman’s essay focuses primarily on the research and development side of AI, painting an inspiring picture of technological progress. However, as CEO of OpenAI—whose ChatGPT has become the dominant consumer interface for AI—Altman leaves a crucial dimension out of his analysis: how this technology will actually be distributed and controlled. Recent internal communications suggest OpenAI envisions ChatGPT becoming a “super-assistant,” effectively positioning itself as the primary gateway through which humanity experiences AI. This implicit assumption that the transformation will be orchestrated by a handful of centralized AI providers is an important blind spot, one that threatens the very human agency he seeks to champion.

The Seductive Danger of the Benevolent Dictator

Altman’s vision inadvertently risks creating a perfect digital dictator—an omniscient AI system that knows us better than we know ourselves, anticipating our needs and steering society toward prosperity. But as history teaches us, there is no such thing as a good dictator. The problem isn’t the dictator’s intentions but the structure itself: a system with no room for error, no mechanism for course correction, and no escape valve when things go wrong.

When OpenAI builds memories into ChatGPT that users can’t fully audit or control, when it creates dossiers about users while hiding what it knows, it risks building systems that work on us rather than for us. A dossier is not for you; it is about you. The distinction matters profoundly in an era where context is power, and whoever controls your context controls you.

The Aggregator’s Dilemma

OpenAI, like any company operating at scale, faces structural pressures inherent to the aggregator model. The business model demands engagement maximization, which inevitably leads to what we might call “sycophantic AI”—systems that tell us what we want to hear rather than what we need to hear. When your AI assistant is funded by keeping you engaged rather than helping you flourish, whose interests does it really serve?

The trajectory is predictable: first come the memories and personalization, then the subtle steering toward sponsored content, then the imperceptible nudges toward behaviors that benefit the platform. We’ve seen this movie before with social media—many of the same executives now leading AI companies worked at social media companies that perfected the engagement-maximizing playbook that left society anxious, polarized, and addicted. Why would we expect a different outcome when applying the same playbook to even more powerful technology? This isn’t a question of intent—the people at OpenAI genuinely want to build beneficial AI. But structural incentives have their own gravity.

To be clear, the centralization of AI models themselves may be inevitable—the capital requirements and economies of scale may make that a practical necessity. The danger lies in bundling those models with centralized storage of our personal contexts and memories, creating vertical integration that locks users into a single provider’s ecosystem.

The Alternative: Intentional Technology

Instead of racing to build the one AI to rule them all, we should be building intentional technology—systems genuinely aligned with human agency and aspirations rather than corporate KPIs. This means:

Your AI Should Work for You, Not Someone Else: Every person deserves a Private Intelligence that works only for them, with no ulterior motives or conflicts of interest. Your AI should be like having your own personal cloud—as private as running software on your own device, but with the convenience of the cloud. This doesn’t mean everyone needs their own AI model—we can share the computational infrastructure while keeping our personal contexts sovereign and portable.

Open Ecosystems, Not Walled Gardens: The future of AI shouldn’t be determined by whoever wins the race to centralize the most data and compute. We need open, composable systems where thousands of developers and millions of users can contribute and innovate, not closed platforms where innovation requires permission from the gatekeeper.

Data Sovereignty: You should own your context, your memories, your digital soul. The ability to export isn’t enough—true ownership means no one else can see your data, no algorithm can analyze it without your permission, and you can move freely between services without losing your history.

The Path Forward

Altman is right that AI will transform society, but wrong about how that transformation should unfold. The choice isn’t between his “gentle singularity” and Luddite resistance. It’s between hyper-centralized systems that inevitably tend toward extraction and manipulation, and distributed systems that enhance human agency and preserve choice.

The real question isn’t whether AI will change everything—it’s whether we’ll build AI that helps us become more authentically ourselves, or AI that molds us into more profitable users. The gentle singularity Altman envisions might start gently, but any singularity that revolves around a single company contains within it the seeds of tyranny.

We don’t need Big Tech’s vision of AI. We need Better Tech—technology that respects human agency, preserves privacy, enables creativity, and distributes power rather than concentrating it. The future of AI should be as distributed as human aspirations, as diverse as human needs, and as accountable as any tool that touches the most intimate parts of our lives must be.

The singularity, if it comes, should not be monotone. It should be exuberant, creative, and irreducibly plural—billions of experiments in human flourishing, not a single experiment in species-wide management. That’s the future worth building.

Alex Komoroske is the CEO and co-founder of Common Tools. He was previously Head of Corporate Strategy at Stripe and a Director of Product Management at Google.

Companies: openai


Comments on “Why Centralized AI Is Not Our Inevitable Future”

Anonymous Coward says:

The implicit danger of AI is that it denigrates innovation. “Thinking outside the box” will yield solutions that are unknown to AI and, therefore, suspect. We’ve already seen a variant of this “trust” problem. For example, AI-based systems are used to predict criminal recidivism with no human review or opportunity to rebut (or to examine the queries and intermediate results).

Jonathan says:

I don’t think anything has changed, and the planned road map to actually intelligent systems still seems to be:
1. Make it bigger. No, bigger. No…
2. Eventually, a miracle will happen.

If LLMs will lead to AGI, do we know how? Do we even know the shape of the problem? We didn’t when I was getting my degree, and I think if we did we’d have a more purposeful approach.

I think calling a single entity having that much influence a “blind spot” is a little disingenuous when “business strategy” is more like it. I’ve seen technologies transform almost every aspect of my life over the last 30+ years, and I’ve seen how centralized the control of them has become, and how wildly profitable and influential that control is.

People had all kinds of utopian visions of the potential for the internet back when it was getting started, but this is 2025, we know how this goes. We had to invent the Big Tech monopoly from scratch and still ended up here, now everyone is just betting on when they can do it again.

LLM companies have gotten this far, and will go however far they do, because investors think these companies will be able to deliver the thing you’re railing against. Sam Altman’s promises and your warnings both push in the same direction in this regard.

Candescence (profile) says:

I think you may be a little too optimistic about Sam Altman, his competence, and his intentions. The man is a grifter first and foremost; the Orb alone should’ve demonstrated that. And I’m not sure even he believes OpenAI is capable of making any big advances in machine learning tech anytime soon, even as he tries desperately to hype up fans and investors.

But first and foremost, he’s interested in squeezing as much cash from the “AI” bubble as possible before it bursts. I have yet to be convinced it isn’t a bubble, frankly, even if it’s more useful than crypto and has genuine applications in certain areas.

Golda Velez (profile) says:

Intention is all you need

Yes – the promise of these critters is to really comprehend and empower human intention – which requires context. Shared team and community context is also important to help us work together, and I don’t see that done well yet (except the $25/seat, but that is not inclusive for communities).

Thank you for posting this. Context management, crystallization and summaries, and interfacing with normal formal tools on behalf of the user’s intention: this is the current front line.
