
When People Realize How Good The Latest Chinese Open Source Models Are (And Free), The GenAI Bubble Could Finally Pop

from the not-with-a-whimper-but-a-bang dept

Although the field of artificial intelligence (AI) goes back more than half a century, its latest incarnation, generative AI, is still very new: ChatGPT was launched just three years ago. During that time a wide variety of concerns have been raised: the impact of AI on copyright and on people’s ability to learn or even think; job losses; the flood of AI slop on the Internet; the environmental harms of massive data centers; and whether the creation of a super-intelligent AI will lead to the demise of humanity. Recently, a more mundane worry has emerged: that the current superheated generative AI market is a bubble about to pop. In the last few days, Google’s CEO, Sundar Pichai, has admitted that there is some “irrationality” in the current AI boom, while the Bank of England has warned about the risk of a “sharp correction” in the value of major players in the sector.

One element that may not yet be factored into this situation is the rising sophistication of open source models from China. Back in April, BestNetTech wrote about how the release of a single model from the Chinese company DeepSeek had wiped a trillion dollars from US markets. Since then, DeepSeek has not been standing still. It has just launched its V3.2 model, and a review on ZDNet is impressed by the improvements:

the fact that a company — and one based in China, no less — has built an open-source model that can compete with the reasoning capabilities of some of the most advanced proprietary models currently on the market is a huge deal. It reiterates growing evidence that the “performance gap” between open-source and close-sourced models isn’t a fixed and unresolvable fact, but a technical discrepancy that can be bridged through creative approaches to pretraining, attention, and posttraining.

It is not just one open source Chinese model that is close to matching the best of the leading proprietary offerings. An article from NBC News notes that other freely downloadable Chinese models like Alibaba’s Qwen were also “within striking distance of America’s best.” Moreover, these are not merely theoretical options: they are already being put to use by AI startups in the US.

Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.

NBC News spoke to over 15 AI startup founders, machine-learning engineers, industry experts and investors, who said that while models from American companies continue to set the pace of progress at the frontier of AI capabilities, many Chinese systems are cheaper to access, more customizable and have become sufficiently capable for many uses over the past year.

As well as being free to download and completely configurable, these open source models from Chinese companies have another advantage over many of the better-known US products: they can be run locally without needing to pay any usage fees (a short example of what that looks like appears below). This also means no data leaves the local system, which offers enhanced privacy and control over sensitive business data. However, as the NBC article notes, there are still some worries about using Chinese models:

In late September, the U.S. Center for AI Standards and Innovation released a report outlining risks from DeepSeek’s popular models, finding weakened safety protocols and increased pro-Chinese outputs compared to American closed-source models.
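
Still, the ability to run these models locally is easy to make concrete. Below is a minimal sketch, not the only way to do it, assuming a Python environment with the Hugging Face transformers library (plus torch and accelerate) installed and enough memory for a 7-billion-parameter model, roughly 15 GB at 16-bit precision; the model name is just one illustrative freely downloadable open-weight checkpoint, and any other could be swapped in.

    # Minimal local-inference sketch (assumptions: Python with transformers,
    # torch and accelerate installed; ~15 GB of free memory for a 7B model).
    # Once the weights are downloaded, nothing here sends data to a remote
    # service, and there are no per-token fees.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",   # illustrative open-weight model
        device_map="auto",                  # local GPU if available, else CPU
    )

    messages = [{"role": "user",
                 "content": "Summarize the trade-offs of running an LLM locally."}]

    # Generation happens entirely on local hardware.
    reply = chat(messages, max_new_tokens=200)
    print(reply[0]["generated_text"][-1]["content"])

The same downloaded weights can also be served through local runtimes such as Ollama or llama.cpp; either way, the marginal cost of a query is just local compute.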

And the success of China’s open source models is prompting US efforts to catch up:

In July, the White House released an AI Action Plan that called for the federal government to “Encourage Open-Source and Open-Weight AI.”

In August, ChatGPT maker OpenAI released its first open-source model in five years. Announcing the model’s release, OpenAI cited the importance of American open-source models, writing that “broad access to these capable open-weights models created in the US helps expand democratic AI.”

And in late November, the Seattle-based Allen Institute released its newest open-source model called Olmo 3, designed to help users “build trustworthy features quickly, whether for research, education, or applications,” according to its launch announcement.

The open source approach to generative AI is evidently growing in importance, driven by enhanced capabilities, low price, customizability, reduced running costs and better privacy. The free availability of these open source and open weight models, whether from China or the US, is bound to call into question the underlying assumption of today’s generative AI companies that there will be a commensurate payback for the trillions of dollars they are currently investing. Maybe it will be the realization that today’s open source models are actually good enough for most applications that finally pops the AI bubble.

Follow me @glynmoody on Bluesky and Mastodon.

Companies: alibaba, chatgpt, deepseek, google, nbc


Comments on “When People Realize How Good The Latest Chinese Open Source Models Are (And Free), The GenAI Bubble Could Finally Pop”

46 Comments
Anonymous Coward says:

All that without noting all the problems with them.

Censorship limits data output. Lack of training data limits use cases.

The models that can actually rival the big boys require tens or hundreds of gigs of ram.

Relying on China has the same issue as relying on companies. Their self interest can be baked into the model.

Finally all open source models run into the same issue. They didn’t get to steal everyone’s private data to train on.
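
To put the memory figures mentioned above in rough context, here is a back-of-envelope sketch; the parameter counts and bit-widths are illustrative assumptions, and a real deployment also needs memory for activations and the KV cache on top of the weights.

    # Back-of-envelope estimate of the memory needed just to hold a model's
    # weights (illustrative sizes; ignores activations and the KV cache).
    def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for params in (7, 70, 671):        # small, large, and DeepSeek-V3-class sizes
        for bits in (16, 4):           # 16-bit precision vs. 4-bit quantized
            print(f"{params}B @ {bits}-bit ≈ {weight_memory_gb(params, bits):.0f} GB")

At 16-bit precision a 70-billion-parameter model already needs on the order of 140 GB for its weights alone, which is why aggressive quantization matters so much for running models locally.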

Anonymous Coward says:

Re:

Another issue that piggybacks on this: can you trust the model, or does it have intentional or unintentional backdoors? Do we have full access to the training data, or is it only the weights that are open?

Any model where we don’t have access to the training set itself as well as the weights is suspect.

And China has a history of flooding the market with cheap alternatives, only for people to discover after they’ve become dependent on them that there are baked-in gotchas that intentionally favor the CCP viewpoint.

Anonymous Coward says:

Re:

It’s very likely worse, environmentally speaking, because datacenters will typically get a lot more work per watt out of hardware than distributed PCs do. The only way to reduce the environmental impact is to greatly reduce the amount of compute required. That doesn’t seem to be the direction anybody is going though.

Anonymous Coward says:

Re: Re:

“The only way to reduce the environmental impact is to greatly reduce the amount of compute required. That doesn’t seem to be the direction anybody is going though.”

No, it’s not. That’s because doing that kind of work is exceedingly difficult: it takes knowledge and time and diligence. It’s far easier to just throw hardware at the problem, wreck the environment, overload the power grid, and burn VC money like there’s no tomorrow.

(I’ve done the former, including a computer vision system that learned how to recognize tools — screwdrivers, pliers, etc. — and instruct a robotic arm with a gripper how to position itself to pick them up. Ran in 2M (not a typo) of RAM. But it took months of careful work and “careful” isn’t in the vocabulary of any of these AI companies.)

Anonymous Coward says:

Re: Re:

It’s very likely worse, environmentally speaking, because datacenters will typically get a lot more work per watt out of hardware than distributed PCs do.

People could still run these models in data centers—such as those run by Amazon, who’ve already announced special machine instances for that purpose.

There’s not much reason to think that open models will use fewer resources than the proprietary ones. People trying to get by with smaller models might have a very minor benefit.

Arianity (profile) says:

Re: Re:

The only way to reduce the environmental impact is to greatly reduce the amount of compute required. That doesn’t seem to be the direction anybody is going though.

TBF, the models are getting massively more efficient over time. The amount of compute for a given task is going down. The issue is that it just gets plowed back into doing bigger/more tasks.

Anonymous Coward says:

Re: Re:

Yeah. This is what people like Moody here, and Jarvis up in the most recent BestNetTech Podcast don’t seem to get. I read Jarvis’ article that the podcast episode is based on and like… no, I don’t know anybody who wants an AI that tells them about what restaurants around them have empanadas, and I don’t see how any of it would help journalists. And when the AI bubble pops, does Jarvis have an idea on how AI companies are gonna be helping to fund journalism through these API systems?

None of the people I know in art or in science or in IT want AI shoved down their throats as the new way to do things. Does BestNetTech think that people are gonna suddenly have a vibe shift and start loving all the AI slop and services shoveled into Windows 11 or their browsers or their phones?

Mike Masnick (profile) says:

Re: Re: Re:

I read Jarvis’ article that the podcast episode is based on and like… no, I don’t know anybody who wants an AI that tells them about what restaurants around them have empanadas, and I don’t see how any of it would help journalists.

I know a TON of people who use AI for exactly that. Lots of people use AI as a form of search engine, for better or for worse, and that includes things like “Hey, I’m looking for a good place to meet for lunch that is in this area, and has reservable seating for 6 and has a variety of vegetarian options” or whatever, and AI tools are getting pretty good at delivering that kind of thing.

But people are also asking it about news, and that’s why Jeff is concerned about quality news sources opting out of AI. Because then the results are going to be full of nonsense.

So, the fact that you maybe don’t think you know people using AI this way does not mean that a TON of people absolutely are using AI that way all the time.

Mike Masnick (profile) says:

Re: Re: Re:3

That is one of the dumbest things ever written here. It’s a use case example of something that the tech is actually good at doing, one that provides a better result than existing tools and is used widely by all kinds of people regardless of political association.

I’m sorry, but it’s fine to personally dislike AI tools. And it’s totally reasonable to distrust the nonsense marketing around AI and the people running AI companies.

But to insist anyone who finds value from the tools is an idiot is delusional.

Mike Masnick (profile) says:

Re: Re: Re:5

To insist there’s value to find in these tools is delusional.

Millions of people get value out of these tools every day. Including me.

I know that there are plenty of stupid uses. But used well, they can be very helpful in assisting all sorts of things that couldn’t be done otherwise. The problem is that everyone is focused on the stupid chatbot model.

But honestly, if you think there’s no value in LLM technology, it’s most likely that you chose to stick your head in the sand and have never seen how people actually use these tools.

Anonymous Coward says:

Re: Re: Re:2

I know a TON of people who use AI for exactly that. Lots of people use AI as a form of search engine, for better or for worse, and that includes things like “Hey, I’m looking for a good place to meet for lunch that is in this area, and has reservable seating for 6 and has a variety of vegetarian options” or whatever, and AI tools are getting pretty good at delivering that kind of thing.

I think that this is firmly in the “for worse” column. I can use Google Maps and a phone call or browsing a restaurant’s menu on a website to accomplish much the same thing, or ask a friend or coworker about what’s good to eat around here. And as a bonus, I’m not feeding into a system that’s causing RAM and other computer prices to skyrocket.

Mike Masnick (profile) says:

Re: Re: Re:3

Yes, and instead of the internet you can buy a newspaper.

But there’s a reason people like the web.

You honestly sound like one of those “back in my day, we didn’t have these new fangled automobiles and it was fine, so let’s go back to horses.”

Yes, AI is wildly overhyped. And yes it has problems and externalities.

But seriously, people pretending that it isn’t actually useful in some circumstances are either ignorant or stupid.

Anonymous Coward says:

Re: Re: Re:4

I like the web too and use it every day. Using Google Maps to find a restaurant (along with its phone number or its menu page, so you can call and ask about options), or asking your coworkers, whether in person or through a group chat, is not like saying “Let’s go back to horses & buggies”.

Yes, AI is wildly overhyped. And yes it has problems and externalities.

I think that BestNetTech needs more articles on these externalities. The rising power, resource, and hardware demands of data centers meant to feed AI are a major issue, causing distortions and price increases that regular people are being impacted by. The people building the Resonant Computing future are going to need hardware all their own, and if they get priced out of it because AI data centers keep gobbling up all the RAM, rare earths, copper, and more, will we get that Resonant Computing?

Tanner Andrews (profile) says:

Re: Re: Re:3 one of the few actual uses

I think that this is firmly in the “for worse” column. I can use Google Maps and a phone call or browsing a restaurant’s menu on a website to accomplish much the same thing

If it takes a little bit of AI for the search engine to answer the question of what restaurants have empanadas, that seems like a good use of it. Phoning a bunch of restaurants not only chews up their employee time if anyone answers, but it also chews up more of my time.

Yes, if the menus are online in some sort of usable form, rather than as images, I could probably go through them. That still takes more time than having some sort of robot answer the question.

I am not looking for creativity here. I am trying to find lunch. It seems that some sort of primitive AI might actually help in this endeavor. Compare cooking lunch, which I would prefer not to leave to the AI.

Thad (profile) says:

Re: Re: Re:2

Hell, Gemini too for that matter.

Two of your three examples are cases where companies with a large existing userbase have added AI to their products, turned on by default.

That’s not evidence that people want genAI, dude. It’s evidence that the only way Big Tech can get most people to use it is to force them to.

Thad (profile) says:

Re: Re: Re:5

Here’s a guide from last month that names a few:

How to Disable All the AI Features in Firefox Web Browser

More significantly, their new CEO announced plans to turn Firefox into an “AI browser” yesterday.

Anyway yeah I’ve already switched my main devices to Waterfox, Librewolf, or Fennec, but I’m looking into switching over the remaining devices that I haven’t.

Thad (profile) says:

Re: Re: Re:4

Firefox is in a separate category because it’s not a widely-used product like Windows or Google.

Firefox’s problem isn’t that it’s trying to justify AI investment by forcing it on existing customers. Firefox’s problem is that management thinks its userbase is small because it doesn’t imitate Chrome hard enough, instead of understanding that being different from Chrome is the only reason it has any userbase at all.

Bloof (profile) says:

They’re just going to announce the latest point-something iteration of Western models in response to Chinese performance gains, and the tiny boosts in ‘performance’ will somehow use even more power and water than before. The bubble will keep on inflating until the VC money well runs dry or Trump has a freakout at Taiwan because he realises the people there aren’t white and tries to give them the Ukraine treatment.

That One Guy (profile) says:

'A bunch of 'AI is the digital messiah' companies crashed? Oh noes. Anyway...'

Recently, a more mundane worry is that the current superheated generative AI market is a bubble about to pop. In the last few days, Google’s CEO, Sundar Pichai, has admitted that there is some “irrationality” in the current AI boom, while the Bank of England has warned about the risk of a “sharp correction” in the value of major players in the sector.

Strange way to spell ‘silver lining’…

The sooner all the AI hype dies, and companies and the general public stop treating AI like the second coming of digital Jesus come to solve all of the world’s ills and stop crowbarring it into everything, the better.

Sure, it has its uses, but so bloody often people treat it as though it were literal magic that knows all and can do even more. The sooner both businesses and the public chill the hell out and treat it as what it is, just another tool with its uses and problems, the better for both the public and the companies using it.
