When People Realize How Good The Latest Chinese Open Source Models Are (And Free), The GenAI Bubble Could Finally Pop
from the not-with-a-whimper-but-a-bang dept
Although the field of artificial intelligence (AI) goes back more than half a century, its latest incarnation — generative AI — is still very new: ChatGPT was launched just three years ago. During that time a wide variety of issues has been raised, ranging from concerns about AI's impact on copyright and on people's ability to learn or even think, to job losses, the flood of AI slop on the Internet, the environmental harms of massive data centers, and whether the creation of a super-intelligent AI will lead to the demise of humanity. Recently, a more mundane worry has emerged: that the current superheated generative AI market is a bubble about to pop. In the last few days, Google's CEO, Sundar Pichai, has admitted that there is some "irrationality" in the current AI boom, while the Bank of England has warned about the risk of a "sharp correction" in the value of major players in the sector.
One element that may not yet be factored into this situation is the rising sophistication of open source models from China. Back in April, BestNetTech wrote about how the release of a single model from the Chinese company DeepSeek had wiped a trillion dollars from US markets. Since then, DeepSeek has not been standing still. It has just launched its V3.2 model, and a review on ZDNet is impressed by the improvements:
the fact that a company — and one based in China, no less — has built an open-source model that can compete with the reasoning capabilities of some of the most advanced proprietary models currently on the market is a huge deal. It reiterates growing evidence that the “performance gap” between open-source and close-sourced models isn’t a fixed and unresolvable fact, but a technical discrepancy that can be bridged through creative approaches to pretraining, attention, and posttraining.
It is not just one open source Chinese model that is close to matching the best of the leading proprietary offerings. An article from NBC News notes that other freely downloadable Chinese models like Alibaba’s Qwen were also “within striking distance of America’s best.” Moreover, these are not merely theoretical options: they are already being put to use by AI startups in the US.
Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.
NBC News spoke to over 15 AI startup founders, machine-learning engineers, industry experts and investors, who said that while models from American companies continue to set the pace of progress at the frontier of AI capabilities, many Chinese systems are cheaper to access, more customizable and have become sufficiently capable for many uses over the past year.
As well as being free to download and completely configurable, these open source models from Chinese companies have another advantage over many of the better-known US products: they can be run locally without needing to pay any fees. This also means no data leaves the local system, which offers enhanced privacy and control over sensitive business data. However, as the NBC article notes, there are still some worries about using Chinese models:
In late September, the U.S. Center for AI Standards and Innovation released a report outlining risks from DeepSeek’s popular models, finding weakened safety protocols and increased pro-Chinese outputs compared to American closed-source models.
And the success of China’s open source models is prompting US efforts to catch up:
In July, the White House released an AI Action Plan that called for the federal government to “Encourage Open-Source and Open-Weight AI.”
In August, ChatGPT maker OpenAI released its first open-source model in five years. Announcing the model’s release, OpenAI cited the importance of American open-source models, writing that “broad access to these capable open-weights models created in the US helps expand democratic AI.”
And in late November, the Seattle-based Allen Institute released its newest open-source model called Olmo 3, designed to help users “build trustworthy features quickly, whether for research, education, or applications,” according to its launch announcement.
The open source approach to generative AI is evidently growing in importance, driven by enhanced capabilities, low price, customizability, reduced running costs and better privacy. The free availability of these open source and open weight models, whether from China or the US, is bound to call into question the underlying assumption of today’s generative AI companies that there will be a commensurate payback for the trillions of dollars they are currently investing. Maybe it will be the realization that today’s open source models are actually good enough for most applications that finally pops the AI bubble.
Follow me @glynmoody on Bluesky and Mastodon.
Filed Under: ai, allen institute, artificial intelligence, bank of england, bubble, china, customizability, genai, olmo, open source, open weight, openai, privacy, safety protocols, startups, sundar pichai, white house
Companies: alibaba, chatgpt, deepseek, google, nbc
Comments on “When People Realize How Good The Latest Chinese Open Source Models Are (And Free), The GenAI Bubble Could Finally Pop”
…
All that without noting all the problems with them.
Censorship limits data output. Lack of training data limits use cases.
The models that can actually rival the big boys require tens or hundreds of gigs of RAM.
Relying on China has the same issue as relying on companies. Their self interest can be baked into the model.
Finally, all open source models run into the same issue: they didn’t get to steal everyone’s private data to train on.
Re:
Another issue that piggybacks on this: can you trust the model, or does it have intentional or unintentional backdoors? Do we have full access to the training data, or is it only the weights that are open?
Any model where we don’t have access to the training set itself as well as the weights is suspect.
And China has a history of flooding the market with cheap alternatives, only for people to discover after they’ve become dependent on them that there are baked-in gotchas that intentionally favor the CCP viewpoint.
Re: Re:
The better question is “should you?”, because people are really good at trusting the untrustworthy. See, for example, the Gell-Mann amnesia effect, with regards to trusting the media.
Re: Re:
Sigh. Do we know what “open source” means?
Not a fan of China. Not a fan of “AI”. But just because you don’t understand the code doesn’t mean no one else does.
Re:
I’m not concerned with whether or not Chinese open source models are as good as closed models, commercially useful, or even safe to use.
If the existence of these models causes the AI bubble to pop, then that’s a good thing.
So what will these dozens of millions of Nvidia GPU cards be used for if anyone can run models on their computer or phone? They are not even very good at mining Bitcoin.
Data centers are supposed to consume as much electricity as the whole of Japan by 2030 — don’t tell me we’ll have to use this electricity for other useful things?!
That’s cool and all, but is it going to make more or less of an impact on the environment? Because if this model still requires the use of supermassive data centers, the fact that it’s open source will be rendered irrelevant.
Re:
It’s very likely worse, environmentally speaking, because datacenters will typically get a lot more work per watt out of hardware than distributed PCs do. The only way to reduce the environmental impact is to greatly reduce the amount of compute required. That doesn’t seem to be the direction anybody is going though.
Re: Re:
“The only way to reduce the environmental impact is to greatly reduce the amount of compute required. That doesn’t seem to be the direction anybody is going though.”
No, it’s not. That’s because doing that kind of work is exceedingly difficult: it takes knowledge and time and diligence. It’s far easier to just throw hardware at the problem, wreck the environment, overload the power grid, and burn VC money like there’s no tomorrow.
(I’ve done the former, including a computer vision system that learned how to recognize tools — screwdrivers, pliers, etc. — and instruct a robotic arm with a gripper how to position itself to pick them up. Ran in 2M (not a typo) of RAM. But it took months of careful work and “careful” isn’t in the vocabulary of any of these AI companies.)
Re: Re:
People could still run these models in data centers—such as those run by Amazon, who’ve already announced special machine instances for that purpose.
There’s not much reason to think that open models will use fewer resources than the proprietary ones. People trying to get by with smaller models might have a very minor benefit.
Re: Re:
TBF, the models are getting massively more efficient over time. The amount of compute for a given task is going down. The issue is that it just gets plowed back into doing bigger/more tasks.
Re:
Also, the problem isn’t that AI costs too much, it’s that people fucking don’t want it.
Re: Re:
Hey, speak for yourself! I want it.
I want it to die!
Re: Re:
Spending money on something that nobody fucking wants is, kind of by definition, spending “too much” money.
Re: Re: Re:
Well, the reality is, a few people DO want it… and they happen to be the people holding a lot of other people’s money that they get to spend as they see fit.
Re: Re: Re:2
So, it still costs too much, but someone else pays. And isn’t that the best kind of “costs too much”?
Re: Re:
Yeah. This is what people like Moody here, and Jarvis up in the most recent BestNetTech Podcast don’t seem to get. I read Jarvis’ article that the podcast episode is based on and like… no, I don’t know anybody who wants an AI that tells them about what restaurants around them have empanadas, and I don’t see how any of it would help journalists. And when the AI bubble pops, does Jarvis have an idea on how AI companies are gonna be helping to fund journalism through these API systems?
None of the people I know in art or in science or in IT want AI shoved down their throats as the new way to do things. Does BestNetTech think that people are gonna suddenly have a vibe shift and start loving all the AI slop and services shoveled into Windows 11 or their browsers or their phones?
Re: Re: Re:
I know a TON of people who use AI for exactly that. Lots of people use AI as a form of search engine, for better or for worse, and that includes things like “Hey, I’m looking for a good place to meet for lunch that is in this area, and has reservable seating for 6 and has a variety of vegetarian options” or whatever, and AI tools are getting pretty good at delivering that kind of thing.
But people are also asking it about news, and that’s why Jeff is concerned about quality news sources opting out of AI. Because then the results are going to be full of nonsense.
So, the fact that you maybe don’t think you know people using AI this way does not mean that a TON of people absolutely are using AI that way all the time.
Re: Re: Re:2
And those people are idiots who give us trump.
Re: Re: Re:3
That is one of the dumbest things ever written here. It’s a use case example of something that the tech is actually good at doing, providing a better result than existing tools and which is used widely by all kinds of people irrelevant of political association.
I’m sorry, but it’s fine to personally dislike AI tools. And it’s totally reasonable to distrust the nonsense marketing around AI and the people running AI companies.
But to insist anyone who finds value from the tools is an idiot is delusional.
Re: Re: Re:4
To insist there’s value to find in these tools is delusional. Moody’s paean is one of the dumbest things ever posted to this site.
Re: Re: Re:5
Millions of people get value out of these tools every day. Including me.
I know that there are plenty of stupid uses. But used well, they can be very helpful in assisting all sorts of things that couldn’t be done otherwise. The problem is that everyone is focused on the stupid chatbot model.
But honestly, if you think there’s no value in LLM technology, it’s most likely that you chose to stick your head in the sand and have never seen how people actually use these tools.
Re: Re: Re:2
I think that this is firmly in the “for worse” column. I can use Google Maps and a phone call or browsing a restaurant’s menu on a website to accomplish much the same thing, or ask a friend or coworker about what’s good to eat around here. And as a bonus, I’m not feeding into a system that’s causing RAM and other computer prices to skyrocket.
Re: Re: Re:3
Yes, and instead of the internet you can buy a newspaper.
But there’s a reason people like the web.
You honestly sound like one of those “back in my day, we didn’t have these new fangled automobiles and it was fine, so let’s go back to horses.”
Yes, AI is wildly overhyped. And yes it has problems and externalities.
But seriously, people pretending that it isn’t actually useful in some circumstances are either ignorant or stupid.
Re: Re: Re:4
I like the web too and use it every day. Using Google Maps to find a restaurant, as well as its phone number or the menu website for the restaurant to call them up and ask them about their options, or asking your coworkers, whether that be in person or through a group chat, is not like saying “Let’s go back to horses & buggies”.
I think that BestNetTech needs more articles on these externalities. The rising power and resource and hardware demands of data centers meant to feed AI is a major issue that’s causing distortions and price increases that regular people are being impacted by. The people building the Resonant Computing future are going to need hardware all their own, and if they get priced out of it because AI data centers keep gobbling up all the RAM, rare earths, copper, and more, will we get that Resonant Computing?
Re: Re: Re:3 one of the few actual uses
If it takes a little bit of AI for the search engine to answer the question of what restaurants have empanadas, that seems like a good use of it. Phoning a bunch of restaurants not only chews up their employee time if anyone answers, but it also chews up more of my time.
Yes, if the menus are on line in some sort of usable form, rather than as images, I could probably go through them. That still takes more time than having some sort of robot answer the question.
I am not looking for creativity here. I am trying to find lunch. It seems that some sort of primitive AI might actually help in this endeavor. Compare cooking lunch, which I would prefer not to leave to the AI.
Re: Re: Re:3
But these bubbles always pop, and you’ll be able to stock up on cheap RAM just as people stocked up on cheap Aeron chairs in 2001. We might have a useful oversupply of electricity, too, especially if people also give up on proof-of-work cryptocurrencies.
Re: Re: Re:2
With all due respect, and that’s a lot of respect, you’ve earned it – those people are fucking morons, at least in this area.
This is what we waste money, power, and network capacity on: frivolities with bottomless cost. I am so glad the privileged set can find their vegan empanadas. Thanks.
Re: Re:
That’s very clearly not true. Tons of people use AI daily, whether it’s ChatGPT, Gemini, CoPilot and so on.
What people generally don’t want is AI-generated culture, i.e. games, movies, images.
Re: Re: Re:
Whether they want to or not.
Re: Re: Re:2
Hell, Gemini too for that matter.
Two of your three examples are cases where companies with a large existing userbase have added AI to their products, turned on by default.
That’s not evidence that people want genAI, dude. It’s evidence that the only way Big Tech can get most people to use it is to force them to.
Re: Re: Re:3
You can add Mozilla to that list, considering how it added AI features to Firefox and had them all turned on by default.
Re: Re: Re:4
Would you happen to have any more details on that? I checked just a bit ago and I only really noticed one AI feature(that I disabled), if there’s more in there I’d love to know so I can disable them too.
Re: Re: Re:5
Here’s a guide from last month that names a few:
How to Disable All the AI Features in Firefox Web Browser
More significantly, their new CEO announced plans to turn Firefox into an “AI browser” yesterday.
Anyway yeah I’ve already switched my main devices to Waterfox, Librewolf, or Fennec, but I’m looking into switching over the remaining devices that I haven’t.
Re: Re: Re:6
Much appreciated, and if the CEO’s that deranged looks like I should start looking into alternatives as well.
Re: Re: Re:4
Firefox is in a separate category because it’s not a widely-used product like Windows or Google.
Firefox’s problem isn’t that it’s trying to justify AI investment by forcing it on existing customers. Firefox’s problem is that management thinks its userbase is small because it doesn’t imitate Chrome hard enough, instead of understanding that being different from Chrome is the only reason it has any userbase at all.
Re: Re: Re:5
Mozilla’s inability to realize this very basic truth has been the source of almost every single misstep they make. It’s so frustrating. They need to mulch their leadership.
Re: Re: Re:3
Oh hey, one more for now:
Microsoft Scales Back AI Goals Because Almost Nobody Is Using Copilot
Re: Re: Re:
Care to name a product or service that added AI features due to widespread consumer demand?
Re: Re:
Depends on the people really.
The general public? ‘Stop shoveling that crap down our throats!’
Execs and other tech-bros? ‘The more magic AI the better!’
This is another plea not to collaborate in openwashing these models. They simply are not open source.
You do not, under any circumstance, “gotta hand it to them”.
They’re just going to announce the latest point-something iteration of Western models in response to Chinese performance gains, and the tiny boosts in ‘performance’ will somehow use even more power and water than before. The bubble will keep on inflating until the VC money well runs dry or Trump has a freakout at Taiwan because he realises the people there aren’t white and tries to give them the Ukraine treatment.
'A bunch of 'AI is the digital messiah' companies crashed? Oh noes. Anyway...'
Recently, a more mundane worry is that the current superheated generative AI market is a bubble about to pop. In the last few days, Google’s CEO, Sundar Pichai, has admitted that there is some “irrationality” in the current AI boom, while the Bank of England has warned about the risk of a “sharp correction” in the value of major players in the sector.
Strange way to spell ‘silver lining’…
The sooner all the AI hype dies, and companies and the general public stop treating AI like the second coming of digital Jesus come to solve all of the world’s ills and stop crowbarring it into everything, the better.
Sure, it has its uses, but people so bloody often treat it as though it were literal magic that knows all and can do even more. The sooner both businesses and the public chill the hell out and treat it as what it is, just another tool with its uses and its problems, the better for both the public and the companies using it.
Re:
The infuriating thing is that generative AI does have some practical uses (not fancy autocomplete or shitty image generation) and when the bubble bursts nobody’s going to distinguish between the valuable uses and the stupid ones, nobody’s going to want to invest in anything ML-shaped.
Anyone here who was around when Mozilla Firefox first came out? Did Microsoft panic when users began fleeing Internet Explorer (or as some people called it, Microshaft Internet Exploder) in droves?