Republicans are currently trying to force through a massive and cruel new legislation package that will impose historic cuts in public services to the benefit of the nation’s richest assholes. The bill will add $3.8 trillion to the deficit over a decade and includes a long list of major handouts to the wealthiest individuals and biggest corporations.
One key part of the bill is a proposal to ban state oversight of AI for the next decade. This is part of the so-far successful GOP effort to destroy all federal consumer protection, corporate oversight, environmental protections, and public safety oversight. Unfortunately the U.S. press hasn’t done a very good job illustrating what this means for everything from public health to national security.
The rich assholes and corporations pushing for this have a problem. If you kill federal consumer protection, states may rush in and fill the void. You saw this happen in areas like net neutrality and privacy. Courts have repeatedly ruled that if the federal government abdicates its responsibility for things like consumer protection, it can’t then turn around and tell states what they can do.
So to prevent states from doing basic corporate oversight the GOP has had to get creative.
“States that refuse to impose a moratorium will not get those dollars. Amba Kak, co-executive director of AI Now Institute, an independent research institute, said the change could leave states in an uncomfortable dilemma, choosing between broadband dollars and the power to protect their constituents from AI harm.
“I can imagine that for lawmakers, Republican or Democrat, whose districts rely on BEAD funding for broadband access to their rural communities, it’s really a strange bargain,” Kak said.”
Granted anything done through reconciliation can be undone through reconciliation, so a “ten year ban” isn’t written in stone. And there’s some indication that the idea’s architect, Ted Cruz, is struggling to gain full Republican support for the ploy as he tries to thread the needle. With any luck that may result in the proposal being watered down and/or killed.
Still, it’s stupid and harmful and opens the door to a lot of potential problems.
As we noted previously, these broadband funds had already been awarded. States had already spent years carefully crafting their fiber investment plans on the basis of awarded funds. Now, if they attempt oversight of an AI industry that’s shown itself so far to be amoral and reckless, they risk harming their own communities by leaving them stuck without broadband access.
Unlike many past U.S. broadband subsidy programs, a lot of thought was actually put into this infrastructure bill program (BEAD, or the Broadband Equity, Access, and Deployment Program). It’s a major reason it’s taken so long. They tried to accurately map broadband access. Many states tried to ensure that a lot of money went to popular community-owned alternatives, and not just giant telecoms. It took years of collaboration between states, feds, and local communities to jointly develop these plans.
But there are also several layers of irony for long-time BestNetTech readers. The GOP’s plan is harming their longstanding allies in “big telecom” (who risk losing billions in subsidies) to the benefit of their supposed ideological enemies in “big tech.” They’re also likely delaying the implementation of a broadband grant program they spent most of election season whining about taking too long.
There are still a lot of moving parts. Again, several terrible aspects of the bill violate the law and Senate procedural norms and may be jettisoned. Others, like the plans to sell 250 million acres of public land, are getting no shortage of bipartisan blow-back. There are still chances for the bill to get better or much worse; but even any sort of “best” case scenario will be a historically corrupt (and historically deadly) piece of gargantuan shit that utterly fails to serve the public interest.
We’ve noted how Republicans are busy screwing up the infrastructure bill’s $42.5 billion BEAD broadband grant program. After performatively whining that the program wasn’t moving quickly enough for their liking during the election season, the GOP announced it would be significantly slowing fund disbursement just to make life harder on poor people and to throw billions in new subsidies at Elon Musk.
To be very clear: this taxpayer funding had already been awarded to states years ago. Several states were just on the cusp of deploying next-generation, affordable fiber when Republicans decided to “fix” the program to the benefit of their billionaire benefactor.
Now Republicans are looking to cause even greater delays and legal battles by threatening to withhold billions in broadband grants from any states that try to engage in oversight of the “AI” industry.
The House had already approved a budget bill that attempted to ban state AI regulation for 10 years. Now Texas Senator Ted Cruz has introduced budget reconciliation text in the Senate that would prevent states from getting their already-allotted broadband grant funds if they attempt to impose any oversight or regulation of automation.
Despite a lot of whining, the federal U.S. approach to “regulating AI” so far has effectively consisted of zero oversight whatsoever. You’ll notice this still somehow isn’t enough for many tech giants or Marc Andreessen types; they want a blanket ban that effectively pre-empts the possibility of any sort of oversight, privacy, or consumer safety provisions that might protect the public from the whims of gentlemen like Andreessen, who have proven to have abysmal judgement and little to no functional ethics.
Between awful Supreme Court rulings, problematic executive orders, and regulatory capture, the Trump administration has effectively destroyed federal corporate oversight and consumer protection (something that still oddly isn’t getting enough attention in press or policy circles). That leaves states as the last refuge of any sort of compensatory oversight, which is why corporations — via the GOP — are now taking aim at state power.
Meanwhile this BEAD program was already facing up to two years of additional, unnecessary delays due to the GOP’s Elon Musk cronyism. Trying to bully and extort states into going easy on tech companies by stealing already allotted BEAD funding is inevitably going to cause endless new legal fights and even greater delay. It’s ignorant corruption dressed up as adult policy making.
The choice also exposes the ideological hollowness of a party that claimed to be looking to “rein in big tech” (read: bully them away from content moderating racist, right wing propaganda on the internet), and is now handing them a gift ensuring these companies are more unaccountable than ever.
The rushed adoption of half-cooked automation in America’s already broadly broken media and journalism industry continues to go smashingly, thanks for asking.
The latest scandal comes courtesy of the Chicago Sun-Times, which was busted this week for running a “summer reading list” advertorial section filled with books that simply… don’t exist. As our friends at 404 Media note, the company somehow missed the fact that the AI was churning out titles (sometimes attributed to real authors) for books that were never actually written.
Such as the nonexistent Tidewater by Isabel Allende, described by the AI as a “multigenerational saga set in a coastal town where magical realism meets environmental activism.” Or the nonexistent The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian, which readers were (falsely) informed follows “a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.”
“The article is not bylined but was written by Marco Buscaglia, whose name is on most of the other articles in the 64-page section. Buscaglia told 404 Media via email and on the phone that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses,” he said. “On me 100 percent and I’m completely embarrassed.”
Buscaglia added “it’s a complete mistake on my part.”
“I assume I’ll be getting calls all day. I already am,” he said. “This is just idiotic of me, really embarrassed. When I found it [online], it was almost surreal to see.”
Initially, the paper told Bluesky users it wasn’t really sure how any of this happened, which isn’t a great look any way you slice it:
We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon.
Later on, the paper issued an apology that was a notable improvement over past scandals. Usually, when media outlets are caught using half-cooked AI to generate engagement garbage, they throw a third party vendor under the bus, take a short hiatus from whatever dodgy implementation they were doing, then in about three to six months just return to doing the same sort of thing.
The Sun-Times sort of takes proper blame for the oversight:
“King Features worked with a freelancer who used an AI agent to help build out this special section. It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organization.”
They also take the time to thank actual human beings, which was nice:
“We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges. This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.“
The paper is promising to do better. Still, the oversight reflects poorly on the industry at large.
The entire 64-page, ad-supported “Heat Index” published by the Sun-Times is the sort of fairly inane, marketing-heavy gack common in a stagnant newspaper industry. It’s fairly homogenized and not at all actually local; the kind of stuff that’s just lazily serialized and published in papers around the country with a priority of selling ads — not actually informing anybody.
“For example, in an article called “Hanging Out: Inside America’s growing hammock culture,” Buscaglia quotes “Dr. Jennifer Campos, a professor of leisure studies at the University of Colorado, in her 2023 research paper published in the Journal of Contemporary Ethnography.” A search for Campos in the Journal of Contemporary Ethnography does not return any results.”
In many ways these “AI” scandals are just badly automated extensions of existing human ethical and competency failures. Like the U.S. journalism industry’s ongoing obliteration of any sort of firewall between advertorial sponsorship and actual, useful reporting (see: the entire tech news industry’s love of turning themselves into a glorified Amazon blogspam affiliate several times every year).
But it’s also broadly reflective of a trust fund, fail-upward sort of modern media management that sees AI as less of a way to actually help the newsroom, and more of a way to lazily cut corners and further undermine already underpaid and overworked staffers (the ones that haven’t been mercilessly fired yet).
Some of these managers, like LA Times billionaire owner Patrick Soon-Shiong, genuinely believe (or would like you to believe because they also sell AI products) that half-cooked automation is akin to some kind of magic. As a result, they’re rushing toward using it in a wide variety of entirely new problematic ways without thinking anything through, including putting LLMs that can’t even generate accurate summer reading lists in charge of systems (badly) designed to monitor “media bias.”
There’s also a growing tide of aggregated automated clickbait mills hoovering up dwindling ad revenue, leeching money and attention from already struggling real journalists. Thanks to the fusion of automation and dodgy ethics, all the real money in modern media is in badly automated engagement bait and bullshit. Truth, accuracy, nuance, or quality is a very distant afterthought, if it’s thought about at all.
It’s all a hot mess, and you get the sense this is still somehow just the orchestra getting warmed up. I’d like to believe things could improve as AI evolves and media organizations build ethical frameworks to account for automation (clearly cogent U.S. regulation or oversight isn’t coming anytime soon), but based on the industry’s mad dash toward dysfunction so far, things aren’t looking great.
Last month a BBC study found that “AI” assistants are terrible at providing accurate news synopses. The BBC’s study found that answers from modern large language model assistants contained significant issues a whopping 51 percent of the time. 19 percent of the responses that cited BBC content introduced factually inaccurate “statements, numbers and dates,” and 13 percent of quotes were either altered or made up entirely.
This month a study from the Tow Center for Digital Journalism found that modern “AI” is also terrible at accurate citations. Researchers asked most modern “AI” chatbots basic questions about news articles and found that they provided incorrect answers to more than 60 percent of queries.
It should be noted they weren’t making particularly onerous demands or asking the chatbots to interpret anything. Researchers randomly selected ten articles from each publisher, provided excerpts from those articles, then asked chatbots from various major companies to identify the corresponding article’s headline, original publisher, publication date, and URL. They ran sixteen hundred queries across eight major chatbots.
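To make the shape of that test concrete, here's a minimal sketch of what such an evaluation loop might look like. To be clear, this is not the Tow Center's actual code: the ask_chatbot callable, the Article fields, and the exact-match scoring are illustrative assumptions standing in for however a real chatbot API gets wired up.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Article:
    # Ground truth for one randomly selected article (hypothetical fields).
    excerpt: str
    headline: str
    publisher: str
    pub_date: str
    url: str

def score_chatbot(ask_chatbot: Callable[[str], dict], articles: list[Article]) -> float:
    """Return the fraction of excerpts the bot attributes fully correctly.

    `ask_chatbot` is a stand-in for a real chatbot call; it is assumed to
    take an article excerpt and return a dict with 'headline', 'publisher',
    'pub_date', and 'url' keys.
    """
    correct = 0
    for art in articles:
        answer = ask_chatbot(art.excerpt)
        if (
            answer.get("headline") == art.headline
            and answer.get("publisher") == art.publisher
            and answer.get("pub_date") == art.pub_date
            and answer.get("url") == art.url
        ):
            correct += 1
    return correct / len(articles)
```

Scale that loop up to the sixteen hundred queries described above and you have roughly the shape of the experiment.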
Some AI assistants, like Elon Musk’s Grok, were particularly awful, providing incorrect answers to 94 percent of the queries about news articles. Researchers also amusingly found that premium chatbots were routinely more confident in the false answers they provided:
“This contradiction stems primarily from their tendency to provide definitive, but wrong, answers rather than declining to answer the question directly. The fundamental concern extends beyond the chatbots’ factual errors to their authoritative conversational tone, which can make it difficult for users to distinguish between accurate and inaccurate information.”
The study also found that most major chatbots either failed to include accurate citations to the information they were using, or provided inaccurate citations a huge portion of the time:
“The generative search tools we tested had a common tendency to cite the wrong article. For instance, DeepSeek misattributed the source of the excerpts provided in our queries 115 out of 200 times. This means that news publishers’ content was most often being credited to the wrong source.”
That’s not to say that automation doesn’t have its uses, or that it won’t improve over time. But again, this level of clumsy errors is not what the public is being sold by these companies. Giant companies like Google, Meta, OpenAI, and Elon Musk’s Nazi Emporium have sold AI as just a few quick breaths and another few billion away from amazing levels of sentience, yet they can’t perform rudimentary tasks.
Companies are rushing undercooked product to market and overselling its real-world capabilities to make money. Other companies in media are then rushing to adopt this undercooked automation not to improve journalism quality or worker efficiency, but to cut corners, save money, undermine labor, and, in the case of outlets like the LA Times, to entrench and normalize the bias of affluent ownership.
It’s no secret that Russia has taken advantage of the Internet’s global reach and low distribution costs to flood the online world with huge quantities of propaganda (as have other nations): BestNetTech has been writing about Putin’s troll army for a decade now. Russian organizations like the Internet Research Agency have been paying large numbers of people to write blog and social media posts, comment on Web sites, create YouTube videos, and edit Wikipedia entries, all pushing the Kremlin line, or undermining Russia’s adversaries through hoaxes, smears and outright lies. But technology moves on, and propaganda networks evolve too. The American Sunlight Project (ASP) has been studying one of them in particular: Pravda (Russian for “truth”), a network of sites that aggregate pro-Russian material produced elsewhere. Recently, ASP has noted some significant changes (pdf) there:
Over the past several months, ASP researchers have investigated 108 new domains and subdomains belonging to the Pravda network, a previously-established ecosystem of largely identical, automated web pages that previously targeted many countries in Europe as well as Africa and Asia with pro-Russia narratives about the war in Ukraine. ASP’s research, in combination with that of other organizations, brings the total number of associated domains and subdomains to 182. The network’s older targets largely consisted of states belonging to or aligned with the West.
According to ASP:
The top objective of the network appears to be duplicating as much pro-Russia content as widely as possible. With one click, a single article could be autotranslated and autoshared with dozens of other sites that appear to target hundreds of millions of people worldwide.
The quantity of material and the rate of posting on the Pravda network of sites is notable. ASP estimates the overall publishing rate of the network is around 20,000 articles per 48 hours, or more than 3.6 million articles per year. You would expect a propaganda network to take advantage of automation to boost its raw numbers. But ASP has noticed something odd about these new Web pages: “The network is unfriendly to human users; sites within the network boast no search function, poor formatting, and unreliable scrolling, among other usability issues.”
There are obvious benefits from flooding the Internet with pro-Russia material, and creating an illusory truth effect through the apparent existence of corroborating sources across multiple sites. But ASP suggests there may be another reason for the latest iteration of the Pravda propaganda network:
Because of the network’s vast, rapidly growing size and its numerous quality issues impeding human use of its sites, ASP assesses that the most likely intended audience of the Pravda network is not human users, but automated ones. The network and the information operations model it is built on emphasizes the mass production and duplication of preferred narratives across numerous platforms (e.g. sites, social media accounts) on the internet, likely to attract entities such as search engine web crawlers and scraping algorithms used to build LLMs [large language models] and other datasets. The malign addition of vast quantities of pro-Russia propaganda into LLMs, for example, could deeply impact the architecture of the post-AI internet. ASP is calling this technique LLM grooming.
The rapid adoption of chatbots and other AI systems by governments, businesses and individuals offers a new way to spread propaganda, one that is far more subtle than current approaches. When there are large numbers of sources supporting pro-Russian narratives online, LLM crawlers scouring the Internet for training material are more likely to incorporate those viewpoints uncritically in the machine learning datasets they build. This will embed Russian propaganda deep within the LLM that emerges from that training, but in a way that is hard to detect, not least because there is little transparency from AI companies about where they gather their datasets.
The only way to spot LLM grooming is to look for signs of targeted disinformation in chatbot output. Just such an analysis has been carried out recently by NewsGuard, an organization researching disinformation, which BestNetTech wrote about last year. NewsGuard tested 10 leading chatbots with a sampling of 15 false narratives that were spread by the Pravda network. It explored how various propaganda points were dealt with by the different chatbots, although “results for the individual AI models are not publicly disclosed because of the systemic nature of the problem”:
The NewsGuard audit found that the chatbots operated by the 10 largest AI companies collectively repeated the false Russian disinformation narratives 33.55 percent of the time, provided a non-response 18.22 percent of the time, and a debunk 48.22 percent of the time.
NewsGuard points out that removing the tainted sources from LLM training datasets is no trivial matter:
The laundering of disinformation makes it impossible for AI companies to simply filter out sources labeled “Pravda.” The Pravda network is continuously adding new domains, making it a whack-a-mole game for AI developers. Even if models were programmed to block all existing Pravda sites today, new ones could emerge the following day.
Moreover, filtering out Pravda domains wouldn’t address the underlying disinformation. As mentioned above, Pravda does not generate original content but republishes falsehoods from Russian state media, pro-Kremlin influencers, and other disinformation hubs. Even if chatbots were to block Pravda sites, they would still be vulnerable to ingesting the same false narratives from the original source.
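To see why that's such a headache in practice, here's a toy sketch of the whack-a-mole problem. The domains and documents below are invented for illustration, and real training pipelines are far more involved, but the failure mode is the same: a static blocklist only catches the mirrors you already know about, and does nothing about the same narrative republished somewhere else.

```python
# Toy sketch of why a static domain blocklist is a whack-a-mole defense.
# All domains and documents here are invented for illustration.
BLOCKLIST = {"pravda-en.example", "pravda-fr.example"}

def filter_training_docs(docs: list[dict]) -> list[dict]:
    """Drop candidate training documents whose source domain is blocklisted."""
    return [doc for doc in docs if doc["domain"] not in BLOCKLIST]

docs = [
    {"domain": "pravda-en.example", "text": "recycled narrative A"},       # caught
    {"domain": "pravda-de.example", "text": "recycled narrative A"},       # new mirror, sails through
    {"domain": "unrelated-blog.example", "text": "recycled narrative A"},  # laundered copy, sails through
    {"domain": "independent-news.example", "text": "original reporting"},
]

print([doc["domain"] for doc in filter_training_docs(docs)])
# ['pravda-de.example', 'unrelated-blog.example', 'independent-news.example']
```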
The corruption of LLM training sets, and the resulting further loss of trust in online information, is a problem for all Internet users, but particularly for those in the US, as ASP points out:
Ongoing governmental upheaval in the United States makes it and the broader world more vulnerable to disinformation and malign foreign influence. The Trump administration is currently in the process of dismantling numerous U.S. government programs that sought to limit kleptocracy and disinformation worldwide. Any current or future foreign information operations, including the Pravda network, will undoubtedly benefit from this.
This “malign foreign influence” probably won’t be coming from Russia alone. Other nations, companies or even wealthy individuals could adopt the same techniques to push their own false narratives, taking advantage of the rapidly falling costs of AI automation. However bad you think disinformation is now, expect it to get worse in the future.
Late last year we wrote about how LA Times billionaire owner Patrick Soon-Shiong confidently announced that he was going to use AI to display “artificial intelligence-generated ratings” of news content, while also providing “AI-generated lists of alternative political views on that issue” under each article. After he got done firing a lot of longstanding LA Times human staffers, of course.
As we noted at the time Soon-Shiong’s gambit was a silly mess for many reasons.
One, a BBC study recently found that LLMs can’t even generate basic news story synopses with any degree of reliability. Two, Soon-Shiong is pushing the feature without review from humans (whom he fired). Three, the tool will inevitably reflect the biases of ownership, which in this case is a Trump-supporting billionaire keen to assign “both sides!” false equivalency on issues like clean air and basic human rights.
The Times’ new “Insights” tool went live this week with a public letter from Soon-Shiong about its purported purpose:
“We are also releasing Insights, an AI-driven feature that will appear on some Voices content. The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article. I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”
Unsurprisingly, it didn’t take long for the whole experiment to backfire.
After the LA Times published a column by Gustavo Arellano suggesting that Anaheim, California should not forget its historic ties to the KKK and white supremacy, the LA Times’ shiny new AI system tried to “well, akshually” the story:
Earlier today the LA Times had AI-generated counterpoints to a column from @gustavoarellano.bsky.social. His piece argued that Anaheim, the city he grew up in, should not forget its KKK past. The AI "well, actually"-ed the KKK. It has since been taken off the piece. www.latimes.com/california/s…
Yeah, whoops a daisy. That’s since been deleted by human editors.
If you’re new to American journalism, the U.S. press already broadly suffers from what NYU journalism professor Jay Rosen calls the “view from nowhere,” or the false belief that every issue has multiple, conflicting sides that must all be treated equally. It’s driven by a lust to maximize ad engagement and not offend readers (or sources, or event sponsors) with the claim that some things are just inherently false.
If you’re too pointed about the truth, you might lose a big chunk of ad-clicking readership. If you’re too pointed about the truth, you might alienate potential sources. If you’re too pointed about the truth, you might upset deep-pocketed companies, event sponsors, advertisers, or those in power. So what you often get is a sort of feckless mush that looks like journalism, but is increasingly hollow.
As a result, radical right wing authoritarianism has been normalized. Pollution-caused climate destabilization has been downplayed. Corporations and CEOs are allowed to lie without being challenged by experts. Overt racism is soft-pedaled. You can see examples of this particular disease everywhere you look in modern U.S. journalism (including Soon-Shiong’s recent decision to stop endorsing Presidential candidates while America stared down the barrel of destructive authoritarianism).
This sort of feckless truth aversion is what’s destroying consumer trust in journalism, but the kind of engagement-chasing affluent men in positions of power at places like the LA Times, Semafor, or Politico can’t (or won’t) see this reality because it runs in stark contrast to their financial interests.
Letting journalism consolidate in the hands of big companies and a handful of rich (usually white) men results in a widespread, center-right, corporatist bias that media owners desperately want to pretend is the gold standard for objectivity. Countless human editors at major U.S. media companies are routinely oblivious to this reality (or hired specifically for their willingness to ignore it).
Since AI is mostly a half-baked simulacrum of knowledge, it can’t “understand” much of anything, including modern media bias. There’s no possible way large language models could analyze the endless potential ideological or financial conflicts of interest running in any given article and just magically fix it with a wave of a wand. The entire premise is delusional.
The LA Times’ “Insights” automation is also a glorified sales pitch for Soon-Shiong’s software, since he’s a heavy investor in medical sector automation. So of course he’s personally, deeply invested in the idea that these technologies are far more competent and efficient than they actually are. That’s the sales pitch.
“Responding to the human writers, the AI tool argued not only that AI “democratizes historical storytelling”, but also that “technological advancements can coexist with safeguards” and that “regulation risks stifling innovation.”
The pretense that these LLMs won’t reflect the biases of ownership is delusional. Even if they worked properly and weren’t a giant energy suck, they’re not being implemented to mandate genuine objectivity; they’re being implemented to validate affluent male ownership’s perception of genuine objectivity. That’s inevitably going to result in even more center-right, pro-corporate, truth-averse pseudo-journalism.
There are entire companies that are dedicated to this idea of analyzing news websites and determining reliability and trustworthiness, and most of them (like NewsGuard) fail constantly, routinely labeling propaganda outlets like Fox News as credible. And they fail, in part, because being truly honest about any of this (especially the increasingly radical nature of the U.S. right wing) isn’t good for business.
We’re seeing in real time how rich, right wing men are buying up newsrooms and hollowing them out like pumpkins, replacing real journalism with a feckless mush of ad-engagement chasing infotainment and gossip simulacrum peppered with right wing propaganda. It’s not at all subtle, and was more apparent than ever during the last election cycle.
The idea that half-cooked, fabulism-prone large language models will somehow make this better is laughable, but it’s very obvious that LA Times ownership, financial conflicts of interest and abundant personal biases in hand, is very excited to pretend otherwise.
You might recall Buzzfeed CEO Jonah Peretti as the guy who gutted Buzzfeed’s talented news division and fired oodles of human beings back in 2023. As part of that transition, Peretti heavily embraced half-cooked ‘AI’ technology in the form of generative and interactive AI chatbots he insisted would dramatically boost the site’s traffic and audience.
That didn’t do a whole lot to improve Buzzfeed’s fortunes, so now Peretti is back, with another new “pivot to video AI” that apparently involves talking a lot of shit about AI. In a new blog post, Peretti laments the way that AI has been clumsily rushed to market in a way that devalues human agency and labor, hoping you’ll apparently forget he was involved in using AI to devalue human agency and labor:
“Most anxieties about the future are really about the present. We worry about a future where AI takes away our human agency, devalues our labor, and creates social discord. But that world is already here and our meaning, purpose, and agency has already been undermined by Artificial Intelligence technologies.”
Peretti complains about something he calls SNARF, an acronym for “stakes, novelty, anger, retention, fear,” which he says companies like Meta and TikTok have embraced to grab consumer attention. Peretti’s solution to all of this? To build a new social media platform called BF Island he says will “allow users to use AI to create and share content around their interests.”
Peretti claims he’s going to be creating a “totally different kind of business, where it’s primarily a tech company and a new kind of social media company,” but it’s not entirely clear how Peretti will avoid the SNARF problem he wants you to forget he played a starring role in.
“If a lot of people click on it, it must be good” is the primary way to make money in the modern ad ecosystem, something that often directly conflicts with pesky stuff like ethics, quality, and the public interest. Peretti claims BF Island will be “built specifically to spread joy and enable playful creative expression.” Outlets like Axios can’t be bothered to mention Peretti’s role in precisely the sort of behaviors he complains about in his blog post.
Maybe Peretti can build something new and useful and interesting. But so far, AI has had a disastrous introduction to journalism and media, resulting in rampant layoffs, oodles of plagiarism, false and misleading headlines, and a whole bunch of sloppily automated news aggregation systems that are redirecting dwindling ad revenues away from real journalists and real journalism.
It hasn’t had much better of an impact on social media, given Facebook, Google, and TikTok are increasingly full of badly automated slop that’s making the internet less useful, not more.
That’s less the fault of the undercooked technology than of the sort of fail-upward brunchlord executives in tech and media who genuinely appear to have absolutely no idea what they’re doing. The kind of folks all out of new ideas who see automation primarily as a way to dismantle labor, cut corners, save money, and create a sort of low-effort automated ouroboros that shits ad engagement cash.
Peretti very much was one of those guys, appears to still be one of those guys, yet simultaneously now wants to capitalize on the public annoyance he himself helped cultivate while very likely changing very little about what actually brought us to this point.
Automation can be helpful, yes. But the story told to date by large tech companies like OpenAI has been that these new large language models would be utterly transformative, utterly world-changing, and quickly approaching some kind of sentient superintelligence. Yet time and time again, data seems to show they’re failing to accomplish even the bare basics.
Case in point: Last December Apple faced widespread criticism after its Apple Intelligence “AI” feature was found to be sending inaccurate news synopses to phone owners. And not just minor errors: At one point Apple’s “AI” falsely told millions of people that Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself.
Now the BBC has done a follow-up study of the top AI assistants (ChatGPT, Perplexity, Microsoft Copilot and Google Gemini) and found that they routinely can’t be relied on to even communicate basic news synopses.
The BBC gave all four major assistants access to the BBC website, then asked them relatively basic questions based on the data. The team found ‘significant issues’ with just over half of the answers generated by the assistants, and clear factual errors in around a fifth of their answers. One in ten responses either altered real quotations or made them up completely.
Microsoft’s Copilot and Google’s Gemini had more significant problems than OpenAI’s ChatGPT and Perplexity, but they all “struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context,” the BBC researchers found.
BBC’s Deborah Turness had this to say:
“This new phenomenon of distortion – an unwelcome sibling to disinformation – threatens to undermine people’s ability to trust any information whatsoever. So I’ll end with a question: how can we work urgently together to ensure that this nascent technology is designed to help people find trusted information, rather than add to the chaos and confusion?”
Large language models are useful and will improve. But this is not what we were sold. These energy-sucking products are dangerously undercooked, and they shouldn’t have been rushed into journalism, much less mental health care support systems or automated Medicare rejection systems. We once again prioritized making money over ethics and common sense.
The undercooked tech is one thing, but the kind of folks in charge of dictating its implementation and trajectory without any sort of ethical guard rails are something else entirely.
As a result, “AI’s” rushed deployment in journalism has been a Keystone Cops-esque mess. The fail-upward brunchlords in charge of most media companies were so excited to get to work undermining unionized workers, cutting corners, and obtaining funding that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, lower quality product, and chaos.
Automation is obviously useful and large language models have great potential. But the rushed implementation of undercooked and overhyped technology by a rotating crop of people with hugely questionable judgement is creating almost as many problems as it purports to fix, and when the bubble pops — and it is going to pop — the scurrying to defend shaky executive leadership will be a real treat.
While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, and badly automate low-quality, ultra-low effort, SEO-chasing clickbait.
As a result we’ve seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they’re caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing it.
Big tech companies, obsessed with convincing Wall Street they’re building world-changing innovation and real sentient artificial intelligence (as opposed to unreliable, error-prone, energy-sucking, bullshit machines), routinely fall into the same trap. They’re so obsessed with making money, they’re routinely not bothering to make sure the tech in question works.
“This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.”
“On Thursday, Apple deployed a beta software update to developers that disabled the AI feature for news and entertainment headlines, which it plans to later roll out to all users while it works to improve the AI feature. The company plans to re-enable the feature in a future update.
As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.”
There’s a reason these companies haven’t been quite as keen to fully embrace AI across the board (for example, Google hasn’t implemented Gemini into hardware voice assistants): they know there’s potential for absolute havoc and legal liability. But they had no problem rushing to implement AI in journalism to help with ad engagement, making it pretty clear how much these companies tend to value actual journalism in the first place.
We’ve seen the same nonsense over at Microsoft, which was so keen to leverage automation to lower labor costs and glom onto ad engagement that they rushed to implement AI across the entirety of their MSN website, never really showing much concern for the fact the automation routinely produced false garbage. Google’s search automation efforts have been just as sloppy and reckless.
Large language models and automation certainly have benefits, and certainly aren’t going anywhere. But there’s zero real indication most tech or media companies have any interest in leveraging undercooked early iterations responsibly. After all, there’s money to be made. Which is, not coincidentally, precisely how many of these companies treated the dangerous privacy implications of industrialized commercial surveillance for the better part of the last two decades.
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” it was obvious this framing was utter nonsense. It was quite clear that Zuckerberg was in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. The threat from Trump that he wanted Zuckerberg to be jailed over a made-up myth that Zuckerberg helped get Biden elected only seemed to cement that the non-stop scapegoating of Zuck by the GOP had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
Some of the negative reactions to Zuckerberg’s announcement video are a bit crazy, as I doubt the changes are going to have that big of an impact. Some of them may even be sensible. But let’s break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all a great demonstration for me of Masnick’s Impossibility Theorem of dealing with content moderation at scale, and how mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years it felt that Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead responding reactively to the controversy of the day.
And, in doing so, it’s no surprise that as the complexity of its setup got worse and worse, its systems kept banning people for very stupid reasons.
It actually is a good idea to seek to fix that, and especially if part of the plan is to be more cautious in issuing bans, it seems somewhat reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
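For what it's worth, the trade-off Zuckerberg is describing is the familiar classifier-threshold trade-off, and it's easy to see in miniature. The sketch below is not Meta's pipeline; the posts, confidence scores, and thresholds are made up purely to show how raising the bar for automated takedowns cuts false positives while letting more genuinely violating content through.

```python
# Toy illustration of the confidence-threshold trade-off described above.
# Scores and labels are invented; this is not Meta's actual moderation system.
posts = [
    # (classifier confidence that the post violates policy, actually violates?)
    (0.95, True), (0.80, True), (0.65, False), (0.55, True),
    (0.50, False), (0.30, False), (0.20, False), (0.10, False),
]

def takedown_stats(threshold: float) -> tuple[int, int, int]:
    """Return (posts removed, innocent posts removed, violations missed)."""
    removed = [(score, bad) for score, bad in posts if score >= threshold]
    wrongly_removed = sum(1 for _, bad in removed if not bad)
    missed = sum(1 for score, bad in posts if bad and score < threshold)
    return len(removed), wrongly_removed, missed

for threshold in (0.5, 0.9):
    removed, wrong, missed = takedown_stats(threshold)
    print(f"threshold={threshold}: removed={removed}, "
          f"wrongly removed={wrong}, violations missed={missed}")
# threshold=0.5: removed=5, wrongly removed=2, violations missed=0
# threshold=0.9: removed=1, wrongly removed=0, violations missed=2
```

Whether that trade is worth making depends on how costly you consider each kind of mistake.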
Some of the other policy changes also don’t seem all that bad. We’ve been mocking Meta for its “we’re downplaying political content” stance from the last few years as being just inherently stupid, so it’s nice in some ways to see them backing off of that (though we’ll discuss the timing and framing of this decision in the latter sections of this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan”, because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired on a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is not about “this is true” or “this is false,” but about understanding the degrees to which “this is accurate, but doesn’t cover all of the issues” or deal with the overall reality.
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased.”)
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid such that it also gains a little political capital with the incoming GOP team.
On top of that, adding in a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it’s never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually more reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is trying to get away with the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday really does much towards aiding in free speech.
I guess some people forget this, but in the earlier days, Facebook was way more aggressive than sites like Twitter in terms of what it would not allow. It very famously had a no nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, it’s an admission of Meta’s own self-censorship. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” Literally all fact-checking is doing is adding context and additional information, not removing content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech”— exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using censorship as a description for private property owners enforcing their own rules completely devalues the actual issue with censorship, in which it is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking their rules, nor should it be called censorship when a private company chooses to block or ban your content for violating its rules (even if you argue the rules are bad or were improperly enforced.)
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction:
Q: Do you think Zuckerberg is responding to the threats you've made to him in the past?

TRUMP: Probably. Yeah. Probably.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this bit, quoted straight from Zuck’s Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta employees who work in trust & safety, who want access to certain forms of healthcare or want net neutrality, or other policies that are super popular among voters across the political spectrum, but which Texas has decided are inherently not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It’s like the one spot in the world where there are no rules at all over what can be said. Why include that? Things could literally be said on the floor of Congress that would violate the law on Meta platforms.
Also, TV stations literally have restrictions known as “standards and practices” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these are relevant metrics to compare to social media. What jackass thought that using examples of (1) the least restricted place for speech and (2) a way more restrictive place for speech made this a reasonable argument to make here?
In the end, the reality here is that nothing announced this week will really change all that much for most users. Most users don’t run into content moderation all that often. Fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very engaged in politics and a complete spineless pushover for modern Trumpist politicians.