Our second annual live recording of Ctrl-Alt-Speech at TrustCon! Ben was unable to make the trip halfway around the world, but Mike was joined by trust & safety influencer Alice Hunsberger from Musubi and Ashkhen Kazaryan, a Senior Legal Fellow at the Future of Free Speech at Vanderbilt University. They cover:
This week’s sponsor is Modulate. In our bonus chat, Mike Masnick talks with Modulate founder and CEO Mike Pappas, live at TrustCon, about the kinds of voice scams they’re seeing, with a focus on those that use social engineering techniques to pressure people into doing things they probably shouldn’t do.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Bridget Todd, a technology and culture writer, speaker, and trainer, and host of two great podcasts, There Are No Girls on the Internet and IRL: Online Life is Real Life. Together, they cover:
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Zeve Sanderson, the founding Executive Director of the NYU Center for Social Media & Politics. Together, they cover:
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CTO Carter Huffman about how their voice technology can actually detect fraud.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Hank Green, popular YouTube creator and educator. After spending some time talking about being a creator at the whims of platforms, they cover:
The rushed adoption of half-cooked automation in America’s already broadly broken media and journalism industry continues to go smashingly, thanks for asking.
The latest scandal comes courtesy of the Chicago Sun-Times, which was busted this week for running a “summer reading list” advertorial section filled with books that simply… don’t exist. As our friends at 404 Media note, the company somehow missed the fact that the AI-generated synopses described books (sometimes attributed to real authors) that were never actually written.
Such as the nonexistent Tidewater Dreams by Isabel Allende, described by the AI as a “multigenerational saga set in a coastal town where magical realism meets environmental activism.” Or the nonexistent The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian, which readers were (falsely) informed follows “a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.”
The article is not bylined but was written by Marco Buscaglia, whose name is on most of the other articles in the 64-page section. Buscaglia told 404 Media via email and on the phone that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses,” he said. “On me 100 percent and I’m completely embarrassed.”
Buscaglia added that “it’s a complete mistake on my part.”
“I assume I’ll be getting calls all day. I already am,” he said. “This is just idiotic of me, really embarrassed. When I found it [online], it was almost surreal to see.”
Initially, the paper told Bluesky users it wasn’t really sure how any of this happened, which isn’t a great look any way you slice it:
We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon.
Later on, the paper issued an apology that was a notable improvement over past scandals. Usually, when media outlets are caught using half-cooked AI to generate engagement garbage, they throw a third-party vendor under the bus, take a short hiatus from whatever dodgy implementation they were attempting, then in about three to six months quietly return to doing the same sort of thing.
The Sun-Times sort of takes proper blame for the oversight:
“King Features worked with a freelancer who used an AI agent to help build out this special section. It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organization.”
They also took the time to thank actual human beings, which was nice:
“We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges. This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.”
The paper is promising to do better. Still, the oversight reflects poorly on the industry at large.
The entire 64-page, ad-supported “Heat Index” published by the Sun-Times is the sort of fairly inane, marketing-heavy gack common in a stagnant newspaper industry. It’s heavily homogenized and not at all actually local; the kind of stuff that’s lazily syndicated to papers around the country with the priority of selling ads, not actually informing anybody.
“For example, in an article called “Hanging Out: Inside America’s growing hammock culture,” Buscaglia quotes “Dr. Jennifer Campos, a professor of leisure studies at the University of Colorado, in her 2023 research paper published in the Journal of Contemporary Ethnography.” A search for Campos in the Journal of Contemporary Ethnography does not return any results.”
In many ways, these “AI” scandals are just badly automated extensions of existing human ethical and competency failures. Like the U.S. journalism industry’s ongoing obliteration of any sort of firewall between advertorial sponsorship and actual, useful reporting (see: the entire tech news industry’s habit of turning itself into a glorified Amazon blogspam affiliate several times every year).
But it’s also broadly reflective of a trust-fund, fail-upward sort of modern media management that sees AI less as a way to actually help the newsroom and more as a way to lazily cut corners and further undermine already underpaid and overworked staffers (the ones who haven’t been mercilessly fired yet).
Some of these managers, like LA Times billionaire owner Patrick Soon-Shiong, genuinely believe (or would like you to believe, because they also sell AI products) that half-cooked automation is akin to some kind of magic. As a result, they’re rushing to use it in a wide variety of entirely new and problematic ways without thinking anything through, including putting LLMs that can’t even generate accurate summer reading lists in charge of systems (badly) designed to monitor “media bias.”
There’s also a growing tide of automated, aggregated clickbait mills hoovering up dwindling ad revenue, leeching money and attention from already struggling real journalists. Thanks to the fusion of automation and dodgy ethics, all the real money in modern media is in badly automated engagement bait and bullshit. Truth, accuracy, nuance, and quality are very distant afterthoughts, if they’re thought about at all.
It’s all a hot mess, and you get the sense this is still somehow just the orchestra warming up. I’d like to believe things could improve as AI evolves and media organizations build ethical frameworks to account for automation (cogent U.S. regulation or oversight clearly isn’t coming any time soon), but based on the industry’s mad dash toward dysfunction so far, things aren’t looking great.
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Modulate. In our Bonus Chat, we speak with Modulate CEO Mike Pappas about the evolving landscape of online fraud and how the company’s work detecting abuse in gaming environments is helping identify financial misconduct across different types of digital platforms.
Last year, Microsoft announced that it was bringing a new feature dubbed “Recall” to its underperforming Windows 11 OS. According to Microsoft’s explanation of Recall, the “AI”-powered technology was supposed to take screenshots of your activity every five seconds, giving you an “explorable timeline of your PC’s past” that Microsoft’s AI-powered assistant, Copilot, can then help you peruse.
The idea is that you can use AI to help you dig through your computer use to remember past events (helping you find that restaurant your friend texted you about, or remember that story about cybernetic hamsters that so captivated you two weeks ago).
But it didn’t take long before privacy advocates understandably began expressing concerns: not only does this give Microsoft an even more detailed way to monetize consumer data, it creates significant new privacy risks should that data be exposed.
Early criticism revealed that consumer privacy genuinely was nowhere near the forefront of Microsoft’s thinking during Recall’s development. In response, Microsoft said it would take additional steps to address those concerns, including making the new service opt-in only and tethering access to encrypted Recall data to the PIN or biometric login restrictions of Windows Hello Enhanced Sign-in Security.
But that (quite understandably) didn’t console critics, and Microsoft eventually backed off the launch entirely.
Until now.
Last week, Microsoft, clearly hungry to further monetize absolutely everything you do, announced that it was bringing Recall back. Microsoft’s hoping that making the service opt-in (for now), with greater security, will help quiet criticism:
“To use Recall, you will need to opt-in to saving snapshots, which are images of your activity, and enroll in Windows Hello to confirm your presence so only you can access your snapshots.”
But as Ars Technica’s Dan Goodin notes, even if User A opts out of Recall, the users they’re interacting with may not, opening the door to a long chain of potential privacy violations:
“That means anything User A sends them will be screenshotted, processed with optical character recognition and Copilot AI, and then stored in an indexed database on the other users’ devices. That would indiscriminately hoover up all kinds of User A’s sensitive material, including photos, passwords, medical conditions, and encrypted videos and messages.”
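To make the mechanics concrete, here’s a minimal, hypothetical sketch of that capture-then-OCR-then-index pattern in Python. This is emphatically not Recall’s actual code; it assumes the third-party mss, pytesseract, and Pillow packages, and exists only to illustrate how quickly everything that crosses a screen becomes searchable plaintext:

```python
# Hypothetical sketch of the "screenshot -> OCR -> searchable index" loop
# described above. Not Microsoft's code; the libraries used here (mss,
# pytesseract, Pillow) are assumptions chosen for illustration.
import sqlite3
import time

import mss                      # cross-platform screen capture
import pytesseract              # Tesseract OCR bindings
from PIL import Image

db = sqlite3.connect("timeline.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

with mss.mss() as screen:
    for _ in range(3):          # a few snapshots, five seconds apart
        shot = screen.grab(screen.monitors[1])           # primary display
        img = Image.frombytes("RGB", shot.size, shot.rgb)
        text = pytesseract.image_to_string(img)          # screen -> plaintext
        db.execute("INSERT INTO snaps VALUES (?, ?)", (str(time.time()), text))
        db.commit()
        time.sleep(5)

# Anything that was ever on screen -- yours or anyone messaging you --
# is now one full-text query away:
for (ts,) in db.execute("SELECT ts FROM snaps WHERE snaps MATCH 'password'"):
    print("possible credential on screen at", ts)
```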
The simple act of creating this additional massive new archive of detailed user interactions may thrill Microsoft in the era of unregulated data brokers and rampant data monetization, but it creates an entirely new target for bad actors, spyware, subpoena-wielding governments, and foreign and domestic intelligence. In a country that’s literally too corrupt to pass a modern privacy law.
It’s all very… Microsoft.
It’s a bad idea being pushed by a company well aware that King Donald is taking a hatchet to any government regulators that might raise concerns about it. It’s another example of enshittification pretending to be progress, and Microsoft isn’t responding to press inquiries about it because it knows that barreling forth without heeding privacy concerns is a bad idea. It just doesn’t care.