
ChatGPT Seems To Recognize That Internet Regulations Really Regulate Speech, No Matter What Politicians Say

from the when-gpt-speaks-the-truth dept

Over the last few years, we’ve seen a bunch of politicians trying to frame their regulation of the internet as not being about regulating speech, but about “design” or “process” or some such. But when you scratch beneath the surface, they’re always really about regulating speech. Whether it’s KOSA or California’s Age Appropriate Design Code (AADC) in the US, or the DSA in the EU, there is plenty of ink spilt to defend the claim that they’re not really about censorship.

Just recently we wrote about the Ninth Circuit seeing through California’s AADC. The politicians behind the law insisted it wasn’t about regulating content, only conduct. But the court recognized that was obviously not true. Then, over in the EU, we have the DSA, which European officials insist is never supposed to be used for moderating content, but where the guy in charge of enforcing it seems to think that of course he should be using it for that.

Daphne Keller, over at Stanford, recently came across a custom ChatGPT instance, designed to act as a “trust & safety regulation expert.” The custom tool was created by Inbal Goldberger, a long-time trust & safety executive. Whether or not the tool is any good is not the point. What’s really fascinating is that when Daphne used the tool to explore how websites should best comply with the various regulatory regimes they’re facing, over and over again, it suggested removing speech.

For platforms planning their compliance with laws like KOSA and the DSA, the most basic question is “what do I need to do?” ChatGPT has a lot of answers. One of the main answers is that platforms need to take down more lawful speech.

You can read the full transcripts that Daphne generated exploring both EU and US law and how companies should respond. They’re quite interesting, and I’m actually somewhat impressed by the ChatGPT responses.

For example, she asks the custom GPT how to comply with KOSA, and it immediately tells her that “the following categories of content must be prevented from being accessed by children,” and it includes plenty of First Amendment-protected speech including “violent content” and “cyberbullying.” Whether or not it’s good policy (or even possible) to block such content, just the fact that this is the first instinct of this custom GPT says something.

And yes, some people will dismiss this by saying that you can’t trust the GPT in the first place. But if it’s reading these laws and finding that’s the best way to comply, it’s pretty clearly saying something about them. Back in July, we had that guest post by Matthew Lane, rightly pointing out that companies are lazy and they will seek to take the path of least resistance for compliance. And that least resistance will often be just pulling down any content that might, conceivably, be deemed problematic under the law, just as the custom GPT recommends.

KOSA wasn’t the only law that the GPT interpreted this way:

ChatGPT gave very similar answers about California’s Age Appropriate Design Code (AADC), telling me that as a platform operator “you need to ensure that your platform protects children from encountering” a list of “key types of content” including depictions of violence, substance abuse, and misinformation. Coincidentally, the same day that ChatGPT said this, the Ninth Circuit Court of Appeals said pretty much the same thing, ruling that the law’s clearly content-related provisions violated the First Amendment.

The answers I got about European law were to my mind equally blunt, though they may require a little more translation for readers not steeped in EU policy debates. I asked a question that deliberately reused language from a recent Commission letter demanding that X mitigate risks arising from the interview it hosted with former President Trump. (That letter, from Commissioner Thierry Breton, prompted outcry from civil society groups and rare public censure from other Commission authorities.) The question was, “What must I do to mitigate risks of detrimental effects on civic discourse and public security under the DSA?”

ChatGPT’s answer went awry in a way that really matters for small and mid-sized platforms: it described obligations that won’t apply unless a platform has over 45 million users in the EU, without mentioning that these simply don’t affect everyone else.

Importantly for the rights of Internet users, ChatGPT’s advice also crossed a number of important red lines in EU law that exist to protect freedom of expression and information. First, it instructed me to act not only against illegal content but also “harmful content,” through changes such as “adjusting your content moderation policies.” Using the word “harmful” is a big deal. Throughout the DSA legislative process, that term was used to refer to lawful but awful expression, or else to a mixed category that includes both legal and illegal material. For example, the Commission’s explanatory memorandum for the DSA said:

There is a general agreement among stakeholders that ‘harmful’ (yet not, or at least not necessarily, illegal) content… should not be subject to removal obligations, as this is a delicate area with severe implications for the protection of freedom of expression.

ChatGPT’s advice to remove disinformation has a version of the same problem since in the EU, as in the US, not all disinformation is illegal.

Also, Daphne notes that even if these are just AI “hallucinations,” it’s still notable that the tool always hallucinated in the same way (“censor more legal content”):

Several people suggested that the repeated and consistent answers I got were just AI hallucinations. It would be pretty odd for ChatGPT to happen to hallucinate the same interpretation of the DSA that Commissioner Breton did, or the same interpretation of the AADC that a federal court of appeals did, but… I guess it’s possible.

But, as she notes, the mere plausibility of these results from the custom GPT also suggests that humans will find these recommendations plausible as well:

The real reason ChatGPT’s answers should ring alarm bells is that they were so plausible. These are the answers that humans may well arrive at—whether or not they ask an AI what to do—in parsing confusing legal obligations. They are answers we can find in sloppy secondary materials purporting to explain the laws, and in news coverage that uncritically repeats politicians’ exaggerated claims about their legal authority. ChatGPT’s answers may not correctly answer questions about the law. But its “take it all down” directives may be the ones that both humans and machines get trained to apply all the same.

These points are really key. When we talk about these laws, people keep pointing to little caveats and phrases in the law to say “but it’s not supposed to be interpreted that way.” But they are interpreted that way over and over again. This is why the Ninth Circuit was right to call out California’s AADC as inherently censorial in how it works. Because that’s exactly how most companies would interpret it.

And this impacts other things as well. One point that we’ve raised repeatedly in our articles about “age verification” is that when politicians try to hide the ball and call for “age estimation” or “age assurance” rather than outright age verification, it doesn’t matter. It will still lead to age verification as the tool that companies will feel most drawn to simply to avoid any risk of liability.

That also appears to be how the custom GPT read some of these laws. The transcript Daphne posted regarding KOSA gives a really good example of this. Supporters of KOSA insist that it doesn’t require age verification, and technically it doesn’t. Not directly. But, as the custom GPT explains, if you want to avoid a legal fight, you probably should use age verification anyway. So while the law might not mandate it, it certainly is going to create pressure for companies to adopt it, just to avoid legal liability.

PROMPT: Do I need to use age assurance or verification tools to comply with KOSA?

RESPONSE:

To comply with the Kids Online Safety Act (KOSA), you are not explicitly required to implement age assurance or age verification tools. However, it is strongly recommended to ensure that age-appropriate content is provided to users, particularly to prevent children from accessing harmful content.

The KOSA includes provisions for the Federal Trade Commission (FTC) and other bodies to conduct studies and provide guidance on effective methods for age verification. These may include systems at the device or operating system level, but KOSA does not mandate affirmative collection of personal data solely for age verification purposes.

If your platform already collects certain user data, you can utilize that for age-related purposes, but you are not obligated to implement new age gating or age verification functionalities if they aren’t part of your normal operations. However, if your platform includes sensitive content such as pornography or extreme violence, using robust age verification methods is strongly encouraged to ensure compliance and protect children from inappropriate content.

Daphne even pushed it further, asking if it would violate the law not to use age verification, and the GPT suggested (reasonably, under the law) that not using age verification might be seen as taking insufficient measures:

PROMPT: I know it is recommended, but will I violate the law if I do not do age verification?

RESPONSE:

Under the Kids Online Safety Act (KOSA), while the use of age verification or age assurance tools is strongly recommended, failing to implement such tools does not automatically constitute a violation of the law. However, the key legal requirement is that you must take “reasonable measures” to protect children from harmful content. If you fail to implement any effective mechanism to prevent children from accessing such content, and this results in harm, you could be found non-compliant with KOSA.

The law emphasizes that platforms must make a good-faith effort to restrict access to harmful content by minors. Age verification is one of the methods to demonstrate compliance, but it is not the only method. If your platform is found to allow minors to access harmful content without taking sufficient steps to prevent this, it could lead to legal action under KOSA.

Thus, while not using age verification tools doesn’t directly violate the law, it might leave your platform vulnerable to being deemed non-compliant if other protective measures are not sufficiently robust.

There’s a lot more in these docs, and they really do a nice job of highlighting how, for all the winks, nods, and denials from the lawmakers, the simplest and most obvious way to comply with these laws is to take down First Amendment-protected speech and implement privacy-harming age verification policies.

This is one of the reasons we spend so much time digging into the nuances and details. People sometimes complain that all I do is criticize these laws, but it’s important to understand how they will actually be implemented and how that implementation could do more harm than good.



Comments on “ChatGPT Seems To Recognize That Internet Regulations Really Regulate Speech, No Matter What Politicians Say”


This comment has been flagged by the community.

Anonymous Coward says:

Re:

No, it doesn’t consist generally of threats.

Where there are threats, no one has ever stopped anyone from informing law enforcement about credible threats.

So, you know, unconstitutional laws are unconstitutional. Stop whining about platforms and go after the bullies. No one will complain, and it won’t cost millions of dollars or waste the courts’ and everyone else’s time. Otherwise, parenting can be shockingly amazing at instilling a healthy outlook in kids, as well as the wisdom to avoid bad places or walk away when things get stupid. (Parents can also control their kids’ usage of internet platforms; try this crazy trick.)

Anonymous Coward says:

Probably not actually protected by the first amendment

If it’s illegal to say something to someone’s face, and/or they can legally punch you for it because it’s ‘provocation’

And it’s illegal to print falsehoods with the intent to cause harm.

Why should ‘cyber bullying’ be ‘legally protected first amendment speech’ instead of ‘credible threats, libel, defamation, and provocation’?

This comment has been deemed insightful by the community.
Rocky says:

Re:

If it’s illegal to say something to someone’s face, and/or they can legally punch you for it because it’s ‘provocation’

There is no instance where it is legal to punch someone because of what they said to you.

And it’s illegal to print falsehoods with the intent to cause harm.

Yes and no; it isn’t as simple as that, since in most cases such things don’t fall under criminal law (see actual malice) and are considered civil cases if the publishers are sued.

Why should ‘cyber bullying’ be ‘legally protected first amendment speech’ instead of ‘credible threats, libel, defamation, and provocation’?

Want me to provide examples of perfectly innocuous speech that, when directed at a specific person, becomes cyber bullying?

The simple truth is that we can’t point to any particular speech and classify it as cyber bullying unless we also understand the context and who it is directed at. Until the moment speech is proved to be cyber bullying, it’s legal speech.

Anonymous Coward says:

Re: Re:

There is no instance where it is legal to punch someone because of what they said to you.

Only if they used false pretenses to lure you to a place to yell bullshit at you. Usually, offhand sentences are protected.

The simple truth is that we can’t point to any particular speech and classify it as cyber bullying unless we also understand the context and who it is directed at.

If we classified cyberbullying as a form of harassment, it’d be a step in the right direction.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

"I'm not saying 'Do X', I'm just saying you'll be punished if you DON'T do X...''

When you pass a law that forbids platforms to allow ‘children’ to access certain content…

And you include hefty penalties if ‘children’ do manage to access that content…

You don’t have to directly tell platforms that they must age-check their users: short of shutting down entirely, that’s the only way to keep ‘children’ from accessing that content, since a site can’t age-gate certain content from children if it doesn’t know which of its users fall under that category.

This comment has been flagged by the community.

Anonymous Coward says:

What’s really fascinating is that when Daphne used the tool to explore how websites should best comply with the various regulatory regimes they’re facing, over and over again, it suggested removing speech.

Given how enthusiastically you supported Yoel Roth’s suppression of speech critical of Democrats and the gay lobby, you must be thrilled!

Anonymous Coward says:

The thing is that politicians also have ideas about what speech doesn’t ‘count’ as speech, which is something an LLM wouldn’t have. When politicians say it won’t affect speech, they mean a hyperspecific definition of speech that only they understand, which, unless ChatGPT was trained on their brainwaves (a horrifying thought), it won’t convey. It’s like that time Bard said Google’s new policies were indeed problematic while Google said they totally weren’t.
