ChatGPT Dreams Up Fake Studies, Alaska Cites Them To Support School Phone Ban
from the who-needs-reality-any-more? dept
Sometimes I love a good “mashup” story hitting on two of the themes we cover here at BestNetTech. This one is especially good: Alaska legislators relying on fake stats generated by an AI system to justify banning phones in schools, courtesy of the Alaska Beacon. It’s a mashup of the various stories about mobile phone bans in schools (which have been shown not to be effective) and people who should know better using ChatGPT as if it were trustworthy for research.
The state’s top education official relied on generative artificial intelligence to draft a proposed policy on cellphone use in Alaska schools, which resulted in a state document citing supposed academic studies that don’t exist.
The document did not disclose that AI had been used in its conception. At least some of that AI-generated false information ended up in front of state Board of Education and Early Development members.
Oops.
Alaska’s Education Commissioner Deena Bishop tried to talk her way out of the story. She claimed that she had just used AI to help her “create the citations” for a “first draft” but that “she realized her error before the meeting and sent correct citations to board members.”
Except, that apparently is as accurate as the AI’s hallucinations. Are we sure Deena Bishop isn’t just three ChatGPTs in a trench coat?
However, mistaken references and other vestiges of what’s known as “AI hallucination” exist in the corrected document later distributed by the department and which Bishop said was voted on by the board.
The resolution directs DEED to craft a model policy for cellphone restrictions. The resolution published on the state’s website cited supposed scholarly articles that cannot be found at the web addresses listed and whose titles did not show up in broader online searches.
Four of the document’s six citations appear to be studies published in scientific journals, but were false. The journals the state cited do exist, but the titles the department referenced are not printed in the issues listed. Instead, work on different subjects is posted on the listed links.
Cool cool. Passing laws based on totally made up studies created by generative AI.
What could possibly go wrong?
And, really, this stuff matters a lot. We’ve had multiple discussions on how lawmakers seem completely drawn to junk science to push through ridiculous anti-tech bills “for the children.” Here they’re skipping even junk science and going straight to non-existent, made-up science.
It’s difficult to see how you get good policy when it’s based on something dreamed up by an AI system.
Alaska officials pathetically tried to say this was no big deal and that these were “placeholder” citations:
After the Alaska Beacon asked the department to produce the false studies, officials updated the online document. When asked if the department used AI, spokesperson Bryan Zadalis said the citations were simply there as filler until correct information would be inserted.
“Many of the sources listed were placeholders during the drafting process used while final sources were critiqued, compared and under review. This is a process many of us have grown accustomed to working with,” he wrote in a Friday email.
Again, the version that had the hallucinated citations was distributed to the board and used as the basis for the vote.
Shouldn’t that matter?
For example, the department’s updated document still refers readers to a fictitious 2019 study in the American Psychological Association to support the resolution’s claim that “students in schools with cellphone restrictions showed lower levels of stress and higher levels of academic achievement.” The new citation leads to a study that looks at mental health rather than academic outcomes. Anecdotally, that study did not find a direct correlation between cellphone use and depression or loneliness.
Great. Great.
The Alaska Beacon article has a lot more details in it and is well worth a read. In the past, we’ve talked about concerns about people relying on AI too much, but that was more about things like figuring out prison sentences or whether or not someone should be hired.
Passing regulations based on totally AI-hallucinated studies is another thing entirely.
Filed Under: ai, alaska, citations, hallucinations, mobile phone bans, schools
Comments on “ChatGPT Dreams Up Fake Studies, Alaska Cites Them To Support School Phone Ban”
'Of course we found the AI trustworthy, it agreed with us!'
Nothing like using fake studies to support your position and then, when called on it, claiming they’re just ‘placeholders’ for when the real supporting evidence is found. That exposes that you’ve already made up your mind and that evidence to support your position is entirely optional.
I do have to wonder, though: if they’re willing to use fraudulent studies and claim it’s cool because they’re sure they’ll eventually get supporting evidence, how valid would they consider the same tactic used against them? Would they treat a report all about how cell-phone use in school is hugely beneficial to students, backed by equally fraudulent ‘studies’, as not only acceptable but persuasive, or would AI hallucinations suddenly become a problem?
Re: well, of course,...
…considering those who rise to claim justification from ChatGPT’s responses are those same individuals/groups that FIRST provided ChatGPT with said ‘ideas.’
alas, exactly akin to the nation’s pining for the ability to imbibe liquid poisons, hence, the gub’ment allowing said self-poisoning, i mean, ‘legalization’ of alcohol.
exactly akin to ancient Israel’s fall, where the PEOPLE who claimed that they “wanted a leader (dictator) like the other nations have!” first planted those ideas in their (human) “leadership.”
“Many of the sources listed were placeholders during the drafting process used while final sources were critiqued, compared and under review.”
Even if that were absolutely true, you just flat out admitted to cherry-picking sources to fit your narrative. You’re citing things that had no impact on writing your proposal but were added afterward in an attempt to legitimize your policy. And if the facts don’t support you, you look for better facts.
(I get that this happens far more often than any of us would be comfortable with, but the casual brazenness is just galling.)
Re:
‘We regularly use citations that haven’t been checked for accuracy or confirmed to be even remotely valid or supportive of our position’ is certainly an argument, just not the one they think it is.
This comment has been flagged by the community.
Re: Re:
Please convert to my anti-gaming cause, and read and help out in the comments under https://www.bestnettech.com/2024/10/30/vghf-libraries-lose-again-on-dmca-exemption-request-to-preserve-old-video-games/
Re: Re: Re:
How bout you go fuck off.
Re: Re: Re:
Roflmao the anti video game studies you are talking about are full of shit.
Re: Re: Re:2
Actually, the games included in the studies are far from shit.
Re: Re: Re:2
I think that might be the point of this satirical comment.
Re:
I came here to post this, but you’ve already done it, so please have an “insightful” vote.
Why the hell are they using full article citations for “placeholders” instead of something like “Nature article 1”? Something that someone doing an editor/proofing pass can easily spot as needing fixing?
Re:
Sorry, editors to proofread aren’t in the budget.
Re: Re:
They didn’t review my reply either.
Re: Re:
Too many shareholders whose pockets we have to fill.
Re:
Because they got as far as “LLMs might be able to replace interns” but not as far as “actually, we need people to fact-check LLMs” before they were called on it.
Of Course Lawmakers Use Fake Studies
First they use made-up sound bites to whip up the base. Then they pass legislation to deal with a non-existent problem. And then they get some flunky to find some studies to make it look good.
That seems like SOP for half the lawmakers out there.
Saying that ‘the computer did it’ is not a valid excuse.
It is along the same lines as the famous ‘My dog ate my homework’. You still get an incomplete.
I imagine this tactic will soon be employed by the politicians who lie compulsively.
Yeah, making up “placeholder” citations to replace with real studies doesn’t work all that great when there are no real studies to insert.
It’s weird that some BestNetTech writers act like AI is the greatest thing since sliced bread, or at least something that shouldn’t be banned/blocked by privacy bills, while other BestNetTech writers act like it’s the newest scam since bitcoin and that we need to beef up online privacy bills to prevent it from getting worse.
Re:
AI is a massive umbrella term that encompasses all sorts of things, from helping scientists find new drugs, to improving communication protocols, to helping sift through troves of data to find the pertinent information.
It’s the bullshit-generating generative AI that’s the problem, especially when it’s used by those in power to justify their own actions, despite knowing full well (or at least they should) that the garbage the machine spits out is precisely that.
Re:
It’s almost like tools aren’t necessarily pure good or pure evil, depending on their use or misuse.
Surely this isn’t all that difficult to understand, right?
Could you point to a story like this? Or is it possible — just possible — that the privacy bills you’re referring to just sucked, and adding “But AI!!1!” to them wouldn’t have made them better?
Re:
Citation needed.
At the end of the day, it’s a tool and like most tools it has positive and negative uses. Saying it’s an interesting and cool thing in one area doesn’t mean you support it replacing research assistants, for example.
Also, if you’re going to criticise “AI”, understand that’s a wide area, of which LLMs with a chatbot interface is only a small part, even if everyone’s fixated on ChatGPT right now.
Think of the children
What is this teaching them? That they don’t have to actually study or research anything, and that they can simply go with whatever preconceived notions they like just as long as they can get an AI to agree with them?
Just another fine example of America’s education system.
The real issue...
The real issue here isn’t even that it’s piss-poor, AI-generated slop that hasn’t been properly vetted. That’s AN issue for sure, but I think there’s a bigger one.
It’s the process that led to this AI slop being created in the first place. I think it went something like this:
1. Start with the conclusion. Phones bad.
2. Ask ChatGPT for justification for our conclusion.
3. ChatGPT does as it’s told, even if it has to hallucinate.
Now, most people who have gone through basic education know that it should look more like this:
1. Gather evidence.
2. Form a hypothesis.
3. Test it.
4. Draw conclusions from the data.
Yes, I hate the AI slop too, but let’s be honest here: this Commissioner would have made the exact same proposal without access to AI, because she had already decided what the truth was. She just used AI to lazily justify it. Without AI, it would have taken more work, but she would still have found a (dishonest) way of justifying it.
Re: I disagree about the real issue...
In my opinion, the real issue is reliance on authority.
The various parties relied on the presenter to be presenting the truth.
The fact that the citations (and the ‘facts’ relying on them) were ChatGPT hallucinations is only a technical detail. The presenter could have invented ‘studies’, links, and ‘results’ the old fashioned way – by making them up out of whole cloth – and the results would have been the same.
If you’re not going to check those assertions and references, you’re going to find yourself startled that your “price-locked for lifetime” deal was actually “for as long as the phone company wanted to keep the prices the same”. Or that those 6 months “interest free” simply meant that the interest held back for those 6 months would be added back in on month 7 if you hadn’t paid it off completely by then. Or that your dog or cat in Springfield is perfectly safe from marauding politicians.
It pays to check your facts.
After reading this, which I found interesting, I had a thought that raises a question. I’ll admit it’s a bit far-fetched, but it makes me wonder, so here’s my unusual question.
I’m wondering if the advocates pushing for bills like KOSA are using AI to make up fake poll numbers like “91% of people support KOSA” (which of course is total bullshit), using AI to push misinformation on how KOSA will protect children, and using stories about children dying or being injured because of TikTok, Facebook, etc. to claim that bills like KOSA would’ve prevented the children from doing stupid stuff, when the stories at times seem off and the harm could’ve been easily prevented with information/education instead.
I’ll admit my question seems a bit stupid and unusual, but the stupidest crap is happening more and more these days, and politicians and advocacy groups using AI to justify “think of the children” laws and making up bullshit wouldn’t surprise me anymore.
Re:
They don’t need to use AI to make up shit; they habitually make it up themselves on a daily basis. They could, though, if they wanted to, for some reason.
Re: Re:
True on that.
I know politicians use a lot of verbal bullshit to push for “think of the children” laws like KOSA, and usually bills like KOSA get blocked/thrown out in court due to First Amendment violations as well.
I’m so tired of politicians like Blumenthal and Blackburn who don’t understand the internet and come up with junk science and verbal bullshit to justify that the internet needs to be controlled to save the children.
The 20th Circuit is going to have a lot to say about this.
Uh. No. This is NOT how you do this. You do not use “placeholder” sources. Either you include the source as a citation because you have determined that it does in fact say what you claim it says, or you don’t include it at all. No good research process would ever allow “placeholder” sources (at least, I’m like 99 percent sure about this, and extremely sure that if someone tried this on, say, Wikipedia, the article would be very quickly challenged).
What about parents who will tell the school they are not going to make their kids obey a specific rule?
When I used to talk on CB radio, there was one woman who let her teenage son carry a CB walkie-talkie with him so he could contact her if needed, saying “don’t call me about my kids carrying CB walkie-talkies, I allow it.”
Just like my father’s friend who was very much a smokers rights advocate told the school “don’t call me if you see my kids smoking, I allow it”
A parent could tell the school they allow their kids to carry cell phones because they want to be able to contact their kids if needed.
Parents still have rights in this country.
Re:
Just like my father’s friend who was very much a smokers rights advocate told the school “don’t call me if you see my kids smoking, I allow it”
Never seen ‘being not just a dumbass but a terrible parent’ phrased as ‘smokers rights’, learn something new every day I guess.
Re:
Hey, if they aren’t on school property, more power to ’em, I guess.
Everyone else has rights, too. Weird how that works.
Re:
That is a weird take on the subject. Two of the things you listed amount to requiring direct contact with your kids even while they’re in class, even if that disrupts the wishes of other parents, and without involving the school itself. Which just seems creepy.
The other is advocating that children who aren’t legally allowed to buy cigarettes be allowed to smoke them on premises no matter the objections of other children, parents, or potentially law enforcement.
As for “rights”, other people have them too. Parents do indeed have rights – that includes the parents who don’t want your spawn lighting up illegally obtained things in front of their kids, for example.
That should be illeagal
The Alaska legislators should be brought up on perjury charges for crap like this.
Re:
Those poor sick eagles ..
“Source?”
“This was once revealed to the AI in a dream.”
Alaska’s Education Commissioner Deena Bishop tried to talk her way out of the story. She claimed that she had just used AI to help her “create the citations” for a “first draft” but that “she realized her error before the meeting and sent correct citations to board members.”
So, you just admitted to making everything up. “Place-filler citations”? Like lorem ipsum? But for facts that don’t exist?
Look, sweetie. You don’t make shit up then go looking for something to support your claims. You deal with the data, then reach conclusions.
Fook Board Against Education.