
Australia Completely Loses The Plot, Plans To Ban Kids From Watching YouTube

from the down-under-and-upside-down-policymaking dept

Last fall, heavily influenced by Jonathan Haidt’s extremely problematic book, Australia announced that it was banning social media for everyone under the age of 16. This was already a horrifically stupid idea—the kind of policy that sounds reasonable in a tabloid headline but crumbles under any serious scrutiny. Study after study has found that social media is neither good nor bad for most teens. It’s actively good for some—especially those in need of finding community or like-minded individuals. And it’s not so great for a small group of kids, though the evidence there suggests it’s worst for those dealing with untreated mental health issues, who turn to social media as a substitute for actual help.

There remains little to no actual evidence that an outright ban will be helpful, and plenty to suggest it will be actively harmful to many.

But now Australia has decided to double down on the stupid, announcing that YouTube will be included in the ban. This escalation reveals just how disconnected from reality this entire policy framework has become. We’ve gone from “maybe we should protect kids from social media” to “let’s ban children from accessing one of the world’s largest repositories of educational content.”

Australia said on Wednesday it will add YouTube to sites covered by its world-first ban on social media for teenagers, reversing an earlier decision to exempt the Alphabet-owned video-sharing site and potentially setting up a legal challenge.

The decision came after the internet regulator urged the government last week to overturn the YouTube carve-out, citing a survey that found 37% of minors reported harmful content on the site.

This is painfully stupid and ignorant. The claim that 37% of minors reported seeing harmful content is also meaningless without a lot more context and detail. What counts as “harmful”? A swear word? Political content their parents disagree with? A video explaining evolution? What was the impact? Was this entirely self-reported? What controls were there? A bare 37% tells you almost nothing.

This is vibes-based policymaking dressed up in statistics. You could probably get 37% of kids to report “harmful content” on PBS Kids if you asked them vaguely enough. The fact that Australia’s internet regulator is using this kind of methodological garbage to reshape internet policy tells you everything you need to know about how seriously they’ve thought this through.

But also, YouTube is not just effectively the equivalent of television for teens today—it’s often far superior to traditional television because it’s not gatekept by media conglomerates with their own agendas. The idea that you should need to be 16 years old to watch some YouTube programs is beyond laughable, especially given the amount of useful educational content on YouTube. These days there are things like Complexly, Khan Academy, Mark Rober, and plenty of other educational content that kids love and which lives on YouTube. Kids are learning calculus from 3Blue1Brown, exploring history through Crash Course, and getting better science education from YouTube creators than from most traditional textbooks. This isn’t just entertainment—it’s democratized education that bypasses the gatekeeping of traditional media entirely.

This isn’t just unworkable—it’s the construction of a massive censorship infrastructure that will inevitably be used for purposes far beyond “protecting children.” Once you’ve built the system to block kids from YouTube, you’ve built the system to block anyone from anything. And that system will be irresistible to future governments with different ideas about what content people need to be “protected” from.

And the Australian government already knows that age verification tech is a privacy and security nightmare. They admitted as much two years ago.

Of course, kids will figure out ways around it anyway. VPNs exist. Older friends exist. Parents who aren’t idiots exist—and they’ll help their kids break this law. The only thing this accomplishes is teaching an entire generation that their government’s laws are arbitrary, unenforceable, and fundamentally disconnected from reality. It’s teaching kids to have less respect for government.

This isn’t happening in a vacuum, either. Australia is part of a broader global trend of governments using “protect the children” rhetoric as cover for internet control. The UK’s porn age verification disaster, the US Kids Online Safety Act, similar proposals across Europe—they all follow the same playbook. Identify a genuine concern (kids sometimes see stuff online that isn’t great for them), propose a solution that sounds reasonable in a headline (age limits!), then implement it through surveillance and censorship infrastructure that can be repurposed for whatever moral panic comes next.

The end result will be that Australia has basically taught a generation of teenagers not to trust the government, that their internet regulators are completely out of touch, and that laws are stupid. But it goes deeper than that. This kind of blatantly unworkable policy doesn’t just breed contempt for specific laws—it undermines the entire concept of legitimate governance. When laws are this obviously disconnected from technological and social reality, it signals that the people making them either don’t understand what they’re regulating or don’t care about whether their policies actually work. It’s difficult to see how that benefits anyone at all.

Companies: youtube


Comments on “Australia Completely Loses The Plot, Plans To Ban Kids From Watching YouTube”

45 Comments

This comment has been flagged by the community.

Anonymous Coward says:

What counts as “harmful”? A swear word? Political content their parents disagree with? A video explaining evolution? What was the impact?

The question not really asked is “who, if anyone, was actually harmed?”

None of the shit listed above causes harm any more than hearing a knight say “ni” (unless, of course, it interrupts sleep or has a volume above about 85 dBA). But I was actually harmed by school: at least one bullying-related physical injury, and regular nightmares about assignments and exams until about a decade after graduation. I think the number of people affected in such ways would be well above 37%—that’s one of the most common nightmare themes, for example, repeated in countless TV shows.

For that matter, why is TV not included in the ban? I wonder if this is a stealth handout to prop up the linear-TV-related businesses, which would otherwise be mostly unknown to the young people. Not that it’ll work; I and my siblings knew our parents’ passwords long before we had any of our own. A lot of parents today are probably not even logging out of web sites when walking away from their computers.

Miles Archer says:

73.6% of statistics...

I don’t have the exact number, but research* shows that 73.6% of statistics were made up on the spot.

I’m amused that this statistic is 37% not 73% – they likely come from the same source.

  • Several web sites quote this number. These web sites are extremely accurate and dependable, like everything else you can find on the internet.
Jason says:

Re:

Firstly, in the proposal YouTube is not banned. The kids just can’t open an account. I’m assuming this is because currently vile content has just a tick box asking if you are over 18. The kids can still access videos that aren’t flagged over 18. Please use your right to free speech with more thoughtfulness and dedication in the future, so we don’t give governments the OK to suppress it in the name of those who are too lazy to take responsibility… same reason we have road laws and other laws, because unfettered freedom causes chaos and is a pipe dream… Thanks for your attention and have a nice day.👍

Anonymous Coward says:

People will just sign in and let kids use their accounts. Even the UK does not consider a block on YouTube to be a good idea. I have never seen any barred content on YouTube. Also, do parents not have some role in controlling content? There’s more kids’ content on YouTube, cartoons and education, than on any TV channel.

Dad of 4 says:

Re:

Most of the kids from my kid’s school had their parents set up their accounts, so they have adult accounts already. The same goes for Roblox and such. This isn’t because they want their kids to have adult accounts; it’s just that most are tech-illiterate and don’t really know, or care, how anything works, so they haven’t bothered delving deep enough to do it differently. Some just block YT entirely, as they don’t know about parental filters, so those kids just use friends’ accounts. Either way, it’s frustrating for my kids, who can’t view or play the same as their friends.

The people making these laws are entirely oblivious, or they only care about control. Otherwise, they must not have ever had any involvement in parenting.

Anonymous Coward says:

Re:

I wonder how familiar American kids are with PBS, anyway. Are 37% even aware of it? Cable TV is an old-person thing, and antennas are a thing rumored to have been used in the days of their grandparents or maybe great-grandparents. But perhaps PBS puts stuff online (and let’s hope not just on Youtube).

alister (profile) says:

It's not a ban - it's even stupider than that

It’s not a ban on using YouTube or other social media services. It’s stupider than that; it’s a ban on making an account. See Crikey for an explainer. Today, a child can have an account that their parent/s can also see. That account can be steered away from harmful content. Once the ban goes into place, the child is at the whims of YouTube’s algorithm, with no real ability to govern what they see. It’s not just that this law is stupid – which it is – it’s that it’s actively counterproductive.

Anonymous Coward says:

Re:

Wait, is that all this is? I was under the impression that what you describe was the original plan, and this is a change in the plan.

Today, a child can have an account that their parent/s can also see. That account can be steered away from harmful content.

This statement presumes that children can be harmed by watching videos. It’s a common hypothesis often treated as fact, despite the lack of any evidence.

alister (profile) says:

Re: Re:

This statement presumes that children can be harmed by watching videos. It’s a common hypothesis often treated as fact, despite the lack of any evidence.

My life is made much easier if I can be confident my children aren’t being radicalised by MRAs or TERFs. I can hear what they’re watching now – I think parents should actually parent – but I think it’s a stretch to say there’s no evidence to suggest children can be harmed by watching videos. I wouldn’t let a five year old watch Nightmare on Elm Street, and I have directly observed harm from videos, if only in terms of nightmares and fear of being left alone.

Anonymous Coward says:

Re: Re: Re:

I wouldn’t let a five year old watch Nightmare on Elm Street

That was the age I was when I first saw the bed-blender scene from the original movie, and from the way the blood dripped from a pool that hung impossibly from the ceiling, I knew it wasn’t real (my mechanical savantism started kicking in around that age). Because of that, I didn’t have any nightmares from it either. I’m not saying that Nightmare on Elm Street is never harmful to kids, but I do think how harmful it is depends on the individual kid.

Anonymous Coward says:

Re: Re: Re:

if only in terms of nightmares

Sure, but school causes those too (quite famously). And that’s sometimes considered post-traumatic stress disorder, whereas nightmares in general are not considered harmful unless they’re so frequent and intense as to make people sleep-deprived.

Are videos more likely to “radicalise” kids than books, video games, radio dramas, and such? And is it actually reasonable for you to be “confident” that they’re not seeing particular videos? As other commenters wrote, they’ll see stuff from friends, via the accounts of parents, or just while not logged in.

Arianity (profile) says:

There remains little to no actual evidence that an outright ban will be helpful, and plenty to suggest it will be actively harmful to many.

Well, we’re about to get a whole bunch of data in one direction or another.

The claim that 37% of minors reported seeing harmful content is also… meaningless without a lot more context and details. What counts as “harmful”? A swear word? Political content their parents disagree with? A video explaining evolution? What was the impact? Is this entirely self-reported? What controls were there? Just saying 37% is kind of meaningless without the details.

The website is kind of a mess, with half the links not working, but there is an actual survey with methodology they’re referencing. See e.g. https://web.archive.org/web/20250708201126/https://www.esafety.gov.au/research/the-online-experiences-of-children-in-australia

From what I can find, ‘content associated with harm’ includes such things as sexist, misogynistic or hateful content, content depicting dangerous online challenges or fight videos, or content that encourages unhealthy eating or exercise habits. link

you’ve built the system to block anyone from anything. And that system will be irresistible to future governments with different ideas about what content people need to be “protected” from.

Ignoring the slippery slope part for a moment, those systems already exist. To the extent that anything is new, it’s the potential for tracking.

Andrew Johnston (user link) says:

Re:

Here’s the report:

https://www.esafety.gov.au/sites/default/files/2025-07/Digital-use-and-risk-Online-platform-engagement-10-to-15.pdf?v=1754539552676

Their categories of “content associated with harm” are as follows:

Offensive, sexist or hurtful things about girls or women
Fight videos
Dangerous online challenges
Things that encourage unhealthy eating or exercise habits
Offensive or threatening things about other people online because others are hateful of their identity
Sexual images or videos
Things that show or encourage illegal drug taking
Extreme real-life violence
Things that suggest how a person can hurt or kill themselves on purpose
Violent sexual images or videos
Something else upsetting

And the methodology is here:

https://www.esafety.gov.au/sites/default/files/2025-05/Keeping-kids-safe-online-methodology.pdf?v=1754540342575

It took about five minutes to find this. It really feels like Masnick could have at least tried to look this up before going to the juvenile hyperbole.

Arianity (profile) says:

Re: Re:

Thanks! Can confirm, those links work for me. (It seems I may have just happened to search while they were doing maintenance on Aussie hours. Links that previously broke on me are working fine now when reopened.)

And yeah, this was pretty disappointing from Mike. It’s very clear from context she’s referencing something, and the article he’s linking to is from her spoken comments, so you can’t even ding her for not linking it or whatever.

Anonymous Coward says:

Re: Re:

…juvenile hyperbole.

What “juvenile hyperbole”? Included in the list you linked to is the category “Something else upsetting”, so like Mike, I can easily imagine a website about spiders being falsely categorized as “harmful to children” and blocked under this law because a parent of a child with arachnophobia reported it, leading to the preventable deaths of people bitten by actinopodidae because online information about the best treatment for trapdoor spider venom is no longer available in Australia. Might that be the “juvenile hyperbole” you refer to?

Tdestroyer209 says:

Agree with what you said in the article Mike.

Here’s where the age verification shit in Australia gets more ridiculous: Collective Shout, aka the group that’s been pissing off gamers across the globe with their actions as of late.

Well, it turns out several of their members are affiliated with the Australian government, and those members are involved in the age verification crap for search engines in Australia.

First the UK doing its dumb age verification crap that is failing, and soon Australia is going to have a similar situation, especially since the members of Collective Shout don’t seem to be very tech literate either.

Ugh I wish the age verification stupidity would just screw off.
(Starts banging head at a nearby wall and screaming profanities non stop)

Anonymous Coward says:

We need dedicated devices that are sold for child users and identify themselves as child devices to web servers.
Parents can decide whether they give their children a child device or an adult device.
No other system will work; this at least has a chance because it’s built in to the hardware. Then if your child is exposed to harmful content, give them an adult device, or the website is clearly breaking the law.
Computers don’t know who is using them.

Ben (profile) says:

Re: client side validation

What you’re suggesting is that sites will have to take the word of the connecting device that it is a ‘child device’. This is a known anti-pattern – web developers have long known that it is stupid in the extreme to trust validation on the client side. One should always validate on the server side.

And once you believe the device that’s connected, you’ll believe anything… fancy this bridge I’ve got for sale?
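To make the anti-pattern concrete, here’s a minimal sketch of what trusting a self-declared device type looks like. (The `X-Device-Type` header name and the function are hypothetical, invented for illustration; no real protocol works this way.)

```python
# Hypothetical sketch: a server that trusts the client's self-declared
# device type. The "X-Device-Type" header name is made up for illustration.

def server_allows_adult_content(request_headers: dict) -> bool:
    """Naively trust whatever device type the client claims to be."""
    return request_headers.get("X-Device-Type") != "child"

# An honest child device declares itself:
honest = {"X-Device-Type": "child"}
print(server_allows_adult_content(honest))   # False

# But any client controls its own headers, so a kid with a script, a
# VPN, or a tweaked browser can simply claim to be an adult device:
spoofed = {"X-Device-Type": "adult"}
print(server_allows_adult_content(spoofed))  # True: the "ban" is bypassed
```

The only way around this is validating server-side against something the client can’t forge, which is exactly where all the age-verification privacy and security problems come from.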

Anonymous Coward says:

Re: Re:

web developers have long known that it is stupid in the extreme to trust validation on the client side.

That’s true if the developers care about the validation. But all they care about in this case—which is already “stupid in the extreme”—is staying out of trouble with the government. Doing the absolute minimum that the law requires is best for everyone.

Lizzie O'Shea says:

Thank you for bringing this to the attention of your readers. For the record, this policy has been extremely controversial and resisted by civil society (including my org, Digital Rights Watch). It is popular, but tenuously so, with the public. People instinctively support it, I would argue for good reasons (they worry about the harmful aspects of social media), but remain sceptical about the possibility it will work, and worried about privacy and security. (This is the govt’s own poll on the topic.) Just for completeness: there are sensible voices and ideas over here, they just don’t always get picked up by policy makers.
