
When The Internet Grew Up — And Locked Out Its Kids

from the taking-the-lazy-way-out dept

In December 2025, the world crossed a threshold. For the first time ever, access to the major social media platforms was no longer guaranteed by interest, connection, or curiosity — but by a birth date. A new law in Australia decrees that people under 16 may no longer legally hold accounts on major social-media services. What began as parental warnings and optional “age checks” has transformed into something more fundamental: a formal re-engineering of the Internet’s social contract — one increasingly premised on the assumption that young people’s participation in networked spaces is presumptively risky rather than conditionally beneficial.

Australia’s law demands that big platforms block any user under 16 from having an account, or face fines nearing A$50 million. Platforms must take “reasonable steps” — and many will rely on ID checks, biometric checks, or algorithmic age verification rather than self-declared ages, which are easily falsified. The law took effect on December 10, 2025; by that date, major platforms were expected to have purged under-16 accounts or face consequences.

It’s not just Australia. In Europe, the European Parliament has proposed sweeping changes to the digital lives of minors across the Union. In late November 2025, MEPs voted overwhelmingly in favor of a non-binding resolution that would make 16 the default minimum age to access social media, video-sharing platforms and even AI-powered assistants. Access for 13–15-year-olds would remain possible, but only with parental consent.

The push is part of a broader EU effort. The Commission is working on a harmonised “age-verification blueprint app,” designed to let users prove they are old enough without revealing more personal data than necessary. The tool might become part of a future EU-wide “digital identity wallet.” Its aim: prevent minors from wandering into corners of the web designed without their safety in mind. 
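
The technical shape of such a tool is easy to sketch. The snippet below is a minimal illustration under assumptions of its own (the issuer, claim format and key handling are hypothetical, and the EU blueprint may well differ): a trusted issuer attests a single “over 16” bit, and the platform verifies that attestation without ever seeing a name or a birth date.

```python
# Hypothetical sketch, not the EU blueprint's actual design: a trusted
# issuer signs a bare "over 16" claim; the platform checks the signature
# and learns one bit, never a name, ID number, or date of birth.
# A real deployment would use public-key signatures and token expiry;
# HMAC with a shared secret keeps this example stdlib-only.
import hashlib
import hmac
import json

ISSUER_SECRET = b"demo-secret"  # stand-in for the issuer's signing key

def issue_age_token(user_is_over_16: bool) -> dict:
    """Issuer side: attest only the boolean claim, nothing else."""
    claim = json.dumps({"age_over_16": user_is_over_16}).encode()
    tag = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def platform_accepts(token: dict) -> bool:
    """Platform side: verify authenticity, then read the single claim."""
    expected = hmac.new(ISSUER_SECRET, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return json.loads(token["claim"])["age_over_16"]

print(platform_accepts(issue_age_token(user_is_over_16=True)))  # True
```

The data-minimisation property is the point: done this way, proving “old enough” discloses one bit, not an identity document.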

Several EU member states are already acting. Denmark, for example, has proposed banning social media for under-15s unless parental consent is granted; others — including France, Spain and Greece — back an EU-wide “digital majority” threshold to shield minors from harmful content, addiction and privacy violations.

The harm narrative – and its limits

The effectiveness of these measures remains uncertain, and the underlying evidence is more mixed than public debate often suggests. Much of the current regulatory momentum reflects heightened concern about potential harms, informed by studies and reports indicating that some young people experience negative effects in some digital contexts — including anxiety, sleep disruption, cyberbullying, distorted self-image, and attention difficulties. These findings are important, but they do not point to uniform or inevitable outcomes. Across the research, effects vary widely by individual, platform, feature, intensity of use, and social context, with many young people reporting neutral or even positive experiences. The strongest evidence, taken as a whole, does not support the claim that social media is inherently harmful to children; rather, it points to clustered risks associated with specific combinations of vulnerability, design, and use.

European lawmakers point to studies indicating that one in four minors displays “problematic” or “dysfunctional” smartphone use. But framing these findings as proof of universal addiction risks collapsing a complex behavioral spectrum into a single moral diagnosis — one that may obscure more than it clarifies.

From the outside, the rationale feels compelling: we would never leave 13-year-olds unattended in a bar or a casino, so why leave them alone in an attention economy designed to capture and exploit their vulnerabilities? Yet this comparison quietly imports an assumption — that social media is analogous to inherently harmful adult-only environments — rather than to infrastructure whose effects depend heavily on design, governance, norms, and support.

What gets lost when we generalize harm

When harm is treated as universal, the response almost inevitably becomes universal exclusion. Nuance collapses. Differences between children — in temperament, resilience, social context, family support, identity, and need — are flattened into a single risk profile.

The Internet, however, was never meant to serve a single type of user. Its power came from universality — from its ability to give voice to the otherwise voiceless: shy kids, marginalized youth, LGBTQ+ children, rural teenagers, creative outsiders, identity seekers, those who feel alone. For many young people, social media platforms are not simply entertainment. They are places of learning, authorship, peer support, political awakening, and cultural participation. They are where teens practice argument, humor, creativity, solidarity, dissent — often more freely than in offline institutions that are tightly supervised, hierarchical, or unwelcoming.

When policymakers speak about children online primarily through the language of damage, they risk erasing these positive and formative uses. The child becomes framed not as an emerging citizen, but as a passive object of protection — someone to be shielded rather than supported, managed rather than empowered.

This framing matters because it shapes solutions. If social media is assumed to be broadly toxic, then the only responsible response appears to be removal. But if harm is uneven and situational, then exclusion becomes a blunt instrument — one that protects some children while actively disadvantaging others.

Marginalized and vulnerable youth are often the first to feel this loss. LGBTQ+ teens, for example, disproportionately report finding affirmation, language, and community online long before they encounter it offline. Young people in rural areas or restrictive households rely on digital spaces for exposure to ideas, mentors, and peers they cannot access locally. For these users, access is not a luxury — it is infrastructure.

Generalized harm narratives also obscure agency. They imply that young people are uniquely incapable of learning norms, developing judgment, or negotiating risk online — despite doing so, imperfectly but meaningfully, in every other social domain. This assumption can become self-fulfilling: if teens are denied the chance to practice digital citizenship, they are less prepared when access finally arrives. Treating youth presence online as a problem to be solved — rather than a reality to be shaped — risks turning protection into erasure. When the gate is slammed shut, a lot more than TikTok updates are lost: skills, social ties, civic voice, cultural fluency, and the slow, necessary process of learning how to exist in public.

As these policies spread from Australia to Europe — and potentially beyond — we face a world in which digital citizenship is awarded not by curiosity or contribution, but by age and identity verification. The Internet shifts from a public square to a credential-gated club.

Three futures for a youth-shaped Internet

What might this reshaping look like in practice? There are three broad futures that could emerge, depending on how regulators, platforms and civil society act.

1. The Hard-Gate Era

In the first future, exclusion becomes the primary safety mechanism. More countries adopt strict minimum-age laws. Platforms build age-verification gates based on government IDs or biometric systems. This model treats youth access itself as the hazard — rather than interrogating which platform designs, incentive structures, and governance failures generate harm.

The social cost is high. Marginalized young people may lose access to vital communities, and the Internet becomes something young people consume only with permission — not something they help shape.

2. The Hybrid Redesign Era

In a second future, regulatory pressure triggers transformation rather than exclusion. Age gates are narrow and specific. Platforms are forced to redesign for youth safety. Crucially, this approach assumes that harm is contingent, not inherent — and therefore preventable through design.

Infinite scroll and autoplay may be disabled by default for minors. Algorithmic amplification might be limited or made transparent. Data harvesting and targeted advertising could be curtailed, privacy defaults strengthened, and friction added where needed.
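
What such a redesign could look like in practice is simple to sketch. The snippet below is purely illustrative; the settings and their names are assumptions of this sketch, not any real platform’s configuration.

```python
# Illustrative sketch only: hypothetical feed settings showing how the
# defaults named above could be keyed to account age, so that being a
# minor changes the defaults rather than the right of entry.
from dataclasses import dataclass

@dataclass
class FeedSettings:
    infinite_scroll: bool
    autoplay: bool
    engagement_ranking: bool  # algorithmic amplification on/off
    targeted_ads: bool
    private_by_default: bool

def default_settings(is_minor: bool) -> FeedSettings:
    if is_minor:
        # Harm treated as contingent, not inherent: participation stays,
        # but the engagement-maximising features are off by default.
        return FeedSettings(infinite_scroll=False, autoplay=False,
                            engagement_ranking=False, targeted_ads=False,
                            private_by_default=True)
    return FeedSettings(infinite_scroll=True, autoplay=True,
                        engagement_ranking=True, targeted_ads=True,
                        private_by_default=False)

print(default_settings(is_minor=True))
```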

Here, minors remain participants in the public sphere — but within environments engineered to reduce exploitation rather than maximize engagement at any cost.

3. The Parallel Internet Era

In the third future, bans fail to eliminate demand. Underage users migrate to obscure platforms beyond regulatory reach. This outcome highlights a central flaw in the “inherent harm” narrative: when access is blocked rather than improved, risk does not disappear — it relocates.

The harder question

There is real urgency behind these debates. Some children are struggling online. Some platform practices are demonstrably irresponsible. Some business models reward excess and compulsion. But if our response treats social media itself as the toxin — rather than asking who is harmed, how, and under what conditions — we risk replacing nuanced care with blunt control.

A digital childhood can be safer without being silent, protected without being excluded, and supported without being stripped of voice.

The question is not whether children should be online. It is whether we are willing to do the harder work: redesigning systems, reshaping incentives, and offering targeted support — instead of declaring an entire generation too fragile for the public square.

Konstantinos Komaitis is Resident Senior Fellow, Democracy and Tech Initiative, Atlantic Council


Comments on “When The Internet Grew Up — And Locked Out Its Kids”

This comment has been flagged by the community.

Anonymous Coward says:

There is no known cure for stupidity, but electing stupids makes us all stupid. Next step: you will need to pass a test and a background check to obtain a license that allows you to own a computer, but you need an advanced license to own a fast computer. Licenses are valid for only two years; after that, you must requalify. Then we limit the number of search requests you’re allowed in a day/week/month/year.

Anonymous Coward says:

I think it’s gonna be a mix of 2 and 3.

The broader trend (driven mostly by big tech companies wanting to retain a larger user base) is to keep users of all ages but attempt to thread the needle via “safety by design”.

This will inevitably lead younger users who are suffering into more obscure corners of the internet where they feel they can express themselves; thus harm is not eliminated, just swept under the rug.

VJGoh says:

Crucial context missing

I saw a comment left by an Australian about this on a different site, and they pointed out a crucial bit of context that you’re also missing here: this applies to social media apps/sites that are governed by an algorithm. TikTok and Facebook and Instagram feed you content based on an algorithm, and the algorithm is tuned for engagement and nothing else.

Youth in Australia are still allowed to use the internet. They’re still allowed to chat on Discord and play in Roblox. This entire article seems predicated on the assumption that the whole internet is cut off from them, but it’s not. They can still watch YouTube, they just can’t log in and have the algorithm deliver suggestions to them, or engage with strangers in the comments.

To be sure, this is not perfect. If nothing else, the ability to circumvent all of this is pretty trivial. But this isn’t the catastrophe that you’re making it out to be. You don’t even offer any particularly good alternatives in the end, just more questions. They’re fine questions, but we’ve already got plenty of those. To boldly declare that we’ve tried nothing and we’re all out of ideas simply isn’t good enough anymore.

Anonymous Coward says:

Re:

Then these people need to learn what an algorithm is. Sorting by date is an algorithm. Random sort is an algorithm. Web search is an algorithm. The fucking Dewey Decimal System is an algorithm.

“The [evil] algorithm” may not deliver suggestions to the kids, but an algorithm does. Unless there are no suggestions, no searching for things, and no way to actually do anything. You can’t just ban algorithms and expect everything to work itself out.
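
To make that distinction concrete, here is a toy sketch (hypothetical posts, nobody’s actual ranking code): both feeds below are produced by “an algorithm”; the difference lies only in what each one optimises for.

```python
# Toy sketch with made-up data: chronological ordering and engagement
# ranking are both "algorithms". The policy debate is really about the
# second kind, which is tuned to maximise engagement.
posts = [
    {"text": "friend's update", "posted_at": 1, "engagement": 12},
    {"text": "rage bait", "posted_at": 2, "engagement": 999_999},
    {"text": "cat video", "posted_at": 3, "engagement": 9_000},
]

chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)
engagement_ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["text"] for p in chronological])      # newest first
print([p["text"] for p in engagement_ranked])  # rage bait first
```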

Rocky (profile) says:

Re: Re: Re:

When writing laws, one shouldn’t use colloquialisms; one should instead use well-defined terminology to avoid ambiguity and application creep, which often lead to interpretation issues, “unintentional” use, collateral damage and protracted legal wrangling.

Using colloquialisms and vague language in laws generally means that either the writers of the laws are bad at it or it is done intentionally for various reasons – most of them not good.

Try again?

Anonymous Coward says:

Re: Re: Re:2

Disingenuous much? The ridiculous and outdated American standards of allowing internet predation are increasingly being proven irrelevant, finally. These companies cause mass social harm and feed addiction. A clear collective action problem calls for simple solutions; parents cannot solve this, only simple rules can.

Drew Wilson (user link) says:

Re:

To be sure, this is not perfect. If nothing else, the ability to circumvent all of this is pretty trivial. But this isn’t the catastrophe that you’re making it out to be.

The technology is broken: it regularly fails to detect people who are under age, already has a history of getting hacked and having people’s personal identities, facial recognition scans, and government IDs stolen, and makes children worse off, as they are now taught to use different platforms or hide the fact that they are under age. All of this while governments around the world increasingly morph this into an effort to track the day-to-day movements of everyday citizens who aren’t even suspected of committing a crime (re: the UK’s renewed effort for a country-wide Digital ID system). How, exactly, is failing in every way imaginable and making everything significantly worse not a “catastrophe”?

You don’t even offer any particularly good alternatives in the end, just more questions.

That’s easy: the alternative is to scrap age verification efforts altogether. No more easy access to massive troves of personal information for cyber-criminals thanks to these laws.

To boldly declare that we’ve tried nothing and we’re all out of ideas simply isn’t good enough anymore.

Every accusation is a confession. This is exactly what those pushing age verification laws are guilty of doing. Did they offer counselling for people dealing with problems? Nope. Did they revamp privacy laws? Nope. Are they offering information courses? Nope. Are they giving private sector tools to parents to help them protect their children online? Nope. Are they doing anything to protect LGBTQ+ communities online? Nope. Age verification lobbyists responded to what they saw and said, “we tried nothing, and we’re all out of ideas, time to demand age verification laws”. As you said, this “isn’t good enough”.

Anonymous Coward says:

Re:

They can still watch YouTube, they just can’t log in and have the algorithm deliver suggestions to them, or engage with strangers in the comments.

Have you ever fucking tried that? I dare you. I double-dog dare you. You think there is no algorithm because you’re not signed in? Do you know what an algorithm is? Have you ever sorted anything alphabetically?

Sure, the first thing I’d want from social media is a bunch of random shit, rather than posts from my friends and the people or organizations I follow for a reason. That would be great.

VJGoh says:

Re: Re:

  1. “Have you ever tried that?”: Yes. I do not log into YouTube on my work machine very deliberately. I still use it. I still find and watch videos. It’s not terribly difficult.
  2. “Do you know what an algorithm is?” I have a degree in computing science. Do YOU know what an algorithm is? I’ve been a professional programmer for my entire career. I’ve probably worked on games that you’ve played.

Let’s stop playing dumb here, shall we? Algorithmic delivery of content is the critical issue here, and we all know it. To talk about sorting alphabetically is comically reductive, beside the point, and the kind of diversion I would expect from an audience much less sophisticated than the people here. That is: you know better, stop playing like you don’t. I don’t believe for a second that you think I’m talking about sorting videos, and moreover, I ALSO don’t think you think the Australian law is about sorting videos. It’s about the stuff that YouTube does that leads kids down alt-right pipelines, that delivers misogynist content, the kind of stuff that tells them that they’re too fat or too ugly or not good enough.

We know perfectly well that the algorithms I’m talking about are tuned for engagement, and ONLY engagement. Not positive engagement, ANY engagement. We ALSO know that making people mad is the surest way to catch their attention, so that’s what they do. We’ve all seen it.

So I’d appreciate it if everyone reading this thread (not just you) stops strawmanning this whole affair. I’m not talking about fucking bubblesort and you all know it.

If you don’t like this solution, OFFER SOMETHING BETTER. I’ve read literally no solutions at all here, just “not like that” over and over again. I’m not married to this solution if someone serves up a better one. I’m not actually that excited about age verification either because it makes EVERYONE’S life harder, including mine. But just because the problem is hard and the solutions are incomplete doesn’t mean we get to fiddle while Rome burns.

Strawb (profile) says:

Re:

I saw a comment left by an Australian about this on a different site, and they pointed out a crucial bit of context that you’re also missing here: this applies to social media apps/sites that are governed by an algorithm.

So…all of them. Even if a user sets their feed to Newest/Chronological, that’s still an algorithm.

Youth in Australia are still allowed to use the internet. They’re still allowed to chat on Discord and play in Roblox.

For now, at least. From what I can find, a lot of people are suggesting adding Roblox and Discord to the ban list.

You don’t even offer any particularly good alternatives in the end, just more questions.

Criticising something doesn’t mean that one needs to come up with alternatives.

To boldly declare that we’ve tried nothing and we’re all out of ideas simply isn’t good enough anymore.

So any port in a storm, eh? That’s what this ban is: something had to be done about kids on social media, and the politicians did something.

Arianity (profile) says:

we would never leave 13-year-olds unattended in a bar or a casino, so why leave them alone in an attention economy designed to capture and exploit their vulnerabilities? Yet this comparison quietly imports an assumption — that social media is analogous to inherently harmful adult-only environments

Are casinos “inherently” harmful? I would say it’s more that some people are vulnerable to them, and children more so due to having less developed self-control. I wouldn’t call it uniform or inevitable.

Arianity (profile) says:

Re: Re:

Eh, I mean most businesses are trying to do that, right? If you look at it like an entertainment expense, paying money for an experience isn’t inherently harmful. It’s not so different from a night out at Disneyland or the movies; they’re trying to bilk you too. Where it becomes a problem is when it intentionally taps into addictive behaviors or a lack of self-control. And gambling gets its hooks into people a lot more than Disneyland.

I’ve never been a gambling guy because I hate knowing it’s rigged, but I have friends that treat it like a night out. They bring $100 once a month, and once it’s gone, night’s over. If they win, that’s a bonus. But then you get that one guy who can’t make rent because he blew it all on payday looking for his next hit.

cls says:

not a moral panic at all!

All of this banning the youth is just cover for sports betting! Sports betting is huge in Aus, and everywhere, these days.

Folks were concerned that kids were involved, and sports betting interests didn’t want to give up anything, so they played hardball and invented “think of the children” to keep the underage out – of everything.

Anonymous Coward says:

When harm is treated as universal, the response almost inevitably becomes universal exclusion. Nuance collapses. Differences between children — in temperament, resilience, social context, family support, identity, and need — are flattened into a single risk profile.

This is the key point in the article IMO.

Not a single person who claims to be “reducing harm” to kids by creating age-gating/censorship policies ever bothers to give any serious measure of the “harm” they claim to be preventing.

The cure should not be worse than the disease.
