
U-16s social media ban: Protecting kids online shouldn’t mean killing privacy

As governments consider banning children from social media, critics say the focus should be on regulating tech companies and algorithms — not restricting young people’s digital rights, writes Andrea Horan.

ON MONDAY, THE UK voted against implementing a social media ban for under-16s. The vote drew the knee-jerk, hyper-moral reaction that always follows when children are involved – won’t somebody think of the children?

It is very difficult to remove emotion from the conversation when the safety and rights of children are involved, which is completely understandable. Most people want children to be safe and kept away from harm.

But it is important, too, to interrogate policies like this honestly, free from the emotional weight of instinct that comes with a blanket ban on children’s access to social media. We need to delve into solutions that will actually keep children safe in this digital age without compromising our privacy, online anonymity and security rights.

Calling for an outright ban on social media for children seems, on the face of it, the right thing to do. At this point, we are all aware of the much-documented issues with social media and living a digital life. Implementing a ban for under-16s feels like we’re at least doing *something*. We’re restricting their exposure to harmful material. Governments can say they’ve tried to protect children without actually having to define what constitutes harmful material or decide who gets to draw that line.

However, as we’ve seen consistently with prohibition, outright bans don’t work. In countries where similar bans have been introduced, searches for Virtual Private Networks (VPNs) have increased. When the Online Safety Act came into effect in the UK in July 2025, one provider, Proton VPN, reported that its signups surged by 1,400%, while VPN apps became the most downloaded on Apple’s App Store. There, the NSPCC children’s charity warned a ban could drive teenagers into unregulated corners of the internet.

Knee-jerk reaction

Closer to home, Children’s Rights Alliance Online Safety Coordinator Noeline Blackwell says a ban “punishes children for the fixable faults created by the tech giants by denying them the social engagement that is some of the best parts of social media.

“A ban does not provide the clubs, the youth workers, the safe playgrounds, the links to their families and friends in the offline world that might nourish and support children,” she says, warning that a ban could drive them to “socialise in secretive ways which predators use to groom children for sexual and financial abuse”.

Jeff Guenther, also known as popular online therapist TherapyJeff, said: “Kids who need the phones most – queer kids in unsupportive homes; kids with absent parents; kids who found their only community online – are the exact kids who get hurt most with blanket bans. The ones already on the margins are the ones who pay the price.

“Delaying access doesn’t magically give kids skills to handle it; you have to teach them, gradually, with guardrails. And with ongoing conversations about what is out there.”

Jeff Guenther, known as 'TherapyJeff', shares his views on smartphone use for kids. Instagram / therapyjeff

We have to accept that we live in a digital world and, instead of simply trying to delay access, admit that we need to make that world better. A huge number of tactics already exist in our arsenal that we aren’t implementing, and which could instantly reduce the need for as blunt a tool as a ban.

The Irish Council for Civil Liberties (ICCL) has already urged the government to focus on holding tech companies accountable rather than restricting the digital rights of young people.

They suggest doing this by:

  • Turning off social media algorithms by default. Social media companies use algorithms to drive engagement, and these push users, especially children and vulnerable users, towards increasingly extreme videos.
  • Implementing Section 30 of the Data Protection Act 2018. This section was never put into force, but it would make it illegal to target children with advertising or algorithms.
  • Putting in place the resources to meaningfully regulate online advertising, limiting how it targets both children and adults with ads that are non-transparent and often fraudulent and malevolent.

What the ICCL also raises as an issue is the proposed method for implementing the social media ban. Currently, the Office of Public Expenditure and Reform has said the proposal to deliver age verification would be through a new Irish Government Digital Wallet like MyGovID, where users can “store, manage, and share verified digital credentials issued by public-service bodies”, which amounts to state-run digital identity checks for internet users.

MyGovID

The database created by enrolling people in the MyGovID scheme – the legality of which has been in question for the past 15 years – was recently found to be in breach of privacy rules by the Data Protection Commission (DPC).

Olga Cronin, Human Rights and Surveillance Senior Policy Officer of ICCL, said: “For the State to consider expanding or relying further on the already-controversial MyGovID system, especially as it’s underpinned by an unlawful facial biometric database and is still under investigation by the DPC, is deeply irresponsible.

“For the State to team up with Big Tech as a means to dismantle online anonymity and identify internet users only compounds these issues.”

Last October, a hack of the messaging app Discord exposed 70,000 government-issued IDs that had been used for age verification. The hackers then leaked the sensitive data via a Telegram channel.

We already voluntarily hand over so much of our data to use online services. We have even started wearing trackers that record our health and fitness data, which some insurance companies now use for dynamic pricing based on continuous risk assessment.

Surely an online identification system that links our online usage to our PPS numbers, social welfare, tax and all government services is a step too far?

Who is in charge?

In Australia, since stricter age-verification checks were introduced – including facial recognition technology, digital IDs and credit card details – the use of VPNs has surged.

Aylo, the owner of PornHub, RedTube, YouPorn and Tube8, has blocked access to nudity and, in protest at the ban, is not currently accepting new account registrations from Australians.

A spokesperson for Aylo told the Sydney Morning Herald that Australia’s new approach “does not effectively protect minors, and instead creates harms relating to data privacy and exposure to illegal content on non-compliant platforms”.

Sex workers have been sounding the warning bells on surveillance and its potential for damage for decades. Olivia Snow, a research fellow at UCLA’s Center on Resilience & Digital Justice and author of the forthcoming book Canaries in the Coal Mine: Sex Work and Surveillance, says: “Sex workers are one of many test populations for surveillance technologies that ultimately seek to surveil the population at large.”

Billionaire Larry Ellison, Oracle co-founder and an ally, confidant and donor of Donald Trump, said: “Citizens will be on their best behaviour because we’re constantly recording and reporting everything that’s going on.” Already, body cams, doorbell cams and drones that lock onto your car are all feeding into Oracle, a private corporation.

So who gets to decide what defines best behaviour? What may be acceptable today may not be in the very near future. Accounts talking about women’s health, abortion, LGBTQ issues and trans rights are finding themselves shadow-banned online, and with government IDs introduced, what we’re browsing today might be off the cards tomorrow depending on who is deciding these acceptable behaviours.

For example, in a hidden-camera recording, Russ Vought, the Project 2025 co-author, explains that age verification laws are just a pretext to shut down porn sites.

He said: “We came up with an idea on pornography to make it so that the porn companies bear the liability for the underage use, as opposed to the person who visits the website getting to just certify” that they are of legal age.

Vought called this a “back door starting with the kids” and offered the age verification laws as an example of an “immediate fight leverage point that we can win” that sets up “the next fight.”

“We’d have a national ban on pornography if we could, right?” he added.

According to the Project 2025 foreword, all pornography “should be outlawed” and its producers “imprisoned”.

Bob Corn-Revere, chief counsel at the Foundation for Individual Rights and Expression in the US, says, “Using children as a way to get a ‘foot in the door’ has long been a strategy employed by those in anti-free speech movements”.

What’s the answer?

It’s easy to moralise about sex – so porn today, but what comes next? As the Heritage Foundation continues its assault on women’s rights, trans people, LGBTQ people and racial justice, who knows?

Carole Cadwalladr, the investigative journalist who exposed the Facebook–Cambridge Analytica data scandal, has previously said: “Act as if you are now living in East Germany and Meta/Facebook/Instagram/WhatsApp is the Stasi. It is.”

So, is there a solution that protects children whilst also protecting internet users’ privacy?

In December, Mashable spoke to several experts, all of whom mooted device-level filters as the solution.

A device-level filter puts the onus on a device’s operating system, rather than on an app, website or government database: a date of birth is requested once during set-up, and access is then restricted based on that setting.

This removes the requirement for any data, including biometric data, to be entered and tracked across platforms, and reduces both algorithmic surveillance and the sale of such data to the highest bidder.

Simon McGarr, lawyer and founder of The Gist, said in 2024, “My idea about what content may impair the moral development of minors” and that of others “may be quite different.

“We are verging close to revisiting Ireland’s history with the Censorship Board, where everything from Ulysses to Women’s Magazines could be declared immoral.”

It is imperative that we balance the rights of our children with privacy rights so that the internet, a tool that transformed our lives, doesn’t bring us back in time.

Andrea Horan is the founder of Dublin nail bar Tropical Popical and The Hunreal Issues, co-founder of No More Hotels, and co-presenter of the podcasts Don’t Stop Repealin’ and United Ireland.
