
The Committee heard that children are particularly vulnerable to AI systems online. Alamy Stock Photo

AI algorithms pushing content 'romanticising' suicide to children, Oireachtas Committee hears

The inclusion of children’s protections in AI policy has often been an afterthought, according to one contributor.

SOCIAL MEDIA ALGORITHMS are pushing suicide-related content to distressed children online, the Oireachtas committee for children has heard.

In a meeting of the committee this afternoon, the panel of TDs and senators heard from various experts about the need to protect children from the effects of Artificial Intelligence (AI) online, which pushes harmful content aimed at vulnerable individuals and offers cybercriminals new tools for exploitation and extortion.

The Irish Council for Civil Liberties (ICCL) was among the contributors to the meeting and described how AI-driven algorithms called recommender systems provide personalised social media feeds designed to provoke and addict users by pushing more and more extreme content, including videos that romanticise suicide.

“Over the past year, we’ve seen new AI features being rolled out and into the hands of children with little thought to the consequences,” said Clare Daly of Cyber Safe Kids.  

‘Glamourising suicide’ 

As part of its submission to the Committee, the ICCL quoted from a story shared by the campaign community Uplift, in which a contributor described how their niece died by suicide after being exposed to AI-curated content that romanticised it.

An investigation by Amnesty International revealed how this happens, the ICCL said.

“Just one hour after their researchers started a TikTok account posing as a 13-year-old child who views mental health content, TikTok’s AI started to show the child videos glamourising suicide,” they said. 

“This recommender system AI manipulates and addicts our children. It promotes hurt, hate, self-loathing, and suicide.”

Another contributor in today’s meeting was Cyber Safe Kids, an NGO dedicated to protecting children online. 

“The internet was not designed with children in mind: these are environments that many adults struggle to understand and to manage effectively, let alone children and young people,” they said. 

Extortion 

Among the concerns raised by Cyber Safe Kids was the extortion of children by cybercriminals.

“Cybercriminals seeking to sexually extort online users, including children, are using advanced social engineering tactics to coerce their victims into sharing compromising content,” they said. 

“We know that this is impacting children in this country because we have had calls from families whose children have been affected,” they said. 

They cited the case of a teenage boy who thought he was talking to a girl of his own age in a different county and was persuaded to share intimate images. He was then told that if he didn't pay several thousand euro, the images would be shared in a private Instagram group of his peers and younger siblings.

“This threat is very real and terrifying,” said Cyber Safe Kids’ Clare Daly, who added that new technology is making it possible for criminals to create digital images of children without needing to persuade them to send anything.

An ‘afterthought’ 

The inclusion of children’s protections in AI policy has often been an afterthought, according to one contributor. 

“All too often, children are simply entirely left out of conversations regarding AI, and the development of AI,” said Caoilfhionn Gallagher, Ireland’s Special Rapporteur on Child Protection. 

Gallagher told the Committee that legislation and policy related to AI have largely left out detailed references to protecting children, both abroad and here in Ireland.

“Following what is a clear international pattern, I note that in the Government of Ireland’s 2021 AI Strategy, the section dedicated to ‘risks and concerns’ is brief, and there is no dedicated focus upon child protection issues or children’s rights.”

She said that the Government has the challenge of ensuring that “both the unique opportunities and risks which AI systems pose for children are considered in a child-centred way”.

Gallagher recommended Irish legislators consider two examples of frameworks that buck the international trend of overlooking AI’s impacts on children, drawn up by UNICEF and the Council of Europe.

“Both documents recognise the importance of protective measures, ensuring that children are safe, but also the importance of ensuring inclusion, non-discriminatory inclusion,” she said.

A work in progress

“Despite the hype, while the field of AI has made progress over the last decade or so, major obstacles still exist to building systems that really compete with the capabilities of humans,” said UCC computer science professor Barry O’Sullivan, who has worked in AI for 25 years. 

The technology is nonetheless pervasive, he pointed out, describing how it is used in smartphone apps, streaming recommendation systems and social media sites.

“The technology itself is not problematic, but it is powerful, and can, therefore, be abused in ways that can be extremely impactful. Combined with the power of social media, the combination can be devastating, given the reach that is possible.” 

He said that there are “specific considerations in relation to the protection of children” in the EU’s AI Act, including some “specific use-cases that will be prohibited in the EU”. 

He did say, however, that AI systems are also used to protect children from harmful content by filtering things out of online feeds, something that Cyber Safe Kids said should be harnessed further. 

They described the need for legislators to respond to the “urgent challenge that is growing at pace”.

“There are no quick fixes but a meaningful solution will involve legislation, regulation, education and innovative approaches.”

The ICCL’s Johnny Ryan said that businesses cannot be relied on to implement the required changes, particularly to recommender algorithms.

“Tech corporations will not save our children.” 
