Ireland's DPC has pushed back against a TD's allegation that it is 'too close to big tech'

The head of the Data Protection Commission said it is “up against very well-resourced legal teams” when big tech companies challenge the fines it levies.

THE DATA AND privacy regulator has defended how it holds big tech to account, arguing that it has levied billions of euros in fines against multinational companies.

The head of the Data Protection Commission told TDs and senators at the Oireachtas Artificial Intelligence (AI) Committee that there was “no fear or favour” in how they apply the law.

Ireland’s media and online regulator, Coimisiún na Meán, also said it was not “inherently illegal” for companies to provide people with access to an AI tool to create child sexual abuse images, even though the creation of such images was an offence.

The Data Protection Commission (DPC) and Coimisiún na Meán were before the committee this morning to discuss deepfake images and the recent controversy over the AI tool Grok.

The social media site X, owned by tech billionaire Elon Musk, has come under criticism over Grok, which has been accused of generating sexualised images, including of children.

The controversy highlighted a possible loophole in Irish regulations around non-consensual images.

Senior government figures have insisted that there is sufficient legislation to investigate and prosecute child sexual abuse material and non-consensual sexualised images of adults that have been generated through AI tools online.

However, a senior garda said that intimate abuse imagery of adults needs to be shared to constitute an offence and a complainant is required to prompt an investigation.

The controversy also raised questions over whether children’s access to social media should be restricted by the government.

The European Commission is, with the assistance of Coimisiún na Meán, conducting an investigation into X’s compliance with its obligations after the Grok controversy.

The DPC announced last week that it was investigating X over non-consensual, intimate or sexualised images being allegedly created through generative AI involving the personal data of EU citizens, including children.

€4 billion in fines

Appearing before committee this morning, chairman of the DPC Des Hogan said the transformative potential of AI will only be realised if its “substantive risks and potential harms” are addressed.

Asked by People Before Profit TD Paul Murphy if the DPC was “too close to big tech”, he said: “I think we’ve over €4bn levied fines at the moment.

“I can certainly say that my predecessor and current commissioners were very, very focused on regulating everyone, be it public or private sector, there’s no fear or favour, a guarantee of that, and we will apply the law as we apply the law.

“We work very closely with our peer regulators, we listen to their views, we listen to civil society organisations.”

Hogan said that one of the challenges they face is the legal challenge to the decisions they make against big tech.

He said all fines levied against large platforms were being challenged, bar two.

In the public sector, he said all fines had been accepted, bar one in relation to the €550,000 fine for the Department of Social Protection over the Public Services Card, which is due before the High Court next month.

“We are not only being litigated under statutory appeal, which is provided for in the Data Protection Act, we’re concurrently being judicially reviewed in each of those decisions and that is a difficulty.”

He added:

I would say that we’re up against very well-resourced legal teams, if I could put it like that.

“By and large, we feel that they’re defendable, they were taken over a number of years. We get criticised for taking inquiry decisions over a number of years. We do that because we’re very careful, and we give fair procedural rights to the parties.”

‘Horror stories’

Executive chairman of Coimisiún na Meán Jeremy Godfrey said the creation of child sex abuse material is illegal under Irish law and social media platforms “must remove it when reported”.

Jeremy Godfrey, executive chairperson of Coimisiún na Meán, arriving for the meeting this morning. Leah Farrell / RollingNews.ie

He said that because the non-consensual sharing of intimate images is a criminal offence in Ireland under Coco’s Law, there were consequential obligations on platforms to remove that material.

But he said it was not “inherently unlawful” to deploy an AI system capable of creating child sex abuse material and said action to prohibit this could be useful.

“It’s unlawful in the Irish law to produce the imagery, but it’s the deployment of the tool that would be prohibited under the AI Act.

“So at the moment, it’s not a criminal offence under Irish law to deploy an AI system that can be used in that way, using it in that way is a criminal offence.

“It’s not on the users of AI, not on the people who are putting prompts in, the AI Act puts obligations on the developers and deployers of AI models and AI systems.

“So it would be another tool so that people weren’t provided with the ability to break the law in such an easy way.”

Asked if there are other areas of high risk in relation to AI, he cited generative AI being used as companions and therapists.

There are some horror stories of having very severe and damaging effects on people’s mental health as a result of it.

“So I think there’s certainly risks around the way people interact with generative AI that could potentially be addressed by broadening the categories of high risk systems to include a wider range of chatbots and generative AI tools.”

He added: “We don’t have a very specific proposal about how that might be done.

“It’s something which is in the European Commission’s remit to change, so some review to look at how the list of high risk systems might be added to reflect some of the risks created by generative AI, we think would be a good idea.”
