

European Parliament votes to ban 'nudification' apps

The European Parliament passed its amendment to the Artificial Intelligence Act by 569 votes to 45, with 23 abstentions.

LAST UPDATE | 26 Mar

THE EUROPEAN PARLIAMENT has voted to ban AI applications that can be used to create non-consensual nude images, a practice often known as ‘nudification’.


Irish independent MEP Michael McNamara, who co-chairs the parliament’s AI Act implementation working group, said people could use a ‘nudifier’ system without realising the profound damage it can cause, and that protections against that potential harm are needed.

The Strasbourg parliament was recently addressed by Jackie Fox, the Irish anti-cyberbullying campaigner whose efforts led to Coco’s Law in Ireland following the death of her 21-year-old daughter, Nicole ‘Coco’ Fox, by suicide in 2018 after years of online bullying.

“She spoke very movingly, and she also spoke about the specific dangers that AI poses because of the scale at which this can be done, and that it needs to be countered,” McNamara said.


When asked whether it’s enough to just ban nudification systems or whether young people need to be educated on the dangers that they pose, McNamara said:

“I think you have to do both. I think you have to try to make people aware of the harm that it can cause, but at the same time, also remove this capability to create such harm at scale that AI offers.”

The AI Act was introduced in 2024 and includes a section dealing with high-risk AI: systems in health and other spheres that are determined to pose a particular risk to society if they go wrong. Implementation of that section has been delayed.

“The next steps are that the Council of the European Union and the Parliament will determine their positions and then enter into inter-institutional negotiations to determine their final compromise, which is expected to happen by April,” McNamara said.

“That is a very short timeframe by normal standards, but it is because the high-risk section of the AI Act is due to end on the second of August. So if you want to delay its implementation, you have to do so before then.”

Who enforces the ban on nudifier systems?

The AI Office in the European Commission will be responsible for enforcing the ban, along with market surveillance authorities in the member states.

“Almost all AI systems are capable of performing nudification, but most have chosen not to do that. So if you prompt ChatGPT, Claude, or any of the others, they will refuse to do it,” McNamara said.

“However, some of them have chosen to provide that service and that is what is being targeted. They can no longer do that without explicit consent.”

“At the moment, nudifiers are perfectly lawful. And so they will become unlawful in the EU, unless people are doing it with the explicit consent of somebody else. So it’s not just one company; any of these apps are the subject of this legislation.

“The companies found to be allowing nudifiers on their platforms will face very large fines, calculated as a percentage of their overall turnover. They are commercial entities and it is about dissuading them,” he said.

The use of AI for disinformation

McNamara says AI could also be used for misinformation and disinformation at a scale never seen before.

“I have a concern, and I think the media industry is very concerned, about how AI is centralising. Previously, an online search would throw up five or six different links that you could click on,” he said.

“Once you clicked on those links, the media outlet whose link you clicked on was in a position to monetise their content. Now that’s not the case, because you will get one single answer. So that is very worrying, and it has resulted in a huge concentration of the media in the United States.”

McNamara says it is important to ensure that if somebody uses another person’s content or copyrighted material, the rights holder is able to be remunerated for that use.

“There’s a debate as to whether the development of licences in this market in copyrighted material should evolve organically, or whether it should be legislated for in the Parliament,” he said.

“It is ultimately quite difficult if people choose to get their news from AI. I wouldn’t choose to get my news from an AI, but if people choose to, then it’s their right to do so.

“But I think they should be cognisant of the risk that brings. I don’t see how you can stop them per se, and the alternative is that you have more and more government support for media being introduced.

“That could bring its own risk, because media companies become reliant on support and that could eventually raise questions of media independence down the line.”
