
[Image: MRI scan. Alamy Stock Photo]

Stronger safeguards needed as AI healthcare grows in Europe, WHO warns

Almost two-thirds of European countries, including Ireland, are already using AI-assisted diagnostics.

THE GROWING USE of artificial intelligence in healthcare necessitates stronger legal and ethical safeguards to protect patients and healthcare workers, the World Health Organisation’s Europe branch said in a report published today.

That is the conclusion of a report on AI adoption and regulation in healthcare systems in Europe, based on responses from 50 of the 53 member states in the WHO’s European region, which includes Central Asia.

Only four countries, or 8%, have adopted a dedicated national AI health strategy, and seven others are in the process of doing so, the report said.

“We stand at a fork in the road,” Natasha Azzopardi-Muscat, the WHO Europe’s director of health systems, said in a statement.

“Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care,” she said.

Almost two-thirds of countries in Europe are already using AI-assisted diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.

The Mater Hospital in Dublin recently began using artificial intelligence across its radiology department.

It’s used to analyse all head scans for bleeds, all chest scans for blood clots, and all bone x-rays for fractures, to make sure patients with the most urgent needs are seen first.

In September, the Royal College of Surgeons in Ireland (RCSI) began offering an AI in Healthcare course. Trinity College Dublin launched a similar course earlier this year.

The WHO urged its member states to address “potential risks” associated with AI, including “biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician–patient interaction and inequitable outcomes for marginalised populations”.

Regulation is struggling to keep pace with technology, the WHO Europe said, noting that 86% of member states cited legal uncertainty as the primary barrier to AI adoption.

“Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong,” said David Novillo Ortiz, the WHO’s regional advisor on data, artificial intelligence and digital health.

The WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems “are tested for safety, fairness and real-world effectiveness before they reach patients”.
