
VOICES

Opinion: When it comes to news trust, AI presents both opportunities and risks

Dr Kirsty Park reacts to this year’s global Digital News Report and looks at the impacts of AI on journalism as a whole.

THE DEBUT OF OpenAI’s ChatGPT in November 2022 sparked a surge of interest in the future of artificial intelligence, with the chatbot gaining over 100 million users within a few months.

Tech giants like Google and Microsoft are now actively dedicating considerable resources and manpower to incorporating AI into their business plans, even amidst periods of mass layoffs. While these tools are in their infancy, there has been much speculation on how they might disrupt the way we live and work, and journalism has been no exception.

Although newsrooms have long utilised various forms of AI, generative AI, with its capacity to engage in advanced creative tasks such as writing, presents entirely new possibilities and risks, particularly regarding trust in news and the future of journalism.

When Buzzfeed CEO Jonah Peretti addressed investors, he discussed the closure of the Pulitzer Prize-winning Buzzfeed News while also expressing his belief that “generative AI will begin to replace the majority of static content” and that “audiences will begin to expect all content to be personalised, interactive, dynamic with embedded intelligence”.

Information, disrupted

The Reuters Institute Digital News Report is released today. Covering 46 countries across the world, the study aims to understand how news is being consumed globally, with a particular focus on digital news consumption.

Analysis of the Irish data is carried out by researchers from the DCU Institute of Future Media, Democracy and Society (FuJo). Its findings make for some interesting reading. A little over half of Irish news consumers express apprehension about personalised news potentially omitting important information (53%) or presenting limited perspectives (51%). Furthermore, 31% of respondents consider story selection based on their previous consumption habits to be a favourable method of obtaining news, whereas 25% hold the same view regarding story selection conducted by editors and journalists.

These technological developments occur against a backdrop of increasing concern about what is real and fake online, up 6pp in Ireland this year, as well as a decline of 5pp among those who feel they can trust most news most of the time.


One risk with generative AI is that it can be used to flood the information ecosystem with deceptive or low-quality information, making it a dangerous tool for spreading misinformation.

However, trust is also a concern as publishers experiment with AI. CNET recently revealed that there were substantial mistakes in a number of articles written with the assistance of their own ‘internally designed AI engine’.

The risk of deception goes beyond content, too: The Irish Times issued an apology for publishing an opinion piece in which both the article itself and the accompanying author photo were fabricated by an individual who used generative AI to pose as a young immigrant woman.

Who wins?

Another significant risk relates to the impact on journalists themselves. While current AI outputs are prone to errors, as the technology advances, publishers may opt to reduce costs by replacing human work with AI.

The emergence of generative AI also introduces the unsettling possibility that a news outlet could train a customised AI model on its own content. The work of journalists could then be threatened by a technology trained on their own journalistic material, which has serious ethical implications.

The European Commission is currently developing the AI Act, which will introduce regulation of AI at an EU level. It looks likely that the approach taken will be similar to the Digital Services Act, which acknowledges that some services, like the largest of social media companies, inherently come with societal risks which must be countered with both transparency and accountability measures.

With the ongoing AI revolution, the demand for transparency becomes increasingly crucial within the field of journalism too. In terms of content creation, this requires the use of clear bylines or disclosures regarding the usage of AI as well as public editorial guidelines detailing a publisher’s approach to AI.

These measures are essential to ensure that readers can understand how AI technology is shaping the news they consume.

However, transparency also means more openness and disclosure around why readers see the news they do, both on a publisher’s digital channels (website, app or email) and on their own social media feeds. That, in turn, means more transparency around content delivery and recommender algorithms.

It is clear that the impact of AI on news production and consumption will only grow. Publishers and, to some extent, social media companies bear the responsibility of ensuring that the benefits of AI outweigh its risks.

Dr Kirsty Park is a post-doctoral researcher at DCU’s Institute for Future Media, Democracy and Society and the Disinformation Lead at the European Digital Media Observatory (EDMO) Ireland Hub. This year’s global Digital News Report can be found here and the Irish report, sponsored by Coimisiún na Meán, can be found here. This article was written by Kirsty Park and ChatGPT was used in the editing process.
