
Opinion: How did we ever believe self-regulation by tech platforms would be a good idea?

Noeline Blackwell says we must stop being naive about social media companies and regulation.

IN LEINSTER HOUSE recently, a senior regulator announced that the era of self-regulation by digital tech companies was over. Niamh Hodnett, the Online Safety Commissioner with Coimisiún na Meán, was addressing an Oireachtas Committee.

If that indeed turns out to be the case, then it is a good start. However, it raises the question of why we ever thought that self-regulation by tech platforms would be a good idea in the first place.

While the internet and social platforms have had – and still have – scientific and community value, we've known for many years that they are also big business. Like other businesses, they have shareholders looking for returns on their investments, banks seeking repayment of loans and ambitious CEOs wanting to excel. They will have their own standards and values, of course, but ultimately their job is to grow their businesses and make profits.

Abuse

This is relevant when it comes to addressing the absolute scourge of online abuse that blights the lives and health of online users. Children and young people are particularly vulnerable. While most tech companies acknowledge this and have some processes in place, it is unrealistic to expect them to prioritise children or other vulnerable users while they can independently decide what steps to take, striking their own balance between their professed values and their profits.

There has been a lot of focus in recent times on abusive and harmful users of the internet. At the launch of Cuan, the new State agency to tackle domestic, sexual and gender-based violence, the short advertisement highlighting the crime of threatening to share intimate images was shown as an effective example of awareness-raising. There has been far less focus, however, on how the processes of the digital platforms themselves can create harm.

Big tech has built platforms and services to keep us online for as long as possible. The longer we are online, the more the platform can profit. Often, this does no harm, and can even be fun. However, it is different if you happen to be a 12-year-old caught in a loop of negative comment after negative comment, your worst fears amplified at a rate you cannot get ahead of.

It is different if you are a 16-year-old and your interest in health and fitness is manipulated towards content on eating disorders, in an online space designed to keep you there as long as possible, seeing as much as possible. Venturing into the dark, murky waters of online harm, it is very easy to feel scared and completely alone. Should you decide to report, you will find that every platform has a different way of defining and reporting harmful content. For many, it is a fight even to get a response from the platform. If the response is missing or unsatisfactory, then unless there is a crime you are willing to report, you are left frustrated and more defeated than ever.

Genuine protections

It should be a requirement for platforms to ensure no child has to endure the perpetual and pervasive nature of online harm in any form. Not only that, but it should also be a requirement for them to prevent this harm from happening in the first place. The safety and wellbeing of our children and young people in the online world should be non-negotiable, not something to be debated in degrees by big tech giants, who should be using every tool at their disposal to find and build solutions.

If we want to end the era of self-regulation that these companies have enjoyed at our expense, then we need robust regulation that sets the standards and lays down penalties and consequences when they are not met. Platforms should not be left to write those standards themselves.

They do not put children first, so Government has to.

We currently have the chance to begin this process through a first legally binding Online Safety Code. Ireland's Online Safety Commissioner can put safety-by-design and privacy-by-default principles on a solid legal footing and actually create a safer online space for every one of us.

However, the current draft of the Online Safety Code throws that chance away. It offers nothing more than business as usual and leaves the door wide open for platforms to continue deciding their own rules of operation. It lacks any real ambition or conviction about the harms happening now; if that is the case, it certainly will not stand up to the very real threat of harms that may emerge as technology develops.

We fought for an Online Safety Commissioner as part of a new regulator with real teeth. The Code will not be the silver bullet solution that answers all our concerns, but it needs to set the tone for how this new regulator and the Irish State responds when harms happen online. And it needs to send a clear message to online platforms and services: that its bite is worse than its bark.

Noeline Blackwell is the Online Safety Co-ordinator at The Children’s Rights Alliance.
