
Taking down illegal content online would not have been enough to stop the riots


The UK is still recovering from the fallout of the devastating attack in Southport and the racist riots which followed. Disinformation about the identity of the Southport attacker, racist narratives and incitement to violence were amplified on social media throughout the riots, fuelling horrendous actions including an attempt to burn down a hotel housing asylum seekers. We shouldn’t be surprised. Even an insurrection organised on social media in the United States has not led to widespread reform at social media companies. In many ways, things have got worse.

Here in the UK, questions have been raised about the strength of the Online Safety Act. Passed in October last year, it promised to make the UK ‘the safest place in the world to be online’, and it is supported by the new Labour Government. The Government must now take serious action if it hopes to make that promise a reality.

It should be said that the Act is not properly in effect yet, as Ofcom has yet to set out in full the details of the duties that platforms have: the lack of immediate accountability for social media companies does not itself mean the Act has already failed. But even if the Act had been in force, the riots have exposed fundamental flaws in how it is set up to deal with harms like the threats of violence and intimidation that marginalised communities across the UK have been facing.

In 2023, we wrote that the Online Safety Bill was no longer fit for purpose and withdrew our support. Its focus had shifted too far onto specific forms of illegal content whilst fundamentally failing to grapple with how platform design decisions shape the ways harms operate and spread online. Now we are seeing the same debates play out again, as the Government focuses on implementing the Act’s requirements on platforms to tackle illegal content. Meanwhile, calls to review the Online Safety Act have been met with backlash from those concerned about the implications for freedom of expression.

But continuing to focus on illegal content over a more systemic approach to harm online gives us the worst of both worlds: amplification of harm at scale and threats to freedom of speech.

Scale and speed are the problems we need to grapple with. Individual pieces of illegal content cannot be reliably detected and removed quickly. By the time the courts get around to deciding that an individual post has illegally incited violence, the violence has already spiralled.

And that’s just the content which is illegal. Much of the content around Southport and the riots will not have passed this threshold. For an individual, mistakenly sharing false information online is not – and shouldn’t be – a crime. At the same time, it is a known tactic of disinformation actors to post content which sits just on the right side of a social media company’s rules in order to evade moderation. Taken individually, such posts might not meet the threshold for platform removal. It is the cumulative effect of hateful or false posts reaching millions of people that amplifies the risk of harm to dangerous levels. There must be regulatory oversight of how platforms amplify and spread information to millions, and often profit from it, especially in a crisis.

Ofcom, in their draft guidance, have acknowledged the difficulty of platforms accurately judging whether individual pieces of content are illegal. They point out that prohibiting broad categories of content would effectively ensure that at least illegal content was removed – but certainly not only illegal content. The theory that illegal content duties mean only illegal content must be removed evaporates when it meets the reality of social media content moderation at scale.

A more systemic approach would focus on slowing content down. This could look like demonetising harmful content to disincentivise posting it, removing it from ‘trending’ feeds, turning off engagement-based recommender systems, investing in trust and safety teams to detect threats early, or adding labels or warning filters to harmful content more quickly.

These measures enable harm to be mitigated without relying on content removal, by adding friction to how easily and quickly content is shared, or by making it less profitable. The Online Safety Act was brought in under the last Government, after years of construction and then, as time went on, deconstruction. What started as an ambitious plan to hold social media platforms accountable for increasing avoidable risks to citizens ended up as an affirmation of something we already knew: that illegal content is illegal.

The new Government will be looking for ways to learn from what happened after the Southport attack, and to protect communities across the UK who have a right to live in safety and with dignity. The public would be on their side: recent polling showed a majority of people think social media platforms failed to do enough during the riots. The Government now has an opportunity to reset the UK’s approach to digital regulation, and infuse it with new creativity and ambition that can help to truly protect people.

 

Ellen Judson is Senior Investigator, Digital Threats, at Global Witness (https://x.com/Global_Witness)

Kyle Taylor is Founding Director of Fair Vote UK (https://x.com/kyletaylor // https://x.com/FairVoteUK)




