Online disinformation triggered a wave of far-right violence in Britain

Riot police push back anti-immigration protesters outside the Holiday Inn Express hotel housing asylum seekers in Rotherham, United Kingdom, on August 4, 2024.

Christopher Furlong | Getty Images News | Getty Images

It didn’t take long for false claims to appear on social media after three young girls were killed in the British town of Southport in July.

Within hours, false claims about the attacker’s name, religion and migration status gained traction online, fueling days of violent unrest across the UK.

“A post on X, citing a LinkedIn post, falsely identified the perpetrator as ‘Ali al-Shakati,’ who was rumored to be a Muslim migrant. By 3 p.m. the following day, the false name had been mentioned over 30,000 times on X alone,” Hannah Rose, hate and extremism analyst at the Institute for Strategic Dialogue (ISD), told CNBC via email.

Other misinformation circulating on social media claimed the attacker was on an intelligence watchlist, arrived in the UK on a small boat in 2023 and was known to local mental health services, according to the ISD analysis.

Police rebutted the false claims just one day after they emerged, saying the suspect was born in Britain, but by then the story had already gained traction.

Misinformation fueled prejudice and bias

This type of misinformation is consistent with rhetoric that has fueled the anti-migration movement in Britain in recent years, said Joe Ondrak, UK research and technology director at technology company Logically, which develops artificial intelligence tools to combat disinformation.

“It’s like catnip to them, you know. It’s just the right thing to say to provoke a much angrier reaction than probably would have been the case had the misinformation not been circulating,” he told CNBC via video call.

Riot police push back anti-migration protesters in Rotherham, UK, on August 4, 2024.

Christopher Furlong | Getty Images News | Getty Images

Soon after, far-right groups began organizing protests against migrants and Islam, including a demonstration at the planned vigil for the killed girls. This escalated into days of unrest in Britain, which included attacks on mosques, immigration centers and hotels housing asylum seekers.

The misinformation spread online takes advantage of pre-existing biases and prejudices, Ondrak explained, adding that fake news often thrives in times of heightened emotions.

“It’s not like this false claim is put out there and then, you know, everyone believes it,” he said. Rather, the reports are “a way to rationalize and reinforce preexisting prejudices, biases and speculation before any kind of truth can come to light.”

“It didn’t matter whether it was true or not,” he added.

Many of the right-wing protesters claim that the high number of migrants in Britain is fueling crime and violence. Migrant rights groups dispute these claims.

How disinformation spread online

According to ISD’s Rose, social media was a key channel for the spread of misinformation, both through algorithmic amplification and through sharing by large accounts.

Accounts with hundreds of thousands of followers and paid blue-tick verification on X shared the false information, which the platform’s algorithms then recommended to other users, she explained.

“For example, if you searched for ‘Southport’ on TikTok, the ‘Others searched for’ section, which recommends related content, was promoting the attacker’s false name via the platform itself, even eight hours after police confirmed that this information was false,” Rose said.

Shop windows are boarded up to protect them ahead of a demonstration against right-wing extremism and racism.

Thabo Jaiyesimi | SOPA Images | LightRocket | Getty Images

The ISD’s analysis found that the algorithms worked in a similar way on other platforms such as X, where the attacker’s fake name was displayed as a trending topic.

As the unrest continued, X owner Elon Musk got involved, making controversial comments on his platform about the violent demonstrations. His statements drew criticism from the British government, with the country’s justice minister urging Musk to “behave responsibly.”

TikTok and X did not immediately respond to CNBC’s request for comment.

The false claims also found their way onto Telegram, a platform that Ondrak said plays a role in consolidating narratives and exposing more and more people to “harder beliefs.”

“All of these claims made their way into Telegram’s post-Covid milieu,” Ondrak added. This included channels that were initially anti-vaccination but were later taken over by far-right figures pushing anti-migrant content, he explained.

When asked for comment by CNBC, Telegram denied that it was contributing to the spread of misinformation and said its moderators were monitoring the situation and removing channels and posts that incite violence, which is not allowed under its terms of service.

At least some of the accounts calling for people to take part in the protests could be traced back to the far right, according to an analysis by Logically. These included accounts linked to the banned far-right group National Action, which was designated a terrorist organisation under the UK Terrorism Act in 2016.

Ondrak also noted that many groups that had spread the false claims about the attack have since retracted them.

On Wednesday, thousands of people gathered in towns and cities across the UK to demonstrate against racism, far outnumbering recent anti-immigrant protests.

Content moderation?

The UK’s Online Safety Act is designed to combat hate speech online, but it does not fully come into force until early next year and may not be enough to protect against certain forms of disinformation.

On Wednesday, the UK’s media regulator Ofcom sent a letter to social media platforms urging them not to wait for the new law to come into force. The UK government also demanded that social media companies do more.

Many platforms already have terms of service and community guidelines that address harmful content, which they enforce to varying degrees.

A protester holds a placard reading “Racists are not welcome here” during a counter-demonstration against an anti-immigration demonstration called by far-right activists in the London suburb of Walthamstow on August 7, 2024.

Benjamin Cremel | AFP

The companies “have a responsibility to ensure that their platforms do not promote hate or violence,” said ISD’s Rose, but added that they need to do more to enforce their rules.

She noted that ISD found a number of pieces of content on several platforms that likely violated their terms of service but remained online.


Henry Parker, vice president of corporate affairs at Logically, also pointed to nuances between different platforms and jurisdictions. Companies invest different amounts in content moderation, he told CNBC, and they operate under differing laws and regulations.

“So there’s a dual role here. Platforms need to take more responsibility, comply with their own terms and conditions and work with third parties like fact-checkers,” he said.

“And then the responsibility of the government is to be really clear about what its expectations are … and then also to be very clear about what will happen if those expectations are not met. And we are not at that point yet.”

By Bronte
