Unrest, online security and rights prove difficult pairings – The Irish Times

Online safety laws have returned to the headlines following a wave of destructive unrest by far-right protesters, particularly in the UK.

The recent elections in the EU and the UK, as well as the upcoming US elections in November, have also led to a sustained flood of abuse, untruths and disinformation on social media platforms – sometimes at the hands of the billionaire owners of those platforms themselves.

“Idiots get clicks,” academic and entrepreneur Vivek Wadhwa writes in an opinion piece in Fortune this week, referring to the online actions of certain aggressive tech CEOs and how outrageous online posts are all too often picked up and amplified by media outlets that rely on online traffic to generate advertising revenue.

But this observation also applies to the broader online world, where insults, hate, misinformation, and image and video manipulation can spread quickly.

It is widely acknowledged that social media in particular remains a wild west. Years of studies provide ample evidence that online platforms and messaging apps are used to spread hatred against individuals and groups – as in the recent violent protests targeting vulnerable immigrants and asylum seekers – and to instigate and organize riots and targeted acts of destruction such as arson.

It is also well documented that online abuse leads to reprehensible incidents in the real world. And for at least a decade, especially since the Facebook/Cambridge Analytica scandal, researchers and investigative journalists have gradually uncovered how platforms can be manipulated to collect data and target political campaigns and advertising with the potential to influence elections.

In the current political climate, voters have clearly signaled that they want and expect preventative solutions, and that the instigators of these harmful and disruptive acts should be held accountable for their online and offline actions. Politicians have been spurred to take a closer look at ways of controlling the online world to curb its more worrying real-world impacts.

There is little international disagreement that platforms need to be better monitored and their content controlled. The problem is how this should be done, and who should be responsible for it.

Added to this is the daunting question of whether any single state, country or region is capable of enforcing controls that realistically manage amorphous, borderless platforms operating in dozens of international jurisdictions, each with its own societal and legal norms. A seemingly simple goal—stop the abuse!—is in reality a tall order.

Many countries, including Ireland and the UK, have some degree of online safety legislation in place. The EU and some US states have also introduced controls, although there are no federal laws in the US (yet).

Other countries such as Sri Lanka and Malaysia have stepped up efforts to enact their own online safety laws in recent weeks, likely due to fears over the horrific unrest in the UK.

But such laws everywhere bring with them a double-edged challenge. They must provide adequate protection, yet must not impose disproportionate restrictions that threaten important civil and human rights – such as the rights to freedom of expression and dissent – which are themselves subject to differing interpretations.

Nor must they violate important privacy and data protection regulations.

Recent calls – in the UK, the US and Ireland – for such laws to be tougher and more effective inevitably ignore any realistic consideration of how this might be achieved.

The difficult question of ‘how’ has already led many legislators – including those in the EU and Ireland – to take the easy way out: their laws declare, in effect, ‘down with this sort of thing’, but refrain from giving further detail on the what and the how.

The EU’s powerful Digital Services Act, which imposes ambitious but all too often undefined regulations on the big platforms, remains vague on such conundrums (as does the EU’s recent landmark AI regulation). The same goes for Coimisiún na Meán, the official Irish national regulator for the DSA. Because so many of the big tech companies and platforms are based in Ireland, it is also the de facto main European regulator.

In its role as enforcer of part of the Online Safety and Media Regulation Act 2022, Coimisiún na Meán is also implementing an online safety code. The details of what will count as violations, and how they will be dealt with, are not yet clear.

Added to this is the danger that unsound laws, passed in the heat of political debate and political power struggles, will be challenged and overturned – undermining any prosecutions and penalties brought under them.

A warning signal comes from California, which tends to lead the US in terms of digital rights legislation and protections. Last week, a key part of its strict online safety law for children was blocked by a federal court. The court objected to the part that required companies to “assess and mitigate the risk that children could be exposed to harmful or potentially harmful online material,” which the judges said violated the right to free speech enshrined in the First Amendment.

The decision could irreversibly affect the rest of the law. It could also hamper the federal Kids Online Safety Act, recently passed by the US Senate.

And it highlights the challenge facing the creation or strengthening of online safety laws everywhere: finding ways to balance important, equally valid but competing rights and protections without unduly compromising any of them.

By Bronte
