Navigating the online bot battlefield has never been more important

When you read a product review on Amazon, browse the comments section of an article on CNN, or get annoyed by a provocative tweet, can you be sure that the person behind the screen is a living, breathing human being?

Absolutely not.

A recent report from Imperva found that bots make up 47% of all internet traffic, with “bad bots” accounting for 30%. These are frightening statistics for the integrity on which the open web is built.

But even if the user is a human, there’s a good chance their account is under a false identity. This means that “fake users” are currently just as common on the Internet as real users.

We in Israel are no strangers to the existential risk of bot campaigns. After October 7, large-scale disinformation campaigns orchestrated by bots and fake accounts manipulated public opinion and policymakers.

Monitoring online activities during the war, The New York Times found that “in a single day after the conflict began, approximately one in four accounts on Facebook, Instagram, TikTok, and X (formerly Twitter) posting about the conflict appeared to be fake… In the 24 hours following the explosion at Al-Ahli Arab Hospital, more than one in three accounts posting about it on X were fake.”


With elections taking place in 82 countries in 2024, the risk of bots and fake users is reaching critical levels. Just last week, OpenAI had to disable an account belonging to an Iranian group that was using ChatGPT to generate content aimed at influencing the US elections.

Election interference and the far-reaching effects of bots

As Rwanda prepared for its July elections, researchers at Clemson University discovered 460 accounts spreading AI-generated messages on X supporting incumbent President Paul Kagame. And in the past six months alone, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) has identified influence campaigns targeting Georgian protesters and sowing confusion over the death of an Egyptian economist, both based on spurious X accounts.

Bots and fake users have a detrimental impact on national security, but online businesses also pay a high price.

Imagine a company where 30-40% of all digital traffic is generated by bots or fake users. The result is a cascade of problems: skewed data that leads to bad decisions, a distorted view of customer channels and website analytics, sales teams chasing false leads, and developers building products for demand that does not really exist.


The consequences are staggering. A study by CHEQ.ai, a Key1 portfolio company and go-to-market security platform, found that in 2022 alone, over $35 billion in advertising spend was wasted and more than $140 billion in potential revenue was lost.

Ultimately, fake users and bots undermine the foundations on which modern businesses are built, creating distrust in data, results, and, in some cases, in teams themselves.

The advent of artificial intelligence has only given the fake web a boost. The technology “democratizes” the ability to create bots and fake identities, lowering the barriers to attack, increasing their sophistication, and greatly expanding their reach.

The magnitude of this growing problem cannot be overstated, but what can be done to minimize the enormous economic, geopolitical and social damage?

It’s time for a global response to take back control and restore our trust in the Internet.

Education is critical in the fight against the fake epidemic online. By raising awareness of the tactics used by bots and fake accounts, we can empower society to identify and mitigate their impact. An important first step is to understand the telltale signs of inauthentic users – such as incomplete profiles, generic information, repetitive phrases, unusually high activity levels, superficial content, and limited engagement. However, as bots become more sophisticated, this challenge will only become more complex, underscoring the need for continued education and vigilance.
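
To make that checklist concrete, here is a minimal sketch in Python of how such warning signs might be combined into a simple score. The field names and thresholds are hypothetical assumptions chosen for illustration, not any platform's actual criteria, and a high score is a prompt for closer review rather than proof that an account is fake.

```python
# Illustrative only: score an account against the telltale signs described above.
# All field names and thresholds are hypothetical assumptions for demonstration.

def inauthenticity_score(account: dict) -> int:
    """Count how many common warning signs an account exhibits."""
    signs = [
        not account.get("bio"),                         # incomplete profile
        account.get("profile_photo") is None,           # generic or missing details
        account.get("posts_per_day", 0) > 50,           # unusually high activity
        account.get("unique_phrase_ratio", 1.0) < 0.3,  # repetitive phrasing
        account.get("avg_replies_received", 0) < 1,     # limited engagement
    ]
    return sum(signs)

# Example: an account that trips several heuristics at once.
suspect = {"bio": "", "profile_photo": None, "posts_per_day": 120,
           "unique_phrase_ratio": 0.1, "avg_replies_received": 0}
print(inauthenticity_score(suspect))  # -> 5: worth a closer look, not proof of a bot
```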

In addition, policies and regulations must be put in place to restore trust in digital environments. For example, governments can and should require major social networks to implement the best bot defense tools to detect fake accounts.

Finding the right balance between the freedom of these networks, the integrity of the information published and the potential harm is not easy. But setting these limits is essential to ensure the longevity of these networks.

On the business side, various tools have been developed to contain and block invalid traffic, ranging from simple bot defense solutions that prevent distributed denial-of-service attacks to specialized software that protects APIs from bot-driven data theft attempts.

More advanced bot defense solutions use sophisticated algorithms that perform real-time tests to ensure the integrity of traffic. These tests analyze account behavior, interaction levels, hardware characteristics, and automation tools. They also detect non-human behavior, such as unusually fast typing, and examine email and domain history.
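
One of those non-human behavior checks, implausibly fast typing, can be sketched in a few lines. The timing threshold below is an assumption for demonstration purposes, not a value used by any particular vendor.

```python
# Illustrative sketch of a real-time "non-human typing speed" test.
# The 40 ms floor is an assumed threshold for demonstration only.
from statistics import mean

def looks_automated(keystroke_times_ms: list[float],
                    min_human_interval_ms: float = 40.0) -> bool:
    """Flag input whose average inter-keystroke gap is below a human-plausible floor."""
    if len(keystroke_times_ms) < 2:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    return mean(gaps) < min_human_interval_ms

# A script that fills a form field in a few milliseconds looks automated;
# a person typing at a normal pace does not.
print(looks_automated([0, 2, 4, 6, 8]))      # True
print(looks_automated([0, 180, 320, 510]))   # False
```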

While AI has contributed to the bot problem, it is also proving to be an effective tool in combating it. AI’s improved pattern recognition capabilities allow for more accurate and faster differentiation between legitimate and illegitimate bots. Companies like CHEQ.ai are using AI to help marketers ensure their ads reach human users and are placed in safe, bot-free environments, effectively countering the growing threat of bots in digital advertising.
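
The pattern-recognition idea can be illustrated with a toy classifier trained on behavioral features. The features, synthetic data, and model choice below are assumptions made for demonstration; production systems such as CHEQ.ai's rely on far richer signals and proprietary models.

```python
# Toy sketch: separate human from bot sessions using behavioral features.
# Features and data are synthetic assumptions, not any vendor's real signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per session:
# [requests_per_minute, mouse_movement_entropy, session_seconds]
humans = np.column_stack([rng.normal(3, 1, 200), rng.normal(4.0, 0.5, 200), rng.normal(300, 90, 200)])
bots   = np.column_stack([rng.normal(40, 10, 200), rng.normal(0.5, 0.3, 200), rng.normal(20, 10, 200)])

X = np.vstack([humans, bots])
y = np.array([0] * 200 + [1] * 200)  # 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[45, 0.4, 15]]))   # -> [1]: behaves like a bot
print(model.predict([[2, 3.8, 280]]))   # -> [0]: behaves like a human
```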

From national security to corporate integrity, the consequences of the “Fake Internet” are as wide-ranging as they are devastating. Yet there are several effective methods to curb the problem, methods that deserve renewed public and private attention. By raising awareness, improving regulation, and implementing active protections, we can all contribute to a more accurate and far safer Internet environment.

The author is co-founder and partner of Key1 Capital.


