What’s On

Fraudulent internet traffic threatens efficiency of digital ad campaigns

Vladimir Rass, Ebiquity

Not everything that happens online is done by humans. And some of what is done by humans isn’t always done with the best of intentions. Artificial, or fraudulent, web traffic is a global phenomenon that threatens the cost-effectiveness and efficiency of digital advertising campaigns.

The scale of the problem is such that this fraud is now thought to account for up to 36 per cent of global internet traffic and up to 50 per cent in social media.

‘Bot’ accounts affect all
This fraudulent traffic falls into two clear categories: human and artificial. The impact is huge: advertisers end up paying for reach and contacts that haven’t actually been delivered. It’s a global issue, affecting advertisers in every market, and with an estimated 72 million fake accounts on Facebook and more than 20 million on Twitter, the problem of fake social media accounts is not confined to a small number of underhand companies.

Reports suggest that brands as large as Pepsi and Mercedes-Benz have recently been found to have significant quantities of ‘bot’ accounts inflating their overall follower figures on Twitter, while Facebook’s own page fan count fell by almost 125,000 in December 2012, when the company last attempted to purge fake accounts.

Media owners can benefit from failing to take proper precautions, gaining traffic they haven’t earned, or can even actively participate in the fraud in order to hit delivery targets. Agencies, too, can buy fake fans to boost apparent campaign success.

Challenges for advertisers
Artificial attempts to manipulate page views and actions can be inadvertent, but the trade is so valuable that it extends beyond malicious software that creates clicks without the user’s knowledge to plug-ins knowingly installed by home computer users who want to earn extra money.

The challenge for advertisers is that, right now, no single tool can detect the most sophisticated attempts to inflate traffic numbers. Instead, brands need to adopt an approach that combines bespoke tools for particular challenges with a degree of caution.

That’s because artificial fraudulent traffic can mimic any type of human behaviour. It can imitate multiple transitions from search requests to advertising click-throughs, deliver impressions through banners or videos, interact with advertising materials – whatever is needed to boost predefined KPIs such as duration of stay or depth of site visit.

Advertisers need to be aware of high-risk areas. Artificial clicks are most likely to appear within CPM, CPC and CPL models. They are widely found in banner exchange networks, particularly on small- and medium-sized sites, which have little interest in fighting such activity for fear of losing audience and income.

What brands can do
We work with a technology company whose solution allows brands to identify artificial traffic and enables advertisers to claim their money back. It works by embedding a counter into the creative execution alongside the standard analytics and ad-serving tags. The counter identifies users by a wide range of characteristics relating to browser settings and plug-ins, general computer software, and hardware settings, and automatically checks a database of fraudulent traffic providers for matches.

This approach allows brands to take immediate steps to improve campaign performance through a dialogue between the media agency and the site management, including compensation for under-delivered human traffic.
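The fingerprint-and-match idea behind such a counter can be sketched in a few lines. This is a purely illustrative mock-up, not the vendor’s actual product: the trait names, sample database, and hashing scheme are all assumptions made for the example.

```python
import hashlib

def fingerprint(visitor: dict) -> str:
    """Combine browser/hardware traits into one stable identifier.

    The trait names used here (user_agent, plugins, screen) are
    illustrative assumptions, not a real counter's field list.
    """
    traits = "|".join(f"{key}={visitor[key]}" for key in sorted(visitor))
    return hashlib.sha256(traits.encode()).hexdigest()

# Illustrative stand-in for a database of fingerprints previously
# linked to fraudulent traffic providers.
KNOWN_FRAUD = {
    fingerprint({"user_agent": "HeadlessChrome/120",
                 "plugins": "",
                 "screen": "800x600"}),
}

def is_suspect(visitor: dict) -> bool:
    """Flag a visitor whose fingerprint matches the fraud database."""
    return fingerprint(visitor) in KNOWN_FRAUD

suspect = {"user_agent": "HeadlessChrome/120",
           "plugins": "",
           "screen": "800x600"}
human = {"user_agent": "Firefox/126",
         "plugins": "pdf,widevine",
         "screen": "1920x1080"}
```

In practice a real system would weigh many more signals probabilistically rather than relying on an exact hash match, but the principle of matching visitor characteristics against a shared fraud database is the same.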

Inflated traffic, however, is not something that just affects display; it’s also a significant issue in social media. Bots can be programmed to behave in a similar way to real platform users, generating likes, reposts, comments, and content placement.

Special programs can automatically register bots en masse, and they are generally sold in bulk, starting from 1,000 accounts. However, because so many bots act in a near-identical way, their similar and simultaneous activity can itself be identified.
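That detection idea, that bulk-registered bots betray themselves by performing the same action at the same time, can be sketched as a simple clustering check. The event format, time window, and cluster threshold below are illustrative assumptions, not a description of any specific tool.

```python
from collections import defaultdict

def flag_bot_clusters(events, window_seconds=60, min_cluster=3):
    """Flag accounts that perform the same action on the same target
    within the same short time window -- the near-identical,
    simultaneous behaviour typical of bulk-registered bots.

    events: iterable of (account_id, timestamp_seconds, action, target).
    The window and cluster-size thresholds are illustrative choices.
    """
    buckets = defaultdict(set)
    for account, ts, action, target in events:
        window = ts // window_seconds          # coarse time bucket
        buckets[(window, action, target)].add(account)

    flagged = set()
    for accounts in buckets.values():
        if len(accounts) >= min_cluster:       # too many identical acts at once
            flagged |= accounts
    return flagged

events = [
    ("bot1", 10, "like", "post42"),
    ("bot2", 12, "like", "post42"),
    ("bot3", 15, "like", "post42"),
    ("human1", 500, "comment", "post42"),
]
```

Here the three accounts that all ‘like’ the same post within seconds of each other are flagged, while the lone commenter is not.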

More difficult to detect, however, are paid user networks, in which consumers perform key actions in branded social groups in exchange for payment. Some use their real accounts and mix the fraud with genuine activity; others register several accounts to increase their earnings.

Once again, technology exists that can help clean brand groups of fraudulent traffic, verifying the quality of a group’s audience and analysing user activity on networks such as Facebook, Instagram and Twitter.

All social fraud costs advertisers, even if only by distorting their attempts to optimize their digital activity. Given the size and scale of the fraudulent traffic problem, however, the cost for most brands is likely to be significantly higher.

With artificial traffic on the rise, brands need to be more watchful than ever about how they assess the digital audiences they are paying to reach.

Tips to spot fake fans
1. If you suddenly acquire a large number of new fans in an unexpected location, it’s likely someone has bought fake fans to artificially boost the follower count.

2. Fake accounts will not engage with any brand content, so watch out for declining engagement on your social media pages.

3. Fake accounts on Twitter (aka ‘bots’) often reply to brand posts with a tweet that just consists of a shortened URL.
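The first tip, watching for sudden, unexplained jumps in new followers, lends itself to a simple statistical check. This is a minimal sketch under assumed inputs: the daily-follower series and the two-standard-deviation threshold are illustrative choices, not a standard industry rule.

```python
import statistics

def spike_days(daily_new_followers, num_std=2.0):
    """Return the indices of days whose new-follower count sits far
    above the series baseline (mean + num_std * population std dev).
    A flagged day may indicate a batch of bought fake fans.
    """
    mean = statistics.mean(daily_new_followers)
    std = statistics.pstdev(daily_new_followers)
    return [day for day, count in enumerate(daily_new_followers)
            if count > mean + num_std * std]

# Illustrative week of follower data with one suspicious jump.
history = [40, 55, 38, 47, 52, 43, 5000, 41]
```

A real monitoring setup would also segment new followers by location and account age, as the tips above suggest, but even this crude threshold surfaces the kind of spike that warrants a closer look.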

The author, Vladimir Rass, is General Director at Ebiquity Russia