By Sandeep Kuriakose, founder of BPRISE, a programmatic buying and real-time bidding platform, and R Chandra Mouli, advisor to martech and ad tech firms

In a scene from The Wizard of Oz, Glinda asks Dorothy: “Are you a good witch, or a bad witch?” Swap “witch” for “bot” and the question writes itself. Or take a Kollywood example: in the film Nayakan, a child asks Kamal Haasan, playing the title role of don Velu Nayakkar: “Are you a good person or a bad person?”

Likewise, the question we need to ask in the digital world is: is it a good bot or a bad bot? Strange but true, the complexities of good and evil found in humans are embedded in every bot, because bots by design mimic humans.

Bot, short for robot, emerged with the development of software programmes designed to perform automated, repetitive tasks. The first chatbot, ELIZA, was developed in 1966 at the Massachusetts Institute of Technology, US, to imitate human conversation. Today we are familiar with advanced conversational bots such as ChatGPT, and with chatbots integrated into websites, messaging apps, social media platforms, and voice assistants (for example, Alexa).

Bots are as interactive as humans, and therefore an asset to voice-led services such as call centres. Customers can, for example, converse with a chatbot to change a password, request an account balance, or schedule an appointment. Until recently such scenarios depended on a human interface and, a few years earlier, belonged to science fiction.

The very traits that make bots helpful—speed, scale, and tireless repetition—are the same traits that get repurposed for abuse.

While a bot welcomes you as you log into a portal, others are at work elsewhere in the digital world with an intent to mislead and misrepresent. Say you are the CEO, CTO, or CMO of a company active in online sales, an advertising agency that launches digital campaigns, or a state or central government communicating with citizens. Bad bots can dilute what you do and divert the benefit to third parties.

Here’s a real-life example (all details are first-hand from a CMO). A campaign for a household brand was humming along, the reach graph perfectly smooth, the clicks eerily regular, even the scroll depth politely consistent. The CMO felt this pattern was unnatural. She pinged her agency lead, who woke a data scientist, who in turn called a publisher. By dawn, the list of “users” behind those immaculate curves read like a cast of shadows: devices that never slept, browsers that never twitched, audiences that reacted faster than thumbs could move. Bad bots had taken the place of good ones.

This brings us to marketers grinning from ear to ear at dashboard reports that claim exemplary outcomes for campaigns they had approved. Imperva, a company that helps protect customers from cyberattacks, says in its 2025 Bad Bot Report: “Automated traffic has surpassed human activity for the first time in a decade, reaching 51% of all internet traffic, with bad bots comprising 37% of that figure. Key sectors like financial services, healthcare, and e-commerce are prime targets for AI-powered bot attacks aimed at data scraping, fraud, and account hijacking.”

Draining your ad budget

Investigations unveiled CycloneBot, a scheme capable of spoofing about 1.5 million devices daily and generating up to 250 million falsified ad requests. ShadowBot faked 35 million mobile and CTV devices. Human Security, a firm dealing with fraud and bot activity, discovered in 2022 a scam they later named Vastflux. The scammers pumped more than 12 billion fraudulent ad requests per day, infecting nearly 11 million devices.

The scammers modified the ad before placing it in the acquired ad space, injecting malicious JavaScript code with instructions on which applications to spoof. An additional piece of code in the ad creative let the fraudsters play up to 25 video ads at once, stacked one beneath another, recording up to 25 ad impressions when only one was visible to the viewer.

Forensics…to detect cyber fraud

Closer home, an Indian media house syndicating OTT programmes through telecom companies saw a surge in affiliate-driven sign-ups. Each registration passed OTP verification against a mobile number, yet a material share of users later denied initiating the purchase. The situation triggered penalties and claw-backs.

Our forensics team was called in to investigate. We flagged server-origin traffic from data-centre autonomous system numbers, headless browser signatures, uniform viewport stacks, and near-fixed inter-event timings (keypress -> mousemove -> click) with minimal variance. Our team recommended corrective measures, much to the relief of the media house.
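The "near-fixed inter-event timings with minimal variance" signal can be illustrated with a minimal sketch. The event-log format, field names, and threshold below are hypothetical, not the forensics team's actual pipeline; the idea is simply that human input timing is noisy, while scripted sessions tend to be metronomic.

```python
import statistics

def timing_variance_flag(events, min_stdev_ms=15.0):
    """Flag a session whose keypress -> mousemove -> click gaps are
    suspiciously uniform. `events` is a list of (name, timestamp_ms)
    tuples; the threshold is an illustrative assumption."""
    timestamps = [t for _, t in events]
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too few events to judge variance
    return statistics.stdev(gaps) < min_stdev_ms

# A scripted session with exact 100 ms gaps gets flagged;
# a jittery, human-like session does not.
bot_session = [("keypress", 0), ("mousemove", 100), ("click", 200), ("keypress", 300)]
human_session = [("keypress", 0), ("mousemove", 140), ("click", 390), ("keypress", 455)]
```

In practice such a check would be one feature among many (alongside ASN lookups and browser-fingerprint signals), never a verdict on its own.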

As an advertiser, you must prevent bots from eating into your ad budget. Fortunately, the same class of models that create a counterfeit audience can also light a trail to it. Our own work in Mumbai has leaned into that symmetry, using graph analysis on bidstreams, sequence models that study dwell and scroll, and cross-referenced device intelligence to separate theatre from attention.
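One simple instance of graph analysis on bidstreams is flagging a device that appears across an implausible number of apps. The row format, identifiers, and cutoff below are illustrative assumptions, not the actual Mumbai pipeline, which combines this with sequence models and device intelligence.

```python
from collections import defaultdict

def flag_hyperactive_devices(bid_rows, max_apps=50):
    """Return device IDs seen bidding from more than `max_apps`
    distinct apps in a window. `bid_rows` is an iterable of
    (device_id, app_id) pairs; the cutoff is a placeholder."""
    apps_per_device = defaultdict(set)
    for device_id, app_id in bid_rows:
        apps_per_device[device_id].add(app_id)
    return {d for d, apps in apps_per_device.items() if len(apps) > max_apps}

# One spoofed device fanned out across 200 apps stands out against
# a genuine device seen in only two.
rows = [("dev-bot", f"app-{i}") for i in range(200)]
rows += [("dev-human", "app-1"), ("dev-human", "app-2")]
```

Real spoofing schemes rotate device IDs precisely to defeat naive counting, which is why the graph view (shared IPs, shared supply paths) matters more than any single threshold.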

How to stop the rot

Just like the preamble, the headline writes itself on days like these: Spot the bot. Stop the rot. Continue to be watchful 24×7. You are in a theatre where the cast list refreshes by the hour, and the curtain never quite falls.

Views are personal