After Apple's Safari and Mozilla's Firefox phased out third-party cookies (Google later reversed its own plan to do so), contextual advertising offered marketers hope, with global ad spend in this area projected to grow by 14% a year between 2022 and 2030.
Meta's recent decision to discontinue its fact-checking programme, starting in the US, in favour of a "Community Notes" system seems to have thrown a spanner in the works. The change will take effect across Facebook, Instagram and Threads, which together are used by more than 3 billion people worldwide. If India is brought onto the "Community Notes" map this year
and professional third-party fact-checkers are replaced by a user-driven, crowdsourced framework, marketers will struggle to reconcile free expression with the need for a trustworthy environment for their brands.
Estimated at ₹62,045 crore and growing at a compounded rate of 23.4% this year, digital advertising in India is an obligatory part of brand outreach and consumer engagement. With advertising revenues of ₹22,730 crore in FY24, Meta India is one of the dominant players in this space, so the risks posed by loose content moderation standards cannot be overstated. Yasin Hamidani, director at Media Care Brand Solutions, warns, "Without checks on the accuracy of content, Meta risks becoming a breeding ground for fake news and polarising narratives." This erosion of trust, he adds, can extend to advertiser content, with consumers questioning the credibility of the ads they encounter alongside potentially misleading posts. Such scepticism may lead to reduced engagement and credibility challenges, particularly for knowledge-driven campaigns.
The risks for brands are not limited to diminished trust. The potential association with harmful or flagged content could result in reputational damage, customer boycotts, and long-term credibility issues. Ambika Sharma, founder and chief strategist at Pulp Strategy, points out that while community-driven models like Community Notes might sound promising, they lack the proactive strength of professional moderation. “A single instance of an ad being linked to harmful content can have lasting repercussions for brands,” she says, emphasising the need for stronger, tech-driven solutions to safeguard brand reputations.
The implications for contextual advertising, a mainstay of digital communication, are far-reaching. Contextual advertising depends on reliable data to match ads with relevant content. Meta's rollback could lead to algorithms misinterpreting content, resulting in misplaced ads that damage brand perception and reduce return on investment (ROI). As Sahil Chopra, CEO of iCubesWire, notes, "If the platform becomes cluttered with misleading content, it challenges algorithms designed for accurate ad placement, undermining the effectiveness of contextual targeting."
Relaxed oversight could compromise the accuracy of data and algorithms, skewing insights into user behaviour and preferences. This, in turn, may lead to ineffective ad placements and reduced ROI. Hamidani highlights the risk of diminished targeting precision, stating, "Misinformation can distort user engagement metrics, making it harder for advertisers to execute effective strategies."
While some advertisers may continue leveraging Meta’s vast reach, others might reallocate budgets to platforms with stricter content moderation policies. “Brand safety is non-negotiable for CMOs and marketing teams,” says Sharma. Platforms like LinkedIn and YouTube, which offer more robust moderation systems, could emerge as preferred alternatives for advertisers prioritising credibility and transparency, says Aashna Iyer, AVP, corporate strategy & talent development, BC Web Wise. Meanwhile, Ramya Ramachandran, founder and CEO of Whoppl, highlights the need for diversification, and states that brands must innovate by leveraging first-party data and collaborating with credible creators to counter misinformation’s risks.
Still, the extent of the fallout depends on how effectively Community Notes can address misinformation and safeguard the user experience. Critics argue that the system's reliance on active and diverse user participation introduces delays and biases, leaving gaps that allow misleading content to proliferate before it is flagged. Experts note that the model is reactive, which creates vulnerabilities for advertisers. For platforms, maintaining data integrity and moderation is more crucial than ever.
