Generative AI technology is set to bring in a “revolution” in the advertising industry, say experts. However, in the absence of regulation and any form of protective warranties, brands and agencies can run into unexpected pitfalls. Geetika Srivastava and Christina Moniz ask experts what the potential booby-traps are and how brands and agencies can protect themselves from such risks.
Important to have a human review of AI-generated content
Karan Anand
SVP, strategy, Interactive Avenues
AI has the potential to reduce production costs, lead times and overall marketing spend while increasing engagement rates. But just as with any technology, AI systems can be vulnerable to manipulation by those who may use them to spread false information or sway public opinion. At the very least, AI ads could be out of tune with the brand voice; at worst, an AI ad could destroy customer trust, which is very hard to repair.
One risk associated with AI-generated content is the “Uncanny Valley” phenomenon, where artificial humans seem unnaturally off-putting to viewers. Therefore, it’s important to have a human review of AI-generated content before release. The same editorial review processes that apply to human-generated content should apply to AI-generated content.
The topic of AI regulation is an ongoing debate, with some arguing for stricter regulations and others advocating a laissez-faire approach. It’s similar to the question of regulating creativity. The answer isn’t straightforward. Regulation can have an impact on brands and agencies, as compliance costs may increase and the use of AI technology may be limited. On the other hand, clear guidelines and standards for AI usage provided by regulations can increase consumer trust and confidence, and benefit the industry.
In the absence of regulation, brands and agencies can protect themselves from potential risks by implementing best practices and adhering to ethical and legal standards. Regular auditing and monitoring of AI systems, transparency and accountability, and obtaining informed consent from consumers before using their data are some ways brands and agencies can ring-fence themselves from potential risks.
A powerful tool, but one that can easily manipulate
Samit Sinha
Founder and managing partner, Alchemist Brand Consulting
AI’s impact on society is likely to be greater than that of the printing press, the industrial revolution, nuclear energy, or for that matter, the emergence of the internet, mobile connectivity, and indeed, the information age itself. It is not just the known unknowns, but the unknown unknowns, that confront us, which is why it is impossible to predict outcomes in any field.
AI can be used for benevolent purposes or be misused for nefarious ones. In marketing and advertising, AI can be an incredibly powerful tool to predict purchase behaviour and target the right person, at the right time, in the right context and with the right stimulus, thus reducing, if not eliminating, wasteful marketing expenditure. On the flip side, with access to virtually unlimited data, unimaginable processing power and adaptive learning capabilities, it can manipulate the minds of consumers by exploiting their prejudices.
No human mind can compete with AI on logic and linear extrapolations, but does that leave no scope for uniquely human talents, like pure creativity? While AI is certainly more than capable of mimicking human emotions, can it make lateral leaps of faith to deliver paradigm-shifting ideas that have hitherto arisen from human imagination? If the answer is no, then the emotional surplus that great brands generate may remain within the purview of human creators.
I do believe that AI needs to be regulated, but the question is: by whom? It is as dangerous for it to be controlled by governments as by private enterprise. I agree with the views of Thomas L Friedman, when he suggests “complex adaptive coalitions” in which “business, government, social entrepreneurs, educators, competing superpowers and moral philosophers come together to define how we get the best and cushion the worst of AI”.
Intellectual ‘booby-traps’ can lead to plagiarism
Karnika Vallabh
Lawyer, Bharucha & Partners
In the absence of specific regulation, there is no bar on advertisers using AI assistance to produce data-driven advertisements attuned to consumer interests. However, AI outputs are entirely dependent on inputs that can (and most likely will) include third-party intellectual property and personal data. Such intellectual ‘booby traps’ can cause inadvertent plagiarism and privacy violations, and invite penalties under extant law. Here, the terms and conditions under which the AI product is itself provided for use by brands are also a relevant consideration in assessing with whom ownership of the AI-created work lies, i.e., whether with the AI creator, the brand providing input to the AI, or the AI itself.
Until brands can reasonably ascertain that the AI output is entirely ‘original’ to them, the use of AI-created works in advertisements should be minimal, so that at least the final work product is largely an original creation of the brand. At present, advertisement content is already subject to specific regulation depending on the format of publication, and in each case, publication of infringing material in advertisements is proscribed. While AI regulation will be difficult to achieve, clear guidelines on the commercial exploitation of AI-created works in advertisements could help brands ring-fence themselves against potential risks.
Self-regulation is the best kind of regulation
Mukund Olety
Chief creative officer, VMLY&R India
AI will propel humans to do even greater things. We are still in the exploration and discovery phase. Soon, however, we will start using AI tools to enhance creative outcomes. In this process, we will develop protocols and consensus around user rights, copyrights, privacy and the use of proprietary content. It is important to understand how AI systems are trained. One way for brands to avoid falling into the plagiarism trap is to train the AI on their own original content.
There is a lot of conversation taking place around AI bias. Brands will therefore need to ensure that the data used to train the system is diverse and representative. Over time, new laws and regulations will come into force, but I believe self-regulation is the best form of regulation. One thing that won’t change is that consumers lean toward brands they trust, and trust comes with transparency. In the era of AI, that is going to become even more important. Brands must be transparent about how they train their AI systems, how they collect and use data, and the security and privacy measures they take. A proactive approach in this regard, along with an active conversation with consumers, will be appreciated.
Ultimately, it’s all about adding value to the consumer’s life, and if a brand can do that in an unbiased, respectful and open manner, it gives consumers more reasons to love the brand.