By Aayush Agarwal, Darshil Shah & Pavan Mamidi

Artificial intelligence (AI) is reshaping our digital world, unlocking opportunities while sparking ethical debates. As firms collect data under the guise of “informed” consent, they draw lucrative inferences about our attitudes and behaviours, often without our full understanding. This information asymmetry between users and firms raises questions about the ethics of data hoarding. Furthermore, clever design elements nudge us into sharing more than we realise, leaving us vulnerable to exploitation. These complexities demand a closer look at the balance between innovation and ethical responsibility.

Is consent really “informed” if users don’t know the extent of the inferences drawn by firms? Within AI and digital markets, informed consent is the prerequisite agreement between users and firms that allows firms to extract user data in exchange for a digital service, which is often free, making this a non-price transaction with data as currency. Two aspects of legal consent are unique to digital markets: (a) whether users are adequately informed about how much of their data firms will collect (if awareness is low, the contract is imperfect and one party derives greater value than the other, resulting in user exploitation); and (b) whether users are informed of the breadth and depth of inferences that digital firms draw from their data. In (b), if users are unaware that their data will be used beyond the purposes of a firm’s immediate service, perhaps to gain a competitive advantage in an adjacent market, serious consumer welfare and competition concerns arise. For instance, Netflix has leveraged users’ viewing data to inform the development of Netflix-produced content. This practice challenges the foundational ethics of data use, as users inadvertently contribute to other markets.

Why are people who are concerned about the privacy of their data doing so little to actually protect it? Surveys show that people do not fully comprehend key concepts described in privacy policies. Moreover, they report feeling coerced by online platforms into accepting these policies (Bashir et al., 2015). Users do not know enough about what they consent to when using digital services, which runs counter to major international regulations calling upon digital firms to make it easier for users to make informed decisions. As of 2021, 137 out of 194 countries had enacted data protection and privacy legislation (UNCTAD, 2021). By creating informational asymmetry and undermining informed consent, digital firms may not be compliant with most of these countries’ regulations.

The General Data Protection Regulation (GDPR) distinguishes “freely given” consent from consent obtained under any form of pressure. Yet digital platforms like Instagram design user choices in ways that subtly guide interactions toward profit-centric goals. This persistent nudging compromises user autonomy.

Digital platforms employ sophisticated design elements to create an illusion of user control over data while nudging users towards choices that may not align with their best interests. For example, Facebook and Instagram offer users various privacy settings and controls, giving them the perception of having control over their data. However, these settings can be complex and difficult to navigate, leading users to default to the platform’s pre-selected options, which favour data collection and sharing. Moreover, platforms utilise behavioural science techniques to nudge users towards actions that benefit the platform’s bottom line. One such technique is confirm-shaming, where users are subtly coerced into accepting data collection policies by framing privacy-conscious choices as inconvenient or socially undesirable.

Navigating these ethical complexities necessitates a pivot not just towards prioritising consumer welfare but towards redefining it. Upholding user welfare should mean both securing consent and ensuring user data isn’t exploited for undisclosed, intricate inferences. It involves safeguarding the integrity of user choice and curbing persistent nudges towards profit-driven objectives that distort genuine user experiences. Policy initiatives like the European Union’s AI Act and the GDPR have set commendable benchmarks in protecting privacy and user data control. The GDPR grants individuals greater control over personal data, requires explicit consent before their data is collected and used, and imposes strict data security measures. However, the ethical challenges posed by inferred preferences and choice architectures demand a broader conversation.

India has taken steps to safeguard digital consumer welfare with the Digital Personal Data Protection (DPDP) Act, 2023, and the proposed Digital Competition Bill (DCB). The DPDP Act mandates that consent must be free, informed, specific, and clear, and it enforces stricter transparency requirements. Like the GDPR, it attempts to reduce the information asymmetry between users and firms over data usage. The DCB could promote fair competition while addressing monopolistic behaviour such as rent-seeking and data hoarding. Both laws emphasise accountability and transparency from digital firms about their data practices and competitive behaviours. If the DCB is enacted, it can, in conjunction with the DPDP Act, foster a more ethical AI ecosystem in India by empowering users with better control over their personal information.

Regulatory agencies like the Competition Commission of India are crucial in enforcing these laws and holding firms accountable for violations. Safeguarding digital consumers requires a multi-faceted approach encompassing legal, regulatory, and enforcement measures.

Firms whose AI systems delve beyond the scope of explicit user consent should be required to disclose these extended uses and seek consent for them. Equally important is crafting digital frameworks that prioritise user autonomy over commercial gains. Ethical AI development demands user-centric designs that respect users’ original objectives, nurturing genuine interactions free of coercive nudges towards profit-oriented goals.

The writers are respectively senior associate and lab manager, research associate, and director, Centre for Social and Behaviour Change, Ashoka University.
