
What’s the lurking ‘shadow’ behind shadow AI?

The market size for shadow AI tools might reach about $156 billion by 2026

The share of workers who acquire, modify, or create technology outside IT’s visibility could rise to 75% by 2027

As businesses automate data extraction and exchange through generative artificial intelligence (Gen AI)-based chatbots such as ChatGPT and Bard, among others, a shadow looms in terms of safety and security. With the use of bring-your-own-device (BYOD) policies across information technology (IT) companies, experts believe sensitive data is being transferred to the dark web without the knowledge of the organisation. Organisations are expected to grapple with the shadow AI trend, where employees use generative AI for business tasks without proper authorisation or supervision. “Lurking beneath the surface of legitimate AI projects, shadow AI operates in unseen corners, leveraging employee ingenuity and readily available tools for unauthorised tasks. This dark AI can analyse data for personal gain, automate work unethically and bypass security protocols, among others,” Tarun Nazare, co-founder and CEO, Neokred, a fintech solutions provider, told FE-TransformX, adding that the risks are substantial and can expose companies to data breaches, intellectual property theft and reputational damage, among others.

Shadowed AI markets

In July 2023, about 30% of marketing and advertising professionals in North America, South America and Europe considered that generative AI poses risks to brand safety and misinformation, as per insights from Statista, a market research firm. Software developers can unintentionally help hackers create malware based on the code they enter into AI tools. Smaller companies are expected to face greater risks, as they mostly use free versions of ChatGPT 3.5 or similar tools, which are trained on data only up to January 2022. The market size for shadow AI tools is estimated to reach about $156 billion by 2026.

Shadow AI is expected to introduce a blind spot in cybersecurity. Deployed without requisite IT expertise, these tools may inadvertently mishandle data or operate in contravention of regulations. It is believed that non-compliant tools not only expose organisations to substantial penalties but also make them vulnerable to heightened scrutiny. “Data security transcends mere information; it constitutes a critical asset. While IT departments reinforce their digital defences with state-of-the-art firewalls, encryption methods, and stringent access controls, shadow AI tools operating at the periphery may not be subject to these protective measures,” Vishal Gupta, CEO and co-founder, Seclore, a cybersecurity company, said.

Reportedly, a late 2023 survey asked public relations (PR) professionals worldwide to select options they deemed to be risks that generative AI posed to their industry. About 67% of respondents already utilising AI at work said that younger or newer PR professionals have shown little interest in learning the profession’s principles and rely too heavily on such tools. Case in point: in 2023, Samsung restricted the use of generative AI tools such as ChatGPT for employees after the company discovered such services were being misused. Using a large language model (LLM), especially an unsanctioned one developed outside the enterprise’s data management policy framework, could expose sensitive company and customer data. “Sharing sensitive information with an LLM could put the organisation’s intellectual property and business strategies at risk, empowering competitors and eroding competitive advantages. Legal risks follow functional and operational risks if shadow AI exposes the company to lawsuits or fines,” Vineet Kumar, founder, CyberPeace Foundation, highlighted.

Shadow AI: The double-edged sword

Shadow AI is considered complex and potentially harmful, as it operates without IT governance. Many enterprises and start-ups in the IT sector have embraced LLM and AI projects across their IT system portfolios, which exposes them to the threat of ‘AI hallucinations’, where an LLM generates incorrect information. The shared content can come from market contributors whose legitimacy and accuracy may be questionable. At times, there is limited ability to gauge an author’s credentials or to ensure the content is not coloured by the author’s own prejudices, resulting in hallucinated output from the AI engine. “The lack of a fool-proof testing lab or resource to check the compliance of emerging gen AI tools before fresh sanctions of new applications and tools, among others, can promote shadow AI. The lack of commonly accepted responsible AI compliance standards from governing authorities can also contribute to it,” Arun Moral, managing director, Primus Partners, a management consultancy, explained.

In 2022, 41% of workers acquired, modified, or created technology outside of IT’s visibility, and this number could rise to 75% by 2027, as per insights from Gartner. Industry experts believe the industry will witness a rapid increase in the acceptance and usage of shadow AI tools across all generations of users. Looking ahead, the future implications of shadow AI are profound. Its evolution can demand continuous adaptation and innovation in cybersecurity strategies. Proactive efforts, such as interdisciplinary research and the development of ethical frameworks, are essential. “The risk presented by shadow AI is twofold, where understanding ‘how’ it operates and ‘why’ it poses a threat is imperative for developing effective mitigation strategies. Cybersecurity measures, industry collaboration, and stakeholder education, among others, can help in fortifying defences against the threat of shadow AI,” Ravinder Rathi, vice president, technology, Quarks Technosoft Pvt Ltd., a custom software development company, explained.

It is believed that some of the large open gen AI players such as Google and Microsoft, among others, have carved out exclusive responsible-AI cohorts within their R&D units. These cohorts help hold the industry responsible for its AI tools. However, the government is yet to support the industry’s attempts to regulate the space of open and responsible AI. “The development of shadow AI also calls into question how government supervision and regulation influence the moral and legal frameworks that govern AI technologies. Securing responsible AI development while protecting human rights and social values presents a challenge for policymakers, who must strike a balance between innovation and risk avoidance,” Hoor Fatima, assistant professor, computer science and engineering (CSE), Sharda University, concluded.



This article was first uploaded on February 13, 2024, at 8:00 am.