Trust & antitrust in the age of AI

Training of AI tools can be an ethical and legal minefield.

Training AI tools can present ethical and legal challenges. (Image Source: Getty Images)

A few days ago, California-based online education company Chegg filed a lawsuit against Google, alleging that the AI-generated ‘overviews’ the tech giant displays atop every search result are leading to a significant decrease in traffic to Chegg’s website.

Chegg’s lawsuit, filed in Washington DC, said that Google is not only violating US antitrust laws but is also creating a “hollowed-out information ecosystem of little use and unworthy of trust”.

Isha Suri, research lead at the Centre for Internet and Society (who has worked in areas including competition law, internet governance, intellectual property rights, and privacy and data protection), explains to FE, “When it comes to antitrust, the issue at hand is that Google is allegedly abusing its position as a dominant player which has come to be essential infrastructure in the online advertising market, where it plays a dual role as that of an intermediary, and also a market participant.” This means that people who advertise through Google are also competing with the tech giant.

While the lawsuit followed Chegg’s shares declining by over 98% from their 2021 peak, and the company’s announcement that it would lay off almost a fifth of its staff later this year, this is not the first such allegation against Google.

In the past two years, Google has also been locked in legal battles with news publishers, who have taken it to court across several countries. The publishers allege something similar to Chegg’s claims: that Google has monopolised the ad market, that it is training its AI tools by letting them crawl news websites without compensating publishers for their work, and, as Chegg put it, that it is eroding the “financial incentives” to publish original content.

Notably, in August 2024, a US district court judge in Washington DC ruled that Google had “illegally monopolised” the online search and ad market.

Closer home, news wire Asian News International (ANI) also sued California-based AI giant OpenAI last year, accusing the latter of training its large language models (LLMs) on copyrighted work that ANI has produced. While OpenAI denied all the claims, the case opened a bigger debate about how copyright functions in the age of the internet.

Says Suri, “News publishers have been saying that AI companies are using their copyrighted work and not compensating them for it. There are a lot of conversations to be had here – should knowledge not be free, do the knowledge generators then not have a genuine claim, etc. I think there’s a case to be made about what fair compensation actually looks like for owners of copyright. With multiple ongoing cases, it’s difficult to say, at the moment, what that fair compensation would be though.”

It’s also interesting that while AI companies are training their tools and bots on data which they claim is “publicly available” and in the “open domain”, they are simultaneously entering into exclusive partnerships and licensing deals with different publishers. Suri tells FE, “Google has a licensing deal with Reddit, which means no other AI company can train its tools on Reddit’s content — this is something only deep-pocketed players can afford to do.”
In early 2024, around the same time as Google and Reddit announced their licensing deal, OpenAI also entered into contractual agreements with Vox Media, The Atlantic, News Corp, Dotdash Meredith, and the Financial Times to use their content for training its AI tools.

What this indicates is that the publishing world is divided: some organisations are taking the AI giants to court, while others are collaborating with them. But it also shows that whether publishers take the AI way or the highway, they are essentially in competition with tools trained on their own work, tools whose ‘intelligence’ depends, to a certain extent, on original content that goes uncompensated.

Where does one find the resolution for these issues, then? 

Litigation is long and expensive, while technology and AI are getting smarter with each second. “I think there is a need for regulatory predictability here and that would be possible only if we have frameworks for AI governance that the companies are required to follow. We need to have conversations across different ministries, regulatory bodies, and stakeholders to arrive at a solution that adequately addresses diverse concerns at play here,” advises Suri.


This article was first uploaded on March 9, 2025, at 4:00 am.