The evolution of self-learning algorithms and sophisticated computers has posed new challenges to the enforcement of antitrust laws. A decade ago, nobody would have thought that these algorithms would become a competition concern. With the Internet of Things (IoT), our everyday movements are continuously tracked, and companies use our data to entrench their profitable positions in the market. The purpose of this data collection is to conquer the future market for Artificial Intelligence (AI). It is therefore relevant to ask whether AI calls for regulatory intervention. (Merriam-Webster defines AI as the development of software and computers capable of self-learning and intelligent behaviour.)
While developing a future system for AI, it is important to bear in mind the standard-setting process, to safeguard the future balance of competitive forces in the market, and not to overlook the legal challenges posed by AI. It raises questions about the relationship between man and machine, the ability of humans to control 'deep-learning' algorithms fed by data, human liability and accountability for machine activities, and the antitrust liability of algorithm creators and users. Take, for example, the sophisticated pricing algorithms used by commercial giants in online platform markets: they carry a potential risk of tacit collusion. Prima facie, they appear to promote information symmetry and perfect price transparency; in practice, however, they feed data-driven business models that help predict markets. This has enabled online trading platforms to process Big Data in real time and thereby make more accurate decisions.
Broadly, competition law prohibits anticompetitive agreements, abuse of dominance, and mergers that reduce competition. In the case of AI, however, it is difficult to establish an illicit agreement: each operator, aware of the algorithms its competitors are developing, is likely to adopt similar pricing algorithms, and coordinated outcomes can emerge without any express agreement. Unlike in earlier decades, agreements are no longer struck expressly between executives in smoke-filled rooms; they arise in the digital world through automated algorithms, leading to more elusive forms of collusion.
Conscious parallelism by firms in online markets, even where it leads to equilibrium prices above competitive levels, does not attract antitrust provisions. The main challenge before competition authorities is therefore to bring within their scanner algorithm developers who program machines to unilaterally support tacit collusion, and agencies currently lack the enforcement tools to do so. Such cases might instead be prosecuted under the banner of 'unfair trade practices'. Since 'anticompetitive intent' is a strong ground for establishing cartel-like activity, legislation countering excessive transparency can also do its bit where competitors in the market abuse that transparency. A question nonetheless remains open: if tacit collusion is itself legal, does its formation in an algorithm-led marketplace violate antitrust laws? The probable answer is that online trading companies can be held liable if they were motivated by an anticompetitive objective that chills innovation and competition, or if they were fully aware of the nature of their pricing algorithms and their anticompetitive consequences.
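To make the concern concrete, consider a deliberately simplified sketch (hypothetical code, not any real firm's algorithm; the rule and numbers are invented for illustration) of how two independent pricing bots, each following the same unilateral "match the rival, then nudge upward" rule, can settle on supra-competitive prices without any agreement or communication between them:

```python
# Hypothetical illustration only: two sellers each run the same unilateral
# repricing rule. Neither communicates with, or agrees with, the other;
# prices nonetheless drift well above the competitive level in parallel.

COMPETITIVE_PRICE = 100.0
CEILING = 150.0  # invented upper bound standing in for a monopoly-like price

def reprice(own_price, rival_price):
    """Unilateral rule: match a cheaper rival rather than start a price war;
    otherwise creep upward, anticipating the rival's algorithm will follow."""
    if rival_price < own_price:
        return rival_price
    return min(own_price * 1.02, CEILING)

a = b = COMPETITIVE_PRICE
for _ in range(50):  # repeated interaction in an online marketplace
    a = reprice(a, b)
    b = reprice(b, a)

print(round(a, 2), round(b, 2))  # prints: 150.0 150.0
```

The point of the sketch is that neither function ever "agrees" with the other; the supra-competitive outcome emerges from parallel, unilateral conduct, which is precisely why such behaviour is hard to reach with agreement-based antitrust provisions.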
In this context, we must ask whether, with the advent of pricing algorithms, unilaterally coordinated firm behaviour, and Big Data, the market's invisible hand is still sufficient to promote competition. A shift towards 'smart regulation' and intervention through 'digitised hands' is suggested. In the case of Uber, for instance, an algorithm decides the base price for ride-sharing and determines when to apply a surge price, in which areas, for how long, and to what extent; Uber defends surge pricing by invoking demand-supply dynamics. To address this, it is proposed that governments make use of Big Data and data analytics to set a market price effectively. This would assure consumers that prices are competitive and that the pricing algorithms used by the government are equally reliable.
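The demand-supply defence can be illustrated with a toy surge-pricing rule of the kind the passage describes. This is a hypothetical sketch, not Uber's actual logic: the multiplier, thresholds, and cap are all invented for the example.

```python
# Hypothetical sketch of demand-driven surge pricing (invented parameters,
# not any real platform's algorithm): the fare multiplier rises with the
# ratio of ride requests to available drivers, subject to a cap.

def surge_multiplier(ride_requests, available_drivers, cap=3.0):
    """Scale the fare by the demand/supply ratio, capped to limit extremes."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return min(max(1.0, ratio), cap)

def fare(base_price, ride_requests, available_drivers):
    return base_price * surge_multiplier(ride_requests, available_drivers)

print(fare(100, 80, 100))   # slack supply, no surge -> 100.0
print(fare(100, 200, 100))  # demand twice supply    -> 200.0
print(fare(100, 500, 100))  # extreme demand, capped -> 300.0
```

A regulator auditing such a rule could ask exactly the questions the article raises: who sets the cap, how the ratio is measured, and whether the parameters are tuned to track demand or to extract supra-competitive prices.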
Competition agencies are currently struggling to design new tools to address the difficulties posed by AI. It is evidently challenging to frame a test for the illegality of agreements involving AI, and a specialist study of the algorithms is required to determine the defendants' intention. Ironically, the rule-of-law values of transparency, predictability, and accuracy can prove harmful in the AI space. In such cases, it is important to consider the degree of control the algorithm user has over the machine. Is it possible to design algorithms that operate with proper checks and balances, safeguard consumer welfare, and still fulfil the objective of profit maximisation? The answer is difficult, given the complexity of algorithms operating on voluminous data. An alternative approach would be to encourage antitrust regulators to call for more information on the algorithms used in computerised market environments, so as to determine the level of transparency they end up creating in the market. It remains to be seen how courts and regulators will respond to this futuristic challenge posed by AI in antitrust enforcement.
By Nidhi Singh, Counsel, Delhi High Court, and Founder, Institute for Commercial Law & Policy Research, Delhi