By Siddharth Pai
Last week, my brother-in-law Dr. Ragavendra Baliga, professor of cardiology at Ohio State University College of Medicine, sent me a clipping of an obituary for Robert E Lucas Jr, who passed away about a week ago. Lucas shifted prevailing thinking in macroeconomics with an argument that came to be known as the “Lucas critique”. He came to be regarded as a key founder of modern macroeconomics and won the Nobel Prize in economics for his efforts.
Along with the critique, his key contribution was a dynamic model showing that inflation had no effect on the long-run average unemployment rate.
He introduced this in the 1970s, at a time when most macroeconomists thought increased inflation could lower unemployment rates by nudging more people into the workforce. While counterintuitive at the time, the Lucas critique has influenced economic thinking since.
While inflation and unemployment are certainly on many people’s minds nowadays, I want to shift to another idea, this one defined by a colleague of Lucas’ at the University of Chicago who was also a Nobel laureate and who likewise first set out his theory in the 1970s. This was George Stigler, who defined the “economic theory of regulation”, sometimes known simply as the phenomenon of “regulatory capture”.
Stigler observed that regulated industries have a direct and immediate interest in influencing regulators, whereas ordinary people and other participants in an economy are less motivated to do so.
As a result, even though the rules in question, such as food purity standards, often affect everyone in an economy, individuals are not likely to lobby regulators to the degree that regulated industries do. Not surprisingly, this influence can lead to policies or decisions that favour the industry’s interests rather than protecting the public.
Investopedia (bit.ly/41Wsm15) says that “Regulatory agencies that come to be controlled by the industries they are charged with regulating are known as captured agencies, and agency capture occurs when that governmental body operates essentially as an advocate for the industries it regulates. Such cases may not be directly corrupt, as there is no quid pro quo; rather, the regulators simply begin thinking like the industries they regulate, due to heavy lobbying.”
Google’s billions of users will soon see its latest generative AI used in several products like Gmail, Maps, Docs, Sheets, and the company’s chatbot, Bard. According to MIT Technology Review (bit.ly/43ie4Jm), “a user will be able to simply type a request such as ‘Write a job description’ into a text box that appears in Google Docs, and the AI language model will generate a text template that users can customise. Because of safety and reputational risks, Google has been slower than competitors to launch AI-powered products. But fierce competition from competitors Microsoft, OpenAI, and others has left it no choice but to start.”
The publication goes on to say “It’s a high-risk strategy, given that AI language models have numerous flaws with no known fixes. Embedding them into its products could backfire and run afoul of increasingly hawkish regulators, experts warn.”
The view that regulators are getting more hawkish about generative AI is certainly supported by some recent events. For instance, Italy became the first Western country to ban ChatGPT over privacy concerns.
Last week, we saw Sam Altman, CEO of OpenAI (backed by Microsoft), the organisation that created ChatGPT, testify to the US Congress that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. This was startling coming from the creator of the first widely usable generative AI model, even if that model is still hampered by having access to less data than we might think, and by the fact that it often simply makes things up.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Altman said at the Senate hearing.
Further, he proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
We should not be fooled by Altman’s scrubbed-face presence at the US Congress and his seeming candour in saying that AI should indeed be regulated. (Incidentally, ChatGPT was available again in Italy within four weeks of the ban, after OpenAI made efforts to “address or clarify” all the issues raised.)
Such seemingly stark candour is a tool I have often seen employed to win over audiences: a shocking “mea culpa” is introduced into the debate. It is just another way to smartly go about the business of regulatory capture, where regulation ends up being friendly to the few firms that control this technology.
The Investopedia discussion of regulatory capture gives a perfect example of how such seemingly co-operative behaviour hijacks the discussion around sensible regulation. The transportation industry in the US can be considered a classic example of regulatory capture.
In the late 19th century, as the industrial revolution created vast new wealth, government trade regulators ended up advocating for the industries they oversaw, including the railroads. The large railroad companies themselves, meanwhile, advocated for regulation by the Interstate Commerce Commission (ICC) under the Interstate Commerce Act of 1887, and the ICC allowed the railroad industry to function as an effective cartel.
Unless we are very careful, we should expect global regulatory capture by the merchants of generative AI.
(Siddharth Pai is a technology consultant and venture capitalist)