GPT-3 writing an op-ed piece in The Guardian shows the progress in AI development, and its present limits
The op-ed by GPT-3, OpenAI’s natural language processing model, in The Guardian, was titled “A robot wrote this entire article. Are you scared yet, human?” Should we be scared, or should we marvel? The op-ed reads as if a human wrote it, so that is the now-elementary Turing test passed. There was some complexity of “thought” reflected in GPT-3’s writing, though we could chalk that up to the fact that GPT-3 is the largest trained model so far, with 175 billion parameters. It was repetitive at times, but so can be the most intelligent, and predictably, the dullest, among us. The article’s purpose was to convince the human reader of AI’s great dispassion towards controlling humans or playing their adversary in any way. But the facts are that this was its brief for the piece and that, all told, it relies on inputs from a human. Still, if its argument convinces a reader, that is reason to marvel, even if it means we must set our fears aside for the time being. GPT-3’s predecessor, GPT-2, wrote a children’s story with “89% accuracy”.
The editorial note appended to GPT-3’s op-ed provides more insight than the op-ed itself. The Guardian says it had to edit together eight different articles that GPT-3 had written and had to provide cues to the robot. In other cases, such as filling in a spreadsheet, GPT-3 has been found to autocomplete entries with inaccurate information.
But OpenAI and its peers are also working to create more accurate AI. Earlier this month, Diffbot announced that it would try to surpass GPT-3 by using more trustworthy approaches, like knowledge graphs, to build more accurate systems. When that happens, we will again marvel at our own brilliance, and also fear what we think is coming.