Last week, OpenAI, an AI research company, launched GPT-3, the third iteration of its natural language processing software. Considered one of the most advanced natural language processing models, GPT-3 is already proving its mettle: tests show it can be used to write code, query databases, and even compose tweets and headlines. Its most significant feature, though, is its scale: at 175 billion parameters, it is more than 100 times larger than GPT-2.
Although the full suite of functions is not yet available, test results from GPT-2 show how far NLP has progressed. On a children's textbook writing task, GPT-2 reached 89% accuracy; humans did only slightly better, at 92%. GPT-3 is expected to surpass that.
But what AI/ML must still contend with is the problem of inherent bias. Because most training data is collated from online sources, it is heavily influenced by vox populi: if opinion is tilted in favour of one side, the technology will replicate that bias. Microsoft ran into this problem when it launched its chatbot, Tay, on Twitter in 2016. Tay behaved well for a few hours but was soon corrupted by users, spewing extremist propaganda. GPT-3 could face similar problems.
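The mechanism behind this is simple to illustrate. The toy sketch below (a hypothetical example, not GPT-3's actual architecture) "trains" a trivial next-word predictor by counting word frequencies in a small, deliberately skewed corpus; because the model only learns what its data contains, its prediction echoes the majority opinion in that data.

```python
from collections import Counter

# Hypothetical, deliberately skewed training corpus: two of three
# sentences express the same opinion about the subject.
corpus = [
    "engineers are brilliant",
    "engineers are brilliant",
    "engineers are careless",
]

# "Train": count which word follows "are" in each sentence.
counts = Counter(sentence.split()[2] for sentence in corpus)

# "Predict": the model simply outputs the most frequent continuation,
# i.e. it replicates whatever tilt exists in its training data.
prediction = counts.most_common(1)[0][0]
print(prediction)  # → "brilliant", the majority view in the corpus
```

A real language model is vastly more sophisticated, but the failure mode is the same in kind: scale up the corpus to the open internet and the model absorbs, and reproduces, the internet's tilts.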
The issue is complicated by scale. Given this advancement, GPT-3 or any comparable algorithm could generate, say, fake news faster, and in a form that is harder to fact-check, than before. This is why calls for responsible AI have grown louder. Achieving it, though, will require governments, companies and researchers to collaborate.