Last year, Goldman Sachs was blamed for letting human biases creep into the algorithm behind the Apple Card: it unknowingly assigned higher credit limits to men than to women. But Goldman and Apple are not the only ones to have faced such issues. A few years ago, Microsoft was the target when its Twitter bot Tay, within a day of release, began spewing Nazi hate speech and a white-supremacist agenda. The company realised that AI was not mature enough to know what to absorb and what to ignore. IBM chief Ginni Rometty is calling for precision regulation of AI, selectively allowing it in certain sectors while disallowing it in others. In the case of facial recognition, for instance, she says it is good for airports to have the technology to screen for potential terrorists, but not for governments to use it.
A few weeks earlier, Google chief Sundar Pichai had weighed in on the same question. But does AI need more regulation, or should it be left unregulated? Regulation may carry its own biases, and some argue that unregulated AI systems have a better chance of operating acceptably: the AI screens only for what it needs and reports without bias. For any of this to work, however, AI will have to be overseen by civil society actors, governments and companies together. Unless all the actors come to the table, there can be no solution.