Tech companies can learn from the NeurIPS conference, which screens submissions for potential biases and misuse
Examples of human bias creeping into artificial intelligence (AI) algorithms may be a dime a dozen, but little is being done by companies to factor ethical considerations into AI decisions. Rather, most companies tend to carry out corrections post facto, with much apologising, of course. However, a reputed conference is trying to change the rules of the game. As per Nature, the Neural Information Processing Systems (NeurIPS) conference that took place earlier this month incorporated an ethics board to screen papers and assess whether the technology proposed would have any potential side-effects, or whether the research could have unintended uses. The aim of the board is to encourage participants to go beyond their remit and develop technology that considers its larger ramifications for society. This year, of the 9,467 submissions, 290 were flagged, and four were even rejected.
Although many would argue that screening papers on ethical grounds would take the focus away from research, it must also be considered that the world is increasingly battling cases of technology misuse. For instance, deepfake technology is being used by miscreants to spread fake news. And although companies like IBM have withdrawn their facial recognition offerings for policing, many police departments and city administrations still use such technology, despite reports of bias and faulty recognition. Ethical screening would force researchers to consider aspects of privacy and the incorrect use of technology. Given how deeply technology can penetrate everyday life, developers need to avoid creating tech-Frankensteins, and NeurIPS shows the way.