Indian IT firms are launching responsible AI solutions to help enterprises balance innovation with ethical considerations and maximise their return on investments. The need for responsible AI becomes apparent when a model errs in its answers.
Infosys last week launched its responsible AI suite, a part of Infosys Topaz, an AI-first set of services, solutions and platforms using generative AI.
According to the Infosys generative AI radar, by Infosys Knowledge Institute, “enterprises worldwide are identifying data privacy, security, ethics and bias as the primary challenges in their pursuit of innovation with AI”.
Phil Fersht, CEO and chief analyst, HFS Research, said, “With the challenges of responsible AI currently forcing many enterprises to slow their progress towards achieving scaled value with AI, smart offerings such as Infosys Topaz’s responsible AI suite can clear the path to help them accelerate their critical AI initiatives”.
Another IT firm, Coforge, launched its “Quasar responsible AI solution” last December. “Coforge Quasar responsible AI is a comprehensive solution that tackles biases in datasets and models, identifies potential risks and compliance issues, and provides tools to govern, mitigate and remediate these challenges,” the company said in a statement.
Similarly, last year another IT firm, Sonata Software, launched Harmoni.AI – a responsible-first AI offering for enterprise scale, leveraging the power of generative AI. The company claims that its ‘responsible by design’ approach ensures uncompromising ethics, trust, privacy, security, and compliance. The IT firm is helping enterprises leverage the most relevant use cases for their specific business needs within a governed framework.
Akhilesh Tuteja, partner & national leader, clients and markets and technology, media & telecommunications, KPMG in India, said that unlike traditional computing, where answers are always predictable (i.e., the same inputs give the same outputs), AI produces probabilistic outcomes because it tries to predict. An AI system works on a probabilistic model rather than a deterministic one.
“The range of probabilities can be very wide. One may not get the answer one is looking for. Therefore, one has to make sure that AI used for mission-critical systems uses the right kind of parameters and is going to give the right answers,” Tuteja said.
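The contrast Tuteja draws between deterministic and probabilistic computing can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the next-word distribution below is entirely hypothetical.

```python
import random

def deterministic_add(a, b):
    """Traditional computing: the same inputs always give the same output."""
    return a + b

def probabilistic_next_word(distribution, rng):
    """AI-style generation: the output is sampled from a probability
    distribution, so repeated calls with identical inputs can differ."""
    words = list(distribution)
    weights = list(distribution.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical next-word probabilities for the prompt "the sky is".
dist = {"blue": 0.7, "clear": 0.2, "falling": 0.1}

# The deterministic call never varies; the sampled call can.
assert deterministic_add(2, 3) == deterministic_add(2, 3)
rng = random.Random()
samples = {probabilistic_next_word(dist, rng) for _ in range(50)}
```

Because the model samples rather than computes a fixed answer, even a low-probability token such as "falling" can occasionally be emitted, which is why mission-critical systems need guardrails on the parameters.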
Responsible AI sets the right kind of boundaries. Its key aspects are transparency, accountability, accuracy and explainability, meaning the system can explain why it arrived at a given answer, added Tuteja.
When guardrails are set, temperatures are also set. Temperature, in AI parlance, is a parameter used to control the randomness of outputs or answers. If the temperature is low, the range of randomness is limited; when it is higher, the randomness increases and the AI's imagination can run wild. “If one allows AI to be more creative, the chances of hallucination will be higher. Responsible AI requires one to comply with data governance, which basically means the data that goes into the model is clean, so that it doesn't give wrong answers,” concluded Tuteja.
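The effect of temperature described above can be shown with the standard temperature-scaled softmax used in language-model sampling. This is a generic sketch of the technique, not code from any of the products mentioned; the logits are made-up scores for three candidate tokens.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before softmax.
    Lower temperature -> sharper, more deterministic distribution;
    higher temperature -> flatter, more random distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for three candidate words.
logits = [2.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, 0.2)   # top token dominates
high_t = softmax_with_temperature(logits, 2.0)  # probability spreads out
```

At a temperature of 0.2 the top token takes almost all of the probability mass, while at 2.0 the distribution flattens and weaker candidates are sampled far more often, which is exactly the "wild imagination" (and hallucination risk) Tuteja warns about.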