Alongside its positive impact, artificial intelligence (AI) is also facing backlash. We have often come across false information spread by AI. To address this, Microsoft has come up with a new AI tool that can fix mistakes made by AI.
The tech giant announced in an official blog that it will roll out ‘Correction’, an AI tool that not only identifies AI-generated mistakes but also corrects them. Microsoft said the tool doesn’t need any prompt, as it detects errors automatically.
Meet ‘Correction’: your AI corrector
The tool ‘Correction’ can automatically detect and rectify false information generated by AI. Microsoft explained that the tool is built to filter out content that could mislead you. According to the official blog, ‘Correction’ is part of the groundedness detection capability in Microsoft Azure AI Content Safety. With it, developers can fix hallucination issues in real time, before users see them.
The company added that Correction is powered by a new process that uses both small and large language models to align AI outputs with grounding documents.

So, how does this feature work? First, a classifier model scans AI-generated text for potentially incorrect or fabricated information. If the classifier flags a hallucination, a second model rewrites the flagged passage, using small and large language models to align it with verified reference material known as “grounding documents.” In short, two AI models work in tandem: one detects errors, the other corrects them.
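The detect-then-correct flow described above can be sketched in code. The toy example below is an illustration only, not Microsoft's implementation: the real Correction feature uses language models for both stages, while here simple word matching stands in for the classifier and the rewriter, and the example sentences are invented.

```python
# Toy sketch of a two-stage "detect, then correct" pipeline.
# Stage 1 stands in for the hallucination classifier; Stage 2 stands
# in for the model that rewrites flagged text against grounding documents.

def detect_hallucinations(sentences, grounding_docs):
    """Stage 1 (classifier stand-in): flag sentences whose words do not
    all appear in the grounding documents. A real classifier would use
    a language model instead of string matching."""
    grounding_text = " ".join(grounding_docs).lower()
    flagged = []
    for i, sentence in enumerate(sentences):
        words = [w.strip(".,").lower() for w in sentence.split()]
        if not all(w in grounding_text for w in words):
            flagged.append(i)
    return flagged

def correct(sentences, flagged, grounding_docs):
    """Stage 2 (rewriter stand-in): replace each flagged sentence with
    the grounding sentence that shares the most words with it."""
    corrected = list(sentences)
    for i in flagged:
        words = set(sentences[i].lower().split())
        corrected[i] = max(
            grounding_docs,
            key=lambda d: len(words & set(d.lower().split())),
        )
    return corrected

# Invented example: one grounded sentence, one hallucinated date.
grounding = ["The Eiffel Tower is in Paris.", "It was completed in 1889."]
draft = ["The Eiffel Tower is in Paris.", "It was completed in 1925."]

flagged = detect_hallucinations(draft, grounding)   # flags the wrong date
fixed = correct(draft, flagged, grounding)          # aligns it with grounding
```

The design mirrors the article's description: detection and correction are separate models, so the corrector only runs on text the classifier has flagged.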
What are AI hallucinations?
The development of AI has taken the world by storm, with AI being integrated into our daily lives. However, there have been incidents where AI has misled users with wrong information; in other words, we have been victims of ‘AI hallucinations.’ But what are AI hallucinations, and how can they affect you? An AI hallucination is a glitch in which an AI system confidently produces false information. These are often associated with AI-based search results. For example, a search for an eminent personality might return an AI-generated image instead of a real one, which could not only spread misinformation but also damage that person’s reputation.
“We hope this new feature supports builders and users of generative AI in fields such as medicine, where developers must ensure the accuracy of responses,” a Microsoft spokesperson told TechCrunch.
Follow FE Tech Bytes on Twitter, Instagram, LinkedIn, Facebook