By Siddarth, Sanjeev Prashar and Archana Parashar

Recent developments have cast a spotlight on a pressing issue in recruitment: AI bias. A US court recently allowed a lawsuit to proceed against Workday, whose AI-driven recruitment software is accused of age discrimination for unfairly rejecting older candidates despite their qualifications. AI recruitment tools are showing troubling signs of bias in India too: one AI interview platform recorded a forty percent rejection rate for women and candidates from marginalised backgrounds. This raises an uncomfortable question: can AI truly create a fair hiring process, or is it simply automating the biases of the past?

AI in hiring

Artificial intelligence is revolutionising recruitment, helping HR departments screen resumes, shortlist candidates, and conduct interviews with greater speed, fewer errors, and, in principle, less bias. It promises to help businesses identify the best talent efficiently and fairly, without the traditional prejudices that cloud human judgment. But AI is only as unbiased as the data it is trained on; flawed training data will simply reproduce the old biases.

Hidden biases of AI

Despite AI’s promises, bias in hiring remains a serious concern. While many believe AI can remove human prejudice, it often ends up reinforcing existing biases. Algorithmic bias occurs when the way a system is designed, or the way it weights information, produces discriminatory outcomes. Sample bias arises when the data used to train the system is not diverse enough: if an AI tool learns mainly from the resumes of one group, say male or urban candidates, it will favour that group, even when equally qualified women or candidates from other backgrounds apply.
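To make the mechanics concrete, here is a minimal Python sketch, using entirely hypothetical data, of how a screening rule derived from skewed historical hires scores the over-represented group higher even when candidates are equally qualified.

```python
# Minimal sketch with hypothetical data: a naive screening rule "learned"
# from skewed historical hires favours the over-represented group, even
# though qualification is identical across candidates.

from collections import Counter

# Hypothetical training records: (gender, qualified, hired) from past cycles.
history = [
    ("male", True, True), ("male", True, True), ("male", False, True),
    ("male", True, True), ("female", True, False), ("female", True, True),
]

# The "model": estimate P(hired | gender) from history and use it as a score.
# Note that qualification never enters the calculation.
hired_by_gender = Counter(g for g, _, hired in history if hired)
total_by_gender = Counter(g for g, _, _ in history)
score = {g: hired_by_gender[g] / total_by_gender[g] for g in total_by_gender}

print(score)  # {'male': 1.0, 'female': 0.5} -> equally qualified women rank lower
```

Any real tool is far more complex than this, but the failure mode is the same: patterns in who was hired before become the yardstick for who gets shortlisted next.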

In India, these biases are becoming increasingly evident. ZinterviewAI, a platform designed to streamline hiring with AI-powered interviews, had a forty percent rejection rate for female candidates and those from marginalised backgrounds. This points to a broader problem: AI systems that are not trained on sufficiently diverse data will fail to assess candidates fairly. As AI adoption grows in India’s IT sector, the concern is that such biases become embedded across recruitment tools, reinforcing existing inequalities.

Gender bias is becoming digital. Women who take career breaks for caregiving or maternity are often penalised by AI systems that flag these breaks as “inconsistent experience” or downgrade them for lack of recent exposure. Women who opt for part-time roles are penalised too, subtly reinforcing traditional gender expectations about work. Recruitment AI must be designed carefully enough to avoid encoding these expectations.

Fixing the biases: Addressing AI bias in recruitment isn’t easy, but it is necessary. Companies must take deliberate steps to keep their AI systems fair and accurate. Here’s how:

Regular audits and monitoring

AI systems need to be regularly audited to ensure they don’t perpetuate bias. The Workday lawsuit is a reminder that unchecked AI can lead to significant problems. Regular audits will help identify biases early and ensure AI systems perform as intended.
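By way of illustration only, and not a description of any vendor’s actual tooling, the Python sketch below uses made-up numbers to show the kind of check an audit can run: compare selection rates across groups and flag the system when the ratio falls below the widely used “four-fifths” threshold for adverse impact.

```python
# Illustrative audit with assumed screening results for one hiring cycle:
# compare selection rates across groups and compute the adverse-impact ratio.

outcomes = {
    "men":   {"advanced": 120, "applied": 200},
    "women": {"advanced": 45,  "applied": 150},
}

selection_rate = {g: v["advanced"] / v["applied"] for g, v in outcomes.items()}
impact_ratio = min(selection_rate.values()) / max(selection_rate.values())

print(selection_rate)          # {'men': 0.6, 'women': 0.3}
print(round(impact_ratio, 2))  # 0.5 -> well below 0.8, so the tool needs review
```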

Diversifying training data and implementing moving averages: AI models must be trained on diverse data that represents varied segments – genders, castes, regions, and educational backgrounds. Moving averages over recent hiring data should be used to keep AI tools in step with current trends, so they do not rely on outdated data that reflects past biases. HR automation platforms should refresh their data regularly so that it reflects the modern, diverse workforce.
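One way to read this recommendation, sketched below with assumed monthly figures, is to track a moving average of recent screening outcomes: it smooths out noise and exposes a steady drift against one group that a one-off snapshot might miss.

```python
# Hypothetical monthly share of women among shortlisted candidates.
monthly_share = [0.42, 0.40, 0.38, 0.35, 0.33, 0.31]
window = 3

# A three-month moving average smooths noise and exposes the underlying trend.
moving_avg = [
    sum(monthly_share[i - window:i]) / window
    for i in range(window, len(monthly_share) + 1)
]
print([round(x, 2) for x in moving_avg])  # [0.4, 0.38, 0.35, 0.33] -> a steady decline worth auditing
```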

Human oversight: While AI assists, human oversight remains critical. Even the best AI systems cannot evaluate qualities like cultural fit, creativity, or emotional intelligence. By involving humans in the final stages of decision-making, companies can ensure AI-generated recommendations are reviewed for fairness.

Transparency and accountability: Candidates deserve to know how their applications are evaluated, particularly when AI is involved. Such transparency holds companies accountable and ensures their AI systems align with the fairness, diversity, and inclusivity that are essential for trust in HR systems.

Conclusion: AI holds the promise of transforming recruitment, making it faster, more efficient, and ostensibly more objective. Left unchecked, however, it can perpetuate existing discrimination or introduce new forms of it. Businesses must commit to regular audits, diverse training data, moving averages, human oversight, and transparency, so that AI becomes a tool for fairer, more inclusive hiring and equal opportunity for all candidates.

The authors are faculty at Indian Institute of Management Raipur

Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com. Reproducing this content without permission is prohibited.