AI Impact Summit 2026, the world’s largest five-day meet-up of global leaders and some of the biggest voices in artificial intelligence, officially began in Delhi on February 16. The summit’s organisers also drew heavy criticism yesterday for poor crowd management at Bharat Mandapam.

The summit hosted a wide range of workshops and discussions, each featuring a panel of individuals working at the top level of the tech ecosystem.

One of the key talking points to emerge from the summit was how to ensure the safety of people using artificial intelligence, particularly members of marginalised communities.

Speaking at a panel titled ‘Women, Work and The Future of AI,’ Kalika Bali, principal researcher at Microsoft, and Dr Urvashi, founder and CEO of Digital Futures Lab, raised serious concerns about the general trajectory of artificial intelligence and the way it is being developed.

Talking about the impact of artificial intelligence on the global power equation, Kalika Bali said that the current trajectory of development poses a grave danger to marginalised groups and communities.

Bali remarked that, given the way data collection is taking place across the world, there is a great danger of safeguards and ethnographic considerations for women and marginalised communities being overlooked in how AI is developed.

Notably, Kalika and Dr Urvashi’s comments came shortly after the Grok deepfake controversy, in which people across the world used Elon Musk’s Grok AI to publicly produce and propagate fake, unclothed and sexualised images of women ministers, influencers, British royalty, and the daughters and sisters of famous personalities.

“There is a great danger because of these requirements, because of how AI is built. It is very easy to kind of overlook all minority or marginalised communities, but especially women, who are the biggest minority community in the world,” Kalika told those present at the panel.

Can AI lead to the persecution of immigrants across the globe?

Dr Urvashi raised some of the most important points about the dangers of ‘accelerated’ technological development. She pointed out that if we continue to create databases of people across the world, that same data, combined with facial recognition technology, can be used to violate the rights of ethnic minorities.

“Surveillance, for example, where you have strict immigration control building that data set of marginalised populations, is actually quite risky because then you’re actually creating the basis for those technologies which are harmful to human rights to actually work really well,” she warned.

Dr Urvashi also voiced fears of this soon becoming a reality in places where governments have imposed strict immigration control. During the panel, the group of women leaders agreed that, on some level, the trajectory of AI development is reproducing inequality.

“AI systems are reproducing inequality. They are centralising power. They’re creating new forms of vulnerability, and it is those who are marginalised already that are disproportionately impacted by those new vulnerabilities, including women,” Dr Urvashi added.

What’s the Solution?

Talking about a potential solution to this rather large problem, both Kalika Bali and Shachi Bhalla, deputy director of gender equity at the Gates Foundation, recommended that organisations engage in ethnographic studies, consulting ground-level women workers and minority groups to better embed safeguards in AI models.

The panellists also recommended creating systems to ensure that the women employed in vast numbers to train models as ground-level data labellers and annotators can grow to become leaders with power.

To address these systemic flaws, the experts proposed a shift in how AI is developed and who develops it. The proposed solutions are summarised in the table below.

| Proposed Action | Details & Goal |
| --- | --- |
| Ethnographic Studies | Consulting ground-level workers to embed local safeguards into AI models. |
| Leadership Mobility | Moving women from “invisible labor” (data labeling/annotation) to positions of power. |
| Inclusive Development | Shifting from “building for women” to “building with women.” |

“We need to look at creating a space where we can make sure that women go from being this invisible labor to actually growing in the system and really emerging as leaders with power,” Shachi Bhalla remarked.

Conclusion: A Shift in Trajectory Needed

The consensus at the summit was clear: AI is not a neutral tool. Without deliberate ethnographic practices and a pivot away from purely volume-driven data collection, the technology risks deepening the very biases it aims to solve.

As the summit continues through February 20, the focus remains on whether global tech giants will adopt these recommended safeguards or continue down the “accelerated” path that these leaders find so precarious.