A little-noticed government notice in April 2025 signalled a shift in immigration enforcement in the United States. Immigration and Customs Enforcement (ICE) is building an AI-powered platform called ImmigrationOS, with support from Palantir. The system is designed to identify and prioritise individuals for deportation, ranging from visa overstayers to those with criminal records.

This comes as the White House aims to scale up deportations, putting pressure on ICE to act faster. ImmigrationOS is intended to streamline the entire enforcement process, bringing together fragmented data and decision-making into one system.

Use of AI in immigration enforcement

The expansion of AI in immigration enforcement has alarmed civil-rights groups and legal experts. A lawsuit by the American Civil Liberties Union described recent ICE actions as a “crude dragnet” that led to unlawful detentions, some involving excessive force. Courts have also begun to show discomfort with the scale and methods of enforcement.

John Sandweg, a former ICE official, told The Economist, “I understand why people would be scared to death of these tools,” warning that AI could disproportionately target undocumented individuals who are otherwise law-abiding, simply because they leave clearer digital footprints. There are also concerns that tools originally developed for counterterrorism are now being repurposed for domestic use.

AI efficiency for ICE

For years, ICE has collected vast amounts of data, including records related to vehicles, phones, courts, and social media. However, making use of that data has been slow and disjointed.

ImmigrationOS aims to change this by integrating all available data into a single interface. This would allow agents to quickly build profiles, identify inconsistencies in visa or asylum applications, and match identities across different databases. The system is also expected to make information gathered in criminal investigations usable for civil immigration enforcement.

Expanding surveillance

The range of data feeding into ICE’s systems is extensive and continues to grow. Information comes from government databases such as Social Security and motor vehicle records, as well as financial reports flagging suspicious activity and data from welfare programmes.

Even when local authorities refuse to cooperate, ICE can access information indirectly through private data brokers. The agency also uses tools like licence-plate readers and can request footage from home-security systems such as Amazon’s Ring. Facial-recognition technology plays a role as well, with systems from firms like Clearview AI enabling identification using billions of publicly sourced images.

Faster enforcement and legal shortcuts

AI is not only enhancing surveillance but also speeding up legal processes. Tasks that once took days, such as preparing warrants, can now be completed in under an hour. This efficiency could lead to a sharp increase in data requests.

The system is also being designed to flag useful but restricted data and suggest ways for agents to obtain legal access. In addition, ICE has been purchasing data from private companies, sometimes bypassing restrictions imposed by local governments, raising concerns about transparency and oversight.

Doubts over accuracy and accountability

Whatever its capabilities, the reliability of AI-driven enforcement remains uncertain. There have already been instances of mistaken identity, including cases in which individuals were wrongly detained.

Experts say that error rates are unclear and that the inner workings of such systems are highly opaque. This lack of transparency makes it difficult to understand how decisions are made or to challenge them effectively. Even developers may not fully grasp how certain outcomes are produced.

Fears of political misuse

Critics warn that the technology could expand beyond its original purpose. ICE has reportedly begun analysing data related to activists who attempt to disrupt enforcement operations, raising fears that lawful protest could be monitored.

This has raised concerns about a chilling effect on free speech, especially amid political rhetoric that frames some opponents of immigration enforcement as threats.

The legal system has struggled to keep pace with the rapid adoption of AI in enforcement. While some rulings have limited data-sharing practices, broader questions about the legality of AI-driven surveillance remain unresolved.

Agencies may read this lack of clarity as permission to expand the use of such technologies. At the same time, it leaves room for future legal challenges that could define the boundaries of AI in immigration enforcement.

Disclaimer: This article is for general informational purposes only and does not constitute legal, immigration, or tax advice. Immigration laws and government policies are subject to frequent change without notice. While we strive to provide accurate updates, readers are strongly advised to verify the latest requirements with the official embassy, consulate, or government portal of the respective country. Financial Express is not responsible for any decisions made based on this information. For personalized guidance, please consult a qualified immigration attorney or a certified professional advisor.