By: Steven Hubbard | americanimmigrationcouncil.org
U.S. Immigration and Customs Enforcement (ICE) has partnered with Palantir Technologies—a Denver-based software company co-founded by billionaire entrepreneur Peter Thiel—to use artificial intelligence and data mining to identify, track, and deport suspected noncitizens. Palantir is slated to deliver a prototype of the ImmigrationOS platform by September 25, 2025, with the contract running through September 2027. ICE is paying Palantir $30 million for the platform.
Similar to Palantir’s other systems, ImmigrationOS will pull together vast amounts of data, detect patterns, and flag individuals who meet certain criteria, raising concerns about potential impacts on civil liberties in America. Those concerns are amplified by the revelation that Stephen Miller, the Trump administration’s chief architect of immigration policy, holds a substantial financial stake in Palantir—underscoring the potential conflicts of interest in the government’s embrace of the company’s technology.
The plan, first reported by Business Insider, has triggered lawsuits from privacy and labor rights advocates and raises serious concerns about accuracy, justice, and civil rights. For its part, Palantir says it only builds the tools, not the rules. However, the architecture of an AI system—how it integrates data, flags individuals, and triggers action—is a form of policymaking. Designing a system like ImmigrationOS means deciding which data is included, what prompts alerts, and what gets overlooked.
What Is Palantir?
Incorporated in Silicon Valley in 2003, Palantir received early investment in 2005 from In-Q-Tel, the CIA’s venture capital arm—a partnership that helped position its software for counterterrorism missions around the world.
Palantir grew on its ability to turn large volumes of messy data into actionable intelligence. Its platforms work like smart dashboards: upload enough data, and they find patterns, make predictions, and flag people who meet specific criteria—a “visa overstay,” an alleged gang affiliation, or other red flags.
Since 2013, Palantir has provided ICE with systems like FALCON and Investigative Case Management (ICM), which have been used in workplace raids, large-scale enforcement operations, and investigations involving asylum seekers. Palantir’s pattern-finding capabilities, in other words, have long been central to ICE’s most aggressive tactics—raising concerns that ImmigrationOS could enable similar or expanded practices.
The company has two main products, Foundry and Gotham, which are used by agencies like the Department of Defense, the IRS, and ICE. Foundry helps unify and visualize massive data sets from tax records to biometrics. Gotham is designed for law enforcement and the military to identify connections between people, places, and events.
Palantir has received more than $900 million in federal contracts since Trump took office, public records obtained by The New York Times show.
What Is ImmigrationOS?
According to government documents, ImmigrationOS is ICE’s next-generation enforcement tool. It has three main components:
- Targeting and enforcement prioritization: Helps ICE streamline its decisions on who should be removed first, with priority given to “violent criminals” and people who have overstayed their visa. It is unclear which data or criteria the new system will use to identify “violent criminals.”
- Self-deportation tracking: Monitors whether individuals are voluntarily leaving the United States.
- Immigration lifecycle management: Streamlines the deportation process, from identification to removal.
Palantir’s systems pull data from across government databases—regardless of the accuracy of those databases—including passport records, Social Security files, IRS tax data, and even license-plate reader data. The goal is to create a comprehensive, AI-driven profile of each individual that agencies can use to make faster, more efficient enforcement decisions.
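The internal design of ImmigrationOS has not been made public, but the basic mechanics of this kind of data fusion are straightforward to sketch. The Python example below is a hypothetical illustration only—the database names, record fields, dates, and the flagging rule are all invented assumptions, not details of any Palantir product. It shows how records from separate sources can be merged on a shared identifier into a single profile, and how the flagging rule itself encodes a policy choice:

```python
# Hypothetical sketch of cross-database record fusion and flagging.
# Source names, fields, and rules are illustrative assumptions only,
# not details of ImmigrationOS or any Palantir product.
from collections import defaultdict

passport_records = [
    {"person_id": "A-100", "source": "passport", "visa_expires": "2024-01-15"},
]
plate_reader_hits = [
    {"person_id": "A-100", "source": "plate_reader",
     "seen": "2025-03-02", "location": "I-25 @ Exit 210"},
    # A misread plate attributed to the wrong person: an error in one
    # feeder database flows into the fused profile unflagged.
    {"person_id": "A-100", "source": "plate_reader",
     "seen": "2025-03-09", "location": "Downtown garage"},
]

def fuse(*sources):
    """Group every record sharing a person_id into one merged profile."""
    profiles = defaultdict(list)
    for records in sources:
        for record in records:
            profiles[record["person_id"]].append(record)
    return profiles

def flag_overstays(profiles, today="2025-04-01"):
    """Flag anyone with an expired visa AND a recent sighting.

    The conjunction of conditions and both date cutoffs are design
    decisions; each one changes who ends up on the list.
    """
    flagged = []
    for person_id, records in profiles.items():
        expired = any(r.get("visa_expires", "9999") < today for r in records)
        seen_recently = any(r.get("seen", "") > "2025-01-01" for r in records)
        if expired and seen_recently:
            flagged.append(person_id)
    return flagged

profiles = fuse(passport_records, plate_reader_hits)
print(flag_overstays(profiles))  # ['A-100']
```

Note that nothing in the merge step questions the inputs: a bad record in any feeder database lands in the fused profile with the same authority as a good one, and the flagging rule then acts on it.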
The Illusion of Neutrality
Palantir argues that it just provides the tools. It doesn’t decide who gets targeted, deported, or surveilled. Yet design choices shape real-world outcomes. Essentially, AI architecture becomes policy.
While the full technical design of ImmigrationOS is still emerging, past examples show why it’s important to closely scrutinize the construction of algorithmic systems, which can produce biased and inaccurate outcomes.
For example, a 2016 investigation by ProPublica found that COMPAS, a risk-assessment tool designed by the private software firm Northpointe and widely used in the criminal justice system, was nearly twice as likely to incorrectly label Black defendants as high risk for recidivism as it was white defendants. White defendants, conversely, were more often incorrectly labeled low risk.
These weren’t just technical errors. They were design decisions—based on which data the algorithm used, how it weighed different factors, and how it categorized risk. The resulting scores were presented to judges at critical moments in the legal process, shaping decisions about sentencing and release and impacting the lives of thousands of people subject to incarceration and prosecution. That’s the danger of treating algorithmic systems as neutral: created by people, they clearly reflect human judgment, bias, and prioritization.
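The point generalizes beyond COMPAS. A toy example makes the mechanism concrete—the features, weights, and cutoff below are invented for illustration and bear no relation to COMPAS’s proprietary model—showing how the same defendants, scored on the same data, can land on different sides of the “high risk” line depending on a single weighting choice:

```python
# Toy risk score showing how weights and cutoffs encode policy.
# Features, weights, and the threshold are invented for illustration;
# they do not reflect COMPAS or any real risk-assessment tool.

def risk_score(priors, age, weights):
    """Linear score: more prior offenses raise risk; youth raises it too."""
    return weights["priors"] * priors + weights["age"] * (1 / age)

defendants = [
    {"name": "Defendant 1", "priors": 2, "age": 22},
    {"name": "Defendant 2", "priors": 2, "age": 45},
]

# Two equally plausible designs that differ only in how hard they weight age.
for weights, cutoff in [
    ({"priors": 1.0, "age": 10.0}, 2.3),
    ({"priors": 1.0, "age": 30.0}, 2.3),
]:
    labels = {
        d["name"]: "HIGH" if risk_score(d["priors"], d["age"], weights) >= cutoff
        else "low"
        for d in defendants
    }
    print(weights, labels)
# First design:  Defendant 1 HIGH, Defendant 2 low.
# Second design: both HIGH -- same people, same data, different policy.
```

Neither weighting is more “correct” than the other; each is a human judgment about what matters, frozen into code and then presented to decision-makers as an objective score.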
Oversight Needed
In practice, AI-driven enforcement systems can be far from perfect. Mistakes in automated systems can have outsized effects, depriving people of their liberty through detention, loss of legal status, or wrongful deportation. Even if error rates are low, the stakes for affected individuals are high. That is why independent audits, clear appeal processes, and regular bias testing matter.
There is also a persistent tension between accuracy and privacy in AI-driven enforcement. Expanding the range of data accessible to a system like ImmigrationOS can make targeting more precise, but often at the cost of deeper intrusion into personal information. Much of this data becomes accessible once an individual is declared “under investigation,” a designation that underscores the need for transparency, clear limits on what counts as an investigation, and strict separation between criminal probes and civil or administrative enforcement.
Some Palantir engineers themselves have raised concerns about the ethical burden of designing these tools. They argue that building systems capable of mass surveillance, especially without sufficient oversight, crosses a dangerous line—from protecting the civil liberties that underpin democracy to blatantly undermining them.
Palantir may position itself as a neutral technology provider, but when its software becomes central to government immigration enforcement systems, its role becomes more than just technical. The architecture of a system—how it integrates data, flags individuals, and prioritizes actions—inevitably shapes outcomes.
Palantir claims that ImmigrationOS will improve the efficiency of Trump’s indiscriminate and chaotic immigration agenda and contends that it will address real threats to public safety. But the system also concentrates enormous power in AI-driven platforms with minimal public oversight. How will such integrations be monitored? What guardrails will prevent unintended overreach? We should be asking not just what these systems do, but who gets to decide what “justice” looks like when it’s written in code.