AI in Migration Management: Human Rights Risks & Lack of Regulation

The increasing integration of Artificial Intelligence (AI) and other advanced technologies into the management of global migration presents a complex and often troubling landscape from a human rights perspective. From Big Data predictions about population movements in the Mediterranean to AI lie detectors at European borders and automated decision-making in immigration applications, states are keenly exploring these new tools. However, a critical issue highlighted by the sources is that these technologies are largely unregulated, developed and deployed in opaque spaces with little oversight or accountability.

A central argument is that this lack of regulation is deliberate. Migrant populations are effectively being singled out as a "viable testing ground" for new technologies. This allows states to justify making migrants more "trackable and intelligible" under the guise of national security, or even humanitarianism and development. This technological experimentation creates a "differentiation of rights between citizens and non-citizens", allowing states to exercise greater control over migrant populations and potentially externalize their responsibilities to uphold human rights. Furthermore, by outsourcing technological innovation to the private sector, states can "distance themselves from suspect actions" through a process sometimes called "agency laundering," complicating the lines of accountability.

Understanding the Tools: AI, Algorithms, and the "Black Box"

The term "migration management," while theoretically debated, is widely used to describe how states and international organizations handle the movement of people. The surge in technological interest is driven by an "unprecedented number of people on the move" due to conflict, instability, and other factors, leading receiving countries to seek new ways to manage large populations and address concerns about strained resources or national security. States are eager to see new technologies as a "quick solution" to what are otherwise complex policy issues, fueling an "international race for AI leadership" driven significantly by private-sector innovation.

In this context, AI is a broad term that encompasses machine learning (ML), automated decision systems, and predictive analytics. These technologies are designed to either assist or "replace the judgment of human decision-makers". At their core are algorithms, which can be understood as "a recipe composed in programmable steps" that organize and act on data to achieve a desired outcome. Many algorithms learn by being "trained" on large, existing collections of data.
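To make the "recipe" metaphor concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn; all data, features, and the scenario itself are hypothetical, not any real system) of how such a model is "trained" on past decisions and then applied to new cases:

```python
# Purely illustrative: a classifier "trained" on hypothetical historical
# decisions, then used to score a new application. Not any real system.
from sklearn.linear_model import LogisticRegression

# Hypothetical past records: [age, years_of_employment] and whether a
# human officer approved (1) or refused (0) the application.
X_train = [[34, 10], [22, 1], [45, 20], [19, 0], [38, 12], [25, 2]]
y_train = [1, 0, 1, 0, 1, 0]

# The "recipe composed in programmable steps": fit parameters to past data.
model = LogisticRegression().fit(X_train, y_train)

# A new applicant is reduced to a feature vector and scored automatically.
applicant = [[29, 4]]
print(model.predict(applicant))        # predicted outcome (approve/refuse)
print(model.predict_proba(applicant))  # probability attached to that outcome
```

Everything the model "knows" comes from those historical records, which is precisely why the provenance and quality of training data matter so much in what follows.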

A significant problem with these systems is what is known as the "black box" phenomenon: the algorithm's inner workings, its source code, and even the training data it uses are often proprietary or classified, shielding them from public scrutiny. Without the ability to examine how the AI reaches its decisions, it becomes extremely difficult to understand, scrutinize, or critique its logic.
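A minimal sketch of what this opacity looks like in practice (entirely hypothetical; the class and rule below are placeholders, not any vendor's product): an affected person or auditor can observe inputs and outputs, but nothing about how the verdict was produced.

```python
# Entirely hypothetical stand-in for a vendor system delivered as an opaque
# service or binary: only predict() is exposed; the code, parameters, and
# training data behind it are proprietary and cannot be inspected.
class ProprietaryRiskModel:
    def predict(self, applicant: dict) -> str:
        # In reality the internals are hidden; this placeholder rule exists
        # only to make the demo runnable.
        return "high_risk" if applicant.get("country") in {"X", "Y"} else "low_risk"

model = ProprietaryRiskModel()
print(model.predict({"country": "X"}))  # "high_risk" -- but on what basis?
# The affected person sees only the label: no features, no weights, no
# training data, and therefore no way to contest the logic behind it.
```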

This lack of transparency directly leads to concerns about bias. The training data used can be "coloured by direct or indirect human agency and pre-existing bias". For example, seemingly neutral variables like postal codes can become "proxies" for other categories like race, leading to discriminatory outcomes. A prominent example is the COMPAS risk-assessment algorithm in the United States, which was criticized for falsely flagging racialized defendants as likely to reoffend at far higher rates than white defendants. The fundamental issue is that algorithms are vulnerable to the same decision-making concerns that plague human decision-makers: transparency, accountability, discrimination, bias, and error. If the data they are trained on is biased, the AI will inevitably replicate and even amplify those biases.
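The postal-code point can be made concrete with a small synthetic sketch (hypothetical data invented for illustration; the mechanism, not the numbers, is what matters): race is never supplied as a feature, yet a model trained on biased historical outcomes reproduces the discrimination through the "neutral" proxy alone.

```python
# Synthetic illustration of proxy bias. Race is not a feature; only a
# postal district code is. But in this made-up history, districts 1-2 are
# predominantly racialized areas whose applicants were mostly refused.
from sklearn.tree import DecisionTreeClassifier

X_train = [[1], [1], [2], [2], [3], [3], [4], [4]]  # postal district only
y_train = [0, 0, 0, 0, 1, 1, 1, 1]                  # 0 = refused, 1 = approved

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# New applicants from district 1 are refused, district 4 approved: the
# model has learned the historical pattern via the proxy variable alone.
print(model.predict([[1], [4]]))
```

No protected attribute ever appears in the data, which is exactly why "we don't use race" is no defence against discriminatory outcomes.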

High-Stakes Experiments: AI's Impact on Migrant Rights

The implementation of new technologies in migration management impacts both the processes and outcomes of decisions that would otherwise be made by human officials, with far-reaching ramifications for human rights.

  • Data Collection and Biometrics: The Digital Fingerprint of Vulnerability

    • Automated decision-making systems require "vast amounts of data on which to learn". International organizations, including multiple organs of the United Nations (UN), rely heavily on Big Data analytics. For instance, the International Organization for Migration (IOM)'s Displacement Tracking Matrix monitors populations using mobile phone records and social media to predict needs.

    • The use of biometrics, such as fingerprint data, retinal scans, and facial recognition, is also rampant. The UN has collected biodata on "more than 8 million people, most of them fleeing conflict or needing humanitarian assistance".

    • However, this data collection is not a neutral exercise, especially when powerful "Global North" actors collect information on vulnerable populations without regulated oversight. It has been criticized for its "potential to result in significant privacy breaches and human rights concerns". Historically, such systematic data collection on marginalized groups has facilitated atrocities, as seen in Nazi Germany's data collection on Jewish communities and the Tutsi registries in Rwanda. More recently, China has been criticized for collecting facial recognition and location tracking data on its Muslim minority Uighur populations.

    • A critical human rights concern is informed consent. When refugees in Jordanian camps have their irises scanned to receive weekly food rations in an experimental program, their ability to meaningfully refuse is severely curtailed. Consent "cannot be truly informed and freely given if it is given under coercion".

    • Concerns also exist about data sharing. For example, the UN High Commissioner for Refugees (UNHCR) has reserved the right to share collected biometric data with third parties, including private sector entities like Accenture. The UN's World Food Programme (WFP) partnered with Palantir Technologies, a company heavily criticized for providing technology that supports US Immigration and Customs Enforcement (ICE) detention and deportation programs. The sources highlight that data can also be "misinterpreted and misrepresented for political ends," potentially stoking fear and xenophobia.

  • Securitization and Criminalization: Borders Go High-Tech

    • Autonomous technologies are increasingly used to monitor and secure border spaces. FRONTEX, the European Border and Coast Guard Agency, uses military-grade drones for surveillance and interdiction of migrant vessels. The ROBORDER project aims to create a fully autonomous border surveillance system with unmanned mobile robots.

    • This trend "bolsters the nexus between immigration, national security and the increasing push towards the criminalisation of migration". Despite being presented as "smarter" or "more humane" alternatives to physical barriers, studies along the US-Mexico border have documented that these new surveillance technologies have "actually increased migrant deaths and pushed migration routes towards more dangerous terrain".

  • Automated Decision-Making: Life-Altering Algorithms

    • States are experimenting with automating various facets of immigration and refugee decision-making. Canada, since 2014, has used some form of automated decision-making to augment human decisions.

    • In the United States, an investigation revealed that ICE "amended its bail-determination algorithm" to justify the detention of migrants in every single case. The "Extreme Vetting Initiative" aimed to use automated assessments to predict whether an applicant would be a "positively contributing member of society" or intend to commit criminal/terrorist acts.

    • The inherent biases in these systems are critical. When algorithms rely on biased data, they produce biased results, which can have "far-reaching results" when embedded in technologies used experimentally in migration.

    • For example, AI-powered lie detectors, such as the EU-funded iBorderCtrl project, are being piloted at EU border checkpoints. These systems monitor travellers' faces for signs of lying. However, it is "unclear how this system will be able to handle cultural differences in communication, or account for trauma and its effects on memory", which are crucial in refugee claims. This raises serious concerns about "breaches of internationally and domestically protected human rights in the form of bias, discrimination, privacy breaches, and due process and procedural fairness issues". It is also unclear how the right to a fair and impartial decision-maker and the right to appeal will be upheld.

    • The sources note that "government surveillance, policing, immigration enforcement and border-security programs can incentivise and reward industry for developing rights-infringing technologies". Amazon's "Rekognition" facial recognition system, marketed for law enforcement, has been criticized by the American Civil Liberties Union (ACLU) and Amazon's own workforce for "profound civil liberties and civil rights concerns".

    • While there are some "encouraging developments," such as a robotic life-raft for rescuing refugees or apps to assist refugees, these are described as "piecemeal interventions" that "fail to consider that the issues around emerging technologies in the management of migration are not about the inherent use of technology but rather about how it is used and by whom". They don't address the fundamental issue of power consolidation and the unequal distribution of benefits.

The "Legal Black Hole": Why Regulation is Lagging

Despite the clear human rights implications, the "current global governance regime of migration management technologies is inadequate". There is "no integrated regulatory global governance framework" or specific regulations for automated technologies in migration management. Much of the global conversation around AI in this space "centres on ethics without clear enforceability mechanisms".

While some countries and regions are developing piecemeal guidelines (e.g., the European Commission's Communication on Artificial Intelligence, Canada's "Algorithmic Impact Assessment," India's "AIforAll," Kenya's taskforce), and certain binding regional mechanisms like Article 22 of the EU's General Data Protection Regulation (GDPR) touch on automated decision-making, there are "currently no legally binding international legal documents to regulate these technologies and limit their risks".

The sources explicitly argue that this lack of regulation is "deliberate". Migrants have been "historically rendered as a population which is intelligible, trackable and manageable". This makes them the "perfect laboratory for technological experiments" that would likely not be allowed to occur in other societal spaces. This situation creates "legal black holes" in migration management technologies, where states deliberately seek to leave migrants "beyond the duties and responsibilities enshrined in law".

States engage in "agency laundering" by outsourcing technological innovation to the private sector, which allows them to distance themselves from suspect actions. This makes public-private accountability complex. International organizations also play a role; as non-state actors, they can be "overly empowered to administer technology without being beholden to rights-protecting laws and principles," potentially "launder[ing] their legal responsibility" for state actions.

Furthermore, technology is not neutral; it "replicates existing power hierarchies and differentials". The "Global North" often serves as the "locus of power and technological development," with technologies then deployed in "conflict zones and refugee camps [as] sites of experimentation under the guise of humanitarianism". This creates an "AI divide" where the viewpoints of those most affected are often excluded from discussions about ethical use and "no-go zones" for these technologies. The ultimate purpose of these technologies in migration management is to "track, identify and control those crossing borders," transforming individuals into "security objects and data points".

The Path Forward: Reimagining Governance Through Human Rights

The current global governance of migration management technologies is "inadequate". The reliance on "techno-solutionism" without systematic analysis of impacts is problematic. It is clear that "self-regulation by the private sector is not enough to ensure that rights-infringing technological experiments are curtailed".

A "more rigorous global accountability framework is now paramount". This framework must bridge the public-private accountability divide and be firmly rooted in International Human Rights Law (IHRL). IHRL provides a "viable starting point for codifying and recognising potential harms".

Key steps towards this include:

  • Establishing "bright lines that prohibit the use of new technologies" in certain instances where they could circumvent international human rights law. This means forbidding uses like AI for the arbitrary detention of migrants or the complete replacement of human decision-makers in refugee determinations.

  • Committing to safeguards around the use of these technologies to ensure that principles of natural justice and administrative law are respected. This includes ensuring rights such as the right to be heard, the right to a fair and impartial decision-maker, the right to reasons (explanation), and the right to appeal an unfavorable decision are upheld.

  • Recognizing the inherent transnational dimension of technology and preventing a "race to the bottom" in ethical standards, especially when exporting technologies to countries with weaker rule of law or problematic human rights records.

Ultimately, while technology presents a hopeful promise, it also forces us to grapple with fundamental questions about "what constitutes intelligence, how to manage and regulate new systems of cognition, and who should be at the table when designing and deploying new tools". As the sources note, "Technology is a social construct, a mirror to reflect the positives and negatives inherent in our societies".

Therefore, to move forward ethically, the aim is not just more regulation, but smarter, more agile oversight that can navigate the complexities of AI while upholding fundamental dignity and rights. Without a robust, human rights-centered global governance, this powerful mirror risks reflecting only existing biases, power imbalances, and ultimately, human suffering, rather than aiding a more just and humane approach to migration.

