By Vladislava Stoyanova
Artificial intelligence (AI) is increasingly shaping migration, asylum, and border governance by introducing forms of automated decision-making. Although such technologies promise greater efficiency, their use in asylum procedures raises significant human rights concerns, particularly because the individuals affected are often in highly vulnerable situations. The EU AI Act therefore categorises the use of AI in asylum procedures as a high-risk application. The report ‘Artificial Intelligence and Asylum Decision-Making: Any Role for Human Rights Law?’, published by the Swedish Delegation for Migration Studies, assesses whether AI-assisted decision-making in asylum cases is compatible with human rights law, focusing in particular on the rights to privacy and protection against refoulement.
The study highlights the challenges involved in establishing a causal link between potential harm and the use of AI systems in asylum procedures. It argues that these difficulties can be addressed by conceptualising the harm primarily as procedural harm. From this perspective, particular attention must be paid to procedural guarantees, including the quality and reliability of decision-making processes, timeliness, effectiveness, institutional independence, the participation of affected individuals, and the transparency of the reasoning underlying decisions. When AI tools are used in asylum procedures, these safeguards become especially important, in particular the meaningful involvement of applicants and clear explanations of the decisions that affect them.
One central difficulty is that asylum authorities generally lack reliable mechanisms to verify whether their decisions – such as granting or refusing protection – were substantively correct. Because outcomes cannot easily be validated, there is little reliable feedback that could be used as test data for evaluating AI systems, either during their development or after deployment. In addition, historical datasets used to train such systems may have limited value in predicting future risks in applicants’ countries of origin.
The report underscores that new technologies may themselves transform the practice of asylum law. This transformation is plausible given the growing importance of data, the choices involved in selecting and structuring that data, and the influential role of programmers in designing algorithms.
As a result, decision-making authority may gradually shift away from the discretion traditionally exercised by individual asylum officers toward discretion embedded in the design and operation of technological systems.
Report: Artificial Intelligence and Asylum Decision-Making (Delmi website)

Vladislava Stoyanova is Associate Professor of Public International Law at the Faculty of Law, Lund University (Sweden). Vladislava is the holder of the Wallenberg Academy Fellowship (2021-2026) awarded by the Knut and Alice Wallenberg Foundation and the Royal Swedish Academy of Sciences. As a Wallenberg Fellow, she leads the project ‘The Borders within: the Multifaceted Legal Landscape of Migrant Integration in Europe.’
