One of the principles of procedural fairness is the right to reasons for an administrative decision.
Canadian courts and administrative decision-makers have an obligation to explain how a particular result was reached and why it is justified.
The use of artificial intelligence (“AI”) as a tool in adjudication challenges the common law standard for what amounts to adequate reasons for a decision.
For example, does the right to reasons set out in the Supreme Court of Canada’s 1999 decision in Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 SCR 817, include a duty that a human decision-maker draft or arrive at the decision in question? To what extent can a decision-maker delegate its decision-making power to AI for the sake of efficiency and to avoid delays? Will courts consider inherent or potential AI biases that might taint a particular outcome?
Most of these questions remain unanswered.
A recent Federal Court decision, Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464, highlights how Canadian courts may address the fairness and reasonableness of administrative decisions made with the help of AI.
A powerful Chinook
Haghshenas concerned an application for judicial review of a decision of an immigration officer (the “officer”) at the Canadian Embassy in Turkey. The officer refused the applicant a work permit designed for foreign entrepreneurs and self-employed persons seeking to operate a business in Canada (the “work permit”).
One of the work permit requirements, under paragraph 200(1)(b) of the Immigration and Refugee Protection Regulations, SOR/2002-227 (the “Regulations”), is that the officer be satisfied that the applicant “will leave Canada by the end of the authorized period of stay”.
In this case, the officer concluded that the applicant would not leave Canada at the end of his authorized stay under the work permit. Among other reasons, the applicant’s plan to start an elevator/escalator business in Canada “did not seem reasonable” given the speculative revenue projections for the business and the fact that the business had not obtained the appropriate licences.
In arriving at this decision, the officer used Chinook, a Microsoft Excel-based tool developed by Immigration, Refugees and Citizenship Canada (“IRCC”).
According to the IRCC website, Chinook assists with the “processing of temporary resident applications to increase efficiency and improve customer service”, with the aim of reducing the backlog of work permit applications. It “does not use artificial intelligence (AI) or advanced analytics for decision making, and there are no built-in decision-making algorithms.”
Notwithstanding these statements, the applicant challenged the officer’s use of Chinook on judicial review, arguing that the use of AI to reach an administrative decision was both procedurally unfair and substantively unreasonable.
The Federal Court rejected the applicant’s position, proceeding largely on what appears to be an assumption that the Chinook tool is a form of AI.
In doing so, the Court alluded to a number of important principles about how it might scrutinize the use of AI in administrative decision-making in the future.
1. Decisions made by human decision-makers are not procedurally unfair
By rejecting the argument that the use of AI was procedurally unfair, the Court appears to have drawn a line in the sand on the proper role of AI mechanisms in administrative decision-making.
The Court noted that in the applicant’s case, AI did not make the final decision about his work permit – the officer did.
Inherent in the Court’s reasoning is the presumption that it is procedurally fair for AI to assist an administrative decision-maker in giving reasons for its decision. AI assists the administrative state with the objective of promoting more efficient and timely results.
What would be unfair, however, is the state’s delegation of its decision-making power itself to AI. The Court held:
As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with [the Supreme Court of Canada’s decision in] Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.
2. It is not unreasonable to use AI in administrative decision-making
Regardless of whether the use of AI is procedurally unfair, the Court also rejected the argument that the officer’s reliance on Chinook rendered the decision substantively unreasonable.
According to the Court, there is nothing inherently unreliable or ineffective about the use of AI, at least in this particular case.
The Court did not find it necessary to delve into the workings of the Chinook software to determine whether its mechanisms were inappropriate or would cause unreasonable results in the immigration assessment process:
With respect to the use of “Chinook” software, the Applicant suggests that there are questions about its reliability and effectiveness… the Applicant suggests that a decision made using Chinook cannot be said to be reasonable until it is explained to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness and concluded that the use of [AI] is irrelevant…
So in this particular context, the government’s use of AI has survived reasonableness scrutiny.
Will AI replace administrative decision-making?
The Court’s approach above reflects a willingness to accept machine learning as a limited component of administrative decision-making, with the caveat that ultimate decision-making authority must reside in a human decision-maker.
Haghshenas, however, only scratches the surface of the implications of the Canadian administrative state delegating its roles and responsibilities to machine learning in the interest of efficiency.
As we are learning, AI comes with its own set of inherent biases and problems.
There will undoubtedly be new circumstances in which an enterprising lawyer will argue that a decision-maker’s reliance on AI tainted the outcome of a decision or rendered it procedurally unfair.
Courts and agencies across Canada must therefore approach the question of whether and how to adopt AI in decision-making with caution and with meaningful legal and ethical oversight.
This is the only way to ensure that AI remains a fair and reasonable tool in administrative decision-making.