Implications and Challenges of AI for Parliamentary Ombuds Work in Canada
Within a few short years, public interest in and concern about artificial intelligence (AI) have ballooned. The rapid pace of development in the field and the emergence of AI tools that seem to poke at the heart of what it means to be human have opened the floodgates of public discourse on the potential for a disruptive change to society on par with, or surpassing, the Industrial Revolution. Discussions among Ombudspersons, Information and Privacy Commissioners, and others involved with or impacted by parliamentary governance oversight and accountability bodies have identified some of the implications and challenges of emerging AI technology. In this article, the author explores some of the concerns of these stakeholders when grappling with regulating the use of AI in public sector service design. He cites lack of transparency, the potential for bias and ethics violations, insufficient data collection and management rules, privacy issues, and the fairness of AI-influenced eligibility criteria for public programs as areas that require investigation and action by parliamentary decision-making bodies. He concludes by warning governments to act expeditiously to protect people from AI’s potential to do harm as they consider how to harness its potential benefits.
Jay Chalke
Jay Chalke was appointed Ombudsperson of BC in 2015 and reappointed for a second six-year term in 2021.
In 2019, the Office of the Ombudsperson of British Columbia hosted a symposium that invited academics, Ombuds from across Canada, Indigenous leaders, senior public servants, and students to explore the future of parliamentary ombudship across a wide range of topics. Discussions of ombudship in the digital era, for example, touched on the challenges to oversight, governance and accountability of decision making, and fairness associated with the emergence of artificial intelligence (AI) in public sector service design.
Conversations at the symposium stimulated further exploration of the issue of AI. In June 2021, together with the Information and Privacy Commissioner of BC and the Ombudsman and Information and Privacy Commissioner of Yukon, we published a report entitled Getting Ahead of the Curve: Meeting the challenges to privacy and fairness arising from the use of artificial intelligence in the public sector.1
In the two years since the report was published, the presence and use of AI have increased exponentially, and its application has become considerably more sophisticated. Warnings and fears about its unregulated use in the digital world have similarly increased. Every day, Canadians are subjected – in equal measure – to promises and cautions regarding the possibilities and pitfalls of developments in AI.
Governments and global organizations negotiate a complicated balance between AI’s anticipated economic advantages and a growing awareness of its potential for social detriment. They grapple to identify and address the risks related to AI while simultaneously recognizing the possible efficiencies it creates in the provision of government services. The Government of Canada has published its Responsible use of artificial intelligence (AI)2 to guide its use of AI in the delivery of public services. It includes a Directive on Automated Decision-Making, intended to ensure transparency, accountability and fairness, in part through a mandatory Algorithmic Impact Assessment. The government also tabled the Artificial Intelligence and Data Act3 as part of Bill C-27 (Digital Charter Implementation Act, 2022). The Act is “intended to protect Canadians [and] ensure the development of responsible AI in Canada….”
Likewise, the European Union has drafted an Artificial Intelligence Act,4 proposed as a regulation5 that would harmonize rules on AI across member states. UNESCO member states have also adopted the first ever global agreement on the ethics of AI, described as an instrument that will “not only protect but also promote human rights and human dignity, and will be an ethical guiding compass and a global normative bedrock allowing to build strong respect for the rule of law in the digital world.”6
For Ombuds’ work, the implications of AI in public service delivery are significant. There are concerns about the capacity of regulatory schemes to keep pace with AI innovations, the lack of decision-making transparency inherent in AI, intrinsic bias, and the absence of enforceable ethical standards. Questions are also being posed about data collection, storage and management practices, the rigorous protection of personal privacy, and the implications for rules-based eligibility criteria in public programs. All of these issues are exacerbated by the volatility of the AI terrain itself. Rapid changes are driving what often appear to be capricious or, at best, hurried policy responses. The pace of AI development has created an unfortunate and seemingly unavoidable state of bureaucratic “whack-a-mole.”
Regulatory Challenges
There are clear administrative law principles around procedural fairness: the right to be heard, the right to impartial decision making, the right to reasons, and the availability of a review. The absence of any one of these is a routine subject of complaint to Ombuds. Regulatory intervention is necessary; however, regulation is frequently characterized as an obstacle to innovation. The challenge is deciding how to adapt or modernize existing regulatory instruments to account for the new and emerging concerns brought on by governments’ use of AI. The increasing automation of government decision making undermines the applicability and utility of existing regulations and established common law rules that would otherwise sufficiently address complaints about those decisions.
The latest federal Directive on Automated Decision-Making is the third iteration of the federal government’s effort to keep pace with the rapid development of AI. Some of the latest changes are directionally similar to the recommendations we made to the BC and Yukon governments in Getting Ahead of the Curve. This is encouraging, but oversight and enforcement of the directive remain an issue.
For compliance purposes, government and the private sector should be required to assess privacy impacts before implementing AI technology. This obligation should be ongoing and verifiable through proactive audits by regulators once the technology is deployed. Some controls and obligations are already present in legislation, such as the need to complete privacy impact assessments (PIAs) before the implementation of a new or revised process.
PIA regulations, templates and tools may need to be crafted to address AI-specific concerns, including through a proposed Artificial Intelligence Fairness and Privacy Impact Assessment (AIFPIA). Such a framework should set out clear conditions that trigger the obligation to complete an AIFPIA for systems that leverage AI to process personal information. The process should include a requirement to conduct security threat and risk assessments and incorporate algorithmic impact assessment components specific to automated decision systems and their processing of personal information. It should also require transparency and mandate the review of AIFPIAs by the appropriate oversight bodies.
In addition, statutory review timelines must be short enough to address the rapid rate of significant technological change and its impact on society. If the pace of development continues to accelerate, governments may have to consider models of continuous legislative development to keep up.
Lack of Transparency
During the 2019 symposium, former New Brunswick Ombud Charles Murray presented the example of a Facebook AI Research Lab experiment in bot-to-bot negotiations over ownership of virtual objects. As the negotiations progressed, the bots invented a seemingly nonsensical derivation of human language that was unintelligible to the humans running the experiment. Although no deliberate deception or evasion was programmed into the bots, what was remarkable was their strategic use of mendacity to maximize achievement of their goals. The most telling aspect of the experiment was that the humans were unable to understand what informed the negotiations and, ultimately, what the negotiations were actually about.
The public must be able to understand the reasons behind an administrative decision. If the reasons for a decision cannot be known by the person affected by it, how can that person be expected to accept it? On what basis could they request a review of the decision? If the factors that were considered, their weighting, and the analysis applied are not discernible, how could they formulate a reasonable argument to impugn the decision? For that matter, how could a member of an appeal body credibly consider any arguments submitted?
The consequence of this opacity has obvious implications for an Ombudsperson. When our ability to discern the fairness of processes is hindered by the inaccessibility of the programming inputs or code, we are limited to assessing the outputs alone. While there is evidence that bots can acquire a capacity for nuance with practice – for example, one report noted that bots became adept at feigning interest in a virtual object so they could later use it as a concession while pursuing a higher value object – what is absent is the ability to program bots with a moral compass. Indeed, even if this were possible, questions would inevitably arise about which moral standards should inform the coding.
Bias/Ethical Concerns
AI is often lauded as a fast, efficient, and objective decision-making tool for administrative bodies and even courts. Its proponents envision it as a mechanism for enhanced fairness because of the speed with which it can review and evaluate large volumes of data, overlooking that this may occur in an environment influenced by inherent bias, partiality, or prejudice. Some proponents are convinced that such undue influence either does not exist or can be excised or mitigated over time. Others believe we can rely on external mechanisms of review to remedy the detrimental outcomes for the relatively small number of people affected. In this version, AI is utilitarian – a digital “trolley dilemma” solution. It would dispassionately assess information and make decisions based on facts alone, with no messy extraneous considerations such as empathy, morality, or the myriad extenuating circumstances that could and currently do inform decisions.
However, there is ample evidence to contradict the supposed impartiality of AI; the myth of AI’s neutrality has been disproven many times over. AI relies on algorithms informed by specific datasets and machine learning to drive action and decisions. As the age-old principle of “garbage in, garbage out” reminds us, AI is only as neutral as its inputs. The algorithms employed may inadvertently embed preferences, biases, or even errors. This potential flaw should raise questions about the efficacy of what some have called the automation of decision-making processes, particularly when the stakes are high for those involved and could affect their freedom, their finances, or their rights.
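To see how this can happen in concrete terms, consider a minimal sketch in Python. The data, group labels, and approval rates below are entirely hypothetical and are not drawn from any actual government system; the point is only to show the mechanism by which a model trained on skewed historical decisions reproduces that skew:

```python
from collections import defaultdict

# Hypothetical historical benefit decisions (illustrative only):
# applicants have identical qualifying scores, but past adjudicators
# approved group A far more often than group B.
history = (
    [{"group": "A", "score": 70, "approved": True}] * 90
    + [{"group": "A", "score": 70, "approved": False}] * 10
    + [{"group": "B", "score": 70, "approved": True}] * 40
    + [{"group": "B", "score": 70, "approved": False}] * 60
)

# A naive "model" that learns each group's historical approval rate.
approved, total = defaultdict(int), defaultdict(int)
for case in history:
    total[case["group"]] += 1
    approved[case["group"]] += case["approved"]

def predict(group: str) -> bool:
    """Approve when the group's historical approval rate exceeds 50 per cent."""
    return approved[group] / total[group] > 0.5

# Two applicants with identical qualifications receive different outcomes:
print(predict("A"))  # True: the skew in the training data is reproduced
print(predict("B"))  # False: same score, different group, denied
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is precisely why neutral-seeming automation can perpetuate historical inequities.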
Data Collection
In 2017, IBM estimated that the vast preponderance (90 per cent) of the world’s data had been created in the previous two years. The World Economic Forum turned that estimate into a simple yet stunning graphic (Fig. 1).7 It is safe to surmise that the amount of data created since then has made the already thin wedge of historical data progressively smaller.
Despite the plethora of data held by public and private organizations, our ability to organize, manage, and protect data has not kept pace with our tremendous capacity to generate and collect it. Moreover, the collection of data, particularly in network design, is often treated as a technical or mechanical task rather than one that requires a comprehensive understanding of the ethical, privacy, policy and governance properties of data assets.
Privacy Issues
AI-driven use of biometric data by government continues to garner significant public attention and criticism. The sensitive nature of this kind of personal information raises significant privacy concerns. However, unlike other jurisdictions where privacy laws have recently been modernized, there is currently no Canadian law in force that addresses rights or obligations relating directly to AI. Federally, Bill C-27 (Digital Charter Implementation Act, 2022) was tabled in July 2022, but is not yet law.
A modern interpretation of the right to privacy is needed, one that recognizes privacy as a human right necessary for the exercise of other fundamental rights. At a minimum, privacy legislation should be amended to include the right to notification when AI is used, an explanation of the reasons and criteria applied, and the ability to object to or opt out of the use of automated decision systems.
Facial recognition technology (FRT) is a form of biometrics that can identify or authenticate individuals by comparing their facial features against a database of known faces to find a match. The process can be broken down into three steps. First, the computer finds facial features in a digital image, video frame or other representation. It then creates a numeric representation of the face based on the relative position, size, and shape of identified facial features. Finally, this numeric “map” of the face in the image is compared with a database of identified faces, for example, from a driver’s licence database.
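A minimal sketch may help make the matching step concrete. The Python fragment below assumes the detection and mapping steps have already produced fixed-length numeric vectors (a hypothetical 128-dimension “face map”); real systems derive these with trained models, and all identifiers here are illustrative:

```python
import hashlib
import numpy as np

def embed_face(image_id: str) -> np.ndarray:
    """Stand-in for steps one and two: locate a face and reduce it to a
    fixed-length numeric feature vector. A real system would compute this
    from the relative position, size, and shape of facial features; here
    we derive a deterministic pseudo-embedding from the image identifier."""
    seed = int(hashlib.sha256(image_id.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=128)

# Step three: a database of identified faces, e.g. driver's licence photos.
licence_db = {holder: embed_face(holder)
              for holder in ("licence_0001", "licence_0002", "licence_0003")}

def best_match(probe: np.ndarray, threshold: float = 0.6):
    """Compare the probe face map against the database using cosine
    similarity; return the closest identity, or None if no score clears
    the threshold."""
    scores = {holder: float(probe @ vec /
                            (np.linalg.norm(probe) * np.linalg.norm(vec)))
              for holder, vec in licence_db.items()}
    holder, score = max(scores.items(), key=lambda kv: kv[1])
    return (holder, score) if score >= threshold else (None, score)

probe = embed_face("licence_0002")  # probe image of an unknown individual
print(best_match(probe))            # matches 'licence_0002', similarity ≈ 1.0
```

The privacy questions that follow are less about this matching arithmetic than about the database itself: who populated it, for what purpose, and under what authority, which is exactly the issue raised in the example below.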
With respect to the use of a driver’s licence database to aid FRT, the Insurance Corporation of British Columbia (ICBC) had previously offered use of its database of driver’s licence photos to assist law enforcement agencies in identifying individuals suspected of crimes. Most notably, ICBC offered to use its database to assist the Vancouver Police Department in identifying suspects in the 2011 Stanley Cup riots. The Information and Privacy Commissioner of BC investigated this issue and concluded that ICBC’s stated use of FRT – to combat driver’s licence fraud – did not allow ICBC to use that database for a collateral purpose of law enforcement without a warrant or court order.
The improper collection and use of biometric data raises significant privacy concerns for citizens. It is also worth noting that the very way biometrics operates presents a threat to individual privacy. The Supreme Court of Canada in R v Jarvis (2019) recognized that people have a reasonable expectation of privacy even in public spaces; those expectations are not forfeited simply by walking out one’s front door. For example, while most bank customers would reasonably expect to be recorded when conducting a transaction inside a branch, they would not necessarily consent to the harvesting of additional biometric data merely by touching the handle of the bank’s entrance door. Similarly, FRT operates in public settings in ways that may undermine the public’s reasonable expectation of privacy. The standard terms of service that mediate digital consent are absent: we are often not made aware that we are being observed or recorded, how and why we are being observed or recorded, what biometric data or other information is being collected in the process, or how it is being used.
The Impact on Public Program Eligibility
The rise of the modern social welfare state over the last 70 years has been based on a balance of democratic institutions establishing criteria or rules for eligibility for public programs, public administrators applying those criteria, and oversight bodies – courts, tribunals and ombuds – conducting reviews.
To date, AI has mostly been deployed to assist, and in some cases effectively replace, administrators in their application of criteria established by legislators or subordinate decision makers. But AI threatens to move earlier in the process. What if, instead of applying eligibility criteria, AI were simply given a sum of public funds to administer for a particular social program or outcome, and AI could decide the rules? And what if those rules were always fluid, informed by the outcomes and experiences of each decision? The implications for democracy and the rule of law are obvious; what is far less obvious is that such a scenario would be impossible 10 or 20 years from now. Now is the time to have the conversation about whether we want AI displacing our democratic institutions this way and, it should be said, neutralizing oversight.
Conclusion
When the topic of AI in public service delivery was discussed at the 2019 symposium, public discussion of the prospect was minimal. When the Getting Ahead of the Curve joint report was issued in 2021, emerging public awareness was evident in the occasional article in the mainstream media. Since the introduction of GPT-4 in early 2023, public discourse has grown at a rate reminiscent of the data growth IBM reported in 2017. The ability of OpenAI’s chatbot to communicate convincingly in an apparently reasoned way appears to have caught the public’s attention.
The possibility that AI could more efficiently and consistently replicate human reasoning – and consequently replace it – seems to poke at the heart of what it is to be human. Add to that GPT-4’s uncanny ability to communicate in a natural way, and it could appear that there is little left for public administrators to do when they interface with the public.
The lure of AI in public service delivery is understandable, but the risks have so far been left largely unattended. Governments have developed precious little to protect the citizens who will inevitably be affected by the myriad imaginable exploitations of AI. Unless governments adopt a standard that preserves the principles of privacy and fairness in the development of AI systems, they will expose themselves and the public they serve to risks and harms that will be far more difficult to remedy than to prevent.
Notes
1 https://bcombudsperson.ca/assets/media/OMB-GettingAheadoftheCurve-v6.pdf
3 https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
4 https://artificialintelligenceact.eu/
5 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
6 https://unesdoc.unesco.org/ark:/48223/pf0000381137
7 https://www.weforum.org/agenda/2019/08/artificial-intelligence-government-public-sector/