Artificial Intelligence

In recent years, artificial intelligence (AI) has subtly changed many aspects of our daily lives in ways that we may not yet fully understand. As the Office of the Privacy Commissioner of Canada (OPC) states, “AI has great potential in improving public and private services, and has helped spur new advances in the medical and energy sectors among others.” The same source, however, notes that AI has created new privacy risks with serious human rights implications, including automated bias and discrimination.

AI adds a new element to the classic data lifecycle of collection, use, retention and disclosure: the creation of new information through linking or inference. For privacy principles to be upheld, data created by AI processes must be subject to the same limitations on use, retention and disclosure as the raw personal data from which it was generated. It is important to note that, conceptually, AI is not a form of data processing; rather, it is a form of collection. AI’s importance in the privacy domain lies in its impact: it expands the body of personal data beyond what was collected directly from individuals.

AI systems go far beyond analyzing data that individuals have voluntarily provided. They frequently collect data indirectly, for example by harvesting public social media posts without individuals’ knowledge or consent. Through linking and inference, AI combines data from various sources to create new data, almost always without the consent of data subjects. This creation of knowledge is itself a form of data collection. If regulation addresses privacy at the point of collection, it has also addressed use, since collection is the gateway to use.
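To make this concrete, the short Python sketch below (with hypothetical data and field names, not any real project’s code) shows how linking two sources can create an attribute that the individual never provided:

    # Minimal illustration of record linkage and inference; all data is hypothetical.
    declared = [
        {"email": "a.smith@example.com", "age": 34},  # knowingly provided by the individual
    ]
    scraped_posts = [
        {"email": "a.smith@example.com", "text": "Third night shift this week, exhausted"},
    ]

    def infer_attributes(person, posts):
        """Link posts to a person by email, then infer a brand-new attribute."""
        linked = [p["text"] for p in posts if p["email"] == person["email"]]
        # "likely_shift_worker" was never collected from the individual:
        # it is newly created information, i.e. a fresh act of collection.
        return {**person, "likely_shift_worker": any("night shift" in t.lower() for t in linked)}

    for person in declared:
        print(infer_attributes(person, scraped_posts))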

Many experts on privacy and AI have questioned whether AI technologies such as machine learning, predictive analytics, and deep learning are compatible with basic privacy principles. It is not difficult to see why: while privacy is primarily concerned with restricting the collection, use, retention and sharing of personal information, AI is all about linking and analyzing massive volumes of data in order to discover new information.

AI requires new approaches to enforcing the data protection principles of data minimization and purpose specification. While AI systems have the capacity to greatly increase the scope of data collection, use, retention and sharing, they also have the capacity to track the purposes of these data processing activities. Maintaining the link between data and specified purposes is the key to enforcing privacy principles in a big data environment.
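The sketch below illustrates one way such a link could be maintained in code; the class and field names are our own illustrative choices, not a prescribed design:

    # Illustrative purpose-binding: data carries its specified purposes with it.
    from dataclasses import dataclass

    @dataclass
    class PurposeBoundRecord:
        data: dict
        allowed_purposes: frozenset  # purposes specified at collection time

        def use(self, purpose: str) -> dict:
            """Release the data only for a purpose specified at collection."""
            if purpose not in self.allowed_purposes:
                raise PermissionError(f"purpose '{purpose}' was never specified")
            return self.data

    record = PurposeBoundRecord({"postal_code": "M5V 2T6"}, frozenset({"service_delivery"}))
    record.use("service_delivery")  # permitted
    record.use("marketing")         # raises PermissionError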

KI Design’s AI Experience

KI Design’s experience includes numerous public and private sector AI projects, covering the following areas of work:

Insights and Predictive Modelling: Maximizing the value of data and information by leveraging techniques such as machine learning and natural language processing to predict outcomes and gain deeper insights into behavioural patterns and trends. This can include preparing data, building and training models, putting models into production, and monitoring model performance.
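As a generic illustration of that workflow, the sketch below trains and monitors a model on synthetic data; the libraries, thresholds, and steps shown are illustrative assumptions, not a description of any particular engagement:

    # Generic predictive-modelling workflow: prepare, train, score, monitor.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # 1. Prepare data
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 2. Build and train the model
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # 3. Put the model "into production": score new cases
    scores = model.predict_proba(X_test)[:, 1]

    # 4. Monitor: alert if predictive power degrades below a chosen floor
    auc = roc_auc_score(y_test, scores)
    if auc < 0.7:  # hypothetical alerting threshold
        print(f"ALERT: model AUC degraded to {auc:.2f}")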

Machine Interactions: Facilitating information sharing and citizen-government interactions through chatbots and related techniques such as semantic analysis, natural language processing, speech recognition and rule-based pattern matching.
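For illustration, a toy rule-based pattern matcher of the kind a simple chatbot might use could look like the following (the patterns and replies are invented examples):

    # Toy rule-based intent matching; patterns and replies are hypothetical.
    import re

    RULES = [
        (re.compile(r"\b(hours|open|closing)\b", re.I),
         "Our office is open 9am-5pm, Monday to Friday."),
        (re.compile(r"\b(renew|renewal)\b.*\b(permit|licence)\b", re.I),
         "You can renew a permit online at the service portal."),
    ]

    def reply(message: str) -> str:
        for pattern, answer in RULES:
            if pattern.search(message):
                return answer
        return "Let me connect you with an agent."  # fallback when no rule matches

    print(reply("What time are you open today?"))
    print(reply("How do I renew my parking permit?"))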

Cognitive Automation: Automating information-intensive tasks and supporting more efficient business processes. This can include AI applications that assist in or perform automated decision-making, as well as robotic process automation.
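A minimal sketch of an automated triage rule shows the pattern; the fields and thresholds are hypothetical:

    # Toy automated-decision rule of the kind used in cognitive automation.
    def triage_application(app: dict) -> str:
        """Route complete, low-risk applications straight through; flag the rest."""
        if not app.get("documents_complete"):
            return "return_to_applicant"
        if app.get("risk_score", 1.0) < 0.2:  # hypothetical risk threshold
            return "auto_approve"
        return "human_review"  # anything non-trivial goes to a person

    print(triage_application({"documents_complete": True, "risk_score": 0.1}))
    print(triage_application({"documents_complete": True, "risk_score": 0.6}))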

KI Design’s Ethical AI Use Process

KI Design follows a three-tier approach to evaluating the ethical appropriateness of AI applications. This comprises a top-level legislative analysis of the project’s compliance; a mid-level analysis of executive, staff, and user experience of the pilot version; and a base-level review of the technology. We use each of these analyses to measure the potential for risk, bias, and unethical use.

Top-level: We begin our legislative review by evaluating alignment with the Ten Fair Information Principles of the Personal Information Protection and Electronic Documents Act (PIPEDA). Having identified privacy risks, we proceed to explore options for mitigation. If there are issues that cannot be mitigated without affecting the functionality of the AI solution, we apply the Two-Step Test from R. v. Oakes (1986) to determine whether circumstances justify the limitation of an individual’s rights. These standards provide a high-level framework for ethical and unbiased data management practices.

Mid-level: We ensure that no process, tool, or user interaction operates in an unethical manner. For example, we review processes and operational activities that could produce biased outcomes, and establish mechanisms to limit queries that target marginalized groups.
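One possible mechanism of that kind, sketched here for illustration rather than as a production control, is a guard that rejects queries filtering on protected attributes:

    # Hypothetical guard that blocks queries targeting protected attributes.
    PROTECTED_ATTRIBUTES = {"ethnicity", "religion", "sexual_orientation", "disability"}

    def check_query(filters: dict) -> dict:
        """Reject any query whose filter conditions single out a protected group."""
        flagged = PROTECTED_ATTRIBUTES & set(filters)
        if flagged:
            raise ValueError(f"Query blocked: filters on protected attributes {sorted(flagged)}")
        return filters

    check_query({"postal_code": "M5V"})            # allowed
    check_query({"religion": "x", "age": ">65"})   # raises ValueError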

Base-level: In our technical review, we apply quality-assurance procedures to test whether the AI is:

  • Open: The vendor and the technology have been subject to third-party peer review.
  • Reputable: The AI technologies used come from reputable, industry-leading vendors or open-source projects.
  • Free of bias: We ensure that quality control and testing for bias are part of the software management process.

We document all data sources and their inter-linkages, and validate whether these inter-linkages could result in any discriminatory outcomes. We provide the AI with sample data and review its resulting deductions or assertions for bias.
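As a simplified illustration of such a bias review (the demographic-parity metric and tolerance used here are example choices, not a fixed standard):

    # Simplified bias check: compare positive-outcome rates across groups
    # (demographic parity). Sample data and tolerance are hypothetical.
    samples = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]

    def approval_rate(group: str) -> float:
        rows = [s for s in samples if s["group"] == group]
        return sum(s["approved"] for s in rows) / len(rows)

    gap = abs(approval_rate("A") - approval_rate("B"))
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # hypothetical tolerance
        print("Potential bias: review the model's deductions for this attribute")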

KI Design is deeply committed to a “human-centered approach to AI,” as defined by the G20 AI Principles (2019):

  1. Inclusive growth, sustainable development and well-being
  2. Human-centered values and fairness
  3. Transparency and explainability
  4. Robustness, security and safety
  5. Accountability

Our ethical AI use process ensures that procedural fairness and equitable outcomes are central to each stage of the solution design and implementation process.