Ethical AI Use Process
KI Design follows a three-tier approach for evaluating the ethical appropriateness of AI technologies. This Ethical AI Use Process includes an analysis of the project’s legislative compliance; an analysis of executive, staff, and user experience of the pilot version; and a review of the technology. We use each of these analyses to assess the potential for risk, bias, and unethical use.
Legislative Review: We begin our legislative review by evaluating alignment with the Ten Fair Information Principles of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA). Having identified privacy risks, we proceed to explore options for mitigation. If there are issues that cannot be mitigated without affecting the functionality of the AI solution, we apply the Four-Step Test from R. v. Oakes (1986) to determine whether circumstances justify the limitation of an individual’s rights. These standards provide a framework for ethical and unbiased data management practices.
Pilot Analysis: We verify that no processes, tools, or user interactions involve unethical use. For example, we review processes and operational activities that could result in biased outcomes, and establish mechanisms to limit queries that target marginalized groups.
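As an illustration, a mechanism limiting queries that target marginalized groups could take the form of a simple screening filter that routes flagged queries to a human reviewer. The function name, term list, and routing logic below are hypothetical, offered only as a minimal sketch:

```python
# Hypothetical sketch: queries referencing protected attributes are
# flagged for human review instead of being passed to the AI system.
# The term list here is illustrative, not exhaustive.

PROTECTED_TERMS = {"ethnicity", "religion", "disability", "gender"}

def screen_query(query: str) -> dict:
    """Flag a query if it references any protected attribute."""
    words = set(query.lower().split())
    hits = sorted(words & PROTECTED_TERMS)
    return {"query": query, "flagged": bool(hits), "matched_terms": hits}

result = screen_query("List employees by disability status")
# result["flagged"] is True, so the query is routed to a reviewer
```

In practice such a filter would be one layer among several; keyword matching alone misses indirect targeting, which is why the pilot analysis also reviews operational activities for biased outcomes.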
Technical Review: We use quality procedures to test whether the AI is:
- Open: The vendor and the technology have been subject to third-party peer reviews
- Reputable: The AI technologies used come from reputable, industry-leading vendors or open-source projects
- Free of bias: We ensure that quality control and testing for bias are part of the software management process
We document all data sources and their inter-linkages, and assess whether these inter-linkages could produce discriminatory outcomes. We then provide the AI with sample data and review its resulting deductions or assertions for bias.
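One minimal way to review sample outputs for bias is a demographic-parity check: compare the rate of favourable outcomes across groups and flag any gap above a review threshold. The function name, sample data, and 0.2 threshold below are illustrative assumptions, not part of KI Design's documented procedure:

```python
from collections import defaultdict

def parity_gap(records, group_key="group", outcome_key="approved"):
    """Return favourable-outcome rate per group and the largest gap between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Illustrative sample of AI decisions on test data
sample = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates, gap = parity_gap(sample)
# rates == {"A": 1.0, "B": 0.5}; a gap of 0.5 would exceed a 0.2 review threshold
```

A gap above the threshold would not by itself prove unethical behaviour, but it would trigger the deeper review of data sources and inter-linkages described above.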