Ryan Phillips
Matthew Scott

March 23, 2021

This article was originally published in the Canadian Bar Association's periodical, National Magazine, on March 16, 2021.


When it introduced its new privacy bill, the federal government took its first major step in modernizing the regulation of artificial intelligence in Canada.

Bill C-11, tabled in November 2020, will, among other things, create a new Consumer Privacy Protection Act (CPPA), which contains a novel addition to Canadian privacy law: the right to an “explanation” concerning decisions made by an automated decision system. It’s a welcome measure, but the government must now give organizations better guidance on what constitutes a meaningful explanation.

Like the Personal Information Protection and Electronic Documents Act (which will become the Electronic Documents Act), the CPPA will apply to every organization that collects, uses, or discloses personal information in the course of commercial activities (apart from enumerated exceptions). It will also apply to every organization in respect of the personal information of its employees (or applicants for employment) in connection with the operation of a federal work, undertaking, or business. In addition, it will apply to organizations within a province's jurisdiction in the absence of substantially similar provincial legislation.

Under section 63(3) of the CPPA, individuals have a right to an explanation of an organization's use of an automated decision system to make a prediction, recommendation, or decision about them, and of how their personal information was used in the process.

“Automated decision system” means “any technology that assists or replaces the judgement of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets.”

This measure is meant to balance individuals’ privacy interests against the needs of organizations to collect personal information in a commercial world that is increasingly reliant on AI.

Suppose, then, that an organization collects your personal information in the course of commercial activities and subjects you to an automated decision. In that case, you are entitled to an explanation of (i) the decision and (ii) how your relevant personal information was obtained. Individuals who take issue with the explanation provided may file a complaint with the Privacy Commissioner under s. 82(1) of the CPPA.

But the proposed CPPA does not codify any requirements or standards for the content of explanations. The only guidance comes from section 66(1), which states that the explanation must be provided in “plain language.”

This causes two problems.

First, the lack of a standard for explanations may undermine the protections or rights that the CPPA purports to give Canadians. Any benefits an individual might receive from an explanation are potentially lost if each organization is able to choose what level of detail is provided.

Second, and relatedly, it opens the door to inconsistent and inadequate compliance by organizations that cannot know what is required of them. This should concern organizations, as they may fall short of their obligations and attract numerous complaints from individuals. Or they might provide overly detailed explanations, which may, depending on the nature of the organization and the number of requests it receives, be excessively burdensome.

Another feature of section 63(3) compounds the problems: the CPPA does not take a nuanced approach to the nature of automated decisions. All are treated as equally significant, and all have the potential to require an explanation.

Contrast this with Article 22 of the European Union’s General Data Protection Regulation (GDPR), which provides that individuals “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The approach taken under the GDPR is to provide a remedy only when an automated decision produces “legal effects” or other similarly significant effects.

Canada should consider a similar approach that expressly recognizes that some automated decisions are more impactful than others.

The government should codify standards for s. 63(3) explanations that correspond to the significance of the automated decision and allow proportionate responses. Low-significance decisions should call for lighter, more pragmatic explanations, while high-significance decisions should command more substantive ones.

Fortunately, this “sliding scale” of explanations would not be difficult to implement. The federal government has already created the beginnings of a workable model elsewhere, in its Directive on Automated Decision-Making (DADM). The DADM is a governmental directive for the use of AI by federal administrative decision-making bodies. Appendices B and C to the DADM describe “Impact Assessment Levels” and prescribe proportionate requirements for each level. They provide a useful model for the suggested sliding-scale approach to section 63(3) explanations.

While the appendices to the DADM could not simply be inserted into Bill C-11, the CPPA does need more detailed requirements. At the very least, the phrase “meaningful explanation” must be defined.

Such an approach would better serve individuals and organizations, as well as the Office of the Privacy Commissioner in its enforcement of the CPPA.

The regulation of AI is likely to remain a work in progress that will raise many questions about privacy and data protection. It is therefore a good objective for the CPPA to provide Canadians with algorithmic transparency in the automated decisions made by organizations using personal information.

However, without codified requirements for the content of the explanations that organizations must give, Canadians may be deprived of some of the protections contemplated under the CPPA. As it stands, individual Canadians may be given meaningless explanations, organizations may be overburdened by responding to requests about trivial automated decisions, and the Privacy Commissioner will lack the statutory guidance needed to interpret and enforce proportionate standards.

A sliding scale of codified requirements should be added to the proposed CPPA to ensure proportionality between the impact of the automated decision and the thoroughness of the explanation. The economy and certainty of codified standards would benefit individuals and businesses alike.


Ryan Phillips is a partner at JSS Barristers.

Matthew Scott is a student-at-law at JSS Barristers.