
Leading Platform to Operationalize AI Governance

Canari is an expert AI risk assessment platform that delivers rigour at market speed.


Through its collaborative process, Canari aligns different members of your organization on the risks and benefits of an AI initiative, and provides an actionable plan to reduce risk.

Canari tracks more than 80 frameworks and guidelines on responsible AI to consolidate and score best practices.

With Canari you can innovate with confidence.

Operationalize and measure data ethics

Measure your AI system's positive and negative impacts as well as organizational readiness

Assess and quantify your AI risk

Communicate benefits and risk with clear and colourful visuals

Standardize AI risk assessment criteria and communicate measurable risk.

Navigate AI governance

Canari gathers international best practices in responsible AI and ethical considerations to offer your organization leading guidance in this highly fluid policy and regulatory environment.


Unique Toronto law firm to focus on responsible use of AI technology

Read article >>

Canari is powered by leading AI lawyers

Canari was developed by leading AI experts at INQ Law. INQ Law is recognized as a unique firm focused on responsible and trustworthy AI development and operationalization, with deep knowledge of data-related enterprise risk management.


INQ lawyers have pioneered areas of AI policy, including as lead author of a national law firm’s first White Paper on AI and law, and as principal architect of the federal government’s Algorithmic Impact Assessment.


We understand the ethical and legal issues surrounding AI design and development. We are practical and forward-looking, and we can help you innovate responsibly.

How Canari works

Canari is your AI risk barometer. Canari helps organizations reduce friction to innovation, allocate resources for growth, and accelerate impact, all while building a culture of trust and responsible innovation.







What is the cost of not getting it right?

Lack of trust costs brands




of Americans and Europeans distrust AI systems


1. Accenture, Exceed Expectations with Extraordinary Experiences: What Today's Customers Want Isn't What You Think (2017)

2. Baobao Zhang and Allan Dafoe, Center for the Governance of AI, Future of Humanity Institute, University of Oxford, Artificial Intelligence: American Attitudes and Trends  (January 2019) 

Reduce Friction, Accelerate Innovation, Build Trust.
