5 questions to ask before implementing your corporate AI project

Across departments and the technology stack, AI is being used in a variety of ways: underwriting solutions that assess borrowers with little to no credit history, machine learning models that predict product trends. These are just two examples among many that have shown AI to be a worthy investment. Yet the risks can't be ignored.

As corporate AI teams come together, they should consider their people, data, risks, controls, and security right from the start.

People: Are your key people and teams aligned on the AI project?

Your teams will drive your AI initiative forward and help it reach its full potential. Not having the right teams involved from the start can mean wasted iterations and lost time and productivity.


As you onboard the relevant teams, inform them about the AI project and align them on its goals. It's also important to understand, at the outset, what your people know about AI and about the technology you're looking to implement. That way you can proactively plan to fill any talent or training gaps.


Data: Does data usage for the AI project comply with best practices?

AI depends on a steady stream of data. Right now there is considerable uncertainty around how companies may use data, exposing them to unforeseen risks that can erode reputation and customer trust. AI teams need to take the time to understand their company's data practices and guiding ethical frameworks, and of course any legislative requirements. A good question to ask: do we have the authority to use the company's data in this new way? Looking outward, teams will also need ways to stay on top of changing regulations and an evolving environment.
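
For illustration only, here is a minimal sketch of what such an authority check could look like, assuming a purpose-based data catalog. The registry, its field names, and the check_data_authority function are hypothetical, not a real governance API:

# Hypothetical sketch: verify that a dataset's declared purposes cover the
# AI project's intended use before any data is pulled for the project.
# DATA_REGISTRY stands in for a real data governance catalog.

DATA_REGISTRY = {
    "customer_transactions": {
        "declared_purposes": {"fraud_detection", "credit_underwriting"},
        "contains_pii": True,
    },
}

def check_data_authority(dataset: str, intended_purpose: str) -> bool:
    """Return True only if the intended use was declared when the data was collected."""
    entry = DATA_REGISTRY.get(dataset)
    if entry is None:
        raise KeyError(f"{dataset!r} is not in the governance catalog")
    return intended_purpose in entry["declared_purposes"]

# Underwriting was a declared purpose; trend prediction was not.
assert check_data_authority("customer_transactions", "credit_underwriting")
assert not check_data_authority("customer_transactions", "trend_prediction")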


Risks: Does the AI project open the company up to risk?

AI technology isn't inherently good or bad; the difference lies in how it's used and managed in practice. AI teams need to think carefully through the full AI project, weighing the risks and benefits of the system for the company, its customers, and its users. It's also important to assess risks related to third-party players and their products and technologies, so that their risks don't become yours.
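
One lightweight way to make that weighing explicit is a simple risk register that scores likelihood against impact. The sketch below is purely illustrative; the entries, scales, and review threshold are invented and should be tuned to your own risk appetite:

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (near certain) -- invented scale
    impact: int      # 1 (minor) to 5 (severe) -- invented scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical entries, including a third-party dependency.
register = [
    Risk("Biased underwriting outcomes", likelihood=3, impact=5),
    Risk("Vendor model retired without notice", likelihood=2, impact=4),
]

REVIEW_THRESHOLD = 10  # invented cutoff, not a standard
for risk in register:
    if risk.score >= REVIEW_THRESHOLD:
        print(f"REVIEW: {risk.name} (score {risk.score})")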

Controls: Are the right controls in place to monitor the AI project?

AI is an evolving technology: it learns and adjusts as it processes more data. AI teams need to mirror this cycle as they build and manage their AI systems. Strong due diligence at the start is smart, but teams will also need to reapply the same controls at regular intervals: as the system evolves, as the company adjusts its data practices, as the regulatory environment progresses, and more.
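
As a concrete example of one such recurring control, the sketch below checks a model's live input data for drift against its training baseline using the population stability index. The 10 bins and the 0.2 alert threshold are common rules of thumb, not requirements:

import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Rough drift measure: how far the live distribution has shifted from baseline."""
    # Bin edges come from the training-time (baseline) distribution.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip both samples into the baseline range so every value lands in a bin.
    base_counts, _ = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)
    live_counts, _ = np.histogram(np.clip(live, edges[0], edges[-1]), bins=edges)
    base_pct = base_counts / base_counts.sum() + 1e-6  # epsilon avoids log(0)
    live_pct = live_counts / live_counts.sum() + 1e-6
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature distribution at training time
live = rng.normal(0.5, 1.0, 10_000)      # same feature in production, drifted

psi = population_stability_index(baseline, live)
if psi > 0.2:  # common rule-of-thumb alert level
    print(f"Drift alert: PSI = {psi:.2f}; consider review or retraining")

Run on a schedule (nightly, or on each data refresh), a check like this turns "apply controls at intervals" from a policy statement into an operational habit.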

Security: Is this AI project secure?

This question is one to ask iteratively over the life of each AI system. As companies embed more technology into their organizations, they'll need to consider how they secure and responsibly leverage both the data their systems use and the outputs those systems create. AI teams can help by adopting processes that extend well past implementation. Failing to align tightly with security and IT professionals can leave the AI project a target for attackers and potentially lead to a security breach.
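
As one small, illustrative example of a process that extends past implementation, the sketch below logs a tamper-evident hash of each model input and redacts obvious PII from the stored output. The function, field names, and email regex are simplified stand-ins for real security tooling:

import hashlib
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

# Crude stand-in for real PII detection; matches simple email addresses only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def record_prediction(user_input: str, model_output: str) -> None:
    """Store a hash of the raw input (for audit) and a redacted copy of the output."""
    input_digest = hashlib.sha256(user_input.encode()).hexdigest()
    redacted = EMAIL_RE.sub("[REDACTED]", model_output)
    log.info("prediction input_sha256=%s output=%s", input_digest, redacted)

record_prediction(
    "What is the credit limit for jane.doe@example.com?",
    "Customer jane.doe@example.com qualifies for a $5,000 limit.",
)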

These questions are a start to assessing the risks of your AI system. As the rules of engagement in the AI space continue to shift, other factors will need to be considered.



Automate your AI Risk Assessment with Canari