Episode 390 — AI Risks: A Focused and Realistic Approach

The compliance industry appears to have been taken over by AI-this and AI-that. Third-party risk bleeds into major AI risks, corporate governance needs to incorporate AI risks, and policies and procedures have to address AI risks, while of course no risk assessment is worth its salt unless it includes a discussion of dramatic AI risks.
My first response is: whoa, let's all take a deep breath. The best self-help tactic when experiencing anxiety is to take a deep breath, a proven remedy. The AI discussion is veering off into a racing-brain phenomenon in which the compliance profession is sprinting to keep up with the newest hypothetical risk.
So let’s take a calm and deliberate review of some of the key issues.
As an initial step, we have to divide the risk population, not by country but by how AI is actually being used within your company.
The key question is whether AI is central to your company's core business. If you embrace AI to make automated lending decisions, score customers' credit, make medical diagnoses or recommendations, or make hiring decisions, then your company's AI risks are materially different from those of companies that do not use AI at their core. Another way to describe this: if your company relies on AI for algorithmic decision-making, your risk profile is likely to be higher than that of companies that do not use AI for any key algorithmic decisions.