Soothing the AI-Risk Hysteria: A Focused Approach to AI Risks (Part I of II)

From my perspective, hopefully a reasonable one, there is a little too much AI-risk hype. This is not to belittle the experts or ignore legitimate risk concerns, but the conversation is getting a little carried away.
The compliance industry appears to have been taken over by AI-this and AI-that. Third-party risk bleeds into major AI risks, corporate governance needs to incorporate AI risks, policies and procedures have to incorporate AI risks, and of course no risk assessment is worth its salt unless it includes a discussion of dramatic AI risks.
My first response is: whoa, let's all take a deep breath. The best self-help tactic when experiencing anxiety is to take a deep breath, a proven remedy. The AI discussion is veering into a racing-brain phenomenon, with the compliance profession sprinting to keep up with the newest hypothetical risk.
So let’s take a calm and deliberate review of some of the key issues.
As an initial step, we have to divide the risk population, not by country but by how AI is being used within your company.
The key question is whether AI is central to your company's core business. If you embrace AI to make automated lending decisions, score customers' credit, generate medical diagnoses or recommendations, or drive hiring decisions, then your company's AI risks are materially different from those of companies that use AI only in non-core functions. Put another way, if your company relies on AI for algorithmic decision-making, your risk profile is likely to be higher than that of companies that do not use AI for any key algorithmic decisions.

If we set these core-AI risk profiles aside, what issues are most likely to raise AI risks and potential harm to the company?
When you boil it down, the primary risks fall into one of five buckets:
1. Data Privacy and Cybersecurity
2. Third-Party Oversight
3. Information Accuracy
4. Improper Employee Use
5. AI Regulatory Risks
Once these risks are identified, the challenge is to avoid designing and implementing unworkable or burdensome controls. That process requires a careful balancing of risks and outcomes.
To address these risks, companies need to establish clear policies and procedures tailored to accurate information about how AI is used and relied upon across corporate functions. This is a difficult process, since many companies are unsure of the extent to which AI is being used or built into certain functions.

This knowledge gap is especially pronounced when it comes to third parties. Companies are not asking third parties about their use of AI because doing so opens up a whole new set of risks and potential abuses. The current state of AI ignorance is akin to prior years, when third-party risk management programs were belatedly revised to incorporate third-party cyber risks. The same delay and eventual catch-up is occurring now with AI and third parties.
Another challenging area in AI risk management is the importance of human monitoring, oversight, and auditing of AI practices. When it comes to content accuracy and moderation, companies have to dedicate resources to allocating humans, not machines, to identify AI content risks and potential errors. AI has not learned how to regulate its own content. Until it does, the risks of misinformation and intellectual property violations are too great.