
AI Compliance Programs: Filling in the Gaps and Mitigating Risks (Part II of II)

We are at an important inflection point. AI technologies are developing rapidly, and we are witnessing a historic transformation: in the technology itself, in its impact on businesses and society, and in the first significant steps being taken to regulate AI and develop appropriate risk management strategies.

We have experienced important inflection points throughout our history. AI technology, however, is evolving at such a rapid pace that it is difficult for anyone to plan and respond in the usual fashion, with careful crafting of controls and risk mitigation strategies.  In many respects, compliance professionals are about to face a significant challenge.  But, as always, compliance professionals have the intelligence, professional capabilities and integrity to rise to it.

AI regulation is catching up, and you can rest assured that we will see significant steps taken to curb potential excesses and harm.  My only concern is that the hype surrounding AI will produce a disproportionate response.  There are real, tangible benefits to be secured from AI, and excessive regulation should not stifle innovation in this area.

AI Risk Management

As I noted in Part I, the specific benefits and risks of using AI will depend largely on the industry, the size of the company, and the use cases for AI functions.  Like other technology risks, AI risk management draws on a variety of sources and choices for potential risk management systems.  For example, the National Institute of Standards and Technology (“NIST”) offers valuable guidance on risk management frameworks (“RMFs”): its AI Risk Management Framework organizes the work into four functions (Govern, Map, Measure and Manage), and its cybersecurity guidance is an example of how these principles apply to a complex technology issue.  The OECD offers helpful guidance in this area as well.

Cybersecurity is of paramount importance to any AI use.  The ability of cyber actors to exploit AI vulnerabilities has to be at the top of the list of concerns for every company relying on AI technology.  Cybersecurity risks are difficult enough to mitigate on their own, and they are likely to increase significantly as AI adoption grows.

Whatever framework is built, companies have to ensure that AI risks are identified and mitigated throughout the AI life cycle: selection, testing, implementation, evaluation, monitoring and auditing.  In this process, companies have to ensure there is transparency into AI functions, uses and issues so that there is accountability.  To accomplish this, companies have to identify all AI uses, map those uses across the company, document them, share that information and provide appropriate transparency throughout the organization.
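
To make the inventory and mapping step concrete, here is a minimal sketch of how an AI use-case register might be structured.  The field names, lifecycle stages and example entry are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    # Stages mirror the AI life cycle described above.
    SELECTION = "selection"
    TESTING = "testing"
    IMPLEMENTATION = "implementation"
    EVALUATION = "evaluation"
    MONITORING = "monitoring"
    AUDITING = "auditing"

@dataclass
class AIUseCase:
    """One documented AI use, mapped to an accountable owner."""
    name: str                       # what the AI does
    business_owner: str             # accountable function or individual
    stage: LifecycleStage           # current point in the life cycle
    data_sources: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)

# A single company-wide register supports transparency and accountability:
# every AI use is documented in one place that can be shared and audited.
register = [
    AIUseCase(
        name="customer support chatbot",        # hypothetical example
        business_owner="Customer Operations",
        stage=LifecycleStage.MONITORING,
        data_sources=["support tickets", "product FAQ"],
        known_risks=["inaccurate answers", "data leakage"],
    ),
]
```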

A critical component of AI risk management is accurately uncovering and mapping data sources, data providers, changes in data sets, and ultimately how such data is integrated into the company’s AI system.
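
One illustrative way to operationalize this mapping is a simple lineage record that ties each AI system back to its data sources and flags sources that have changed or gone stale.  The system names, dates and 90-day threshold below are assumptions for the sketch, not recommended values.

```python
from datetime import date, timedelta

# Illustrative lineage map: each AI system is traced to the data sources
# and providers it depends on (all names and dates are hypothetical).
DATA_LINEAGE = {
    "fraud-detection model": [
        {"source": "transaction history", "provider": "internal warehouse",
         "last_updated": date(2024, 1, 15)},
        {"source": "credit bureau feed", "provider": "ExampleBureau Inc.",
         "last_updated": date(2023, 11, 2)},
    ],
}

def stale_sources(system: str, max_age_days: int = 90) -> list[str]:
    """Flag data sources not refreshed within the window; a stale or
    changed source is a common trigger for re-validating the model."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [rec["source"] for rec in DATA_LINEAGE.get(system, [])
            if rec["last_updated"] < cutoff]

print(stale_sources("fraud-detection model"))
```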

Further, as with any other risk, companies have to extend third-party risk management to incorporate AI risk.  As with cybersecurity and data privacy issues, companies have to build AI risks into their third-party onboarding and monitoring procedures.  Due diligence questionnaires and onboarding procedures should include questions about a third party’s use of AI technology and its data privacy and retention policies.  As part of this inquiry, third parties have to provide information about their AI model validation and maintenance procedures.
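
As a sketch of how such questions might be tracked during onboarding, the checklist below shows one possible set of AI-related due diligence items.  Both the questions and the screening logic are assumptions for illustration, not a standard instrument.

```python
# Illustrative AI-related additions to a third-party due diligence
# questionnaire (the questions themselves are assumptions).
AI_DUE_DILIGENCE_QUESTIONS = [
    "Does your product or service use AI or machine learning?",
    "What data do you collect, and what are your retention policies?",
    "How do you validate AI models before deployment?",
    "How often are models re-validated, and who approves changes?",
    "Have you had AI-related security or privacy incidents?",
]

def open_items(answers: dict) -> list[str]:
    """Return unanswered questions so onboarding can be held until the
    third party supplies the required AI risk information."""
    return [q for q in AI_DUE_DILIGENCE_QUESTIONS if not answers.get(q)]

# Example: a vendor that has answered only the first question.
vendor_answers = {AI_DUE_DILIGENCE_QUESTIONS[0]: "Yes"}
print(f"{len(open_items(vendor_answers))} outstanding due diligence items")
```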

AI Compliance Responsibility

Given the nature of AI risks, companies have to consider who should assume responsibility for AI governance, compliance and risk management.  Cybersecurity shares some common risk attributes; accordingly, it may be natural for a chief technology officer or chief information security officer to assume responsibility for AI compliance and risk management.

AI risk management, like cybersecurity, requires cross-functional risk management structures and governance.  The members of this cross-functional operation should include information technology, data privacy, legal, compliance, marketing and relevant business representatives.  Cross-team cooperation is a critical element of an effective compliance program.

Corporate boards should oversee AI risks, just as they do other risks.  There is no need to establish a separate committee or assign the issue to a subset of the entire board.  Corporate boards should respond in a measured way, without overreacting to the AI hype.
