
Emerging AI Risk and Compliance Frameworks (Part I of II)

A new compliance cottage industry has grown up around artificial intelligence. We are at such an early stage of AI development that companies are still figuring out how to employ the technology. Some industries, however, such as financial services, have already been using AI for fraud detection and other tasks. I expect financial institutions will set the tone for many of the compliance practices around AI.

There is no question that AI holds terrific promise. At the same time, much of the hype surrounding AI is just that: hype. Until there is more certainty surrounding AI technology, I expect we will witness a lot of bloviating. Hype aside, corporate boards, senior executives and business developers need to pay attention while the dust settles. The AI industry is moving so fast that the sooner we start to focus, the nimbler our response will be.

A few things we know for certain: AI can be a very productive tool. It can easily reduce costs and increase efficiency. I do not share the doomsayers' perspective that AI will result in job losses; I look at it from a positive standpoint. By making companies more efficient, AI will expand the economy, open new opportunities and drive growth. In my view, we are on the cusp of a huge positive economic jolt, much like the shift from the pre-Internet to the post-Internet era. As I always say, change can be good.

As with every aspect of a business, new technology carries risks, and AI certainly presents risks that need to be mitigated. This in turn leads to the necessary question:

How should a company structure its AI risk and compliance program?

Luckily, ethics and compliance principles are easily adaptable to AI risks.  The compliance profession is more than capable of building effective compliance programs around AI operations.

The Business Use Case

A starting point is to identify the business use case: how is the business using, or considering using, AI, and for what functions?

The answer will turn on the industry, the size of the company and the potential upside. Financial institutions, tech companies, pharmaceutical and medical device makers, and transportation/logistics companies are likely to be significant users of AI technology. AI offers these firms significant benefits and efficiencies in otherwise complicated or data-intensive functions.

Balanced against these benefits, companies have to get up to speed on AI risks that may offset the business use case for integrating AI. Some companies may be surprised to learn that a generative AI application can increase the risk of fraud, and they will need to factor risk mitigation costs and capabilities into the review of any business use case. In some cases, the capabilities to identify and mitigate such fraud are still in their infancy. This is why AI presents so many risks: benefits and costs are moving targets, and companies are not used to moving so quickly within a shifting analytical framework.

The Right Structure: 3-Lines v. Holistic Approach

Even with this rapidly shifting uncertainty, AI compliance programs are likely to be incorporated into existing 3-Lines compliance systems: the business assumes front-line mitigation responsibilities, traditional corporate risk management is responsible for the second line, and internal audit bats third in the line-up. This vertically integrated model is sound and well-tested. But can it keep pace with rapidly evolving AI technology?

Some have suggested a more principles-based or holistic approach, perhaps akin to an AI czar or task force model, focused on AI benefits, risks and mitigation. A strong argument can be made for this approach given the unique aspects of AI. This top-down model relies on senior executives to set enterprise risk management policies and to oversee their implementation through a roving or limited structure. It may be a good starting point until a company's specific uses of AI, and the risks they present, are better defined internally.
