AI Governance Best Practices (Part II of II)

The new world of AI presents significant benefits and risks that need to be addressed within an overall governance framework. Fortunately, the principles that apply here will surprise no one: governance, compliance, legal, and risk professionals can quickly adapt well-known principles to the new frontier of AI.
Starting at the Board level, AI oversight responsibility should be assigned to a dedicated board committee, such as Audit, Risk, or a similar body. Board members should receive training on AI technology and risks, as well as on legal and compliance controls. Depending on the company's specific level of AI use and the attendant risks, Board members should receive regular AI compliance reports, at least quarterly.
The Board should receive reports on the number of AI compliance incidents, the identification of any significant AI issues, testing and audit findings, and overall model and use case performance.
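To make this reporting concrete, a quarterly Board report could be built around a small set of structured metrics. The sketch below is purely illustrative; the field names and structure are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the data a quarterly AI compliance report to the
# Board might carry; field names are illustrative assumptions, not a standard.
@dataclass
class QuarterlyAIComplianceReport:
    quarter: str                        # e.g., "2024-Q3"
    incident_count: int                 # number of AI compliance incidents
    significant_issues: List[str]       # any significant AI issues identified
    audit_findings: List[str]           # testing and audit findings
    model_performance_notes: List[str]  # overall model and use case performance

    def summary(self) -> str:
        return (f"{self.quarter}: {self.incident_count} incidents, "
                f"{len(self.significant_issues)} significant issues, "
                f"{len(self.audit_findings)} audit findings")

# Example usage (hypothetical data for illustration only)
report = QuarterlyAIComplianceReport(
    quarter="2024-Q3",
    incident_count=2,
    significant_issues=["Chatbot produced unverified legal guidance"],
    audit_findings=["Access logs for training data were incomplete"],
    model_performance_notes=["Customer-service model accuracy stable"],
)
print(report.summary())
```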
Below the Board level, a Chief AI Ethics/Compliance Officer should be appointed. Companies may be tempted to simply add this responsibility to the existing Chief Compliance Officer's job, but that decision should be evaluated carefully.

In addition to the designation of a Chief AI Ethics/Compliance Officer, a senior-level AI Compliance Committee should be established that includes stakeholder representatives from Legal, Compliance, IT, Data and Technology, and Human Resources.
Every company that employs AI should adopt an AI Use Policy that addresses the relevant issues: (1) define acceptable and prohibited AI use cases; (2) establish policies and controls for sourcing of data, mitigating bias, and human review and oversight of AI-generated content; (3) set documentation requirements for design, training, testing, and monitoring functions; and (4) extend AI standards to third-party due diligence, audit rights, and contractual compliance clauses.

To mitigate risks, an AI Use Policy should define distinct risk levels for the company's use of AI. Before deployment, each AI system should be evaluated and assigned a risk level. Once launched and in use, the system should be subject to periodic audits and testing. If a use case is classified as high-risk, the company must design controls to protect against potential bias, IP risks, and collateral legal consequences.
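One way to operationalize these tiers is a simple mapping from risk level to required controls. The sketch below assumes a three-tier scheme; the tier names and control lists are illustrative assumptions, not drawn from any particular regulation.

```python
from enum import Enum

# A minimal, hypothetical three-tier risk scheme for AI use cases; the tier
# names and control lists below are illustrative assumptions, not a standard.
class AIRiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Controls required before and after deployment, keyed by tier (assumed mapping).
REQUIRED_CONTROLS = {
    AIRiskTier.LOW: ["pre-deployment evaluation"],
    AIRiskTier.MEDIUM: ["pre-deployment evaluation", "periodic audit and testing"],
    AIRiskTier.HIGH: [
        "pre-deployment evaluation",
        "periodic audit and testing",
        "bias controls",
        "IP risk review",
        "legal sign-off",
    ],
}

def controls_for(tier: AIRiskTier) -> list:
    """Return the controls a use case at this tier must satisfy."""
    return REQUIRED_CONTROLS[tier]

# Example: a high-risk use case pulls in the full control set.
print(controls_for(AIRiskTier.HIGH))
```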
To encourage reporting of potential AI concerns, the company should expand any existing incident management system to cover AI-related concerns and incidents.
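In practice, expanding the system can be as small a step as adding AI categories to the existing intake taxonomy. The category names in the sketch below are hypothetical; a real system would map them to its own intake forms and routing rules.

```python
# Hypothetical: extending an existing incident taxonomy with AI categories.
# Both category sets are assumed examples, not an actual company's taxonomy.
EXISTING_CATEGORIES = {"fraud", "harassment", "data_breach"}
AI_CATEGORIES = {"ai_bias_concern", "ai_inaccurate_output",
                 "ai_privacy_incident", "ai_ip_infringement"}
INCIDENT_CATEGORIES = EXISTING_CATEGORIES | AI_CATEGORIES

def validate_category(category: str) -> bool:
    """Accept a report only if it maps to a known incident category."""
    return category in INCIDENT_CATEGORIES
```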
For Legal and Compliance, it is critical to monitor the rapidly changing laws and regulations that apply to AI use and to update internal policies accordingly. One high priority is to ensure that data privacy requirements (e.g., GDPR) are integrated into AI system design. Documentation of all legal and compliance activities is important in the event of regulatory inquiries.
As part of any data privacy structure, AI data practices must include source validation, accuracy checks, and testing. Data security controls should ensure that AI data is encrypted and that appropriate access controls are assigned. Data privacy requirements for anonymization and minimization have to be built in as well.
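As a rough sketch of what built-in minimization and pseudonymization can look like before data reaches an AI pipeline (the field names, allowlist, and hashing approach are assumptions for illustration, not a complete privacy solution):

```python
import hashlib

# Hypothetical illustration of data minimization and pseudonymization applied
# before records reach an AI pipeline. Field names are assumed for the example;
# a real deployment needs a vetted privacy design, not this sketch.
ALLOWED_FIELDS = {"age_band", "region", "product_category"}  # minimization allowlist
IDENTIFIER_FIELDS = {"customer_id"}                          # pseudonymize, don't drop

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymization,
    not full anonymization: re-identification risk must still be assessed)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimize_record(record: dict, salt: str) -> dict:
    """Keep only allowlisted fields; pseudonymize identifiers; drop the rest."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    for k in IDENTIFIER_FIELDS & record.keys():
        out[k] = pseudonymize(str(record[k]), salt)
    return out

# Example: the email never reaches the model input; the ID is pseudonymized.
raw = {"customer_id": "C-1042", "email": "jane@example.com",
       "age_band": "35-44", "region": "EU", "product_category": "loans"}
print(minimize_record(raw, salt="per-dataset-secret"))
```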

AI documentation should include model documentation covering the AI system itself, its purpose, the datasets used, the underlying algorithms (if any), and known limitations. The output of each AI system should be defined and assessed for risk purposes. Customer and employee notice requirements should be drafted for use as needed.
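A lightweight way to standardize this is a model-card-style record kept for each system. The structure below is one possible sketch; the field names are assumptions chosen to mirror the items above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model documentation record, loosely inspired by "model cards";
# all field names here are assumptions mirroring the documentation items above.
@dataclass
class ModelDocumentation:
    system_name: str
    purpose: str                      # intended use of the AI system
    datasets: List[str]               # training and evaluation data sources
    algorithms: List[str]             # underlying algorithms, if any
    limitations: List[str]            # known limitations and failure modes
    output_description: str           # what the system produces
    output_risk_assessment: str       # risk assessment of that output
    notice_requirements: List[str] = field(default_factory=list)

# Example usage (hypothetical system for illustration only)
doc = ModelDocumentation(
    system_name="resume-screening-assistant",
    purpose="Rank inbound resumes for recruiter review",
    datasets=["historical hiring data (2019-2023)"],
    algorithms=["gradient-boosted trees"],
    limitations=["not validated for non-English resumes"],
    output_description="Ranked candidate list with scores",
    output_risk_assessment="High: affects employment decisions; bias testing required",
    notice_requirements=["candidate notice", "recruiter training"],
)
```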
AI training programs are important to ensure that employees learn responsible AI use, bias recognition, and red-flag risk escalation. Responsible AI use should be incorporated into corporate values and made an element of employee performance evaluations.
Crisis and incident management procedures should be established to respond to potential violations, collateral legal harms, and other reputational issues. A cross-functional team should be assigned to handle these issues.