Board Oversight and Monitoring of Artificial Intelligence Risks

Corporate boards face a panoply of risks – and the nature of these risks is quickly evolving.  Cybersecurity has risen to the top of the list of corporate risks.  Add to that the new SEC regulations on cybersecurity disclosures, and board members face serious and escalating risks surrounding ransomware attacks, data breaches and other technical issues.

The challenge – board members are not cyber experts, nor do they really like to focus on technical issues.  Not to be too simplistic or harsh, but board members usually ask CISOs, “Are we okay?” and then want to move on.

Just to make everything even more complicated, now let’s ladle on a new and quickly growing risk for boards – artificial intelligence.  By this point in the board meeting, eyes will have glazed over.

Nonetheless, directors have significant oversight obligations covering artificial intelligence.

First, if properly applied, artificial intelligence can deliver significant benefits to businesses.  Artificial intelligence can increase the accuracy and speed of processes that otherwise depend on human judgment, and companies are spending more money on artificial intelligence capabilities.  But companies have to be careful in this area – we all have heard about Zillow’s disastrous implementation of a home-valuation algorithm that was riddled with problems, forcing Zillow to shut down its new product offering.

Companies have to identify and assess the potential risks.  We still do not know whether and how the federal and state governments may impose regulatory regimes on artificial intelligence.  Congress and the Executive Branch are focusing on artificial intelligence risks and appropriate regulation.

In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence.  Companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions.  These risks cover a wide swath of harmful outcomes – artificial intelligence can be abused to spread disinformation rapidly; algorithms can embed racial discrimination; an artificial intelligence platform can easily (and with little effort) invade privacy; and deployments may lead to layoffs, primarily among white-collar workers.  In combination, these are significant risks.

Like any risk area, companies need to develop appropriate compliance policies and procedures tailored to their specific risk profiles.  Corporate boards have to lead this effort and oversee and monitor the company’s artificial intelligence compliance program.

Corporate boards are familiar with the legal framework – the Caremark decision requires that a corporate board ensure that a compliance program is operating, that the board is informed about the compliance program and its effectiveness in mitigating risks, and that the company has implemented a training program.  These requirements apply with equal force to artificial intelligence compliance.

Over the last ten years, shareholder derivative suits based on Caremark violations have become a more significant risk.  Several Caremark claims have survived motions to dismiss, particularly in areas where compliance failures have had an impact on innocent consumers (e.g., food safety, air travel, pharmaceuticals and medical devices).

In the case involving Boeing and the two horrific crashes of the 737 MAX, the Chancery Court applied the well-established Caremark factors.  First, the Court cited the Board’s failure to implement or prioritize safety oversight at the “highest level of the corporate pyramid.”  None of Boeing’s Board committees was specifically assigned responsibility for overseeing airplane safety.

Second, the Chancery Court noted that the Board at large was not formally monitoring or discussing safety on a regular basis. In particular, the Court cited Board discussions of 737 MAX issues that were “passive invocations of quality and safety . . . [that] fall short of the rigorous oversight [Caremark] contemplates.”

While Boeing’s Audit Committee was charged with oversight of risk as a general matter, the Audit Committee never examined or even considered airplane safety.  For example, when the Board discussed audit plans in 2014 and 2017, it did not mention or address airplane safety.  Instead, the Audit Committee maintained a singular focus on financial risks and profits.  Even after the Lion Air 737 MAX crash, Boeing’s CCO update to the Audit Committee failed to mention “product safety” as a “compliance risk.”

Third, management’s periodic reports to the Board did not include safety information.  CEO Muilenburg sent the Board a monthly summary and competitor dashboard, and occasionally made presentations at Board meetings.  These communications focused on the business impact of airplane safety crises and risks, not on overall product safety issues.

Further, the Court noted that Boeing’s Board did not have a mechanism for receiving internal complaints about airplane safety.  Boeing’s internal reporting system only reached managers below the senior management and board level.  The Board never learned about any employee or whistleblower safety complaints.

Given the Boeing case and the Chancery Court’s recent invocation of Caremark to hold corporate boards accountable, companies that are embracing artificial intelligence have to ensure that they design and implement an appropriate governance framework to meet these basic requirements.  Artificial intelligence presents significant risks that have to be identified and mitigated.

A basic list of compliance oversight tasks includes:

  • Listing artificial intelligence risks as a standing agenda item for every quarterly meeting – a standing committee can be assigned the task, or the full board can address the issue each quarter;
  • Adding a board member with technical expertise covering cybersecurity, data governance and artificial intelligence;
  • Briefing board members on existing and planned artificial intelligence deployments that support the company’s business and/or support functions;
  • Designating a senior management executive (or executives) responsible for artificial intelligence compliance;
  • Ensuring that an effective compliance framework is in place, including avenues for reporting potential violations of corporate policies and applicable regulations.