Embracing Generative AI — The Current Risk Profile (Part II of II)
When evaluating AI risks, legal, ethics and compliance professionals need to divide the question in two: first, what are the risks from the legal, ethics and compliance function's own internal use of AI? Second, what are the business risks from deploying AI capabilities in specific functions and use cases?
Ethics and compliance professionals are sometimes surprised to learn that certain functions, such as finance, security or HR, already employ AI for specific tasks. A finance department, for example, may use AI to search financial transactions for anomalies or to perform other basic financial functions.
Here we focus on non-tech companies that are reviewing and considering AI functionality for business uses. Given the prevailing risks, many are evaluating deployment slowly, and some of the delay is understandable in light of potential privacy, copyright, security and other threats. Some companies have banned employee use of generative AI altogether pending a more comprehensive analysis and weighing of the risks.
Generative AI
Generative AI is the hot new concept, and business development departments are being stretched into new and imaginative use cases. When properly harnessed, generative AI can create content that is indistinguishable from human-generated content. That is also the troubling part: generative AI output can be disseminated without human review or accuracy checks, opening the door to a variety of pitfalls.
Generative AI, like any language technology, can be misused in a variety of contexts. Misuse includes the illegal exploitation of generative AI for fraud and misinformation. Bad actors have employed AI to fake identities, voices, messages and emails. These deepfakes are used to spread social disinformation and to execute targeted fraud and identity theft.
Another risk arises from the failure to review generated content for accuracy. It is far too easy to rely on what looks like a logical explanation or description; these outputs can nonetheless include serious errors and inaccurate statements. Further, inaccurate company-sourced information, once disseminated, can create significant legal and compliance risks. Companies have to establish verification procedures and prevent the dissemination of company-sponsored information that is inaccurate or creates legal peril. When generative AI output contains false or inaccurate information, victims may challenge the company's use of or reliance on that information.
Generative AI also creates risks of misrepresentation: a third party may rely on generated content despite questions about its accuracy, credibility and authenticity. When content is created by someone else, a party cannot simply repeat it without subjecting it to a review for accuracy, sourcing and verification. Fake videos and other fabricated content can be negligently repeated or relied on; just because something appeared on the internet does not make it true.
Generative AI content can also be unwittingly consumed and shared by users who have no idea that it is fake, false or inaccurate. Deepfakes are just that: fake.
Each of these scenarios creates risks and challenges, and mitigating them falls to legal and compliance professionals. As with any issue, companies have to detect, identify and prevent the spread of inaccurate and misleading content. The easiest case is protecting against intellectual property infringement claims; beyond that, companies face serious reputational risks from spreading inaccurate or misleading information.
Risk Mitigation Strategies
To mitigate these risks, companies have to establish ethical principles and guidelines for generative AI use. Corporate actors who rely on generative AI have to ensure that its use does not cause harm. Consistent with a company's broader ethical principles, its generative AI principles have to include transparency, fairness, accuracy and accountability, and they have to be reflected in the company's AI compliance policy framework.
Generative AI output has to be marked for tracking and identification. Most major AI companies now include watermarks in their generative AI content. This practice is important for monitoring content sources, tracking the persons responsible and verifying that accuracy protocols were followed.
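Watermarking techniques vary by vendor, but even a simple internal provenance tag serves the tracking and accountability goal described above. The following sketch, in Python with hypothetical field names, is one minimal way a company might record the source, requester and review status of each piece of generated content:

```python
import hashlib
import json
from datetime import datetime, timezone

def tag_generated_content(text: str, model: str, requester: str) -> dict:
    """Attach provenance metadata to AI-generated content.

    Returns a record suitable for an audit log, so content can later be
    traced to its source model, the accountable employee and its review status.
    """
    return {
        "content": text,
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "generated_by_model": model,     # hypothetical: vendor/model identifier
        "requested_by": requester,       # employee accountable for the output
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "reviewed": False,               # flipped to True only after human review
    }

record = tag_generated_content("Draft Q3 summary ...", "vendor-llm-x", "jdoe")
print(json.dumps(record, indent=2))
```

Storing a content hash rather than relying on the text alone lets auditors later confirm that released material matches what was actually reviewed.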
To address potential privacy concerns, companies should include a privacy element in any review and due diligence process. As companies tailor generative AI to their respective needs, they are building due diligence processes that cover accuracy, verification, privacy and other risks, including reputational risks. As in other compliance areas, these efforts have to be documented, readily accessible and auditable.
AI compliance policies and procedures have to define the proper use of AI and generative AI by identifying which uses are authorized and which are prohibited. Beyond this general guidance on permissible uses, policies have to address risks surrounding potential bias, the validity and accuracy of output, and other regulatory exposures.
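As one hedged illustration, an authorized/prohibited use list can be encoded so that internal tools can enforce the policy automatically. The use-case names below are hypothetical, and this is only one way such a policy might be expressed:

```python
# Hypothetical policy table mapping internal use cases to a policy decision.
AI_USE_POLICY = {
    "marketing_draft_copy": "authorized",      # still requires human review before release
    "internal_code_assist": "authorized",
    "customer_legal_advice": "prohibited",     # unreviewed legal output is high risk
    "hr_screening_decisions": "prohibited",    # bias and regulatory exposure
}

def is_use_authorized(use_case: str) -> bool:
    # Default-deny: any use not expressly authorized is treated as prohibited.
    return AI_USE_POLICY.get(use_case) == "authorized"

assert is_use_authorized("internal_code_assist")
assert not is_use_authorized("unknown_new_use")
```

The default-deny rule mirrors the policy point: novel uses should require affirmative approval rather than slipping through as unaddressed.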
Training programs are essential to promoting the AI compliance policies and procedures that mitigate generative AI risks. These programs should provide general AI information and be tailored to specific user groups.
Companies need to establish a procedure to check and verify the accuracy of generated content. The process needs to detect inaccuracies, misuse and high-risk AI content. Whatever review process is used, there should be a checklist of content principles that flags potential risks and inappropriate content. Given the limitations of existing technology, human review is needed to verify and audit content.
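A checklist-driven review gate can force a human determination on each content principle before anything is released. The sketch below, with hypothetical checklist items, blocks release if any item fails or was skipped; it is an illustration of the approach, not a definitive implementation:

```python
from dataclasses import dataclass, field

# Hypothetical checklist items reflecting the content principles above.
CHECKLIST = [
    "Facts and figures verified against authoritative sources",
    "No personal data or confidential company information disclosed",
    "No third-party content reproduced without rights clearance",
    "Tone and claims consistent with company ethical principles",
]

@dataclass
class ReviewRecord:
    content_id: str
    reviewer: str
    results: dict = field(default_factory=dict)  # checklist item -> True/False

    def cleared_for_release(self) -> bool:
        # Releasable only if every item was reviewed AND passed; an
        # unrecorded item counts as a failure, not a silent gap.
        return all(self.results.get(item) is True for item in CHECKLIST)

review = ReviewRecord("doc-001", "jdoe")
for item in CHECKLIST:
    review.results[item] = True  # reviewer records each determination
print("Cleared for release:", review.cleared_for_release())
```

Keeping the record by content ID also produces the documented, auditable trail that the preceding paragraphs call for.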
Finally, companies need to plan for situations in which inaccurate or misleading content is released, with defined steps to contain and remediate the problem. Damage mitigation protocols need to be designed and implemented. Some of these procedures are akin to incident response plans for data privacy breaches.