Reviewing the 5 Major AI Risks (Part II of II)

Here are the five primary risk areas when a company uses AI in a supportive or assistance-based role, as opposed to an algorithmic use case.
1. Data Protection and Cybersecurity: Internal AI use cases may involve chatbots, drafting of documents, emails, and other deliverables (e.g. slide decks). To the extent employees incorporate sensitive data (e.g. confidential, personal or regulated data) into AI vendor tools, such sensitive data may be retained, transferred, or disclosed without appropriate controls.
For example, if attorney-client privileged material is uploaded into a third-party AI tool, the processing of that information may expose privileged material to individuals outside the privilege, thereby threatening its preservation.
In addition, if data entered into an AI tool is subject to cross-border transfer, the company may violate applicable data regulatory requirements for pre-approval of, or waivers for, such transfers.
In these situations, compliance has to ensure that any use of an AI tool is identified and, where needed, issue guidance on the types of data that can be entered into the tool. This may require specific vendor compliance provisions to prevent unintended transmission, use or storage of data by a third-party vendor.
2. Third-Party Vendor Risks: Companies have to understand precisely how employees access and use AI tools since many tools may be procured informally. If a company purchases third-party provided AI services, or if a company learns that a third-party relies on AI functions in providing services to a company, compliance needs to identify the risks and mitigate them through procedures, contractual provisions and other techniques.

The range of risks here can be broad. An employee can use a widely available AI tool without approval from the company's IT department and/or compliance team.
With respect to third-party vendors, data use, hosting and processing procedures may not be disclosed to the company, thereby raising the risk of an unwitting violation of data privacy policies or procedures. It is imperative to conduct due diligence on AI providers and on third parties that rely on AI when providing services to a company. These risks may be significant in regulated industries such as healthcare, financial services, and other sectors subject to government monitoring and scrutiny.
3. Misinformation and Intellectual Property: Lawyers are the perfect example of misinformation risks. Any lawyer who relies on AI for legal research is playing Russian Roulette. Lawyers have to recognize that AI-generated legal materials can have extremely high error rates, such as citing cases that have nothing to do with a specific legal issue, and have to be mindful of these major risks.
The misinformation risks, however, extend well beyond legal research (just a pet peeve of mine), since companies rely on Internet-based content all of the time. Companies have to avoid relying on inaccurate or erroneous information that can create liability, such as defamation, marketplace misconduct claims, and potential intellectual property infringement. When it comes to inaccurate information, AI is the poster child for generating and disseminating harmful and libelous material. While we live in an information age where accuracy is not guaranteed, companies have to assess these disinformation risks in their specific market categories and competitive arenas. To mitigate these risks, content review controls are a must, along with appropriate disclaimers and multi-level validation steps.

As to IP and copyright issues, AI-generated content may infringe a third party's protected IP and copyright. Employees cannot assume that a company owns all of its AI-generated information and deliverables; such assumptions can result in litigation and contract disputes. Employees need to understand the risks generated in their particular use cases and must be provided with appropriate mitigation tools.
4. Workplace Risks: HR professionals are quickly integrating AI tools to assist in numerous employment functions, raising risks such as monitoring or surveillance of employees, AI-generated evaluations that may create disparate impacts, and undisclosed reliance on AI tools in HR tasks. Even where AI is not related to key hiring determinations, HR professionals will be tempted to use AI to facilitate written documentation, reports and evaluations. Such use cases have to be disclosed, reviewed and mitigated, lest they produce actionable HR and compliance materials.
5. Last but not least, AI Regulations: Federal, state and local regulations on AI are growing rapidly. While arguments have been made at the federal level urging a regulatory light touch, state and local governments, as well as foreign governments, are adopting comprehensive AI regulatory frameworks. This is a fast-changing area of development, and companies have to monitor these developments to update their compliance programs on an ongoing basis.
