AI “Hysteria” and Legal and Compliance Risks (Part I of II)

Well, let’s all admit it: we have all enjoyed using the new, hot technology, AI.  It is an understatement to say that AI is transforming our world, both business and personal.  The development compares to the 1990s and the impact of the Internet.  Many of us are early adopters of AI, finding it a welcome alternative or supplement to Internet research tools.  Businesses are quickly embracing AI as a valuable new tool that improves efficiency, enables new business opportunities, and drives innovation.

Like any other business technology, AI presents significant risks.  I am sure that everyone has already experienced a few reminders that AI is not perfect.  On one occasion, I used AI to conduct research on an issue.  When I double-checked the result, it turned out that the AI answer was wrong.  That was a surprise and an important reminder: AI-generated content has its limits on accuracy, and blind reliance on AI as a solution carries real risks and potential downfalls.

Given the real and significant potential benefits of AI, companies have to be careful in the rush to implement AI technology.  Starting with a clear use case, companies have to weigh the potential benefits of AI technology against the legal and compliance risks.  An AI compliance program is a critical element of a corporate governance structure.  Building on this program, companies need to establish a governance structure adequate to their specific use cases.

The legal and compliance risks surrounding AI are a rapidly evolving area that stretches across numerous jurisdictions.  The FTC has issued enforcement guidance, many states (e.g., Colorado and California) are adopting AI laws, and the European Union has enacted its AI Act.  Add to this equation the rapid adoption of sector-specific rules in areas including financial services, healthcare, and defense and export controls.

The FTC’s focus is on deceptive AI claims and advertising.  Equal employment and civil rights regulators are monitoring the use of AI in hiring and promotions.  State AI laws have a similar focus and extend into privacy concerns.  The EU’s AI Act mandates risk management, monitoring, testing, and regulation of high-risk AI activities.

As this description suggests, the global patchwork of AI laws and regulations is a significant challenge.  Legal and compliance practitioners are hard-pressed to keep current with these rapidly evolving laws and regulations.

Aside from these government regulatory and enforcement concerns, companies have to establish their own risk frameworks depending on their industry, business, and specific use of AI technology.  Two significant risks have emerged: (1) AI-generated false content and images that can create liabilities and reputational harm; and (2) inadequately supervised and verified AI decision-making functionality that can make unjustified determinations (e.g., loan application decisions).

Data Protection: As an innovative information tool, AI requires companies to remain sensitive to data protection and privacy risks arising from unlawful data processing, data subjects’ rights, cross-border data transfers (e.g., AI outputs involving sensitive data), and other data management requirements.

Intellectual Property: AI can easily incorporate copyrighted materials and potentially trade secrets.  Employee training on these issues can help address the risks.

Discrimination and Decision-Making: AI use in decision-making raises serious challenges, especially if such decisions are challenged for reliance on impermissible factors (e.g., race, gender, or proxies for them).  An AI function that creates disparate results should be monitored and audited as needed.

Marketing Claims: Companies should avoid inaccurate marketing claims about AI capabilities, undisclosed use of AI functions, and any attempts to manipulate consumer choices.

Third-Party Risks: Companies have to identify potential third-party use of AI that may raise compliance issues.  Contracts should include AI compliance warranties and certifications.

Consequential Liability: Companies that rely on AI to deliver solutions or recommendations (e.g. financial advice) can suffer tort liability.  Also, AI functions included in other products (e.g. medical devices) may cause injury or damage.
