
On the Horizon: EU Puts Final Touches on Risk-Based Artificial Intelligence Regulatory Overhaul

In a significant development, the European Union (EU) is set to implement the most comprehensive suite of regulations seen to date governing the use of artificial intelligence (“AI”) technology. The proposed regulatory overhaul aims to address a wide range of concerns surrounding AI, including ethical considerations, accountability, and potential risks to individuals and society writ large.

The forthcoming “AI Act,” the final text of which was approved on February 2, 2024, establishes a new regulatory framework under which AI systems will be categorized according to the level of risk each system poses. Notably, the AI Act applies broadly to all uses of AI across all sectors, with narrow carveouts for military applications and for scientific research and development, as well as an exemption for individuals using AI in non-professional contexts.

The AI Act represents years of negotiation between lawmakers and is expected to be passed into law by the EU Parliament and EU Council of Ministers at some point during Q2 2024. As written, the Act will take effect 24 months plus 20 days after its publication in the Official Journal of the European Union.

However, prohibitions on AI systems deemed to pose an “unacceptable risk” will take effect just 6 months plus 20 days post-publication, while reporting requirements for general purpose AI models (defined below) take effect 12 months plus 20 days post-publication. Provisions concerning the classification of certain high-risk systems and the obligations imposed on their providers take effect 36 months plus 20 days post-publication, at which point the AI Act will be fully in force.
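For planning purposes, these staggered deadlines reduce to simple date arithmetic. The sketch below computes each milestone from a hypothetical publication date (an assumption for illustration; the actual date will depend on when the Act appears in the Official Journal):

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Hypothetical publication date in the Official Journal (assumption only;
# the real date was not yet known at the time of writing).
publication = date(2024, 7, 1)

# Each compliance milestone falls N months plus 20 days after publication.
milestones = {
    "Prohibitions on Unacceptable Risk systems": 6,
    "General purpose AI reporting requirements": 12,
    "General application of the AI Act": 24,
    "High Risk classification and provider obligations": 36,
}

for label, months in milestones.items():
    deadline = publication + relativedelta(months=months, days=20)
    print(f"{label}: {deadline.isoformat()}")
```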

Definitions & Scope

The AI Act defines an “artificial intelligence system” as one that is “machine based [and] designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Under the AI Act, AI systems will be designated as Unacceptable Risk, High Risk, Specific Transparency Risk, and Low or Minimal Risk, with additional categories likely to be created in time.

  • Unacceptable Risk: Systems falling within this designation will be prohibited from use or sale within the EU. Unacceptable Risk systems include applications that seek to subliminally manipulate or deceive individuals into modifying their behavior, as well as systems that exploit users’ vulnerabilities in order to effect behavioral changes. Also unacceptable are ‘social credit’-style systems intended to score individuals, systems that compile facial recognition databases through untargeted collection of biometric or facial imagery, predictive policing applications that target individuals, and emotional recognition systems in schools or the workplace. The law provides some exceptions to the above in limited, mainly law enforcement, contexts.
  • High Risk: These systems carry the potential for adverse impacts on individual safety and/or fundamental rights and are subject to the highest level of scrutiny short of prohibition. The eight use cases for High Risk systems are: (i) any permissible biometrics application not deemed unacceptable; (ii) systems within critical infrastructure; (iii) educational and vocational training systems; (iv) essential public (social insurance/pension) and private (banking, insurance) services; (v) employment and self-employment systems; (vi) any other permissible law enforcement use not deemed unacceptable; (vii) migration and asylum case management; and (viii) use in judicial or voting processes.
  • Specific Transparency Risk: AI systems under this designation are those that risk manipulating natural persons, particularly those that interact directly with individuals. This includes emotional recognition systems, generative AI models like ChatGPT, so-called ‘deep fake’ image or video generators, and chatbots commonly used in customer service applications. Under the AI Act, the developers of Specific Transparency Risk systems must inform persons interacting with the system that they are engaging with AI, conveyed clearly and distinguishably upon the person’s first exposure to the content in question (a minimal illustration of such a disclosure appears after this list). The AI Office (discussed in greater detail below) will promulgate rules governing the labeling of AI-generated, potentially manipulative content.
  • Low or Minimal Risk: These systems may be used or sold in the EU without restriction. Under the AI Act, the European Commission’s AI Office must provide a list of high- and low-risk applications of AI systems within 18 months of the Act’s taking effect, which should offer developers seeking to enter EU markets insight into where their products fall in the AI Act’s risk matrix.
  • General Purpose: Though not a risk-based designation per se, the General Purpose designation will be of import to many consumer-facing AI systems. General Purpose systems are those that perform a wide range of distinct functions, regardless of how the system is placed on the market, and that can be integrated into other systems. Think ChatGPT. Owing to their widespread use and proximity to natural persons, General Purpose systems will be subject to additional scrutiny by the AI Office, which will promulgate rules specific to General Purpose AI systems.
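To make the Specific Transparency Risk disclosure obligation concrete, here is a minimal sketch of a customer-service chatbot that identifies itself as AI at the user’s first exposure. The class name, wording, and structure are illustrative assumptions, not language prescribed by the AI Act:

```python
from typing import Callable

class DisclosingChatbot:
    """Hypothetical wrapper that discloses AI status on first interaction."""

    DISCLOSURE = "Please note: you are interacting with an AI system, not a human agent."

    def __init__(self, respond: Callable[[str], str]):
        self._respond = respond   # underlying response generator (assumed)
        self._disclosed = False

    def reply(self, message: str) -> str:
        answer = self._respond(message)
        if not self._disclosed:
            # Convey the disclosure clearly and distinguishably on first exposure.
            self._disclosed = True
            return f"{self.DISCLOSURE}\n\n{answer}"
        return answer
```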

Regulating High Risk AI Systems

High Risk AI systems are subject to seven specific obligations under the AI Act, each of which must be satisfied in order to operate in EU markets. High Risk systems must: (1) be subject to rigorous internal risk management systems; (2) maintain data governance measures governing the use of data in training the AI model; (3) be accompanied by technical documentation outlining compliance with applicable requirements, to be presented to relevant Member State authorities before the system enters EU markets; (4) include robust record-keeping capabilities to record the system’s internal operation; (5) be designed in a manner that offers transparency to the system’s deployers, allowing them to interpret the system’s inputs and outputs and ensure continued compliance; (6) be overseen by human persons sufficiently familiar with AI systems to manage their compliant use; and (7) meet high cybersecurity standards in order to prevent unauthorized third-party access.
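Obligation (4) in particular lends itself to a technical illustration. Below is a minimal sketch of how a deployer might log a High Risk system’s inputs and outputs for later review; the function names, log format, and fields are assumptions for illustration, not requirements drawn from the Act’s text:

```python
import json
import logging
from datetime import datetime, timezone
from typing import Callable

# Hypothetical audit log capturing each inference for later review.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def logged_inference(model: Callable[[dict], dict], input_record: dict) -> dict:
    """Run the model and append a timestamped record of its input and output."""
    output = model(input_record)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_record,
        "output": output,
    }))
    return output
```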

Compliance for High Risk Deployers, Distributors, Importers & Providers

The providers of AI systems designated High Risk are obligated to ensure that their systems comply with the AI Act. However, the AI Act also treats distributors, importers, deployers, and other non-provider third parties as providers for compliance purposes in the event that they place their trademark on the system, substantially modify it, or repurpose a non-High Risk system in a manner that effectively renders it High Risk. In such cases, the third party steps into the shoes of the original provider for purposes of the Act’s obligations.

How will the Act be Enforced?

In order to give effect to the AI Act, the European Commission will establish an AI Office with robust enforcement powers. Among other things, the AI Office will be able to request documentation from providers, conduct its own evaluations of AI models, investigate and request remediation of defects, and even force the withdrawal of non-compliant AI systems from EU markets.

Alongside the AI Office, the Act establishes an AI Board, with each EU Member State nominating one representative. The Board will oversee the AI Act’s implementation and liaise with industry stakeholders and public interest groups, supported by a scientific panel of experts geared towards supporting enforcement of the Act and responsible for notifying the Board of systemic risks as they arise.

The AI Act also mandates that each EU Member State designate a domestic law enforcement or regulatory agency responsible for applying and enforcing the AI Act. Trade groups and stakeholders must remain mindful of this dual-track enforcement model, as enforcement is likely to vary from state to state. The AI Act requires that the designated national agency enable natural or legal persons to lodge complaints concerning non-compliant AI systems.

Fines for non-compliance will be levied by the AI Office and national authorities, and the fine schedule is steep. Violations of the Act’s prohibitions on Unacceptable Risk practices are punishable by the higher of €35 million or 7% of worldwide annual turnover. Violations of most other provisions, including the High Risk obligations outlined above, are punishable by the higher of €15 million or 3% of worldwide annual turnover. Reporting incorrect, misleading, or incomplete information to the AI Office or national authorities is punishable by the greater of €7.5 million or 1% of worldwide annual turnover. Member States also remain free to levy fines at the national level.
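Because each tier is the higher of a flat cap or a percentage of turnover, exposure scales with company size. A short worked example, using a hypothetical turnover figure:

```python
def max_fine(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Return the higher of the flat cap or the given share of worldwide annual turnover."""
    return max(flat_cap_eur, pct * turnover_eur)

# Hypothetical firm with €2 billion in worldwide annual turnover.
turnover = 2_000_000_000

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices: €140,000,000
print(max_fine(turnover, 15_000_000, 0.03))  # other violations:     €60,000,000
print(max_fine(turnover, 7_500_000, 0.01))   # incorrect reporting:  €20,000,000
```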

Closing Thoughts

The AI Act will have far-reaching implications for businesses operating within the EU or offering AI products and services to EU residents. Companies in the AI space must invest in robust compliance measures, including risk assessments, transparency mechanisms, and responsible data governance, in anticipation of these and similar regulatory regimes, which are all but inevitable.

While compliance with the AI Act will require substantial resources and expense, as well as expertise that is in short supply, it is important also to weigh the benefits of adherence to ethical AI principles. Regulators aside, consumers are increasingly concerned about the potential dangers of AI systems, and strict regulatory compliance in the AI space is likely to enhance trust and reputation.

In closing, the AI Act represents a significant milestone in the governance of AI technology. By addressing key concerns related to ethics, accountability, and risk management, the Act aims to promote the responsible development of this technology while safeguarding individual rights and safety. As businesses prepare to comply with this new suite of regulatory requirements, the broader implications of the EU’s approach underscore the growing importance of ethical AI governance on a global scale.
