
The EU AI Act, Simplified: What are Your Obligations as a Company Using High-Risk AI?

It is often said that high risk yields high rewards, and the same holds true for AI. High-risk artificial intelligence, as defined by the EU AI Act, typically generates significantly larger benefits for the firms that use it; its use-case applications allow for automation across countless sectors, often bringing a strong, verifiable impact for the deploying enterprises.

While the Act looks to establish harmonization with other current requirements, whether Solvency II, Basel III, or other industry standards (we cover harmonization in more depth here), it is vital to understand the framework of the EU AI Act and how it builds upon existing regulatory risk standards.

Thus, the EU AI Act introduces several obligations for systems regarded as high-risk. In this mini blog post, we highlight the current classifications for high-risk systems as set out in Article 6, Annex I, and Annex III, the prime requirements outlined for high-risk AI in Sections 2 and 3 of the Act, and how tools like Calvin Risk help eliminate several of the technical and administrative headaches associated with them.

High-risk AI Classifications: An Overview

The EU AI Act takes a risk-based approach, assigning varying levels of risk based on the impact, severity, and ethical implications that AI systems hold for the public. High-risk AI risk management and regulation lies at the core of the Act, identifying and applying best practices for the safe, trustworthy implementation of such systems across the EU.

In particular, an AI system is identified as high-risk where both of the following conditions are fulfilled:

- The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed in Annex I; and

- the product whose safety component pursuant to the point above is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonization legislation listed in Annex I.

So what’s in Annex I?

The Annex lists the existing Union harmonization legislation that the new framework bridges with, covering areas from medical devices to railway system interoperability. Annex I thereby serves as the base identifier of high-risk AI.

Correspondingly, Article 6 also refers to Annex III in a similar manner; however, these systems can be considered not high-risk provided they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not “materially influencing the outcome of decision making”. This covers systems that perform a narrow procedural task, improve the result of a previously conducted human activity, or detect decision-making patterns or deviations from prior decision-making, and that are not meant to replace or influence the previously completed human assessment without proper human review. However, Annex III systems will always be considered high-risk when profiling natural persons (Article 6(3)). Annex III’s appended list can be found here.
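
To make the classification logic above concrete, here is a minimal sketch (in Python) of how one might triage a system against Article 6. The field names, the simplified exemption flag, and the function itself are illustrative assumptions on our part; an actual classification requires a full legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Hypothetical fields summarizing the conditions discussed above
    safety_component_under_annex_i: bool      # Article 6(1)(a), simplified
    requires_third_party_conformity: bool     # Article 6(1)(b), simplified
    listed_in_annex_iii: bool
    profiles_natural_persons: bool
    qualifies_for_art_6_3_exemption: bool     # e.g. only a narrow procedural task

def is_high_risk(p: AISystemProfile) -> bool:
    # Annex I route: safety component / product plus third-party conformity assessment
    if p.safety_component_under_annex_i and p.requires_third_party_conformity:
        return True
    # Annex III route: high-risk unless an exemption applies, but profiling of
    # natural persons always keeps the system high-risk (Article 6(3))
    if p.listed_in_annex_iii:
        return p.profiles_natural_persons or not p.qualifies_for_art_6_3_exemption
    return False
```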

Obligations for high-risk AI

With the EU AI Act having entered into force in August 2024, the coming years will be crucial for ensuring enterprise AI properly meets the numerous obligations outlined. We will take a particular look at the requirements for deployers of AI: firms actively using AI within their own networks and operations, provided with the high-risk system’s instructions for use.

Section 2 of the Act outlines the multilateral approach to managing the risks of AI systems. Foremost, Article 9 sets out the requirements for a risk management system, declaring that it be established, implemented, documented and maintained in relation to the entire lifecycle of the high-risk AI system. It is to comprise the identification and analysis of the known and reasonably foreseeable risks posed, the estimation and evaluation of the risks that may emerge, the evaluation of other risks possibly arising, and the adoption of appropriate and targeted risk management measures.
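
As an illustration of what “established, implemented, documented and maintained” might look like in practice, here is a hedged sketch of a lifecycle risk register entry mirroring the Article 9 steps. The structure, field names, and example values are our own illustrative choices, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str                  # identification and analysis of the risk
    likelihood: str                   # estimation, e.g. "rare" / "possible" / "likely"
    severity: str                     # evaluation, e.g. "minor" / "major" / "critical"
    mitigation_measures: list[str] = field(default_factory=list)  # targeted measures adopted
    last_reviewed: date = field(default_factory=date.today)

# Hypothetical register entry, maintained and re-reviewed over the system's lifecycle
risk_register = [
    RiskEntry(
        risk_id="R-001",
        description="Model under-performs for under-represented customer segments",
        likelihood="possible",
        severity="major",
        mitigation_measures=["quarterly fairness testing", "human review of edge cases"],
    )
]
```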

Article 10 continues by highlighting the data governance needed to ensure the safety of the system. These points range from the assessment of the availability, quantity, and suitability of the data sets that are needed, to requiring the data sets to hold the characteristics or elements that are particular to the specific geographical, contextual, behavioral or functional setting within which the high-risk AI system is intended to be used.
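
One practical reading of this requirement is a routine check that the data reflects the setting the system is deployed in. The sketch below uses hypothetical category labels, expected shares, and a tolerance of our own choosing; it is an illustration of the idea, not a prescribed test.

```python
from collections import Counter

def coverage_gaps(categories: list[str], expected_shares: dict[str, float],
                  tolerance: float = 0.10) -> dict[str, float]:
    """Return categories whose observed share deviates from the expected
    deployment-setting share by more than `tolerance`."""
    total = len(categories)
    observed = {k: v / total for k, v in Counter(categories).items()}
    gaps = {}
    for category, expected in expected_shares.items():
        diff = abs(observed.get(category, 0.0) - expected)
        if diff > tolerance:
            gaps[category] = diff
    return gaps

# Hypothetical example: a system intended for use across three regions
print(coverage_gaps(["north"] * 80 + ["south"] * 15 + ["east"] * 5,
                    {"north": 0.4, "south": 0.4, "east": 0.2}))
```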

Article 11 and Article 12 focus on technical documentation and record-keeping, respectively. In particular, the technical documentation of a high-risk AI system is to be drawn up before the system is placed on the market or put into service and kept up-to-date, while record-keeping requires that the system technically allow for the automatic recording of events (logs) over its lifetime.
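
The sketch below illustrates what automatic, timestamped event logging of this kind could look like. The log schema, field names, and the example call are assumptions for illustration; actual logging requirements depend on the system and its instructions for use.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_events")

def log_inference_event(model_version: str, input_ref: str, output_ref: str) -> None:
    """Emit one machine-readable record per inference so it can be retained and audited."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,    # reference to the input, not the raw data itself
        "output_ref": output_ref,
    }
    logger.info(json.dumps(record))

# Hypothetical call for a single prediction
log_inference_event("v1.4.2", "case-2024-1187", "decision-approved")
```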

Article 14 stresses the importance of human oversight, declaring that high-risk systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use.

Article 15 marks the last generalized set of high-risk rules, setting requirements for sufficient accuracy, robustness, and cybersecurity, with consistent performance in those respects throughout the system’s lifecycle.
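
A minimal sketch of what monitoring consistent performance might look like is shown below; the metric and the tolerated drop are assumed values, not thresholds set by the Act.

```python
def performance_consistent(baseline_accuracy: float, current_accuracy: float,
                           max_drop: float = 0.02) -> bool:
    """Flag whether accuracy remains within a tolerated drop from its documented baseline."""
    return (baseline_accuracy - current_accuracy) <= max_drop

print(performance_consistent(0.91, 0.90))  # True: within tolerance
print(performance_consistent(0.91, 0.85))  # False: degradation worth escalating
```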

Article 26 identifies the specific obligations for deployers of AI: firms employing AI systems in their own operations across the firm.

Twelve subsections are identified for properly fulfilling the expectations to ensure the safe, continuous use of AI systems. Summarized, these requirements involve the following (a simple way of tracking them is sketched after the list):

1. Appropriate technical and organizational measures to ensure that systems are deployed in accordance with their instructions for use

2. Proper human oversight assigned to natural persons with the necessary competence, training, authority, and support

3. The two former points being carried out without prejudice to any other deployer obligations (whether Union or national law), with deployers free to organize their own resources to fulfill these requirements

4. Additionally, without prejudice to points 1 & 2, deployers shall ensure that data is relevant and sufficiently representative of the intended purpose of the high-risk AI system—to the extent the deployer holds control over input data

5. Monitoring operations of the high-risk AI with respect to the instructions for use, notifying AI providers and relevant surveillance authorities if the system is deemed to present a risk or has incurred an incident. For financial institutions subject to internal governance, arrangements, or processes under Union financial services law, the monitoring obligation is deemed fulfilled by the corresponding rules under relevant financial service law

6. Provision of automatically generated logs, under deployer control and held for at least 6 months

7. Informing of workers’ representatives and affected workers that they will be subject to the use of a high-risk AI system

8. Complying with registration obligations (Article 49)

9. Carrying out of a data protection impact assessment (Article 35 of Regulation (EU) 2016/679)

10. Conditions on the lawful use of post-remote biometric identification systems (Article 26(10))

11. For high-risk AI identified in Annex III, natural persons shall be transparently informed when decisions concerning them are made or assisted by the AI system

12. Cooperation with relevant authorities in order to implement the regulation
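
As referenced above, one simple way to keep track of these obligations is a checklist that records status and evidence per item. The structure and example entries below are our own illustration, not a format defined by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class ObligationStatus:
    item: int                 # number in the list above
    summary: str
    fulfilled: bool = False
    evidence: list[str] = field(default_factory=list)   # e.g. links to documentation

deployer_checklist = [
    ObligationStatus(1, "Technical and organizational measures follow the instructions for use"),
    ObligationStatus(2, "Human oversight assigned to competent, trained natural persons"),
    ObligationStatus(5, "Operation monitored; provider and authorities notified of risks or incidents"),
    ObligationStatus(6, "Automatically generated logs retained for at least 6 months"),
    ObligationStatus(8, "Registration obligations under Article 49 fulfilled"),
    # ...the remaining items from the list above would be tracked the same way
]

outstanding = [o.item for o in deployer_checklist if not o.fulfilled]
print(f"Items still open: {outstanding}")
```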

Lastly, Article 27 relays the concept of the fundamental rights impact assessment: a set of requirements to be fulfilled prior to deploying the AI system, evaluating the impact its use may have on fundamental rights. This involves descriptions of the intended purpose of the system, the categories of affected persons, the human oversight measures taken, and so forth.
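
To illustrate, the template below sketches the kind of fields such an assessment record might capture, following the elements mentioned above. The field names and example values are hypothetical.

```python
# Hypothetical fundamental rights impact assessment record (illustrative only)
fria_record = {
    "system_name": "credit-scoring-assistant",   # hypothetical system
    "intended_purpose": "Support analysts in assessing consumer credit applications",
    "categories_of_affected_persons": ["loan applicants"],
    "human_oversight_measures": ["analyst review of every adverse decision"],
    "identified_risks_to_fundamental_rights": ["potential indirect discrimination"],
    "mitigation_and_governance_measures": ["bias testing before each release"],
}
```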

Proper Tooling: The Calvin Software

The culmination of these requirements, spanning governance workflows to testing, assessments, and logging, is often regarded as a significant impending headache for risk, compliance, and validation teams; while harmonization will exist, the novel risks that must be addressed within AI systems mean that significant resources and capital are expected to be required.

At Calvin, we offer a modularized solution for firms looking to ensure their compliance in one single tool: from our inventory module and Evidence Management Tool (QLAIMS) to our assessment and validation suite for both traditional and LLM models, we cover the pain points that many face when strategizing on the Act’s implementation.

Interested in learning more? Check out our EU AI Act Checklist Blog, or book a demo with us!

Authors

Shelby Carter

Business Development Intern
