Introduction to the topic
Under Article 6 of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689*), an AI system is classified as high-risk if it either (1) is, or is a safety component of, a product regulated by EU harmonised legislation, such as medical devices, vehicles, or industrial machinery, or (2) performs a function listed in Annex III, covering areas such as education, law enforcement, employment, and biometric identification. For systems in the Annex III categories, the Act further provides that any use for profiling natural persons, that is, the automated evaluation or prediction of individuals' traits, preferences, or behaviour, always results in a high-risk classification: even when a provider believes the system does not pose serious risks, its profiling function alone triggers the classification. Providers of other Annex III systems that they consider not to be high-risk must document a formal risk assessment justifying that conclusion and still register the system in the EU database; otherwise, they must comply with the full framework of regulatory obligations, including documentation, transparency, testing, and human oversight. This high-risk classification ensures that AI systems which can significantly impact people's lives, safety, or rights are subject to stricter scrutiny and accountability, ultimately promoting trust and lawful innovation across the EU.
Classification of AI systems as high-risk
Under Article 6 of the EU AI Act, an AI system is classified as high‑risk if it meets either of two criteria:
- It is incorporated into or functions as a safety component of a product covered by the EU's harmonised legislation listed in Annex I (e.g. medical devices, machinery, vehicles, aircraft, toys), where that product must undergo a third-party conformity assessment. Here are some examples:
- AI in autonomous vehicles (e.g. self-driving car’s lane-keeping system or emergency braking AI) – covered by vehicle safety regulations.
- AI-powered surgical robots – covered under the Medical Devices Regulation.
- AI in aircraft flight control – governed by EU aviation safety rules.
- AI in industrial machinery – e.g. an AI that monitors and automatically adjusts pressure in a manufacturing plant to avoid accidents.
- It belongs to one of the specific AI use cases listed in Annex III*, which cover uses that can significantly affect people's health, safety, or fundamental rights, especially those involving the profiling of natural persons. Any AI system listed in Annex III is, without exception, classified as high-risk when it is used for the profiling of natural persons: even if a provider believes the system does not pose a significant risk in other respects, the mere fact that it is used for profiling purposes, i.e. the automated processing of personal data to evaluate, analyse, or predict aspects such as behaviour, preferences, or performance, automatically triggers the high-risk classification under the AI Act. Here are some examples:
- AI used in recruitment → e.g. software that screens CVs or scores candidates for a job interview.
- AI used in education → e.g. automated systems that grade student performance in university admissions or final exams.
- AI used in law enforcement → e.g. predictive policing tools that assess risk of future crimes, or facial recognition in public spaces.
- AI used in border control → e.g. systems that screen travellers or assess visa and asylum applications.
In short, any AI that can make or support decisions with major effects on people’s lives or safety is treated as high-risk—whether it’s embedded in a product or used as a standalone service.
According to Regulation (EU) 2024/1689, where an AI system provider considers that an AI system listed in Annex III does not qualify as high-risk, the provider is nonetheless required to fulfil specific procedural obligations for that assessment to be legally valid. First, the provider must document its internal risk evaluation justifying why the AI system does not pose a significant risk to health, safety, or fundamental rights and does not materially influence decisions affecting individuals. This documentation must be completed prior to placing the system on the market or putting it into use. Regardless of the outcome of that assessment, the provider is still required to register the AI system in accordance with Article 49(2). This means the system must be entered into the EU's public database of high-risk AI systems, ensuring transparency and enabling ex post oversight. Furthermore, upon request by national competent authorities, the provider must promptly submit the documentation that supports its decision not to classify the system as high-risk. This includes the risk assessment, information on the system's intended use and technical specifications, and data regarding its potential effects.
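To make the classification logic described above easier to follow, here is a minimal sketch in Python of a simplified reading of Article 6. The attribute names and the boolean structure are our own illustrative assumptions; an actual classification always requires a case-by-case legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Hypothetical attributes used only to illustrate the Article 6 decision rule.
    annex_i_safety_component: bool         # is, or is a safety component of, a product under Annex I legislation
    third_party_assessment_required: bool  # that product must undergo third-party conformity assessment
    annex_iii_use_case: bool               # falls under an Annex III area (employment, education, ...)
    performs_profiling: bool               # used for profiling of natural persons
    documented_no_significant_risk: bool   # provider has documented that no significant risk exists

def is_high_risk(system: AISystemProfile) -> bool:
    """Simplified reading of Article 6(1)-(3); not a substitute for legal advice."""
    # Route 1: product / safety-component route (Article 6(1), Annex I).
    if system.annex_i_safety_component and system.third_party_assessment_required:
        return True
    # Route 2: Annex III use cases (Article 6(2)).
    if system.annex_iii_use_case:
        # Profiling of natural persons always keeps the system high-risk (Article 6(3)).
        if system.performs_profiling:
            return True
        # Otherwise a documented assessment may take the system out of the high-risk class,
        # but registration under Article 49(2) is still required.
        return not system.documented_no_significant_risk
    return False
```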
Requirements for high-risk AI systems
Providers of high-risk AI systems must comply with a robust regulatory framework designed to ensure that such systems are developed and deployed in a safe, transparent, reliable, and accountable manner—safeguarding both individuals and society as a whole.
1. Risk Management System (Article 9)
Providers of high‑risk AI systems must establish a continuous risk management process throughout the system’s entire lifecycle—from design to market withdrawal. This includes identifying and assessing known and reasonably foreseeable risks to health, safety, or fundamental rights, including potential misuse. Risks must be regularly reviewed, especially following any modifications to design, algorithms, or data. The system must include mitigation plans and incident response mechanisms.
2. Data Governance (Article 10)
High-risk AI systems that rely on model training must use training, validation, and testing datasets that meet strict quality requirements. These datasets must be governed by appropriate data management practices aligned with the system’s intended purpose, including how data is collected, processed, and assessed for suitability. Particular attention must be paid to bias detection and mitigation, representativeness, and contextual relevance, especially when the system interacts with individuals or specific populations. Datasets must be accurate, complete, and statistically appropriate, with consideration for the geographical or functional setting in which the system will operate. In exceptional cases, special categories of personal data (such as health or ethnicity) may be processed to detect and correct bias—only if no other data can serve that purpose and if strong safeguards are applied, including pseudonymisation, restricted access, and timely deletion.
For high-risk AI systems that do not involve training models, only the testing datasets are subject to these quality and governance requirements.
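Purely as an illustration of the safeguards mentioned above, the sketch below shows one possible way to pseudonymise identifiers and to produce a simple representativeness report over a dataset. The column names, the salted-hash approach, and the pandas-based workflow are assumptions made for the example, not requirements of Article 10.

```python
import hashlib
import pandas as pd

def pseudonymise(df: pd.DataFrame, id_column: str, salt: str) -> pd.DataFrame:
    """Replace direct identifiers with salted hashes (one possible pseudonymisation safeguard)."""
    out = df.copy()
    out[id_column] = out[id_column].astype(str).apply(
        lambda value: hashlib.sha256((salt + value).encode()).hexdigest()
    )
    return out

def representation_report(df: pd.DataFrame, group_column: str) -> pd.Series:
    """Share of each group in the dataset, to help spot under-represented populations."""
    return df[group_column].value_counts(normalize=True)

# Example usage with hypothetical columns:
# data = pd.read_csv("training_data.csv")
# data = pseudonymise(data, id_column="applicant_id", salt="rotate-this-salt")
# print(representation_report(data, group_column="region"))
```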
3. Technical Documentation (Article 11)
The technical documentation for a high-risk AI system must be prepared before the system is placed on the market or put into service and must be kept up to date. It must clearly demonstrate compliance with the requirements of the AI Act and provide all necessary information to competent authorities and notified bodies. The content must include at least the elements listed in Annex IV. Small and microenterprises, including start-ups, may use a simplified format for this documentation, which the Commission will prepare. Notified bodies are required to accept this simplified form during conformity assessments.
4. Record‑Keeping and Logs (Article 12)
High-risk AI systems must have automatic logging functionalities that record operational activities and decisions. Logs should be timestamped and include details on system states at decision points, references to the input data used, any errors, and any human oversight actions. This traceability is essential for oversight, accountability, audits, and complaints handling.
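Article 12 does not prescribe any particular technical design, but a minimal structured-logging sketch can illustrate the kind of timestamped, decision-level records it envisages. The field names below are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("high_risk_ai.audit")
logging.basicConfig(level=logging.INFO)

def log_decision(input_ref: str, system_state: str, output: str,
                 error: str | None = None, human_override: bool = False) -> None:
    """Write one timestamped audit record for a single automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_reference": input_ref,      # reference to the input data used
        "system_state": system_state,      # model version and configuration at decision time
        "output": output,                  # the decision or score produced
        "error": error,                    # any error encountered
        "human_override": human_override,  # whether a human oversight action intervened
    }
    logger.info(json.dumps(record))

# Example: log a single CV-screening decision (hypothetical values).
# log_decision(input_ref="application-42", system_state="model-v1.3", output="shortlisted")
```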
5. Transparency and Information to Deployers (Article 13)
Providers must furnish clear, written instructions for users and deployers, explaining the system's functions, capabilities, limitations, accuracy, known risks, and safe usage procedures. Users must also be informed if the system processes personal data or performs automated decision-making.
6. Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective human oversight throughout their use. This oversight is intended to reduce risks to health, safety, or fundamental rights, including in cases of foreseeable misuse. Oversight measures should match the system’s risk level, autonomy, and context, and may be built into the system or implemented by the deployer. The system must enable assigned persons to understand its capabilities and limitations, monitor its performance, detect anomalies, avoid over-reliance on outputs, interpret results correctly, override decisions when necessary, and safely stop the system if needed. For systems involving biometric identification (Annex III, point 1(a)), any decision based on identification must be verified by at least two qualified individuals, unless exempted under law for specific areas like law enforcement or border control.
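For the biometric-identification rule mentioned above, the principle of separate verification by two qualified persons can be sketched as a simple gate. The class and reviewer identifiers below are hypothetical and only illustrate the idea.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricMatch:
    subject_id: str
    confirmations: set[str] = field(default_factory=set)  # identifiers of qualified reviewers

    def confirm(self, reviewer_id: str) -> None:
        """Record a confirmation by one qualified reviewer."""
        self.confirmations.add(reviewer_id)

    def may_act(self) -> bool:
        """Allow action only once at least two distinct reviewers have confirmed the match."""
        return len(self.confirmations) >= 2

# match = BiometricMatch(subject_id="case-001")
# match.confirm("officer_a"); match.confirm("officer_b")
# assert match.may_act()
```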
7. Robustness, Accuracy, and Cybersecurity (Article 15)
Systems must be technically robust and resistant to errors, manipulation, and cyber-attacks. They should deliver reliable performance within specified conditions and feature output validation mechanisms, error detection, and safeguards against data tampering. Accuracy must be measured, quantified, and communicated to users.
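As one trivial example of quantifying accuracy for communication to users, a provider could report the fraction of correct predictions on a held-out test set. The sketch below shows that single metric with hypothetical inputs; real systems would normally declare several metrics together with their measurement conditions.

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of correct predictions on a held-out test set."""
    if not labels or len(predictions) != len(labels):
        raise ValueError("predictions and labels must be non-empty and of equal length")
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# Example: 4 of 5 test cases correct gives an accuracy of 0.8 to state in the instructions for use.
print(accuracy([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))
```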
8. Conformity Obligations (Articles 16–19)
All supply chain actors—providers, importers, distributors, and deployers—bear defined responsibilities; providers must conduct conformity assessments, apply the CE marking, issue a declaration of conformity, and maintain technical documentation. Importers and distributors must verify proper marking and registration, while deployers must use the system according to instructions and provide human oversight.
Notifying authorities
Each EU Member State must designate at least one notifying authority responsible for evaluating, designating, and monitoring conformity assessment bodies. These procedures must be coordinated across Member States. To ensure impartiality, notifying authorities must operate independently from the assessment bodies, avoid conflicts of interest, and separate decision-making from assessment roles. They cannot offer services that overlap with those of the assessment bodies and must maintain strict confidentiality. Additionally, they must employ a sufficient number of qualified personnel with relevant expertise in areas such as AI, law, and fundamental rights.
Further explanations
* Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
* Annex III - High-Risk AI System Categories:
Biometrics: Includes remote biometric identification, categorisation based on sensitive traits, and emotion recognition—where permitted by law.
Critical Infrastructure: AI used as safety components in managing digital infrastructure, traffic, or utilities like water, gas, electricity, or heating.
Education and Vocational Training: AI used for admissions, evaluating learning outcomes, assigning education levels, or monitoring student behaviour during exams.
Employment and Workforce Management: AI used for recruitment, job advertising, application screening, performance evaluation, or decisions on promotions and dismissals.
Access to Essential Services: AI used to assess eligibility for public benefits or services (e.g. healthcare), credit scoring, insurance pricing, or emergency response classification and triage.
Law Enforcement: AI used for risk assessment (e.g. victimisation or recidivism), polygraphs, evidence reliability, profiling, and behavioural assessment during investigations or prosecutions.
Migration and Border Control: AI used to assess security or health risks, process visa/asylum applications, or support identity verification and detection at borders.
Justice and Democratic Processes: AI assisting judicial decision-making or influencing electoral outcomes or voter behaviour (excluding internal campaign tools not shown to voters).
Law firm JURŠETIĆ & PARTNERS Llc – Aleksandar Aleksovski, managing partner