
RED LINES IN AI: UNDERSTANDING THE EU’S BAN ON UNACCEPTABLE RISK SYSTEMS

One of the most significant innovations introduced by Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024—commonly referred to as the EU Artificial Intelligence Act (AI Act)—is its risk-based regulatory approach.

Unlike a one-size-fits-all model that would treat every AI system the same, the AI Act recognizes that different technologies pose different levels of risk to individuals, fundamental rights, and society as a whole. The regulation therefore defines four categories of risk, each carrying progressively heavier legal obligations depending on the potential for harm or misuse.

Yesterday, we introduced the historical background of artificial intelligence and emphasized the adoption of this landmark regulation at the EU level. Today, we begin the first of four in-depth articles addressing each risk category under the AI Act—starting with the most serious: AI systems that pose an unacceptable risk.

Defining artificial intelligence under the AI Act

Before examining prohibited practices, it is essential to understand what the EU considers "artificial intelligence." In its recitals, the AI Act describes AI as a rapidly evolving family of technologies that generates wide-ranging economic, environmental, and societal benefits across industries and sectors. These technologies enhance predictive accuracy, streamline operations, optimize resource allocation, and personalize digital services. Article 3(1) of the Act then defines an "AI system" more precisely: a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments. When applied responsibly, AI offers competitive advantages to businesses while contributing to the public good, particularly in sectors like healthcare, agriculture, food safety, education, energy, public services, justice, environmental protection, and climate change mitigation.

Unacceptable risk: Prohibited AI practices in the EU

At the highest tier of regulatory concern are AI systems that pose an unacceptable risk to human dignity, safety, and fundamental rights. These systems are considered inherently incompatible with EU values and are therefore strictly prohibited within the European Union. The prohibition applies regardless of any potential utility or innovation value the system might offer.

The following practices are explicitly banned:

Manipulative or subliminal techniques
The use of AI systems that apply subliminal, deceptive, or manipulative techniques to distort individuals' behavior without their awareness, especially when such influence undermines their ability to make an informed decision and causes, or is reasonably likely to cause, significant harm.
Exploitation of vulnerabilities
AI systems that exploit the vulnerabilities of individuals or groups due to age, disability, or socio-economic status in order to distort behavior in ways likely to cause significant harm.
Social scoring
Systems that assess or classify individuals over time based on personal behavior or characteristics, leading to disproportionate or unjust treatment, particularly when applied outside the original data context.
Predictive policing based on profiling
The use of AI to predict the likelihood of criminal behavior solely based on personality traits, biometric data, or behavior profiling—unless used to support human judgment grounded in objective, verifiable facts.
Indiscriminate facial recognition databases
Creating or expanding biometric databases through mass collection of facial images, such as scraping the internet or using public surveillance footage, without consent or legal basis.
Emotion recognition in sensitive contexts
Using AI to detect or interpret human emotions in workplaces or educational settings, except where strictly necessary for medical or safety-related purposes.
Biometric categorization based on sensitive traits
Categorizing individuals using biometric data to draw inferences about race, religion, political beliefs, sexual orientation, or union membership—unless used in law enforcement under lawful and limited circumstances.
Real-time remote biometric identification in public spaces by law enforcement
This practice is generally prohibited, with a few narrowly defined exceptions such as:
Locating missing persons or victims of serious crimes (e.g. human trafficking).
Preventing imminent and serious threats to public safety or terrorist acts.
Identifying suspects of serious crimes punishable in the Member State concerned by a custodial sentence with a maximum of at least four years.
Even when permitted under these exceptions, such usage is subject to strict safeguards, including:

Prior judicial or administrative authorization (with limited emergency exceptions).
Clear limits on duration, location, and scope.
A fundamental rights impact assessment.
Notification of supervisory and data protection authorities.
Annual reporting to the European Commission.
The option for Member States to impose stricter national rules.
Why is this so important?

The classification of these practices as "unacceptable" sends a clear message: not all innovation is worth the risk. The EU has drawn a firm red line when it comes to AI systems that threaten the core of democratic society. While the AI Act encourages innovation and development, it also ensures that no technology is allowed to flourish at the expense of human autonomy, dignity, and rights. Understanding these red lines is essential not only for developers and businesses, but also for regulators, legal advisors, and policymakers. It is the foundation for building trustworthy, lawful, and human-centric AI in the European Union.

 

Law firm JURŠETIĆ & PARTNERS LLC – Anja Juršetić Šepčević, managing partner