In our previous two articles, we explored the categories of prohibited AI systems and high-risk AI systems as defined by the EU Artificial Intelligence Act (Regulation (EU) 2024/1689, hereinafter: the Act*). While much of the public and legal discourse has focused on these areas, an important and widely applicable category—limited-risk AI systems—deserves closer examination. These systems, which comprise the majority of AI applications in daily commercial and public life, are not banned or heavily regulated. Instead, the Act introduces a light-touch framework of transparency obligations aimed at fostering accountability, user awareness, and trust without stifling innovation.
Although the Act does not use "limited-risk" as a formally defined term, it is commonly applied to systems that fall outside the prohibited and high-risk tiers and are subject only to transparency obligations. These are systems that do not directly pose significant risks to fundamental rights, safety, or health. They are not embedded into critical products such as medical devices or industrial machinery, nor do they serve functions such as biometric identification, predictive policing, or automated decision-making in migration and justice systems. However, they may still affect users' perceptions, decisions, or interactions, especially when users are unaware they are engaging with AI.
Transparency as the key element
The EU AI Act does not subject limited-risk systems to conformity assessments*, CE marking, or detailed technical documentation. Instead, the obligations are narrowly tailored to ensure user awareness. The central requirement is transparency: users must be informed when they are interacting with an AI system, especially when it may be confused with a human, or when content has been synthetically generated. This fosters trust and protects users from manipulation or deception. Importantly, these requirements are not meant to deter development, but to establish a baseline of responsible design and deployment.

One of the most impactful features of the AI Act is its future-proof approach to limited-risk systems. The Act empowers the European Commission to issue guidance on grey areas and to develop standardization practices for transparency. This is essential in a fast-moving field, where today's limited-risk applications might evolve into high-risk systems as their functions become more complex and consequential. The law's emphasis on adaptability ensures that developers and users can keep pace with innovation while maintaining legal certainty.
Below, we provide examples of limited-risk AI systems to illustrate how this category works in practice.
Customer service chatbots
A typical example of a limited-risk system is a customer service chatbot used by online retailers. These bots respond to customer inquiries using natural language processing. Although they facilitate communication and improve user experience, they do not make legally binding decisions or materially impact fundamental rights. Nevertheless, the user must be clearly informed that they are interacting with a machine, not a human being. Under Article 50 of the EU AI Act, this disclosure requirement ensures that users are not misled and can make informed choices about how to engage with the technology.
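As a purely illustrative sketch (not a compliance tool, and not based on any particular provider's implementation), the following Python snippet shows one way a retail support chatbot could surface the kind of disclosure described above; the class and message names are our own assumptions.

```python
# Illustrative sketch only: a support chatbot that discloses up front that
# the user is talking to an AI system, in the spirit of the Article 50
# transparency obligation discussed above. Names are hypothetical.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human agent. "
    "You can request a human agent at any time."
)


class SupportChatbot:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        # Prepend the disclosure to the very first reply so the user is
        # informed before any substantive interaction takes place.
        answer = self._answer(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

    def _answer(self, user_message: str) -> str:
        # Placeholder for the actual natural-language model call.
        return "Thanks for your question! Your order status is: shipped."


if __name__ == "__main__":
    bot = SupportChatbot()
    print(bot.reply("Where is my order?"))    # first reply includes the disclosure
    print(bot.reply("When will it arrive?"))  # disclosure is not repeated
```

The point is simply that the disclosure appears before any substantive exchange, so the user knows from the outset that no human agent is involved.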
Streaming platforms
Another common example is content recommendation algorithms used by streaming platforms like Netflix or music services like Spotify. These AI systems predict user preferences based on past behavior to suggest movies or songs. While such systems significantly shape media consumption, they generally do not interfere with legal rights or personal safety and are thus considered limited-risk. However, because they influence individual decision-making, platforms must ensure a certain degree of transparency, for instance by disclosing how recommendations are generated or allowing users to modify preference settings.
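To make the idea of recommendation transparency more concrete, here is a minimal, hypothetical Python sketch (not how Netflix, Spotify, or any real platform actually works) in which each suggestion carries a plain-language note explaining why it was generated; all names and data structures are illustrative assumptions.

```python
# Toy recommender sketch: each suggestion is paired with a human-readable
# explanation of how it was generated. Purely illustrative.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Recommendation:
    title: str
    reason: str  # plain-language explanation shown to the user


def recommend(watch_history: list[str], catalogue: dict[str, str],
              limit: int = 3) -> list[Recommendation]:
    """Suggest titles whose genre matches the user's most-watched genre."""
    genre_counts = Counter(catalogue[t] for t in watch_history if t in catalogue)
    if not genre_counts:
        return []
    top_genre, seen = genre_counts.most_common(1)[0]
    reason = f"Recommended because you watched {seen} {top_genre} title(s)."
    return [
        Recommendation(title, reason)
        for title, genre in catalogue.items()
        if genre == top_genre and title not in watch_history
    ][:limit]


if __name__ == "__main__":
    catalogue = {"Alpha": "sci-fi", "Beta": "sci-fi", "Gamma": "drama", "Delta": "sci-fi"}
    for rec in recommend(["Alpha", "Beta"], catalogue):
        print(rec.title, "-", rec.reason)
```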
Generative AI systems
Generative AI systems, such as those producing images, text, audio, or video, also often fall into this category, unless they are used in high-risk sectors or produce content that impersonates real individuals. A generative AI tool that creates marketing copy, for example, might suggest product slogans or simulate a brand's tone of voice. While powerful, such a system does not assess individuals or make real-world decisions affecting livelihoods or legal status. The Act requires that users be made aware when content is artificially generated, especially in cases where it could be mistaken for human-created material. This includes deepfake technology*, where synthetic videos depict real people saying or doing things they never did. Even if not maliciously intended, such content must carry a visible disclosure to avoid deception.
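The following Python sketch illustrates, in a deliberately simplified and hypothetical way, how generated content could be packaged with both a visible label and machine-readable metadata before publication; the field names are our own assumptions and do not reflect any standard or terminology from the Act.

```python
# Illustrative sketch only: attaching a visible disclosure and a
# machine-readable flag to AI-generated content before it is published.
# Field names such as "ai_generated" are hypothetical.

import json
from datetime import datetime, timezone

VISIBLE_LABEL = "This content was generated by an AI system."


def package_synthetic_content(body: str, generator: str) -> str:
    """Wrap generated text with a visible label and machine-readable metadata."""
    record = {
        "disclosure": VISIBLE_LABEL,  # shown to the end user
        "content": body,
        "metadata": {                 # intended for automated tools
            "ai_generated": True,
            "generator": generator,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)


if __name__ == "__main__":
    copy = "Glow brighter. Go further. Choose Lumina."
    print(package_synthetic_content(copy, generator="marketing-copy-model"))
```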
AI-powered translation tools
AI-powered translation tools represent another typical limited-risk use case. These systems, such as Google Translate or DeepL, allow users to convert text between languages in real time. While invaluable for communication, especially in multilingual contexts like the EU, they do not autonomously make decisions or impact legal rights. Still, users should be aware of their limitations—translations may not be contextually accurate or culturally appropriate. The EU AI Act encourages clarity on such limitations to prevent overreliance and potential misunderstandings in legal, medical, or contractual settings.
Educational tools
In the educational sector, AI systems used to support learning, such as grammar correction tools or language learning apps, generally fall within the limited-risk tier. A tool like Grammarly, which suggests edits to improve style or clarity, supplements human judgment rather than replacing it. However, when AI is used to automatically grade essays or rank students for admission (functions that can impact future opportunities), those systems transition into the high-risk category, triggering far stricter obligations. The boundary between limited and high risk thus hinges not only on the sector but also on the AI system's function and its influence on outcomes.
Voice assistant tools
Voice assistants such as Siri, Alexa, or Google Assistant also fall into the limited-risk domain, provided they are used for routine tasks like setting reminders, playing music, or answering factual questions. However, if such assistants are used in sensitive environments—say, assisting elderly users in managing medication schedules or providing financial advice—developers must be vigilant in clarifying their purpose, limitations, and data handling practices. Even within limited-risk categories, the context of use can elevate or mitigate perceived risk.
Want to know more?
* Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
*deepfake technology refers to the use of artificial intelligence (AI)—specifically deep learning techniques—to create or manipulate audio, video, or images in a way that realistically imitates real people or events, often without their consent. The term “deepfake” is a combination of “deep learning” and “fake.” Deep learning is a type of machine learning that uses neural networks with many layers to analyze large amounts of data and generate realistic outputs.
*conformity assessment is a formal process used to demonstrate that a product, service, or system meets specific legal, technical, or safety requirements—often set by law, regulation, or standards. In the context of the EU AI Act, a conformity assessment is the procedure through which a provider of a high-risk AI system proves that the system complies with all applicable legal requirements before it is placed on the EU market or put into use.
Law firm JURŠETIĆ & ALEKSOVSKI Llc – Jelena Nađ, partner