European Commission: AI Act Article 5 – Prohibited Practices
The guidelines outline prohibited AI practices under the EU AI Act, including harmful manipulation and deceptive techniques, exploitation of vulnerabilities, social scoring, unauthorized biometric and emotion recognition applications, and real-time biometric identification restrictions. They emphasize transparency, legal safeguards, and a balance between innovation and fundamental rights protection, while also noting the interplay with other EU laws.
This document offers Commission Guidelines on the prohibitions of specific Artificial Intelligence (AI) practices outlined in the EU AI Act (Regulation (EU) 2024/1689).
The guidelines clarify the scope and application of these prohibitions, providing examples and explanations to aid authorities in enforcement and to guide AI providers and deployers in ensuring compliance.
These guidelines are non-binding, with final interpretation reserved for the Court of Justice of the European Union. The document addresses key areas such as manipulative AI, exploitation of vulnerabilities, social scoring, and biometric identification, examining their interplay with existing EU law.
Here's a summary of the key takeaways from the guidelines on prohibited AI practices under the EU's AI Act:
- Harmful Manipulation, Deception, and Exploitation: The AI Act prohibits AI systems that use subliminal, purposefully manipulative, or deceptive techniques to materially distort behavior and cause significant harm. This includes exploiting vulnerabilities related to age, disability, or socioeconomic status.
  - Subliminal techniques, such as flashing images too quickly for conscious perception, are prohibited when used to manipulate behavior in ways that cause significant harm (see the frame-timing sketch after this list).
  - Purposefully manipulative techniques are not defined in the AI Act; in practice, they are techniques designed to increase the effectiveness and impact of manipulation, and the prohibition can apply even absent an intention to cause harm.
  - Deceptive techniques, such as presenting false or misleading information, are prohibited when used to manipulate behavior in ways that cause significant harm. Generative AI systems that "hallucinate" may not be considered deceptive if the provider has informed the user of the system's limitations.
  - Material distortion of behavior means impairing a person's ability to make an informed decision, causing them to act in a way they otherwise would not.
  - Significant harm includes physical, psychological, financial, and economic harm, and must be reasonably likely to occur for the prohibition to apply.
  - Lawful persuasion is not prohibited, but manipulation is: persuasion is transparent and respects autonomy, while manipulation exploits vulnerabilities for the manipulator's benefit.
- Social Scoring: The AI Act prohibits AI systems that classify individuals based on social behavior or personality traits, leading to detrimental or disproportionate treatment in unrelated social contexts.
  - This prohibition applies to both public and private actors.
- Biometric Data and Facial Recognition: The Act prohibits untargeted scraping of facial images to create facial recognition databases. It also prohibits biometric categorization that infers sensitive characteristics like race, political opinions, or sexual orientation.
  - Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement is generally prohibited, with narrow exceptions: targeted searches for victims of specific crimes, prevention of imminent threats, and locating suspects of certain serious crimes.
  - RBI use requires prior authorization from a judicial or independent administrative authority.
- Emotion Recognition: AI systems that infer emotions in the workplace and educational settings are prohibited, with exceptions for medical and safety reasons.
- Exclusions: The AI Act excludes certain areas from its scope, including national security, defense and military purposes, research, and purely personal, non-professional activities.
- Interplay with Other Laws: The AI Act works alongside other EU laws, including data protection, consumer protection, and non-discrimination laws.
- Transparency and Oversight: The AI Act mandates that the use of real-time RBI systems must be reported to market surveillance and data protection authorities.
- Member State Flexibility: Member States may introduce stricter or more protective laws, provided they do not conflict with the AI Act.
- Safeguards: The Act also highlights the need for fundamental rights impact assessments (FRIA) before deploying RBI systems in law enforcement. These assessments should weigh the seriousness of the potential harm, the scale of people affected, and the probability of adverse outcomes (a toy illustration of weighing these factors follows this list).
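As a rough illustration of how the three FRIA factors named above might be weighed together, here is a minimal scoring sketch. The guidelines prescribe no formula; the scales, weights, and every name in this snippet (`FriaInputs`, `risk_score`) are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class FriaInputs:
    seriousness: int    # potential harm severity, 1 (minor) to 5 (severe) -- assumed scale
    scale: int          # number of people affected, 1 (few) to 5 (mass) -- assumed scale
    probability: float  # likelihood of an adverse outcome, 0.0 to 1.0

def risk_score(inputs: FriaInputs) -> float:
    """Toy composite: severity and scale, discounted by likelihood."""
    return inputs.seriousness * inputs.scale * inputs.probability

# Hypothetical deployment: severe potential harm, moderate scale, low likelihood.
assessment = FriaInputs(seriousness=5, scale=3, probability=0.2)
print(f"risk score: {risk_score(assessment):.1f}")  # risk score: 3.0
```

In practice an assessment would be qualitative and documented, not a single number; the point of the sketch is only that all three factors enter the judgment together.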
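To make the subliminal-technique criterion from the first group of bullets concrete, here is a frame-timing sketch of how a compliance team might flag video frames displayed too briefly for conscious perception. The 16 ms threshold is a hypothetical placeholder (the AI Act specifies no number), and the function name is invented for this example.

```python
# Illustrative check: flag frames shown too briefly to be consciously
# perceived. The threshold below is an assumed value, not from the Act.
PERCEPTION_THRESHOLD_MS = 16.0

def flag_subliminal_frames(frame_durations_ms: list[float]) -> list[int]:
    """Return the indices of frames displayed for less than the threshold."""
    return [
        index
        for index, duration in enumerate(frame_durations_ms)
        if duration < PERCEPTION_THRESHOLD_MS
    ]

# A 24 fps stream (~41.7 ms per frame) with a single 8 ms frame inserted.
durations = [41.7, 41.7, 8.0, 41.7]
print(flag_subliminal_frames(durations))  # [2]
```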
These guidelines aim to balance innovation with the protection of fundamental rights and safety, setting clear boundaries for AI practices that are considered too risky.