INTRODUCTION
Following the entry into application of Chapter II of Regulation (EU) 2024/1689 (the “AI Act”) on 2 February 2025, the European Commission has published guidelines (the “EC guidelines”) aimed at ensuring the consistent, effective and uniform interpretation of the rules on AI practices that are deemed unacceptable due to their potential risks to European values and fundamental rights. The guidelines have been approved by the Commission but still need to be formally adopted (their translation into all official EU languages is still ongoing).
Although those guidelines are non-binding, they offer valuable insights to help stakeholders understand and align with the AI Act’s requirements on prohibited practices.
1. PROHIBITED AI PRACTICES UNDER THE AI ACT
Article 5 of the AI Act, which we have detailed in a previous article, prohibits the placing on the market, putting into service and use of certain AI systems that are deemed unethical and harmful to society.
Among others, manipulative practices, the exploitation of vulnerabilities, social scoring and real-time remote biometric identification are in principle prohibited. The EC guidelines offer, in that regard, legal clarifications and practical examples, some of which are set out below. Without being exhaustive, this article focuses on two main prohibitions: harmful manipulation, deception and exploitation (§2) and social scoring (§3).
2. HARMFUL MANIPULATION, DECEPTION AND EXPLOITATION
The AI Act aims to safeguard individuals and vulnerable persons from any significantly harmful effects of AI-enabled manipulation and exploitation. As such, AI systems that exploit vulnerabilities due to age, disability, or a specific socio-economic situation are prohibited.
a. Examples of prohibited practices
Age is a primary vulnerability category covered by the AI Act, whether the persons concerned are young or old. The goal is to prevent AI systems from taking advantage of cognitive or other limitations linked to age.
For instance, a game that uses AI to monitor children’s behaviour and deliver personalised rewards through addictive reinforcement loops exploits their developmental vulnerabilities. This can lead to lasting harm, including addiction, health issues, poor academic performance, and social difficulties, with effects that may persist into adulthood.
Similarly, AI systems that target older individuals with deceptive, personalised offers or scams exploit their potentially reduced cognitive capacity to influence decisions they would not otherwise make, often resulting in significant financial harm.
Likewise, individuals in disadvantaged socio-economic situations are generally more vulnerable, with fewer resources and lower digital literacy than the general population. This makes it more difficult for them to recognise or resist exploitative AI practices. The AI Act seeks to prevent AI technologies from reinforcing or deepening existing financial and social inequalities by exploiting these vulnerabilities.
For example, AI-driven systems that target individuals in low-income areas with advertisements for predatory financial products are prohibited. By contrast, AI systems that unintentionally disadvantage socio-economically vulnerable groups due to biased training data (i.e. indirect discrimination) are not automatically covered: the prohibition applies where such targeting is intentional.
b. Examples of permitted practices
The AI Act clarifies that common and legitimate commercial practices, such as advertising, should not be regarded ‘in themselves’ or by their very nature as harmful manipulative, deceptive or exploitative AI-enabled practices.
For example, AI systems used in banking services – such as for loans or mortgages – that consider a client’s age or socio-economic status in line with EU legislation on financial services, consumer protection, data protection, and non-discrimination, are not prohibited under the AI Act. When designed to support and protect vulnerable individuals, such systems may in fact enhance fairness and accessibility in financial services for these groups.
3. SOCIAL SCORING
While AI-based scoring can offer benefits – such as promoting positive behaviour or improving safety and service quality – certain practices cross the line into unacceptable social control and surveillance and are hence prohibited by the AI Act. The AI Act prohibits AI-enabled ‘social scoring’ systems that assess or classify individuals or groups based on their social behaviour or personal characteristics, resulting in unfair or harmful treatment. This is particularly the case when the data originates from unrelated social contexts or when the resulting treatment is disproportionate to the underlying behaviour. The prohibition applies broadly across both public and private sectors, without limitation to specific fields.
a. Examples of prohibited practices
The EC guidelines provide, among others, the following examples of prohibited practices:
- the use of an AI-based predictive tool by national tax authorities to assess all taxpayers’ returns for potential audits;
- the use of an AI system based on various (unrelated) data by a social welfare agency to estimate the likelihood of fraud among recipients of household allowances;
- the use of an AI system by a public labour agency to score unemployed individuals, based on an interview and an algorithmic assessment, to determine their eligibility for employment support, where unrelated data are included (such as marital status, chronic health conditions or addiction history).
b. Examples of permitted practices
Examples of legitimate scoring practices that fall outside the scope of the AI Act prohibitions include the following:
- financial credit scoring systems used by creditors or credit agencies to assess creditworthiness based on relevant financial data, such as income and expenses (when aligned with consumer protection laws and serving a legitimate purpose);
- the use by insurers of telematics data on risky driving behaviour, such as speeding, to adjust premiums proportionally, reflecting the higher accident risk;
- specific reward systems (such as faster returns or refunds) offered by online shopping platforms to customers with a strong purchase history and low return rates, provided these rewards are fair and users still have access to standard options.
4. ENTRY INTO APPLICATION
Article 5 of the AI Act has been applicable since 2 February 2025 and applies to all AI systems in the European Union, regardless of whether they were placed on the market, put into service or used before or after this date. The prohibitions set out in this provision are directly binding, and providers and deployers must ensure that they do not place on the market, put into service or use AI systems that could constitute prohibited practices under Article 5 of the AI Act.
Although these prohibitions are already in force, the related provisions on governance, enforcement and penalties will only become applicable on 2 August 2025. Until then, no administrative fines can be imposed and market surveillance authorities are not yet operational. However, the prohibitions have direct legal effect, allowing affected parties to seek enforcement, including interim injunctions, before national courts even before these enforcement mechanisms are in place.
If you have any questions or would like to discuss the potential impact of the AI Act on your business, feel free to reach out to us at joan.carette@simontbraun.eu or maika.bernaerts@simontbraun.eu.
***
This article does not constitute legal advice or a legal opinion. You should seek advice from a legal counsel of your choice before acting upon any of the information in this newsletter.