Prohibited Artificial Intelligence Practices in the European Union
The European Union Artificial Intelligence Act has explicitly outlined which use cases of Artificial Intelligence are forbidden within its jurisdiction. Learn why you should be aware of them.
Last March 13th, the European Parliament finally voted on and passed the much-debated European Union Artificial Intelligence Act [1]. Within it sits the detailed definition of the Prohibited Artificial Intelligence Practices in the European Union, which we should all be aware of, regardless of our position in the market: entrepreneurs building Artificial Intelligence tools within the European Union jurisdiction; entrepreneurs building AI tools outside of the EU but intending to sell them within the EU jurisdiction; organizations within the EU (private and public) intending to acquire, develop, or deploy AI tools; users of AI tools within such organizations; or the general public, which will be affected by the use of AI. Everyone.
The road has been neither short nor easy. Preceded by the EU Digital Markets Act [2] and strongly based on the output of the EU High-Level Expert Group on Artificial Intelligence’s Ethics Guidelines for Trustworthy AI [3], the Act has been the subject of intense debate. At some point, a few of the most actively engaged EU Member States were even considering withdrawing their support for the Act [4], because they considered it too restrictive, with the potential of hampering their AI development.
The European Union Artificial Intelligence Act is quite large, so we will deal with it in several parts, focusing on the most relevant sections of it, and leaving out most of the obscure legalese that is meant to give it structure or connect it with previous regulations (without ignoring their effects, of course).
We could split its areas of interest like this:
Prohibited Artificial Intelligence Practices (the subject of this article)
Definition of High Risk AI Systems, and How every party should deal with them
Transparency Obligations for Providers and Deployers of AI Systems
Obligations for the Providers of General Purpose AI Models
Measures in Support of Innovation (or: how do we regulate all of this without scaring AI researchers and entrepreneurs away from the EU?)
Governance and the EU Database for High Risk AI Systems
Post-Market Monitoring, Market Surveillance and Penalties
Let’s get on with it.
General Provisions
Let’s begin with the General Provisions, as stated on the EU AI Act Chapter 1:
“The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.”
As mentioned before, the EU AI Act is heavily influenced by the earlier Trustworthy AI Guidelines, developed by the EU High-Level Expert Group on AI. It starts from the Fundamental Rights and moves from there into the more applied objective of balancing the containment of potential harm produced by AI systems with the need to support innovation in this field.
Who are the market actors affected by the scope of this AI regulation?
Providers of AI Systems located both in and outside of the EU
Deployers of AI Systems located in the EU
Providers and Deployers located outside of the EU, whose AI Systems’ output is used inside the EU
Importers and Distributors of AI Systems
Product manufacturers that embed AI Systems within their products
Authorized representatives of providers not established in the EU
Affected persons located in the EU
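To make the scope rules above concrete, here is a deliberately simplified sketch in Python. The role names, parameters, and decision logic are my own illustrative model of the list above, not an official taxonomy and certainly not legal advice:

```python
# Illustrative sketch only: a toy model of the EU AI Act's territorial scope
# as listed above. Role names and logic are the author's simplification.

def in_scope(role: str, located_in_eu: bool, output_used_in_eu: bool = False) -> bool:
    """Return True if the actor plausibly falls within the Act's scope."""
    if role in {"provider", "deployer"}:
        # Providers and deployers are covered when established in the EU,
        # or when the system's output is used inside the EU.
        return located_in_eu or output_used_in_eu
    if role in {"importer", "distributor", "product_manufacturer",
                "authorized_representative", "affected_person"}:
        # These actors are covered by virtue of acting on the EU market
        # or being located in the EU.
        return True
    # Anything else (e.g. a hobbyist outside any professional activity)
    # falls outside this toy model.
    return False

print(in_scope("provider", located_in_eu=False, output_used_in_eu=True))   # True
print(in_scope("deployer", located_in_eu=False, output_used_in_eu=False))  # False
```

The interesting case is the first one: location outside the EU does not put a provider or deployer out of reach if the system’s output lands inside the Union.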
Which actors are outside the scope of the EU AI Act?
Outside of the scope are the national security concerns of the Member States, and military applications. Also exempt are non-EU states that make use of their own AI systems during cooperation with the EU, as long as they can provide proof that such systems protect the fundamental rights and freedoms stated in the EU Charter.
AI Systems and AI Models in operation for scientific purposes, research, and development are outside of the scope of the Act until they enter the real-world testing phase. At that point they become subject to the regulation.
Consumer protection and product safety laws apply normally to everything AI related.
Individuals tinkering with AI outside of any professional activity are not subject to the Act. So yeah, you can build Skynet in your basement, as long as you don’t openly sell it, or test it in the public domain.
The Member States can introduce further regulations to favor workers affected by AI Systems.
AI Systems developed under Free and Open Source licenses are not subject to the AI Act, unless they are placed on the market, deemed High Risk AI Systems, considered Prohibited AI Practices, or subject to the Transparency Obligations cited in Article 50.
Prohibited Artificial Intelligence Practices
This list can be quite extensive, so we will review its elements to assess their impact for both end users and entrepreneurs engaging in AI development.
I’ve simplified the legalese a bit to make it more readable, but if you want to check the original text (in the EU AI Act Chapter 2), you can refer to the source material in the References section.
AI systems that use subliminal techniques, beyond a person’s or group of persons’ consciousness, to purposely manipulate their behavior, leading them to take decisions they would not otherwise have taken and that will result in significant harm to them.
AI systems that exploit vulnerabilities of a person or group of persons (like age, disability, social or economic situation) to manipulate them and lead them to take decisions that will result in significant harm.
The EU AI Act is very conscious of the dangers AI can pose to the general population in terms of manipulating the human behavior through targeted and repeated bombardment of information. Most probably building upon the negative experiences from the Cambridge Analytica scenario, and related ones, it explicitly forbids the use of AI for this use case.
AI systems deployed with the intention of evaluating or classifying persons according to their behavior, inferring personal characteristics, storing that social score and using it to treat them unfavorably in a different context, or in a disproportionate way given the behavior analyzed.
AI systems deployed with the intention of making risk assessments of persons to predict the likelihood of their committing a criminal offense, based solely on profiling of their personal traits or characteristics.
AI systems can be used as a tool to support the assessment being performed by a human on the likelihood of a person having participated in a criminal activity, based on existing and verifiable facts linked to such activity.
AI systems aren’t new. We already have real examples of AI systems being used to classify people for law enforcement purposes, producing inadequate and even racist outputs, as reported in this MIT Technology Review article [5].
We also have the police state being built by China, with its famous “Social Credit System” already in operation.
Building upon these experiences, the EU AI Act also explicitly forbids the use of AI systems to perform classification and inference of future behavior of people.
On the other hand, the use of AI systems as tools to sift through large amounts of data, to find or refine information a human criminal investigator is already working on, is explicitly permitted.
AI systems with the purpose of creating or expanding a facial recognition database by scraping public, CCTV or Internet video sources.
AI systems aimed to infer the emotions of natural persons in the workplace or educational institutions, except for medical or safety reasons.
AI systems with the intent of categorizing and classifying people according to their sex, race, political opinion, beliefs, etc., deduced from their biometric data.
The potential of using AI systems to scrape biometric information from public sources is one of the biggest risks this technology poses, and it is being addressed directly here.
Also, inferring the emotional state of people with AI can enable manipulation (think of emotion-based advertising, for example). This, too, is explicitly forbidden.
And any attempt to classify people by inferring their personal characteristics from their interactions with any system (for example, social media) is considered a forbidden practice.
The use of AI systems to label or filter lawfully acquired biometric data in the area of law enforcement is permitted.
The use of real-time biometric identification systems in public spaces for the purposes of law enforcement, unless it is deemed strictly necessary to achieve one of the following objectives:
The targeted search for victims of abduction, human trafficking, or sexual exploitation, and the search for missing persons.
The prevention of a specific, substantial and imminent threat to the life and physical safety of persons, or a genuine threat of terrorist attack.
The localization or identification of a person suspected of having committed a criminal offense of the types referred to in Annex II.
The use of real-time biometric identification in publicly accessible spaces for the purposes of law enforcement shall only be deployed to confirm the identity of the specific targeted individual and must take into account:
The scale and probability of the harm that would arise if the system were not used.
The consequences for the rights and freedoms of the people involved.
Another important use case of AI systems that needed regulation was the use of real-time biometric identification in public and open spaces, for law enforcement purposes.
This use case of AI is probably the most specifically regulated in the Forbidden Practices section of the EU AI Act.
It can only be approved in very specific cases, as stipulated in the Annex II: List of Criminal Offenses, from the Act, and those situations detailed here.
Nevertheless, AI systems are permitted as tools to help in the manipulation of data already acquired in a lawful manner, such as filtering and labelling.
The use of real-time biometric identification systems in public spaces for the purposes of law enforcement can only be authorized if the law enforcement authority has completed a Fundamental Rights assessment and the system to be used is registered in the EU High Risk AI Systems database.
The authorization will be provided either by a judicial authority or by a specially appointed independent administrative authority of the Member State, whose decision is binding.
In cases of justified urgency, such systems may be used without prior authorization, but the authorization process must be started within 24 hours at most. If the authorization is rejected, the use of the AI system must stop immediately, and the information collected must be destroyed, as it cannot be used lawfully.
The authorization provided is only valid for a specific geographic location, time, and personal scope.
No decision that produces an adverse legal effect on a person can be taken by using solely the output of the real-time biometric identification system.
Each and every use of the real-time biometric identification system must be notified to the Market Surveillance Authority and the National Data Protection Authority.
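The conditions above form a small state machine: prior assessment, registration, and authorization; an urgency path with a 24-hour deadline; and a mandatory stop-and-destroy step on rejection. Here is a minimal sketch of that reading, with class and field names of my own invention (a model of my interpretation, nothing more):

```python
# Illustrative sketch only: a toy state model of the authorization workflow
# described above. Names are invented; this is not a compliance tool.
from dataclasses import dataclass, field

@dataclass
class RtbiDeployment:
    """A real-time biometric identification deployment under the Act's rules."""
    fundamental_rights_assessment_done: bool
    registered_in_eu_database: bool
    authorized: bool = False
    active: bool = False
    collected_data: list = field(default_factory=list)

    def start(self, urgent: bool = False) -> None:
        # Normal path: assessment, registration, and authorization
        # must all precede any use of the system.
        if (self.fundamental_rights_assessment_done
                and self.registered_in_eu_database
                and self.authorized):
            self.active = True
        elif urgent:
            # Urgency path: use may start, but the authorization process
            # must be started within 24 hours at most.
            self.active = True
        else:
            raise PermissionError("use not permitted without prior authorization")

    def on_authorization_rejected(self) -> None:
        # Rejection: stop immediately and destroy the data collected,
        # since it cannot be used lawfully.
        self.active = False
        self.collected_data.clear()
```

The point of the model is the rejection path: once authorization is denied, both the deployment and the data it gathered become unusable.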
The EU AI Act has put a lot of effort into constraining governments’ ability to perform automated surveillance of their citizens.
This is clearly shown by the strict requirements in place to even use an AI system for real-time identification in public spaces.
Not only must those systems be registered prior to their use, but each and every use must be directly approved by a judicial authority, or a similar legally binding one appointed by the respective government, akin to the lawfully authorized wiretapping and surveillance of communications.
But in addition to that, data collected this way must be reported to a central EU authority, responsible for summarizing this information for later public reporting, and it cannot be the sole evidence used in legal proceedings.
Member States who decide to adopt real-time biometric identification systems must modify their national laws to allow it, in strict compliance with the limits and conditions already mentioned. After doing so, they must notify the EU Commission within 30 days at the latest.
The national Market Surveillance Authority and Data Protection Authority of each Member State will collect the notifications received and report them annually to the EU Commission.
The EU Commission shall publish annual reports on the consolidated data of the use of real-time biometric identification systems.
And finally, the Member States who wish to enable the use of AI systems capable of real-time biometric identification must explicitly modify their national laws, making them at least as restrictive as the EU AI Act, but never less.
They must also create the appropriate national agencies responsible for monitoring compliance and for the data collection already mentioned, since there is an obligation to publish consolidated reports on the topic for general scrutiny.
Conclusion
This is the first article in the series where we’ll explore the most relevant elements of the EU AI Act for the general public and entrepreneurs wishing to create AI systems and market those in the European Union.
The efforts to regulate AI all over the world are just starting: the US Government has also issued Executive Orders on this topic (which I will also review in later articles), and we can bet that very soon additional jurisdictions will start aligning their legal frameworks to both reference points.
As individuals, as entrepreneurs, and as responsible citizens we must keep ourselves informed and aware of these regulations, because the usage of AI systems will deeply change the way we interact with each other, with our governments, and potentially tilt the power structures not necessarily in our favor.
In the next article we will review the next important point in the EU AI Act: “Definition of High Risk AI Systems, and How every party should deal with them”, so stay tuned.
Thank you for reading my publication, and if you consider it helpful or inspiring, please share it with your friends and coworkers. I write weekly about Technology, Business and Customer Experience, which lately brings me to write a lot about Artificial Intelligence, because it is permeating everything. Don’t hesitate to subscribe for free to this publication, so you can keep informed on this topic and all the related things I publish here.
As usual, any comments and suggestions you may have, please leave them in the comments area. Let’s start a nice discussion!
When you are ready, here is how I can help:
“Ready yourself for the Future” - Check my FREE instructional video (if you haven’t already)
If you think Artificial Intelligence, Cryptocurrencies, Robotics, etc. will cause businesses to go belly up in the next few years… you are right.
Read my FREE guide “7 New Technologies that can wreck your Business like Netflix wrecked Blockbuster” and learn which technologies you must be prepared to adopt, or risk joining Blockbuster, Kodak and the horse carriage into the Ancient History books.
References
1. European Union Artificial Intelligence Act
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html
2. A Connected Digital Single Market for All (and Annex)
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2017:228:FIN
3. Ethics Guidelines for Trustworthy AI
https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
4. Why the EU AI Act was so hard to agree on
https://www.technologyreview.com/2023/12/11/1084849/why-the-eu-ai-act-was-so-hard-to-agree-on/
5. Predictive policing algorithms are racist. They need to be dismantled.