Artificial Intelligence Act


This summary is by TNP Data Protection 

In April 2021, the European Commission presented its Artificial Intelligence package including its proposal for a regulation laying down harmonized rules on AI (“AI Act”). 

Through this legal framework on AI, the Commission aims to address fundamental rights and safety risks generated by specific uses of AI by providing AI developers, deployers and users with clear requirements and obligations. 

The European Parliament adopted its negotiating position on the AI Act on June 14, 2023, stating that “The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing”1. 

AI definition 

The overall aim of the EU legal framework on AI is to cover all AI (traditional symbolic AI, machine learning, hybrid systems and, more recently, generative AI), while keeping the definition of AI as neutral as possible in order to cover techniques which are not yet known or developed. 

According to the latest text version adopted by the Parliament in June 2023, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”2. 

A risk-based approach regulation 

The AI Act follows a risk-based approach which classifies AI systems according to the level of risk they can generate. 

  • Unacceptable risk AI system 

AI systems posing an unacceptable risk are considered a threat to people and will therefore be prohibited, such as those used for social scoring (meaning “evaluating or classifying natural persons based on their social behavior, socio-economic status or known or predicted personal or personality characteristics”3). 

The Parliament has expanded the list to include bans on intrusive and discriminatory uses of AI, such as: 

  • “Real-time” remote biometric identification systems in publicly accessible spaces; 
  • “Post” remote biometric identification systems (with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization); 
  • Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation); 
  • Predictive policing systems (based on profiling, location or past criminal behavior); 
  • Emotion recognition systems in law enforcement, border management, the workplace and educational institutions; 
  • Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases4. 

  • High-risk AI system 

The Parliament’s negotiating position adopted in June 2023 introduces important changes to the categorization of high-risk AI systems. Indeed, to be considered high-risk, an AI system must “pose a significant risk of harm to the health and safety or the fundamental rights of persons [or] (…) to the environment. Such significant risk of harm should be identified by assessing on the one hand the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined altogether and on the other hand whether the risk can affect an individual, a plurality of persons or a particular group of persons.”5

According to the latest text version adopted by the Parliament in June 2023, high-risk AI systems will be divided into two categories 6: 

  1. AI systems that are used in products falling under the EU’s product safety legislation (this includes toys, aviation, cars, medical devices and lifts); 
  2. AI systems falling into 8 specific areas 7:
    – Biometric identification and categorization of natural persons;
    – Management and operation of critical infrastructure;
    – Education and vocational training;
    – Employment, worker management and access to self-employment;
    – Access to and enjoyment of essential private services and public services and benefits;
    – Law enforcement;
    – Migration, asylum and border control management;
    – Assistance in legal interpretation and application of the law. 

Different rules for different risk levels 

N.B.: The Parliament’s negotiating position substitutes the term “user” used in the Commission’s initial proposal with “deployer”, still defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”8. 

Providers and deployers of high-risk AI systems must comply with obligations related to risk management, data governance and technical documentation detailed in Title III – Chapter 3, “before placing them on the market or putting them into service”9. 

The Parliament’s negotiating position places new obligations on the providers of AI foundation models 10 “prior to making it available on the market or putting it into service”11: 

  • registering these models in an EU database in order to comply with comprehensive requirements for their design and development; 
  • producing and keeping certain documentation for ten years; 
  • drawing up extensive technical documentation and intelligible instructions for downstream providers; 
  • providing information on the characteristics, limitations, assumptions and risks of the model or its use 12. 

Moreover, according to the latest text version adopted by the Parliament in June 2023, “Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialize a foundation model into a generative AI system”13 will have to comply with additional obligations:  

  • transparency obligations; 
  • training the foundation model in a way that ensures adequate safeguards against the generation of content breaching EU law; 
  • documenting and making publicly available a sufficiently detailed summary of the use of copyrighted training data. 

Finally, according to the latest text version, “Providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy 14 of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used”15. 

Enforcement and governance 

The enforcement architecture of the AI Act should resemble that of the GDPR, with the main competencies attributed to national authorities, supported by an EU AI Office that will ensure consistent application of the regulation across the EU. 

Next steps 

The Parliament having adopted its negotiating position, trilogue negotiations between the Commission, the Council and the Parliament to agree on a final text have just started and are expected to continue until the end of the year (at least).   

Once the final regulation has entered into force, the AI Act would become applicable 24 months thereafter, which is currently expected to be in the course of 2025. 


  2. Article 3 – paragraph 1 – point 1
  3. Article 3 – paragraph 1 – point 44 k (new)
  5. Recital 32
  7. Annex III
  8. Article 3 – paragraph 1 – point 4
  9. Article 16 – paragraph 1 – point a
  10. Article 3 – paragraph 1 – point 1 c (new): “‘foundation model’ means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”
  11. Article 28 b (new)
  13. Article 28 b (new)
  14. Recital 9 b (new): “‘AI literacy’ refers to skills, knowledge and understanding that allows providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip providers and users with the notions and skills required to ensure compliance with and enforcement of this Regulation.”
  15. Article 4 b (new). 
28 June 2023
