Regulation (EU) 2024/1689, better known as the AI Act, was published in the Official Journal of the EU on 12 July 2024. It creates new obligations – and not just for manufacturers of AI systems.
Innovation is to be promoted
Artificial intelligence is not only to be regulated but also promoted. To this end, Member States are obliged to set up AI regulatory sandboxes, which must be operational by 2 August 2026 at the latest (Article 57). These provide a controlled environment in which AI systems can be developed and tested, including under real-world conditions. The competent authorities provide participants with comprehensive guidance and support, in particular with regard to risks to fundamental rights, health and safety, as well as testing and risk-mitigation measures and their effectiveness. Member States must give SMEs and start-ups that are established or have a branch in the EU priority access to the sandboxes. In this – as in other tasks – the Member States are supported by the European AI Office, which was also introduced by the AI Act (Article 64).
Staggered entry into force
The AI Act enters into force 20 days after its publication in the Official Journal of the EU, i.e. on 1 August 2024. However, the Regulation becomes applicable only in stages (Article 113):
- 2 February 2025: ban on AI practices posing an unacceptable risk, together with the general provisions (Articles 1 to 5);
- 2 August 2025: in particular the penalties (Article 99 et seq.) and the rules and codes of practice for general-purpose AI models (Article 51 et seq.);
- 2 August 2026: general applicability of most provisions, except for Article 6(1) and the corresponding obligations;
- 2 August 2027: obligations under Article 6(1), i.e. for high-risk AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I.
Not all AI is regulated
The AI Act takes a risk-based approach. It divides AI systems into risk categories:
- Unacceptable risk (Article 5)
- High-risk AI systems (Article 6 et seq.)
- Limited risk (Article 50 et seq.)
Unacceptable risk
The AI Act refers to this category as “prohibited AI practices” (Article 5). The focus is on the use of AI, but placing such systems on the market and putting them into service are also prohibited in the EU. This applies, for example, to so-called social scoring systems that record, evaluate and classify the behaviour of citizens. Practices that manipulate or otherwise distort the behaviour of a person or group of people are likewise prohibited if they cause, or are likely to cause, significant harm to those persons or groups.
High-risk AI systems (Article 6 et seq.)
Most of these systems are listed in Annex III of the Regulation; in addition, AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I are high-risk (Article 6(1)). Systems listed in Annex III fall into this category only if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. High-risk AI is very strictly regulated: a risk management system must be established, implemented, documented and maintained; the system must be registered; technical documentation must be drawn up; its operation must be automatically logged; and it must be subject to human oversight. High-risk AI systems must also undergo a conformity assessment and bear the CE marking.
Limited risk (Article 50 et seq.)
The focus here is on transparency obligations (see in particular Article 50). Chatbots must be recognisable as such, and the same applies to AI-generated audio, image, video or text content. Further obligations apply in particular to providers of general-purpose AI models (Article 53 et seq.): technical documentation, a policy to comply with EU copyright law and a summary of the content used to train the models are mandatory.
All other AI systems are not specifically regulated, apart from the transparency obligations described above.
New obligations – not only for OpenAI & Co
High-risk AI systems create obligations not only for providers, but also for providers' authorised representatives (Article 22), importers (Article 23), distributors (Article 24) and deployers (Article 26). When importing and placing systems on the market, particular attention must be paid to their CE marking and conformity assessment; deployers must take appropriate technical and organisational measures to ensure that they use such systems in accordance with the accompanying instructions for use. In addition, they must assign human oversight of the systems exclusively to natural persons who have the necessary competence, training and authority. Under Article 25, distributors, importers, deployers and other third parties are even treated as providers of a high-risk AI system – and are then subject to the full provider obligations under Article 16 of the Regulation – if they put their name or trademark on such a system or make a substantial modification to it, in particular one that results in the system being classified as high-risk.
Source: GTAI (original in German)