Since February 2025, companies that develop or use artificial intelligence (AI) must comply with the first requirements of the EU AI Act. This regulation, the world’s first comprehensive legal framework for AI, aims to promote innovation while ensuring the protection of consumer rights.

What is the AI Act?
The EU AI Act aims to ensure the safe and transparent use of AI while promoting innovation. Companies that develop or use AI systems must ensure that these systems respect people’s rights and do not make discriminatory or erroneous decisions. The AI Act takes a risk-based approach and categorises AI systems according to the risk they pose to society. Four risk classes are introduced for this purpose:

Risk-based classification of AI systems

  1. Unacceptable risk: This includes AI systems that seriously jeopardise people’s fundamental rights, such as social scoring and manipulative applications. These are prohibited.
  2. High risk: Strict requirements apply to AI systems that have a potentially serious impact on people’s safety and fundamental rights. This includes AI systems in areas such as human resources, lending or medical diagnoses. Among other things, companies must carry out detailed risk assessments, maintain documentation and ensure human oversight.
  3. Limited risk: Less risky AI applications, such as chatbots or AI-generated content, are subject to transparency obligations. Users must be informed that they are interacting with an AI system, and generated content must be labelled as such.
  4. Minimal risk: AI systems that do not pose any significant risks, such as spam filters or translation programmes, are not subject to any specific requirements. (A simplified code summary of all four tiers follows below.)
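To make the classification concrete, here is a minimal Python sketch that maps each risk tier to the example systems and core obligations named above. The enum and dictionary are our own illustrative shorthand for this article’s summary, not a legal taxonomy taken from the Regulation:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring - prohibited
    HIGH = "high"                  # e.g. HR, lending, medical diagnoses
    LIMITED = "limited"            # e.g. chatbots, AI-generated content
    MINIMAL = "minimal"            # e.g. spam filters, translation tools

# Simplified, non-exhaustive obligations per tier as described in this
# article; not a substitute for the text of Regulation (EU) 2024/1689.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskTier.HIGH: ["detailed risk assessment", "documentation", "human oversight"],
    RiskTier.LIMITED: ["inform users they are interacting with AI",
                       "label AI-generated content"],
    RiskTier.MINIMAL: [],  # no specific requirements
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```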

Schedule for the introduction of the AI Act
The AI Act is being introduced in several stages, giving companies time to adapt to the new requirements gradually (a compact code version of this schedule follows after the list):

  • Since 2 February 2025: Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) of the AI Act apply. These provisions establish the ground rules and prohibit AI practices that pose an unacceptable risk.
  • 2 May 2025: Codes of practice for general-purpose AI models are to be finalised.
  • 2 August 2025: Governance rules, the obligations for general-purpose AI models and the provisions on notifying authorities and penalties become applicable (Chapter III Section 4, Chapters V, VII and XII and Article 78, with the exception of Article 101).
  • 2 August 2026: The remaining provisions of the Regulation become applicable (exception: Article 6(1)).
  • 2 August 2027: Article 6(1) and the corresponding obligations become applicable; the transitional periods end for high-risk AI systems that are safety components of products covered by EU harmonisation legislation.
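For quick reference, the schedule can also be expressed as a simple lookup table. The following Python sketch (with milestone labels paraphrased from the list above) returns the provisions already applicable on a given date:

```python
from datetime import date

# Key applicability dates as summarised in this article; always check
# Regulation (EU) 2024/1689 itself for the authoritative schedule.
MILESTONES = [
    (date(2025, 2, 2), "General provisions and prohibited practices (Chapters I-II)"),
    (date(2025, 5, 2), "Codes of practice finalised"),
    (date(2025, 8, 2), "Governance, general-purpose AI obligations, penalties"),
    (date(2026, 8, 2), "Remaining provisions (except Article 6(1))"),
    (date(2027, 8, 2), "Article 6(1); end of transitional periods"),
]

def provisions_in_force(today: date) -> list[str]:
    """Return the milestones that have become applicable by `today`."""
    return [label for effective, label in MILESTONES if effective <= today]

for label in provisions_in_force(date(2025, 9, 1)):
    print(label)
```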

Obligations for companies
All companies that develop or use AI must ensure that their systems comply with the new standards. This includes transparent documentation, security measures, data protection precautions and, for high-risk AI, human oversight.

Penalties for violations
Companies that fail to comply with the requirements of the EU AI Act face severe penalties. The most serious violations can result in fines of up to 35 million euros or 7% of annual global turnover, whichever is higher.
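Since the cap is the higher of the two amounts, the effective maximum scales with company size. A minimal sketch of the arithmetic, using a hypothetical turnover figure:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound for the most serious violations under the AI Act:
    the higher of EUR 35 million or 7% of annual global turnover."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion in annual global turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```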

Promotion of innovation
The AI Act not only promotes safety and consumer protection, but is also intended to stimulate innovation and investment in ethical and responsible AI technologies. Small and medium-sized enterprises (SMEs) and start-ups in particular are to receive support under the Regulation, for example through national AI regulatory sandboxes.

Sources: European Commission, Sparkasse (in German), Regulation (EU) 2024/1689