Negotiations on the rules for dealing with AI took a good three years, but an agreement has now been reached. The AI Act is the world's first comprehensive set of rules for artificial intelligence. It is intended to create legal certainty for all economic players involved in the private and public sectors (providers and deployers of AI systems, product manufacturers, authorised representatives, importers and distributors). The AI Act is intended to promote the introduction of human-centred and trustworthy AI systems while ensuring a high level of protection for health, safety and fundamental rights, including democracy, the rule of law and protection of the environment.
Text of the AI Act

On 21 April 2021, the Commission followed with its official proposal for a regulation.
The AI Act has completed the ordinary legislative procedure. On 13 March 2024, the EU Parliament adopted the AI Act. This was followed by the vote of the Council of the EU ("Council of Ministers") on 21 May 2024 and, finally, publication in the Official Journal of the EU.
The AI Act came into force in 2024. Its obligations will apply in stages.
Time Frame of the AI Act

The AI Act was adopted with the aim of establishing a standardised legal framework for aspects relating to AI systems. It brings:
The AI Act is an EU regulation, which means that the same rules apply throughout the EU. In the harmonised areas, the individual member states are prohibited from adopting national regulations; member states may only regulate where the AI Act permits it. Harmonised rules also ensure the free movement of AI-based goods and services across national borders.
The AI Act follows a risk-adapted approach and divides AI systems and practices into four groups. The first group includes bans on certain artificial intelligence practices. Certain possible applications of AI systems have too great a potential for harm in terms of health, physical integrity and fundamental rights and are therefore prohibited. Certain practices are enumerated in an exhaustive list. These include:
The second group of risk-based regulations covers high-risk AI systems (see Annexes I and III of the AI Act). Although these AI systems have a high risk potential, they are not prohibited. Instead, specific requirements are placed on such AI systems, and obligations are imposed on the operators involved with them.
Certain AI systems pose only a low risk, which is why, in line with the risk-based regulatory approach, the requirements for high-risk systems do not apply to them; transparency requirements must nevertheless be complied with. For example, when a chatbot is used, users must be informed that they are communicating with a machine rather than a human.
In response to market developments in the meantime, rules on the placing on the market of general-purpose AI systems were also introduced during the negotiations.
After AI systems have been placed on the market, they should continue to be monitored so that malfunctions can be detected. Users should also be able to report violations of the AI Act. Market surveillance and enforcement are carried out by national market surveillance authorities and by the AI Office that has already been established at EU level.
In order not to restrict the development possibilities of AI, regulations to promote innovation are also laid down. AI systems should be able to be (further) developed in test environments (so-called "sandboxes"). A distinction is made between regulatory and operational sandboxes.
Even before and during the negotiations on the AI Act, other legal acts that are important for the application of the AI Act and for AI systems were adopted, or are still being negotiated, at EU level.
Other relevant legal acts (extract):