As already stated in the recitals of the AI Act, given the nature and complexity of the AI value chain and in line with the New Legislative Framework, it is essential to ensure legal certainty and to facilitate compliance with the Regulation. The roles and specific obligations of the relevant actors along the value chain therefore need to be clarified. The core of the AI Act consists of obligations relating to AI systems or AI models according to their risk classification.
In certain situations, different roles in the AI value chain may coincide in one person or organisation; various combinations are possible.
In general, providers of AI systems of any kind have a duty to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy (Art. 4 AIA). AI literacy comprises the skills, knowledge and understanding that enable providers, considering their respective rights and obligations under the AI Act, to make an informed deployment of AI systems and to be aware of the opportunities and risks of AI and the potential harm it can cause (Art. 3 No. 56 AIA).
These are AI systems that pose a high risk. They are not prohibited, but far-reaching obligations must be complied with in some cases. The obligations of providers of high-risk AI systems are set out in Art. 16 AIA. They include ensuring compliance with the requirements for high-risk AI systems in accordance with Art. 16 letter a in conjunction with Chapter III Section 2:
A risk management system must be established, implemented, documented and maintained (see Art. 9 para. 1 AIA). Risk management is understood as a continuous, iterative process that runs throughout the entire life cycle of the high-risk AI system. The provider of high-risk AI systems is thus subject to risk assessment and risk mitigation obligations.
Known and reasonably foreseeable risks to health, safety or fundamental rights must be identified, estimated and evaluated. In addition to possible risks arising from use in accordance with the intended purpose of the AI system, reasonably foreseeable misuse must also be assessed. Risks that only become apparent after placing on the market, for example on the basis of data gathered through post-market monitoring in accordance with Art. 72 AIA, must also be evaluated. Based on the risks identified, "appropriate and targeted" risk management measures must be taken.
The risk management system also requires that high-risk AI systems are tested throughout the entire development process (e.g. through testing in real-world conditions outside AI regulatory sandboxes in accordance with Art. 60 AIA).
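By way of illustration only, and not as a method prescribed by the AI Act, a provider might keep the identified risks in a machine-readable risk register and flag those requiring targeted measures. All field names, scales and thresholds in the following Python sketch are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One entry in a hypothetical risk register (illustrative only)."""
    risk_id: str
    description: str
    affected_interest: str   # e.g. "health", "safety" or "fundamental rights"
    likelihood: int          # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int            # assumed scale: 1 (negligible) .. 5 (critical)
    mitigation: str

    def score(self) -> int:
        # Simple likelihood x severity scoring; a common, but here purely assumed, convention.
        return self.likelihood * self.severity

register = [
    RiskEntry("R-001", "Misclassification of pedestrians in poor lighting",
              "safety", 3, 5, "Augment training data with low-light scenes"),
    RiskEntry("R-002", "Reasonably foreseeable misuse outside the intended speed range",
              "safety", 2, 4, "Enforce a hard software limit"),
]

# Flag risks whose score exceeds an assumed acceptance threshold for targeted measures.
THRESHOLD = 10
for entry in register:
    if entry.score() >= THRESHOLD:
        print(f"{entry.risk_id}: score {entry.score()} -> targeted risk management measure required")
```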
AI systems are generally based on AI models. If the AI models used in high-risk AI systems have been trained with data, training, validation and testing data sets must be developed that meet the quality requirements of Art. 10 para. 2 to 5 AIA (see Art. 10 para. 1 AIA). Among other things, these data sets must be assessed for availability, quantity and suitability, possible biases must be examined, and data gaps or deficiencies must be identified.
Data sets must be sufficiently representative and, to the best extent possible, free of errors and complete; they must reflect the geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used. Autonomous driving systems, for example, must be prepared for safety-critical decisions and must function across a wide range of geographical and contextual situations (e.g. adverse weather conditions).
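The kinds of checks this implies can be illustrated with a short, purely hypothetical Python sketch; the column names, data and imbalance threshold are assumptions made for the example, not requirements of the AI Act.

```python
import pandas as pd

# Hypothetical training-set metadata for a driving-assistance system.
df = pd.DataFrame({
    "region":  ["alpine", "urban", "urban", "coastal", "urban", "alpine"],
    "weather": ["snow", "clear", "rain", "fog", "clear", "clear"],
    "label":   [1, 0, 1, 0, 0, 1],
})

# Completeness: share of missing values per column.
print("missing values per column:\n", df.isna().mean())

# Representativeness: does every relevant geographical/contextual condition appear,
# and is any single condition heavily over-represented?
for col in ("region", "weather"):
    shares = df[col].value_counts(normalize=True)
    print(f"\ncoverage of '{col}':\n", shares)
    if shares.max() > 0.8:   # assumed imbalance threshold
        print(f"warning: '{col}' is dominated by one value -> possible bias")

# Label balance as a crude bias indicator.
print("\nlabel distribution:\n", df["label"].value_counts(normalize=True))
```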
The technical documentation in accordance with Art. 11 AIA must be prepared before a high-risk AI system is placed on the market or put into service. The minimum content of the technical documentation is listed in Annex IV. The technical documentation must be prepared in such a way that it demonstrates that the requirements for high-risk AI systems are met. It must enable competent authorities and notified bodies to assess the compliance of the AI system with those requirements.
SMEs, including start-ups, may provide the elements of technical documentation listed in Annex IV in a simplified manner.
For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Annex I Section A, a single set of technical documentation shall be drawn up that covers the requirements of the AI Act in addition to the documentation required under that legislation. For example, the Medical Device Regulation also requires device manufacturers to prepare technical documentation (Art. 10 para. 4 MDR).
High-risk AI systems must be technically designed and developed in such a way that they allow the automatic recording of events - so-called "logging" (see Art. 12 AIA). This logging is used for documentation purposes, in particular to determine whether the high-risk AI system presents a risk within the meaning of Art. 79 para. 1 AIA or whether a "substantial modification" has been made.
The high-risk AI systems (remote biometric identification systems) referred to in Annex III(1)(a) of the AIA must fulfil specific logging functions.
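What such automatic recording of events can look like in practice is sketched below; this is merely one illustrative approach, and the log format, field names and event types are assumptions rather than requirements of Art. 12 AIA.

```python
import json
import logging
import time
import uuid

# A plain JSON-lines event log; file name and structure are illustrative only.
logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event_type: str, **details) -> None:
    """Record one automatically generated event as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "event_type": event_type,   # e.g. "inference", "input_rejected", "model_updated"
        **details,
    }
    logging.info(json.dumps(record))

# Example: log a single inference so that a later review can reconstruct what happened.
log_event("inference", model_version="1.4.2", input_hash="ab12cd34",
          output_class="approve", confidence=0.87)
```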
High-risk AI systems must be designed and developed in a sufficiently transparent manner so that deployers within the meaning of Art. 3 No. 4 AIA can interpret the system's output and use it appropriately (see Art. 13 AIA). This obligation includes in particular the preparation of instructions for use that provide concise, complete, correct and clear information for deployers. The instructions for use must contain, among other things, the information specified in Art. 13 para. 3 AIA.
High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons for the duration of their use (see Art. 14 AIA). The purpose of this requirement is to prevent or minimise risks to health, safety and fundamental rights, as it cannot be ruled out that risks may persist despite compliance with all the requirements of Chapter III Section 2.
Appropriate oversight measures must be taken that are commensurate with the risks, the degree of autonomy and the context of use of the high-risk AI system. These can be measures of a technical nature built into the high-risk AI system and/or measures to be implemented by the deployer.
High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity throughout their life cycle (see Art. 15 AIA).
“Accuracy” refers to the extent to which a model's predictions or classifications match the actual data. It is a measure of how well the model is able to make the correct predictions.
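As a minimal illustration, with toy data chosen purely for this example, accuracy can be computed as the share of correct predictions:

```python
# Accuracy = correct predictions / total predictions (toy example).
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(f"accuracy = {accuracy:.2f}")   # 5 of 6 predictions correct -> 0.83
```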
“Robustness” describes the resilience of high-risk AI systems; these must be as resilient as possible to errors, faults, inconsistencies or unexpected situations that may occur within the system or the environment in which the system operates, in particular due to its interaction with natural persons or other systems (Recital 75).
“Cybersecurity” plays a critical role in ensuring that AI systems are resilient to attempts by malicious third parties to exploit the systems' vulnerabilities to alter their use, behaviour, performance or compromise their security features (Recital 76).
The requirements for accuracy, robustness and cybersecurity are largely technical in nature, which is why compliance with them is to be measured using benchmarks and measurement methodologies that are yet to be developed. In addition to technical measures, organisational measures must also be taken.
Possible measures include backup or fail-safe plans (see Art. 15 para. 4 subpara. 2 AIA), measures to minimise the risk of so-called "feedback loops" (see Art. 15 para. 4 subpara. 3 AIA), and measures to prevent attacks that attempt to manipulate the training data set ("data poisoning") or pre-trained components used in training ("model poisoning"), as well as attacks using input data designed to mislead the AI model into making errors ("adversarial examples" or "model evasion") (see Art. 15 para. 5 AIA).
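By way of illustration, and without any claim that these are validated defences, the following sketch shows two very crude checks a provider might run: an outlier screen over training features as a first-pass indicator of injected ("poisoned") rows, and a probe of how often predictions flip under small input perturbations. The data, the stand-in model and the thresholds are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric training features; the screening rule below is a very crude
# outlier check, not a validated defence against data poisoning.
X = rng.normal(0.0, 1.0, size=(200, 4))
X[:3] += 8.0                      # simulate a few injected ("poisoned") rows

z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
suspect = np.where((z > 5).any(axis=1))[0]
print("rows flagged for manual review:", suspect)

# Crude robustness probe: does a fixed decision rule change under small input noise?
def predict(x):                   # stand-in for a trained model
    return (x.sum(axis=1) > 0).astype(int)

clean = predict(X)
noisy = predict(X + rng.normal(0.0, 0.1, size=X.shape))
print("prediction flips under small perturbations:", int((clean != noisy).sum()))
```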
According to Art. 16 letters b to l AIA, the provider is also subject to the following additional obligations. These are not requirements relating to the high-risk AI system itself, but "other" obligations of the provider.
Providers of high-risk AI systems must put in place a quality management system that ensures compliance with the Regulation. Its rules, procedures and instructions must be documented (see Art. 17 AIA). Among other things, this system must include a strategy for regulatory compliance, including compliance with conformity assessment procedures.
Providers of high-risk AI systems must indicate their name, registered trade name or trade mark and contact address on the AI system itself or, if this is not possible, on its packaging or accompanying documentation (see Art. 16 letter b AIA).
For a period of ten years after the high-risk AI system has been placed on the market or put into service, the provider must keep certain documents at the disposal of the competent authorities: the technical documentation in accordance with Art. 11 AIA, the documentation concerning the quality management system referred to in Art. 17 AIA, documentation on changes approved by the notified bodies, any decisions and other documents issued by the notified bodies, and the EU declaration of conformity in accordance with Art. 47 AIA (see Art. 18 AIA).
The automatically generated logs within the meaning of Art. 12 para. 1 AIA must be kept for a period appropriate to the intended purpose of the high-risk AI system, but for at least six months (see Art. 19 AIA).
Before placing on the market or putting into service a high-risk AI system within the meaning of Annex III (with the exception of point 2: critical infrastructure), the provider must register the high-risk AI system in the EU database within the meaning of Art. 71 AIA (see Art. 49 para. 1 AIA).
Upon reasoned request by a competent authority, providers of high-risk AI systems must provide all information and documentation necessary to demonstrate the conformity of the high-risk AI system, including the automatically generated logs insofar as they have access to them (see Art. 21 AIA).
The provider of high-risk AI systems must ensure that a conformity assessment procedure is carried out (Art. 43 AIA). Depending on the high-risk AI system in question, this can be carried out on the basis of internal control or with the involvement of a notified body. Furthermore, an EU declaration of conformity must be drawn up (Art. 47 AIA) and a CE marking must be affixed to the AI system itself or, if this is not possible, to its packaging or the accompanying documentation (Art. 48 AIA).
In the event of a serious incident involving a high-risk AI system placed on the market, the provider must notify the market surveillance authority of the Member State in which the incident occurred. The notification must be made immediately after the causal link, or the reasonable likelihood of such a link, between the AI system and the incident has been established, and in any case no later than 15 days after the provider becomes aware of the serious incident (see Art. 73 AIA). Stricter deadlines apply to certain types of incident.
According to Art. 3 No. 49 AIA, a serious incident is an incident or malfunctioning of an AI system that directly or indirectly leads to the death of a person or serious harm to a person's health, a serious and irreversible disruption of the management or operation of critical infrastructure, an infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment.
Specifically, the accessibility requirements of Directives (EU) 2016/2102 and (EU) 2019/882 must be met (see Art. 16 letter l AIA). According to Annex I Section I of Directive (EU) 2019/882, this concerns specific requirements for the provision of information, the design of user interfaces and functionality, and support services (such as help desks, call centres, etc.).
Providers established in third countries must appoint an authorised representative established in the Union before making the high-risk AI system available on the Union market (Art. 22 AIA). Providers are obliged to enable the authorised representative to perform its tasks.
If a provider of high-risk AI systems considers or has reason to consider that a high-risk AI system that has already been placed on the market or put into service does not comply with the AI Act, corrective action must be taken immediately (see Art. 20 AIA). This primarily means bringing the system into conformity, but it can also mean withdrawing, disabling or recalling the AI system. At the same time, downstream actors (distributors, deployers, authorised representatives and importers) must be informed.
If the high-risk AI system presents a risk within the meaning of Art. 79 para. 1 AIA and the provider becomes aware of this, the provider shall, where applicable in collaboration with the deployer, immediately inform the market surveillance authorities and, where applicable, the notified body.
In Art. 50 AIA, the AI Act lists certain AI systems that present only a limited risk. This risk can be minimised by means of certain transparency obligations, which can be summarised under the heading "transparency towards downstream actors".
Depending on the type of AI system, the provider is subject to the following transparency obligations:
Such AI systems must be designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system. Exceptions to this are cases where this is obvious from the circumstances and context of use.
AI systems (including GPAI systems) that generate synthetic audio, image, video or text content, or that manipulate such content, must be designed and developed in such a way that their outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.
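How such a machine-readable marker might look can be illustrated with a simple sketch. This is not the standardised marking technique envisaged by the AI Act, merely one conceivable embodiment of the idea using PNG text metadata; the key names and values are assumptions.

```python
from PIL import Image, PngImagePlugin

# Create a placeholder "generated" image as a stand-in for real model output.
img = Image.new("RGB", (64, 64), color=(200, 200, 200))

# Attach a machine-readable marker as PNG text metadata (illustrative keys only).
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-1.0")   # hypothetical identifier
img.save("output.png", pnginfo=meta)

# A downstream tool can read the marker back from the saved file.
print(Image.open("output.png").text.get("ai_generated"))
```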
No mandatory requirements are laid down for AI systems with minimal risk. Only the obligation to ensure "AI literacy" pursuant to Art. 4 AIA also applies to such AI systems. Beyond that, adherence to voluntary codes of conduct (see Art. 95 AIA) is encouraged.
In the case of GPAI models, providers must fulfil the following obligations in accordance with Art. 53 and 54 AIA:
Providers of GPAI models shall prepare and update the technical documentation of the model, including its training and testing procedure and the results of its evaluation, containing at least the information listed in Annex XI (Art. 53 para. 1 letter a sentence 1 AIA).
This obligation does not apply to providers of exempt open-source models within the meaning of Art. 53 para. 2 AIA, unless the model is a GPAI model with systemic risk.
At the request of a competent authority or the AI Office, providers of GPAI models must make the above-mentioned technical documentation available (Art. 53 para. 1 letter a sentence 2 AIA).
In general, providers of GPAI models must "cooperate with the competent national authorities, including the Commission where necessary, in the exercise of their responsibilities and powers under this Regulation" (see Art. 53 para. 3 AIA).
Providers of GPAI models shall prepare, keep up to date and make available information and documentation to downstream providers of AI systems that intend to integrate the GPAI model into their AI systems. This information and documentation shall contain at least the elements set out in Annex XII.
This information and documentation must enable providers of AI systems to "have a good understanding" of the capabilities and limitations of the GPAI model and also enable them to fulfil their obligations under the AI Act (Art. 53 para. 1 letter b AIA).
This obligation likewise does not apply to providers of exempt open-source models within the meaning of Art. 53 para. 2 AIA, unless the model is a GPAI model with systemic risk.
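Purely as an illustration of the idea, and not as a rendering of the Annex XII minimum content, such information for downstream providers might be exchanged as a simple structured record; every field name and value below is a hypothetical example.

```python
import json

# Hypothetical, simplified record of the kind of information a downstream provider
# might receive; the actual minimum content is defined in Annex XII of the AI Act.
model_documentation = {
    "model_name": "example-gpai-1.0",
    "intended_tasks": ["text summarisation", "question answering"],
    "known_limitations": ["unreliable for legal or medical advice"],
    "input_modalities": ["text"],
    "output_modalities": ["text"],
    "acceptable_use": "see the provider's usage policy",
    "contact": "provider@example.com",
}

print(json.dumps(model_documentation, indent=2))
```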
Providers of GPAI models must put in place a policy to comply with Union law on copyright and related rights. In particular, they must identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Art. 4 para. 3 of the Copyright Directive (see Art. 53 para. 1 letter c AIA).
Note: Art. 4 of the Copyright Directive (Directive (EU) 2019/790) permits text and data mining ("TDM") in principle. However, rightholders may reserve their rights in accordance with Art. 4 para. 3 of the Copyright Directive; where such a reservation has been expressed, TDM under Art. 4 is no longer permitted. The separate TDM exception for research organisations and cultural heritage institutions under Art. 3 of the Directive cannot be overridden by such a reservation.
TDM is a collective term for various procedures that make it possible to search and analyse large quantities of text or data from various perspectives. In Austria, this is implemented in Section 42h UrhG. Furthermore, a sufficiently detailed summary of the content used for training the GPAI model must be drawn up and made publicly available in accordance with a template provided by the AI Office (see Art. 53 para. 1 letter d AIA).
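Rights reservations can be expressed in various machine-readable ways; one common signal a training crawler might honour is a robots.txt disallow rule. The following sketch only illustrates that idea, and the crawler name and URLs are invented for the example.

```python
from urllib import robotparser

# One common machine-readable signal a crawler can honour is robots.txt;
# the crawler name and URLs below are purely illustrative.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/articles/some-text.html"
if rp.can_fetch("ExampleTrainingCrawler", url):
    print("crawling permitted by robots.txt:", url)
else:
    print("rights reservation / disallow rule respected, skipping:", url)
```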
If the provider of a GPAI model is established in a third country, the provider is obliged to appoint an authorised representative within the Union before placing the model on the market (see Art. 54 AIA).
If the GPAI model is one with systemic risk, providers must fulfil the obligations set out in Art. 55 AIA in addition to those set out in Art. 53 and 54 AIA:
The obligations under Art. 55 para. 1 letters a and b AIA can be summarised under the heading "risk management". Accordingly, providers of such GPAI models must carry out a model evaluation in accordance with standardised protocols and tools. This also includes conducting and documenting adversarial testing in order to identify and mitigate systemic risks. In addition, potential systemic risks at Union level, including their sources, that may stem from the development, placing on the market or use of GPAI models with systemic risk must be assessed and mitigated.
Serious incidents must be documented and reported immediately to the AI Office and, if necessary, to the competent national authorities.
Providers of GPAI models with systemic risk must ensure an "adequate level" of cybersecurity protection for the model and for its physical infrastructure (see Art. 55 para. 1 letter d AIA). When ensuring cybersecurity protection in relation to systemic risks associated with malicious use or attacks, due consideration should be given to accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft.
This protection could be facilitated by securing model weights, algorithms, servers and datasets, e.g. through operational security measures for information security, specific cybersecurity strategies, appropriate technical and established solutions, and physical and cyber access controls appropriate to the circumstances and associated risks (see Recital 115).
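As one small, purely illustrative building block of such protection, a provider could record a cryptographic digest of the stored model weights at release time and compare it later to detect tampering. The file name below is a stand-in created only so that the sketch runs.

```python
import hashlib

def sha256_of(path: str) -> str:
    """SHA-256 digest of a file, e.g. a stored weights file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in weights file so the sketch is self-contained; in practice this would be
# the model artefact that is actually placed on the market.
with open("model_weights.bin", "wb") as f:
    f.write(b"\x00" * 1024)

expected = sha256_of("model_weights.bin")       # digest recorded at release time
print("weights unchanged:", sha256_of("model_weights.bin") == expected)
```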
In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I Section A, the product manufacturer is deemed to be the provider of the high-risk AI system in accordance with Art. 25 para. 3 of the AIA and is subject to the obligations of a provider in accordance with Art. 16 of the AIA in the two cases below: