Just like providers, deployers of AI systems are obliged to ensure that their staff and other persons entrusted with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, regardless of the type of AI system used (Art. 4 in conjunction with Art. 3 No. 56 AIA).
The obligations of deployers of high-risk AI systems are regulated in Art. 26 AIA. The following obligations must be complied with:
In accordance with Art. 13 para. 2 AIA, providers must draw up instructions for use and make them available to deployers. Deployers of high-risk AI systems must take appropriate technical and organisational measures to ensure that high-risk AI systems are used in accordance with the accompanying instructions for use (see Art. 26 para. 1 AIA).
To the extent that deployers exercise control over the input data, they must ensure that it is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (see Art. 26 para. 4 AIA).
This obligation does not affect other obligations of the deployer under Union or national law (see Art. 26 para. 3 AIA).
Providers are responsible for the basic implementation of human oversight tools (see Art. 16 letter a in conjunction with Art. 14 AIA), and deployers are subsequently obliged to assign human oversight to natural persons who have the necessary competence, training and authority. Deployers are also obliged to provide these natural persons with the necessary support (see Art. 26 para. 2 AIA).
This does not affect other obligations of the deployer under Union or national law (see Art. 26 para. 3 AIA).
Pursuant to Art. 26 para. 5 subpara. 1 AIA, deployers must monitor the operation of the high-risk AI system on the basis of the accompanying instructions for use and, where relevant, inform providers in accordance with Art. 72 AIA ("post-market monitoring").
For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, this monitoring obligation is deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms under the relevant financial services law (see Art. 26 para. 5 subpara. 2 AIA).
If there is reason to consider that the high-risk AI system presents a risk within the meaning of Art. 79 para. 1 AIA, or if a serious incident has been identified, the deployer has reporting obligations towards the provider, the importer or distributor and the competent market surveillance authorities (see Art. 26 para. 5 subpara. 1 AIA).
If there is reason to assume that a high-risk AI system presents a risk within the meaning of Art. 79 para. 1 AIA, the deployer shall suspend the use of that system.
Deployers must keep the automatically generated logs of high-risk AI systems for a period appropriate to the intended purpose of the system, but for at least six months (unless otherwise stipulated by applicable Union or national law, in particular data protection law such as the GDPR), to the extent that the logs are under their control (see Art. 26 para. 6 subpara. 1 AIA).
Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law must keep the logs as part of the documentation maintained pursuant to the relevant financial services law (see Art. 26 para. 6 subpara. 2 AIA).
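By way of illustration only, the following sketch shows one way a deployer might enforce such a retention floor for file-based logs; the directory, file pattern and retention values are assumptions made for this example, and the AIA does not prescribe any particular technical implementation.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Illustrative assumptions: automatically generated logs are written as files
# below LOG_DIR; the path, file pattern and retention period are examples only.
LOG_DIR = Path("/var/log/high-risk-ai-system")   # hypothetical log location
MIN_RETENTION = timedelta(days=183)              # at least six months (Art. 26 para. 6 AIA)
RETENTION = timedelta(days=365)                  # the deployer's own, longer policy

def purge_expired_logs(log_dir: Path, retention: timedelta) -> list[Path]:
    """Delete log files older than the retention period and return the deleted paths."""
    if retention < MIN_RETENTION:
        raise ValueError("retention period must not fall below six months")
    now = datetime.now(timezone.utc)
    removed: list[Path] = []
    for log_file in log_dir.glob("*.log"):
        modified = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
        if now - modified > retention:
            log_file.unlink()
            removed.append(log_file)
    return removed

if __name__ == "__main__":
    for path in purge_expired_logs(LOG_DIR, RETENTION):
        print(f"purged expired log: {path}")
```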
Where applicable, deployers of high-risk AI systems must use the information ("instructions for use") provided by the providers in accordance with Art. 13 AIA to fulfil their obligation to carry out a data protection impact assessment in accordance with Art. 35 GDPR or Art. 27 of Directive (EU) 2016/680 (see Art. 26 para. 9 AIA).
Where high-risk AI systems listed in Annex III are used to take decisions relating to natural persons, or to assist in taking such decisions, the natural persons affected must be informed that they are subject to the use of the high-risk AI system (see Art. 26 para. 11 AIA); this concerns, for example, AI systems used for admission to educational institutions or for filtering applications in recruitment procedures. For high-risk AI systems used for law enforcement purposes, Art. 13 of Directive (EU) 2016/680 applies.
These obligations apply without prejudice to the transparency obligations pursuant to Art. 50 AIA.
According to Art. 86 AIA, any person who is subject to a decision taken by the deployer on the basis of the output of a high-risk AI system listed in Annex III (with the exception of point 2, critical infrastructure) that produces legal effects or similarly significantly affects them in a way they consider to have an adverse impact on their health, safety or fundamental rights has the right to obtain from the deployer a clear and meaningful explanation of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.
The following obligations only apply to certain deployers when using specific high-risk AI systems:
In addition to the employees concerned, employers must also inform the employees' representatives of the planned use of a high-risk AI system in the workplace before putting such an AI system into service or using it (see Art. 26 para. 7 AIA). According to Annex III para. 4 letters a and b AIA, this includes AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to screen or filter applications and to evaluate applicants, as well as AI systems intended to be used for decisions affecting the terms and conditions of employment, promotions and terminations of employment contracts, for the assignment of tasks based on individual behaviour or personal characteristics or traits, or for the monitoring and evaluation of the performance and behaviour of persons in such relationships.
Deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices or agencies must register the AI system used in accordance with Art. 49 AIA (see Art. 26 para. 8 AIA). If the high-risk AI system intended for use is not registered in the EU database referred to in Art. 71 AIA, they must refrain from using it and inform the provider or distributor.
If deployers (i.e. law enforcement authorities) use high-risk AI systems for post-remote biometric identification in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, they must request authorisation from a judicial authority or an administrative authority whose decision is binding and subject to judicial review, either in advance or without undue delay and at the latest within 48 hours (see Art. 26 para. 10 AIA).
If the requested authorisation is refused, the use of the high-risk AI system in question must be discontinued with immediate effect and any personal data associated with the use of this system must be deleted. Any use of such high-risk AI systems must be documented in the relevant police file and made available to the competent market surveillance authority and the national data protection authority (with the exception of sensitive operational data) upon request. Annual reports on the use of such systems must also be submitted to these authorities.
The use of such AI systems in an untargeted way and without any connection to a criminal offence or the search for a specific missing person is prohibited.
The authorisation requirement does not apply to the initial identification of a potential suspect on the basis of objective and verifiable facts that are directly related to the offence.
Member States remain free to adopt stricter legislation on the use of AI systems for post-remote biometric identification.
This Article is without prejudice to the application of Directive (EU) 2016/680.
According to Art. 27 AIA, a fundamental rights impact assessment must be carried out prior to the first use of a high-risk AI system pursuant to Art. 6 para. 2 in conjunction with Annex III (with the exception of point 2, critical infrastructure) by deployers that are bodies governed by public law or private entities providing public services, and by all deployers of high-risk AI systems referred to in Annex III point 5 letter b (credit scoring and creditworthiness assessment of natural persons) and letter c (risk assessment and pricing in relation to natural persons in the case of life and health insurance).
The following aspects must be taken into account (see Art. 27 para. 1 AIA): a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose; the period of time and the frequency with which the system is intended to be used; the categories of natural persons and groups likely to be affected by its use; the specific risks of harm likely to have an impact on those categories of persons or groups; a description of the implementation of human oversight measures in accordance with the instructions for use; and the measures to be taken if those risks materialise, including arrangements for internal governance and complaint mechanisms.
If any of these obligations are already met through the data protection impact assessment carried out pursuant to Art. 35 GDPR or Art. 27 of Directive (EU) 2016/680, the fundamental rights impact assessment complements that data protection impact assessment (see Art. 27 para. 4 AIA).
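Purely as an illustration, the sketch below shows how a deployer might record these aspects in a structured form before first use; the FundamentalRightsImpactAssessment structure, its field names and the example values are assumptions made for this sketch and are not prescribed by the AIA.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative record of the aspects listed in Art. 27 para. 1 AIA."""
    deployer_processes: str                 # processes in which the system is used
    period_and_frequency_of_use: str        # intended period and frequency of use
    affected_persons_and_groups: list[str]  # categories of persons and groups affected
    specific_risks_of_harm: list[str]       # risks of harm to those categories
    human_oversight_measures: str           # oversight per the instructions for use
    measures_if_risks_materialise: str      # incl. governance and complaint mechanisms

# Hypothetical example for a credit-scoring system (Annex III point 5 letter b).
fria = FundamentalRightsImpactAssessment(
    deployer_processes="Creditworthiness assessment during consumer loan origination",
    period_and_frequency_of_use="Continuous use; one assessment per loan application",
    affected_persons_and_groups=["loan applicants", "co-applicants"],
    specific_risks_of_harm=["discriminatory scoring", "erroneous rejections"],
    human_oversight_measures="A credit officer reviews every automated rejection",
    measures_if_risks_materialise="Escalation to compliance; internal complaint channel",
)
print(fria)
```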
Art. 50 AIA covers certain AI systems that pose only a limited risk, as the risk can be minimised by means of transparency obligations. The following transparency obligations apply to deployers of such AI systems:
Without prejudice to other transparency obligations resulting from Union or national law, deployers of an emotion recognition system or an AI system for biometric categorisation must inform the natural persons exposed to it about the operation of the system (see Art. 50 para. 3 AIA). The personal data must be processed in accordance with the applicable data protection rules. This does not apply to AI systems permitted by law for the detection, prevention and investigation of criminal offences, provided that appropriate safeguards are in place to protect the rights and freedoms of third parties.
The information must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or exposure and must comply with the applicable accessibility requirements (see Art. 50 para. 5 AIA).
Without prejudice to other transparency obligations under Union or national law, deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated in accordance with Art. 50 para. 4 AIA; the same applies to AI-generated or AI-manipulated text that is published with the purpose of informing the public on matters of public interest. This does not apply where the use is authorised by law for the detection, prevention, investigation or prosecution of criminal offences.
A "deep fake" within the meaning of the AI Act is an image, sound or video content generated or manipulated by an AI that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (see Art. 3 para. 60 AIA).
If the artificially generated or manipulated image, audio or video content evidently forms part of an artistic, creative, satirical, fictional or analogous work or programme, the transparency obligation is limited to disclosing the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
The transparency obligation for AI-generated or AI-manipulated text does not apply where the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
The information must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or exposure and must comply with the applicable accessibility requirements (see Art. 50 para. 5 AIA).
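Purely as an illustration of how the disclosure obligation might be operationalised for AI-generated text, the sketch below appends a notice unless the editorial-responsibility exception applies; the PublishedContent structure, field names and notice wording are assumptions made for this example and are not taken from the AIA.

```python
from dataclasses import dataclass

# Hypothetical wording; the AIA does not prescribe a specific notice text.
DISCLOSURE = "This content was artificially generated or manipulated by an AI system."

@dataclass
class PublishedContent:
    body: str                       # the generated or manipulated text
    ai_generated: bool              # set by the generation pipeline
    editorial_responsibility: bool  # True if a person holds editorial responsibility
                                    # after human review or editorial control

def apply_transparency_notice(item: PublishedContent) -> str:
    """Append a disclosure unless the editorial-control exception applies."""
    if item.ai_generated and not item.editorial_responsibility:
        return f"{item.body}\n\n[{DISCLOSURE}]"
    return item.body

# Usage: an AI-written public-interest summary without editorial review gets the notice.
draft = PublishedContent(
    body="Summary of today's parliamentary debate ...",
    ai_generated=True,
    editorial_responsibility=False,
)
print(apply_transparency_notice(draft))
```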
There are no mandatory requirements for AI systems posing only minimal risk. Only the AI literacy obligation pursuant to Art. 4 AIA also applies to such AI systems. Compliance with voluntary codes of conduct is encouraged but not mandatory.