
Information, disclosure and labelling obligations

At the "limited risk" level, the AI Act imposes information, disclosure and labelling obligations on providers and deployers of AI systems. The specific provisions can be found in Art. 50 AIA.

Infographic: AI systems with limited risk trigger information, disclosure and labelling obligations. © RTR (CC BY 4.0)

AI systems designed for direct interactions

Providers of AI systems that are intended for direct interaction with natural persons must design and develop them in such a way that the natural persons concerned are informed that they are interacting with an AI system (Art. 50 para. 1 AIA). Chatbots typically fall into this category. The provider of a chatbot system must therefore design it so that the conversation makes clear that the user is interacting with an AI.

Exception: The use of an AI system is obvious from the perspective of a reasonably well-informed, observant and circumspect natural person due to the circumstances and context of use. E.g. virtual assistance systems that interact with their users through voice commands, such as Siri (Apple) or Alexa (Amazon).
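
For illustration only, the following is a minimal sketch of how a chatbot provider might build the required disclosure into the conversation flow. The wording of the notice and the generate_reply backend are hypothetical assumptions, not requirements of the AIA.

```python
# Illustrative sketch: a chat session that tells the user up front that
# they are interacting with an AI system (cf. Art. 50 para. 1 AIA).
# generate_reply() is a hypothetical placeholder for the provider's
# actual model backend.
AI_DISCLOSURE = "Note: you are chatting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    # Hypothetical stand-in for the real model call.
    return f"Echo: {user_message}"

def run_chat_session() -> None:
    print(AI_DISCLOSURE)  # disclosure before any interaction takes place
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        print("Bot:", generate_reply(user_message))

if __name__ == "__main__":
    run_chat_session()
```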

AI systems generating synthetic content

Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content must ensure that the output of the AI system is labelled in a machine-readable format and is detectable as artificially generated or manipulated (Art. 50 para. 2 AIA). The labelling must be carried out using technical solutions such as watermarks, metadata identifiers, cryptographic methods for proving the origin and authenticity of the content, logging methods, fingerprints or other techniques, or a combination of such techniques, depending on the circumstances (see recital 133). Typically, this category includes AI-powered text generators such as ChatGPT as well as AI-powered image and video generators such as Midjourney or DALL-E. The labelling must be machine-readable; there is no requirement that the labelling be perceptible to human viewers.

Exception 1: AI systems that perform a supporting function for standard editing or that do not significantly change the input data provided by the deployer or its semantics. E.g. small-scale "generative fill" in image processing programmes.

Exception 2: AI systems that are legally authorised for the detection, prevention or investigation of criminal offences.
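
For illustration only, here is a minimal sketch of one technique named in recital 133, a metadata identifier, implemented with the Pillow imaging library. The key names ("ai_generated", "generator") are hypothetical assumptions, not a standardised labelling scheme.

```python
# Illustrative sketch: embedding a machine-readable "AI generated"
# marker in PNG metadata with Pillow. The key/value names are
# illustrative assumptions; real-world labelling would typically rely
# on established provenance schemes or robust watermarks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Attach a machine-readable provenance tag to a generated image."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical field
    metadata.add_text("generator", "example-model-v1")  # hypothetical field
    image.save(dst_path, pnginfo=metadata)              # dst must be a .png

def is_labelled(path: str) -> bool:
    """Check whether the marker is present, i.e. detectable by software."""
    return Image.open(path).info.get("ai_generated") == "true"
```

Plain metadata of this kind is easily stripped when content is re-encoded, which is one reason recital 133 contemplates combining several techniques, such as watermarks and cryptographic proofs of origin.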

 

Deployers also have a number of disclosure obligations. Deployers of AI systems that generate or manipulate image, sound or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated; there is no such disclosure obligation for image, sound and video content that is not a deepfake (Art. 50 para. 4 subpara. 1 AIA).

Exception 1: Use for the detection, prevention, investigation or prosecution of criminal offences is permitted by law.

Exception 2: Where the image, sound or video content is part of an obviously artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph shall be limited to disclosing the presence of such generated or manipulated content in an appropriate manner that does not impair the presentation or enjoyment of the work. E.g. exaggerated satirical portrayal of persons of public interest.

 

Deployers of an AI system that generates or manipulates text that is published with the purpose of informing the public about matters of public interest must disclose that the text was artificially generated or manipulated (Art. 50 para. 4 subpara. 2 AIA). AI-generated texts that are not published or that are not used to inform the public about matters of public interest are not subject to a disclosure obligation.

Exception 1: Artificially generated text content is subject to human review or editorial control and a natural or legal person bears editorial responsibility for the publication of the content. E.g. a traditional media owner (newspaper publisher).

Exception 2: Use for the detection, prevention, investigation or prosecution of criminal offences is permitted by law. 

Is a synthetic image/video/audio also a deepfake?

Although there are overlaps between the terms synthetic content and deepfake, a distinction must be made with regard to the legal consequences.

The AIA defines the term "deepfake" as follows (Art. 3 no. 60 AIA):

an AI-generated or manipulated image, sound or video content that resembles real persons, objects, places, facilities or events and would falsely appear to a person to be real or truthful

The term "deepfake" is a portmanteau, i.e. a word formed from parts of other words, combining "deep learning" and "fake". In short, it refers to realistic-looking media content depicting something that never actually happened. The areas of application are wide-ranging, both positive and negative.

Positive examples:

  • Entertainment and media: Deepfakes can be used in films and video games to create impressive special effects. Music videos can also be created using deepfake technology to achieve visually appealing effects.
  • Forensics: Incidents that would otherwise be difficult to depict due to a lack of video or image material can be reconstructed or visualised.


Negative examples:

  • Disinformation and fake news: False information can be spread using deepfake technology by presenting personalities in videos who do or say something that they have never done or said.
  • Cybersecurity and data protection: Criminals can use deepfakes to create fraudulent videos or calls to deceive people or commit identity-related crimes.
  • Misuse and blackmail: Deepfake technology can be misused to create compromising images or videos of innocent people (e.g. production of pornographic content).

 

Synthetic audio, image or video content is any content that has been generated by AI rather than by a human (see recital 133). The term is therefore broader than "deepfake". An AI-generated cartoon, for example, is synthetic image content but not a deepfake, because it is not realistic. An AI-generated video in which, for example, a real politician is shown in front of parliament in an interview situation talking about political agendas is synthetic video content that is also a deepfake: such situations occur in reality every day, so citizens can mistake the footage for real.

However, whether a deepfake actually exists must always be investigated on a case-by-case basis!

To summarise: Every deepfake is a synthetic image/video/audio, but not every synthetic image/video/audio is also a deepfake.
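
Purely as an illustration of this subset relationship, the test of Art. 3 no. 60 AIA can be sketched as boolean logic. The attribute names are hypothetical; the actual assessment remains a legal, case-by-case judgement, not a code check.

```python
# Illustrative sketch of the deepfake test (Art. 3 no. 60 AIA) as
# boolean logic. Attribute names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Content:
    ai_generated_or_manipulated: bool  # synthetic in the sense of recital 133
    resembles_real_entities: bool      # persons, objects, places, events
    appears_authentic: bool            # would falsely appear real or truthful

def is_synthetic(content: Content) -> bool:
    return content.ai_generated_or_manipulated

def is_deepfake(content: Content) -> bool:
    # Every deepfake is synthetic, but not vice versa.
    return (is_synthetic(content)
            and content.resembles_real_entities
            and content.appears_authentic)

# Example from the text: an AI-generated cartoon is synthetic, not a deepfake.
cartoon = Content(True, False, False)
assert is_synthetic(cartoon) and not is_deepfake(cartoon)
```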

AI systems designed for emotion recognition

Deployers of an emotion recognition system or a biometric categorisation system must inform the natural persons concerned about the operation of the system (Art. 50 para. 3 AIA).

Exception: AI systems that are legally authorised for the detection, prevention or investigation of criminal offences, provided that appropriate safeguards are in place to protect the rights and freedoms of third parties.


Further links:

SaferInternet.at (in German): How do I check online content?