
AI literacy

What is "AI literacy"?

On February 2, 2025, the first set of provisions of the AI Act will enter into application, including the AI literacy requirement of Article 4 AIA. This obligation applies uniformly across all AI systems, models, and risk categories:

Art. 4 AIA: Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

The term "AI literacy" referenced in the text of Regulation and in the title of Article 4 AIA is further elaborated in Article 3, item 56 AIA. The following sections detail the relationships and distinctions between these two provisions.

Definition of "AI literacy" according to Article 3, Item 56 AIA

According to Article 3, item 56 AIA, "AI literacy" is defined as:

[the] skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

In abstract terms, AI literacy encompasses the skills needed to navigate and succeed in the digital landscape through the effective use of AI systems.

Providers, deployers and affected persons

AI literacy is applicable to all relevant stakeholders within the AI value chain, depending on their roles throughout the value creation process (see Recital 20). Different competencies are naturally required at various stages of this process. For example, providers of high-risk AI systems must possess a thorough understanding of the technical specifics of AI during the development phase to ensure the creation of AI that is both safe and consistent with European values.

Pursuant to Article 4 of the AIA, deployers and providers are required to implement "measures" to ensure that their staff, as well as any other individuals involved in the operation and use of AI systems on their behalf, possess an adequate level of AI literacy. The nature of these measures depends on the specific AI system or model employed and its associated risk level. It is essential to take into account the technical knowledge, experience, training, and education of the employees, as well as the context in which the AI systems are deployed and the persons or groups of persons on whom they are to be used. AI literacy is inherently interdisciplinary, encompassing not only technical expertise but also legal and ethical considerations (see Recital 20 AIA). For example, providers involved in the development of a chatbot will naturally address different concerns than a deployer who merely uses such a system within its organization.

The definition of the term "AI literacy" explicitly includes the positive requirement to understand the opportunities presented by AI, enabling the identification of potential value-adding applications.

The following groups are subject to the obligation for AI literacy:

  • Individuals involved in the development of AI
  • Individuals responsible for the operation of AI systems
  • Individuals within a company who utilize AI systems

The AI Act does not specify the nature of the training measures to be implemented. These may include internal training sessions, external consultations, or in-house courses.

Penalties

Although the AI Act itself does not prescribe administrative penalties for non-compliance with Article 4 AIA, violations may nonetheless have consequences. Even outside the scope of the AI Act, a lack of employee training is generally attributable to the employer under § 1313a of the Austrian Civil Code (ABGB). Article 4 AIA serves to clarify the duty of care that businesses must exercise with respect to AI. Thus, if damage occurs due to insufficient AI literacy, Article 4 AIA establishes that there was an obligation to provide appropriate training.

Implementation of Article 4 in the Organization

As an initial step in implementing Article 4 within the organization, two key measures can be undertaken: an assessment of the AI systems currently in use and an evaluation of the organization's strategic orientation concerning AI utilization.

AI literacy at the strategic level

The manner in which AI systems are implemented within an organization constitutes a strategic decision. This decision should be guided by factors such as the overall corporate strategy, the organization's values and culture, its risk tolerance, the level of risk posed by the AI systems, the risk environment, and the applicable legal requirements (cf. ISO/IEC 42001:2023).

In addition to determining the strategic direction, it is advisable to formalize this overarching approach in an internal AI policy.

Such a policy should specifically outline the corporate strategy, reference relevant organizational policies where applicable, define clear roles and responsibilities, be effectively communicated within the organization, and remain easily accessible and comprehensible to employees. A template for such a policy, along with general considerations for the development of an AI strategy, is provided by the Austrian Economic Chamber: https://www.wko.at/ki

As part of this strategic framework, the following key questions should be addressed:

  • How can the adoption and integration of AI systems be advanced within the organization? Are there established internal processes to support this?
  • How does the use of AI systems align with other organizational policies, such as those concerning data protection, data security, or ESG principles?
  • Who within the organization is responsible for decision-making regarding the development, procurement, and employment of AI systems? Which considerations are relevant to such decisions?
  • How can it be ensured that all decision-makers and users possess the necessary AI competencies?

Ongoing assessment

Artificial intelligence is already an integral component of numerous software products. Furthermore, updates to existing standard software frequently introduce new AI functionalities. Consequently, AI systems may already be in use within the organization without the full awareness of the responsible internal actors.

To accurately determine the current status, a systematic assessment of the (standard) software presently utilized within the organization is recommended. If existing records—such as those maintained for IT security purposes or as part of a data processing inventory—are available, they can serve as a valuable foundation for this assessment.

Given the diverse applications of AI systems, the responsibility for their implementation may vary across different internal actors. Previous assessments may also provide useful insights into these responsibilities.

To ensure ongoing accuracy, this inventory should be regularly updated and revised as necessary, particularly in response to the introduction of new AI components or systems.
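
Purely as an illustration, the following minimal sketch shows one way such an inventory could be kept in a structured, machine-readable form. All field names (system name, vendor, business owner, purpose, date of last review) are assumptions made for this example; none of them are prescribed by the AI Act.

  # Illustrative sketch only: a minimal, structured inventory of AI components
  # in use. Every field name is an assumption, not a term defined by the AI Act.
  from dataclasses import dataclass, field
  from datetime import date

  @dataclass
  class AIInventoryEntry:
      system_name: str       # name of the (standard) software or module
      vendor: str            # supplier or internal team providing it
      business_owner: str    # internal actor responsible for its use
      purpose: str           # what the AI functionality is used for
      ai_functionality: str  # short description of the AI component
      last_reviewed: date = field(default_factory=date.today)

  def entries_due_for_review(inventory: list[AIInventoryEntry],
                             max_age_days: int = 365) -> list[AIInventoryEntry]:
      """Return entries whose last review is older than max_age_days."""
      today = date.today()
      return [e for e in inventory if (today - e.last_reviewed).days > max_age_days]

  # Example usage: one entry for an office suite with an AI text-suggestion feature.
  inventory = [
      AIInventoryEntry(
          system_name="Office suite - text suggestions",
          vendor="External provider",
          business_owner="IT department",
          purpose="Drafting support in day-to-day correspondence",
          ai_functionality="Generative text completion",
          last_reviewed=date(2024, 3, 1),
      ),
  ]
  print(entries_due_for_review(inventory))

Keeping the inventory in a single structured format of this kind also simplifies the regular updates recommended above, since entries that have not been reviewed recently can be identified automatically.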

Operational implementation

The development of AI literacy at the operational level will be determined by the organization's strategic direction as well as the nature and scope of the AI systems being deployed.

AI literacy encompasses technical, legal and ethical expertise, along with risk awareness and practical application skills. These aspects should be tailored to the educational background, level of expertise, and specific responsibilities of employees. The risk classification of the AI systems involved is a further critical factor, and the considerations relevant to developing AI systems differ significantly from those applicable to merely using them.

Different aspects of AI literacy will hold varying degrees of relevance for different employee groups. The requirements for executives, project teams, and trainees will diverge, and even external parties, such as service providers engaged by the organization, must possess AI literacy if they are involved in the deployment or management of AI systems within the organization.

Training programs may be offered on either a voluntary or mandatory basis. Providing employees with access to recurring training opportunities presents additional benefits. Moreover, the training format should be adaptable to specific needs — besides interactive workshops and lectures, e-learning modules may also be considered.
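
Which modules are offered to which group will depend on the organization. Purely as an illustrative sketch (all role names and module titles below are assumptions drawn from the examples in this section, not requirements of the AI Act), a simple mapping of employee groups to planned training modules could look as follows:

  # Illustrative sketch only: mapping employee groups to planned training modules.
  # Role names and module titles are assumptions chosen for this example.
  training_plan: dict[str, list[str]] = {
      "executives": [
          "AI strategy and internal AI policy",
          "Risk awareness and legal fundamentals of the AI Act",
      ],
      "project teams": [
          "Characteristics of the deployed AI systems",
          "Bias, hallucinations and the role of training data",
      ],
      "trainees": [
          "Basic digital competencies",
          "Guidance for the AI systems used in the organization",
      ],
      "external service providers": [
          "Internal AI policy and designated points of contact",
      ],
  }

  # Example usage: list the modules planned for a given group.
  for module in training_plan["project teams"]:
      print(module)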

The competencies to be imparted may include the following:

  • Familiarity with the organization’s corporate strategy, relevant policies, and designated points of contact.
  • Fundamental digital competencies (e.g. in accordance with DigComp 2.3 AT)
    • Digital transformation in the workplace
    • Searching for and critically evaluating information
    • Utilizing digital technologies in daily work tasks
  • Understanding of AI and its areas of application, including:
    • Functionality and practical examples of AI use
    • Opportunities for innovation through AI and its potential to facilitate work processes
    • Specific characteristics of AI operation, particularly with regard to bias, hallucinations, and the significance of training data
  • Guidance for the use of the AI systems implemented within the organization
  • Legal elements, including:
    • Awareness of relevant legal aspects, such as data protection, labor law, and intellectual property rights
    • Fundamental principles of the AI Act, as applicable to the intended use case. An overview is available on the AI Service Center website: https://ai.rtr.at 

Best practices for fostering AI literacy include:

  • Regular reassessment of deployed AI systems, including periodic re-evaluation of potential AI use cases in light of the latest technological advancements.
  • Interdisciplinary oversight involving representatives from key areas such as technology, law, compliance, IT security, human resources, and employee representation bodies, depending on the organization's size and structure.
  • Practice-oriented learning, incorporating systematic testing and evaluation of new AI systems in relation to internal organizational use cases, with a structured approach to gathering and integrating feedback.

Documentation

To demonstrate compliance with Article 4 AIA, it is advisable to maintain thorough documentation. The organization's AI strategy should be documented in writing, and any internal AI policy, if applicable, should also be formally recorded and easily accessible within the organization. Template AI policies are available from sources such as the Austrian Economic Chamber (WKO, https://www.wko.at/ki). Furthermore, a structured training and knowledge dissemination plan should be developed. If training sessions are conducted, it is recommended that, for documentation purposes, the following information be recorded in the personnel file of each relevant employee (a minimal record sketch follows the list):

  • Type of training (e.g., in-person, e-learning, etc.)
  • Training provider, particularly in the case of external training programs
  • Training content and objectives
  • Date of the training session
  • Instances of retraining or repeated sessions
  • … 
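
A minimal sketch of how such a training record could be structured is shown below; the field names mirror the list above and are assumptions made for illustration, not fields prescribed by the AI Act.

  # Illustrative sketch only: a structured training record for documentation
  # purposes. Field names mirror the list above and are assumptions.
  from dataclasses import dataclass
  from datetime import date
  from typing import Optional

  @dataclass
  class TrainingRecord:
      employee_id: str             # reference to the personnel file
      training_type: str           # e.g. "in-person", "e-learning"
      provider: str                # internal unit or external training provider
      content_and_objectives: str  # training content and objectives
      training_date: date          # date of the training session
      repeats_session_of: Optional[date] = None  # earlier session this one refreshes, if any

  # Example usage:
  record = TrainingRecord(
      employee_id="12345",
      training_type="e-learning",
      provider="External provider",
      content_and_objectives="Fundamentals of the AI Act and the internal AI policy",
      training_date=date(2025, 2, 10),
  )
  print(record)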

Further information

AI literacy is often associated with digital competence, and the two are indeed closely related: AI literacy is integrated into various competence areas and sub-competencies, and it builds on the foundation of digital skills. Successfully applying and developing AI systems and models also requires digital competencies.

National initiatives: https://www.digitalaustria.gv.at/Strategien/DKO-Digitale-Kompetenzoffensive.html

European Commission: DigComp 2.2: The Digital Competence Framework for Citizens

European Commission: Digital skills