On February 2, 2025, the first set of provisions of the AI Act will enter into application, including the requirement for AI literacy as outlined in Article 4 of the AIA. According to Article 4 AIA, the following obligation applies uniformly across all AI systems, models, and risk categories.
Art. 4 AIA: Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
The term "AI literacy" used in the text of the Regulation and in the title of Article 4 AIA is further elaborated in Article 3, item 56 AIA. The following sections detail the relationship and distinctions between these two provisions.
According to Article 3, item 56 AIA, "AI literacy" is defined as:
[the] skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
The concept of AI literacy encompasses, in an abstract sense, the necessary skills required to navigate and succeed in the digital landscape through the effective use of AI systems.
AI literacy is applicable to all relevant stakeholders within the AI value chain, depending on their roles throughout the value creation process (see Recital 20). Different competencies are naturally required at various stages of this process. For example, providers of high-risk AI systems must possess a thorough understanding of the technical specifics of AI during the development phase to ensure the creation of AI that is both safe and consistent with European values.
Pursuant to Article 4 of the AIA, deployers and providers are required to implement "measures" to ensure that their staff, as well as any other individuals involved in the operation and use of AI systems on their behalf, possess an adequate level of AI literacy. The nature of these measures depends on the specific AI system or model employed and its associated risk level. It is essential to take into account the technical knowledge, experience, training, and education of the employees, as well as the context in which the AI systems are deployed and the individuals or groups they are intended to serve. AI literacy is inherently interdisciplinary, encompassing not only technical expertise but also legal and ethical considerations (see Recital 20 AIA). For example, providers involved in the development of a chatbot will naturally address different concerns than a deployer who merely implements such a system within their organization.
A provider of a chatbot must ensure during the development process that user-entered data is stored and processed securely (e.g., data encryption, security updates, etc.). A deployer of such a chatbot, who makes the system available to their employees, must ensure that no personal data or trade secrets are unlawfully transferred to the provider as a third party. Measures may include the deployer using "on-premises" solutions, implementing the necessary contractual safeguards, and/or providing appropriate training so that employees avoid entering such data into the chatbot (see also the guidance of the Austrian Data Protection Authority on AI and data protection).
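To make this concrete: alongside training and contractual safeguards, a deployer could also pre-screen prompts for obviously personal data before they reach an external chatbot. The sketch below is a minimal, hypothetical illustration; the pattern set and the function name are assumptions, and real-world detection would need to be far more robust.

```python
import re

# Illustrative patterns for obviously personal data; a real deployment
# would need far more comprehensive detection (names, IDs, trade secrets, ...).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of potentially personal data found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

findings = screen_prompt("Please summarise the complaint from max.mustermann@example.com")
print(findings)  # -> ['email']
```

Such a filter is of course no substitute for the organizational measures described above; it merely illustrates one possible technical building block.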
Given the varied applications and configurations of AI systems, the measures required under Article 4 AIA can differ significantly. There is no universal approach to determining the specific actions necessary to meet the requirements of Article 4 AIA. This also means that not all companies are equally affected by Article 4 AIA, nor is it necessary for every employee to possess the same level of AI literacy. For instance, if a company allows its entire staff to use chatbots like ChatGPT, it must implement appropriate and recurring training sessions (including for new hires) for the whole workforce. Conversely, if an "AI tool" is limited to use within the HR or communications department, the training can be focused on a smaller group. The depth and frequency of training may vary accordingly, as deploying AI systems with limited risk may require different measures than those needed for high-risk AI systems.
It is important to note that, unlike the GDPR with its data protection officer, Article 4 AIA does not mandate the appointment of an "AI officer." Whether to train existing employees or to hire personnel with AI expertise for the implementation of an AI strategy is left to the discretion of each company, and suitable approaches should be tailored to the specific case. Since building AI literacy is an ongoing process rather than a one-off exercise, it is advisable to incorporate it as a continuous component of professional development and training programs.
The definition of the term "AI literacy" explicitly includes the positive requirement to understand the opportunities presented by AI, enabling the identification of potential value-adding applications.
The obligation to ensure AI literacy rests on providers and deployers of AI systems; it covers their staff as well as any other persons dealing with the operation and use of AI systems on their behalf (Article 4 AIA).
The AI Act does not specify the nature of the training measures to be implemented. These may include internal training sessions, external consultations, or in-house courses.
Although the AI Act itself does not prescribe administrative penalties for non-compliance with Article 4 AIA, non-compliance may nevertheless have civil-law consequences. Even outside the scope of the AI Act, a lack of employee training is generally attributable to the employer under § 1313a of the Austrian Civil Code (ABGB). Article 4 AIA thus clarifies the duty of care that businesses must exercise with respect to AI: if damage occurs as a result of insufficient AI literacy, Article 4 AIA supports the conclusion that an obligation to provide appropriate training existed.
As an initial step in implementing Article 4 within the organization, two key measures can be undertaken: an assessment of the AI systems currently in use and an evaluation of the organization's strategic orientation concerning AI utilization.
The manner in which AI systems are implemented within an organization constitutes a strategic decision. This decision should be guided by factors such as the overall corporate strategy, the organization's values and culture, its risk tolerance, the level of risk posed by AI systems, the risk environment, and the applicable legal requirements (cf. ISO 42001:2023).
In addition to determining the strategic direction, it is advisable to formalize this overarching approach in an internal AI policy.
Such a policy should specifically outline the corporate strategy, reference relevant organizational policies where applicable, define clear roles and responsibilities, be effectively communicated within the organization, and remain easily accessible and comprehensible to employees. A template for such a policy, along with general considerations for the development of an AI strategy, is provided by the Austrian Economic Chamber: https://www.wko.at/ki
As part of this strategic framework, the following key questions should be addressed:
Artificial intelligence is already an integral component of numerous software products. Furthermore, updates to existing standard software frequently introduce new AI functionalities. Consequently, AI systems may already be in use within the organization without the full awareness of the responsible internal actors.
To accurately determine the current status, a systematic assessment of the (standard) software presently utilized within the organization is recommended. If existing records—such as those maintained for IT security purposes or as part of a data processing inventory—are available, they can serve as a valuable foundation for this assessment.
Given the diverse applications of AI systems, the responsibility for their implementation may vary across different internal actors. Previous assessments may also provide useful insights into these responsibilities.
To ensure ongoing accuracy, this inventory should be regularly updated and revised as necessary, particularly in response to the introduction of new AI components or systems.
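To illustrate what such a regularly revised inventory could look like in practice, the following is a minimal sketch; all field names, the example entry, and the one-year review interval are assumptions, as the AI Act prescribes no particular format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal inventory of AI systems and AI components.

    Field names are illustrative; the AI Act does not prescribe a format.
    """
    name: str                # product or feature name
    vendor: str              # provider of the system or component
    purpose: str             # what it is used for in the organization
    responsible_unit: str    # internal actor accountable for it
    risk_notes: str = ""     # e.g. preliminary risk considerations
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AISystemRecord(
        name="Office suite - text suggestions",
        vendor="ExampleVendor",          # hypothetical vendor
        purpose="Drafting assistance",
        responsible_unit="IT department",
    ),
]

# Regular revision: flag entries not reviewed within the last year.
stale = [r.name for r in inventory
         if (date.today() - r.last_reviewed).days > 365]
print(stale)  # -> []
```

Existing records, such as a data processing inventory, could be extended with fields like these instead of starting from scratch.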
The development of AI literacy at the operational level will be determined by the organization's strategic direction as well as the nature and scope of the AI systems being deployed.
AI literacy encompasses technical, legal and ethical expertise, along with risk awareness and practical application skills. These aspects should be tailored to the educational background, level of expertise, and specific responsibilities of employees. Furthermore, the risk classification of AI systems is a critical factor, as the considerations relevant to AI system development differ significantly from those applicable to their use.
Different aspects of AI literacy will hold varying degrees of relevance for different employee groups. The requirements for executives, project teams, and trainees will diverge, and even external parties, such as service providers engaged by the organization, must possess AI literacy if they are involved in the deployment or management of AI systems within the organization.
Training programs may be offered on either a voluntary or mandatory basis; providing recurring training opportunities has the additional benefit of keeping knowledge current as AI systems and their risks evolve. Moreover, the training format should be adapted to the specific needs: besides interactive workshops and lectures, e-learning modules may also be considered.
The competencies to be imparted may include, in particular, technical understanding of AI systems, knowledge of the legal framework, ethical considerations, risk awareness, and practical application skills.
Best Practices for fostering AI literacy include:
To demonstrate compliance with Article 4 of the AI Act (AIA), it is advisable to maintain thorough documentation. The organization's AI strategy should be documented in writing, and any internal AI policy, if applicable, should also be formally recorded and easily accessible within the organization. Template AI policies are available from sources such as the Austrian Economic Chamber - WKO (https://www.wko.at/ki). Furthermore, a structured training and knowledge dissemination plan should be developed. If training sessions are conducted, it is recommended that, for documentation purposes, the following information be recorded in the personnel file of each relevant employee:
AI literacy is often associated with digital competence, and the two are indeed closely related: AI literacy builds on a foundation of digital skills, and frameworks such as DigComp 2.2 (see below) integrate AI-related examples into their competence areas and sub-competencies. Successfully applying and developing AI systems and models therefore also requires digital competencies.
National initiatives: https://www.digitalaustria.gv.at/Strategien/DKO-Digitale-Kompetenzoffensive.html
European Commission: DigComp 2.2: The Digital Competence Framework for Citizens
European Commission: Digital skills