Several international organisations are involved in artificial intelligence. We have compiled the most important information for you.
The UN High-level Advisory Body on Artificial Intelligence consists of up to 38 experts from relevant disciplines and is based in the Office of the Secretary-General's Envoy on Technology (OSET). The body brings together a variety of perspectives and approaches to the governance of AI for the common good, with a particular focus on human rights and the Sustainable Development Goals (SDGs).
In 2023, the Advisory Body published its interim report, "Governing AI for Humanity". Its central element is a proposal to strengthen international AI governance based on seven pillars: inclusiveness, the public interest, the centrality of data governance, a universal and multi-stakeholder approach, anchoring AI governance in the UN Charter, promoting human rights, and achieving the SDGs.
The final report is expected before the "Summit of the Future" in summer 2024.
On 21 March 2024, the General Assembly adopted the first global resolution on the promotion of safe and trustworthy artificial intelligence systems. This resolution, which was supported by over 120 UN member states, including China and the USA, is intended to serve as the basis for future international guidelines on the regulation of AI.
In the resolution, the General Assembly reaffirms its commitment to the protection and promotion of fundamental and human rights. It emphasises that the rights that people enjoy in the "offline world" must also be guaranteed online throughout the entire life cycle of AI systems. The UN General Assembly therefore calls on all Member States and stakeholders to refrain from using AI systems if they do not comply with international human rights standards or pose unreasonable risks to human rights.
The legally non-binding document further emphasises the importance of data protection and stresses the need to promote the development and implementation of mechanisms for monitoring risks and conducting impact assessments throughout the lifecycle of AI systems. It also calls for increased investment in the development and implementation of effective safeguards, including physical security, security of AI systems and risk management.
The Assembly also recognised the different levels of technology development between and within countries and that developing countries face particular challenges in keeping up with the rapid pace of innovation. It called on Member States and stakeholders to cooperate with and support developing countries so that they can benefit from inclusive and equitable access, bridge the digital divide and increase digital literacy.
In the context of artificial intelligence, the International Telecommunication Union (ITU) occupies a leading position among the specialised agencies of the United Nations. Its focus is on utilising the many opportunities offered by AI technologies to effectively support and advance the 17 Sustainable Development Goals (SDGs).
Due to its specific mandate, the ITU pays particular attention to the use of AI in telecommunications and information and communication technologies. In addition, the ITU works closely with other UN specialised agencies and programmes in three thematic "focus groups" that address the challenges and opportunities of deploying AI in specific areas.
In addition, the ITU leads the AI for Good initiative, a digital platform where AI innovators and stakeholders network to identify practical AI solutions that advance the UN SDGs. An annual highlight of this initiative is the AI for Good Global Summit.
In addition, the ITU coordinates the preparation of the report "United Nations Activities on Artificial Intelligence (AI)", which was last published in 2023. This report documents the activities in the field of AI carried out by more than 40 UN specialised agencies, funds, programmes and offices of the UN system, covering all 17 SDGs.
Finally, together with UNESCO, the ITU leads the "Inter-Agency Working Group on Artificial Intelligence". This working group pools the expertise of the UN system in the field of artificial intelligence and thus supports initiatives to develop AI ethics and strategic approaches to expand the competences of the United Nations in the field of AI.
UNESCO is focussing on the ethical aspects of AI, particularly in education and culture. The organisation has developed the "Recommendation on the Ethics of Artificial Intelligence", which is intended to serve as a framework for educational institutions and other stakeholders to ensure that AI technologies are used in accordance with fundamental rights and freedoms. A particular focus is on the protection of cultural diversity. UNESCO also organises the "Global Forum on the Ethics of Artificial Intelligence".
WIPO plays a central role in debates on artificial intelligence and intellectual property. Generative AI systems that draw on large datasets - including text, images and other media - to produce content traditionally created by humans raise numerous legal challenges. These include potential copyright infringement through the inclusion of copyrighted works in training data, as well as the question of copyright protection for AI-generated works. In addition, there is a need to ensure adequate patent protection for the AI models themselves.
In order to address these complex issues, WIPO promotes international dialogue and has launched the "WIPO Conversation on Intellectual Property and Artificial Intelligence" for this purpose. This platform brings together leading experts from around the world to discuss the impact of AI on intellectual property law.
UN Global Pulse is an initiative of the UN Secretary-General that aims to use big data and AI to improve humanitarian and development work. By analysing large amounts of data, Global Pulse strives to gain insights that enable rapid and effective responses to crisis situations. The initiative is a prime example of the use of AI to analyse data in real time.
The International Monetary Fund is focussing on the economic and financial impact of the ongoing development and implementation of AI systems. In this context, it has issued several publications, of which the following two are particularly noteworthy:
The Staff Discussion Note "Gen-AI: Artificial Intelligence and the Future of Work" focuses on the potential impact of AI on the global labour market. It analyses scenarios in which AI could replace or supplement human labour. The research suggests that developed economies may experience the benefits and drawbacks of AI sooner than developing economies. Furthermore, wage income inequality could increase if there is greater complementarity between AI and higher-paid labour, while an increase in capital gains could further exacerbate wealth inequality. However, if productivity increases are sufficiently large, the incomes of most workers could rise significantly.
The Fintech Note "Generative Artificial Intelligence in Finance: Risk Considerations" addresses the introduction of generative AI in the financial sector. Although AI technology is recognised as a driver of productivity and economic growth through increased efficiency, improved decision-making processes and the development of new products and industries, this document highlights the inherent risks of generative AI and its potential impact on the financial sector. Key dangers identified include bias, privacy issues, the opacity of outcomes, cyber threats and the potential to create new sources and pathways of systemic risk. Generative AI could exacerbate some of these threats and also create new types of risks that, if not adequately addressed, could jeopardise the stability of the financial sector.
The Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI) was established to promote understanding of artificial intelligence, robotics and related technologies. The centre focuses in particular on combating crime, terrorism and other security threats. Its aim is to support United Nations Member States in assessing the risks and opportunities of these technologies and to explore ways of using them to strengthen the fight against violence and crime.
In 2022, the International Atomic Energy Agency (IAEA) released the publication "Artificial Intelligence for Accelerating Nuclear Applications, Science and Technology". In addition, the IAEA offers the "AI for Atoms" platform, which provides detailed information on the organisation's AI-related activities, including all relevant initiatives, news, publications and events.
The UN Refugee Agency (UNHCR) uses artificial intelligence (AI) specifically to optimise humanitarian aid. AI technologies are used to predict refugee movements, simulate the spread of diseases such as COVID-19 in refugee camps and coordinate the corresponding responses. In addition, AI supports the analysis of text data from social media and other sources to identify and respond to protection needs.
The OECD Working Party on Artificial Intelligence Governance (AIGO) steers the Digital Policy Committee's work programme on AI policy and governance.
Core areas include analysing the design, implementation, monitoring and evaluation of national AI policies and action plans, assessing the impact of AI and developing trustworthy and responsible AI strategies. The group also oversees the measurement and data collection activities of the OECD.AI Observatory and conducts forecasting work on AI and related new technologies.
Austria is represented in the AIGO by the Federal Ministry of Finance.
On 22 May 2019, the OECD Council of Ministers adopted a recommendation on Artificial Intelligence proposed by the Committee on Digital Economy Policy (CDEP). This recommendation, the first of its kind at intergovernmental level, aims to strengthen innovation and trust in artificial intelligence by promoting responsible AI practices. It emphasises the importance of human rights and democratic values and complements existing OECD standards in areas such as data protection, digital security risk management and responsible business conduct.
The recommendation defines five values-based principles for the development and use of AI: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
It also provides policy makers with five recommendations for action. These include investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and international co-operation for trustworthy AI.
A revision of the OECD AI Recommendation was adopted at the OECD Ministerial Council Meeting in Paris in May 2024.
The organisation's efforts have also aimed at reaching a general consensus on the definition of an AI system. The OECD member states recently approved a revised version of this definition:
"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical real or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment"
The definition set out in the European Union's AI Act largely mirrors that of the OECD:
"AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The OECD.AI Policy Observatory was established in February 2020. The Observatory's activities are driven by six expert groups.
The OECD.AI Policy Observatory operates a real-time database of AI policies and initiatives developed in cooperation with the European Commission, which is used and continuously updated by participating countries and other stakeholders.
Country-specific data can be accessed via a dashboard, including information on strategies and research institutions specialising in the development of artificial intelligence. The platform covers all OECD member countries, as well as key partner countries such as Brazil, China, India, Indonesia and South Africa, and also extends to other countries in Africa, Asia and Latin America.
The OECD Framework for the Classification of AI Systems (Overview | Report) was developed by the OECD.AI network of experts to help policy makers, regulators, legislators and other stakeholders assess the opportunities and risks of different AI systems.
The framework classifies AI systems and applications according to the following dimensions: People & Planet, Economic Context, Data & Input, AI Model, and Task & Output. Each of these dimensions has its own characteristics and attributes or subcategories that are relevant for the assessment of policy considerations for specific AI systems.
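To make these dimensions more tangible, the sketch below represents a system profile along the five framework dimensions as a single record type. The dimension names come from the framework itself; the individual fields and the sample values are hypothetical illustrations, not official framework attributes.

```python
from dataclasses import dataclass

# Minimal sketch: one record type grouping the five OECD classification
# dimensions. Field names and example values are hypothetical.
@dataclass
class AISystemProfile:
    # People & Planet: who is affected and how
    affected_stakeholders: list[str]
    human_rights_impact: str
    # Economic Context: where and how the system is used
    industry_sector: str
    business_function: str
    # Data & Input: what the system learns from
    data_provenance: str
    uses_personal_data: bool
    # AI Model: how the system works internally
    model_type: str
    autonomy_level: str
    # Task & Output: what the system does and produces
    task: str
    output_type: str

# Hypothetical example: profiling a CV-screening tool for a policy assessment.
profile = AISystemProfile(
    affected_stakeholders=["job applicants", "employers"],
    human_rights_impact="potential discrimination in hiring",
    industry_sector="human resources",
    business_function="candidate screening",
    data_provenance="historical hiring records",
    uses_personal_data=True,
    model_type="supervised classifier",
    autonomy_level="human-in-the-loop",
    task="ranking",
    output_type="recommendation",
)
print(profile)
```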
The OECD AI Incidents Monitor (AIM) documents AI-related incidents in order to provide decision-makers, experts and other stakeholders worldwide with in-depth insights into the associated risks and harms.
The following definition of an AI incident has been developed:
An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
(a) injury or harm to the health of a person or groups of people;
(b) disruption of the management and operation of critical infrastructure;
(c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) harm to property, communities or the environment.
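The enumerated harms lend themselves to a simple machine-readable representation. As a minimal sketch, assuming Python and entirely hypothetical type and field names, the definition could be modelled as follows:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

# The four harm categories mirror points (a)-(d) of the definition above.
class HarmType(Enum):
    HEALTH_INJURY = auto()            # (a) injury or harm to people's health
    CRITICAL_INFRASTRUCTURE = auto()  # (b) disruption of critical infrastructure
    RIGHTS_VIOLATION = auto()         # (c) human rights / protective-law breaches
    PROPERTY_COMMUNITY_ENV = auto()   # (d) property, communities or environment

@dataclass
class AIIncident:
    occurred_on: date
    description: str
    harms: list[HarmType]  # harms the event directly or indirectly led to
    direct: bool           # True if the AI system led to the harm directly

    def qualifies(self) -> bool:
        # An event counts as an AI incident only if it led to at least
        # one of the enumerated harms.
        return len(self.harms) > 0
```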
Over time, the AIM will help to identify recurring patterns and develop a better understanding of the complex nature of AI incidents.
AI incidents reported in reputable international media worldwide are identified and classified using machine learning models. These models categorise the incidents according to various dimensions of the OECD framework for classifying AI systems, including severity, affected industry, associated AI principle, type of harm and affected stakeholders. The analysis is based on the title, abstract and first paragraphs of each news article. The news is sourced from Event Registry, a news intelligence platform that monitors global news, can identify specific event types in articles and processes over 150,000 English-language articles daily.
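To illustrate the kind of text classification described here, the following sketch trains a toy classifier that assigns a news headline to a harm category. The training examples, labels and model choice are invented for this illustration and bear no relation to the actual models behind AIM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy headlines and harm-category labels, for demonstration only.
train_texts = [
    "Chatbot gives dangerous medical advice to patients",
    "Self-driving car strikes pedestrian at crossing",
    "Facial recognition misidentifies minority suspects",
    "Hiring algorithm found to discriminate against women",
    "Power grid control software fails after faulty AI update",
    "Trading bot glitch wipes out investor portfolios",
]
train_labels = ["health", "health", "rights", "rights", "infrastructure", "property"]

# TF-IDF text features feeding a logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

# Classify a new (invented) headline.
print(classifier.predict(["Loan approval model denies credit based on ethnicity"]))
```

In practice, a monitor like AIM would also need multi-label outputs (one incident can involve several harm types) and vastly more training data, but the pipeline shape is the same: text features in, category labels out.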
The OECD.AI website collects numerous publications and reports on current research on AI policy conducted in various policy communities within and outside the OECD. These documents are categorised according to policy areas such as business, education, health, competition and the digital economy.
In addition, the corresponding section of the website provides a continuously updated overview of AI-related news from around the world, with articles categorised as positive, negative or neutral according to their stance towards AI.
On 14 March 2024, the Council of Europe's Committee on Artificial Intelligence (CAI) adopted the draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (Draft Explanatory Report). The draft was formally adopted by the Council of Europe's Committee of Ministers on 17 May 2024 (press release) and will enter into force as a legally binding international treaty. In addition to the 46 member states of the Council of Europe, the EU Commission and the USA were also involved in the negotiations.
The core of the document comprises fundamental principles, rules and rights designed to ensure that the development and deployment of AI systems respect human rights, promote democracy and uphold the rule of law. The scope of the Framework Convention covers the design, development and deployment of AI systems and aims to regulate them at all stages of their lifecycle. The regulations apply to both public and private organisations, which was controversial until recently. The exemptions for national security AI systems have been retained in a limited form.
The framework convention is intended to extend the existing human rights regime to the field of AI. It includes provisions on data protection and the protection of privacy, the prevention of discrimination through the use of AI and the safeguarding of individual freedom, human dignity and autonomy. The signatory states are encouraged to involve interest groups in the development and use of AI systems and to promote public debate. Commitments specifically focussed on AI include the promotion of AI literacy among the population, the principle of human oversight and the obligation to inform those affected about interactions with AI.
The Framework Convention also establishes minimum requirements for assessing the impact of AI on fundamental and human rights and obliges the contracting states to develop effective guidelines for the identification, assessment, prevention and mitigation of risks and adverse effects of AI applications.
At the G20 summit in Osaka in June 2019, the heads of state and government addressed the topic of artificial intelligence in depth for the first time and adopted principles for the responsible use of AI, which are largely based on the OECD Principles for Artificial Intelligence. These are intended to serve as a basis for national regulations and to support internationally active companies in developing their own standards. AI systems should be designed in such a way that they respect the rule of law, human rights, democratic values and diversity.
The final declaration of the summit of G20 heads of state and government in New Delhi in September 2023 lists a series of measures to promote artificial intelligence.
In addition to promoting secure, transparent artificial intelligence that is used for good, the Brazilian G20 Presidency is placing a special focus on tackling inequalities (Digital Economy Working Group - Issue Note). The Presidency recognises that current international discussions ignore important challenges and do not address key issues such as the concentration of capacities, datasets and infrastructures in the hands of a few actors. It is therefore considered crucial that developing countries have access to these technologies, can contribute to their further development and fully benefit from them.
The Hiroshima AI Process was launched in May 2023 by the G7 heads of state and government at the summit in Hiroshima (Leaders' Statement), with the aim of promoting and advancing the international discussion on the opportunities and risks of artificial intelligence.
This process resulted in the G7 AI Principles and Code of Conduct (AIP&CoC) (Guiding Principles for All AI Actors | Guiding Principles for Organisations Developing Advanced AI Systems | Code of Conduct for Organizations Developing Advanced AI Systems). A key aspect of the AIP&CoC is the G7's strong commitment to certain key areas of AI governance. This includes a risk-based approach applied throughout the AI lifecycle, starting with pre-emptive risk assessments and mitigation measures prior to implementation. It also emphasises the need for continuous monitoring, reporting and mitigation of misuse and incidents. As a precautionary measure, the AIP&CoC emphasise the importance of AI developers and users having risk management policies and procedures and strong security controls in place.
In addition to addressing concerns about the risks of advanced AI systems, the G7 AIP&CoC also define priorities for AI research and development. These include the authentication of content, measures to protect data rights, the mitigation of societal, security and other risks, the use of AI to tackle global challenges such as climate change and the development of technical standards.
The G7 countries have committed to further developing the principles and the Code as part of a comprehensive policy framework, with input from other nations, the OECD, the Global Partnership on AI (GPAI) and a broad range of stakeholders from academia, business and civil society.
As part of the G7 initiatives, which will continue beyond 2024, the Italian G7 presidency is placing a particular focus on the ethical and socially acceptable development of artificial intelligence.
Artificial intelligence was one of the central topics of the G7 summit of heads of state and government, which took place in Apulia from 13 to 15 June 2024. The topic received additional attention due to the participation of Pope Francis, who was the first pope to attend such a forum. In his address, he emphasised that decision-making must always remain in human hands. The final declaration (Leaders' Communiqué) underlines the importance of international cooperation and harmonisation of legislation to increase the safety, transparency and accountability of AI technologies and applications. Risk-based approaches to regulating AI are favoured in order to promote innovation and strong, inclusive and sustainable growth. In addition, the heads of state and government have decided to initiate an action plan on the use of AI in the world of work and to develop a brand that supports the implementation of the international G7 AI Code of Conduct. The crucial role of robust and reliable semiconductor supply chains for safe and trustworthy AI was also emphasised at the summit. To address challenges in this area, the Heads of State and Government welcome the establishment of a Semiconductors G7 Point of Contact Group.
Artificial intelligence also plays a central role at the G7 ministerial summits, including the meeting of ministers for industry, technology and digitalisation in March 2024 (Ministerial Declaration). The ministers recognised the potential of AI technologies to increase productivity, innovation and efficiency in various economic sectors as well as their importance in overcoming global challenges such as data protection, security and the growing digital divide. They also emphasised the need for an ethical approach to AI that promotes innovation while taking the necessary safety precautions to ensure safe, reliable and trustworthy use.
Another important international initiative in the field of artificial intelligence was the AI Seoul Summit, hosted by the South Korean government in cooperation with the United Kingdom on 21 and 22 May 2024. This summit followed on from the AI Safety Summit, which had taken place at Bletchley Park in November 2023.
The AI Seoul Summit concluded with the "Ministerial Statement for Advancing AI Safety, Innovation and Inclusivity". In this statement, 27 countries agreed to define common risk thresholds for the development and deployment of AI and to intensify scientific cooperation in the field of AI safety. Among other things, this is to be achieved through cooperation on the development and implementation of safety tests and the establishment of evaluation guidelines. The signatories included Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, South Korea, Rwanda, Saudi Arabia, Singapore, Spain, Switzerland, Turkey, Ukraine, the United Arab Emirates, the United Kingdom, the USA and the European Union. China attended the summit but did not sign the declaration.
In addition, ten countries and the EU signed the "Seoul Declaration for Safe, Innovative, and Inclusive AI" to support the "Ministerial Statement". This declaration aims to establish an international network of publicly funded AI safety institutes to promote harmonised approaches to regulation, testing and research and to accelerate the creation of an international framework for safe AI applications. Among the signatories were Australia, Canada, the European Union, France, Germany, Italy, Japan, South Korea, Singapore, the USA and the United Kingdom.
In addition to political players, leading technology companies also took part in the AI Seoul Summit. Sixteen world-renowned companies, including Amazon, Google, Meta, Microsoft, Samsung and OpenAI, signed the "Frontier AI Safety Commitments". These commitments include a pledge not to develop or deploy AI models whose serious risks cannot be adequately mitigated, as well as to establish accountable governance structures and transparency regarding their approach to AI safety.