Saihub.info
Governance - General
Updated November 11, 2024
State Governance and Policy Initiatives
Somewhat in contrast to the lack of detail in binding regulation, various non-binding AI governance initiatives (mostly with multilateral government participation) have been adopted or proposed, setting out principles for safe and responsible AI:
Multinational
Global Partnership on AI (proposed at G7 summit in 2018 and launched in 2020)
Guidelines for secure AI system development (November 2023)
AI Safety Summits
The Bletchley Declaration (November 2023)
AI Seoul Summit (May 2024)
G7
Hiroshima Process
International Guiding Principles for Advanced AI Systems (October 2023)
International Code of Conduct for Advanced AI Systems (October 2023)
Ministerial Declaration on AI cooperation (March 2024)
United Nations
AI Advisory Body - Governing AI for Humanity (interim report December 2023)
General Assembly Resolution - Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (March 2024)
UNESCO - Recommendation on the Ethics of Artificial Intelligence (November 2021)
World Health Organization - Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models (January 2024)
Politico reported in July 2024 that the UN intends to propose creation of a global AI Office.
Regional
African Union
African Union Development Agency, AI White Paper and Roadmap Review (February 2024)
Continental Artificial Intelligence Strategy (June 2024)
Association of Southeast Asian Nations (ASEAN), Guide on AI Governance and Ethics (February 2024)
Bilateral
France & China - Joint declaration on artificial intelligence and governance of global challenges (May 2024)
National
Australia
Voluntary AI Safety Standard (September 2024)
AI and ESG: An introductory guide for ESG practitioners (October 2024)
Hong Kong - Financial Services and the Treasury Bureau, Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market (October 2024)
Japan - Liberal Democratic Party, AI White Paper 2024: New Strategies in Stage II - Toward the world's most AI-friendly country (May 2024)
Singapore
AI Playbook for Small States (September 2024)
UK
Digital Regulation Cooperation Forum, AI and Digital Hub
Bank of England, Artificial Intelligence Consortium - a platform for public-private engagement to gather stakeholder input on the capabilities, development and use of AI in UK financial services
Use of AI by Government Bodies
Governments are also beginning to issue guidance on the use of AI by government bodies, such as:
EU
Artificial Intelligence in the European Commission (AI@EC) Communication (January 2024)
European Data Protection Supervisor, First EDPS Orientations for ensuring data protection compliance when using Generative AI systems (June 2024)
Netherlands - principles for cooperation between Dutch Data Protection Authority and Dutch Authority for Digital Infrastructure in regulating AI (June 2024)
UK - Generative AI Framework for HMG (January 2024)
US - see actions summarized on US Executive Order on AI page.
The OECD published a policy paper, Governing with Artificial Intelligence, in June 2024 on the use of AI to improve public governance.
Private Governance Initiatives
There are also various private governance initiatives including:
JAMS - Artificial Intelligence Disputes Clause, Rules and Protective Order (April 2024)
OECD - Recommendation and Principles on Artificial Intelligence (2019, last updated May 2024)
RAND Corporation - Historical Analogues That Can Inform AI Governance (August 2024).
A 2023 article by Mökander et al. proposes a three-pronged approach to auditing large language models, combining governance and technical methods: (1) governance audits, (2) model audits and (3) application audits.
Company Policies
Leading AI companies are also developing and evolving governance approaches for their AI activities and models, e.g.:
Google has:
set out AI Principles
released a Frontier Safety Framework (May 2024) - "a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them".