Saihub.info
Regulatory & Governance Solutions
Updated May 16, 2024
There is significant overlap between two approaches to the control of AI:
regulatory approaches -- involving mandatory state control of AI
governance approaches -- involving voluntary standards for AI, which may be set out by governments or private actors.
We are beginning the process of identifying regulatory and governance solutions that are specific to the harms identified on our harms register (see sub-pages of the Harms page), rather than setting them out generically on this page.
Regulation
Regulatory initiatives. There has been fairly limited regulation of AI to date, but the pace of regulation is increasing, led by the EU and its AI Act. Some key regulatory initiatives around the world are:
Australia - Safe and responsible AI in Australia consultation -- Australian Government's interim response (January 2024)
Canada - Artificial Intelligence and Data Act (proposed)
China
Regulations on the Administration of Internet Information Service Recommendation Algorithms (March 2022)
Deep Synthesis Regulation (November 2022) -- Chinese version
Interim Measures on the Management of Generative Artificial Intelligence Services (July 2023)
draft Artificial Intelligence Law (April 2024) -- see Forbes analysis.
EU
Product Liability Directive - EU authorities reached political agreement in December 2023 to extend existing product liability rules to artificial intelligence products
India - Ministry of Electronics and Information Technology (MEITY) advisory requesting government approval for AI tools that are "under-testing" or "unreliable", and related requirements (March 2024)
Indonesia - draft AI Bill (April 2024)
Italy - draft Bill on AI (April 2024)
Japan
proposed legislation on generative AI (original currently available only in Japanese) (February 2024)
Ministry of Internal Affairs and Communications, and Ministry of Economy, Trade and Industry, AI Guidelines for Business V1.0 (April 2024)
UK
National AI Strategy (September 2021, updated December 2022)
A pro-innovation approach to AI regulation: government response (February 2024) -- see separate page
Procurement Policy Note: Improving Transparency of AI use in Procurement (March 2024)
Bank of England and Prudential Regulation Authority -- letter to Department for Science, Innovation, and Technology and HM Treasury on strategic approach to AI (April 2024)
Competition and Markets Authority
AI Foundation Models: Update Paper (April 2024)
AI Foundation Models: Technical Update Report (April 2024)
Financial Conduct Authority - AI Update (April 2024)
US
AI in Government Act (2020) -- included in statutory notes
Advancing American AI Act (2022) -- included in statutory notes
Blueprint for an AI Bill of Rights (October 2022)
Executive Order on AI (October 2023) -- see separate page
Artificial Intelligence Safety and Security Board (established April 2024) -- 22-member advisory board to the US Department of Homeland Security, including CEOs of leading AI companies
The Bipartisan Senate AI Working Group, Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate (May 2024)
US states
Colorado - Consumer Protections for Artificial Intelligence bill (May 2024)
Utah - Artificial Intelligence Policy Act (March 2024).
There are no multilateral agreements on AI safety; however:
The US and EU signed an Administrative Arrangement on Artificial Intelligence for the Public Good in January 2024, covering cooperation on AI research, including safety and privacy issues. It has been reported that the US AI Safety Institute and EU AI Office are planning cooperative work on generative AI.
A key consideration in AI regulation is the definition of "artificial intelligence", which establishes a basis for what is regulated. A leading definition is the one updated by the OECD in November 2023: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
In an interesting (and likely forward-looking) twist on AI regulation, the Porto Alegre City Council in Brazil in November 2023 passed a law on water meters, which is believed to be the world's first law entirely drafted by ChatGPT (and possibly by AI generally).
Regulatory Summaries. There are various more detailed summaries of developing AI regulation, including from:
Securiti (a data management company)
Tech Hive Advisory and Center for Law & Innovation -- State of AI Regulation in Africa: Trends and Developments.
Privacy Law. There are also some restrictions on the use of AI in existing and proposed privacy laws.
Governance Initiatives
In contrast to the limited detail of regulation to date, various non-binding AI governance initiatives (mostly with multilateral government participation) have been adopted or proposed, setting out principles for safe and responsible AI:
Multinational
Global Partnership on AI (proposed at the G7 summit in 2018 and launched in 2020)
Guidelines for secure AI system development (November 2023)
AI Safety Summit, The Bletchley Declaration (November 2023)
G7
Hiroshima Process
International Guiding Principles for Advanced AI Systems (October 2023)
International Code of Conduct for Advanced AI Systems (October 2023)
Ministerial Declaration on AI cooperation (March 2024)
United Nations
AI Advisory Body - Governing AI for Humanity (interim report December 2023)
General Assembly Resolution - Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (March 2024)
World Health Organization - Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models (January 2024)
Regional
African Union - African Union Development Agency, AI White Paper and Roadmap Review (February 2024)
Association of Southeast Asian Nations (ASEAN), Guide on AI Governance and Ethics (February 2024)
National
Singapore - proposed Model AI Governance Framework for Generative AI (January 2024).
Governments are also beginning to issue guidance on use of AI by government bodies, such as:
EU - Artificial Intelligence in the European Commission (AI@EC) Communication (January 2024)
UK - Generative AI Framework for HMG (January 2024)
US - see actions summarized on US Executive Order on AI page.
There are also various private governance initiatives including:
JAMS - Artificial Intelligence Disputes Clause, Rules and Protective Order (April 2024).
Leading AI companies are also developing and evolving governance approaches for their AI activities and models, e.g.:
OpenAI's safe and responsible AI program involves three teams: Safety Systems, Preparedness, and Superalignment
Google has set out AI Principles.
Thought Leadership
Various leading AI figures and other authors have published their thinking on the regulatory and governance measures that are required for safe and responsible AI. Some of our favorites are:
Yoshua Bengio, "For true AI governance, we need to avoid a single point of failure", Financial Times (December 5, 2023).
Start-Ups
There is an emerging group of start-ups that offer AI regulatory compliance and auditing services, including: