Regulatory & Governance Solutions

Updated May 16, 2024

 

There is significant overlap between two approaches to the control of AI:

  • regulatory approaches -- involving mandatory state control of AI

  • governance approaches -- involving voluntary standards for AI, which may be set out by governments or private actors.

We are beginning to identify regulatory and governance solutions that are specific to the harms identified in our harms register (see the sub-pages of the Harms page), rather than setting them out generically on this page.

 

Regulation

Regulatory initiatives. There has been fairly limited regulation of AI to date, but the pace of regulation is increasing, led by the EU and its AI Act. Some key regulatory initiatives around the world are:

 

There are no multilateral agreements on AI safety yet; however:

  • The AI Safety Summit at Bletchley Park in the UK in November 2023 was a first step towards multilateral cooperation on AI safety. Further summits are planned in South Korea in May 2024 and then in France. Significantly, the two leading AI powers, the US and China, both participated in the AI Safety Summit (it has been reported that US AI companies engaged in discussions with Chinese AI experts earlier in 2023, with government support).
  • The Council of Europe is finalizing a draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.
  • The US and EU signed an Administrative Arrangement on Artificial Intelligence for the Public Good in January 2024, covering cooperation on AI research, including safety and privacy issues. It has been reported that the US AI Safety Institute and the EU AI Office are planning cooperative work on generative AI.

 

A key consideration in AI regulation is the definition of "artificial intelligence", which establishes a basis for what is regulated. A leading definition is the one updated by the OECD in November 2023: "An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

 

In an interesting (and likely forward-looking) twist on AI regulation, the Porto Alegre City Council in Brazil passed a law on water meters in November 2023 that is believed to be the world's first law entirely drafted by ChatGPT (and possibly the first drafted entirely by AI).

 

Regulatory Summaries. There are various more detailed summaries of developing AI regulation, including from:

 

Privacy Law. There are also some restrictions on the use of AI in existing and proposed privacy laws, such as:

 

Governance Initiatives

Somewhat in contrast to the lack of detail in regulation, various non-binding AI governance initiatives (mostly with multilateral government participation) have been adopted or proposed that set out principles for safe and responsible AI:

 

Governments are also beginning to issue guidance on the use of AI by government bodies, such as:

 

There are also various private governance initiatives including:

 

Leading AI companies are also developing and evolving governance approaches for their AI activities and models, e.g.:

 

Thought Leadership

Various leading AI figures and other authors have published their thinking on the regulatory and governance measures that are required for safe and responsible AI. Some of our favorites are:

 

Start-Ups

There is an emerging group of start-ups that offer AI regulatory compliance and auditing services, including: