Entities & Funding

Updated December 16, 2024

 

Many people and organizations are now working on AI safety, which is welcome given the scale of the challenge. Some of these entities also offer funding for AI safety work. The information on this page is weighted towards the UK (where we are based) and the US (which remains the AI technology leader).

 

Government Entities

Numerous countries are establishing AI safety institutes (AISIs) to address technical issues in AI safety. See "Government Entities" on the Technical Solutions page for further details.

 

National regulatory bodies specifically responsible for AI are also starting to emerge, for example:

  • China - The Cyberspace Administration of China has responsibility for AI legislation (among its much broader responsibilities).

  • EU

    • The EU AI Act established an AI Office within the European Commission and an AI Board comprising representatives of EU member states.

    • European Commission President Ursula von der Leyen has proposed a 'CERN for AI' research center, which is explored in the International Center for Future Generations paper CERN for AI (September 2024).

  • US - The Executive Order on AI established a White House Artificial Intelligence Council within the Executive Office of the President.

 

There are also multilateral bodies addressing issues related to AI, including:

  • The Global Partnership on AI (GPAI) is a multinational initiative established by the G7 in 2020, now with 29 member countries and operating from OECD offices in Paris. The AI Safety Summit process may affect the role of the GPAI.
  • The UNESCO Global AI and Governance Observatory aims "to provide a global resource for policymakers, regulators, academics, the private sector and civil society to find solutions to the most pressing challenges posed by Artificial Intelligence."

 

Private Entities

There are many private-sector organizations and university efforts. A few are:

 

AI Safety Fundamentals has published a longer list of local groups, and the aisafety.community site provides a much longer list of "communities working on AI existential safety".

 

There are also many start-ups working on AI safety issues. The Ethical AI Governance Group maintains the Ethical AI Database project (EAIDB), described as "a curated collection of startups that are either actively trying to solve problems that AI and data have created or are building methods to unite AI and society in a safe and responsible manner"; see EAIDB's latest market map (H1 2024).

 

At Saihub.info we are building our own list of entities addressing AI safety. It currently contains far fewer start-ups than the EAIDB market map; we will expand it over time.

 

Funding

Funding for safe and responsible AI initiatives is emerging fairly quickly, and we expect this to continue. A few UK and US funding sources are: