Saihub.info
Entities & Funding
Updated December 16, 2024
Many organizations are now working on AI safety, which is welcome given the scale of the challenge. Some of these entities also offer funding for AI safety work. The information on this page is weighted towards the UK (where we are based) and the US (which remains the AI technology leader).
Government Entities
Numerous countries are establishing AI safety institutes (AISIs) to address technical issues for AI safety. See "Government Entities" on the Technical Solutions page for further details.
National regulatory bodies specifically responsible for AI are also starting to emerge, for example:
China - The Cyberspace Administration of China has responsibility for AI legislation (among much broader responsibilities).
EU
The EU AI Act established an AI Office within the European Commission and an AI Board comprising representatives of EU member states.
European Commission President Ursula von der Leyen has proposed a 'CERN for AI' research center, which is explored in the International Center for Future Generations paper CERN for AI (September 2024).
US - The Executive Order on AI established a White House Artificial Intelligence Council within the Executive Office of the President.
There are also multilateral bodies addressing issues related to AI.
Private Entities
There are many private-sector organizations and university efforts. A few are:
AI Alliance - a large alliance of companies (with lead roles for IBM and Meta), universities and others "collaborating to advance safe, responsible AI rooted in open innovation"
International Association of Algorithmic Auditors - "a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices and contribute to the emergence of Responsible AI"
AI Safety Fundamentals have published a longer list of local groups, and the aisafety.community site provides a much longer list of "communities working on AI existential safety".
There are also a large number of start-ups working on AI safety issues. The Ethical AI Governance Group maintains the Ethical AI Database project (EAIDB), described as "a curated collection of startups that are either actively trying to solve problems that AI and data have created or are building methods to unite AI and society in a safe and responsible manner" -- see EAIDB's latest market map (H1 2024).
We are building our own Saihub.info list of entities addressing AI safety. It currently contains far fewer start-ups than the EAIDB market map, and we will expand it over time.
Funding
Funding for safe and responsible AI initiatives is emerging fairly quickly, and we expect this to continue. A few UK and US funding sources are:
US Defense Advanced Research Projects Agency (DARPA) - including the AI Forward program.