
In June 2024, we moved this blog to Substack -- please subscribe there to receive all of our posts.



Moving Towards Consensus on AI Harms and Risks


In connection with the AI Seoul Summit earlier this month, a team of experts led by Yoshua Bengio produced the first International Scientific Report on the Safety of Advanced AI. The report provides a solid high-level summary of evolving work on AI safety.


Most interestingly, the report reflects what seems to be an emerging consensus on which topics should be considered in assessing AI risk (and harms -- see my previous blog post about the difference between AI 'risk' and 'harm'). All of the main AI risks identified in the Seoul report were already identified in the Saihub harms register (in fairly similar terms), and there are no major risks identified by Saihub that the report ignores. In short, smart people looking at the issues around AI safety seem to be reaching similar conclusions.


Of course, we will almost certainly be surprised by unexpected harms from the rapid development of AI -- "unknown unknowns," in Donald Rumsfeld's well-known taxonomy.


And of course, the solutions to the AI safety challenges that we have identified mostly remain known unknowns. But at least it is becoming clearer where work is needed.


Maury Shenk