




The benefits and risks of AI: why it's sensible to be both optimistic and cautious


I am very optimistic about artificial intelligence.


AI is already having significant positive impacts -- consider the large benefits of ChatGPT and other large language models, Google Translate and DeepMind's AlphaFold protein structure predictions, to name a few. These benefits will increase substantially over time. It has been observed that AI is one of the relatively few "general purpose" technologies to have come along in human history -- i.e. technologies that have impacts across nearly all sectors, like fire/combustion, electricity, digital computing and the Internet. Such technologies tend to have increasing effects (including on productivity) as they become widely dispersed in society, and we should expect the same with AI.


I believe that the likely benefits of AI substantially exceed the likely risks.


Given this optimism, one might ask why I am spearheading a website that focuses on the harms and risks of AI.


The answer is simple, in two parts.


First, whether or not one is an optimist, we need to pay attention to the risks, not least to reduce the costs of enjoying the benefits of AI. For those who are pessimistic about AI, a clear-eyed consideration of risks (rather than a simple rejection of AI) makes sense, not least because it is apparent that the AI horse has bolted the stable and is not going to be chased down.


Second, providing information and analysis on AI harms and risks is an area in which I and my team have the skills and experience (including in policy, law and risk management) to have real impact. I love talking about the science and progress of AI, but plenty of other people are talking about that. Safe and responsible AI has received a lot of attention in the past year, but information and debate in this area remain muddled. As I wrote recently in my blog post The case for a harms-based approach to AI risk, we believe we have a distinctive and valuable perspective as we pursue our aim to be the leading source of information on safe and responsible artificial intelligence.


Some believe that the risks of AI deserve less attention. For example, in The Techno-Optimist Manifesto published in October 2023, Marc Andreessen expressed proper optimism about AI while dismissing legitimate concerns as "lies" and those raising them as "the enemy". This posits a false dichotomy between AI progress and AI risk management. Paying careful attention to risks (and adopting appropriate regulation) is what nearly all societies have done for other beneficial technologies, such as electricity, automobiles, aircraft and nuclear power. We should do the same for AI -- so that it can realize the vision of the optimists.


Maury Shenk