Blog

Please get in contact if you are interested in authoring or co-authoring a blog post on Saihub.info.

In June 2024, we moved this blog to Substack -- please subscribe there to receive all of our posts.

Beyond killer robots, will we care enough about AI risk?

Much of the attention to AI risk in the last couple of years has focused on existential risk -- i.e. the concern that AI will lead to the widespread destruction of humanity, whether through unanticipated AI behavior (like turning us all into paperclips or grey goo) or killer robots (like Terminators). This fear, although not baseless, seems to have been driven largely by unscientific approaches, such as the speculative reasoning of Nick Bostrom in his widely-read 2014 book Superintelligence and the recent fashion in Silicon Valley for predicting P(doom) -- i.e. "probability of doom" or, as The Spectator put it, "the probability of Artificial Intelligence causing something so bad for humanity it will feel like Doomsday, or actually be Doomsday". On the other hand, there are individuals and companies doing serious work on existential risk, such as the start-up Conjecture and its CEO Connor Leahy.

Recently, attention to existential risk scenarios seems to be waning. Leading AI commentator (and personal friend) Azeem Azhar wrote in his weekly newsletter Exponential View: "The world is fine; it’s time to bury the existential risk debate. The doomer crowd’s use of Bayesian reasoning to legitimise these fears is little more than pseudo-scientific hand-waving." Ouch! And Nick Bostrom himself is somewhat out of favor: his Future of Humanity Institute was closed in April 2024 after nearly 20 years, apparently in large part due to discomfort at Oxford University with some of Bostrom's views. Ouch again!
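
To see why critics dismiss this as hand-waving, here is a minimal sketch (in Python, with entirely hypothetical probabilities) of how a chained P(doom) estimate is often assembled: a series of speculative conditional probabilities multiplied together. The multiplication is exact, but the inputs are guesses, and nudging each guess within any honest margin of uncertainty swings the result roughly fivefold -- the apparent precision is illusory.

# A minimal sketch of a chained P(doom) estimate.
# Every probability below is a made-up, hypothetical input.
steps = {
    "AGI is developed this century": 0.5,
    "its goals end up misaligned with ours": 0.3,
    "the misalignment cannot be corrected in time": 0.4,
    "uncorrected misalignment causes catastrophe": 0.5,
}

p_doom = 1.0
for claim, p in steps.items():
    p_doom *= p
print(f"P(doom) = {p_doom:.3f}")  # 0.030 with these inputs

# Nudge each guess up by 0.2 -- well within the uncertainty of
# such speculation -- and the result grows roughly fivefold.
p_doom_pessimistic = 1.0
for p in steps.values():
    p_doom_pessimistic *= min(p + 0.2, 1.0)
print(f"P(doom), pessimistic variant = {p_doom_pessimistic:.3f}")  # 0.147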

The question this raises for me is whether reduced attention to the 'movie-plot' threats of existential risk (to use a term popularized by Bruce Schneier) will also reduce attention to, and needed action against, the many real harms and risks presented by AI -- among which existential risk should still be counted. There is reason to worry. In the face of extensive evidence of expected harms from climate change, the world has been very slow to react, addicted to the benefits of fossil fuels and other carbon sources. Even now, as climate harms accelerate, responses are multiplying but are probably too slow to prevent widespread catastrophes. The 2021 Netflix film Don't Look Up presents a comedic parable of humanity's impressive ability to ignore such threats -- in that case, a comet on a collision course with Earth.

On the other, more optimistic side of the coin, there remains plenty of attention to AI safety, and at Saihub we aim to spread the word about it. Our mission is to bring attention and reasoned analysis to the many likely harms and risks from artificial intelligence, and to present possible solutions and approaches. Changes in the level of attention to existential risk will not alter that mission. As always, please get in touch if you would like to join that journey.

Maury Shenk