The case for a harms-based approach to AI risk
We need a new way to think about AI risk.
To understand why, consider two examples of public controversies about AI:
In the recent turmoil around control of OpenAI, part of the dispute was between the philosophies of "effective altruism" (which focuses on the threat that AI-enabled superintelligence could doom humanity) and "effective accelerationism" (which focuses on the opportunity for AI to produce huge benefits to humanity).
As a result of controversy around the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? -- which identified risks from language models relating to environmental and financial costs; large, biased data sets; misdirected research; and interaction with human biases -- Google fired Timnit Gebru, co-lead of its ethical AI research team, in late 2020.
Whatever one thinks of the merits of these controversies, at least one thing is clear: they are about different things. The debate about whether AI will destroy or save humanity is on a different planet than the debate about how to deal with the current effects of AI. The people who engage in these debates often don't even like talking with each other.
In fact, it is important to discuss and address both of these categories of AI risk -- and many other types of AI risk. Among other things, in the face of the huge uncertainties associated with the rapid progress of AI and the potential for AI to have a massive impact on societies and the planet, the precautionary principle indicates that attention to all credible risks is warranted (even if this does not extend to the pause in AI development that some have suggested).
A proper consideration of the many different types of AI risk requires us to analyze particular risks (individually and in combination) using an organized approach. At Saihub.info, we believe that this approach should be based on risk management frameworks like ISO 31000 and M_o_R®. These are mature frameworks that require assessment of risks (based upon criteria including impact, likelihood, proximity and velocity) and planning of responsive actions.
Risk management frameworks use a 'risk register' to record risks, and we have adapted that approach to produce an AI harms register recording potential harms from AI (both current and projected). We are early in the process of developing this register, and even earlier in the process of producing analyses of individual risks (there are four such analyses on our website as of the date of this post).
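To make this concrete, here is a minimal sketch of what one entry in such a register might look like as a data structure. The field names follow the assessment criteria mentioned above (impact, likelihood, proximity, velocity, plus planned responses), but the ordinal scale, the priority heuristic and the example entry are illustrative assumptions, not the actual format of the Saihub.info harms register.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative 1-5 ordinal scale. Frameworks like ISO 31000 and M_o_R
# let organizations define their own scales, so this is an assumption.
class Level(Enum):
    VERY_LOW = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    VERY_HIGH = 5

@dataclass
class HarmRegisterEntry:
    """One row of a hypothetical AI harms register.

    Fields mirror the assessment criteria named in the post:
    impact, likelihood, proximity and velocity, plus planned responses.
    """
    harm_id: str      # stable identifier for cross-referencing analyses
    description: str  # the potential harm, stated concretely
    impact: Level     # severity if the harm occurs
    likelihood: Level # how probable the harm is
    proximity: str    # when the harm could occur (e.g. "now", "1-3 years")
    velocity: Level   # how quickly the harm develops once triggered
    responses: list[str] = field(default_factory=list)  # planned mitigations

    def priority_score(self) -> int:
        # A common (though simplistic) prioritization heuristic:
        # impact multiplied by likelihood.
        return self.impact.value * self.likelihood.value

# Example entry -- the content here is purely illustrative.
entry = HarmRegisterEntry(
    harm_id="H-001",
    description="Language models amplify biases present in training data",
    impact=Level.HIGH,
    likelihood=Level.VERY_HIGH,
    proximity="now",
    velocity=Level.MEDIUM,
    responses=["dataset documentation", "bias evaluation before release"],
)
print(entry.priority_score())  # 20
```

One advantage of recording harms in a structured form like this is that entries covering very different categories of risk -- from current discrimination harms to projected catastrophic ones -- can be assessed against the same criteria and compared side by side.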
A key feature of risk management is that risks include both threats (potential sources of harm) and opportunities (potential sources of benefit). Although our initial approach at Saihub.info focuses primarily on threats/harms, we fully support extending this approach to address opportunities/benefits. The latter are of course the primary reason why AI is attracting so much market attention, and our contribution to safe and responsible AI is aimed primarily at helping to ensure that the benefits of AI exceed the harms.
Maury Shenk