
AI 'harms' and 'risks' are not the same thing – both are important

 

Failure to distinguish clearly between ‘harm’ and ‘risk’ bedevils many discussions of safe and responsible AI. Perhaps this is not surprising – these moderately challenging concepts are easily drowned out by the noise of commercial entities competing to launch AI systems, and by press reporting that, at its worst, buries serious discussion of AI safety under language like “killer robots” and “AI overlords”.

 

Yet cutting through this confusion is critical to enabling meaningful discussion of safe and responsible AI as the societal impacts of AI continue to grow rapidly.

 

In our view, there are two major distinctions between AI harms and risks:

  • Harms:
    1. involve actual or potential damage to individuals, society and/or the environment, and
    2. are general in nature.
  • Risks:
    1. involve uncertainty that may have negative (or positive) consequences for specific activities or projects, and
    2. relate to a specific system or operational context.

 

To take a simple example, a broken leg is a harm. The possibilities that I will suffer a broken leg when I go skiing, or walk to work, are risks – and a much larger risk in the former case than the latter.
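
To make this concrete in code, here is a minimal sketch of the same idea, with purely hypothetical likelihood figures (illustrative, not real injury statistics): the harm stays fixed, while the risk varies with the context.

```python
# Minimal sketch: one harm, two contexts, two different risks.
# The likelihood figures below are hypothetical, purely for illustration.
harm = "broken leg"

likelihood_per_outing = {
    "skiing": 0.002,             # assumed: higher-risk activity
    "walking to work": 0.00001,  # assumed: everyday activity
}

for context, p in likelihood_per_outing.items():
    print(f"Risk of a {harm} while {context}: {p:.5f} per outing")
```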

 

Below, I explore these two distinctions in detail, and explain why analysis of both harms and risks is important to ensuring safe and responsible AI. 

 

Damage vs Uncertainty

The goal of working on safe and responsible AI must ultimately be to realize the likely huge benefits of AI in a way that is minimally damaging to individuals, society and the environment (as I have written before). At Saihub.info, our work focuses on potential ‘harms’ from AI, by which we mean types of damage from AI, and on solutions to address them (for more details, see my blog entry The case for a harms-based approach to AI risk). Harms may be associated with damage that is already occurring and/or with damage that may occur in the future.

 

‘Risk’ is different, with a precise definition in risk management practice:

  • ISO 31000 standard: “the effect of uncertainty on objectives” (whether positive or negative).
  • M_o_R certification: “an uncertain event or set of events that, should it occur, will have an effect on the achievement of objectives”.

 

Combining these concepts: uncertainties that could lead to harm are what we aim to prevent through risk management.

 

Others have noted the importance of this distinction. For example, in her excellent paper Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems for the digital security company Trail of Bits, Heidy Khlaaf draws a clear distinction between harms (and the ‘hazards’ that lead to harms) and risks / risk management.

 

But most others are less clear. For example, in Managing AI Risks in an Era of Rapid Progress, a group of AI leaders including Yoshua Bengio and Geoffrey Hinton use ‘risks’ to mean what we refer to here as ‘harms’. Similarly, the AI Vulnerability Database sets out a mission to “build out a functional taxonomy of potential AI harms”, but then states that its database “stores instantiations of AI risks”. Such loose use of the word ‘risk’ is in fact very common.

 

This may appear to be just a semantic distinction. After all, if it is common for people (even leaders like Bengio and Hinton) to use ‘risk’ to refer to damage rather than to uncertainty, why should we care? In general terms, as Heidy Khlaaf states in the paper above:

 

“misuse of terminology entailing compliance with established safety and security properties can mislead stakeholders with regard to the claims an AI system satisfies and provide a false sense of safety”.

 

This distinction also has major and specific practical importance for how we develop safe and responsible AI, as we can see by looking more closely.

 

General Harms vs Context-Specific Risks

Moving on to the second difference between harms and risks: harms are general types of bad things that could happen to you, to another person, to an institution, or to society. For an individual, they can include things like adverse health events, loss of a job, financial loss and broken relationships, among others.

 

But these general categories of harm do not tell us much about whether we should be concerned about a specific activity. Whether we care about a harm depends on the circumstances. For example, the nature of the activity matters – someone on a skiing holiday with a new love interest might be concerned about injury or relationship issues, but would be less likely to see loss of their job as an immediate concern.

 

Risk management is the discipline of assessing the possibility of specific uncertain harms in a particular context. In her paper, Khlaaf calls such contexts Operational Design Domains.

 

To sum up this distinction and the previous one, harms are general categories of damage, while risks involve context-specific uncertainty.
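
One way to picture this distinction, building on the skiing example above, is as two simple data structures. This is a hypothetical sketch, not Saihub’s actual register schema: a harm is a standalone category, while a risk binds that harm to a context, a likelihood and an impact.

```python
from dataclasses import dataclass

# A harm is a general category of damage, independent of any particular system.
@dataclass
class Harm:
    name: str          # e.g. "physical injury" (hypothetical category)
    description: str

# A risk instantiates a harm for a specific context, adding the uncertainty
# dimension (likelihood) and a severity estimate (impact).
@dataclass
class Risk:
    harm: Harm
    context: str       # the specific system, activity or deployment
    likelihood: float  # assumed probability over some defined period
    impact: float      # assumed severity score from 0 to 1

    def exposure(self) -> float:
        # A common risk-management heuristic: exposure = likelihood x impact.
        return self.likelihood * self.impact

# Same general harm, two contexts, very different risks (hypothetical numbers).
injury = Harm("physical injury", "actual or potential bodily damage")
skiing = Risk(injury, context="skiing holiday", likelihood=0.002, impact=0.7)
walking = Risk(injury, context="walking to work", likelihood=0.00001, impact=0.7)

assert skiing.exposure() > walking.exposure()
```

The point of the sketch is simply that the harm category is defined once, in general terms, while risks multiply with each new context in which that harm might occur.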

 

Why We Need to Understand Both Harms and Risks

Considering safe and responsible AI from the perspective of risk alone, or of harm alone, is insufficient to deal properly with the challenges that AI presents.

 

It is fairly obvious why we cannot do without risk management for AI. It is crucial to assess the context-specific uncertainties (both threats and opportunities) associated with AI systems in order to manage the risks of deploying those systems. Knowing the general types of harm that might occur is not sufficient for such context-specific analysis.

 

Equally, we cannot do without separate analysis of AI harms, but the reasons are somewhat more complicated, and fall into at least three related areas. First, failure to assess the full spectrum of potential harms produces an excessively narrow focus in safe and responsible AI work, including in risk management (which requires a clear understanding of the full range of harms that might result from an AI-related activity).

 

Unfortunately, most AI safety debates tend to focus on a single issue, or perhaps a few – such as existential risk, or bias and discrimination. Presently, many large AI companies are highly focused on ‘alignment’ between the output of large language models and (hotly debated) human values. But such focus diverts attention from a panoply of other existing and potential AI harms. Instead, we should consider many different types of potential AI harm – the Saihub harms register currently identifies more than 30 distinct categories of harm.

 

Second, various types of harm from AI are already occurring, and may not even be of significant concern to the deployers of AI systems. For example, the abilities of certain AI systems to displace or alter human jobs, or to engage in autonomous warfare, are not bugs but intended features of those systems. It is crucial that we recognize the distinction between intended or expected harms and uncertain risks.

 

Third, and flowing from the previous points, high-level policy decisions by governments and organizations about how to regulate and govern AI systems must be based on judgements about general types of harm, and potential solutions to address those harms. It would be impractical and foolhardy to base long-term policies on analysis of the specific risks facing particular AI systems. For this reason, the forthcoming EU AI Act has significant focus (especially in its recitals) on potential AI harms, leaving the task of risk management to providers and deployers of AI systems (as a mandatory obligation under Article 9).

 

Another example of where these differing considerations of risk and harm are important is the UK government’s plans for a “cross-economy AI risk register”, proposed in the consultation A pro-innovation approach to AI regulation. At a national level, it makes sense to assess which risks / uncertainties associated with AI deserve most attention in the UK. But this analysis, done properly, will need to begin with a deep understanding of the potential harms that could result from AI, including present harms that require attention. Saihub intends to engage with a further consultation on the UK AI risk register that is expected to be launched this year.

 

How Saihub Is Engaging With AI Harms and Risks

In summary, AI harms analysis and risk management interact extensively and are both crucial to ensuring safe and responsible AI, but they are very different activities. Harms analysis is a research activity that lies at the intersection of various disciplines, including public policy and general technical understanding of rapidly advancing AI. By contrast, AI risk management is a system-focused practical discipline, including evaluations of specific AI models.

 

At Saihub, while we will continue to consider aspects of AI risk management, our focus will remain squarely on analysis of AI harms, as well as potential solutions to address those harms. This is where our expertise lies, and where we believe we can make significant contributions.

 

Thank you for your attention to our thinking on AI harms and risks. If you would like to receive more of the same in your inbox (a few times per month at most), please follow us on Substack. And if you are interested in a deeper discussion or in getting involved, please get in contact.

 

Maury Shenk