Category: Information Security and Data Protection

Potential Source of Harm: Cyberattacks

Updated May 16, 2024


Nature of Harm

AI is already both enabling new types of cyberattacks and enhancing existing forms of cyberattack. A few examples are:

Such threats will almost certainly continue to proliferate.


AI is also being used to assist cyber defense, by companies such as Darktrace and CrowdStrike, although the net impact of AI on cybersecurity currently appears to favor attackers.
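A common building block of such AI-assisted defense is anomaly detection: learning a baseline of normal activity and flagging deviations from it. The sketch below is a deliberately minimal illustration of that idea using a z-score over event counts; the function name, data, and threshold are hypothetical, and commercial products use far richer behavioral models.

```python
import statistics

def zscore_flags(baseline, live_counts, threshold=3.0):
    """Flag indices in live_counts whose value exceeds the baseline
    mean by more than `threshold` standard deviations.

    A toy stand-in for the anomaly-detection approach used in
    AI-assisted cyber defense; real systems model many signals, not
    one univariate count.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return [i for i, c in enumerate(live_counts)
            if (c - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts: a quiet baseline week,
# then a live window containing a spike suggestive of an automated
# credential-stuffing attempt.
baseline = [12, 9, 11, 10, 13, 11, 12, 10]
live = [11, 480, 12]
print(zscore_flags(baseline, live))  # flags index 1, the spike
```

In practice the "baseline" would itself be produced by a learned model of users, devices, and traffic, but the detect-deviation-from-normal structure is the same.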


The harm of cyberattacks is related to the separately identified harm of adversarial attacks on AI models. This page focuses on the use of AI to enable cyberattacks on IT systems generally, while the adversarial attacks page focuses on attacks on AI models themselves (whether or not those attacks are AI-based).


Regulatory and Governance Solutions

There is substantial existing regulation and governance work on cybersecurity, and most current work focuses on extending those solutions to cover AI rather than treating AI as a separate problem. Examples of government work in this area include that of ENISA (an EU agency) and the UK National Cyber Security Centre.


Technical Solutions

As with regulatory and governance approaches, technical defenses against AI-based attacks are mostly extensions of existing cybersecurity work, although new techniques need to be developed in many cases (e.g. for novel attacks such as the worm mentioned above).


In April 2024, the US National Institute of Standards and Technology (NIST) released a draft of proposed Secure Software Development Practices for Generative AI and Dual-Use Foundation Models.


In May 2024, OpenAI published a blog post on secure infrastructure for advanced AI.


Government and Private Entities

There are a large number of government and private entities addressing cybersecurity challenges associated with AI. Some of these are mentioned above. We will provide further detail over time.