Category: Misinformation

Potential Source of Harm: Deepfakes - Identity Fraud

Updated September 14, 2024

 

Nature of Harm

Online crime involving identity fraud is nothing new: it has evolved alongside digital technologies, and is rooted in fraud techniques that are hundreds or thousands of years old. But the ability of AI to produce convincing impersonations of human images, voices and behavior gives criminals powerful new tools.

 

An incident that illustrates the potential of such harm took place in January 2024. A Hong Kong finance employee at international design and engineering firm Arup was convinced to pay out $25 million to criminals based upon confirmations given in a video conference by participants he believed to be the firm's chief financial officer and other staff members -- in fact, all were deepfakes, using both image and voice impersonation.

 

Regulatory and Governance Solutions

Identity fraud is challenging to address with regulation, because those engaging in such fraud are unlikely to be deterred by the fact that their conduct is illegal -- indeed, in the vast majority of cases, incidents of this nature are already illegal under generally applicable law that significantly pre-dates deepfake technology. The lack of deterrent effect is compounded (as has long been the case) by the fact that digital fraud can be committed from a distance, with attackers being challenging to identify and prosecute across borders.

 

Governance measures (especially those involving awareness and education) are likely to be much more effective against deepfake-based fraud. For companies, these include procedures that reduce susceptibility to fraud (such as safe computing practices and disciplined checks on any payment), and associated training for employees. For individuals, safe practices, awareness and training are likewise crucial. Many banks and other financial institutions have adopted increasingly detailed processes to reduce the risk that their customers will be victims of online fraud, including AI-based identity fraud.
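The payment-check procedures described above can be sketched as a simple policy gate. This is a minimal, hypothetical illustration: the channel names, dollar threshold, and field names are assumptions for the sketch, not any firm's actual controls.

```python
from dataclasses import dataclass

# Hypothetical sketch of a dual-control payment check. The threshold,
# channel names, and fields are illustrative assumptions only.

@dataclass
class PaymentRequest:
    amount_usd: float
    requested_via: str           # e.g. "video_call", "email", "in_person"
    out_of_band_confirmed: bool  # confirmed via a channel the payer initiated
    second_approver: bool        # independently approved by a second employee

HIGH_VALUE_THRESHOLD_USD = 10_000  # assumed policy threshold

def payment_allowed(req: PaymentRequest) -> bool:
    """Return True only if the request passes basic anti-fraud controls."""
    # Requests arriving over impersonation-prone channels always require
    # confirmation through a separately initiated channel (e.g. calling
    # back a known phone number), regardless of amount -- the control that
    # would have stopped the deepfake video-conference fraud.
    if req.requested_via in {"video_call", "email"} and not req.out_of_band_confirmed:
        return False
    # High-value payments additionally require a second, independent approver.
    if req.amount_usd >= HIGH_VALUE_THRESHOLD_USD and not req.second_approver:
        return False
    return True
```

The design point is that the check does not try to detect the deepfake itself; it simply refuses to treat an impersonation-prone channel as sufficient authorization on its own.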

 

Technical Solutions

Technical solutions for identity fraud are challenging, in part because AI-generated content is increasingly difficult to identify. However, many vendors are beginning to augment their products to address such risks.

 

YouTube announced in September 2024 that it is developing tools to help content creators identify uses of their faces and voices in YouTube videos.

 

Government and Private Entities

Identity fraud is, for the time being, primarily a problem addressed by private initiative rather than government action (apart from prosecution of offenders who are identified). Banks and other financial institutions (e.g. credit card companies) play a crucial role in preventing identity fraud.