




Connecting AI harms and social media harms


As we work at Saihub to elucidate and address actual and potential harms from artificial intelligence, I am highly conscious that we are writing and talking about the downsides of a very useful and inevitable technology. In Q&A at an event where I spoke last week, a senior executive asked me why we are “scaremongering” rather than providing a more balanced introduction to the benefits and risks of AI.


The short answer is this. I believe that the benefits of AI are greater than the risks and harms. But someone needs to talk about harms – not least because there is probably at least 100X as much funding going into AI application development as into AI safety research. AI harms are an area that we at Saihub are particularly well qualified to address. (We also like talking about AI benefits, but there are many better resources on that.)


A more eloquent case for the need to talk about AI harms was made in a recent MIT Technology Review article, “Let’s not make the same mistakes with AI that we made with social media,” by Nathan Sanders and Bruce Schneier (my favorite commentator on information security).


I have long made the point that the unexpected effects of social media provide a cautionary tale for AI – notably the (mixed but salient) evidence that optimizing for user engagement has led to user polarization (e.g. here and here). Sanders and Schneier take this much further, identifying five characteristics that social media shares with AI and that could cause harm to society: (1) advertising, (2) surveillance, (3) virality, (4) lock-in and (5) monopolization.


I won’t summarize Sanders and Schneier’s detailed points; I encourage you to read the piece yourself. But I’ll lean on their conclusion:


“The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.”

I am not an advocate for aggressive AI regulation – indeed, in a recent blog for Steptoe (where I am an advisor) and associated online roundtable, I expressed sympathy for the cautious UK “pro-innovation” approach to AI regulation compared to the much more prescriptive and proscriptive EU AI Act. However, it is clearly essential that we think seriously about how to avoid and mitigate AI harms, and Sanders and Schneier are entirely right that social media provides an instructive analogy.


Maury Shenk