Saihub 10 weeks on: expanding our content and what's next
We launched Saihub.info just over 10 weeks ago and have since made excellent progress, along with a recent bump in the road that has helped to clarify what's next.
Making Progress on Safe and Responsible AI Content
The progress is all about the content. We have been rapidly building content on safe and responsible AI, including summary analyses of harms in our harms register, summaries of key AI legal and policy documents, blog posts and more. The response to this content has been excellent, including a steadily increasing flow of readers. From a small start, we are building a genuinely useful resource, and we will be continuing to make it much, much better.
Saihub's harms register, and the harms/risk-based approach to AI safety that it reflects, are central to our work. Comments last week by OpenAI CEO Sam Altman help to illustrate why this approach is crucial. Altman said: "There's some things in there that are easy to imagine where things really go wrong. And I'm not that interested in the killer robots walking on the street direction of things going wrong. I'm much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong."
Altman is correct that there will be many unanticipated effects of AI as it becomes more widespread in society. There is a clear analogy to what has happened with social media, where a new technology for connecting people has also become a driver of polarization and misinformation -- which is being amplified by AI.
However, we can reduce the risks of harmful effects from AI by working to better anticipate and analyze them in a disciplined way. That is what Saihub is about.
In conducting our analysis, Saihub frames its work in terms of "harms" rather than "risks", the term that most others have used -- such as the UK government in its recent consultation response on its Pro-innovation approach to AI regulation. "Harm" and "risk" are closely related concepts, but the distinction is an important one that we have made intentionally. I will write more about this in my next blog post.
Moving Forward in the Same Direction
Our progress makes us ambitious. We set out to be the leading source of information on safe and responsible AI, and this is beginning to look like more than a vision.
In an effort to accelerate our progress, we recently applied for UK government funding in an Innovate UK competition for "regulatory science" projects -- teaming with partners Northeastern University London, St George's House and Z/Yen. The results of the competition were announced this week and, although we received very good scores, we were not successful against tough competition (90 applicants for 30 funding places). This was a disappointing bump to our immediate ambitions.
But this small disappointment has helped clarify the way forward -- doing more of the same, because we are confident that we are on the right track. We will be continuing to steadily build the content on Saihub, as well as looking for ways to support our expansion through funding and otherwise. If you have ideas in any of these areas, please reach out.
Maury Shenk