Category: Ethical

Potential Source of Harm: Bias / Discrimination

Updated February 11, 2024

 

Nature of Harm

AI bias is generally defined as a phenomenon that occurs when an algorithm produces results that are systematically prejudiced due to incorrect assumptions in the machine learning process. Such incorrect assumptions often relate to the data (e.g., the data on which an AI model is trained do not correctly reflect the population distribution), but they can also relate to the structure of the model itself. "Bias" in the context of AI harm refers primarily to adverse effects on humans, and not to the broader technical phenomenon that any AI-based representation of a real-world distribution tends not to match that distribution exactly (which is also referred to as "bias").
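
To make the data-related case above concrete, here is a minimal sketch in Python. Everything in it is synthetic and invented for illustration (the groups, sample sizes, and distributions do not come from any real system): a classifier is trained on data in which group B is heavily underrepresented, and its accuracy is then measured separately for each group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature and a binary label; `shift` moves the group's feature
    # distribution, standing in for real-world differences between groups.
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return x, y

# Training data: group A is heavily overrepresented relative to group B,
# so the sample does not reflect a population containing both groups.
xa, ya = make_group(5000, shift=0.0)  # group A
xb, yb = make_group(100, shift=1.5)   # group B (underrepresented)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group: the learned
# decision boundary fits group A and systematically misclassifies group B.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    x, y = make_group(2000, shift)
    print(f"group {name}: accuracy = {model.score(x, y):.2f}")

Running this sketch typically shows markedly lower accuracy for group B than for group A, illustrating how a skewed training sample alone, without any prejudiced intent, can produce systematically worse outcomes for an underrepresented group.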

 

Bias and discrimination are among the most studied AI harms, and the Saihub.info team does not yet believe we have the expertise to write effectively about this topic (we are working on building that expertise), so this page is mostly a placeholder. If you are interested in assisting us with this topic, please get in touch.

 

Regulatory Solutions

In July 2023, New York City began to enforce Local Law 144, which prohibits employers in the city from using an automated employment screening tool unless the tool has undergone a bias audit within the previous year. We understand this to be the first law of its kind in the United States.

 

In the SCHUFA Holding decision of December 2023, the European Court of Justice held that automated credit scoring constitutes "automated individual decision-making" subject to the restrictions of Article 22 of the EU General Data Protection Regulation where users rely heavily on the score in their decisions.

 

(As noted above, we do not intend to suggest that these references cover more than small aspects of the law on AI bias; they are included primarily because we have come across them in our research.)