
From BLM to IBM: Racism and Bias in AI

Media Technology and Society

With IBM deciding to pull out of all facial recognition business, we want to explore a major problem in AI: Racism and bias.

An entry by Mika Engelhardt and Johannes Eckes

Tuesday, November 10, 2020

 

Earlier this year (2020), the tech giant IBM announced that the company would shut down all of its facial recognition business. At the same time, CEO Arvind Krishna called for a “national dialogue” on whether the technology should be used at all.
Whether or not the timing was intentional, the announcement came in the context of the Black Lives Matter protests that broke out in the US and worldwide after the death of George Floyd and that are still ongoing.

Besides IBM not making any significant profit from facial recognition, another reason for the change was bias in the systems themselves. Concretely, this means that the accuracy of facial recognition applications can vary depending on characteristics like age, gender, or race. This finding is not particularly new, as it had already been demonstrated in studies from 2018 and 2019, yet IBM is the first major player in tech to react to it in a meaningful way.

How can AI become racist?

As questionable as the use of facial recognition may be, the technology has improved greatly over the last few years. At the same time, though, it has been shown to suffer from biases that can make the tools unreliable and even downright racist or sexist. But how does that happen? A company as big as IBM would surely never sell an openly racist algorithm, as the inevitable PR crisis would likely be catastrophic. Even less do we want to accuse individual developers of having racist intentions, or of reflecting such intentions in their work.

Examining the problem further, it becomes apparent that the bias usually lies not in the algorithm but in the data it relies on. Any artificial intelligence can only be as good as the data it is provided with. In plain terms, this means that artificial intelligences learn from experience and feedback, not unlike children. There are various training methods in use right now, but most of them rely on the AI starting with assumptions about how to solve a task and receiving feedback on the outcome, either from a user or from another program. The feedback is sent back to the AI in the form of a score and incorporated into the next attempt. The AI repeats this cycle, trying to maximise the score each round, until the score no longer improves.
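To make this loop concrete, here is a minimal sketch in Python. The task, the feedback function, and all numbers are invented for illustration; real training methods (gradient descent, reinforcement learning) are far more sophisticated, but the cycle of guessing, scoring, and improving is the same.

```python
import random

# Hypothetical task: find parameters close to a hidden target.
target = [0.2, 0.8, 0.5]

def feedback(params):
    # Stand-in for a user or another program scoring the output:
    # higher is better, a perfect match scores 0.
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [random.random() for _ in range(3)]  # the AI's initial assumptions
best_score = feedback(params)

for _ in range(1000):
    # Propose a small random change to the current assumptions ...
    candidate = [p + random.gauss(0, 0.05) for p in params]
    candidate_score = feedback(candidate)
    # ... and keep it only if the feedback score improved.
    if candidate_score > best_score:
        params, best_score = candidate, candidate_score

print(params, best_score)
```

Crucially, the loop never asks whether the feedback itself is fair or representative; it only maximises the score it is given.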

But what happens if a facial recognition system is only fed pictures of one ethnicity, or only of men? In that case, it is highly likely that a person of a different ethnicity or gender will be miscategorized or not recognized at all. This is because, unlike humans, an AI only cares about the raw pixel values of an image. It sees no deeper meaning in an image than the shapes, colours and textures it is searching for. As a consequence, a person whose skin colour differs from that of the people in the training dataset will likely not be identified, as the toy example below shows.
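The following sketch uses made-up two-dimensional “face” features instead of real images, and a deliberately naive recognizer, but it makes the effect of a skewed dataset visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" features: group A dominates the training data, group B
# is barely represented and drawn from a different distribution.
group_a = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
group_b = rng.normal(loc=3.0, scale=1.0, size=(10, 2))

# A naive recognizer learns the average "face" from the skewed data,
# so the learned centroid is dominated almost entirely by group A.
centroid = np.vstack([group_a, group_b]).mean(axis=0)

def is_recognized(sample, threshold=3.0):
    # A sample counts as a face only if it is close to the learned average.
    return np.linalg.norm(sample - centroid) < threshold

test_a = rng.normal(0.0, 1.0, size=(100, 2))
test_b = rng.normal(3.0, 1.0, size=(100, 2))
print("group A recognized:", np.mean([is_recognized(s) for s in test_a]))
print("group B recognized:", np.mean([is_recognized(s) for s in test_b]))
# Typical output: group A is recognized almost always, group B far less often.
```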

Other instances

With IBM taking this very public step away from facial recognition, we can also take a look at other instances in which AI has been shown to make racist decisions or come across as racist.
One of the most famous examples is “Tay”, an artificial intelligence bot created by Microsoft and put onto Twitter on March 23, 2016. “Tay” was meant to show how artificial intelligence can learn from everyday interaction, but after just 16 hours Microsoft had to pull the plug and delete the profile, because “Tay” had started to tweet racist and misogynistic content. Over 96,000 tweets were published by “Tay” during this short time, and while at first they only concerned celebrities or horoscopes, they quickly shifted towards controversial topics like the Holocaust (“Tay” published tweets praising Hitler and saying she “hated Jews”), racist statements, or 9/11. This happened mainly because “Tay” was fed by online trolls who influenced her behavior to a degree at which it became irreparable. A second version of “Tay” had to be taken offline even more quickly. This is a special case, because the racist behavior of the AI can be connected to trolling and influence by humans who either are racist or thought of it as a joke, but it is still a noteworthy example of how fragile AI behavior can be and how difficult it is to control.
Another example, one with far more serious consequences for people, is the use of AI in the US healthcare system. In 2019, it became public that a widely used software exhibited deeply racist behavior. In hospitals, the software decides which patients qualify for a special program for long-term conditions such as diabetes. The AI was given the costs each patient generated for the healthcare system, but disregarded personal needs and high-risk cases. As a result, patients of African-American heritage were ranked lower and did not qualify for the special treatment, because they generally use the healthcare system less regularly than white people do. This can be traced back to structural problems in the USA: people of color are still disadvantaged, often have less money, or live in poorer neighborhoods. White people, on the other hand, can afford to see a doctor more often and as a result produce more data for the AI to learn from. Ironically, this behavior does not just hurt Black patients but also the US healthcare system itself, because the cost of treating late complications is much higher than the cost of the preventive care the AI was meant to assign.
Many hospitals and institutions in the USA use this or similar software, and the disadvantages for people of color are reported to be severe. The sketch below illustrates the underlying mechanism: when an algorithm ranks patients by cost instead of need, a group with less access to care is systematically downgraded.
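This sketch uses entirely invented numbers, not the real hospital data, and assumes equal true medical need in both groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical patients: both groups have the same true medical need,
# but group B generates lower costs because of less access to care.
need = rng.normal(5.0, 1.0, size=n)                  # true health need
group = rng.choice(["A", "B"], size=n)
access = np.where(group == "A", 1.0, 0.6)            # group B sees doctors less
cost = need * access + rng.normal(0.0, 0.5, size=n)  # what the algorithm sees

# The algorithm ranks patients by cost and admits the top 20 per cent
# to the special program.
threshold = np.quantile(cost, 0.8)
admitted = cost >= threshold
for g in ("A", "B"):
    print(g, f"{admitted[group == g].mean():.0%} admitted")
# Despite identical need, group B is admitted far less often,
# because cost is only a proxy for need and the proxy is biased.
```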

Possible solutions

So in the end, the question remains whether anything can be done about these problems. We cannot simply tell a machine to be “more ethical” or “less racist”, as it is just that: a machine. It has no concept of human ethics, which are themselves incredibly complex and often based on feelings, not on facts (data).
But there are solutions to this problem.

Firstly, while ethics are complex and often highly subjective, they can be broken down into simple rulesets, which in turn can be programmed into an algorithm. This idea is not new; its most prominent example is the Three Laws of Robotics by sci-fi author Isaac Asimov. Another, more recent application is the ongoing debate about autonomous cars and their decision-making in critical situations (e.g. hitting a pedestrian vs. saving the driver). Various solutions have been proposed, but no official decision has been made yet. In code, such a ruleset can be as simple as a set of hard constraints that veto an action, as sketched below.
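The following toy illustration loosely echoes Asimov’s laws; the `Action` type and its flags are invented for this sketch and not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_human: bool = False
    endangers_self: bool = False

def is_permitted(action: Action) -> bool:
    # Rules are checked in priority order, mirroring Asimov's Three Laws.
    if action.harms_human:      # 1. never harm a human
        return False
    if action.disobeys_human:   # 2. obey human orders
        return False
    if action.endangers_self:   # 3. protect the system itself
        return False
    return True

for a in (Action("swerve into pedestrian", harms_human=True),
          Action("emergency brake")):
    print(a.description, "->", "permitted" if is_permitted(a) else "vetoed")
```

The hard part, of course, is not writing the checks but agreeing on the rules and on how to translate a messy real-world situation into those boolean flags.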
Traditionally, societies have relied on governments to ensure that ethics are observed, through legislation and policing. Accordingly, those governments are and will be the ones forced into action as AI becomes increasingly prevalent. This is already in the making, with the European Commission, the USA, the OECD, and several NGOs having set up committees or roadmaps on the topic in the past few years.

Secondly, and probably most obviously, a human control instance that supervises the decisions made by an autonomous system is another option for improvement. However, this can only be a temporary solution: an autonomous machine that is controlled by a human is not really autonomous and thus contradicts the central idea of AI. But as long as no other safely working solution has been found, this might be the best option to keep AI in check. A common pattern is to let the system act only on high-confidence decisions and route everything else to a human, as sketched below.
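A minimal sketch of such a confidence threshold, with a random stand-in model and a made-up cut-off value:

```python
import random

CONFIDENCE_THRESHOLD = 0.95  # hypothetical cut-off, would need careful tuning

def classify(image):
    # Stand-in for a real recognition model: returns a label and a
    # confidence score. Random here, purely for illustration.
    return "match", random.random()

def decide(image, review_queue):
    label, confidence = classify(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                    # confident enough: act autonomously
    review_queue.append(image)          # uncertain: defer to a human reviewer
    return None

queue = []
results = [decide(f"image_{i}", queue) for i in range(20)]
print("auto-decided:", sum(r is not None for r in results),
      "| sent to human review:", len(queue))
```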

Lastly, sufficiently diversified datasets will be necessary to ensure a sustainable and non-discriminatory use of AI; a simple audit of group representation, as sketched after this paragraph, is an obvious first step. In the end, though, the biases that appear in AI are only a reflection of the biases afflicting society. With this in mind, it becomes obvious that the issue cannot be resolved without also tackling systemic and public bias in society. As abstract and mysterious as AI may seem to the individual, the problems that affect it are the same ones that affect us all.
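On the practical side, measuring representation is where diversifying a dataset starts. A minimal sketch with invented group labels (a real audit would cover many more attributes and also check per-group accuracy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set with one demographic label per sample.
groups = np.array(["A"] * 900 + ["B"] * 100)

# Step 1: audit how balanced the dataset actually is.
labels, counts = np.unique(groups, return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))  # {'A': 900, 'B': 100}

# Step 2: naive rebalancing by oversampling the under-represented group.
target = counts.max()
balanced_idx = np.concatenate([
    rng.choice(np.where(groups == g)[0], size=target, replace=True)
    for g in labels
])
print(len(balanced_idx))  # 1800 indices, 900 per group
```

Oversampling is only the crudest fix; collecting genuinely representative data is harder but ultimately the more honest solution.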

 

Additional literature and recommendations:

Crawford, K. (2016). Artificial intelligence’s white guy problem. The New York Times, June 25, 2016. Available online: https://www.cs.dartmouth.edu/~ccpalmer/teaching/cs89/Resources/Papers/AIs%20White%20Guy%20Problem%20-%20NYT.pdf

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist – it’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8

European Commission (2019). Ethics guidelines for trustworthy AI. Available online: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai