
Can an algorithm be racist?


Yes.

Well, sort of. If an algorithm is viewed simply as a mathematical computer program, it is incapable of being racist: it takes certain inputs and produces a determined output. However, if it is viewed as a computer program that helps humans make decisions, then it is shaped by people. Like a person, it adjusts its assumptions based on who is using it, and those assumptions can be racist.

Origin and Prevalence

Humans tend to put algorithms on a pedestal. They aren't supposed to be on the same level as us; they're supposed to be the product of the best of humanity. This is why it took a while for anyone to imagine that algorithms could be significantly influenced by their creators and by the society of their time. In fact, the question was rarely asked until people started to notice the effects of a 'racist' algorithm. One of the first instances of an algorithm being called out as racist came in 2009, when CNN reported that an HP webcam designed to track a user's face would follow a white person's face but not a black person's. Around the same time, some camera companies programmed their devices to detect whether someone in a photo is blinking; more often than not, these cameras registered people of Asian descent as always blinking. As algorithms have grown more advanced, some carry a stronger racist undertone than others, even when the company does not intend it.

Issues and Analysis

Can an algorithm be racist? To start, we need to define what algorithms are. Algorithms are the building blocks of computer programs: well-defined procedures that take input and produce output. Put simply, an algorithm is a series of instructions for solving a problem. For example, if I wanted to catch a flight to Los Angeles at the cheapest price possible, I could go to a website like Expedia, which uses an algorithm that compares prices across different airlines.
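The price-comparison idea above can be sketched in a few lines. This is a minimal illustration, not Expedia's actual method; the airline names and fares are invented:

```python
# A toy price-comparison algorithm: take fares as input, return the
# cheapest option as output. All names and prices are hypothetical.

def cheapest_flight(fares):
    """Return the (airline, price) pair with the lowest price."""
    return min(fares.items(), key=lambda item: item[1])

fares = {
    "Airline A": 312.00,
    "Airline B": 289.50,
    "Airline C": 305.75,
}

airline, price = cheapest_flight(fares)
print(airline, price)  # → Airline B 289.5
```

Even in this tiny sketch, the algorithm's answer depends entirely on the data it is given, which is the point the rest of this article develops.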

Algorithms can also use the information they gather to make assumptions about users. For example, when a video is suggested to you on YouTube, it is because the algorithm is drawing on the viewing data it has collected; it may say things like, "People who watched this video also liked this one." Algorithms can also be used to manipulate people. Websites like Amazon use algorithms that track your spending tendencies: if someone shops on Amazon regularly and the data show that they spend the most money on Thursdays, the algorithm will notice. Amazon will then show that customer more ads for items it believes they would like to buy, or resurface items they viewed in the past and considered buying.
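The "people who watched this also liked that" logic described above can be sketched as a simple co-occurrence count. This is an assumption-laden toy, not YouTube's or Amazon's actual system; the watch histories are invented:

```python
from collections import Counter

# Toy "also watched" recommender: count which other videos appear in
# the histories of users who watched a given video. Data is invented.

def also_watched(histories, video, top_n=2):
    """Return the top_n videos most often watched alongside `video`."""
    counts = Counter()
    for watched in histories:
        if video in watched:
            counts.update(v for v in watched if v != video)
    return [v for v, _ in counts.most_common(top_n)]

histories = [
    ["cats", "dogs", "birds"],
    ["cats", "dogs"],
    ["cats", "birds", "dogs"],
]

print(also_watched(histories, "cats"))  # → ['dogs', 'birds']
```

The sketch shows why such systems inherit the patterns of their users: the recommendations are nothing but the associations already present in the data, a mechanism that matters for the discriminatory examples discussed next.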

Dr. Safiya Noble, a professor at the University of Southern California's Annenberg School for Communication, wrote a book entitled Algorithms of Oppression, which critiques the digital age. Noble argues that this age of digital communication runs on "algorithms of oppression" that marginalize minority groups through the way they structure and encode the world around us, and she sees this as a significant part of systemic racism. Before earning her PhD, Noble worked in the advertising industry, where she tracked digital trends; the goal was to get as much content in front of clients' eyes as possible. Years later, she began to see the world of search engine optimization differently when a friend told her about the content that came up when you searched the term "black girls." She says the first page was almost exclusively sexualized and pornographic content. At first she thought it might be a glitch, but she saw the same thing when she searched for "latinas" or "asian girls."

Noble isn't the first person to notice discrimination built into online tools that many people still believe are objective. Latanya Sweeney, an African-American woman with a Harvard PhD, noticed that her search results were showing ads asking, "Have you ever been arrested?" These ads weren't appearing for her white colleagues. Sweeney then conducted a study demonstrating that the machine-learning tools behind Google's search engine were inadvertently racist, linking names more commonly given to black people to arrest records. Nor is the problem limited to racial discrimination: Google Play's recommender system has been found to suggest that those who download Grindr (a dating app for gay men) also download a sex-offender location-tracking app. In both cases, it wasn't necessarily that the programmers who created these algorithms were racist; rather, the algorithms were picking up on frequent discriminatory cultural associations between black people and criminal behavior, and between homosexuality and predatory behavior.

Dr. Noble's book addresses the significance of the digital age we live in. One of her main points is that companies like Google are so influential that they can shape public attitudes as well as reflect them; therefore, they should feel a social responsibility to shape public attitudes for the betterment of people from all backgrounds. In 2016, Google stated that its image search engine produces "a reflection of content from across the web, including the frequency with which types of images appear and the way they're described online". The statement came after people compared the image searches "three white teenagers" and "three black teenagers", and the results set off a reaction on social media attacking Google for the algorithm's output. Google and companies like it are responsible for representing a constantly changing society. Old prejudices die hard, but these companies need to overcome them, along with everyone else. Moving forward, in everyday searches and in more serious research, people should be wary of the information they are receiving and keep in mind how it may be biased.


All Content released CC0 (Public Domain) by the Digital Polarization Initiative.

The Digital Polarization Initiative is a cross-institutional project that encourages students to investigate and verify the information they find online. Articles are student-produced, and should be checked for accuracy before citation as sources.

DigiPo members can edit this page

Photo Credit: Header photos are generated randomly. Check this page for a list of photography credits and licensing.
