Putting the AI in Racism: How AI Technology Carries Gender and Racial Biases
If you have been reading the news or browsing other blogs lately, you have probably seen ChatGPT, DALL-E, or OpenAI brought up. These platforms, and their parent company, OpenAI, have taken the AI world by storm and have been the talk of the tech town for a while now, my partner included. I can remember sitting watching TV and having my partner run into the living room to show me an image he had created with DALL-E 2 or a prompt ChatGPT had answered. He was beyond excited about these new platforms and amazed by how accurate each one seemed.

Although the results were not always accurate, the seeming precision DALL-E and ChatGPT could produce was groundbreaking. “OpenAI started by trying to build a system that understood language, taking advantage of the troves of text on the internet to learn from, OpenAI officials told The Washington Post….OpenAI also tried to combine vision with language…That resulted in DALL-E, which was released in January 2021, and could create images based off human prompts. Soon after, it created DALL-E 2, a program that generated even better photorealistic images.”[i] During 2021, everyone I knew was talking about these platforms and sending each other things they had the AI create. Whole forums and platforms were dedicated to sharing and disseminating these creations.
[Image: ChatGPT’s own response to a prompt asking why AI has racial and gender biases.]

The company, OpenAI, states, “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”[ii] From this mission statement, it seems OpenAI wants everyone to benefit in a way that enhances their lives. However, as with everything, there are drawbacks. “One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, the thinking goes, and predicting criminal behavior with big data would sidestep racial prejudice in policing.

[Image source: https://www.wired.com/story/ai-sees-man-thinks-official-woman-smile/]
But a new study shows that computers can be biased as well, especially when they learn from us. When algorithms glean the meaning of words by gobbling up lots of human-written text, they adopt stereotypes very similar to our own.”[iii] Ultimately, humans program AI, and we transfer our biases onto it. Even with machine learning, which many believe to hold no biases, the data fed in has an inherently biased starting point. We live in a systemically racist and gendered society where the “norms” are burned into us from an early age and constantly perpetuated throughout our lives. A simple phrase makes my point: how many times have we heard “boys will be boys”? Experts are writing, “It is becoming alarmingly clear from a growing body of evidence that decision algorithms are perpetuating injustice for many. Many cases are emerging that postcode, ethnicity, gender, associations and poverty negatively bias the decisions being delegated to machines….One of the common causes of decision biases arises from ethnicity or gender biases, often unconscious, of the programmer or of those classifying the data samples. Another major cause is the application of machine learning algorithms, such as deep learning, and the way in which they are trained.”[iv] Humans make the AI, and there is no way to separate our very thoughts from this creation.
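To make the idea in that study a little more concrete, here is a minimal sketch of how word embeddings can pick up gendered associations from text. The words, vectors, and numbers below are invented toy values purely for illustration; they are not the embeddings or the method used in the study itself, where the same pattern emerges from models trained on huge real-world text corpora.

```python
# Minimal sketch: how word embeddings can encode gendered associations.
# The vectors below are tiny, made-up toy values for illustration only.
import numpy as np

def cosine(a, b):
    """Cosine similarity: how 'close' two word vectors are in meaning."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-dimensional embeddings (real ones have hundreds of dimensions).
vectors = {
    "he":       np.array([ 0.9, 0.1, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.2]),
    "engineer": np.array([ 0.7, 0.5, 0.1]),
    "nurse":    np.array([-0.7, 0.5, 0.1]),
}

for occupation in ("engineer", "nurse"):
    bias = cosine(vectors[occupation], vectors["he"]) - cosine(vectors[occupation], vectors["she"])
    print(f"{occupation}: association with 'he' minus 'she' = {bias:+.2f}")
# A positive number means the occupation sits closer to "he" than to "she"
# in the vector space -- the kind of stereotype the study measured.
```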
It is further argued, “One of the benefits of using machine learning systems in an engineering context is that they reduce or remove the impact of outliers (examples outside of the norms in the data) in the training data. For example, shaky arm movements of a robot can be turned into smooth movements by training with machine learning. However, in the context of a decision algorithm, the ‘outliers’ can be the minority groups in the data. These may be people from different ethnic groups or a low represented gender. Google illustrates this problem by training a system with drawings of shoes. Most of the sample are the plain shoes or trainers drawn by the majority of people. But some people drew high-heels. Because this was a minority, the post training tests misclassified high heels as ‘not shoes’. This is a simple example where the misclassification can be clearly seen. In a targeting context there would be no way of knowing which minorities could mistakenly fall into the category of legitimate targets. And in conflict it is rarely possible to get clear information about casualties and whether or not they were legitimate targets or collateral damage.”[v]
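The shoe example boils down to a class-imbalance problem: when one subgroup is rare in the training data, a model that optimizes overall accuracy can simply write it off. Below is a hedged, minimal sketch of that effect using made-up numbers; the feature names, counts, and classifier choice are my own assumptions for illustration, not Google’s actual experiment.

```python
# Minimal sketch of the "minority as outlier" problem: high heels are a small
# minority of the "shoe" class, so a model fit to the overall data can end up
# labelling them "not a shoe". All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical features per drawing, e.g. "sole flatness" and "heel height".
trainers   = rng.normal([0.9, 0.1], 0.05, size=(200, 2))  # majority of shoes
high_heels = rng.normal([0.2, 0.9], 0.05, size=(5, 2))    # rare minority of shoes
not_shoes  = rng.normal([0.3, 0.6], 0.15, size=(200, 2))  # everything else

X = np.vstack([trainers, high_heels, not_shoes])
y = np.array([1] * 205 + [0] * 200)  # 1 = shoe, 0 = not a shoe

model = LogisticRegression().fit(X, y)

print("trainers classified as shoes:  ", model.predict(trainers).mean())
print("high heels classified as shoes:", model.predict(high_heels).mean())
# With only 5 high-heel examples against 200 trainers, the decision boundary
# is drawn around the majority, so the rare high-heel drawings tend to land
# on the "not a shoe" side.
```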

[Image source: https://medium.com/swlh/is-text-to-image-ai-empowering-the-creator-economy-or-dooming-it-f01ea1444df1]
However, in the end, we cannot rely on AI to be unbiased. “AI decision algorithms and face recognition algorithms can be alarmingly biased or inaccurate with darker shades of skin and with women. These may well improve over time but there have been no magic bullet solutions despite massive efforts and several announcements. Many of the companies developing software, particularly for policing, insist that they did well on their inhouse testing.”[vi]
When discussing how computers learn and behave, we must remember, “AI is a set of tools and technologies that are put together to mimic human behavior and boost the capacity and efficiency of performing human tasks. ML [machine learning] is a subset of AI that automatically adapts over time based on data and end-user input. Bias can be introduced into AI and ML through human behavior and the data we generate.”[vii]

Computers are essentially doing our work for us, and we skew the data used to build them. “The ML [machine learning] model may be biased from the start if its assumptions are skewed. Once built, the model is tested against a large data set. If the data set is not appropriate for its intended use, the model can become biased. Bias can show up anywhere in the design of the algorithm: the types of data, how you collect it, how it’s used, how it’s tested, who it’s intended for or the question it’s asking. As ML learns and adapts, it’s vulnerable to potentially biased input and patterns. Existing prejudices and data that reflects societal or historical inequities can result in bias being baked into the data that’s used to train an algorithm or ML model to predict outcomes.”[viii] When this happens, groups are targeted unintentionally, or in some cases intentionally, making the systemically racist world we live in worse.
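As a concrete, hedged illustration of that last point, here is a minimal sketch of historical bias being baked into a model. The hiring scenario, the features, and every number are invented for demonstration; the only point is that a model trained to reproduce past decisions also reproduces the prejudice encoded in them.

```python
# Minimal sketch: historical bias in training labels gets "baked in".
# All data is synthetic and invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

skill = rng.uniform(0, 1, n)      # a genuinely job-relevant feature
group = rng.integers(0, 2, n)     # 0 = majority group, 1 = minority group

# Hypothetical historical hiring decisions: the minority group was held to a
# higher (biased) standard by past human decision-makers.
hired = (skill > 0.5 + 0.2 * group).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified new applicants, differing only in group membership.
applicants = np.array([[0.6, 0], [0.6, 1]])
print(model.predict_proba(applicants)[:, 1])  # predicted probability of being hired
# The model gives the minority applicant a lower score for identical skill,
# because the historical labels it learned from encoded that bias.
```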
Although AI systems such as DALL-E and ChatGPT show how far technology has advanced, we must remember that AI is not completely reliable. Even in straightforward facial recognition, faces of color are misidentified significantly more often. “Gender and racial biases have been identified in commercial facial recognition systems, which are known to falsely identify Black and Asian faces 10 to 100 times more than white faces, and have more difficulty identifying women than men.”[ix] Advancements are always exciting, but we must not trust them wholeheartedly or assume that technology can do no wrong. Our technology is made by us, and it reflects the world we live in and create.
[i] Verma, Pranshu. What to Know About OpenAI, the Company Behind ChatGPT, The Washington Post, https://www.washingtonpost.com/technology/2023/02/06/what-is-openai-chatgpt/.
[ii] OpenAI Website: Mission Statement, https://openai.com/about/.
[iii] Hutson, Matthew. Even Artificial Intelligence Can Acquire Biases Against Race and Gender, Science, https://www.science.org/content/article/even-artificial-intelligence-can-acquire-biases-against-race-and-gender.
[iv] Sharkey, Noel. The Impact of Gender and Race Bias in AI, Humanitarian Law and Policy, https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/.
[v] Sharkey, Noel. The Impact of Gender and Race Bias in AI, Humanitarian Law and Policy, https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/.
[vi] Sharkey, Noel. The Impact of Gender and Race Bias in AI, Humanitarian Law and Policy, https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/.
[vii] Winokur, Rebecca et al. Algorithm Bias, American College of Healthcare Executives, https://www.ache.org/blog/2020/the-impact-of-gender-and-racial-bias-on-an-algorithm.
[viii] Winokur, Rebecca et al. Algorithm Bias, American College of Healthcare Executives, https://www.ache.org/blog/2020/the-impact-of-gender-and-racial-bias-on-an-algorithm.
[ix] Winokur, Rebecca et al. Algorithm Bias, American College of Healthcare Executives, https://www.ache.org/blog/2020/the-impact-of-gender-and-racial-bias-on-an-algorithm.