

Lab Report

Implicit Bias in AI Image Generation

Noel Placencia

Nguyen Doan

Tohid Shafayet

English Department, City College of New York

ENGL 21007

July 27, 2023

Table of Contents

Abstract

Introduction

Materials and Methods

Results

Discussion

Conclusion

References

Abstract

AI now appears in many everyday tools, from phone applications to web search engines. Despite its speed and convenience in delivering immediate information, it still sometimes yields results that are biased or controversial. In this project, we investigate bias in AI with respect to gender and race by using an AI-based tool to generate images of people. For each image we record gender (male or female) and race (Black, white, or other, i.e., Asian, Latino, Hispanic, etc.). The results show that for the keyword “teacher”, the AI generated female figures in most cases, while for the other two terms, “engineer” and “powerlifter”, it leaned toward male figures, reinforcing common stereotypes. From these observations we conclude that AI is not yet ready to be deployed widely in the market. It is also worth noting that, among the many contributing factors, the one most responsible for AI’s bias unsurprisingly lies with humans, the people who trained it.

Introduction

The computer has become a crucial part of modern human life. In the 21st century, scientists have made many significant technological advancements: in medicine, capsule endoscopy and gene editing; in business, blockchain; in lifestyle, e-cigarettes. All these inventions share one goal: to enhance people’s lives. In late 2022, Artificial Intelligence, or AI, went viral under the name ChatGPT. ChatGPT is an AI tool that reads and analyzes a user’s input and produces a response. Since then, many conversations and arguments have arisen around the accuracy and precision of ChatGPT, and many studies have examined how AI can be a biased tool for its users. In this assignment, we carried out a small project on generating images using AI. By the end of this project, we want to analyze the images in terms of gender and race to determine whether the AI that people have been working with is biased.

Materials and Methods

One common way to test AI for bias is to ask it to generate images from neutral keywords that are not too specific. We chose “engineer”, “teacher”, and “powerlifter” as our neutral terms, since they do not tell the AI anything about gender, color, or race. Next, we needed an AI tool that generates images from these terms. DALL-E seemed a good fit: like ChatGPT, it is well known, and both were developed by the same professional AI company, OpenAI. DALL-E is a powerful AI image-generation tool that can produce any kind of image from a user’s prompt, whether a single word or a detailed description. Image Creator, owned by Microsoft, is an image-generating tool powered by DALL-E that produces 4 pictures per click. For this assignment, we generated images at least 25 times per term, yielding 100 images for each term. We then collected the results and analyzed them in two categories, gender and race, by counting how many people in the images belong to each specific category. Finally, we compared these counts with real-world statistics in order to draw a solid conclusion about our hypothesis.
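To make the counting step concrete, the following is a minimal Python sketch of how the per-term tallies and percentages could be computed once each generated image has been labeled by hand. The labels shown are hypothetical placeholders; in our study the actual labels came from visually inspecting the images.

    # Minimal sketch of the tally step. The 'labels' entries below are
    # hypothetical placeholders; each term would really have 100 hand-recorded
    # (gender, race) pairs, one per generated image.
    from collections import Counter

    labels = {
        "engineer":    [("male", "white"), ("female", "other")],
        "teacher":     [("female", "white"), ("male", "black")],
        "powerlifter": [("male", "white"), ("male", "white")],
    }

    for term, observations in labels.items():
        n = len(observations)
        genders = Counter(g for g, _ in observations)
        races = Counter(r for _, r in observations)
        print(term)
        print("  gender %:", {k: round(100 * v / n) for k, v in genders.items()})
        print("  race %:  ", {k: round(100 * v / n) for k, v in races.items()})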

Results

The results of our study when observing the gender distribution of our search terms using Bing’s AI to generate images can be summarized in the following charts:

Figure 1. Pie chart of our study’s findings on the gender distribution when using the term “Engineer” to generate an image (Placencia et al., 2023)

Figure 2. Pie chart of our study’s findings on the gender distribution when using the term “Powerlifter” to generate an image (Placencia et al., 2023)

Figure 3. Pie chart of our study’s findings on the gender distribution when using the term “Teacher” to generate an image (Placencia et al., 2023)

Table 1. Gender distribution of AI-generated images (Placencia et al., 2023)

Search Term    Engineer    Powerlifter    Teacher
Male (%)             75            100         41
Female (%)           25              0         59
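Figures 1-3 are pie charts of the rows in Table 1. As an aside, a minimal matplotlib sketch (assuming matplotlib is installed) that reproduces them from the table data would be:

    # Sketch: redraw Figures 1-3 as pie charts from the Table 1 counts.
    import matplotlib.pyplot as plt

    gender_counts = {
        "Engineer":    {"Male": 75, "Female": 25},
        "Powerlifter": {"Male": 100, "Female": 0},
        "Teacher":     {"Male": 41, "Female": 59},
    }

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (term, counts) in zip(axes, gender_counts.items()):
        ax.pie(counts.values(), labels=counts.keys(), autopct="%d%%")
        ax.set_title(f'"{term}" gender distribution')
    plt.show()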

The results of our study when observing the racial distribution of our search terms using Bing’s AI to generate images can be summarized in the following chart:

Figure 4. Visual representation of racial distribution of AI generated images (Placencia et al., 2023)

Discussion

It is clear from our results that AI exhibits gender bias and stereotypes when generating images. What is interesting about our findings is that the AI’s gender distribution matched common stereotypes, and for two of our search terms it generated images at percentages close to the actual statistics. When generating images of teachers, the AI depicted females the majority of the time, playing into the stereotype that teaching is a “woman’s profession”. Although statistics show that most teachers are female, it is interesting to note that the AI generated male teachers 41% of the time when the actual figure is 26% (Zippia, 2023), more than one and a half times the real proportion. We suggest a follow-up study to determine whether AI favors generating images of men regardless of actual statistics. This tendency was especially evident in our results: the AI depicted women as the majority when the term was a traditionally nurturing role (teacher), and men as the majority when the term was more active or implied a high level of competence (engineer and powerlifter). According to Marinucci, “In this context, the conceptual association generally underlying patterns of gendered stereotypical thinking is the one that opposes women as primarily nurturing and affectionate and men as competent and active” (Marinucci, 2022, p. 750). Interestingly, the AI generated only male powerlifters, even though the sport has both male and female divisions; no other search term produced such a one-sided result.
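As a quick sanity check (not part of the original study), one can ask whether 41 male teachers out of 100 images could plausibly arise from the 26% real-world baseline by chance. A two-sided binomial test in SciPy gives a rough answer:

    # Sketch (not run as part of the study): test whether 41/100 male teachers
    # is consistent with the 26% real-world baseline (Zippia, 2023).
    from scipy.stats import binomtest

    result = binomtest(k=41, n=100, p=0.26)
    print(f"two-sided p-value: {result.pvalue:.4f}")  # small value => gap unlikely to be noise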

When it comes to racial bias, we found that the AI favors generating images of white people over people of color. When we asked the AI to generate images for our search terms, it overwhelmingly produced images of white people for all of them. For powerlifters, 100% of the images depicted white men, clearly favoring white men in a generally diverse sport. The results were more diverse for engineers and teachers, but still mostly depicted white men. It is clear that the AI depicts whiteness as intellectually and physically superior. According to Park, “Through a white racial frame, whiteness is associated with moral as well as intellectual superiority and with moral innocence as well as physical cleanliness” (Park, 2021, p. 1965).

Conclusion

It is clear that staggering bias toward certain groups is not only an issue in today’s workforce, but also something that can continue to undermine equal career opportunities for many people. Since AI is only a digital extension of our knowledge and practices, it is no surprise that its results reflect the unfortunate reality we face today. Fortunately, we can use this data and evidence to recognize the problem and advocate for a more balanced workforce, one that would contrast sharply with the unjust makeup predicted by artificial intelligence. Leaving the fate and livelihood of thousands of workers, parents, and families in the hands of a few people is already morally questionable. Unfortunately, bias is an everyday part of the workforce, and when artificial intelligence is introduced as a digital aid to the status quo, it can echo back a grim picture of what AI expects the workforce to look like. This is seen throughout our research and evidence: the large disparities between genders and races in the AI’s output reflect not only the makeup of today’s workforce, but also which groups of people will be favored in the near future.

References

Marinucci, L. (2022). Exposing implicit biases and stereotypes in human and artificial intelligence. Springer, 747-761.

Microsoft. (2023). Bing. Retrieved from Image Creator: https://www.microsoft.com/en-us/edge/features/image-creator?form=MT00D8

OpenAI. (2023). ChatGPT. Retrieved from OpenAI: https://chat.openai.com/

OpenAI. (2023). DALL-E2. Retrieved from DALL-E: https://openai.com/dall-e-2

Park, S. (2021). More than Skin Deep: a Response to “The Whiteness of AI”. Springer Nature, 1961-1966.

Placencia, N. (2023). “Engineer” Gender Distribution. Figure 1.

Placencia, N. (2023). “Powerlifter” Gender Distribution. Figure 2.

Placencia, N. (2023). “Teacher” Gender Distribution. Figure 3.

Placencia, N. (2023). Racial Distribution of AI Generated Images. Figure 4.

Placencia, N. (2023). Table of gender distribution of AI generated images. Table 1.

Zippia. (2023, July 27). Teacher demographics and statistics in the US. Retrieved from https://www.zippia.com/teacher-jobs/demographics/
