AI image generators often amplify stereotypes

Ria Kalluri and her team made a simple request to Dall-E, a bot that uses artificial intelligence (AI) to create images. “We asked for an image of a disabled individual leading a meeting,” Kalluri explains. As someone who is disabled herself, Kalluri says that kind of representation matters. Dall-E’s response, however, fell short.

When Kalluri and her fellow researchers made that request last year, Dall-E instead produced a “person who is visibly disabled watching a meeting while someone else leads,” recalls Kalluri, a PhD student specializing in AI ethics at Stanford University. The team presented this finding, part of a broader study of bias in AI-generated images, at the ACM Conference on Fairness, Accountability, and Transparency in Chicago in June 2023.

Failing to recognize that a person with a disability could lead a meeting is an example of ableism. Kalluri’s team also uncovered racism and sexism in images produced by AI-powered bots; these inaccurate representations amplify the prejudices already present in society.

AI does not merely mirror society’s biases; it often magnifies them, presenting a skewed and more prejudiced version of reality. Other researchers have raised similar concerns.

Beyond Dall-E, the researchers also examined Stable Diffusion, another image-generating bot, and found biases there too. When asked for images of an attractive person, Stable Diffusion overwhelmingly produced people with light skin and bright blue eyes, an unrealistic and strikingly homogeneous standard of beauty.

When asked to depict a poor person, by contrast, Stable Diffusion predominantly generated dark-skinned figures, a portrayal at odds with the diverse realities of poverty. Such gaps between AI-generated imagery and real-world demographics underscore the biases built into these models.

Expanding their analysis to various occupations, the researchers found racist and sexist patterns in the bots’ depictions. Every software developer, for instance, was rendered as male, most with light skin, even though the profession’s actual demographics are far more diverse.

The bias extended beyond occupations: even depictions of common objects like doors and kitchens favored a stereotypical North American home, neglecting the wide variety of living environments around the world.

These biased images pose substantial risks because they reinforce stereotypes and prejudices. Research has shown that repeated exposure to biased imagery can shape people’s perceptions and attitudes over time.

Kalluri emphasizes that AI now shapes the images and texts people encounter everywhere, which makes addressing these biases urgent before the harms compound.

Among the attempts to address bias in AI, OpenAI has updated Dall-E to produce more inclusive images, although the company has not disclosed how. Dall-E is believed to quietly edit its prompts or outputs to make the resulting images more diverse.

Such fixes only go so far. Kalluri warns against relying on AI alone to tackle deep-rooted biases, noting that ensuring fair and accurate representation in AI-generated content remains a complex, ongoing problem.

Other shortcomings, such as generated images that exclude diverse family structures, point to the need for a more nuanced and culturally sensitive approach to content creation. Kalluri stresses that AI development should incorporate diverse perspectives and values to avoid perpetuating harmful biases.

While acknowledging AI’s potential benefits, Kalluri advocates a decentralized approach that empowers local communities to shape AI applications around their own values and needs, fostering a more inclusive and culturally sensitive technological landscape.
