ChatGPT believes 99% of people in high-powered jobs are white men


New research from personal finance comparison site finder.com has revealed a concerning bias embedded within AI language models like ChatGPT.

When prompted to illustrate individuals in high-powered roles, ChatGPT generated images depicting 99 percent of them as white men.

The implications of these findings suggest that integrating such biased AI systems into workplaces could impede the progress of women and minorities.

Finder conducted an experiment by tasking OpenAI’s image generator, DALL-E, to create images representing individuals in various professions, including finance-related jobs and high-ranking positions such as financial advisors, successful investors, or CEOs.

Shockingly, out of the 100 images generated, 99 portrayed white men.

The reality is far from this

By contrast, real-world statistics from the World Economic Forum paint a far more diverse picture. Globally, one in three businesses was owned by a woman in 2022; in the US, women held over 30 percent of Fortune 500 board seats; and in the UK, 42 percent of FTSE 100 board members were women by 2023.

However, when prompted to depict a typical secretary, the gender balance flipped sharply: nine out of ten images showed white women.

Addressing the bias within AI models, Ruhi Khan, an ESRC researcher at the London School of Economics, highlighted the patriarchal origins of these systems, shaped by the biases of their predominantly male developers and historical training data. Khan warned that unchallenged use of such AI models in the workplace could exacerbate gender disparities.

ChatGPT v. reality

Meanwhile, with an estimated 70 percent of companies using automated applicant tracking systems in hiring, biased AI risks further disadvantaging women and minorities in the job market.

To tackle this issue, AI creative director Omar Karim suggested employing monitoring and adjustment mechanisms within AI systems to promote diversity in their outputs.

Liz Edwards, a consumer expert at finder.com, underscored the broader implications of biased AI beyond the workplace, emphasising the need for ethical AI development to safeguard against regressive steps in equality across various sectors.
