Study Reveals Racist Stereotypes Produced by Large Language Models

A recent study conducted by researchers at the Stanford Institute for Human-Centered Artificial Intelligence and the Allen Institute for AI has revealed that popular large language models, including OpenAI’s GPT-3.5 and GPT-4 as well as Google’s T5, continue to produce racist stereotypes, particularly against speakers of African American English (AAE). Despite safeguards intended to limit harmful content, these systems demonstrated “covert racism” when making decisions related to employment, legal matters, and academia.

The study found that, across various experimental scenarios, the large language models were more likely to assign speakers of African American English lower-prestige jobs, describe them with derogatory terms such as “lazy,” “stupid,” and “dirty,” and even recommend convicting them of crimes than speakers of Standard American English. Stanford researcher Pratyusha Ria Kalluri expressed concern that, rather than improving steadily, corporations are merely responding to criticism by playing “whack-a-mole.”

In other news, Apple recently unveiled the iPhone 16, its first line of iPhones designed specifically for AI. The new phones will feature “Apple Intelligence,” a suite of generative AI capabilities that includes summarizing audio notes, drafting texts and emails, a more advanced version of Siri, and tools for searching photos, generating and editing images, and identifying real-world objects.

In a legal and ethical development, musician Michael Smith of North Carolina has been charged with using artificial intelligence to create hundreds of thousands of songs and deploying bot accounts to stream the AI-generated music billions of times on platforms like Spotify and Apple Music. Prosecutors allege the scheme allowed him to collect over $10 million in royalty payments since 2018.

In the AI deal of the week, Safe Superintelligence, an AI startup co-founded by OpenAI’s former Chief Scientist Ilya Sutskever, has raised $1 billion in capital from prominent investors like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. The company, which aims to develop safe AI systems that surpass human cognitive abilities in executing complex tasks, is now valued at $5 billion. Additionally, enterprise AI search platform Glean has raised $260 million at a valuation of $4.6 billion.

A deep dive into the world of AI companion apps reveals that hundreds of “AI girlfriend” applications have flooded the internet. These apps let users hold endless conversations with AI characters, and some people use them as substitutes for human relationships or to combat loneliness. However, there are concerns about the addictive nature of these apps and their potential to further isolate users, risks compounded by a growing male mental health and loneliness crisis.

In the AI index, the global luxury goods market is grappling with counterfeits. Some companies are turning to AI and computer vision to analyze minute product details and distinguish genuine goods from fakes. The trade in counterfeit goods is projected to reach $1.79 trillion by 2030, with the fake shoe market alone valued at $600 billion in 2023.
