OpenAI, a San Francisco-based company specializing in artificial intelligence (AI), has expressed concern about the potential consequences of a realistic voice feature in its AI system. The company cited literature suggesting that conversing with AI as one would with a human can lead to misplaced trust, and that the high-quality voice of the GPT-4o model might exacerbate this effect. OpenAI's safety report on GPT-4o highlighted the concept of anthropomorphization, the attribution of human-like behaviors and characteristics to nonhuman entities such as AI models.
According to OpenAI, testers have been observed engaging with the AI in ways that suggest a shared bond, such as expressing sadness that it was their last day of interaction. While these instances may seem harmless, OpenAI believes it is crucial to study how such relationships might evolve over longer periods. The company also speculates that excessive socializing with AI could reduce users' ability and inclination to form relationships with other humans.
OpenAI further suggests that extended interaction with AI models could influence social norms. For instance, the models are deferential by design, allowing users to interrupt and take control at any time; behavior that is expected of an AI assistant would be considered rude in human conversation. Additionally, the AI's ability to remember details and perform tasks might lead to over-reliance on the technology.
Alon Yamin, co-founder and CEO of AI anti-plagiarism detection platform Copyleaks, emphasizes that AI should never replace genuine human interaction. He echoes OpenAI’s concerns and questions the impact of this technology on human relationships.
OpenAI plans to conduct further tests to explore how voice capabilities in its AI system may contribute to emotional attachment. During testing, teams were also able to prompt GPT-4o to repeat and produce conspiracy theories, raising concerns about the model's potential to disseminate such information convincingly.
OpenAI faced criticism in June when it was forced to apologize to actress Scarlett Johansson for using a voice similar to hers in its chatbot. Although OpenAI denied using Johansson's voice, CEO Sam Altman's one-word social media post, "Her," drew attention, as he had previously expressed admiration for the film "Her," in which Johansson voiced an AI character. The incident also drew scrutiny to voice-cloning technology.