Social media platforms have become a breeding ground for deepfakes: fake, AI-generated images and audio. As the November election approaches, their prevalence has reached alarming levels. From Taylor Swift endorsing Donald Trump while dressed as Uncle Sam to a campaign video of Kamala Harris mentioning Joe Biden’s “senility,” all of these are fake. The issue has grown so concerning that even former President Trump and tech billionaire Elon Musk have shared and amplified deepfakes.
Earlier this year, AI-generated robocalls mimicking Joe Biden’s voice discouraged voters from participating in the New Hampshire primary. Trump himself has been a victim of AI manipulation, with fake images of him in police custody circulating before his first criminal indictment. Furthermore, Trump has cast doubt on the authenticity of images showing large crowds at Kamala Harris’ campaign events, suggesting they were AI-augmented.
A recent survey conducted by the Elon University Poll and the Imagining the Digital Future Center at North Carolina’s Elon University revealed that a staggering 78% of American adults expect deepfakes and artificial intelligence systems to affect the outcome of the 2024 presidential election. Additionally, 45% said they lack confidence in their ability to detect fake photos.
Lee Rainie, the director of the Imagining the Digital Future Center, noted that this uncertainty cuts across all demographic groups, indicating widespread concern among Americans about being victimized by deepfakes. While baby boomers are often criticized for falling for absurd AI-generated content, the reality is that anyone can be deceived, especially as AI technology grows more sophisticated.
Julia Feerrar, an associate professor at Virginia Tech, highlighted that people often fall for AI content that aligns with their interests or preferences, and that misleading content designed to evoke strong emotions, such as shock or anger, tends to be widely shared. If someone you know shares fake content, weigh the potential damage to their reputation before deciding how to address it; discreetly pointing out the mistake, rather than calling it out publicly, can be a generous and empathetic approach.
Janet Coats, the managing director of the Consortium on Trust in Media and Technology at the University of Florida, likewise emphasized engaging in one-on-one conversations rather than publicly shaming individuals. Research has shown that such conversations leave room for listening and reasoning, giving a better chance of improving the quality of information being shared.
It is also essential to stay alert to the possibility of falling for AI-generated images ourselves. Current AI-generated images often exhibit strange, hyperreal qualities and telltale inconsistencies, such as garbled text or distorted hands, but as AI tools advance, even the most discerning viewers may be deceived. When content triggers strong emotions or raises red flags, pause and evaluate it critically: running a quick fact-check through a search engine and assessing the credibility of the content’s source can help determine its accuracy.
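For readers comfortable with a little code, one additional low-effort check is to look at an image file’s embedded metadata before passing the image along. The short Python sketch below is a minimal illustration, not a detector; the filename and the fields printed are assumptions for the example, and it simply dumps EXIF tags using the Pillow library.

```python
# A minimal, illustrative first-pass check (not a deepfake detector):
# inspect an image's embedded EXIF metadata for provenance clues.
# Assumes Pillow is installed (pip install Pillow). Social platforms
# routinely strip metadata and AI generators rarely label their output,
# so a clean or empty result proves nothing; treat this as one weak
# signal alongside ordinary fact-checking.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_clues(path: str) -> dict:
    """Return the image's EXIF tags as a human-readable dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    clues = metadata_clues("suspect_image.jpg")  # hypothetical filename
    # Fields worth eyeballing: a camera make/model suggests (but does not
    # prove) a real photo; a "Software" entry naming an editor or image
    # generator is a cue to dig further.
    for field in ("Make", "Model", "Software", "DateTime"):
        print(f"{field}: {clues.get(field, '(not present)')}")
```

A camera make and model suggests, but does not prove, a real photograph, while a missing or suspicious entry is simply a prompt to fall back on the stronger checks above: a search engine fact-check and a hard look at the source.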