In the age of advanced artificial intelligence (AI), distinguishing real videos from AI-generated ones has become increasingly challenging. With the rise of deepfake technology, which can produce highly convincing, photorealistic footage, it is crucial to develop AI literacy in order to navigate the web and protect oneself from misinformation, deepfakes, and low-quality content.
AI-generated videos are still in their early stages compared to other AI modalities, such as text, images, and audio. However, experts believe that with access to higher quality data, AI-generated videos will soon become even more realistic and devoid of telltale artifacts that currently mark their creation.
To identify AI-generated videos, it is essential to understand the evolving nature of AI technologies and to remain aware that anything one sees may be AI-generated. This broad AI literacy matters more than relying on any individual cue, according to Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, SUNY.
AI-generated videos fall into two main categories: imposter videos and videos generated by text-to-video diffusion models. Imposter videos rely on face swapping, in which a person's face, often a celebrity's or politician's, is replaced with someone else's to create fake content. Lip syncing is a related technique that manipulates a person's mouth movements and replaces the accompanying audio.
When identifying face swaps, one should look for flaws or artifacts around the boundaries of the face, especially when the head moves obliquely to the camera. In lip sync videos, attention should be focused on the subject’s mouth, particularly the teeth, as irregularities or changes in the number of teeth can indicate fakery. Additionally, the lower half of the face may appear liquid or rubbery due to imprecise alignment.
Text-to-video diffusion models, which generate videos from text or image prompts, are a newer development. While these models can produce highly detailed, realistic footage, their output is limited in length and often depends on slick editing to mask flaws. Short clips should therefore be approached with suspicion, as longer flawless videos are currently beyond this technology's capabilities. Temporal inconsistencies, such as physically impossible changes in the background or in objects between frames, can also indicate an AI-generated video.
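The idea of temporal inconsistency can be made concrete with a toy sketch: compare successive frames and flag any frame whose pixel content jumps abruptly from the one before it. This is purely illustrative, not a detection method described in the article; the `flag_temporal_jumps` function, the threshold value, and the tiny grayscale frames below are all hypothetical.

```python
# Illustrative sketch only: flag abrupt frame-to-frame changes as a crude
# proxy for temporal inconsistency. Frames are hypothetical grayscale
# images represented as 2D lists of pixel intensities (0-255).

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two same-sized frames."""
    total = 0
    count = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for px_a, px_b in zip(row_a, row_b):
            total += abs(px_a - px_b)
            count += 1
    return total / count

def flag_temporal_jumps(frames, threshold=30.0):
    """Return indices of frames whose difference from the previous frame
    exceeds `threshold` -- e.g. a background object abruptly appearing
    or vanishing. The threshold here is an arbitrary illustration."""
    flagged = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            flagged.append(i)
    return flagged

# Three tiny 2x2 "frames": the last one changes abruptly.
frames = [
    [[10, 10], [10, 10]],
    [[12, 11], [10, 13]],       # small, natural drift
    [[200, 200], [200, 200]],   # abrupt, physically implausible jump
]
print(flag_temporal_jumps(frames))  # [2]
```

Real forensic tools analyze far richer signals (optical flow, compression traces, learned features), but the principle is the same: genuine footage changes smoothly from frame to frame, while generated video can contradict itself over time.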
Developing AI literacy is crucial in combating deepfakes and misleading content. It involves verifying the source of information, checking whether other outlets corroborate the story, and consulting fact-checking websites such as Snopes and PolitiFact. This kind of media literacy is essential for discerning real from AI-generated content, and relying on social media as a primary news source should be approached with caution.