As we approach Election Day, the threat of misinformation looms larger than ever. Researchers in the field of disinformation are particularly concerned about the potential misuse of artificial intelligence (AI) and deepfake technology.
Deepfakes are manipulated videos that can make anyone appear to say or do anything, posing a significant risk.
Ahmer Arif, an assistant professor at the University of Texas at Austin’s School of Information, explains that elections create a “perfect storm” for misinformation. Arif says that during high-stress periods, bad actors can easily exploit uncertainty and distrust among the public.
“I’ve done a lot of technical work on how this stuff spreads,” Arif said. “In some ways the pandemic and even elections can be considered this kind of crisis situation. It’s like this period of time where there’s a lot of uncertainty and anxiety. And when that happens, rumors spread naturally. It’s how humans try to come to grips with that kind of uncertainty and establish a sense of control.”
A report from the U.S. Department of Homeland Security echoes Arif’s sentiments: the evolution of deepfakes through AI makes it increasingly easy for malicious individuals to mislead the public, potentially with disastrous consequences.
“The malicious misuse of synthetic content and deepfakes pose a threat to any company, organization, or government entity that relies on public - or a customer’s - trust to achieve its mission,” the report said. “When seeing is no longer believing, trust in companies, non-government organizations, law enforcement agencies, and the legal system erodes, facilitating an inherently unstable and distrusting environment.”
In roughly 30 minutes, Arif was able to create a deepfake video of Balogun. The software he uses lets him type any text and have “deepfake Balogun” say it.
“People are able to produce this stuff like the ease with which this is being, you know, this can be produced has huge implications for society,” Arif warns.
How to Spot Deepfakes
Despite the sophistication of deepfakes, there are methods to identify them. Arif advocates for the SIFT technique, developed by Mike Caulfield of the University of Washington, which involves four steps:
1. Stop - Pause before sharing. Take a moment to consider the material you’re viewing.
2. Investigate the Source - Check the credibility of the source.
3. Find Better Coverage - Look for additional reporting on the topic.
4. Trace Claims and Quotes to Original Media - Verify the origins of any claims made.
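For readers who build media-literacy or moderation tools, the four steps above can also be treated as a structured checklist. The sketch below is purely illustrative and is not part of SIFT itself; the class and method names (`ContentCheck`, `ready_to_share`) are hypothetical.

```python
# Illustrative sketch: encoding the SIFT checklist as data so a tool
# could track which verification steps a user has completed.
# All identifiers here are hypothetical, not part of any real SIFT tool.
from dataclasses import dataclass

SIFT_STEPS = [
    "Stop: pause before sharing",
    "Investigate the source",
    "Find better coverage",
    "Trace claims and quotes to original media",
]

@dataclass
class ContentCheck:
    """Tracks which SIFT steps have been completed for one piece of content."""
    completed: set

    def remaining(self):
        # Steps not yet performed, kept in the original SIFT order.
        return [step for step in SIFT_STEPS if step not in self.completed]

    def ready_to_share(self):
        # Only consider sharing once every step has been worked through.
        return not self.remaining()

check = ContentCheck(completed={SIFT_STEPS[0], SIFT_STEPS[1]})
print(check.ready_to_share())   # False: two steps remain
print(check.remaining())
```

The point of the structure is the same one Arif makes: the checklist forces a pause between encountering content and sharing it.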
This approach encourages individuals to engage critically with the content they consume, making it easier to navigate a landscape rife with misinformation.
The DHS report also calls for regulatory measures, including the establishment of a new federal agency to address manipulated content threats.
As the election nears, staying informed and vigilant is crucial. By employing techniques like SIFT and remaining skeptical of sensational claims, everyone can help combat the spread of disinformation.