While many users on social media are adept at identifying false information, the increasing use of deepfakes calls the authenticity of online content into question.
Deepfakes are videos, photos or audio recordings that have been altered with artificial intelligence to impersonate someone. While the technology often has relatively harmless uses, such as creating internet memes, it has also been used to spread misinformation, leaving many people wary of its potential.
Kristy Roschke is the director of the News Co/Lab, a Walter Cronkite School of Journalism and Mass Communication initiative focused on advancing media literacy through journalism, education and technology.
According to Roschke, one major reason deepfakes go viral is that AI-generated content is becoming increasingly difficult to distinguish from reality.
"(This) is a problem for all of life, not just as it relates to news, but how we construct what we believe to be reality," Roschke said. "Maybe our idea of reality will be something that's blurred between the things we actually witness with our own eyes and some blend of other types of things."
One potential solution is labels or watermarks on AI-generated content, Roschke said.
"I think labels have been shown to be helpful with content in general, whether it was created by AI or not," she said.
However, given the wide array of ways content is shared, labels alone may not be enough to catch all misinformation.
"False information spreads on social media," Roschke said. "False information also spreads on TV. It also spreads between people at work, in their neighborhoods, it spreads because humans spread gossip and humans behave in certain ways that social media exacerbates."
While many, like Roschke, believe labels will help curb the spread of misinformation created with deepfake technology, others believe the approach has the potential to do more harm than good.
Subbarao Kambhampati is a professor in the School of Computing and Augmented Intelligence at ASU.
Kambhampati believes labels or watermarks intended for AI-generated content could be applied, accidentally or intentionally, to real information, which could make deepfakes even easier to spread.
"If you're using watermarks for AI-generated stuff, then only good people will follow those rules," Kambhampati said. "If you are trying to deceive somebody, you will use a service that doesn't have those watermarks."
Because there are many ways to develop and program AI, the technology is not limited to large companies, which could make tracking down the source of fake content difficult.
"Eventually, we will wind up putting watermarks on real things rather than on fake things," Kambhampati said.
Even with more misinformation being created through deepfake technology than ever before, Kambhampati believes there is a limit, because humans will grow accustomed to the technology and gain media literacy as it changes.
"I think we are very good at adapting to the way technology evolves," Kambhampati said. "My sense is we will come to a point where you can't tell the difference, but that would not be the end of the world, because we will learn not to trust our eyes and ears and expect independent collaboration, either because of the trustworthiness of the source or because there is actual cryptographic authentication of the information that you're being shown."
Despite the advances in deepfake technology and generative AI, strategies such as cryptographic authentication and watermarking offer hope in the fight against misinformation.
Edited by River Graziano, Sadie Buggle and Grace Copperthite.
Reach the reporter at hrhea@asu.edu.
Hunter is a senior studying technological leadership. This is his fourth semester with The State Press. He has also worked as a legislative intern.