Today, the Science, Space, and Technology Committee approved with bipartisan support an amendment offered by Congresswoman Jennifer Wexton (D-VA) during the Committee's markup of H.R. 4355, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act.
The IOGAN Act supports research to close existing gaps in the technology used to identify outputs of generative adversarial networks (GANs), commonly known as “deepfake” videos. Wexton’s amendment would direct the National Science Foundation to conduct research on public understanding and awareness of deepfake videos, as well as best practices for educating the public on how to spot manipulated media and discern the authenticity of content.
“Deepfakes pose a grave threat to our national security, and we are woefully unprepared to counter the impacts they can inflict on every aspect of our society,” said Congresswoman Jennifer Wexton. “The more we can improve public awareness and understanding of how manipulated content is created and shared, the better we can strengthen and safeguard our democracy in upcoming elections.”
Deepfake videos and images have become a growing problem due to GANs. This technology pits two neural networks against each other, a generator that fabricates media and a discriminator that tries to detect the fakes, and that feedback loop produces increasingly accurate outputs that portray highly realistic, but manipulated, content.
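The adversarial feedback loop described above can be illustrated with a toy example. The sketch below is a hypothetical, minimal GAN (not drawn from the bill or from any production system): a linear "generator" learns to mimic samples from a target Gaussian distribution, while a logistic "discriminator" learns to tell real samples from generated ones, and each network's update is driven by the other's current behavior.

```python
import numpy as np

# Toy GAN sketch: the generator and discriminator train against each
# other, which is the feedback loop that makes GAN outputs realistic.
# All parameter names and hyperparameters here are illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x_fake = a * z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w * x + c), probability that x is real
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(size=batch)
    x_fake = a * z + b
    x_real = rng.normal(4.0, 1.25, size=batch)  # "real" data: N(4, 1.25)

    # Discriminator update: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend log D(fake) (non-saturating loss),
    # i.e. push fakes toward regions the discriminator labels "real"
    d_fake = sigmoid(w * x_fake + c)
    dx = (1 - d_fake) * w  # gradient of log D with respect to x_fake
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

fake = a * rng.normal(size=1000) + b
print(f"generated mean ~ {fake.mean():.2f}, std ~ {fake.std():.2f}")
```

After training, the generated samples' mean and spread drift toward the real data's, even though the generator never sees the real samples directly; it only receives the discriminator's feedback. Scaled up to deep networks and image data, the same loop is what makes deepfakes hard to distinguish from authentic footage.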
Bad actors have already begun to use deepfake videos to create false perceptions of political figures on social media. In May, an online blogger published a doctored video of House Speaker Nancy Pelosi that depicted her appearing to slur her words during a press conference. The video was shared widely on Facebook and Twitter, including by the President himself, and by the time it was debunked it had been viewed millions of times.
Artificial intelligence experts have indicated that significant challenges remain to effectively neutralize the threat of deepfakes. Raising public awareness of what deepfakes are, how they work, and how to identify them has been cited as a crucial component of efforts to combat them.