
Senators Press Social Media Firms to Fight 'Deepfake' Videos

Warner and Rubio Want Companies to Develop Standards and Policies to Combat Fakes

U.S. Sens. Mark Warner, D-Va., and Marco Rubio, R-Fla., are urging social media companies to create new policies and standards to combat the spread of "deepfake" videos. In letters sent this week, the two lawmakers urge 11 firms to take action, citing the potential threat to American democracy.


Deepfake refers to the use of advanced imaging and machine learning technologies to convincingly superimpose video images or audio recordings, giving the impression that people have done or said something that they did not (see: Visual Journal: Black Hat Europe 2018).

"Even easily identifiable fabricated videos can effectively be used as disinformation when they are deliberately propagated on social media," Warner and Rubio write in the letters.

The senators sent the letters to Facebook, Twitter, TikTok, YouTube, Reddit, LinkedIn, Tumblr, Snapchat, Imgur, Pinterest and Twitch.

A Call for Action

"We believe it is vital that your organization have plans in place to address the attempted use of these technologies," the senators write. "We also urge you to develop industry standards for sharing, removing, archiving, and confronting the sharing of synthetic content."

Warner and Rubio write that deepfake videos pose a significant threat to the public's trust in the information they consume - particularly the types of images, videos and audio recordings posted online that could affect the democratic process.

Along with adopting clear strategies and policies for authenticating the media appearing on their platforms and slowing the spread of disinformation, the two senators want these social media firms to clearly label deepfake videos to distinguish them from authentic media.

"Establishing clear policies for the labelling and archiving of synthetic media can aid digital media literacy efforts and assist researchers in tracking disinformation campaigns, particularly from foreign entities," the two senators write.

Battling Deepfakes

While deepfake videos have circulated for some time, combating their negative effects is an issue the cybersecurity industry has only recently started addressing. The issue has surfaced at recent events, such as Black Hat and the RSA Conference, and has taken on a special urgency with the 2020 presidential election looming (see: A Vision of the Role for Machines in Security).

Some of the recent victims of deepfake videos include former President Barack Obama, House Speaker Nancy Pelosi and Facebook CEO Mark Zuckerberg. In September, Zao, a free deepfake face-swapping app that can place the user's face on movies and TV shows, went viral in China, causing privacy concerns, according to a report in Bloomberg.

Warner and Rubio have previously sounded the alarm over deepfake videos.

Other lawmakers have taken notice as well. In June, Rep. Yvette Clarke, D-N.Y., introduced the DEEPFAKES Accountability Act. The bill would require the creators of deepfake videos to add irremovable digital watermarks, as well as textual descriptions, to fake videos and audio recordings. Failing to add these disclaimers would be considered a crime.

Social Media Response

Some social media platforms have started taking steps to curtail the amount of disinformation posted on their platforms. Facebook, for example, announced a deepfake detection challenge in September to create better detection tools to spot fakes that use technologies such as artificial intelligence to mislead viewers.

The social media giant also committed $10 million for research and prizes for participants, says Mike Schroepfer, Facebook's CTO.

On Sept. 24, Google, working with its Jigsaw unit, released a large dataset of deepfake videos that has now been incorporated into the FaceForensics benchmark, which is being jointly developed by the Technical University of Munich and the University Federico II of Naples. Researchers can use FaceForensics to help develop synthetic video detection methods.

Example of deepfake video images (Illustration: Google)

At the Black Hat Europe conference in December, Vijay Thaware and Niranjan Agnihotri, India-based researchers at Symantec, presented a tool that could help detect deepfake videos (see: Face Off: Researchers Battle AI-Generated Deep Fake Videos). Thaware noted: "What does it take to make a deep fake? All it takes is a gaming laptop, an internet connection, some passion, some patience, of course, and maybe some rudimentary knowledge of neural networks."


About the Author

Apurva Venkat


Special Correspondent

Venkat is a special correspondent for Information Security Media Group's global news desk. She has previously worked at companies such as IDG and Business Standard, where she reported on developments in technology, business, startups, fintech, e-commerce, cybersecurity, civic news and education.



