My take: Tackling the problem of deepfakes

As artificial intelligence develops rapidly, so does the ability to create synthetic media. SWEAR CEO Jason Crawforth discusses how industry leaders are working to detect deepfakes and authenticate digital content.

By Jason Crawforth, CEO and founder of SWEAR

In an era of rapidly advancing artificial intelligence, the ability to create hyperrealistic counterfeit videos, audio, and images, known as deepfakes, poses a significant threat to our society. The implications of deepfakes are far-reaching and not yet fully understood. Between attempts to influence elections, as seen recently in New Hampshire¹ and Slovakia,² and the sheer volume of content uploaded every minute, detecting fakes amid the deluge is an enormously complex challenge.

As the AI technology underpinning deepfake models continues to improve, we expect artificial content to become a worsening problem. Seven years ago, creating a deepfake that impersonated someone’s voice required around 30 minutes of audio of that person talking to train the model.³ Today, only three seconds of audio is needed.⁴ Similarly, older video deepfakes couldn’t replicate details like how a person looks while blinking. Those shortcomings have since been addressed, and today’s models can even recreate the blood flow patterns in a person’s face.⁵

With synthetic recordings becoming nearly indistinguishable from authentic content, it’s increasingly difficult for humans and computers to detect them. The speed at which disinformation spreads exacerbates the impact and creates an urgent need for effective solutions.

Industry leaders are working to solve the problem. For example, many generative AI tools automatically add watermarks to the media they create, and major social media platforms are starting to take steps to identify synthetic media. But these measures are not foolproof: Watermarks can be removed, and bad actors are unlikely to adhere to voluntary industry standards.

Solving this problem requires a multifaceted approach that includes consumer education, government legislation, and forensic methods for detecting and flagging deepfakes. Detecting and penalizing harmful artificial content is important but fundamentally reactive. It can be hard for a person or business to recover their reputation in the court of public opinion once an image or video goes viral, even if it’s proven that the media was fabricated. That’s why authentication is another critical component.

In a world increasingly flooded with deepfakes, authenticating content is a powerful antidote. Even if a fake goes viral, definitively and immediately establishing the truth can protect reputations and help maintain trust.

At SWEAR, we envision a future where secure authentication is embedded into every piece of digital media, enabling a new era of verifiable trust and transparency online. Our authentication technology creates hashes of every byte generated while video or audio is being recorded. Those hashes are then recorded on a blockchain, enabling users to verify whether any part of a recording has been altered and to confirm its authenticity.
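To make that mechanism concrete, the sketch below shows, in Python, how per-segment hashing and later verification might work in principle. It is illustrative only: the 64 KB chunk size, the choice of SHA-256, and the plain list standing in for a blockchain ledger are assumptions made for the example, not details of SWEAR’s technology.

```python
# Illustrative sketch of per-segment hashing and verification. The chunk size,
# SHA-256, and the plain Python list standing in for a blockchain ledger are
# all assumptions for demonstration, not SWEAR's actual implementation.
import hashlib
from collections.abc import Iterable

CHUNK_SIZE = 64 * 1024  # hypothetical segment size; a real system picks its own granularity


def chunk_stream(data: bytes, size: int = CHUNK_SIZE) -> Iterable[bytes]:
    """Yield fixed-size segments of a recording, mimicking capture-time chunking."""
    for i in range(0, len(data), size):
        yield data[i:i + size]


def fingerprint(data: bytes) -> list[str]:
    """Hash each segment as it is 'recorded'; in production, these digests
    would be committed to a tamper-evident ledger such as a blockchain."""
    return [hashlib.sha256(chunk).hexdigest() for chunk in chunk_stream(data)]


def verify(data: bytes, ledger: list[str]) -> list[int]:
    """Re-hash the file under inspection and return the indices of segments
    whose digests no longer match the ledger, i.e., the altered regions."""
    current = fingerprint(data)
    if len(current) != len(ledger):
        # The recording was lengthened or truncated; flag everything.
        return list(range(max(len(current), len(ledger))))
    return [i for i, (a, b) in enumerate(zip(current, ledger)) if a != b]


# Record, tamper with a single byte, then verify.
original = bytes(1_000_000)             # stand-in for captured video/audio bytes
ledger = fingerprint(original)          # digests committed at recording time

tampered = bytearray(original)
tampered[200_000] ^= 0xFF               # corrupt one byte inside segment 3
print(verify(bytes(tampered), ledger))  # -> [3]: the edit is localized, not just detected
```

Because each segment is fingerprinted independently, verification can localize tampering to a specific portion of a recording rather than merely rejecting the file wholesale, which is what allows a viewer to see not just that something was altered but where.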

As businesses face a rising tide of synthetic media, with implications for everything from election integrity to corporate reputation, those that prioritize authenticating their content will appear more credible to increasingly discerning digital audiences. By providing stakeholders with tools to verify the authenticity of the information they encounter, businesses can foster a culture of transparency. We expect this shift to transform not only how companies use technology but also how consumers interact with digital content.

Endnotes

  1. Ali Swenson and Will Weissert, “New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary,” AP News, January 23, 2024. 

  2. Curt Devine, Donie O'Sullivan, and Sean Lyngaas, “A fake recording of a candidate saying he’d rigged the election went viral. Experts say it’s only the beginning,” CNN, February 1, 2024. 

  3. YouTube, “#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud,” video, 7:20, November 4, 2016.

  4. Benj Edwards, “Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio,” Ars Technica, January 10, 2023. 

  5. Kaeli Britt, “How are deepfakes dangerous?,” Nevada Today, March 31, 2023.


Acknowledgments

Editorial consultant: Ed Burns

Design consultant: Heidi Morrow

Cover image by: Meena Sonar