Deepfake disruption: A cybersecurity-scale challenge and its far-reaching consequences

As the effort to detect and combat fake content escalates, the costs of maintaining a credible internet may fall on consumers, creators, and advertisers alike

Michael Steinhart, Bree Matheson, and Gillian Crossan

United States

Deepfakes—photos, videos, and audio clips that seem real but are generated by artificial intelligence tools—are making it harder for audiences to trust content that they see online. As AI-generated content grows in volume and sophistication, online images, videos, and audio can be used by bad actors to spread disinformation and perpetrate fraud. Social media networks have been flooded with such content, leading to widespread skepticism and concern.1

In Deloitte’s 2024 Connected Consumer Study, half of respondents said they’re more skeptical of the accuracy and reliability of online information than they were a year ago. Among respondents familiar with or using generative AI, 68% reported concern that synthetic content could be used to deceive or scam them, and 59% reported they have a hard time telling the difference between media created by humans and media generated by AI. Eighty-four percent of respondents familiar with gen AI agreed that content developed with gen AI should always be clearly labeled.2

Labeling is one way media outlets and social media platforms can flag synthetic content for users. But as deepfake technologies incorporate more advanced models that can generate synthetic content and manipulate existing media, more sophisticated measures may be needed to detect fakes and help restore trust.

Analysts estimate that the global market for deepfake detection—as implemented by tech, media, and social network giants—will grow by 42% annually, from US$5.5 billion in 2023 to US$15.7 billion in 2026.3 We predict that the deepfake detection market could follow a path similar to that of cybersecurity: Media companies and tech providers will likely work to stay ahead of increasingly sophisticated fakes by investing in content authentication solutions and consortium efforts. Credible content, in turn, may come at an increased cost for consumers, advertisers, and even creators.4

These efforts currently fall under two broad categories: detecting fakes and establishing provenance.

Detecting fakes

Tech companies often use methods such as deep learning and computer vision to analyze synthetic media for signs of fraud or manipulation, leveraging machine-learning models to recognize patterns and anomalies in deepfakes.5 These tools can also detect inconsistencies in video and audio content, such as lip movements or voice-tone fluctuations that are subtly inconsistent with genuine human behavior.6
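To make the pattern concrete (without representing any particular vendor's product), here is a minimal Python sketch of frame-level scoring with a convolutional neural network. The ResNet-18 backbone, the detector_weights.pt checkpoint, and the input file name are assumptions for illustration; a production detector would be trained on large, labeled sets of real and synthetic media and would typically combine frame-level, audio, and temporal signals.

```python
# Illustrative sketch only: score a single frame with a fine-tuned CNN.
# Assumes a hypothetical checkpoint "detector_weights.pt" trained elsewhere
# on labeled real/fake frames; classes are [real, fake].
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # two outputs: real, fake
model.load_state_dict(torch.load("detector_weights.pt", map_location="cpu"))
model.eval()

def fake_probability(frame_path: str) -> float:
    """Return the model's estimated probability that a frame is synthetic."""
    image = Image.open(frame_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)             # shape: [1, 3, 224, 224]
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()                          # index 1 = "fake"

print(fake_probability("suspect_frame.jpg"))           # hypothetical input file
```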

Some gen AI tools include functionality that detects whether a piece of content was made with their help, but these may not detect deepfakes created by other models.7 Some fake-detection tools look for signs of manipulation—or “fingerprints”—left by gen AI tools.8 Others use a “whitelist” and “blacklist” approach (maintaining lists of trusted sources and known fakers), while still others look for proof of humanity (as opposed to proof of artifice), like natural blood flow, facial expressions, and vocal inflection.9
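As a toy illustration of the source-list approach, the snippet below checks a publisher domain against hypothetical trusted and known-bad lists before deciding whether to run heavier detection. The domains and list contents are placeholders, not real services or actual block lists.

```python
# Toy illustration of a "whitelist"/"blacklist" pre-check. All domains are
# hypothetical placeholders; a real system would maintain and update these
# lists continuously and combine them with model-based detection.
TRUSTED_SOURCES = {"newsroom.example.com", "wire-service.example.org"}
KNOWN_FAKERS = {"deepfake-farm.example.net"}

def source_verdict(publisher_domain: str) -> str:
    if publisher_domain in KNOWN_FAKERS:
        return "block"      # known bad actor: reject or flag immediately
    if publisher_domain in TRUSTED_SOURCES:
        return "trust"      # allowlisted publisher: lighter-weight checks
    return "analyze"        # unknown source: run the full detection pipeline

print(source_verdict("deepfake-farm.example.net"))   # -> "block"
```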

Current deepfake detector tools claim accuracy rates above 90%.10 One concern, however, is that bad actors may be using open-source gen AI models to generate media that evade these measures. The ability to automate content generation, for instance, may overwhelm current detectors, and the subtle adjustments that gen AI tools make to output based on user prompts can also be used to obscure fake content.11

Social media platforms themselves often use AI tools to help detect problematic content in images or videos, score it on a relative scale, and then forward the most suspicious items to human reviewers to make the final designation. This approach can be time-consuming and expensive, however, and efforts are underway to accelerate the process with the help of machine learning.12
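The sketch below shows that triage pattern in simplified form: an automated detector assigns each item a suspicion score, and only items above a threshold reach human reviewers. The thresholds, item identifiers, and scores are illustrative assumptions, not values used by any particular platform.

```python
# Simplified moderation triage: route items by detector score.
# Thresholds and sample scores are illustrative assumptions.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # near-certain fakes can be auto-actioned
REVIEW_THRESHOLD = 0.60   # ambiguous items go to a human review queue

@dataclass
class ModerationDecision:
    item_id: str
    score: float
    action: str            # "auto_remove", "human_review", or "allow"

def triage(item_id: str, score: float) -> ModerationDecision:
    if score >= REMOVE_THRESHOLD:
        return ModerationDecision(item_id, score, "auto_remove")
    if score >= REVIEW_THRESHOLD:
        return ModerationDecision(item_id, score, "human_review")
    return ModerationDecision(item_id, score, "allow")

# Scores would come from detectors like the classifier sketched earlier.
for item_id, score in [("vid_001", 0.97), ("img_042", 0.72), ("img_043", 0.10)]:
    print(triage(item_id, score))
```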

If this sounds reminiscent of the cybersecurity landscape, that could be because it is. Just as security-conscious companies have adopted layered approaches to data and network protection, we expect news outlets and social media companies will need multiple tools—along with content provenance measures—to help determine the credibility of digital content.

Establishing provenance and trust

The other path that some companies are exploring involves cryptographic metadata (or digital watermarks) added to a media file when it’s created. This data is attached to the media, detailing its provenance and maintaining a record of all modifications.13
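A minimal sketch of that idea, using Python's standard hashlib module and the third-party cryptography package: hash the media file at creation, record provenance details, and sign the record so later tampering is detectable. The file name, creator label, and record fields are illustrative assumptions, not the schema of any specific standard.

```python
# Illustrative provenance record: hash the asset, note its origin, sign the
# record. Field names, the creator label, and "photo.jpg" are assumptions.
import hashlib
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def create_provenance_record(media_path: str, creator: str,
                             signing_key: Ed25519PrivateKey) -> dict:
    with open(media_path, "rb") as f:
        asset_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "asset_hash": asset_hash,
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "modifications": [],   # subsequent edits would append entries here
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": signing_key.sign(payload).hex()}

key = Ed25519PrivateKey.generate()           # in practice, a device or vendor key
manifest = create_provenance_record("photo.jpg", "example-device-1234", key)
print(json.dumps(manifest["record"], indent=2))
```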

Social platforms are collaborating with media outlets, device-makers, and tech companies in cross-industry consortia to help perpetuate standards for content authenticity. Various tech and media organizations, including Deloitte, have joined the Coalition for Content Provenance and Authenticity (C2PA), pledging to use the C2PA metadata standard so that AI-generated images can be verified more easily.14 C2PA technology records every step of the life cycle of an image, from initial creation through the editing process, by creating a detailed log of alterations and modifications.15 With the C2PA record available for perusal, content outlets and users can check the source of visuals and consider their trustworthiness.
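Continuing the sketch above, verification on the consumer side might look like the following: confirm the signed record is intact and that the file on disk still matches the recorded hash. This simplification omits the certificate chains and embedded manifests that real C2PA implementations rely on; the names follow the previous snippet and are assumptions.

```python
# Illustrative verification of the provenance record sketched earlier.
# Omits certificate chains and embedded manifests used by real C2PA tooling.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_provenance(media_path: str, manifest: dict,
                      public_key: Ed25519PublicKey) -> bool:
    # 1. Signature check: has the provenance record itself been altered?
    payload = json.dumps(manifest["record"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    # 2. Hash check: does the file on disk still match the recorded asset?
    with open(media_path, "rb") as f:
        current_hash = hashlib.sha256(f.read()).hexdigest()
    return current_hash == manifest["record"]["asset_hash"]

# Example usage with the key and manifest from the earlier sketch:
# verify_provenance("photo.jpg", manifest, key.public_key())  # True if untouched
```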

In another effort to differentiate accounts run by humans, some social media platforms are beginning to implement verification options for creators. These may require creators to submit forms of identification and pay a fee to prove their identity. Platforms may also incentivize verification by requiring it for creators to be included in revenue-share programs.16

Certifying the authenticity of human-operated accounts can help platforms improve trust and credibility as AI content grows ubiquitous.17 Platforms may have to evaluate whether passing these certification costs on to creators, advertisers, or users is sustainable. 

Waiting for regulation

Although some governments have initiated measures to regulate content authenticity,18 more comprehensive and globally coordinated legislation may be beneficial. Public awareness campaigns can be equally crucial in informing users about the dangers of deepfakes and helping teach them to verify before accepting media as authentic.

In the United States, legislation requiring digital watermarks on AI-generated content has been introduced and is now under consideration by the Senate Committee on Commerce, Science, and Transportation.19 The state of California is considering AB-3211, a law that would require device-makers to update their firmware so that it attaches provenance metadata to photos and would order online platforms to disclose provenance metadata for online content. If passed, the law would take effect in 2026.20 Other states have enacted legislation criminalizing the production and distribution of nonconsensual deepfakes intended to spread misinformation.21 The US Federal Trade Commission is formulating new regulations aimed at prohibiting the creation and dissemination of deepfakes that mimic individuals.22

Revisions to the EU AI Act primarily emphasize transparency, mandating clear labeling of AI-generated content and deepfakes. This strategy helps support the ongoing advancement of AI technologies, while ensuring that users know the nature of the content they encounter. The European Commission has established the AI Office to foster the development and application of AI and to promote the effective labeling of artificially generated or manipulated content.23

The swift progression of deepfake technology demands regulatory frameworks that are both flexible and adaptive, capable of evolving in tandem with technological advancements.

Bottom line

The authenticity of a photo, video, or audio clip may be established through analysis and through verification of its provenance. It’s likely that media companies and social networks will invest in both approaches as gen AI continues to be used to create more synthetic media and bad actors adjust their models and output to evade detection.

Staying ahead of bad actors is important as gen AI grows more powerful and versatile. More sophisticated technologies like blood-volume detection and facial analysis can help distinguish real from manipulated content. As with cybersecurity tools, however, these measures should be as unobtrusive as possible for end users and consumers, ensuring content integrity without compromising user experience. Techniques like digital watermarking can help verify authenticity without affecting quality or requiring real-time computing cycles to analyze.24

One leading practice could be critical for companies that use trained machine learning models (or pay third parties) to help detect fake content. Such organizations should prioritize tools and vendors that leverage diverse, high-quality data sets spanning images, video, and audio. These data sets should incorporate a broad array of demographic groups to help promote fairness and minimize bias in detection accuracy.25
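One simple way to act on this practice is to report detector accuracy separately for each demographic group in a labeled evaluation set, so performance gaps become visible. The sketch below uses made-up records purely for illustration.

```python
# Per-group accuracy check for a fake-content detector. The evaluation
# records below are fabricated placeholders for illustration only.
from collections import defaultdict

# Each record: (demographic group, ground-truth is_fake, predicted is_fake)
evaluation_set = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, True), ("group_b", False, False), ("group_b", False, False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in evaluation_set:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
# A large gap between groups would signal bias in the training or test data.
```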

Tech and media companies alike should collaborate with peers26 across industries to help create and support standards for deepfake detection and content authentication. Digital watermarking, for example, can be more effective when device-makers and media outlets co-sign content to affirm its creation and publication. This collective effort can lead to more robust and universally accepted practices, enhancing overall security and trust in digital content.

On the enterprise security side, companies across industries should be aware that gen AI can make social engineering attacks more effective and can compromise some authentication measures.27 It may be necessary to implement additional verification layers, especially for video- and audio-based processes. End users should be encouraged to cross-check information with reliable sources and use multi-factor authentication to help mitigate risks associated with deepfakes. Because these threats evolve continually, user education (along the lines of cybersecurity awareness training) may also be an important measure for companies to consider.

These strategies can not only safeguard against the threats posed by deepfake technology but also help position tech and media companies as leaders in maintaining digital content integrity and trust. At this pivotal time, companies can shape the trusted content space and position themselves as reliable sources in an increasingly uncertain digital landscape.


Endnotes

  1. Margaret Talev and Ryan Heath, “Exclusive poll: AI is already great at faking video and audio, experts say,” Axios, accessed Oct. 28, 2023.

  2. Susanne Hupfer, Michael Steinhart, et al., “2024 Connected Consumer Survey,” Deloitte, December 2024.

  3. Vivaan Jaikishan, Cameron D'Ambrosi, Jennie Berry, and Stacy Schulman, “The rising threat of deepfakes: Detection, challenges, and market growth,” Liminal, May 7, 2024.

  4. Ian Shepherd, “Human vs. machine: Will AI replace content creators?” Forbes, April 26, 2024.

  5. Analytix Labs, “Detecting deepfakes: Exploring advances in deep learning-based media authentication,” Medium, Jan. 4, 2024.

  6. For example, see: Intel, “Trusted media: Real-time FakeCatcher for deepfake detection,” accessed Oct. 28, 2024.

  7. Cade Metz and Tiffany Hsu, “OpenAI releases deepfake detector to disinformation researchers,” The New York Times, May 2024.

  8. Danial Samadi Vahdati, Tai D. Nguyen, Aref Azizpour, and Matthew C. Stamm, “Beyond deepfake images: Detecting AI-generated videos,” Drexel University, accessed Oct. 28, 2024.

  9. Alex McFarland, “5 best deepfake detector tools & techniques (October 2024),” Unite.AI, Oct. 1, 2024.

  10. Konstantin Simonchik, “Deepfake detection: Accuracy of commercial tools,” LinkedIn, February 2024.

  11. Jiansong Zhang, Kejiang Chen, Weixiang Li, Weiming Zhang, and Nenghai Yu, “Steganography with generated images: Leveraging volatility to enhance security,” IEEE Transactions on Dependable and Secure Computing 21, no. 4 (2024): pp. 3994–4005; see also: Mike Bechtel and Bill Briggs, “Defending reality: Truth in an age of synthetic media,” Deloitte Insights, Dec. 4, 2023; and Loreben Tuquero, “AI detection tools for audio deepfakes fall short. How 4 tools fare and what we can do instead,” Poynter, March 21, 2024.

  12. Barbara Ortutay, “Content moderation in the AI era: Humans are still needed across industries,” Fast Company, April 23, 2024; also see: Meta, “How review teams work,” Jan. 19, 2022.

  13. Glenn Chapman, “Meta wants industry-wide labels for AI-made images,” AFP News, Feb. 6, 2024; also see: Nick Clegg, “Labeling AI-generated images on Facebook, Instagram and Threads,” Feb. 6, 2024; Sasha Luccioni et al., “AI watermarking 101: Tools and techniques,” Hugging Face, Feb. 26, 2024; and Partnership on AI, “Building a glossary for synthetic media transparency methods, part 1: Indirect disclosure,” Dec. 19, 2023.

  14. Ryan Heath, “Inside the battle to label digital content as AI-generated media spreads,” Axios, accessed Oct. 28, 2024.

  15. Demian Hess, “Fighting deepfakes with content credentials and C2PA,” CMSWire, March 13, 2024.

  16. Andrew Hutchinson, “X will require ad revenue share participants to confirm their ID,” Social Media Today, May 22, 2024.

  17. Guy Tytunovich, “The future of trust and verification for social media platforms,” Forbes, May 22, 2024.

  18. Amanda Lawson, “A look at global deepfake regulation approaches,” Responsible Artificial Intelligence Institute, April 24, 2023.

  19. US Congress, “S.2765—Advisory for AI-Generated Content Act,” Sept. 12, 2023.

  20. California Legislative Information, “Assembly Bill 3211—California Digital Content Provenance Standards,” Aug. 24, 2024.

  21. Kevin Collier, “States are rapidly adopting laws regulating political deepfakes,” NBC News, Aug. 7, 2024.

  22. Federal Trade Commission, “FTC proposes new protections to combat AI impersonation of individuals,” Feb. 15, 2024; also see: Michelle M. Graham, “Deepfakes: Federal and state regulation aims to curb a growing threat,” Thomson Reuters, June 26, 2024.

  23. Melissa Heikkilä, “Five things you need to know about the EU’s new AI Act,” MIT Technology Review, Dec. 11, 2023.

  24. Deloitte, “How to safeguard against the menace of deepfake technology,” accessed Oct. 28, 2024.

  25. AI Index Steering Committee, “The AI Index 2024 Annual Report,” accessed Oct. 28, 2024.

  26. AI Election Accord, “A tech accord to combat deceptive use of AI in 2024 elections,” accessed Oct. 28, 2024.

  27. Stu Sjouwerman, “The growing threat of AI in social engineering: How business can mitigate risks,” Fast Company, April 8, 2024.

Acknowledgments

The authors would like to thank Jeff Loucks, Susanne Hupfer, Duncan Stewart, Jeff Stoudt, Jason Williamson, Tim Davis, Gopal Srinivasan, Shreeparna Sarkar, and Andy Bayiates for their contributions to this article.

Cover image by: Jaime Austin; Getty Images, Adobe Stock

Copyright and legal information

This article contains general information and predictions only and Deloitte is not, by means of this article, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This article is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.

Deloitte shall not be responsible for any loss sustained by any person who relies on this article.

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”) does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the “Deloitte” name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms.