From dating to democracy, AI-generated media creates multifaceted risks

With the rise of generative AI, it’s easier than ever to create misleading media, and businesses are left to deal with the fallout. But AI tools can also offer a solution.

For most of human history, seeing was believing. But with the advent of generative AI, it’s easier than ever to create misleading media, and businesses will soon have to deal with an onslaught of misinformation and disinformation.

“In this new world of AI-generated media, you can never truly believe what you see online,” says Ben Colman, cofounder and CEO of Reality Defender, a security company that helps enterprises and governments detect deepfakes and AI-generated media. “What’s happening now with misinformation online is only the tip of the iceberg. It’s going to get much bigger and much worse.”1

The generative AI revolution exploded last year when image- and text-generation tools became publicly available. They captured the imagination of the general public, and businesses saw opportunities to increase efficiency by automating code generation, document reviews, and the creation of marketing collateral, among countless other use cases.

But what’s good for enterprises is often good for bad actors seeking to exploit businesses’ vulnerabilities. Colman says he’s already seeing people use generative AI tools in social engineering attacks, from romance scams like catfishing to fraud against businesses and, even more extreme, misinformation campaigns aimed at destabilizing democracies.

Some businesses today are taking the risk seriously. Those that have mature cybersecurity teams view the threats associated with artificially generated content as an extension of the fraudulent activity they’re already working to detect and stop, Colman says. However, he adds, most businesses aren’t taking this kind of proactive approach to threats and are less likely to be prepared for the coming tidal wave of malicious content.

“This is a problem that absolutely needs to be proactively addressed today, not kicked down the line,” Colman says.

There are ways to prepare. Just as AI can be a source of malicious content, it can also detect it. Reality Defender’s platform can detect synthetic media because it has been trained on petabyte-scale databases of various types of media, including the outputs of known deepfake models. Its detection models learn to identify anomalies that are characteristic of those generation models. When AI tools create content, they tend to leave behind subtle tells, such as specific types of deformation or pixelation in images, or a certain level of predictability in text. The platform analyzes media for these telltale signs and assigns a score representing the probability that the content was artificially created.
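To make the general idea concrete, the sketch below frames a deepfake detector as a binary classifier that outputs a 0–1 score for an image. This is an illustrative toy example only, assuming a small untrained convolutional model built with PyTorch; the architecture, class name, and data shown are hypothetical and do not describe Reality Defender’s proprietary system.

```python
# Illustrative sketch only: a toy classifier that assigns a
# "probability of being AI-generated" score to an image, in the spirit
# of the detection approach described above. Architecture, names, and
# inputs are hypothetical, not Reality Defender's actual platform.
import torch
import torch.nn as nn


class ToyDeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # A small convolutional stack. In practice, detectors are trained on
        # large datasets of authentic media and known generator outputs so
        # the learned filters pick up subtle generation artifacts.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: real vs. generated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        # Sigmoid maps the logit to a 0-1 score, read as the estimated
        # probability that the input was artificially created.
        return torch.sigmoid(self.classifier(h))


if __name__ == "__main__":
    model = ToyDeepfakeDetector()
    image = torch.rand(1, 3, 224, 224)  # placeholder for a decoded image
    score = model(image).item()
    print(f"Estimated probability of AI generation: {score:.2f}")
```

In a real deployment the score would come from models trained on large labeled collections of authentic and generated media, and the threshold for flagging content would be tuned to the organization’s tolerance for false positives.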

Humans are finding it increasingly difficult to distinguish artificially generated media from authentic images, video, and text. Even people who spend a lot of time looking at images and reading text created by algorithms struggle to tell the two apart, and the problem is likely to become even starker as deepfake technology improves. In contrast, AI-powered deepfake detection tools are generally better than humans at spotting those subtle identifiers and, importantly, can do so at scale.

Colman says using these tools allows organizations to detect harmful media before it spreads. Once something goes viral, it’s hard to convince people it’s inauthentic. But proactively identifying artificially generated content and working with social media companies to remove it, for example, allows businesses to intervene before harm is done.

“Once something has gone viral, it’s too late,” Colman says. “If a brand is harmed in the court of public opinion, it doesn’t matter if it comes out a week or two later that the content was untrue.”

Endnotes

  1. Interview, Ben Colman, cofounder and CEO, Reality Defender, August 2, 2023.
Acknowledgments

Writer: Ed Burns

Design consultant: Heidi Morrow

Cover image by: Jim Slatton