AI and ‘deepfakes’
Many executives are concerned about the use of artificial intelligence (AI) to falsify images and videos. But AI can also help detect them.
October 26, 2018
AI is getting easier for non-experts to use. Some AI services and apps now feature intuitive user interfaces and drag-and-drop functionality—a far cry from hard-core coding. These innovations put powerful capabilities in the hands of companies, and are contributing to the increase in AI pilots and implementations.1
There’s a downside to the spread of these tools, however. Humans have a knack for using technology for ill, as well as good. This is certainly true of AI. Using open source AI tools, anyone can make “deepfakes”—highly realistic fake images and videos—with a few mouse clicks. Whether for mischief or malice, it’s easy to create a video in which a corporate executive or politician appears to say something inflammatory. Realistic video forgeries can take “fake news” to new levels of believability and viral reach.
According to Deloitte’s State of AI in the Enterprise, Second Edition (2018 survey), executives who understand AI best—early adopters—believe that the use of AI to create falsehoods is the top ethical risk posed by the technology.
We have seen numerous examples of how authentic videos of executives and employees behaving badly can damage a company’s reputation and stock price. One can imagine the harm when deepfake videos, designed to provoke anger and derision, go viral.
There is also legal liability to consider. Governments could hold tech and media companies responsible for “fake news” and other purposely misleading content.2 Fearing censorship, many citizens want tech companies rather than governments to take the lead in fighting fake news.3 In fact, it may take a partnership between the two. Social networks and search firms have signed a “code of conduct” developed by the European Commission, pledging to detect and remove false information.4
Detecting false information can be tricky, especially with deepfake videos. New approaches are emerging, however, that can separate fact from fiction. The Eagles once proclaimed, “There ain’t no way to hide your lyin’ eyes.” Using deep learning, researchers have found a method of detecting when a video has been manipulated: in most deepfakes, the subject does not blink as frequently as a person normally would. An algorithm can determine whether a person in a video blinks at a natural rate, and weed out the fakes.5
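The cited research relies on a trained deep-learning model, but the underlying idea of flagging videos with abnormally low blink rates can be illustrated with a much simpler, widely used heuristic: the eye aspect ratio (EAR), computed from six landmark points around each eye. The sketch below is illustrative only and assumes landmark coordinates have already been extracted by a face-tracking library; the function names, threshold, and frame counts are hypothetical choices, not the researchers’ actual method.

```python
import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points around one eye, ordered as in the
    # common 68-point face-landmark layout. The ratio drops sharply
    # toward zero when the eyelid closes.
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    # Count blinks as runs of at least `min_frames` consecutive frames
    # in which the eye aspect ratio dips below `threshold`.
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A screening tool built on this idea would compare the per-minute blink count against typical human rates (roughly 15–20 blinks per minute) and flag footage that falls far below that range for closer review.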
“Similar to how venom is used to produce an anti-venom, AI can be used to combat the misuse of AI,” notes Paul Silverglate, partner, Deloitte & Touche LLP’s Advisory leader for Technology, Media & Telecommunications.
Undoubtedly, deepfakers will soon find new ways to mask their venomous methods. It will likely be up to AI to continually unmask them, so the public can see the truth.
1 Jeff Loucks, Tom Davenport, and David Schatsky, “State of AI in the Enterprise, 2nd Edition: Early adopters combine bullish enthusiasm with strategic investments,” Deloitte Insights, October 22, 2018.
2 “Tech firms should be made liable for ‘fake news’ on sites, UK lawmakers say,” CNBC, July 28, 2018.
3 Amy Mitchell, Elizabeth Grieco, and Nami Sumida, “Americans favor protecting information freedoms over government steps to restrict false news online,” Pew Research Center, April 19, 2018.
4 Samuel Stolton, “EU code of practice on fake news: Tech giants sign the dotted line,” EURACTIV, October 16, 2018.
5 Siwei Lyu, “Detecting deepfake videos in the blink of an eye,” The Conversation, August 29, 2018.