Posted: 04 Nov. 2020 · 4 min. reading time

Deepfake – is all trust gone?

We are used to trusting what we see and hear, which means that well-executed deepfakes may be hard to spot.

A deepfake is AI-generated, lifelike audio and/or video of a real person doing fictitious things. Deepfake technology can also create entirely fictional photos of a real person. In this article we will trace the evolution of deepfakes and give examples of how to avoid falling prey to deepfake scammers.

As with almost any technology, deepfake technology can be used for both good and bad. Think of David Beckham, who delivered his malaria awareness message¹ in nine languages, using deepfake technology in an attempt to help end one of the world’s oldest and deadliest diseases. Because of beneficial applications like this, the technology needs to be understood rather than feared.

The evolution of deepfakes

The term deepfake itself originated in 2017. In October 2019 the AI firm Sensity (formerly Deeptrace) reported that the number of deepfake videos online had almost doubled over the previous seven months, to 14,678.² Later, in January 2020, Facebook Inc. announced a new policy banning AI-manipulated deepfake videos³ from both Facebook and Instagram.

A UCL report⁴ published in August 2020 ranked fake audio and video at the top of 20 ways in which AI could be used for crime, and in September HSBC⁵ became the latest bank to sign up to a biometric identification system to counter deepfake fraud.

Deepfake technology has evolved rapidly in just a few years, and debate continues about the political and legal implications deepfakes might have. As the technology keeps improving, we are heading for a future in which audio and video can be exploited by cybercriminals and can no longer be easily trusted.

What a deepfake scam may look like

The best-known deepfake scam⁶ took place in March 2019, when criminals used AI-based software to mimic a chief executive’s voice and demand a fraudulent transfer of €220,000. The UK executive of the targeted energy company recognized his boss’s voice and transferred the money to a Hungarian bank account as requested. Later that day the attackers called again and asked for a second payment; this time the call came from a different phone number, and the executive became suspicious. The scammers were never identified or caught, and it is still unclear whether this was the first attack using deepfake technology or whether earlier incidents have simply gone unreported.

Nevertheless, researchers at the cybersecurity firm Symantec later revealed that they had found at least three cases of executives’ voices⁷ being mimicked to defraud companies, and noted that the losses in one of the cases totalled millions of dollars. The names of the victim companies remain undisclosed.

How to avoid falling prey to deepfake scammers

We are used to trusting what we see and hear, which means that well-executed deepfakes may be hard to spot. Here are four ways to avoid falling prey to deepfake scams:

  1. Learn more about deepfake technology. There is a good amount of information on the topic on the internet. For example, CNN has created an interactive web page on deepfakes⁸ and what it means when seeing is no longer believing, and there are plenty of other resources to explore.⁹⁻¹¹
  2. Use “System 2” thinking¹² – a more reflective and analytical approach. Apply mindfulness techniques¹³ when responding to unusual requests: pause before acting; consider the timing, purpose and appropriateness of the request; and consult a colleague about any suspicions. Companies should also consider putting additional controls in place, for example having an individual ask their ‘CEO’ questions to verify his or her identity when a request is out of the ordinary (the first sketch after this list illustrates the idea).
  3. Pay attention to the quality of the audio and video. Poor-quality deepfakes are easier to spot: bad lip-syncing, no blinking, flickering around the edges of the face, strange reflections on the irises, a slightly ‘robotic’ voice, or background noise that covers up the imperfections of synthetic audio (the second sketch below shows a toy version of the ‘no blinking’ check). As the technology improves and deepfakes get harder to spot, always verify and consult a trusted person when something does not feel right.
  4. Hang up the phone and call the person back. Unless the attacker is a state actor or a sophisticated hacking group capable of rerouting calls, calling back on a number you know is the best way to confirm that you were talking to the person you thought you were.
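
To make point 2 concrete, below is a minimal Python sketch – an illustration only, not a recommended policy – of the kind of rule-based control a finance team could put in place: unusual payment requests are automatically held until someone verifies them out of band. The PaymentRequest fields, the €10,000 threshold and the red-flag rules are all hypothetical examples.

```python
# Toy illustration of a rule-based payment control. The fields, the
# threshold and the red-flag rules below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_eur: float
    beneficiary_known: bool  # is the beneficiary already in the vendor master file?
    channel: str             # "phone", "email", "portal", ...
    marked_urgent: bool

def needs_out_of_band_verification(req: PaymentRequest,
                                   threshold_eur: float = 10_000) -> bool:
    """Return True if the request should be paused until the requester's
    identity is confirmed via a separate, known channel (e.g. a call back)."""
    return (
        req.amount_eur >= threshold_eur                    # large amount
        or not req.beneficiary_known                       # new or unusual beneficiary
        or (req.channel == "phone" and req.marked_urgent)  # urgent voice request
    )

# The March 2019 scam described above would trip every rule:
request = PaymentRequest(amount_eur=220_000, beneficiary_known=False,
                         channel="phone", marked_urgent=True)
print(needs_out_of_band_verification(request))  # True: hang up and call back
```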
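
The ‘no blinking’ cue from point 3 can even be checked mechanically. The sketch below counts blinks in a video clip using the eye aspect ratio; it assumes OpenCV, dlib and SciPy are installed along with dlib’s publicly available 68-point facial landmark model, and the file name and threshold are placeholders. Treat a low blink count as a reason to look closer, not as proof – better deepfakes can blink convincingly.

```python
# Toy blink counter: flags clips with suspiciously little blinking.
import cv2                           # pip install opencv-python
import dlib                          # pip install dlib
from scipy.spatial import distance   # pip install scipy

# dlib's public 68-point landmark model (downloaded separately); path is a placeholder.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
EAR_BLINK_THRESHOLD = 0.21           # eye aspect ratio below this counts as a closed eye

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(eye):
    # Ratio of vertical to horizontal eye-landmark distances; it drops
    # sharply when the eye closes (Soukupova & Cech, 2016).
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def count_blinks(video_path):
    # Assumes a single subject; counts closed-then-reopened eye events.
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            pts = predictor(gray, face)
            left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
            right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < EAR_BLINK_THRESHOLD:
                eye_closed = True
            elif eye_closed:         # eye reopened: one full blink completed
                blinks += 1
                eye_closed = False
    cap.release()
    return blinks, frames

blinks, frames = count_blinks("suspect_clip.mp4")  # placeholder file name
print(f"{blinks} blinks over {frames} frames")     # people blink roughly every 2-10 seconds
```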

Video conferencing, on which we have depended heavily since the onset of COVID-19, is also exposed to deepfake attacks in light of recent developments in the technology,¹⁴⁻¹⁵ although no incidents have been reported to date.

Bringing it all together

With less face-to-face contact and more activity online, we are more susceptible to social engineering techniques. The pandemic is exposing more people to impersonation fraud, and remaining vigilant in the face of COVID-19 fatigue is vital. The strategy to adopt in today’s world is ‘Trust, but verify’ – a direct translation of a Russian proverb that Ronald Reagan made his signature phrase – and it has never been more relevant or effective.

Resources 

1. ‘Your voice could help end malaria! Join David Beckham’s #malariamustdie campaign to help drive action on Goal 3’, April 2019 – https://www.globalgoals.org/news/so-what-are-you-waiting-for-your-voice-is-powerful

2. ‘Mapping the Deepfake Landscape’ by Giorgio Patrini, October 2019 – https://sensity.ai/mapping-the-deepfake-landscape/

3. ‘Facebook bans “deepfake” videos in run-up to US election’ by The Guardian, January 2020 – https://www.theguardian.com/technology/2020/jan/07/facebook-bans-deepfake-videos-in-run-up-to-us-election 

4. ‘“Deepfakes” ranked as most serious AI crime threat’ by UCL, August 2020 – https://www.ucl.ac.uk/news/2020/aug/deepfakes-ranked-most-serious-ai-crime-threat

5. ‘Banks work with fintechs to counter “deepfake” fraud’ by The Financial Times, September 2020 – https://www.ft.com/content/8a5fa5b2-6aac-41cf-aa52-5d0b90c41840

6. ‘Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case’ by The Wall Street Journal, August 2019 – https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 

7. ‘An AI first: Voice-mimicking software reportedly used in a major theft’ by The Washington Post, September 2019 – https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/

8. ‘When seeing is no longer believing’ by CNN – https://edition.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/

9. Episode 21: ‘Fake Believe’ by The New York Times, November 2019 – https://www.nytimes.com/2019/11/22/the-weekly/deepfake-joe-rogan.html

10. ‘Can you believe your eyes? How deepfakes are coming for politics’ by The Financial Times, October 2019 – https://www.ft.com/content/4bf4277c-f527-11e9-a79c-bc9acae3b654

11. ‘Deepfake videos are getting real and that’s a problem’ by The Wall Street Journal, October 2019 – https://www.wsj.com/articles/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787 

12. ‘Thinking, Fast and Slow’ by Daniel Kahneman, 2011.

13. ‘Training to mitigate phishing attacks using mindfulness techniques’ by Matthew L. Jensen et al., 2017.

14. ‘Pinscreen exhibition at World Economic Forum annual meeting in Davos 2020: Deepfaked’ by Hao Li, January 2020 – https://www.youtube.com/watch?v=DZ_k70WKvlg&feature=emb_logo 

15. ‘NVIDIA announces cloud-AI video streaming platform to better connect millions working and studying remotely’, October 2020 – https://nvidianews.nvidia.com/news/nvidia-announces-cloud-ai-video-streaming-platform-to-better-connect-millions-working-and-studying-remotely

Want to know more about deepfakes?

Contact me and let’s discuss the topic further:

Julia Seppä

Risk Advisory

Julia is a cyber risk manager and serves as a strategic client program manager in the Risk Advisory practice in Finland. She focuses on cyber awareness programs and culture transformations that improve employees’ cyber skills and make them a strong line of defense. Julia’s background is in finance and management; she is also a chartered accountant and a council member at the ICAEW.