How are deepfakes and cybercrime related? What is this recent deepfake trend? And what are some cybercrime examples of deepfake technology?
Deepfakes are a curious symptom of our times. The word ‘deepfake’ combines ‘deep learning’, a subset of artificial intelligence, with the word ‘fake’. The rise of deepfakes in the wild has played out across the globe in damaging and sinister ways.
Recently, deepfake technology was alleged to be behind a Business Email Compromise (BEC) scam in which a British CEO was tricked out of $240,000. The CEO was asked by the boss of the parent company to transfer the money. However, the voice on the other end of the line was not his boss at all, but most likely a deepfaked voice created to carry out the fraud.
Back in 2018, researchers found 14,698 deepfake videos online. But what is this phenomenon about and how will it affect businesses and individuals in the years to come?
According to TechTarget, “Deepfake is an AI-based technology used to produce or alter video content so that it presents something that didn't, in fact, occur.”
Deepfake technology works by using deep learning neural networks to manipulate video (faces) and audio (voices). Neural networks process information in a way loosely modeled on the human brain. Large datasets, in the form of thousands of images or voice samples, are used to train the networks.
For example, in the case of deepfake videos, the images of the targets are morphed and merged. Afterward, voice is overlaid and lips are synced.
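A common face-swap architecture behind this trick is a shared encoder paired with one decoder per person: encode person A's expression, then decode it with person B's decoder. The sketch below is a minimal, illustrative toy, with random matrices standing in for trained weights and made-up dimensions; it shows the data flow, not a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 64x64 grayscale face -> 128-d latent code.
FACE_DIM, LATENT_DIM = 64 * 64, 128

# A shared encoder compresses any face into a latent code; each person gets
# their own decoder. These random matrices stand in for trained weights.
encoder = rng.normal(0, 0.01, (LATENT_DIM, FACE_DIM))
decoder_a = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))  # reconstructs person A
decoder_b = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))  # reconstructs person B

def encode(face: np.ndarray) -> np.ndarray:
    # Compress the face into a compact latent code.
    return np.tanh(encoder @ face)

def face_swap(face_a: np.ndarray) -> np.ndarray:
    """Encode person A's frame, then decode it with B's decoder: in a trained
    model the output keeps A's pose and expression but B's appearance."""
    return decoder_b @ encode(face_a)

face_a = rng.random(FACE_DIM)  # a stand-in for one video frame of person A
swapped = face_swap(face_a)
print(swapped.shape)  # (4096,) — same size as the input face
```

In a real system this swap runs frame by frame over a whole video, after which the audio track is overlaid and lip movements are synced, as described above.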
Deepfake technology has become more accessible and cheaper; a report by Data & Society found an increasing use of “cheap fakes”. Once any technology becomes readily available, cybercriminals enter the market and use it for ill-gotten gains. We are seeing this with the IoT, cloud computing, and artificial intelligence too.
To start with, deepfake technology is perhaps the ultimate form of social engineering. It uses our deep-seated instincts, like trust, to trick us into believing that something is real. The manipulation of trust is one of the reasons for the success of deepfakes as the fakes are based on the use of real voices or videos of people.
The original purpose of deepfake technology was not to cause harm. On the contrary, deepfakes were designed as a tool to manipulate videos for fun rather than malice.
Unfortunately, the technology quickly took on a new ‘face’. The now-infamous fake Mark Zuckerberg video appeared around the time the Facebook privacy debacle hit. It shows the co-founder and CEO of Facebook seemingly saying:
"Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures…I owe it all to Spectre. Spectre showed me that whoever controls the data controls the future."
Here are some examples of deepfake technology being used for nefarious purposes.
Deepfakes can be real bullies. ‘Fake news’ is a natural home for the deepfake. One example, a fake Barack Obama video, was released to demonstrate how dangerous the technology could be as a tool for mass propaganda: it shows Barack Obama saying salacious and inflammatory things. Deepfake news is already a concern for the 2020 U.S. election.
The federal bill “Malicious Deep Fake Prohibition Act of 2018” was introduced in 2018 to help mitigate this. U.S. states such as California are also enacting laws to make the use of deepfakes during elections illegal.
One of the areas that deepfakes are likely to excel in is cybercrime. The following are four key areas that will likely be enhanced by deepfake technology:
The CEO in our introduction was a victim of BEC fraud. Social engineering is a key tactic in this scam. Using a deepfake to engage with and trick employees means that organizations will need to be even more vigilant about their business processes.
The use of deepfakes in phishing campaigns would make them more difficult for the individual to detect as a scam. For example, in social media phishing, a faked video of a celebrity could be used to extort money from unwitting victims.
Synthetic identity and imposter scams are increasingly used in cybercrime. Imposter scams are the most frequent complaint made to the Federal Trade Commission (FTC). If digital identity systems use verification that requires facial recognition, deepfakes could potentially be used to create fraudulent accounts.
This scam attempts to shame an individual into handing over money. It usually involves a threat to publish a video of the recipient in a compromising position. Sextortion could be made even more sinister if a deepfake video of the recipient were used. High-net-worth individuals may become the first victims.
Work is ongoing to mitigate the threat of malicious deepfakes. Research from Ekraam Sabir et al. has shown success in deepfake detection. Google and its partners are also working in this area, providing an evolving dataset of deepfake videos; this data is being opened up to the community to help build deepfake detection solutions.
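Many detection approaches score a video frame by frame and then aggregate those scores into a single verdict. The sketch below illustrates that aggregation step only; the frame scores and the 0.5 threshold are illustrative assumptions, not the output of any real detector or the method of the research cited above.

```python
def video_deepfake_score(frame_scores: list[float]) -> float:
    """Average per-frame 'fake' probabilities into one video-level score."""
    if not frame_scores:
        raise ValueError("need at least one frame score")
    return sum(frame_scores) / len(frame_scores)

def is_likely_deepfake(frame_scores: list[float], threshold: float = 0.5) -> bool:
    # Flag the video when the average frame score exceeds the threshold.
    return video_deepfake_score(frame_scores) > threshold

# Hypothetical classifier outputs for five frames of a suspect video.
scores = [0.91, 0.87, 0.40, 0.95, 0.88]
print(round(video_deepfake_score(scores), 2))  # 0.8
print(is_likely_deepfake(scores))              # True
```

Averaging across frames makes the verdict robust to a few frames the classifier gets wrong, which is one reason datasets of whole videos, like the one Google released, matter for training and evaluation.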
Deepfakes offer an opportunity to the cybercriminal and a challenge to everyone else. As business owners and individuals, we need to be aware of the deepfake threat, not only in terms of fake news but also as a sinister tool in the arsenal of cybercriminals.
If you would like to evaluate the cybersecurity posture of your own organization, you may wish to download our free self-assessment checklist to find out more! For any questions, we are here to support you!