During the 1970s and 1980s, Memorex ran a string of successful commercials touting the high quality of its audio cassettes. The tagline was: “Is it live, or is it Memorex?”
Yes, it seems kind of quaint nowadays. After all, in today’s world of AI (Artificial Intelligence), we may have a new catchphrase: “Is it real, or is it deepfake?”
The word deepfake has been around for only a couple of years. It is a combination of “deep learning” – a subset of AI that uses neural networks – and “fake.” The result: it is now possible to manipulate videos in ways that still look authentic.
During the past couple of weeks, we have seen high-profile examples of this. There was a deepfake of Facebook’s Mark Zuckerberg, in which he seemed to be talking about world domination. Then there was another of House Speaker Nancy Pelosi, in which it appeared she was slurring her speech (this one actually used less sophisticated technology, known as a “cheapfake”).
Congress is getting concerned, especially in light of the upcoming 2020 election. This week the House Intelligence Committee held a hearing on deepfakes, although it seems unlikely that much will be done.
“The rise of deepfakes on social media is a series of cascading issues that will have real consequences around our concept of freedom of speech,” said Joseph Anthony, who is the CEO of Hero Group. “It’s extremely dangerous to manipulate the truth when important decisions weigh in the balance, and the stakes are high across the board. Viral deepfake videos don’t just damage the credibility of influential people like politicians, brands and celebrities; they could potentially cause harm to our society by affecting stock prices or global policy efforts. Though some people are creating them for good fun and humor, experimenting with this technology is like awakening a sleeping giant. It goes beyond goofing off, into manipulative and malicious territory.”
Now it’s certainly clear that deepfake technology will get better and better. And over time, this may make it difficult to really know what’s true, which could have a corrosive impact.
It’s also important to keep in mind that it is getting much easier to develop deepfakes. “They take the threat of fake news even higher, as seemingly anyone can now have the ability to literally and convincingly put words in someone else’s mouth,” said Gil Becker, who is the CEO of AnyClip.
So what can be done to combat deepfakes? Well, one approach is to build a delay into social networks so that videos can be evaluated – say, by leveraging sophisticated AI/ML – before they go viral. To this end, Anthony recommends a form of watermarking.
“Whichever way that authentication is developed technologically, it’s clear this is the kind of investment that will cost a ton of money, but it has to be done,” he said. “Silicon Valley and all the tech companies are all about growing fast and keeping their cash flow in the positive. I expect they’ll continue to fight back on making these investments in security.”
Yet despite all this, the fears about deepfakes may still be overblown. If anything, the recent examples of Zuckerberg and Pelosi may serve as a wake-up call to spur constructive approaches.
“Currently, there is a lot of sensationalism on the use and implications of deepfakes,” said Jason Tan, who is the CEO of Sift. “It is also very much fear-based. Even the word sounds sinister or malicious, when really, it is ‘hyper realistic.’ Deepfakes can provide innovation in the market and we shouldn’t blatantly dismiss the technology as all bad. We should be looking at the potential benefits of it as well.”