What’s at stake for companies or individuals when seeing or hearing is no longer believing?
The fact that people can see something like this, believe that it’s true, and that collectively the markets can react to it is a huge concern for people.
The newest attack we are seeing, which in some sense people had not anticipated, is the use of audio deepfakes to carry out financial scams.
In the last few months, security firm Symantec says it has seen three attempted scams at companies involving audio deepfakes of top executives. In one case, a company lost 10 million dollars after a phone call in which deepfaked audio of the CEO’s voice requested the transfer. The perpetrators still haven’t been caught.
It’s no different from the technology that powers products like Alexa. And the fake CEO will answer whatever question you have, because the responses can be created on the fly.
The Profitable Side
Deepfakes also offer profit opportunities for some companies. You’d have an actor that would license their likeness, and then, at a very low cost, the studio could produce all kinds of marketing materials with their likeness without having to go through the same level of production that they do today. And so I can imagine lots and lots of audio being produced using the voice of an actor, the voice of some other VIP, and all of that being monetized.
Now Amazon is doing just that. It announced in September 2019 that Alexa devices can speak with the voice of celebrities like Samuel L. Jackson.
On Instagram, A.I. generated influencers like Lil Miquela are backed by Silicon Valley money. The deepfake videos of these virtual influencers bring in millions of followers, which means a lot of potential revenue without having to pay talent to perform.
And in China, a government-backed media outlet introduced a virtual news anchor who can work 24 hours a day. I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted.
But the potential for misuse is high. So one of the most insidious uses of deepfakes is in what we call revenge porn or pornography that somebody puts out to get back at somebody who they believe wronged them. This also happens with celebrities. But certainly, things like this that would ruin the reputation of a celebrity or somebody else in the public eye are going to be top of mind for these social media companies.
Elections & Global Stability
Also top of mind for social media companies: the 2020 elections. Researchers expect that deepfakes will probably be deployed in the 2020 election. Will they be deployed by a foreign nation looking to cause instability? That’s possible. And that could be significant. In that case, you would have a candidate appearing to say something totally outrageous, something that inflames the markets, or something that puts their chances of being elected in question.
There is also a concern about faking words from leaders of countries, from leaders of organizations like the IMF that would have a significant consequence, even if it was short term, on markets and even on global stability in terms of conflict.
In May, House Speaker Nancy Pelosi accused Facebook of allowing disinformation to spread when the company refused to take down a manipulated video of her. In response, Facebook updated its content review policies, doubling down on its refusal to remove deepfakes. Two British artists tested Facebook’s resolve by posting a deepfake of CEO Mark Zuckerberg on Instagram. Whoever controls the data controls the future. Facebook held its ground, refusing to remove it along with other deepfakes like those featuring Kim Kardashian and President Trump.
Now Facebook is trying to get ahead of deepfakes before they make it on its platforms. It’s spending more than 10 million dollars and partnering with Microsoft to launch a Deepfake Detection Challenge at the end of the year. Facebook itself will create deepfakes with paid actors to be used in the challenge. Then pre-screened participants will compete for financial prizes to create new open-source tools for detecting which videos are fake.
And at least until Facebook announced monetary prizes, the business potential on the detection side was small. There’s not really a mature market segment for deepfake protection yet. The tech is new. The threat landscape is just beginning to emerge. So we’re the first, or among the first, companies to develop and ship technology around this.
One of the best things about the Facebook challenge is that it brings in a lot of people who probably weren’t interested in this technology to try and work on it, and I think what we really need in the space with deepfakes is finding something that is novel that we haven’t thought of before that works for detecting these.
Twitter told CNBC it challenges eight to 10 million accounts per week for policy violations, which includes the use of Twitter to mislead others.
As a uniquely open service, Twitter enables the clarification of falsehoods in real time. It proactively enforces policies and uses technology to halt the spread of content propagated through manipulative tactics.
And it recently acquired a London-based startup called Fabula A.I., which has a patented A.I. system it calls geometric deep learning that uses algorithms to detect the spread of misinformation online.
At YouTube, which is owned by Google, community guidelines prohibit deceptive practices and videos are regularly removed for violating these guidelines. Google launched a program last year to advance detection of fake audio specifically, including its own automatic speaker verification spoof challenge, inviting researchers to submit countermeasures against fake speech.
One small cybersecurity company has already launched an open-source tool that’s helping create algorithms to detect deepfakes.
The way our platform works is we’re pulling in billions of pieces of content on a monthly basis. Text, images, video, all kinds of stuff. And so in this case, as the video flows through our platform, we’ll now route it through deepfake detection that says like, deepfake, not deepfake, and if it is a deepfake, alert our customers.
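The routing described above can be pictured as a simple pipeline: mixed content flows in, only video is sent to a classifier, and anything scored as a likely deepfake triggers a customer alert. The sketch below is purely illustrative; all names are invented, and the detector is a stub standing in for a real trained model.

```python
# Hypothetical sketch of a moderation pipeline that routes incoming video
# through a deepfake classifier and collects alerts. Not ZeroFOX's actual
# system; the detector here is a stub, not a real model.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ContentItem:
    item_id: str
    kind: str          # "text", "image", or "video"
    payload: bytes


def stub_deepfake_score(item: ContentItem) -> float:
    """Placeholder for a real detector; returns a fake-probability in [0, 1]."""
    return 0.9 if b"deepfake" in item.payload else 0.1


def route(items: List[ContentItem],
          detector: Callable[[ContentItem], float],
          threshold: float = 0.5) -> List[str]:
    """Send only video through the detector; return IDs that trip the alert."""
    alerts = []
    for item in items:
        if item.kind == "video" and detector(item) >= threshold:
            alerts.append(item.item_id)   # a real system would notify customers here
    return alerts


feed = [
    ContentItem("a1", "text",  b"hello"),
    ContentItem("a2", "video", b"deepfake clip"),
    ContentItem("a3", "video", b"authentic clip"),
]
print(route(feed, stub_deepfake_score))  # -> ['a2']
```

The key design point the quote describes is that detection is just one stage bolted onto an existing ingestion pipeline, which is why it can be applied to billions of items per month.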
Baltimore-based ZeroFOX intends to be the first to have customers pay to be alerted of deepfakes. Meanwhile, academic institutions and the government are working on other solutions.
Another approach is to put a registry out there where people can register their authentic content and then other people can check with the registry to see if that content is in fact, authentic or not.
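The registry idea above can be sketched in a few lines: publishers register a cryptographic hash of their authentic media, and anyone can later check whether a copy matches a registered original. This is a minimal illustration of the concept, not any particular registry's design.

```python
# Hypothetical sketch of a content registry: authentic media is registered
# by hash, and consumers check copies against the registry.

import hashlib

registry = {}   # sha256 hex digest -> owner label


def register(content: bytes, owner: str) -> str:
    """Record a hash of authentic content and return the digest."""
    digest = hashlib.sha256(content).hexdigest()
    registry[digest] = owner
    return digest


def is_registered(content: bytes) -> bool:
    """True only if this exact byte sequence was previously registered."""
    return hashlib.sha256(content).hexdigest() in registry


original = b"official press video, v1"
register(original, "Example Studios")

print(is_registered(original))                 # True
print(is_registered(b"tampered press video"))  # False
```

One caveat worth noting: an exact hash only matches bit-identical copies, so any re-encoding or edit breaks the match. A practical registry would need perceptual hashing or signed metadata to survive ordinary transcoding.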
The Pentagon’s research enterprise, called the Defense Advanced Research Projects Agency, or DARPA, is fighting deepfakes by first creating its own and then developing technology that can spot them.
Regulation is Beginning
Even if you can detect deepfakes, there is currently little legal recourse that can be taken to stop them.
For the most part, they are legal to research and create at this point. For the most part, if you’re a public person, you really don’t own the rights to your public appearances or even videos taken without your consent in public. There are some things that are unclear about the law, but for the most part, this also applies to just regular people who post videos of themselves on Facebook and YouTube.
But if your image is used in adult content, it’s likely illegal in states like California, where revenge porn is punishable by up to six months in jail and a $1,000 fine.
A lot of porn websites, for example, have declared that they are not going to allow hosting of deepfake or uploading of deep fake-based porn.
In China, a deepfake app that allowed users to graft their faces onto popular movie clips went viral. But it was shut down earlier this month over privacy concerns because the app maintained the rights to users’ images.
In June, New York Congresswoman Yvette Clarke introduced the DEEPFAKES Accountability Act in the House. It would require creators to disclose when a video was altered or generated and allow victims to sue.
The Problem with Cyber Laws
There is no way that a law like this can be enforced against somebody sitting in a country in Eastern Europe, or anywhere else across the globe, that has already proven hostile to the U.S. when it comes to enforcing our laws around cybersecurity.
If you’re intent on publishing a deepfake and not having it traced back to you, there are plenty of ways that you can remain anonymous.
AI vs AI
The people who are defending us against deepfakes are using A.I. just as much as the people who are creating them are using A.I. It’s just that those who are creating deepfakes seem to have a running start on this.
For better or worse, deepfakes are only getting more refined. The challenge will be whether the technology to detect and prevent them can keep up.
The people who are creating deepfakes for nefarious reasons are way ahead of us. I think that they have access to A.I. that is more advanced than what we have working on the solution side and certainly access to more resources than we have so far given people to fight against the problem. Hopefully, that will change with what’s taking place now.
It may sound basic, but how we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of dystopia.