96% of deepfakes are porn. New laws give victims a way to fight back.


One of the most frightening aspects of technology today is its ability to bend reality. A recent study by cybersecurity company Deeptrace found that the number of deepfake videos online nearly doubled in nine months. What's most worrying is the type of deepfake that's beginning to dominate the internet.

If you're unfamiliar with deepfakes, think of them as a far more convincing version of Snapchat's face swap. Through this type of video manipulation, politicians, celebrities, and regular everyday people can be made to appear to say and do things they never did.

Researchers at Deeptrace reported that deepfake videos online increased from 7,464 in December 2018 to 14,698 nine months later. Of those videos, researchers said 96 percent were pornographic.

Part of the reason for the rise is that deepfakes aren't especially hard to create. Katja Bego, principal researcher at innovation foundation Nesta, told the BBC that all it takes to make one is a small amount of input data.

However, ease of creation doesn't explain why pornographic deepfakes are being made so often. According to the BBC, researchers frequently found deepfakes in which a celebrity's face had been swapped onto an adult performer's body in an existing scene — meaning the deepfake itself is likely nonconsensual.

"As the technology is advancing so rapidly, it is important for policymakers to think now about possible responses," Bego said. "This means looking at developing detection tools and raising public awareness, but also [to] consider the underlying social and political dynamics that make deepfakes potentially so dangerous."

This isn't the first time issues with pornographic deepfakes have come to light. In June, DeepNude, an app that allowed people to create nude images of women without their permission, shut down shortly after its public launch.

When the app was live, it could be downloaded for Windows or Linux machines. The free version stamped large watermarks on any images it generated; users could pay $50 for the full version, which removed them.

"Despite safety measures adopted (watermarks) if 500,000 people use it, the probability that people will misuse it is too high," the app's creator, known only as Alberto, wrote on Twitter. He finished his message with, "The world is not yet ready for DeepNude."

Given the growing concern, some states are taking matters into their own hands with laws aimed directly at deepfakes. California recently cracked down on political and pornographic deepfakes, a move that may prove important for the upcoming 2020 presidential election.

The first California bill signed into law makes it illegal to post manipulated videos of candidates within 60 days of an election. The bill specifically targets videos intended to "injure the candidate's reputation or to deceive a voter into voting for or against the candidate."

There is a loophole in the law, though: a deepfake can still be posted as long as the video carries a disclosure stating that it has been manipulated. The second bill, however, is far more absolute — and targets the vast majority of deepfakes.

Under the second law, California residents can sue anybody who makes a pornographic deepfake using their image without their consent. The law doesn't guarantee victims a win in court, but it is a start toward countering a deeply violating use of the technology.