What are deepfakes – and how can you spot them?

AI-generated fake videos are becoming more common (and convincing). Here’s why we should be worried!

Kartik Pawar CC-BY

What is a deepfake?

Have you ever come across videos where Barack Obama seems to be calling Donald Trump names, or Mark Zuckerberg appears to be boasting about controlling tons of people's private information, or maybe you've seen Jon Snow apologizing for the not-so-great Game of Thrones ending? If you answered yes, then you've witnessed something called a "deepfake."


Think of deepfakes as the 21st-century version of messing with photos in Photoshop. They use a type of artificial intelligence called "deep learning" to create fake videos or images that look incredibly real. That's where the name comes from: "deep learning" plus "fake" gives you "deepfake."


So, if you've ever wanted to make a video of a politician saying something they never actually said, star in your favorite movie scene, or show off your dance moves like a pro, then you might want to dive into the world of making deepfakes.


What are they for?

A lot of deepfake videos are actually pornographic. In September 2019, a company called Deeptrace discovered around 15,000 deepfake videos online, and that number almost doubled in just nine months. Shockingly, 96% of these videos were pornographic, and 99% of them used the faces of female celebrities and put them onto the bodies of porn stars.


Now, because new techniques let ordinary people create deepfake videos from just a handful of photos, these fakes could spread well beyond celebrities and be used for things like revenge porn. In other words, deepfake technology is being weaponised against women, as Danielle Citron, a law professor at Boston University, points out.


But it's not all pornographic. Plenty of deepfakes are made for fun: spoofs, satire, jokes, and pranks.


Is it just about videos?

No. Deepfake technology can make realistic but completely made-up photos. For example, there was a person named "Maisy Kinsley" who claimed to be a Bloomberg journalist on LinkedIn and Twitter, but it's likely that she was a deepfake - a fake identity created by this technology. Another fake person, "Katie Jones," said she worked at the Center for Strategic and International Studies, but it's believed she was also a deepfake made for spying.


This technology doesn't stop at images; it can also manipulate audio to create fake voices of famous people. In one case, the head of a UK branch of a German energy company wired a large sum of money to a Hungarian bank account after a scammer phoned him and imitated the German CEO's voice. The company's insurer believes the voice was a deepfake, but there is no conclusive proof. Similar scams have used recorded WhatsApp voice messages.


Who is making deepfakes?

People from various backgrounds are using deepfake technology. This includes everyone from researchers, hobbyists, and special effects studios to people in the adult entertainment industry. Even governments are getting into it, using deepfakes as part of their online tactics. They might use it to undermine extremist groups or to try and communicate with specific individuals they're interested in.


Michael Ordonez CC BY-ND


What is the impact of deepfakes?
Kartik Pawar CC-BY

How do you spot a deepfake?

As deepfake technology gets better, spotting fake content becomes harder. For example, in 2018, US researchers discovered that deepfake faces didn't blink normally, because most of the images the systems learned from showed people with their eyes open. At first this looked like a reliable way to detect deepfakes, but soon after the weakness was reported, deepfakes appeared that blinked like real people. That's the pattern: as soon as a flaw is revealed, it gets fixed.
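To make the blink idea concrete, here is a minimal sketch in Python of the eye aspect ratio (EAR) heuristic used in academic blink-detection work. Everything here is illustrative: a real detector would pull the six eye landmarks from a face tracker, and the 0.2 threshold and two-frame minimum are assumed values, not tuned constants.

```python
from math import dist

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) landmarks.

    Points 0 and 3 are the horizontal eye corners; pairs (1, 5)
    and (2, 4) are the upper/lower lid points. The ratio drops
    sharply toward zero when the eye closes.
    """
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is a run of at least `min_frames` consecutive frames
    with EAR below `threshold`. A suspiciously low blink count over
    a long clip was an early deepfake tell.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

An open eye might give an EAR around 0.3 and a closed one around 0.1, so scanning the per-frame series for dips counts the blinks; as the article notes, modern deepfakes defeat this particular check.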

Low-quality deepfakes are easier to identify. They might have issues like mismatched lip movements, uneven skin tones, flickering edges around the fake faces, and they struggle with fine details like hair, especially when individual strands are visible near the edges. Poorly rendered jewelry and teeth, as well as odd lighting effects, can also give away a deepfake.

To combat this, governments, universities, and tech companies are funding research to detect deepfakes. There's even a competition called the Deepfake Detection Challenge, supported by Microsoft, Facebook, and Amazon, where research teams worldwide compete to find the best ways to spot deepfakes.

It's worth knowing that Facebook has banned deepfake videos that are likely to mislead viewers into thinking someone said things they never actually said, particularly around major events like the 2020 US election. However, the policy only covers fake content created using AI, so "shallowfakes" (less sophisticated manipulated content) are still allowed on the platform.


https://www.freepik.com/free-photo/portrait-hacker-with-mask_4473952.htm#query=deep%20fakes&position=8&from_view=search&track=ais&uuid=ad4173a1-e831-4a4c-9b55-4a69717013f6 CC-BY freepik

Michael Ordonez CC BY-ND


What’s the solution?

Interestingly, AI could be the solution here. Artificial intelligence is already being used to identify fake videos, but most of the current systems work better with celebrities because they have lots of video material available for training. Now, tech companies are developing detection systems that can recognize fakes, no matter who's in the video.

Another approach is to track where media comes from. Digital watermarks aren't foolproof, but a blockchain-style online ledger could act as a tamper-proof record book for videos, pictures, and audio: it logs where each file came from and any changes made to it, so its origin and any alterations can always be verified.
