Deepfakes - What Are They and How to Spot Them

Deepfakes refer to media that uses artificial intelligence to replace a person in an existing image, audio recording, or video with someone else’s likeness. This advanced technology can make it look like someone said or did something they never actually said or did. Deepfakes have become increasingly realistic and challenging to detect.

While they can be used for fun and entertainment, there are concerns that they could also spread misinformation or harm reputations. Read on for a deeper look at how deepfakes work and their implications.

What is a Deepfake? 

A deepfake is a hyper-realistic fake video or image created using artificial intelligence. It is made by feeding a neural network hours of video footage of a target person until the AI algorithm can mimic their facial expressions, mannerisms, voice, and movements. The algorithm can then take an existing video or image and replace the person's face and voice with a synthesised fake that looks and sounds authentic. 

Deepfakes leverage powerful AI capabilities like deep learning and neural networks to produce compelling forgeries that are incredibly difficult to detect. This emerging technology raises concerns about the potential for misinformation and fraud.

What are Deepfakes Used For?

Deepfakes have some legitimate applications in media and entertainment. For example, they can digitally resurrect deceased actors or de-age older performers in new films and shows with the proper consent. This allows creative works to feature past stars or younger versions of existing celebrities and public figures. However, deepfakes also enable several unethical uses that violate privacy:

  • Non-consensual intimate media: Faces of celebrities and private individuals are commonly grafted onto adult video content without their permission. The resulting realistic fakes proliferate rapidly, inflicting severe reputational damage and psychological trauma, and victims often find it extremely difficult to contain the spread once their likeness has been used this way.

  • Fake news: High-quality fake videos portraying prominent individuals saying or doing things they never actually did can spread rapidly online. They have the dangerous potential to impact public discourse and sway opinions during events like elections.

  • Fraud: Realistic deepfakes combined with existing personal info obtained illegally can facilitate identity theft and large-scale financial fraud.

In summary, while deepfake tech has some creative uses, it can also enable crimes and unethical privacy violations, resulting in severe personal, reputational and societal damage. Regulating this technology and building tools to authenticate media and detect deepfakes will be crucial.

Examples of Deepfakes 

Some examples of deepfakes include:

  • Non-consensual intimate videos and images of celebrities such as Scarlett Johansson and Emma Watson.

  • A video of former US President Barack Obama delivering a public speech he never made.

  • A video of Facebook CEO Mark Zuckerberg boasting about his control over stolen user data.

  • A series of viral videos showing Tom Cruise performing impossible stunts.

How are Deepfakes Made?

Deepfakes are made using deep learning artificial intelligence to realistically replace a person's face or body in images and video.

The process begins by feeding an AI algorithm large datasets of images and videos of a target individual. The AI uses this visual data to decode and learn the intricate details of the target's facial expressions, lip movements, body language, voice, and mannerisms. 

After thoroughly analysing the visual and audio patterns, the AI then maps the learned characteristics of the target person onto an imposter's face and body in a separate video. The AI seamlessly stitches their expressions and gestures onto the imposter to create realistic fakes that can fool the human eye.

Creating convincing deepfakes requires massive visual datasets and extensive neural network training. The more source data the AI has to draw from, the more accurate the vocal, facial, and bodily representations it can synthesise in the counterfeit videos. This is why deepfakes of famous people, for whom abundant images and footage exist, are so common.
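The mapping step described above is often built on an autoencoder with one shared encoder and a separate decoder per identity. The toy sketch below uses plain NumPy, with simple linear layers standing in for the convolutional networks real tools use, and all array sizes chosen arbitrarily for illustration; it only shows where the swap happens, not the training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 16 flattened 8x8 grayscale frames for persons A and B.
faces_a = rng.random((16, 64))
faces_b = rng.random((16, 64))

# Classic face-swap setup: one shared encoder, one decoder per identity.
W_enc = rng.standard_normal((64, 16)) * 0.1    # shared encoder: 64 -> 16 latent
W_dec_a = rng.standard_normal((16, 64)) * 0.1  # decoder that reconstructs A
W_dec_b = rng.standard_normal((16, 64)) * 0.1  # decoder that reconstructs B

def encode(x):
    return x @ W_enc

def decode(z, W_dec):
    return z @ W_dec

# Training (omitted) would minimise reconstruction error so that
# encode + decode_a reproduces A's faces and encode + decode_b B's faces.
# The swap happens at inference time: encode frames of person A, but
# decode with B's decoder, yielding B's identity with A's expression/pose.
latent = encode(faces_a)
swapped = decode(latent, W_dec_b)

print(swapped.shape)  # (16, 64): one fake frame per input frame
```

Because the encoder is shared, the latent code captures pose and expression common to both identities, while each decoder supplies the identity-specific appearance.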

How to spot a Deepfake Video

Here are some tips to spot a deepfake:

  • Lighting inconsistencies: Look for odd shadows or a lighting direction on the face that doesn’t match the scene.

  • Weird teeth and eyes: Teeth and eyes are more challenging to mimic. Check for discoloured or misshapen teeth, missing eyelashes, and odd eye reflections.

  • Strange outlines: Check closely around hairlines, nostrils, neck, and ears for blurry pixels.  

  • No visible breath: In cold scenes, look for breath condensation; its absence can expose a fake.

  • Missing details: Deepfakes struggle with fine details such as earrings, hair accessories, and reflections in the eyes. Their absence is a red flag.

  • Unnatural movement: Stilted gestures, strange drifts, and jittery head movements point to a deepfake.

  • Pixelated patches: Blurry, pixelated patches show up around moving body parts where the AI fails to blend them seamlessly.

  • Mismatched audio: Lip movements out of sync with the speech, muffled sounds, and poor dubbing reveal manipulated audio.

  • No public record: Search online to verify whether the speech or event shown actually happened.
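Several of the tips above (strange outlines, pixelated patches) come down to spotting regions with abnormally little fine detail. One common quantitative proxy for this, sketched below in plain NumPy on synthetic patches, is the variance of a discrete Laplacian: sharp regions score high, blurred or smoothed-over regions score low. This is a toy illustration of the heuristic, not a production deepfake detector.

```python
import numpy as np

def laplacian_variance(patch):
    """Variance of a 4-neighbour discrete Laplacian: low values suggest blur."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))            # patch full of high-frequency detail
blurry = np.full((32, 32), 0.5)         # flat, detail-free patch
blurry += 0.01 * rng.random((32, 32))   # tiny noise so it isn't exactly constant

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

In practice, forensic tools compare such scores across regions of the same frame: a face noticeably smoother than its surroundings is worth a closer look.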

Potential Harms of Deepfakes

Deepfakes pose many dangers to individuals and society, including:

  • Non-consensual private imagery: Deepfakes are used to create inappropriate videos and images without consent, causing lasting trauma.  

  • Misinformation: Realistic forged videos can spread false information and propaganda faster than text and images.

  • Financial fraud: Deepfakes of executives or clients could be used for fraudulent transfers and transactions.  

  • Blackmail and extortion: Realistic deepfakes depicting an individual in a false compromising scenario could be used for blackmail and extortion.  

  • Sabotage: Forged videos could sabotage the reputation and credibility of influential individuals and entities. 

  • Geopolitical instability: State-sponsored deepfakes could be crafted to damage international relations and global security by depicting false events.

Deepfakes enable the non-consensual spread of misinformation and private content, fraud, blackmail, sabotage, and geopolitical manipulation. They exploit personal likenesses and undermine public trust in media authenticity. In our interconnected world, the potential for widespread harm by malicious deepfakes is very high.

Conclusion

It will be crucial to manage and mitigate deepfakes’ harmful potential while balancing creative prospects. Educating people and verifying media authenticity will help counter disinformation. Advances in AI also offer hope for detecting deepfakes. Maintaining public trust while preventing exploitation will require collaboration between technology companies, AI experts, lawmakers, and media giants.

FAQs

Are deepfakes illegal?

It depends. Using deepfakes to create non-consensual intimate imagery violates privacy laws in some countries, but deepfakes themselves are not universally illegal.

How does a deepfake work?

Deepfakes use AI algorithms to decode a target's facial and vocal details from large datasets. These learned details are then used to replace the face/voice of another person in a video.  

What is the most common deepfake?

Currently, non-consensual intimate imagery of celebrities is the most common type of deepfake. These exploit likenesses without consent, causing serious harm.

How harmful are Deepfakes?

Deepfakes enable fraud, blackmail, geopolitical destabilisation, and more. They exploit likenesses and undermine public trust. The potential harm is very high.

Can Deepfakes be tracked?

Deepfake media leaves subtle traces invisible to the naked eye. Using digital forensic tools, experts can spot manipulated pixels and other artifacts left during deepfake creation to assess a video's authenticity.

What are celebrity Deepfakes?

Celebrity deepfakes involve grafting a famous person’s likeness onto other images or videos without their consent. These fake videos pose serious reputational risks.