
How we’re all training A.I. in the development of deepfakes

“A lie can circle the planet while the truth is putting its shoes on.” – attributed to Mark Twain


Once in a while, we all have to solve CAPTCHA challenges to prove we’re human.

We’re asked to type out some scrambled word or find stop lights in pictures.

These ‘humanity tests’ train Google’s ‘vision.’

Through them, Google’s algorithm learns to read text better and to tell apart objects in pictures of intersections.

In other words, every time we solve one of these ‘are-you-human?’ puzzles, we’re helping Google train its image recognition algorithms.

A big chunk of machine learning is giving the machine both the question and the correct answer. Once it sees enough correctly marked stop lights, it can better see them on its own.
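
To make that “question plus answer” recipe concrete, here’s a toy supervised-learning sketch. The data is synthetic – random feature vectors stand in for CAPTCHA tiles, with labels marking which ones “contain a stop light” – so this illustrates the idea, not Google’s actual pipeline:

```python
# Toy supervised learning: give the model "questions" (feature vectors)
# paired with "correct answers" (labels), then let it predict on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
has_light = rng.normal(loc=1.0, size=(200, 8))    # tiles labeled "stop light"
no_light = rng.normal(loc=-1.0, size=(200, 8))    # tiles labeled "no stop light"
X = np.vstack([has_light, no_light])              # the questions
y = np.array([1] * 200 + [0] * 200)               # the correct answers

model = LogisticRegression().fit(X, y)            # learn from labeled pairs
print(model.predict(rng.normal(loc=1.0, size=(1, 8))))  # most likely [1]
```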

In the last decade, we saw amazing leaps in computer vision and deep learning, with the release of mind-blowing creative projects like Google’s DeepDream (which makes trippy images) and NVIDIA’s StyleGAN (which can copy different “features” from one picture to another).

Many of these advancements in machine learning and computer vision were released as open-source projects on GitHub.

If you’re technically inclined, you could run these projects on your own home computer. 

Deepfakes under the hood

A wave of hobbyist projects came out that used face, pose, and gesture detection.

Remarkable, groundbreaking stuff was released publicly, allowing people to learn from it and build on top of it.

In 2017, all this progress and unprecedented openness culminated in a new phenomenon: deepfakes.

Deepfakes are computer-manipulated videos in which one face is replaced with another, frame by frame, by computer code. All it needs is face pictures – lots of face pictures.

To implement this, the code analyzes each frame and replaces the original face with a new face. 

More sophisticated code can reproduce the original pose, facial expression, and mouth shape, match the skin color, and more.

Then it stitches the manipulated frames back into a video. 
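
Here’s a minimal sketch of that loop in Python with OpenCV. The file names are hypothetical, and a static picture stands in for the generated face – real deepfake tools synthesize a pose- and lighting-matched face with a trained neural network instead of a simple paste:

```python
# Sketch of the frame-by-frame face-replacement loop described above.
import cv2

# Off-the-shelf face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
new_face = cv2.imread("new_face.jpg")          # face to swap in (hypothetical file)

cap = cv2.VideoCapture("source_video.mp4")     # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("deepfaked.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.1, 5):
        # A real system would render a matched face here; we just resize
        # the replacement face and paste it over the detected region.
        frame[y:y + fh, x:x + fw] = cv2.resize(new_face, (fw, fh))
    out.write(frame)                           # stitch frames back into video

cap.release()
out.release()
```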

Deepfakes find an engaged audience on Reddit

For a while, these manipulated videos were everywhere on Reddit.

Anyone subscribed to a popular TV show’s subreddit has seen Nicolas Cage’s face faked onto a host of characters from other shows.

Shortly after the first deepfaked videos were posted on Reddit, the source code allowing anyone to create their own deepfake videos was released on GitHub.

Like many innovations before it, this technology was applied to one of humanity’s noblest pursuits: porn. Lots and lots of deepfaked porn.

Redditors wanted to see their favorite actresses in porn. Now, they had the tools to make that a reality. 

New videos and celebrity-specific requests were posted daily.

In the pursuit of believability, lively discussions centered around which porn actress body would ‘work’ with this-or-that celeb’s face.

While small in scope, this trend suggests that women face asymmetric risk when it comes to the rise of deepfakes.

Women are significantly more likely than men to be targeted in revenge porn campaigns, often launched by aggressors with personal agendas against them. It is not inconceivable, then, that they might also become targets of sextortion schemes; deepfakes widen the sphere of who could target them to just about anyone.

Reddit cracks down on deepfakes

For all its strengths, Reddit historically hasn’t had a stellar record of banning objectionable content.

It generally relies on communities to self-moderate, rarely stepping in with dramatic measures.

For example, despite numerous complaints, it took Reddit a long time to ban subreddits like r/CreepShots, where NSFW, often candid, pictures of women were posted.

But that wasn’t the case with deepfakes.

The largest internet hosting platforms reacted swiftly.

After Gfycat and PornHub banned deepfaked porn, Reddit followed suit. 

Reddit updated its terms of use to clarify that deepfaked porn qualified as “involuntary” porn, a label initially used to describe revenge porn.

“Reddit prohibits the dissemination of images or video depicting any person in a state of nudity or engaged in any act of sexual conduct apparently created or posted without their permission, including depictions that have been faked… Additionally, do not post images or video of another person for the specific purpose of faking explicit content or soliciting ‘lookalike’ pornography.”

Swept under the rug

By banning all faked sexualized content, the big platforms relegated deepfakes into private, closed-off, and less moderated communities. 

But people didn’t stop creating deepfake content. The tools continue to improve and have found homes in other dark corners of the internet. 

As the steady stream of content created by Redditors and porn enthusiasts (potayto, potahto) dried up, deepfakes made their app debut.

In 2018, GitHub reportedly limited access to FaceSwap, an open-source project for creating deepfakes.

The barrier to entry?

A login.

Deepfakes, the app

In 2019, Motherboard reported on DeepNude, a $50 app that uses neural networks to undress women. 

You give it a picture of a clothed woman, and it “undresses” her by replacing the clothes with naked body parts.

The neural network in this app was only trained on pictures of women.

“It swaps clothes for naked breasts and a vulva, and only works on images of women. When Motherboard tried using an image of a man, it replaced his pants with a vulva.” (from the article)

Later that year, The Verge reported that GitHub had banned all open-source repositories containing DeepNude code.

Open-source code re-packaged or re-imagined as apps is hardly surprising – it’s par for the course.

But there’s something unsettling about DeepNude’s nonconsensual and invasive nature.

Trojan horse

That DeepNude only works on women highlights the additional threats women may face as this technology continues to develop.

But it’s hardly the only cause for concern.

FaceApp fiasco

In 2019, The Washington Post reported on a free viral app called FaceApp.

The popular mobile download manipulated faces to look older. Innocent as that was, panic grew as it became known that the company – based in Russia – stored user photos on its servers.

FaceApp’s Terms of Use stated that any uploaded photos could be used in any way the company wanted, commercial or otherwise, and that use of your image was “irrevocable,” “perpetual,” and “without compensation to you.”

Sinister intent?

Of course, from an objective point of view, we can’t assume every company out there is out to get our data for malicious purposes – can we?

Perhaps. 

But if you’re getting something for free, you’re not the customer – you’re the product. 

Eventually, that photo with the bomb lighting, or a preview of how you’ll look at age 50, could become source material for videos intended to cause real harm.

Under the radar

Progress is not limited to image manipulation. In 2016, Adobe debuted VoCo, dubbed the “Photoshop for voice.”

The software can listen to voice samples and make that voice say anything.

Believable altered videos and manipulated voices to match – what could possibly go wrong?

Defenses against the threat of deepfakes

The good news is that there are companies working on deepfake detectors. They, too, are getting better all the time.

But can they keep up? 

Neural networks can be pitted against each other. With the invention of generative adversarial networks (GANs), a class of neural networks that can improve on their own, the role of human input is minimized.

One of the most time-consuming tasks in machine learning is crafting and acquiring the labeled input data that ‘trains’ the neural network.

GANs offload some of that work to other neural networks, allowing the networks to continue learning without human input.

GANs are used both for creating fakes and for detecting them. The detector can self-optimize until it detects fakes reliably. Conversely, the deepfake creator does the same, auto-improving until it no longer triggers detectors.
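
To make that dynamic concrete, here’s a minimal sketch of the adversarial loop, assuming PyTorch and toy one-dimensional data in place of real images. The “detector” is the discriminator; the “creator” is the generator:

```python
# Minimal GAN loop: the creator (generator) and detector (discriminator)
# improve by exploiting each other's mistakes, with no human labeling.
import torch
import torch.nn as nn

DIM, NOISE = 64, 16                                   # toy sizes

gen = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, DIM))
disc = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(),
                     nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=32):
    # Stand-in for a batch of real images (just shifted random vectors).
    return torch.randn(n, DIM) + 3.0

for step in range(1000):
    real = real_batch()
    fake = gen(torch.randn(len(real), NOISE))

    # Detector step: score real samples as 1, generated samples as 0.
    d_loss = (bce(disc(real), torch.ones(len(real), 1)) +
              bce(disc(fake.detach()), torch.zeros(len(real), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Creator step: tune the generator until the detector is fooled.
    g_loss = bce(disc(fake), torch.ones(len(real), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```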


The paradox of this yin-yang dynamic is that whoever has the best deepfake detectors will also have the best deepfake-creation technology. 

The future of deepfakes

With little regard for the many ways they can cause damage, these tools continue to be developed by some of the brightest minds in the industry.

It will get increasingly difficult to differentiate between reality and deepfaked content. 

The escalating sophistication of these tools will create more problems than they will solve.

Lest we forget: Facebook, once a ‘good fun’ way to connect with others, brought deeper issues than many bargained for.

Deepfaked content provides marginal entertainment value and a hard-to-quantify but undeniable potential for social harm.

Porn might be the primary use now, but even the current applications of this technology warrant regulation and severe penalties for creating nonconsensual, exploitative content.

But it’s only the beginning.

Imagine a deepfaked video of a politician inciting violence. Careers could be sabotaged by actions never taken. The potential for extortion and blackmail is high.

We’ve already seen an escalation of dangerous applications.

Speaker of the House Nancy Pelosi was the subject of two doctored videos that made the rounds on social media.

Both times, the videos were manipulated to make it appear as though she were under the influence or impaired.

The videos were shared almost 200,000 times before being removed by Facebook.

Conversely, it will be hard to tell what’s real, eroding accountability at all levels of society. Perhaps that politician really did incite violence, but the mere existence of deepfakes creates plausible deniability.

Both of these situations pose more risks than deepfakes can redeem with positive applications. Funny as Ron Swanson’s face on Wednesday Addams’s body might be, it’s a Trojan horse.

If we think misinformation is a problem now – we ain’t seen nothing yet. 
