A new batch of deepfake videos and images is causing a stir — a cyclical phenomenon that appears to be happening with more frequency, as several bills targeting deepfakes remain before Congress.
The issue made headlines this week after fake pornographic images of pop star Taylor Swift proliferated on X (formerly Twitter), Telegram and elsewhere. Many of the posts were deleted, but some racked up millions of views before they were taken down.
The attack on Swift’s famous image is a reminder that deepfakes have become increasingly easier to create in recent years. Many apps can swap faces onto other media with high fidelity, and the latest iteration promises to use artificial intelligence to produce more convincing images and videos.
Deepfakes often target young women
Many deepfake apps are promoted as a way for ordinary people to create funny videos and memes. But many end results don't live up to that framing. As Caroline Quirk wrote in the Princeton Legal Journal last year, "since the technology became more widely available, 90-95% of deepfake videos are now non-consensual pornographic videos, with 90% targeting women, mostly minors."
At their core, such deepfakes are an attack on privacy, said law professor Danielle Citron.
"It's turning women's faces into porn, stealing their identities, forcing sexual expression on them, and giving them an identity they didn't choose," Citron said last month on a University of Virginia podcast. She teaches and writes about privacy, free speech, and civil rights at the university's law school.
Citron points out that deepfake images and videos are just new forms of lying, a problem humans have been dealing with for millennia. The difference, she said, is that these lies arrive in video form, which viewers tend to find viscerally convincing. In the best deepfakes, the fabrication is concealed by sophisticated techniques that are extremely difficult to detect.
We've seen moments like this coming. Deepfake videos showing "Tom Cruise" in a variety of unlikely scenarios have racked up hundreds of millions of views on TikTok in recent years. Created by visual effects artist Chris Umé and Cruise impersonator Miles Fisher, the project is fairly tame compared with many other deepfake campaigns, and the videos carry a watermark label reading "#deeptomcruise," a nod to their status as acknowledged fakes.
Deepfakes pose growing challenges with little regulation
The risk of harm from deepfakes is wide-ranging, from the theft of women’s faces to create sexually explicit videos, to the use of celebrities in unsanctioned promotions, to the use of manipulated images in political disinformation campaigns.
These risks were highlighted years ago, notably in 2017, when researchers used what they described as a visual form of lip-syncing to generate several very realistic videos of former President Barack Obama speaking.
In the experiment, researchers paired real audio of Obama speaking with computer-manipulated video. The effect was disturbing: it showed video's potential power to put words in the mouth of one of the most powerful people on the planet.
One Reddit commenter described the dilemma under a deepfake video last year: "I think everyone gets fooled: older people think everything they see is real, and younger people have seen so much deepfake stuff that they won't believe anything they see is real."
As UVA law professor Citron said last month, "I think law needs to be brought back into the calculus, because right now the 'internet,' I use air quotes, right, is often seen as, like, the Wild West."
So far, the most stringent restrictions on the use of deepfakes in the United States are not at the federal level but at the state level, including in California, Virginia and Hawaii, which prohibit non-consensual deepfake porn.
But as the Brennan Center for Justice reports, these and other state laws have differing standards and focus on different content formats. At the federal level, the center said last month, at least eight bills seek to regulate deepfakes and similar "synthetic media."
In addition to targeting revenge porn and other crimes, a number of laws and proposals seek to impose special restrictions and requirements on videos related to political campaigns and elections. Some companies are acting on their own: last year, for example, first Google and then Meta announced that they would require labels on political ads created using artificial intelligence.
Then there are the scams
Over the past month, visitors to YouTube, Facebook and other platforms have seen video ads claiming Jennifer Aniston was offering great deals on Apple laptops.
"If you're watching this video, you're one of 10,000 lucky people who have the chance to buy a Macbook Pro for $2," the fake Aniston says in the ad. "I'm Jennifer Aniston," the video falsely states, urging people to click on a link to claim their new computers.
A common goal of this type of scam is to trick people into signing up for expensive online subscription services, as the website MalwareTips reported of a similar recent scheme.
Last October, actor Tom Hanks warned people that artificial intelligence was using his image to appear to sell dental insurance online.
"I have nothing to do with it," Hanks said in an Instagram post.
Soon after, CBS Mornings co-host Gayle King sounded the alarm about a video purportedly showing her hawking diet gummies.
“Please don’t be fooled by these AI videos,” she said.