Worried about artificial intelligence hijacking your voice for deepfakes? This tool can help


How the new tool AntiFake works

Washington University in St. Louis

When Scarlett Johansson discovered that her voice and face were being used to promote an artificial intelligence app online without her consent, she took legal action against the app’s maker, Lisa AI.

The video has since been deleted. But many of these “deepfakes” can circulate on the Internet for weeks, such as a recent video featuring MrBeast, in which an unauthorized likeness of the social media celebrity can be seen hawking $2 iPhones.

Artificial intelligence has become so good at mimicking people’s looks and voices that it’s hard to tell whether they’re real or fake. In two recently released surveys about artificial intelligence, one conducted by Northeastern University and the other by Voicebot.ai and Pindrop, about half of respondents said they could not distinguish between synthetic and human-generated content.

This has become a particular problem for celebrities, for whom trying to stay ahead of the AI bots has become a game of whack-a-mole.

Now, new tools are making it easier for the public to detect these deepfakes while making it harder for artificial intelligence systems to create them.

“Generative AI has become an enabling technology that we think will change the world,” said Ning Zhang, an assistant professor of computer science and engineering at Washington University in St. Louis. “But there has to be a way to build a layer of defense when it’s abused.”

Scrambling the signal

Zhang’s research team is developing a new tool to help people combat deepfake abuse, called AntiFake.

“It disrupts the signal so that it prevents AI-based synthesis engines from generating an effective copycat,” Zhang said.

Zhang said AntiFake was inspired by the University of Chicago’s Glaze, a similar tool designed to protect visual artists’ work from being scraped to train generative AI models.

The research is still very new; the team will present the project later this month at a major security conference in Denmark. It’s unclear how the tool will scale.

But essentially, before publishing a video online, you would upload your voice track to the AntiFake platform, which can be used as a standalone application or accessed via the web.

AntiFake scrambles the audio signal to confuse the AI model. The modified track still sounds normal to the human ear, but to the synthesis system it sounds garbled, making it difficult to produce a clean-sounding voice clone.
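AntiFake’s exact algorithm isn’t spelled out here, but “scrambling the signal to confuse the AI model” is the core idea behind a family of techniques known as adversarial perturbation: the waveform is nudged just enough to mislead a voice model while staying inaudible to people. Below is a minimal, hypothetical sketch in PyTorch; the `SpeakerEncoder` is a toy stand-in for illustration only, not AntiFake’s actual model, which would target real pretrained voice encoders.

```python
# Illustrative adversarial perturbation against a voice encoder.
# NOTE: SpeakerEncoder is a toy stand-in, not AntiFake's real model.
import torch

class SpeakerEncoder(torch.nn.Module):
    """Toy stand-in for a pretrained speaker-embedding network."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(1, 16, kernel_size=64, stride=16),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool1d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(16, 8),
        )

    def forward(self, wav):                # wav: (batch, samples)
        return self.net(wav.unsqueeze(1))  # -> (batch, 8) voice embedding

encoder = SpeakerEncoder().eval()
wav = torch.randn(1, 16000)                # 1 second of placeholder audio
with torch.no_grad():
    target = encoder(wav)                  # the speaker's true embedding

# Optimize a small perturbation that pushes the embedding away from the
# true one, while a hard clamp keeps the change below an audibility budget.
delta = torch.zeros_like(wav, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
eps = 0.01                                 # crude proxy for "inaudible"
for _ in range(200):
    opt.zero_grad()
    emb = encoder(wav + delta)
    loss = -torch.nn.functional.mse_loss(emb, target)  # maximize distance
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)

protected = (wav + delta).detach()         # sounds the same, clones poorly
```

The design tension is the `eps` budget: a larger perturbation degrades a clone more but becomes audible, which is why published adversarial-audio work typically weights the perturbation with psychoacoustic models rather than a flat clamp like this one.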

A description of how the tool works on the project’s website includes examples of the technology altering real voices, which sound like this:

AntiFake: real audio clip

And like this:

AntiFake: scrambled audio clip

You retain all rights to your track; AntiFake won’t use it for any other purpose. But AntiFake won’t protect you if your voice is already widely available online, Zhang said. That’s because AI bots already have access to the voices of a huge range of people, from actors to public media journalists, and can produce a high-quality clone from just a few seconds of speech.

“All defenses have limitations, right?” Zhang said.

But Zhang said that when AntiFake launches in a few weeks, it will give people a proactive way to protect their speech.

Deepfake detection

Meanwhile, there are other solutions, such as deepfake detection.

Some deepfake detection technologies embed digital watermarks in video and audio so that users can identify whether content was produced by AI. Examples include Google’s SynthID and Meta’s Stable Signature. Others, developed by companies like Pindrop and Veridas, can tell whether something is fake by examining tiny details, such as how the sounds of individual words sync with a speaker’s mouth.
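Proprietary systems like SynthID don’t publish their internals, but the basic idea of a digital watermark can be illustrated with a classic spread-spectrum toy: mix a secret, keyed noise pattern into the audio at low amplitude, then detect it later by correlating against the same key. Here is a minimal sketch; every name and constant is illustrative, not taken from any real product:

```python
# Toy spread-spectrum audio watermark. This is NOT how SynthID or
# Stable Signature work internally; it only illustrates the concept of
# an inaudible, machine-detectable mark embedded in a signal.
import numpy as np

SEED, STRENGTH = 1234, 0.01      # shared secret key and embed amplitude

def embed(audio: np.ndarray) -> np.ndarray:
    pattern = np.random.default_rng(SEED).standard_normal(audio.size)
    return audio + STRENGTH * pattern          # faint additive mark

def detect(audio: np.ndarray, threshold: float = 0.5) -> bool:
    pattern = np.random.default_rng(SEED).standard_normal(audio.size)
    # Normalized correlation: near 1 for marked clips, near 0 otherwise.
    score = audio @ pattern / (STRENGTH * pattern @ pattern)
    return score > threshold

clip = 0.1 * np.random.default_rng(0).standard_normal(16000)  # placeholder
print(detect(embed(clip)), detect(clip))        # True False (w.h.p.)
```

A single correlation score like this is fragile to compression and editing; production watermarks spread the mark across time-frequency features precisely so it survives such transformations.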

“There are certain things that humans say that are difficult for machines to represent,” said Pindrop founder and CEO Vijay Balasubramanyan.

But Siwei Lyu, a computer science professor at the University at Buffalo who studies the security of artificial intelligence systems, said the problem with deepfake detection is that it only works on content that has already been published. Sometimes unauthorized videos can remain online for days before they’re flagged as AI-generated deepfakes.

“Even if the gap between this thing showing up on social media and being identified as AI-generated is just a few minutes, it could be damaging,” Lyu said.

A need for balance

“I think this is just the next step in how we protect this technology from misuse or abuse,” said Rupal Patel, a professor of applied artificial intelligence at Northeastern University and a vice president at the AI company Veritone. “I just hope that with this protection, we don’t end up throwing the baby out with the bathwater.”

Patel believes it’s important to remember that generative AI can do amazing things, including helping people who have lost their voices speak again. Actor Val Kilmer, for example, has relied on synthetic voices since losing his real voice to throat cancer.


Developers need a large number of high-quality recordings to produce those results, and they wouldn’t have them if all such use were restricted, Patel said.

“I think it’s a balance,” Patel said.

Consent is key

Consent is key when it comes to preventing deepfake abuse.

In October, members of the U.S. Senate announced they were discussing a new bipartisan bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023 (the NO FAKES Act of 2023), which would hold creators of deepfakes liable if they use someone’s likeness without authorization.

“This bill would provide uniformity across federal law, as right-of-publicity protections currently vary from state to state,” said Yael Weitz, an attorney at the New York art law firm Kaye Spiegler.

Currently, only about half of U.S. states have “right of publicity” laws, which give individuals the exclusive right to license the use of their identity for commercial promotion, and those laws offer varying degrees of protection. A federal law, however, may be years away.

This story was edited by Jennifer Vanasco. Audio produced by Isabella Gomez Sarmiento.




