Deepfake News, or “AI-Synthesized Content”: The Latest Challenge for the News Content Pipeline and Providers

By Melissa Pandika, Science Writer

A viral video of what appears to be Barack Obama provided a chilling glimpse into the growing power of fake news when it was released last April. It opens with Obama warning viewers about how America’s enemies “can make it look like anyone is saying anything at any point in time.” At about the halfway mark, a split screen reveals director Jordan Peele impersonating Obama, whose on-screen likeness has merely been mouthing Peele’s words. Peele’s production company created the video with Adobe After Effects, a motion graphics program, and FakeApp, an AI-powered face-swapping app that resynthesized Obama’s mouth to be consistent with the audio track of Peele’s voice.
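
The core idea behind face-swapping apps of this kind is commonly described as a shared-encoder autoencoder: one encoder learns a latent code for faces of both people, while a separate decoder per identity renders that code back into a specific face. The PyTorch sketch below is a minimal illustration of the concept; the layer sizes, 64x64 crops, and toy training step are simplifying assumptions for readability, not FakeApp’s actual code.

# Minimal sketch of the shared-encoder / dual-decoder autoencoder idea
# behind face-swapping apps. Layer sizes, image resolution, and the
# training step are illustrative assumptions, not the app's real code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent code; one per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained on faces of person A (e.g., the impersonator)
decoder_b = Decoder()  # trained on faces of person B (e.g., the target)

def training_step(faces_a, faces_b, loss_fn=nn.L1Loss()):
    """Each decoder learns to reconstruct its own person from the shared code."""
    loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)
    loss_b = loss_fn(decoder_b(encoder(faces_b)), faces_b)
    return loss_a + loss_b

def swap(face_a):
    """The trick: encode person A, decode with person B's decoder, so B's
    face is rendered with A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))

# Smoke test on random tensors standing in for preprocessed face crops.
fake_batch = torch.rand(4, 3, 64, 64)
print(training_step(fake_batch, fake_batch).item())
print(swap(fake_batch).shape)  # torch.Size([4, 3, 64, 64])

Because both identities pass through the same encoder, the latent code captures pose and expression while each decoder supplies person-specific appearance; that is why decoding person A’s code with person B’s decoder yields B’s face making A’s expressions.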

The video played before a packed auditorium at the University of California, Berkeley to kick off a panel session at the recent TechCrunch event on Robotics + AI. Moderated by TechCrunch writer Devin Coldewey, the panel brought together two leading experts in AI and synthetic imagery to discuss the reality we now face: Advances in AI technology are enabling the synthesis of incredibly convincing photos, videos, and audio tracks that threaten to make fake news even more deceptive. The panel also tackled the thorny questions of how to tamp down on fake content and how to even know what to trust anymore.

After Peele’s video played, Coldewey asked the panelists how they would classify it. Hany Farid, a computer science professor at Dartmouth College who researches digital forensics, human perception and image analysis, said the correct terminology is “AI-synthesized content.” (Media outlets, however, have run with the catchier “deepfake,” after Redditor deepfakes, who used the algorithm FakeApp is based on to swap out porn stars’ faces for those of celebrities.) “I don’t think it takes a stretch of the imagination to see the power of this type of fake because you can literally put words into now anybody’s mouth,” Farid said.

Panelist Alexei Efros, an electrical engineering and computer sciences professor at UC Berkeley and member of the Berkeley Artificial Intelligence Research Lab, pointed out that “AI-synthesized content” could also include the visual effects that film studios have begun creating with the help of AI technology. The key distinction is intent: a Hollywood film is openly presented as fiction, whereas something like a deepfake tries to pass off manipulated content as real.

But people have been creating fake content for years—so “why are we only talking about it now?” Coldewey asked.

Efros noted that indeed, the roots of this practice stretch back to photography’s beginnings, citing the example of how a photo of Abraham Lincoln’s head was attached to an engraving of John C. Calhoun’s body, yielding a composite image that hangs in many classrooms to this day. “I think the main difference is that now it’s starting to be much more democratized,” he said. “It’s not just the dictators, the CIA, the KGB and the special effects houses that could do it. Any kid with a computer is able to make something in Photoshop.” Plus, the deep learning methods used to create fake content are only getting better.

Not only can pretty much anyone create compelling fake content, they can also disseminate it widely and instantly over social media—and “a bunch of knuckleheads” are willing to consume, like and share it, Farid said. With the convergence of these factors, “you have, in many ways, the perfect storm.” And while misinformation isn’t a new phenomenon, “it is on extra strong steroids right now,” he added, citing Russian interference in the 2016 presidential election, as well as misinformation campaigns in the UK, Myanmar, the Philippines, Sri Lanka and India.

Farid believes the 2020 presidential election may very well be the first in which we will see widespread fake media. But he pointed to what he views as an even more sinister problem: If everyone can now create fake content, that means everyone has plausible deniability. “In a highly partisan space, if everybody can simply say the video, the image, the audio, the news story, is fake because it doesn’t conform with my worldview, we have a problem as a democracy,” he said.  “If we can’t agree on basic facts of what’s going on in the world, I think we’re in a lot of trouble.”

Efros forecast an ongoing arms race between fake content creators and those developing systems to detect such content. As detection systems improve, content creators will adapt, devising new methods to evade them. “You are fighting, but you’re never going to win,” he said. He went on to explain that the computer graphics technology used to create fakes is fundamentally a helpful tool that, in the right hands, allows for creative expression, though he believes its use needs to be restricted through legislation. Lawmakers need to start thinking about how to update current laws, such as copyright laws, to address this, he said.
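
On the detection side of that arms race, a first-cut system is often just a binary classifier over face crops pulled from video frames. The sketch below is a hypothetical, untrained PyTorch model showing the basic shape; real detectors use far deeper networks and large labeled datasets, and, as Efros warns, any fixed detector eventually becomes a training target for the next generation of fakes.

# Minimal sketch of the detection side of the arms race: a binary
# classifier over 64x64 face crops, labeled real (0) or synthesized (1).
# The architecture and preprocessing are illustrative assumptions;
# production detectors are far more elaborate.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),  # one logit: how likely the crop is fake
)

def detect(frames):
    """Returns per-frame fake probabilities for a batch of 64x64 crops."""
    with torch.no_grad():
        return torch.sigmoid(detector(frames)).squeeze(1)

# A video-level verdict might average frame scores; random tensors stand
# in for real preprocessed frames here.
frames = torch.rand(8, 3, 64, 64)
print(detect(frames).mean().item())  # untrained, so roughly 0.5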

Farid agreed, but emphasized the importance—and challenge—of maintaining a balanced approach. While the First Amendment may make it hard to restrict fake content, he thinks most people view the weaponization of deepfakes to influence elections as problematic. On the other hand, he also enjoys the humor and satire deepfakes can provide and believes the draft legislation to get a handle on them is “overreaching” in many parts of the world.

The solution to curbing fake content must be multidimensional, Farid said. First, social media platforms like Facebook, YouTube and Twitter need to get serious about it. “This isn’t fun and games anymore,” he said. “Your platforms have been weaponized, and you have a responsibility.” Meanwhile, journalists need to shoulder the huge burden of sorting out real from fake content, and consumers of digital content need to educate themselves and the next generation. “We’re going to have to take this more seriously and start understanding that the things that happen in the digital world don’t stop in the digital world.”

The panelists’ outlook wasn’t all bleak, though. Efros said Google and Facebook have reached out to him for help, and he pointed to DARPA’s MediFor program, which aims to develop tools to authenticate digital visual media. But again, he foresees an ongoing arms race between forensics and fake news. “I think the fake news will always be a slight bit ahead,” he said.
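
For a concrete taste of what a media-authentication check can look like, consider error level analysis, a classic forensic heuristic (not MediFor’s actual pipeline, which the panel did not detail): re-save a JPEG and difference it against the original, since spliced regions often recompress differently from the rest of the image. The Python sketch below uses Pillow; the file path in the usage comment is hypothetical, and ELA is well known enough that a careful forger can defeat it.

# One classic, simple image-forensics heuristic: error level analysis.
# Re-saving a JPEG and differencing it against the original highlights
# regions that recompress differently, which can betray a splice. This
# is an illustrative sketch only, not how DARPA MediFor works.
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    """Returns a difference image; brighter regions recompressed less cleanly."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so they are visible.
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

# Usage (hypothetical file path): error_level_analysis("suspect_photo.jpg").show()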

Creating forensic tools is a double-edged sword, Farid said. Making them public—as encouraged in the academic research world—arms fake content creators, who will respond by creating even more convincing content. He, too, envisions an arms race, which the forensics side will lose. But that may not necessarily be a bad thing, since the escalating arms race would take the content-creating technology out of the hands of the average Joe and concentrate it among a smaller and smaller group of experts—“which is still a threat, but I think it will become a more manageable threat that we can deal with.”

Learn more at TechCrunch Sessions: Robotics + AI.

Melissa Pandika writes about science and health. She holds a B.A. in molecular and cell biology from the University of California, Berkeley. She can be reached at mmpandika@gmail.com.



