Scientists have found a way to detect AI face-swapping videos

Fake videos made with Deepfake face-swapping technology can be convincing enough to pass as real. Recently, however, researchers used artificial intelligence to analyze how people in these videos blink, a method that can effectively expose fakes. As artificial intelligence develops rapidly, many people have used the so-called Deepfake technology to create fake videos, and this face-swapping technique has raised serious concerns in the industry.

Deepfake, as it is known, is the latest, and perhaps the most disturbing, manifestation of ever-evolving digital disinformation. Image manipulation has long existed, and methods for tampering with audio keep improving. Until recently, editing video was laborious, demanding considerable professional skill and patience, but advances in machine learning have steadily accelerated the process.

Since then, similar fake videos and related software have sprung up across the Internet. Although some are relatively harmless, this face-swapping tool clearly has harmful uses. It is easy to imagine a well-made fake video heightening tensions, sparking riots, or aggravating crime. Trust between people may be eroded, and there is widespread concern that the technology is developing faster than policy can keep up.

Fortunately, the scientific community is tackling the problem. A team led by Siwei Lyu at the University at Albany, State University of New York, discovered a flaw in these fake videos. The DeepFake algorithm generates video from the images it is fed. Although relatively accurate, the artificial intelligence cannot perfectly reproduce all the physiological signals humans naturally produce. Lyu and his team focused on one in particular: blinking. Humans typically blink every two to three seconds, but because people in portrait photos rarely have their eyes closed, algorithms trained on such photos generate faces that rarely blink.
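To make the arithmetic concrete, here is a hypothetical Python sketch of the blink-rate reasoning above: given a human baseline of roughly one blink every 2.5 seconds, a clip showing far fewer blinks than physiology predicts looks suspect. The function names and the 25% threshold are illustrative assumptions, not part of Lyu's published method.

```python
# Hypothetical blink-rate sanity check; count_blinks-style detection is
# assumed to happen elsewhere. Baseline and threshold are assumptions.

def expected_blinks(duration_s: float, interval_s: float = 2.5) -> float:
    """Blinks a real person would produce in a clip of this length."""
    return duration_s / interval_s

def looks_suspicious(blinks_detected: int, duration_s: float,
                     threshold: float = 0.25) -> bool:
    """True if the clip shows far fewer blinks than physiology predicts."""
    return blinks_detected < threshold * expected_blinks(duration_s)

# Example: a 60-second clip should contain ~24 blinks; observing only 2
# is well under a quarter of that, so the clip would be flagged.
print(looks_suspicious(blinks_detected=2, duration_s=60.0))  # True
```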

Lyu and his team therefore designed an artificial intelligence algorithm to detect the absence of blinking in fake videos. Their algorithm, a combination of two neural networks, first detects the faces in a video, then aligns the detected faces across consecutive frames so that the eye region of each one can be analyzed. One network determines, frame by frame, whether the eyes are closed; the other acts as a memory system, tracking changes from frame to frame to determine whether blinks occur over time.
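Below is a minimal PyTorch sketch of the two-network design just described: a convolutional network scores each aligned eye crop, and a recurrent (LSTM) network provides the frame-to-frame memory. The layer sizes, the 24x24 crop size, and all names are illustrative assumptions rather than the team's actual architecture.

```python
# Sketch only: per-frame CNN over eye crops feeding an LSTM over time.
import torch
import torch.nn as nn

class BlinkDetector(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 32):
        super().__init__()
        # CNN: embeds each cropped, aligned eye region into a feature vector
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, feat_dim), nn.ReLU(),
        )
        # LSTM: the "memory" part, tracking eye state across frames
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # per-frame eyes-closed logit

    def forward(self, eye_crops: torch.Tensor) -> torch.Tensor:
        # eye_crops: (batch, frames, 1, 24, 24) grayscale eye regions
        b, t = eye_crops.shape[:2]
        feats = self.cnn(eye_crops.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)  # (batch, frames) logits

# Usage: one aligned face track, 8 frames of 24x24 eye crops
model = BlinkDetector()
logits = model(torch.randn(1, 8, 1, 24, 24))
print(logits.shape)  # torch.Size([1, 8])
```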

First, they trained the artificial intelligence on a dataset of images labeled as eyes open or eyes closed. Then, to test the training result, they generated their own set of DeepFake videos, even applying some post-processing to make the fakes look more natural.
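Here is a hedged sketch of the training step this implies, reusing the BlinkDetector class from the sketch above: frames labeled eyes open (0) or eyes closed (1) supervise the per-frame logits with binary cross-entropy. The random tensors stand in for a real labeled dataset and are purely an assumption.

```python
# Training sketch; synthetic tensors substitute for real labeled eye crops.
import torch
import torch.nn as nn

model = BlinkDetector()  # defined in the sketch above
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    eye_crops = torch.randn(4, 8, 1, 24, 24)      # 4 clips, 8 frames each
    labels = torch.randint(0, 2, (4, 8)).float()  # 1 = eyes closed
    optimizer.zero_grad()
    loss = criterion(model(eye_crops), labels)
    loss.backward()
    optimizer.step()
```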

The results are impressive. According to Lyu, their artificial intelligence identified every one of the fake videos.

Lyu explained that manually adding blinks during the post-processing of a fake video is not a huge challenge, and some fakes, including ones published on BuzzFeed, do include blinking. Nevertheless, the algorithm they developed can at least help slow the spread of fake videos. "We are forming the first line of defense," Lyu said. "In the long run, this is really an ongoing battle between making fake videos and detecting them."

This research is part of a broader effort. It is sponsored by the Defense Advanced Research Projects Agency (DARPA) under its media forensics program, which runs from 2016 to 2020. The program's goal is to develop a set of tools for checking the authenticity and accuracy of digital information such as audio and video.

"We want to assure the public that there are technologies that can combat this kind of fake media and fake news," Lyu said.

Lev Manovich, a professor of computer science at the City University of New York, sees this as an example of growing competition between artificial intelligences. "We know very well that computational data analysis can often detect patterns humans cannot," he explained, "but what about patterns generated by another artificial intelligence? We will see 'wars' between artificial intelligences in the future. Will they play out at a level of detail we never notice?"

Currently, Lyu's team is studying how to extend the technique to handle subtler cues such as blink frequency and duration. The longer-term goal is to detect a range of natural physiological signals, including breathing. "We are working very hard on this problem," Lyu said.

Of course, the double-edged sword of public research is that once forgers read and understand how their fakes were detected, they can adjust their algorithms accordingly. "In this sense, they have the upper hand," Lyu said. "It's hard to say which side will win in the end."
