

Facial recognition makes sense as a method for your computer to recognize you. After all, humans already use a powerful version of it to tell each other apart. But people can be fooled (disguises! twins!), so it’s no surprise that even as computer vision evolves, new attacks will trick facial recognition systems, too. Now researchers have demonstrated a particularly disturbing new method of stealing a face: one that’s based on 3-D rendering and some light Internet stalking.

Earlier this month at the Usenix security conference, security and computer vision specialists from the University of North Carolina presented a system that uses digital 3-D facial models, based on publicly available photos and displayed with mobile virtual reality technology, to defeat facial recognition systems. A VR-style face, rendered in three dimensions, gives the motion and depth cues that a security system is generally checking for. The researchers used a VR system shown on a smartphone’s screen for its accessibility and portability.

Their attack, which successfully spoofed four of the five systems they tried, is a reminder of the downside to authenticating your identity with biometrics. By and large your bodily features remain constant, so if your biometric data is compromised or publicly available, it’s at risk of being recorded and exploited. Faces plastered across the web on social media are especially vulnerable; look no further than the wealth of facial biometric data literally called Facebook.
[Image: Spotting poor, mediocre, and high-quality images of one study participant’s face using publicly available Facebook photos.]

Other groups have done similar research into defeating facial recognition systems, but unlike in previous studies, the UNC test models weren’t developed from photos the researchers took or ones that the study participants provided.
The researchers instead went about collecting images of the 20 volunteers the way any Google stalker might: through image search engines, professional photos, and publicly available assets on social networks like Facebook, LinkedIn, and Google+.

They found anywhere from three to 27 photos of each volunteer. “We could leverage online pictures of them, which I think is kind of terrifying,” says True Price, a study author who works on computer vision at UNC. “You can’t always control your online presence or your online image.” Price points out that many of the study participants are computer science researchers themselves, and some make an active effort to protect their privacy online. Still, the group was able to find at least three photos of each of them.

Facial authentication spoofing attacks can use 2-D photos, videos, or, in this case, 3-D face replicas (virtual reality renders, 3-D printed masks) to trick a system. For the UNC researchers, the most challenging part of executing their 3-D replica attack was working with the limited image resources they could find for each person online. Available photos were often low resolution and didn’t always depict people’s full faces.

To create digital replicas, the group used the photos to identify “landmarks” of each person’s face, fit these to a 3-D render, and then used the best-quality photo (factoring in things like resolution, lighting, and pose) to combine data about the texture of the face with the 3-D shape. The system also needed to extrapolate realistic texture for parts of the face that weren’t visible in the original photo. “Obtaining an accurately shaped face we found was not terribly difficult, but then retexturing the faces to look like the victims’ was a little trickier, and we were trying [to] solve problems with different illuminations,” Price says. If a face model didn’t succeed at fooling a system, the researchers would try using texture data from a different photo.
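To make the landmark step concrete, here is a minimal sketch of how publicly available tools can locate facial “landmarks” in a single photo. This is not the UNC team’s code: it simply uses the off-the-shelf dlib 68-point predictor, and the model and image file names are placeholder assumptions. Fitting the resulting 2-D points to a 3-D face model and transferring photo texture onto it would come after this step.

```python
# Minimal landmark-detection sketch using dlib's 68-point predictor.
# Assumptions: the predictor .dat file has been downloaded from dlib.net,
# and "victim_photo.jpg" is any reasonably frontal photo of the subject.
import cv2
import dlib

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local path
IMAGE_PATH = "victim_photo.jpg"                            # assumed local path

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

image = cv2.imread(IMAGE_PATH)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Upsample once to help with the low-resolution photos the article mentions.
faces = detector(gray, 1)
for face in faces:
    shape = predictor(gray, face)
    # 68 (x, y) points covering the jawline, brows, nose, eyes, and mouth.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    print(f"Found {len(landmarks)} landmarks, e.g. nose tip near {landmarks[30]}")
```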

The last step for each face render was correcting the eyes so they appeared to look directly into the camera for authentication. At this point, the faces were ready to be animated as needed for “liveness clues” like blinking, smiling, and raising eyebrows, basically authentication system checks intended to confirm that a face is alive.

[Image: Working on facial rendering to produce realistic texture. Department of Computer Science/UNC Chapel Hill]
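For a sense of what one of those liveness clues looks like to a machine, below is a back-of-the-envelope Python sketch of blink detection using the eye aspect ratio (EAR) of Soukupová and Čech. It illustrates the general technique, not any specific product the UNC team tested; the threshold and frame count are illustrative assumptions. A VR-rendered face defeats this kind of check precisely because it can actually animate a blink.

```python
# Blink detection via eye aspect ratio (EAR): the eye's height-to-width ratio
# drops sharply while the eye is closed. Thresholds below are illustrative.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmark points around one eye, in 68-point order."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def saw_blink(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Return True if EAR stays below the threshold for a few consecutive frames."""
    closed = 0
    for ear in ear_per_frame:
        closed = closed + 1 if ear < closed_thresh else 0
        if closed >= min_closed_frames:
            return True
    return False

# Example: EAR values over time; the dip below 0.2 is what a blink looks like.
print(saw_blink([0.31, 0.30, 0.28, 0.12, 0.10, 0.27, 0.32]))  # True
```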
In the “cat-and-mouse game” of face authenticators and attacks against them, there are definitely ways systems can improve to defend against these attacks. One example is scanning faces for human infrared signals, which wouldn’t be reproduced in a VR system.