DeepFace: The Crazy-Smart Face Recognition Software by Facebook
Facebook seems to be trying to bridge the gap between humans and computers with facial recognition software, since the company has discussed its development of a technology that recognizes whether two different images show the same face, a skill that approaches a human's ability to make the same distinction.
This technology has been dubbed DeepFace, and Facebook claims it is 97.25 percent accurate, cutting the error rate of current technology by more than 25 percent. Facebook has commented that DeepFace is approaching human-level performance, which has been measured at 97.5 percent on a standardized test.
“In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify,” Facebook said in a research paper, released last week. “We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.”
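The four-stage pipeline the paper describes can be sketched as a chain of functions. This is a minimal, illustrative stand-in, not Facebook's actual implementation: `detect` simply takes the whole image as the face box, `align` is a fixed-size crop-and-subsample in place of 3D frontalization, `represent` is a normalized pixel vector in place of the nine-layer deep network, and `classify` is a cosine-similarity threshold. All function names and the threshold value are assumptions made up for this example.

```python
import numpy as np

def detect(image):
    # Stand-in detector: treat the whole image as the face bounding box.
    h, w = image.shape
    return (0, 0, w, h)

def align(image, box, size=8):
    # Stand-in for 3D-model-based alignment: crop the box and subsample
    # it to a fixed size-by-size grid so every face is comparable.
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size).astype(int)
    return face[np.ix_(rows, cols)]

def represent(aligned):
    # Stand-in for the nine-layer network: a unit-norm pixel vector.
    v = aligned.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def same_face(img_a, img_b, threshold=0.9):
    # Stand-in classifier: cosine similarity between the two descriptors.
    ra = represent(align(img_a, detect(img_a)))
    rb = represent(align(img_b, detect(img_b)))
    return float(ra @ rb) >= threshold

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a + rng.normal(0, 0.01, a.shape)  # slightly perturbed copy of a
c = rng.random((32, 32))              # unrelated random image
print(same_face(a, b))  # True: near-duplicates have cosine close to 1
print(same_face(a, c))  # False: unrelated images fall below the threshold
```

The point of the sketch is the shape of the pipeline, not the components: in the real system each stage is replaced by a far stronger module, but the data still flows detect => align => represent => classify.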
The paper goes on to say that DeepFace maps out a 3D model of an “average” front-facing person, and then creates a flat model, which is filtered by color to characterize specific facial features. To increase the accuracy of the technology, Facebook trained on over 4.4 million images of faces from more than 4,030 different people on its social network.
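The alignment step the paper mentions applies affine transformations that map detected facial landmarks onto canonical frontal positions. As a rough illustration of one such transform, the snippet below fits a single 2D affine map by least squares; the real system applies such maps piecewise over a triangulated 3D face model, and the landmark coordinates here are invented purely for the example.

```python
import numpy as np

# Hypothetical landmark positions (x, y): two eyes, nose tip, mouth center.
detected = np.array([[30., 40.], [70., 42.], [50., 65.], [50., 85.]])
# Hypothetical canonical frontal positions for the same landmarks.
canonical = np.array([[35., 40.], [65., 40.], [50., 60.], [50., 80.]])

# An affine map sends [x, y, 1] to new coordinates; solve for the 3x2
# matrix A minimizing ||X @ A - canonical|| in the least-squares sense.
X = np.hstack([detected, np.ones((4, 1))])
A, *_ = np.linalg.lstsq(X, canonical, rcond=None)

warped = X @ A
err = np.abs(warped - canonical).max()
print("max landmark error after warping:", round(err, 3))
```

With four landmarks and six affine parameters the fit is overdetermined, so a small residual remains; warping piecewise, triangle by triangle, is what lets the alignment follow the non-rigid shape of a face.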
DeepFace is still in the research stage; Facebook has released the paper to receive feedback from the research community ahead of presenting it at the IEEE Conference on Computer Vision and Pattern Recognition in June.
Here is an excerpt from the research paper, released by Facebook:
An ideal face classifier would recognize face in accuracy that is only matched by humans. The underlying face descriptor would need to be invariant to pose, illumination, expression, and image quality. It should also be general, in the sense that it could be applied to various populations with little modifications, if any at all. In addition, short descriptors are preferable, and if possible, sparse features. Certainly, rapid computation time is also a concern. We believe that this work, which departs from the recent trend of using more features and employing a more powerful metric learning technique, has addressed this challenge, closing the vast majority of this performance gap.