Why facial recognition software isn’t ready for prime time

In the wake of the manhunt for the Boston bombers, opinions are divided on whether facial recognition technology helped or hindered the search. Headlines like “Why Facial Recognition Failed” (Salon.com) are echoed in a statement from the Boston police commissioner, who told The Washington Post that the technology “came up empty.”

The opposite interpretation can be found at Technorati (“Facial Recognition Technology Helps Identify Boston Marathon Bombing Suspects”). So who is right, and were today’s facial recognition techniques up to the task?

The high-tech video intelligence methods hyped in the media during the manhunt may be available to investigators, but that doesn’t mean they’re effective or actually used by law enforcement. Neither San Francisco nor San Jose police use facial recognition, for example, and an FBI biometric system planned for introduction in California and eight other states next year apparently makes only exploratory use of face recognition, relying instead mostly on the trusty fingerprint.

Jim Wayman, director of the National Biometric Test Center at San Jose State University, said automated facial recognition didn’t fail in the Boston case: it simply wasn’t used. Contrary to reports like that of San Francisco’s ABC7, Wayman said video intelligence company 3VR’s products were not used to find the Boston bombing suspects.

3VR did not respond to our request for comment. The FBI also has no large-scale automated face recognition system, according to Wayman.

The essential problem with face recognition is getting an algorithm to correctly match degraded cell phone or surveillance images with well-lit, head-on photos of faces. While this is effortless for the human brain (unless you have prosopagnosia), hair, hats, sunglasses, and facial expressions can all throw off automated recognition methods. Of course, before you can even get to the matching stage, you have to identify a suspect and hope their face appears in a driver’s license, mugshot, or other database.
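To make that matching step concrete, here is a minimal sketch built on the open-source face_recognition library, chosen purely for illustration; the file names are hypothetical, and nothing below reflects the tools investigators actually used. It encodes a small gallery of well-lit reference photos and measures how far a degraded surveillance frame falls from each.

```python
# A toy sketch of the matching problem, using the open-source
# face_recognition library (dlib-based) purely for illustration;
# the file names are hypothetical, and this is not the pipeline
# any agency used in the Boston case.
import face_recognition

# Gallery: well-lit, head-on reference photos (mugshots, license photos).
gallery = {}
for path in ["mugshot_001.jpg", "mugshot_002.jpg"]:   # hypothetical files
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                                     # detection can fail outright
        gallery[path] = encodings[0]

# Probe: a degraded surveillance or cell phone frame.
probe_image = face_recognition.load_image_file("surveillance_frame.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

if not probe_encodings:
    print("No face detected in the probe: low resolution, occlusion, or an "
          "off-angle pose can defeat detection before matching even starts.")
else:
    # The library's conventional same-person threshold is a distance of ~0.6;
    # blur, hats, and sunglasses push genuine matches above it.
    for path, encoding in gallery.items():
        dist = face_recognition.face_distance([encoding], probe_encodings[0])[0]
        verdict = "possible match" if dist < 0.6 else "no match"
        print(f"{path}: distance {dist:.2f} -> {verdict}")
```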

[Image: face recognition with a Hopfield network]

Where video surveillance more broadly did prove useful in the Boston case was in tracking the suspects’ movements. That still required considerable human effort: the Post reports that one agent watched the same video clip 400 times.

The next development step for facial recognition, both academically and commercially, is 3D reconstruction, which uses shadows and facial landmarks to create best-guess models of faces. Face recognition challenges organized by the National Institute of Standards and Technology have spurred improvements at a Moore’s-law-like pace, but the nuances that trip up computers, such as image alignment, occlusion, and face angle, remain a problem.

Better, cheaper, and more ubiquitous cameras should address the issues of grainy and blurry images; an international standard requires a resolution of 90 pixels between the eyes for facial recognition algorithms to work, says Wayman, whereas the images released of the Boston suspects had only 12 to 20. Even then, a database to compare against is still required; being able to identify and track a face across multiple video streams would be far more useful.
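As a rough illustration of that resolution guideline, the sketch below (again leaning on the face_recognition library, with a hypothetical file name) estimates the pixel distance between a detected face’s eyes and compares it with the 90-pixel figure Wayman cites.

```python
# A rough check of the inter-eye resolution guideline Wayman cites
# (about 90 pixels between the eyes). The landmark detector comes from the
# face_recognition library and the file name is hypothetical; the 90-pixel
# threshold is the article's figure, not one baked into any product.
import math
import face_recognition

MIN_INTER_EYE_PIXELS = 90   # guideline cited in the article

def centroid(points):
    """Average position of a set of (x, y) landmark points."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

image = face_recognition.load_image_file("surveillance_frame.jpg")  # hypothetical
for landmarks in face_recognition.face_landmarks(image):
    lx, ly = centroid(landmarks["left_eye"])
    rx, ry = centroid(landmarks["right_eye"])
    inter_eye = math.hypot(rx - lx, ry - ly)

    verdict = "enough detail" if inter_eye >= MIN_INTER_EYE_PIXELS else "too coarse"
    print(f"Inter-eye distance: {inter_eye:.0f} px -> {verdict} for recognition")
    # The released Boston images were reportedly in the 12-20 pixel range,
    # far below the 90-pixel guideline.
```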

And even when facial recognition technologies improve and mature, the question remains: should they be ready for prime time, in a way reminiscent of Minority Report? Wayman said that systems currently used to compare live travelers with their passport photos at airports still have a false negative rate of about 15 percent. If performance is that fickle in such controlled conditions, there is clearly still a lot of work to do before these systems can automatically, and accurately, pick out faces of interest from surveillance footage.
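To put that figure in perspective, here is a back-of-envelope calculation; the 15 percent false negative rate comes from the article, while the head count is an invented example.

```python
# Back-of-envelope arithmetic only: the 15 percent false negative rate is the
# article's figure for controlled airport comparisons; the head count below
# is an invented example.
FALSE_NEGATIVE_RATE = 0.15
genuine_matches = 200   # hypothetical travelers who really match their passport photo

missed = genuine_matches * FALSE_NEGATIVE_RATE
print(f"Of {genuine_matches} genuine matches, about {missed:.0f} would be missed")
# Roughly 1 in 7 slips through, and that is under ideal lighting and frontal
# pose, not the conditions grainy surveillance footage offers.
```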

Image via Wikimedia Commons user Mrazvan22
