Facial Recognition Gets Better and New Uses Emerge
In New York, a sanitation worker used a driver’s license to impersonate his dead twin brother and draw more than $500,000 in disability benefits. In Iowa, an escaped bank robber was nailed when he applied for a driver’s license under a false identity. A New Jersey man was caught applying for two fraudulent commercial driver’s licenses, even though his license had been suspended 64 times.
In each of these cases, reports Stateline, a service of the Pew Charitable Trusts, authorities used facial recognition technology to trip up would-be cheaters at the local department of motor vehicles office.
Facial recognition is here to stay, and the uses and potential uses are burgeoning. Motor vehicle agencies are using it to stop people from getting driver’s licenses under false names. Intelligence agencies and the military are interested in the technology to identify threats in airports, border crossings, war zones, and even crowds. Welfare agencies are looking at using the technology to guard against benefit fraud. Vehicle makers may someday use it in conjunction with driver-safety systems.
The technology is still far from perfect: A partly covered face, bad lighting, or an off-angle for the photo can all undermine accuracy. Privacy concerns, meanwhile, have some groups challenging certain uses as invasive. Citizens may accept government-owned sensors in airports as a condition of travel, just as they endure body screening and baggage search. But it may be a whole different story when commercial enterprise gets involved. Case in point: Consumers are balking at Facebook’s use of the technology to automatically tag personal photos; the company is facing a number of lawsuits over that application.
That’s why products are now emerging that can obscure appearance to combat recognition technology, and that’s why the Electronic Privacy Information Center (EPIC) called for a suspension of all uses of the technology until the Department of Commerce develops a framework for its use, including adequate safeguards. The American Civil Liberties Union (ACLU) has proposed a framework, but nine advocacy groups, including the ACLU, pulled out of talks with Commerce on the subject in June.
How It Works
Every human face has a number of key distinguishing characteristics, such as the distance between the eyes, the width of the nose, the shape of the cheekbones, and the length of the jawline. Software algorithms, advanced digital cameras, and the powerful number-crunching capacity of cloud computing can extract those attributes and compare them with vast databases of known images to rapidly match two faces with a high degree of accuracy.
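The core idea can be sketched in a few lines of code. This is a deliberately toy illustration, not any vendor's actual algorithm: it assumes each face has already been reduced to a short vector of made-up measurements (eye distance, nose width, and so on), then finds the closest enrolled face by simple Euclidean distance. Real systems use far larger learned embeddings, but the match-against-a-database step works the same way in principle.

```python
import math

def distance(a, b):
    """Euclidean distance between two facial feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=1.0):
    """Return the closest enrolled identity, or None if nothing is close enough.

    `database` maps identity -> feature vector; `threshold` is an invented
    cutoff for this sketch, not a standard value.
    """
    name = min(database, key=lambda n: distance(probe, database[n]))
    return name if distance(probe, database[name]) <= threshold else None

# Hypothetical enrollment database: eye distance, nose width,
# cheekbone measure, jawline length (units are arbitrary here).
enrolled = {
    "alice": [62.0, 34.5, 18.2, 120.1],
    "bob":   [58.3, 36.1, 21.0, 115.4],
}

print(best_match([62.1, 34.4, 18.0, 120.0], enrolled))  # → alice
```

The threshold is what separates "this is probably the same person" from "no confident match," and tuning it is where the false-positive and false-negative trade-offs discussed below come in.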
Advances have been rapid. A 2002 test at Boston’s Logan Airport, where 10 of the 9/11 terrorists boarded their deadly flights, failed to detect test subjects 96 times, while correctly noting 153 subjects. The National Institute of Standards and Technology (NIST) says it has seen a 30 percent improvement from 2010 to 2013. Google claims its facial recognition search tools are nearly 100 percent accurate.
New computing tools are helping searchers to dig deeper and faster into databases. The computing power unleashed by cloud computing is a driving force in the improvement of facial recognition, according to researchers at the University of Rochester and the Rochester Institute of Technology.
In addition, scientists are getting more creative in how they devise their algorithms. One model might look at facial characteristics – the geography of eyes, nose, mouth. Another doesn’t see the face as an object, but more like an abstract mathematical puzzle, said Mario Savvides, director of Carnegie Mellon’s CyLab Biometrics Center.
A forensic approach, on the other hand, might focus on distinguishing characteristics such as moles, scars, and tattoos. “It could be any of these people, but it can’t be this person, because he has a mole on his left cheek, or it can’t be this person, because his eyebrows are much thicker,” Savvides said. “Right now we are kind of mixing all these together. We will try to sort things based on eyebrows, then re-sort based on beard shape, then sort again on some other factor.”
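Savvides’s sort-and-re-sort idea can be illustrated with a small sketch: first apply hard exclusions (a mole the suspect lacks rules a candidate out entirely), then rank the survivors on successive soft features. All the names, attributes, and numbers below are invented for illustration; the re-sorting relies on the fact that Python’s sort is stable, so earlier rankings break ties in later ones.

```python
# Invented candidate records for illustration only.
candidates = [
    {"name": "A", "mole_left_cheek": True,  "eyebrow_thickness": 0.8, "beard": 0.10},
    {"name": "B", "mole_left_cheek": False, "eyebrow_thickness": 0.4, "beard": 0.70},
    {"name": "C", "mole_left_cheek": False, "eyebrow_thickness": 0.5, "beard": 0.55},
]
suspect = {"mole_left_cheek": False, "eyebrow_thickness": 0.45, "beard": 0.65}

# Hard exclusion: "it can't be this person, because he has a mole on his left cheek."
pool = [c for c in candidates if c["mole_left_cheek"] == suspect["mole_left_cheek"]]

# Soft ranking: "sort things based on eyebrows, then re-sort based on beard shape."
# Python's stable sort keeps the earlier ordering as a tiebreaker for the later one.
for feature in ("eyebrow_thickness", "beard"):
    pool.sort(key=lambda c: abs(c[feature] - suspect[feature]))

print([c["name"] for c in pool])  # → ['B', 'C']; 'A' was excluded outright
```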
The two most common facial recognition scenarios are driver’s license fraud and immigration control. The first matches an applicant against an existing database. The second may compare a traveler to a database of potential terrorists or known criminals, and may also be used to confirm that the traveler is in fact the person whose picture is on the passport.
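In biometrics terms these are two different operations: verification (a 1:1 check against a single claimed identity, as with a passport photo) and identification (a 1:N search across a watch list or license database). A minimal sketch, using a made-up similarity score between 0 and 1 and an invented threshold:

```python
def verify(similarity, threshold=0.8):
    """1:1 check: does the traveler match the passport photo well enough?"""
    return similarity >= threshold

def identify(scores, threshold=0.8):
    """1:N search: which database entries, if any, match the probe face?

    `scores` maps entry name -> similarity to the probe.
    """
    return [name for name, s in scores.items() if s >= threshold]

print(verify(0.91))                                          # → True
print(identify({"entry-1": 0.55, "entry-2": 0.86}))          # → ['entry-2']
```

The practical difference matters: a 1:N search multiplies the chance of a false positive by the size of the database, which is why watch-list screening is harder to get right than passport verification.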
But there are more. The American Bar Association reports on efforts in St. Louis to keep high-risk threats out of the courthouse. Ontario uses facial scans to flag self-identified problem gamblers when they enter a gaming center.
San Diego has used facial recognition at crime scenes to match possible suspects with known offenders. Maryland has a database of over 3 million known offenders and compares their faces with those of suspects.
Even animal control can get in on the action. In Sacramento, shelters are using a facial recognition application in the hopes of identifying and returning lost pooches to their proper homes.
Patrick Grother has been asking “What’s next?” as biometric testing project leader in the NIST Information Technology Laboratory. One answer cuts to the core of a societal tragedy: child exploitation. “It’s a gut-wrenching problem, but when it features the faces of children in photographs, then we can ask: Can they be identified?” he said.
Perhaps a child could be matched against a database of missing children. In another scenario, Grother envisions a child identified in two different exploitative images. This would give investigators something to work from, perhaps something common in the background or some other element tying the images together.
Identity verification is another potential use. Facial features could be matched to government-issued identification to ensure benefits only go to eligible individuals. “That is a classic application of biometrics,” Grother said. “Fraud prevention.”
On the other end of the spectrum, the Digital Democracy Program is using facial and voice recognition to identify lawmakers and lobbyists in online video. The tool then uses this identifying information to access video clips and other information from the same source, giving citizens a well-rounded picture of a lawmaker’s activities.
None of this is as easy as it sounds. Today’s technology is pretty accurate, but only “as long as you have a person who would like to be recognized, who is happily smiling at the camera – then the quality of the algorithms is very high,” said J. Ross Beveridge, a professor of computer science at Colorado State University.
But when the individual is turned away from the lens and his or her face is obscured, or if the image being compared was taken with a phone or by a parking lot surveillance camera at night, the process gets much harder.
Said Grother: “Success depends on the exact engineering you do, especially in terms of manipulating the environment so you can get a good photograph.”
For those situations when that’s not possible, researchers are trying to see how much useful information they can tease out of bad images. Savvides refers to it as UROPA: unconstrained resolution, occlusion (or masking), a variable pose and the effects of aging. Each of those factors makes recognition harder. But within five years, Savvides said, improved algorithms will be able to overcome those problems and accurately match faces to photos, despite the challenges.
Which is why the independent National Institute of Informatics (NII) in Japan is promoting a “Privacy Visor,” which it claims will confuse and defeat facial-identification systems by disrupting light and dark patterns around the eyes and nose, making it harder to generate the measurements needed for comparisons.
Consumers may be interested in such technology as a way to fight back against the sense that they are being tracked wherever they go. Pushback against Facebook may be just the beginning. If retailers track individual customers through their stores using cameras – creating a record of physical movement that parallels digital footprints left on retail websites – will consumers go along? If cities mount cameras on street corners so police can use facial recognition to identify and track potential suspects, will the public be accepting?
Maybe, said Colorado State’s Beveridge, “If it makes things easier and more convenient.”
That’s already the case at state motor vehicle departments, where the technology can quickly identify fraud and only rarely turns up false positives, as it may with identical twins. At the DMV, like at airports, citizens have little expectation of privacy. And no one has much sympathy for criminals seeking licenses under false names.
Adam Stone writes on technology management, business, government and military topics. His recent work has appeared in USA Today, Federal Times, Public CIO, Government Executive, and many other publications.