The FBI is launching a new Big Brother-like program called Next Generation Identification. The bureau will put its $1 billion budget toward an algorithm that allows public officials to identify anyone they spot on a security camera by face.
There is an ever-growing number of closed-circuit television (CCTV) systems in the United States. The New York subway system alone has 3,700 cameras online, an estimated 507 of which send live feeds to a command center from Grand Central Station, Penn Station and Times Square. While the situation is not as extreme as in the United Kingdom, where there is an average of one CCTV camera for every 14 citizens, it is already nearly impossible to walk around a large American city without being caught on a CCTV camera.
Now the FBI wants to step it up a notch. Some states have already begun uploading photos to the system, which is expected to go nationwide by 2014. The program aims to compare mug shots held by law enforcement with camera footage, so that officers can pick criminals out of a crowd. Though the bureau says the pilot program uses only pictures of known criminals, it is unclear whether civilians will be added to the database once the system is rolled out.
Using the system, the FBI will be able to apply facial recognition to any image source, using a more sophisticated version of technology already employed by Facebook and iPhoto. The system will also be able to track tattoos.
Ideally, the FBI's use of such systems will allow it to identify criminals more accurately and efficiently. But privacy advocates are concerned that people without criminal records, who end up next to a person of interest on camera, will be added to databases or subject to increased surveillance.
New Scientist reports that the technology could be accurate when used with photos taken in controlled environments, such as passport photos or mug shots. Tests in 2010 found that the best algorithms could pick someone out of a pool of 1.6 million mug shots with 92 percent accuracy. Some algorithms can even identify a person who is not looking at the camera. At Marios Savvides's lab at Carnegie Mellon University, researchers analyzed the features of front and side views of mug shots, created a 3-D model of a person's face, and rotated it by as much as 70 degrees to match the position of the person's face in camera footage with a high degree of accuracy.
The program is being developed in conjunction with Lockheed Martin Transportation and Security Systems and IBM. The pilot program was launched in February.
Published by Medicaldaily.com