Grants and Contracts Details
Description
There is now a pressing need for technology that reliably identifies specific individuals from the
general population by means of biometric identification. Technologies such as fingerprint matching and retina
scanning, while highly reliable indicators of identity, are expensive and cumbersome to deploy because they
require the target subject to cooperate with the scanning device. As an alternative, facial recognition
coupled with an automated, non-intrusive surveillance system offers considerable convenience for the target
subject, who is "scanned" without any direct participation. Existing face recognition techniques, however,
have not yet proved reliable in such environments, failing to cope with varying lighting conditions, head
poses, facial disguises, and changing facial features (hair, glasses).
To address the difficulties of 2-D image- and video-based face recognition, a handful of efforts
are underway to use 3-D facial scans. While results have been promising, the existing research relies on
high-resolution laser scanning equipment practical only in a kiosk-style interface. As an alternative, this
proposal focuses on integrating traditional surveillance video with depth-sensing cameras based upon
structured-light illumination, stereo vision, and time-of-flight as a means of (1) extracting facial scans
transparently, so that the subject is unaware of the data collection process, and (2) acquiring scans at long
sensor-to-subject distances (up to 35 meters). Such a network would directly address issues important to
National and Homeland Security, while producing significant advances in science and engineering through the
integration of computing, networking, and human-computer interfaces.
Upon completion of this 24-month project, we will have developed a small surveillance network of
hybrid camera systems that fuse multiple active and passive range-sensing techniques. The video sequences
from these pods will be processed to detect and track faces, from which high-resolution 3-D models will be
constructed as a subject moves through the surveyed environment, combining data recorded from any and
all of the component cameras. Finally, the 3-D position of the face within the environment will be used
to target a time-of-flight sensor that acquires a single, high-resolution model of the face to complement
the one created by the network. By first detecting faces in the 2-D video sequences, we will minimize the
computation and network bandwidth needed to build the 3-D face model, focusing all of our 3-D processing
on only the observable faces within the scene rather than constructing a complete 3-D environment from
which faces would then be extracted (a minimal sketch of this face-first ordering appears at the end of this
description). Specific milestones for this project are as follows:
- Develop hybrid surveillance camera "pods" that combine the techniques of shape from motion, multiview
stereo, and structured-light illumination to record high-resolution RGB+depth video.
- Extend the range of a commercially available time-of-flight range-sensing camera from 3.5 to 35 meters
by coupling the near-IR pulse source with a spatial light modulator that concentrates the wideband
near-IR pulse into a narrow beam of light that we can electronically steer onto faces within the
surveyed area (the basic time-of-flight relations behind this sensor are sketched after this list).
- Assemble a small network of calibrated surveillance pods that automatically detect faces within the
surveyed environment. As the subject moves through the network, a central cluster will continuously
refine a 3-D model of the target face through the fusion of the various range-sensing modalities.
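For reference, the sketch below restates the two textbook time-of-flight relations underlying the second milestone: depth from the round-trip time of a pulse, and the unambiguous range of a continuous-wave sensor set by its modulation frequency. It is illustrative only; the range extension proposed above comes from concentrating and steering the illumination, and the modulation frequency used below is an assumed example, not a parameter of the commercial camera in question.

```python
# Illustrative only: basic time-of-flight relations. C is the speed of light in m/s.
C = 299_792_458.0

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth = c * t / 2 for a pulse that travels to the target and back."""
    return C * t_seconds / 2.0

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum depth before the phase wraps for a continuous-wave ToF sensor."""
    return C / (2.0 * f_mod_hz)

if __name__ == "__main__":
    print(f"{depth_from_round_trip(233e-9):.1f} m")  # a ~233 ns round trip is roughly 35 m
    print(f"{unambiguous_range(20e6):.2f} m")        # assumed 20 MHz modulation -> ~7.5 m
```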
While the resulting surveillance system is not expected to operate in real time at the completion of this
project, under subsequent funding we will examine the processing latency and the computing infrastructure
required to achieve real-time operation.
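The face-first ordering described above, detecting faces in the 2-D video before committing any 3-D processing, can be summarized in a few lines. The following is a minimal sketch under assumed details: OpenCV's stock Haar cascade stands in for whatever detector a pod would actually run, the depth map is assumed to be registered to the RGB frame, and the fusion step is reduced to cropping the detected regions.

```python
# A minimal sketch of the face-first processing order: detect faces in the 2-D
# video stream, then restrict depth/3-D work to those regions instead of
# reconstructing the whole scene. Detector choice and RGB-depth registration
# are assumptions, not details from the proposal.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_depth_patches(rgb_frame: np.ndarray, depth_frame: np.ndarray):
    """Return per-face depth crops from a registered RGB+depth frame pair."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Only these small patches need to reach the central cluster, keeping
    # network and compute costs proportional to the number of visible faces.
    return [depth_frame[y:y + h, x:x + w] for (x, y, w, h) in faces]
```

In the actual system these patches would feed the model-fusion step run by the central cluster; here they simply illustrate why bandwidth scales with the number of observed faces rather than with scene size.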
| Status | Finished |
| --- | --- |
| Effective start/end date | 2/15/05 → 9/1/08 |
Funding
- Eastern KY University: $654,668.00