Newswise — Engineers at Ohio State University are developing a computerized surveillance system that, when completed, will attempt to recognize whether a person on the street is acting suspiciously or appears to be lost.

Intelligent video cameras, large video screens, and geo-referencing software are among the technologies that will soon be available to law enforcement and security agencies.

In the recent Proceedings of the 2008 IEEE Conference on Advanced Video and Signal Based Surveillance, James W. Davis and doctoral student Karthik Sankaranarayanan report that they've completed the first three phases of the project: they have one software algorithm that creates a wide-angle video panorama of a street scene, another that maps the panorama onto a high-resolution aerial image of the scene, and a method for actively tracking a selected target.

The ultimate goal is a networked system of "smart" video cameras that will let surveillance officers observe a wide area quickly and efficiently. Computers will carry much of the workload.

"In my lab, we've always tried to develop technologies that would improve officers' situational awareness, and now we want to give that same kind of awareness to computers," said Davis, an associate professor of computer science and engineering at Ohio State University.

The research isn't meant to gather specific information about individuals, he explained.

"In our research, we care what you do, not who you are. We aim to analyze and model the behavior patterns of people and vehicles moving through the scene, rather than attempting to determine the identity of people. We are trying to automatically learn what typical activity patterns exist in the monitored area, and then have the system look for atypical patterns that may signal a person of interest -- perhaps someone engaging in nefarious behavior or a person in need of help."

The first piece of software expands the small field of view that traditional pan-tilt-zoom security cameras offer.

When surveillance operators look through one of these video cameras, they get only a tiny image -- what some refer to as a "soda straw" view of the world. As they move the camera around, they can easily lose a sense of where they are looking within a larger context.

The Ohio State software takes a series of snapshots from every direction within a camera's field of view, and combines them into a seamless panorama.

Commercially available software can turn overlapping photographs into a flat panorama, Davis explained. But this new software creates a 360-degree high-resolution view of a camera's whole viewspace, as if someone were looking at the entire scene at once. The view resembles that of a large fish-eye lens.
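As a rough illustration only -- not the authors' code, which builds the full 360-degree, fish-eye-style projection described above -- a sweep of overlapping pan-tilt-zoom snapshots can be merged into a single composite with OpenCV's generic stitcher. The folder name below is a hypothetical placeholder.

```python
# Minimal sketch: combine overlapping PTZ snapshots into one panorama.
# This uses OpenCV's generic stitcher, not the spherical projection the
# researchers describe; "ptz_sweep/*.jpg" is a made-up path.
import glob
import cv2

images = [cv2.imread(path) for path in sorted(glob.glob("ptz_sweep/*.jpg"))]

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status code", status)
```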

The fish-eye view isn't a live video image; it takes a few minutes to produce. But once it's displayed on a computer screen, operators can click a mouse anywhere within it, and the camera will pan and tilt to that location for a live shot.

Or, they could draw a line on the screen, and the camera will orient along that particular route -- down a certain street, for instance. Davis and his team are also looking to add touch-screen capability to the system.
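Under the hood, that interaction amounts to inverting the panorama mapping: the clicked pixel is converted back into pan and tilt angles, which are then sent to the camera. A toy version, assuming a simple equirectangular layout (360 degrees of pan across the image width and a fixed tilt range down its height) rather than the fish-eye layout the researchers use, might look like this; the angle ranges and image size are hypothetical.

```python
def click_to_pan_tilt(x, y, pano_width, pano_height,
                      tilt_top=30.0, tilt_bottom=-90.0):
    """Convert a clicked panorama pixel to pan/tilt angles in degrees.

    Assumes pan runs linearly across the width (0-360) and tilt linearly
    down the height (tilt_top at y=0). A real fish-eye panorama would
    need a polar mapping instead.
    """
    pan = (x / pano_width) * 360.0
    tilt = tilt_top + (y / pano_height) * (tilt_bottom - tilt_top)
    return pan, tilt

# e.g. a click near the middle of a 3600x1200 panorama:
pan, tilt = click_to_pan_tilt(1800, 600, 3600, 1200)
print(f"command camera: pan={pan:.1f} deg, tilt={tilt:.1f} deg")
```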

A second piece of software maps locations within the fish-eye view onto an aerial map of the scene, such as a detailed Google map. A computer can use this information to calculate where the viewspaces of all the security cameras in an area overlap. Then it can determine the geo-referenced coordinates -- latitude and longitude -- of each ground pixel in the panorama image.
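Conceptually, this geo-referencing step can be treated as fitting a ground-plane homography between the panorama and the aerial image. A compact sketch of that idea, with entirely made-up point correspondences, could use OpenCV as follows.

```python
import numpy as np
import cv2

# Four (or more) hand-picked ground points: panorama pixels paired with
# their (longitude, latitude) on the aerial map. Values are placeholders.
pano_pts = np.float32([[120, 560], [980, 540], [400, 300], [760, 310]])
geo_pts  = np.float32([[-83.0158, 40.0016], [-83.0141, 40.0017],
                       [-83.0154, 40.0029], [-83.0144, 40.0028]])

H, _ = cv2.findHomography(pano_pts.reshape(-1, 1, 2),
                          geo_pts.reshape(-1, 1, 2))

def pixel_to_latlon(x, y):
    """Map a ground pixel in the panorama to (latitude, longitude)."""
    lon, lat = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0, 0]
    return lat, lon

print(pixel_to_latlon(500, 450))
```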

The third software component uses the combined map and panorama for tracking. As a person walks across a scene, the computer calculates exactly where that person is on both the panorama and the aerial map, and uses that information to instruct a camera to follow him or her automatically with its pan-and-tilt control. With the cameras networked, the computer can "hand off" the tracking task from one camera to another as the person moves in and out of their views.
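One way to picture the hand-off is that every camera knows, through its own geo-referenced mapping, which patch of ground it can see; the system then assigns the target's current latitude and longitude to whichever camera covers it. The sketch below is a simplified, hypothetical scheduler, not the researchers' implementation.

```python
import numpy as np

class Camera:
    """A camera described by a geo-to-panorama homography and panorama size."""
    def __init__(self, name, H_geo_to_pano, pano_size):
        self.name = name
        self.H = np.asarray(H_geo_to_pano, dtype=float)   # 3x3 homography
        self.width, self.height = pano_size

    def project(self, lon, lat):
        v = self.H @ np.array([lon, lat, 1.0])
        return v[0] / v[2], v[1] / v[2]

    def sees(self, lon, lat):
        x, y = self.project(lon, lat)
        return 0 <= x < self.width and 0 <= y < self.height

def assign_tracker(cameras, lon, lat):
    """Hand the target off to the first camera whose footprint contains it."""
    for cam in cameras:
        if cam.sees(lon, lat):
            return cam
    return None

# e.g. active_camera = assign_tracker(all_cameras, -83.015, 40.002)
```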

"That's the advantage of linking all the cameras together in one system -- you could follow a person's trajectory seamlessly," Davis said.

His team is now working on the next step in the research: determining who should be followed.

The system won't rely on traditional profiling methods, he said. A person's race or sex or general appearance won't matter. What will matter is where the person goes, and what they do.

"If you're doing something strange, we want to be able to detect that, and figure out what's going on," he said.

To first determine what constitutes normal behavior, they plan to follow the paths of many people who walk through a particular scene over a long period of time. A line tracing each person's trajectory will be saved to a database.

"You can imagine that over a few months, you're going to start to pick up where people tend to go at certain times of day -- trends," he said.

People who stop in an unusual spot or leave behind an object like a package or book bag might be considered suspicious by law enforcement.
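A simple way to operationalize "typical versus atypical" is to accumulate the saved trajectories into an occupancy histogram over the geo-referenced ground plane and flag visits to rarely used cells. The grid size and threshold below are arbitrary illustrations, not the group's actual behavior model.

```python
import numpy as np

GRID_W, GRID_H = 100, 100            # arbitrary grid over the monitored area
counts = np.zeros((GRID_W, GRID_H))  # visits per cell from past trajectories

def to_cell(lon, lat, bounds):
    """Quantize a geo coordinate to a grid cell; bounds = (lon0, lat0, lon1, lat1)."""
    lon0, lat0, lon1, lat1 = bounds
    i = int(np.clip((lon - lon0) / (lon1 - lon0) * (GRID_W - 1), 0, GRID_W - 1))
    j = int(np.clip((lat - lat0) / (lat1 - lat0) * (GRID_H - 1), 0, GRID_H - 1))
    return i, j

def learn(trajectories, bounds):
    """Accumulate visit counts from saved trajectories (lists of (lon, lat) points)."""
    for traj in trajectories:
        for lon, lat in traj:
            counts[to_cell(lon, lat, bounds)] += 1

def is_atypical(lon, lat, bounds, min_visits=5):
    """Flag a location that past foot traffic almost never reached."""
    return counts[to_cell(lon, lat, bounds)] < min_visits
```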

But Davis has always wanted to see if this technology could find lost or confused people. He suspects it can, since he can easily pick out lost people himself while watching video footage from the experimental camera system that surrounds his building at Ohio State.

It never fails -- during the first week of fall quarter, as most students hurry directly to class, some will circle the space between buildings. They'll stop, maybe look around, and turn back and forth a lot.

"Humans can pick out a lost person really well," he said. "I believe you could build an algorithm that would also be able to do it."

He's now looking into the possibility of deploying a large test system across the state of Ohio based on this research. There, law enforcement agencies could link video cameras around the major cities, map video panoramas to publicly available aerial maps (such as those maintained by the Ohio Geographically Referenced Information Program), and use the team's software to provide a higher level of "location awareness" for surveillance.

Three Ohio State students are currently working on the project: doctoral student Karthik Sankaranarayanan, who is funded by the National Science Foundation, and undergraduates Matthew Nedrich and Karl Salva, who are funded by the Air Force Research Laboratory.

CITATIONS

Proceedings of the 2008 IEEE Conference on Advanced Video and Signal Based Surveillance