Institut für
Robotik und Prozessinformatik





Video sensors have become increasingly important for automotive applications due to the ongoing cost reduction of cameras and technical advances in image processing hardware. Some production vehicles are already equipped with a monocular rear-view camera as their only rear-facing sensor, but the potential of this camera for assistance systems is still largely unexploited. One desired application is a sensor system for time-to-collision estimation, motivated by the high frequency of rear-end collisions in road traffic. The severity of the resulting passenger injuries may be reduced by an onboard system that estimates the time of a possible rear-end crash and triggers immediate preparations, e.g. by moving each headrest into an optimal position or by tightening the seat belts.
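A common monocular approach to time-to-collision estimation relies only on the apparent scale change of the approaching vehicle between two frames: under a constant closing speed, TTC = Δt / (s − 1), where s is the ratio of the object's image width in consecutive frames. The sketch below illustrates this standard idea; it is not necessarily the method used in this project, and the function and parameter names are illustrative.

```python
def ttc_from_scale(width_prev, width_curr, dt):
    """Estimate time to collision from the apparent-width change
    of a tracked object between two frames taken dt seconds apart.

    Under constant closing speed the image width w is proportional
    to 1/distance, so TTC = dt / (w_curr/w_prev - 1).
    """
    scale = width_curr / width_prev
    if scale <= 1.0:
        return float("inf")  # object is not approaching
    return dt / (scale - 1.0)

# Example: a car 20 m away closing at 1 m/s appears 20/19 times
# wider one second later, leaving 19 s to collision.
print(ttc_from_scale(1.0 / 20.0, 1.0 / 19.0, 1.0))
```

Note that no metric distance or camera calibration is needed; the estimate follows purely from the image-plane scale change.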

In cooperation with Volkswagen AG, we are investigating the feasibility of reliable real-time vehicle detection and time-to-collision estimation with rear cameras. The task of iRP in this project is to develop the video sensor software that delivers state information about approaching vehicles to the actuating safety elements provided by VW.

Camera Setup

We are using single monocular color CCD cameras that monitor the rear area of the car. These cameras deliver images in NTSC format and are mounted in the hatch as illustrated in the figure below. Because of the low-cost optics and signal processing elements, we are developing algorithms that are robust to difficult lighting conditions and image noise.



In many of our approaches to vehicle detection we use a top-down view that is generated by projecting the camera pixels onto the street plane via Inverse Perspective Mapping (IPM), as can be seen in the image below and in the corresponding video. In this view we look down onto the street plane at a right angle, so the distance between any two points on the street plane can be computed without accounting for perspective distortion, which simplifies many of our algorithms.
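The geometric core of IPM can be sketched as follows, assuming a simplified pinhole camera with only a downward pitch angle; the camera height, focal length, and parameter names below are illustrative assumptions, not the project's actual calibration. Each pixel defines a viewing ray, which is intersected with the street plane to obtain the corresponding ground coordinates.

```python
import math

def ipm_ground_point(u, v, f, cx, cy, h, pitch):
    """Map image pixel (u, v) to street-plane coordinates (X, Z).

    Pinhole camera with focal length f (in pixels), principal point
    (cx, cy), mounted at height h above the street and pitched
    downward by 'pitch' radians. Camera frame: x right, y down,
    z forward; the street plane lies at y = h in the rotated frame.
    """
    # Viewing ray of the pixel in the camera frame.
    dx, dy, dz = u - cx, v - cy, f
    # Rotate the ray about the x-axis by the downward pitch angle.
    c, s = math.cos(pitch), math.sin(pitch)
    ry = c * dy + s * dz
    rz = -s * dy + c * dz
    if ry <= 0:
        return None  # ray points at or above the horizon, never hits the street
    t = h / ry  # ray-plane intersection parameter
    return dx * t, rz * t  # lateral offset X, longitudinal distance Z
```

Once every pixel is mapped this way, the metric distance between two points on the street plane is simply the Euclidean distance of their (X, Z) coordinates, which is exactly the simplification the top-down view provides.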

One of our approaches uses this top-down view to generate a street reference texture that describes the expected appearance of the street. The figure below (a video is also available) shows the generated street texture in the right column and the source images in the left column; the top row shows the view as seen from the camera and the bottom row the corresponding top-down view. Source image and street reference texture are compared in order to detect approaching vehicles, which are indicated in the figure and in the video by a green line.
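The comparison step can be sketched as a per-pixel difference between the current top-down view and the street reference texture. This is a minimal NumPy sketch under assumed array layout and thresholds; the real system presumably uses a more robust comparison to cope with noise and lighting changes.

```python
import numpy as np

def detect_vehicle_row(top_down, reference, threshold=30, min_pixels=20):
    """Flag the nearest row of the top-down view whose appearance
    deviates from the street reference texture.

    top_down, reference: uint8 grayscale arrays of identical shape,
    with row 0 closest to the car. Returns the index of the nearest
    deviating row (e.g. for drawing a detection line), or None if
    the view matches the expected street appearance.
    """
    diff = np.abs(top_down.astype(np.int16) - reference.astype(np.int16))
    outliers = (diff > threshold).sum(axis=1)  # deviating pixels per row
    rows = np.nonzero(outliers >= min_pixels)[0]
    return int(rows[0]) if rows.size else None
```

The returned row index corresponds to the nearest point of the approaching vehicle in the top-down view, where row distance is already metric thanks to the IPM projection.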



Animation sequence of the Inverse Perspective Mapping.
Video download: ipm.wmv (Windows Media, 23.3MB)

Detection of vehicles with the help of a street reference texture.
Video download: detection.wmv (Windows Media, 29.2MB)
