Institut für Robotik und Prozessinformatik


Parking spot detection by digital image analysis


spot_detection.jpg

Introduction

More and more cars are equipped with driver assistance systems (DAS). They provide the driver with information and support and can thus help to prevent accidents and contribute to safer traffic. Some of these systems, such as ABS or ESP, have been in successful use for many years. Others, such as automatic lane detection or infrared vision, are only now being developed and installed in newer cars. Modern DAS are designed to simplify driving, increase the driver's comfort, and reduce fatigue. One of these modern systems is the parking assistant, which automatically maneuvers the car into a parking spot. Before the parking process can start, however, a parking spot has to be detected. In state-of-the-art systems, the driver has to locate a spot himself; ultrasonic sensors are then used to measure its size. In cooperation with Volkswagen AG, we are developing a vision-based system that locates parking spots automatically. Compared with ultrasonic sensors, cameras have great advantages due to their wide range of possible applications: with the help of image processing, almost any desired information can be extracted from camera images. Furthermore, cameras have a long range of sight, so data can be gathered even at great distances. Our system uses cameras with fisheye lenses and structure from motion to obtain information about the surrounding area. The collected data is then interpreted to locate parking spots.

Setup and Methods

Fisheye cameras have the advantage of delivering a large field of view; a 180° view is possible with just one camera. Therefore, two cameras suffice to cover the areas to the left and right of a moving car. They were installed in the side mirrors of a test vehicle provided by Volkswagen. The following picture shows an example of a right view.

right_view.jpg
Right view with a fisheye camera
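
To give an idea of why a single fisheye lens can cover such a wide field of view, here is a minimal sketch of the equidistant fisheye projection model (image radius proportional to the angle from the optical axis). It is a generic textbook model for illustration only; the focal length, principal point, and the assumption of an equidistant lens are placeholders, not the calibration of the cameras in the test vehicle.

import numpy as np

def fisheye_project(points_cam, f=300.0, cx=640.0, cy=480.0):
    # Equidistant fisheye model: image radius r = f * theta, where theta is
    # the angle between the viewing ray and the optical axis (z forward).
    # f, cx, cy are placeholder intrinsics, not real calibration values.
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# A point 90° off-axis (z = 0) still maps to a finite image radius, which is
# why one fisheye camera can see a full 180° hemisphere to one side of the car.
print(fisheye_project(np.array([[1.0, 0.0, 0.0]])))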

Since we have only one camera on each side, we cannot use stereo vision to gather 3D information. However, because the car is moving, we can use structure from motion. This technique uses two views of a scene taken from different positions to triangulate points in the 3D world and compute their exact position. The result is a 3D scatter diagram of the area the car has passed.
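
As a rough illustration of the triangulation step in structure from motion, the following sketch recovers one 3D point from its projections in two views using standard linear (DLT) triangulation. The projection matrices and pixel coordinates are made-up placeholders; the real system additionally has to estimate the camera motion and match corresponding image points between the two views, which is not shown here.

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: the homogeneous 3D point is the right
    # null vector of the 4x4 system built from both projections.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Placeholder example: the second view is shifted 1 m along the x axis,
# simulating the car's forward motion between the two images.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([2.0, 0.5, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers approximately X_true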

Results

An example of such a scatter diagram can be seen in the following image.

scatter.jpg
3D Scatter diagram

Every point in this diagram represents a real 3D point. Points at street level are colored red, obstacles (i.e., points above street level) are displayed in black. If every 3D point is projected onto its corresponding point in the ground plane, a top-down view is created that shows the whole scene as seen from above. Patterns now become visible that can be clearly identified. Cars, for example, form a pattern shaped like the letter "U". If such patterns are found in the top-down view, the positions of cars are reliably recognized. Free parking spots are then detected by searching for free space between these cars. For an illustration of the process, see the following image: recognized cars are highlighted in red, free space is marked green.

topdown.jpg
Top-down view of the scene
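
To make the free-space search more concrete, the following sketch projects the 3D points onto the ground plane, rasterizes obstacle points into a simple 1D occupancy profile along the driving direction, and reports gaps long enough for the car. All thresholds, coordinate conventions, and the minimum spot length are illustrative assumptions, not the parameters of the actual system.

import numpy as np

def find_parking_gaps(points, height_thresh=0.15, lane_width=3.0,
                      cell=0.1, min_length=5.0):
    # points: (N, 3) array with x = driving direction [m], y = lateral
    # offset towards the inspected side [m], z = height above street [m].
    # All parameter values are placeholder assumptions.
    mask = (points[:, 2] > height_thresh) & \
           (points[:, 1] > 0.0) & (points[:, 1] < lane_width)
    xs = points[mask, 0]                       # obstacle points in the band

    # 1D occupancy profile along the driving direction.
    x_min, x_max = points[:, 0].min(), points[:, 0].max()
    n_cells = int(np.ceil((x_max - x_min) / cell)) + 1
    occupied = np.zeros(n_cells, dtype=bool)
    occupied[((xs - x_min) / cell).astype(int)] = True

    # Collect runs of free cells that are at least min_length long.
    gaps, start = [], None
    for i, occ in enumerate(occupied):
        if not occ and start is None:
            start = i
        elif occ and start is not None:
            if (i - start) * cell >= min_length:
                gaps.append((x_min + start * cell, x_min + i * cell))
            start = None
    if start is not None and (n_cells - start) * cell >= min_length:
        gaps.append((x_min + start * cell, x_min + n_cells * cell))
    return gaps

In the real system the U-shaped car patterns also delimit a spot laterally; the sketch above only checks the longitudinal extent of the gap between obstacles.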

If a parking space is found, the automated parking process can be initiated.

Experimental vehicle: Paul

The current experimental vehicle Paul (German: "Parkt allein und lenkt", roughly "parks and steers by itself") uses our vision-based parking spot detection. Paul was presented at the Hannover Fair 2008 and demonstrates Volkswagen's Park Assist Vision system.

BZ-Artikel.jpg
An article from the "Braunschweiger Zeitung" (15.04.2008)


Paul.jpg
Paul at the Hannover Fair 2008

The German television show "Abenteuer Auto" (26.04.2008, Kabel 1) reported on Paul at the Hannover Fair 2008:

Video Download: Video Abenteuer Auto (Windows Media, 37.7 MB)
