Institute for Robotics and Process Control

Force Torque Maps and Particle Filters for Tolerant Robot Assembly and Sensor Fusion

Introduction

In this project we examine a new approach to tolerant robot assembly that can be programmed and executed automatically. Consider mounting an "active" object onto a second, "passive" object. To be tolerant to variations in the robot work cell (sizes and poses of objects, etc.), our approach allows the fusion of different sensors.
In our experiments we concentrated on different peg-in-hole assembly tasks. If, for example, the real position of the passive part differs slightly from the expected one, the peg will not meet the hole but touch its border. The resulting reaction force can be measured by a force torque sensor; additionally, we observe the scene with a greyscale camera.
Please note that all following considerations about forces and torques also apply to more complex objects, as they are independent of the objects' shapes.

assembly_tasks.gif
Assembly Tasks


Sensor #1: Force Torque Maps

One important sensor is the force torque sensor mounted in the robot's wrist. Our basic idea is to use this sensor to recognize (or at least estimate) the displacement of the relative pose between the two objects when they come into contact. Depending on the amount and direction of the displacement (translational and/or rotational), different forces and torques will be measured.
In general (i.e. for complex part geometries), it is computationally difficult to infer the displacement parameters directly from the measured forces and torques during assembly. Therefore, we pre-calculate the expected forces and torques for a large number of different displacements and store them in a so-called Force Torque Map:

ftmap.gif
Example of a Force Torque Map
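
As a rough illustration, the following Python sketch shows one way such a map could be stored and queried. The restriction to a two-dimensional translational grid, the class name ForceTorqueMap and the nearest-neighbour lookup are assumptions made for brevity here; they do not reflect our actual implementation.

    import numpy as np

    class ForceTorqueMap:
        """Grid of expected wrenches (fx, fy, fz, tx, ty, tz) over displacements (dx, dy)."""

        def __init__(self, dx_range, dy_range, resolution):
            self.dx = np.arange(dx_range[0], dx_range[1] + resolution, resolution)
            self.dy = np.arange(dy_range[0], dy_range[1] + resolution, resolution)
            # One 6-vector (3 forces + 3 torques) per grid cell
            self.wrench = np.zeros((len(self.dx), len(self.dy), 6))

        def set(self, i, j, wrench):
            self.wrench[i, j] = wrench

        def expected_wrench(self, dx, dy):
            # Nearest-neighbour lookup of the pre-calculated wrench for a displacement hypothesis
            i = int(np.argmin(np.abs(self.dx - dx)))
            j = int(np.argmin(np.abs(self.dy - dy)))
            return self.wrench[i, j]

With the map pre-computed, evaluating a displacement hypothesis during assembly reduces to a single table lookup, which keeps the later evaluation of many hypotheses cheap.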

Force Torque Maps can, of course, be obtained by scanning the space around the correct object placement. Additionally, we have developed a method to compute them automatically from given CAD data:
The first step is the calculation of the expected contact points for a given displacement. We use the PC's GPU to render the objects (parallel projection onto a separating plane) into Z-buffers A and B and find those points where the difference between the two Z-buffers is minimal:

computation_contact_points.gif
Calculation of contact points between two objects
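
The sketch below illustrates this contact search on the CPU, with NumPy arrays standing in for the GPU-rendered depth buffers; the function and parameter names and the sign convention of the buffers are assumptions of this sketch, not our actual rendering code.

    import numpy as np

    def contact_points(zbuffer_a, zbuffer_b, tolerance=1e-4):
        """Find the expected contact points for one displacement.

        zbuffer_a and zbuffer_b are depth images of the active and the passive
        part, rendered by parallel projection onto the separating plane. Their
        per-pixel difference is the remaining gap along the mating direction;
        the pixels where this gap is minimal are the expected contact points.
        """
        gap = np.full(zbuffer_a.shape, np.inf)
        valid = np.isfinite(zbuffer_a) & np.isfinite(zbuffer_b)   # pixels covered by both parts
        gap[valid] = zbuffer_a[valid] - zbuffer_b[valid]
        min_gap = gap.min()
        rows, cols = np.nonzero(gap <= min_gap + tolerance)
        return np.stack([cols, rows], axis=1), min_gap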

In the second step, given the contact points (or rather their convex hull) and the mating direction, we can estimate the expected torques:

computation_torque.gif
Calculation of expected torques
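
A much simplified rigid-body model of this step might look as follows. It assumes a frictionless contact in which the applied force along the mating direction is shared equally by the contact points; this is an illustrative simplification, not our exact model.

    import numpy as np

    def expected_wrench_from_contacts(contacts, mating_direction, applied_force=1.0,
                                      sensor_origin=np.zeros(3)):
        """Estimate the reaction wrench (force, torque) measured at the sensor.

        contacts: (n, 3) array of contact points in the sensor frame.
        mating_direction: direction vector of the assembly motion.
        """
        d = np.asarray(mating_direction, dtype=float)
        d = d / np.linalg.norm(d)
        pts = np.asarray(contacts, dtype=float)

        reaction_per_contact = -applied_force * d / len(pts)   # each contact pushes back
        force = reaction_per_contact * len(pts)
        torque = np.zeros(3)
        for p in pts:
            torque += np.cross(p - sensor_origin, reaction_per_contact)   # lever arm x force
        return np.concatenate([force, torque])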

This procedure is repeated for a large set of different offsets between the active and passive parts, yielding a Force Torque Map with the desired degrees of freedom, resolution, and size.
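
In code, this sweep could be as simple as the following sketch; the callable simulate_wrench, which would wrap the rendering and the two steps above, is a placeholder of this sketch.

    def build_force_torque_map(ftm, simulate_wrench):
        """Fill a ForceTorqueMap (see the sketch above) by sweeping all grid displacements.

        simulate_wrench(dx, dy) is assumed to render the parts at that offset,
        find the contact points and return the estimated 6-vector wrench.
        """
        for i, dx in enumerate(ftm.dx):
            for j, dy in enumerate(ftm.dy):
                ftm.set(i, j, simulate_wrench(dx, dy))
        return ftm

Further translational or rotational degrees of freedom would simply add more axes to the grid, at the cost of map size.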

Particle Filter for Sensor Fusion

We use a particle filter to estimate the current relative object pose. Each particle represents a relative pose hypothesis.
Initially, the particles are drawn from a Gaussian distribution. Then the robot tries to assemble the objects, obtaining forces and torques at contact (and a camera image). These sensor values are used to weight all particles by comparing the real sensor values to those expected for each particle's hypothesis. The "best" particle represents the new pose estimate. For the next iteration, all particles are redistributed according to their weights.
The robot moves according to the pose estimate, and of course all particles have to be moved accordingly. This procedure is repeated until the parts have been successfully assembled. See Results for an example of how the particles evolve during assembly.
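
A minimal sketch of this estimation loop, reduced to a planar offset hypothesis (dx, dy), could look as follows; the two-dimensional state, the Gaussian likelihood and the multinomial resampling are assumptions chosen for brevity, and expected_measurement stands for a prediction such as a Force Torque Map lookup.

    import numpy as np

    rng = np.random.default_rng()

    class PoseParticleFilter:
        def __init__(self, n, mean, sigma):
            # Gaussian initialisation of the relative-pose hypotheses
            self.particles = rng.normal(mean, sigma, size=(n, len(mean)))
            self.weights = np.full(n, 1.0 / n)

        def update(self, measurement, expected_measurement, noise=1.0):
            """Weight each particle by how well its predicted sensor values
            match the measured ones (Gaussian likelihood on the residual)."""
            for k, p in enumerate(self.particles):
                residual = measurement - expected_measurement(p)
                self.weights[k] = np.exp(-0.5 * np.dot(residual, residual) / noise**2)
            self.weights /= self.weights.sum()
            return self.particles[np.argmax(self.weights)]   # "best" particle = new pose estimate

        def resample(self):
            # Redistribute the particles according to their weights
            idx = rng.choice(len(self.particles), size=len(self.particles), p=self.weights)
            self.particles = self.particles[idx]
            self.weights[:] = 1.0 / len(self.particles)

        def apply_robot_motion(self, motion):
            # The robot corrected its pose, so every hypothesis shifts accordingly
            # (the sign depends on the chosen pose convention).
            self.particles = self.particles - motion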

Sensor #2: Computer Vision

So far, we use a very simple but efficient algorithm based on a single camera, specially adapted to our peg-in-hole assembly task: we detect the silhouette lines of the active and passive part and calculate the angle between them. As the vision sensor's task is to evaluate relative pose hypotheses, the measured angle can be compared to the angle obtained from the particle at hand. Thus, only the unambiguous 3D-to-2D projection is necessary; we do NOT try to estimate 3D poses from 2D images!
For other assembly tasks, different sensors or algorithms could be used. In future work, we plan to select sensors and algorithms automatically.
Note that a single camera can only judge one translational and one rotational degree of freedom. The vision sensor alone cannot tell which particle is the best, but it can assign a low rating to most "wrong" particles.

camera_image.gif
Typical camera image with detected line features
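
In code, the rating a particle receives from the vision sensor might look like the following sketch; the line detection itself (which yields the silhouette lines in the image) is omitted, and all names are assumptions of this sketch.

    import numpy as np

    def silhouette_angle(dir_a, dir_b):
        """Angle between two detected silhouette lines, given by their 2D direction vectors."""
        da = np.asarray(dir_a, dtype=float)
        db = np.asarray(dir_b, dtype=float)
        da = da / np.linalg.norm(da)
        db = db / np.linalg.norm(db)
        return np.arccos(np.clip(abs(np.dot(da, db)), 0.0, 1.0))

    def vision_weight(measured_angle, expected_angle, sigma=np.deg2rad(2.0)):
        """Rate one pose hypothesis: expected_angle is obtained by projecting the
        particle's relative pose into the image (the unambiguous 3D-to-2D step)."""
        diff = measured_angle - expected_angle
        return float(np.exp(-0.5 * (diff / sigma) ** 2))

Such a weight only constrains the degrees of freedom visible to the camera, which is why it is fused with the force torque evaluation inside the particle filter rather than used on its own.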

Results

As you can see below, our calculated Force Torque Maps are very similar to those obtained by scanning the environment:

measured_vs_simulated.gif
Calculated vs. measured Force Torque Maps


The following image shows a typical particle evolution: Initialisation, first contact, estimation, second contact, better estimation, successful assembly.

particles_example.gif
Evolution of Particles


The following diagrams show the number of contact movements needed for successful assembly in our experiments. As you can see, including an additional vision sensor decreases this number significantly: in most cases, one or two contact movements were sufficient.

results_ftm_only.gif
Number of contact movements using Force Torque Maps only


results_ftm_and_vision.gif
Number of contact movements using Force Torque Maps and the vision sensor


A more detailed description of this project can be found in our paper (DAGM 2007, [link]).
