One important sensor is the Force Torque Sensor mounted in the robot's wrist.
Our idea is to use this sensor to recognize (or at least estimate) the displacement
of the relative pose between the two objects when they come into contact.
Depending on the amount and direction of the displacement (translational and/or rotational),
different forces and torques will be measured.

In general (i.e., for complex part geometry), it is computationally difficult to infer the displacement parameters directly
from the measured forces and torques during assembly. Therefore, we pre-calculate the expected forces and torques for a large
number of different displacements and store them in a so-called Force Torque Map:
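Once such a map exists, estimating the displacement reduces to finding the map entry whose stored wrench best matches the current sensor reading. The following is a minimal sketch of that lookup, assuming the map is stored as two parallel arrays (displacements and their expected wrenches); all names and the toy values are hypothetical, not from the original system.

```python
import numpy as np

# Hypothetical Force Torque Map: each displacement (here dx, dy in mm)
# is paired with the wrench (fx, fy, fz, tx, ty, tz) expected there.
map_displacements = np.array([
    [-1.0, 0.0],   # 1 mm offset in -x
    [ 0.0, 0.0],   # correctly aligned
    [ 1.0, 0.0],   # 1 mm offset in +x
])
map_wrenches = np.array([
    [ 2.0, 0.0, -5.0, 0.0, -0.3, 0.0],
    [ 0.0, 0.0, -5.0, 0.0,  0.0, 0.0],
    [-2.0, 0.0, -5.0, 0.0,  0.3, 0.0],
])

def estimate_displacement(measured_wrench):
    """Return the map displacement whose stored wrench is closest
    (Euclidean distance) to the measured one."""
    dists = np.linalg.norm(map_wrenches - measured_wrench, axis=1)
    return map_displacements[np.argmin(dists)]

measured = np.array([1.9, 0.0, -5.1, 0.0, -0.28, 0.0])
print(estimate_displacement(measured))
```

A real system would interpolate between neighboring map entries rather than snap to the nearest one, but the table-lookup idea is the same.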

[Figure: Example of a Force Torque Map]
Force Torque Maps can, of course, be obtained by scanning the space around the correct object placement.
Additionally, we have developed a method to compute them automatically from given CAD data:

The first step is the calculation of the expected contact points for a given displacement. We use the PC's GPU to
render the objects (parallel projection onto a separating plane) into Z-buffers A and B,
and find those points where the difference between the two Z-buffers is minimal:
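The Z-buffer comparison can be sketched in a few lines of NumPy. Here both depth buffers are assumed to measure distance along the same axis (part A above the separating plane, part B below), so the per-pixel gap is their difference; the tolerance, buffer sizes, and values are illustrative assumptions, not the actual GPU implementation.

```python
import numpy as np

def contact_points(z_a, z_b, tol=1e-6):
    """Expected contact pixels: where the per-pixel gap between the
    two depth buffers (z_a minus z_b, measured along the mating
    axis) is minimal, within a small tolerance."""
    gap = z_a - z_b
    return np.argwhere(gap <= gap.min() + tol)

# Toy depth buffers: part A is flat except for one bump that
# protrudes toward part B; part B is flat.
z_a = np.full((4, 4), 2.0)
z_a[1, 2] = 1.0              # bump, closer to the separating plane
z_b = np.full((4, 4), 0.5)

print(contact_points(z_a, z_b))   # the bump pixel touches first
```

On the GPU this reduction runs over the full-resolution buffers; the principle is identical.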

[Figure: Calculation of contact points between two objects]
In the second step, given the contact points (or rather their convex hull) and the mating direction,
we can estimate the expected forces and torques:
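The torque estimate follows from summing r × f over the contact points. The sketch below makes two simplifying assumptions that are not stated in the original: the reaction force acts along the mating direction, and it is distributed equally over the contact points; coordinates and magnitudes are hypothetical.

```python
import numpy as np

def expected_wrench(contacts, mating_dir, f_total=1.0):
    """Sum force and torque about the origin (e.g., the sensor frame),
    assuming the total contact force acts along the mating direction,
    split equally over the contact points."""
    mating_dir = mating_dir / np.linalg.norm(mating_dir)
    f_per_contact = (f_total / len(contacts)) * mating_dir
    force = f_total * mating_dir
    torque = sum(np.cross(r, f_per_contact) for r in contacts)
    return force, torque

# Two contact points on one edge of the part (metres, sensor frame).
contacts = np.array([[0.02,  0.01, -0.10],
                     [0.02, -0.01, -0.10]])
force, torque = expected_wrench(contacts,
                                np.array([0.0, 0.0, 1.0]),
                                f_total=10.0)
```

The lateral offset of the contacts (x = 0.02 m) shows up as a torque about y, which is exactly the signal the Force Torque Map records for that displacement.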

[Figure: Calculation of expected torques]
This procedure is repeated for a large set of different offsets between the active and passive parts, yielding
a Force Torque Map with the desired degrees of freedom, resolution, and size.
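The overall map construction is then a loop over candidate offsets. In this sketch, `simulate_wrench` stands in for the two steps above (contact-point computation plus wrench estimation); the stand-in model and all names are assumptions for illustration only.

```python
import numpy as np

def build_force_torque_map(offsets, simulate_wrench):
    """Tabulate the expected wrench for every candidate offset,
    keyed by the offset tuple."""
    return {tuple(o): simulate_wrench(o) for o in offsets}

# Toy physics stand-in: a lateral offset produces a proportional
# counter-torque on top of a constant pressing force along -z.
toy_model = lambda o: np.array(
    [0.0, 0.0, -5.0, -0.3 * o[1], 0.3 * o[0], 0.0])

# 3x3 grid of x/y offsets (mm) around the correct placement.
offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
ft_map = build_force_torque_map(offsets, toy_model)
```

Extending the grid to rotational offsets adds degrees of freedom to the map; the resolution and extent of the grid directly set the map's resolution and size.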