Institute for Robotics and Process Control


Automatic Planning and Execution of Assembly Tasks

Description

This project aims at developing methods for automated robot programming. While the market for products changes rapidly, the cost of installing production lines increases dramatically. Thus, there is a strong demand for flexible, programmable tools that support the robot programmer. Our aim is to develop a CAD-based interface for robot programming, so that a programmer may give instructions like 'put the object onto the plane' simply by clicking on the appropriate surfaces in a virtual environment. Such a system should generate robot programs automatically, making time-consuming and costly teaching redundant.
Fig. 1 illustrates a complex aggregate: an automotive headlight assembly consisting of more than 30 parts. The geometric description of each part is available, and the complete product is specified, i.e. each object has a defined goal position. First, assembly sequences are generated for the product.

Fig. 1: The automotive headlight assembly

System Overview

Fig. 2 describes the system, which has been developed at our institute. In the first step, the assembly group is specified using symbolic spatial relations (Fig. 3). To do this, the user only has to click on the appropriate surfaces. Contradictions and errors introduced by the user are detected automatically by the system; we have implemented both an algebraic and a graph-based verification method.
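The graph-based verification mentioned above can be illustrated by a small sketch. The following code is a hypothetical simplification (all names and the one-axis offset model are assumptions, not the institute's implementation): each spatial relation is reduced to a relative offset constraint between two parts, and a breadth-first traversal of the constraint graph flags contradictory specifications.

```python
# Hypothetical sketch: symbolic spatial relations reduced to 1-D offset
# constraints between parts, checked for consistency by a graph traversal.
from collections import defaultdict, deque

def check_relations(relations):
    """relations: list of (part_a, part_b, offset), meaning
    pos[part_b] - pos[part_a] == offset along one axis."""
    graph = defaultdict(list)
    for a, b, d in relations:
        graph[a].append((b, d))
        graph[b].append((a, -d))
    pos = {}
    for start in graph:
        if start in pos:
            continue
        pos[start] = 0.0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, d in graph[u]:
                if v not in pos:
                    pos[v] = pos[u] + d
                    queue.append(v)
                elif abs(pos[v] - (pos[u] + d)) > 1e-9:
                    return False  # contradictory specification
    return True

# consistent chain vs. contradictory loop
print(check_relations([("base", "frame", 10.0), ("frame", "lamp", 5.0)]))   # True
print(check_relations([("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 3.0)])) # False
```

The real system works on full 3-D poses and surface contacts, but the principle is the same: an error is a cycle of relations that cannot all hold at once.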
After specification of the assembly groups, assembly sequences are generated by applying the assembly-by-disassembly strategy. For an assembly group containing n parts, 2^(n-1)-1 possibilities exist to separate it into two subassemblies, so the number of assembly sequences grows exponentially; the underlying problem is NP-complete. Several heuristics have been implemented to generate suitable assembly sequences. The geometrically feasible sequences are stored in an AND/OR graph, which is evaluated with respect to the size of the depart space, parallelism, and the reorientations required during assembly. The assembly plan with minimal cost is selected.
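The combinatorics above can be sketched in a few lines. This is a minimal illustration, not the institute's planner: `feasible` stands in for the geometric separability test, and the graph simply maps each subassembly to its feasible binary splits.

```python
# Sketch of assembly-by-disassembly: enumerate the 2^(n-1)-1 unordered
# binary splits of a part set and keep the feasible ones in an AND/OR graph.
from itertools import combinations

def binary_splits(parts):
    """Yield the 2^(n-1)-1 unordered splits of `parts` into two subassemblies."""
    parts = sorted(parts)
    first, rest = parts[0], parts[1:]   # fix one part to avoid double counting
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            left = frozenset((first,) + combo)
            right = frozenset(parts) - left
            if right:                    # skip the empty split
                yield left, right

def build_and_or_graph(parts, feasible):
    """AND/OR graph: each subassembly node maps to its feasible splits."""
    graph = {}
    def expand(sub):
        if sub in graph or len(sub) == 1:
            return
        graph[sub] = [s for s in binary_splits(sub) if feasible(*s)]
        for left, right in graph[sub]:
            expand(left)
            expand(right)
    expand(frozenset(parts))
    return graph

# For n = 4 parts there are 2^3 - 1 = 7 binary splits.
print(len(list(binary_splits({"housing", "lamp", "lens", "frame"}))))  # 7
```

In the real system, each feasible split is additionally annotated with costs (depart space, reorientations, parallelism) so that the cheapest plan can be extracted from the graph.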
The actual assembly sequence is obtained by traversing the assembly graph from bottom to top. Each branch in the assembly tree represents one assembly operation mating two subassemblies.
After the assembly sequence has been determined, a collision-free path planner is applied.
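The text does not name the planning algorithm; as one common choice for such problems, here is a minimal 2-D RRT sketch (random sampling, straight-line steering, a pluggable collision check). All parameters and the obstacle model are illustrative assumptions.

```python
# Minimal 2-D RRT sketch: grow a tree from the start by steering toward
# random samples; return the path once a node lands near the goal.
import math
import random

def rrt(start, goal, collision_free, step=0.5, iters=10000, goal_tol=0.5):
    random.seed(0)                       # deterministic for illustration
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        dx, dy = sample[0] - nodes[i][0], sample[1] - nodes[i][1]
        norm = math.hypot(dx, dy) or 1.0
        new = (nodes[i][0] + step * dx / norm, nodes[i][1] + step * dy / norm)
        if not collision_free(nodes[i], new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:         # walk back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# free space with a circular obstacle of radius 1.5 at (5, 5);
# this toy check only tests the segment endpoint
free = lambda a, b: math.dist(b, (5, 5)) > 1.5
path = rrt((1, 1), (9, 9), free)
```

A production planner would plan in the robot's full configuration space and check the whole segment against the CAD geometry, not just the endpoint.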
Furthermore, the assembly operations have to be transformed into appropriate skill primitive nets. A skill primitive net consists of skill primitives arranged in a graph, where the nodes represent the skill primitives and the edges are annotated with entrance conditions. Each skill primitive represents one sensor-based robot motion.
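The structure just described can be sketched as a directed graph whose nodes execute one motion each and whose edges gate the transition with entrance conditions. The net, state fields, and conditions below are made up for illustration; they are not the institute's primitives.

```python
# Sketch of skill primitive net execution: run the primitive at the
# current node, then follow the first edge whose entrance condition holds.
def run_net(net, start, state):
    """net: {node: (action, [(condition, next_node), ...])}"""
    node = start
    trace = []
    while node is not None:
        action, edges = net[node]
        action(state)                   # execute one skill primitive
        trace.append(node)
        node = next((nxt for cond, nxt in edges if cond(state)), None)
    return trace

# toy net: approach until a contact force builds up, then insert until seated
state = {"force_z": 0.0, "depth": 0.0}
def approach(s): s["force_z"] += 2.0    # move down, force rises on contact
def insert(s):   s["depth"] += 1.0      # comply in z while inserting
net = {
    "approach": (approach, [(lambda s: s["force_z"] >= 4.0, "insert"),
                            (lambda s: True, "approach")]),
    "insert":   (insert,   [(lambda s: s["depth"] < 3.0, "insert")]),
}
print(run_net(net, "approach", state))
# → ['approach', 'approach', 'insert', 'insert', 'insert']
```

The entrance conditions are what make the net robust: the transition fires on sensor readings, not on a precomputed trajectory.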
With this concept, many different sensors can be used simultaneously; currently we employ cameras and force/torque sensors. For example, the robot can maintain a contact force in the z-direction while simultaneously reducing the distance between two edges detected in an image. By linking these simple robot motions, i.e. skill primitives, we obtain skill primitive nets. Executing a skill primitive net carries out a robot task such as inserting a lamp into a bayonet socket or placing an obliquely gripped object on an automatically detected surface.
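The force/vision example can be sketched as one hybrid control step. Everything here is a hypothetical simplification: proportional laws, gains, and the contact and camera models are assumptions for illustration only.

```python
# Sketch of one hybrid skill primitive: a proportional force controller
# along z combined with a proportional visual-servoing term in x.
def hybrid_step(force_z, pixel_dist, f_ref=5.0, k_f=0.002, k_v=0.1):
    """Return the commanded Cartesian velocity (vx, vz)."""
    vz = k_f * (f_ref - force_z)   # regulate contact force in z
    vx = -k_v * pixel_dist         # drive the image-edge distance to zero
    return vx, vz

# toy closed loop with made-up contact stiffness and pixel scale
force, dist = 0.0, 40.0
for _ in range(50):
    vx, vz = hybrid_step(force, dist)
    force += 400.0 * vz * 0.01     # stiff contact model (hypothetical)
    dist  += vx * 100.0 * 0.01     # pixels per metre (hypothetical)
print(round(force, 2), round(dist, 2))
```

After 50 steps the pixel distance has almost vanished while the contact force converges toward its reference, which is exactly the simultaneous behaviour described above.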
Planning of such processes is carried out in a virtual environment, so displacements between the real world and the virtual world may occur. These displacements are handled successfully by the skill primitive nets. The result is a complete system for planning, evaluating, and executing assembly tasks.

Fig. 2: An overview of the entire system, from specification up to execution


Fig. 3: The specification tool for the definition of symbolic spatial relations, integrated into a commercial robot simulation system

Results (Mating Directions)

For an automated assembly process, valid mating directions have to be computed first. Fig. 4 illustrates the steps necessary for computing configuration space (c-space) obstacles and mating directions.

Fig. 4: (I) Movable part (green) and passive part (red): the green object is a lamp and the red object is part of the projection system of the headlight. (II) The objects after triangulation of their surfaces. (III) The decomposition into convex surface patches. (IV) The configuration space obstacles with computed mating directions.
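The geometric core of the mating-direction computation can be sketched with a standard separability test: for convex contacts, a translation separates the movable part only if it does not push into any contact. The contact normals and candidate directions below are illustrative assumptions, not data from the headlight model.

```python
# Sketch: a direction d is a valid depart (disassembly) direction if
# d · n >= 0 for every contact normal n pointing away from the passive
# part; reversing d gives the corresponding mating direction.
def is_depart_direction(d, contact_normals, eps=1e-9):
    return all(sum(di * ni for di, ni in zip(d, n)) >= -eps
               for n in contact_normals)

# lamp seated in a socket that is open toward +z: contacts block x and y
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1)]
print(is_depart_direction((0, 0, 1), normals))   # True: straight up separates
print(is_depart_direction((1, 0, 0), normals))   # False: blocked sideways
```

For non-convex parts this local test is only necessary, not sufficient, which is why the system builds the full c-space obstacles shown in Fig. 4.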

Based on geometric reasoning, suitable assembly sequences are generated and evaluated. Hard constraints are, for example, the connectivity of subassemblies or geometric feasibility, whereas soft constraints are, for example, parallelism or the reorientations required during an assembly task.
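The hard/soft split can be sketched as a cost function over one assembly operation: hard constraints filter candidates outright, soft constraints add to the cost. The fields and weights are illustrative assumptions.

```python
# Sketch of operation evaluation: hard constraints reject (infinite cost),
# soft constraints penalize; the cheapest plan in the AND/OR graph wins.
WEIGHTS = {"reorientation": 5.0, "no_parallelism": 2.0}  # made-up weights

def operation_cost(op, weights=WEIGHTS):
    if not (op["connected"] and op["geometrically_feasible"]):  # hard
        return float("inf")
    cost = 1.0                          # base cost per mating operation
    if op["needs_reorientation"]:       # soft
        cost += weights["reorientation"]
    if not op["parallelizable"]:        # soft
        cost += weights["no_parallelism"]
    return cost

op = {"connected": True, "geometrically_feasible": True,
      "needs_reorientation": True, "parallelizable": False}
print(operation_cost(op))  # 8.0
```

Summing such costs over all operations of a candidate sequence yields the plan evaluation used to pick the final assembly plan.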

Results (Assembly Planning)


Fig. 5: Generated assembly plan for the complete automotive headlight assembly


Fig. 6 shows some assembly tasks in which skill primitive nets have been applied for sensor-based assembly execution.

Fig. 6: Sensor-based execution of assembly tasks

