Project acronym: MOVIE / IST-2001-39250
Title: Motion Planning in Virtual Environments
Deliverable No: D 5.2: Combining internal and external motions
Short description:
Motion planning for digital actors (or virtual mannequins) is a key activity of the Movie project. Progress mainly concerns:
- the completion of a motion-capture-based walking controller,
- the combination of manipulation and locomotion tasks, and
- coordinated motion planning for several agents carrying the same object.
Due month: 24
Delivery month: 24
Partners owning: LAAS
Partners contributed: LAAS, KINEO
Classification: Public
Project funded by the European Community under the “Information Society Technologies” Programme (1998-2002).
Results Obtained
Combining internal and external motions is the central topic of the research conducted on digital actor animation by LAAS, with a contribution by Kineo on the application to virtual mannequins for assembly task planning. The overall objective is to devise methods that automatically compute human-like motions for digital actors (or virtual mannequins). A digital actor moves through its space by walking or running (external motions); such external motions result from the combination of cyclic motions of the (internal) locomotion degrees of freedom.
The work focuses on producing and combining different behaviors, mainly locomotion and manipulation, to automatically generate complex animations.
In the previous period (Months 1-12), we addressed the problem of locomotion planning. The solution is based both on probabilistic motion planning and on motion-capture-based blending and warping techniques. We developed a modular architecture that generates a human walking sequence from a trajectory planned in a 3D cluttered environment. In the current period this architecture has been extended to include manipulation task capabilities, and the motion controller has been finalized.
Walk control
Walk control aims at automatically providing a natural walking sequence from a given configuration to a given goal. Configurations of the mannequin are 3-dimensional: two parameters for the position and one for the orientation. Our walking controller is based on motion capture data editing techniques.
Our contribution [6] is an original formulation of the problem that allows a simple and efficient geometric computation of the best motion captures to be blended, as well as their respective weights, to achieve a desired control. The principle is illustrated in Figure 1. The key idea is to transform each motion capture of a given database (Motion Library) into a single point lying in a 2-dimensional velocity space of linear and angular velocities (Control Space). The points are then structured into a Delaunay triangulation, a well-known data structure in computational geometry that allows efficient queries for point location and nearest-neighbor computations. Our control scheme is based on a blending operator working from three motion captures (Blending Space). Their respective weights are automatically computed by solving a simple linear system with three unknowns.
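The point-location and weight computation can be sketched as follows. This is a minimal illustration using SciPy's Delaunay triangulation; the clip velocities in the library are invented for the example:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical motion library: each captured clip is reduced to one
# point (linear velocity v, angular velocity omega) in the control space.
clip_points = np.array([
    [0.0, 0.0],    # idle
    [1.2, 0.0],    # straight walk
    [1.0, 0.5],    # walk turning left
    [1.0, -0.5],   # walk turning right
    [0.6, 0.05],   # slow walk
])

tri = Delaunay(clip_points)  # structure the library once, offline

def blend_weights(v, omega):
    """Return the 3 clip indices and barycentric weights realizing the
    desired control (v, omega), or None if it lies outside the library."""
    q = np.array([v, omega])
    simplex = tri.find_simplex(q)          # efficient point location
    if simplex < 0:
        return None                        # outside the convex hull
    verts = tri.simplices[simplex]         # the 3 surrounding clips
    # 3x3 linear system: weights sum to 1 and reproduce (v, omega).
    T = np.vstack([clip_points[verts].T, np.ones(3)])
    w = np.linalg.solve(T, np.append(q, 1.0))
    return verts, w
```

At runtime, each control query therefore costs one point location in the triangulation plus one 3x3 solve, which is what makes the scheme cheap enough for interactive animation.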
Figure 1: Walk controller architecture.
Manipulating while walking
We are interested in imposing manipulation constraints on a walking virtual character. To this end we extend our motion planner to allow a digital mannequin to carry a bulky object in a cluttered environment. The approach is based on an analysis of the global task (manipulating while walking) according to three types of constraints [1]:
- 3D obstacle avoidance
- believable locomotion
- object manipulation
Figure 2: Functional decomposition of the DOFs.
To address these constraints altogether, we decompose all the degrees of freedom of the mannequin into three classes (mobility, grasp and locomotion; Figure 2). We then combine three types of techniques within the same framework: probabilistic path planning methods to deal with obstacle avoidance, the motion-capture-based walking controller described above to provide believable animations, and inverse kinematics techniques to deal with object manipulation. In addition, to address two-hand manipulation, we make use of the RLG algorithm, which deals with path planning for closed kinematic chains [2].
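As a minimal illustration of the inverse kinematics component, the classical analytic solution for a planar 2-link arm is sketched below. It is a hypothetical stand-in for the mannequin's grasp degrees of freedom, with invented link lengths, not the actual solver used in the framework:

```python
import math

def two_link_ik(x, y, l1=0.35, l2=0.30):
    """Analytic inverse kinematics for a planar 2-link arm: returns
    (shoulder, elbow) joint angles placing the hand at (x, y),
    or None when the target is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                # unreachable target
    elbow = math.acos(c2)                          # elbow-down solution
    k1 = l1 + l2 * math.cos(elbow)
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow
```

In the framework such an IK query is issued whenever the grasp DOFs must track the carried object, while the locomotion DOFs are animated independently by the walking controller.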
Figure 3 gives an example of a planned motion. In this example an additional constraint has been added: the tray must be kept horizontal.
Figure 3: Manipulating a tray while walking.
Coordinated manipulation
We extend our approach to coordinated manipulation among two or more virtual mannequins (Figure 4). Here we model the global task within a single system that gathers all the degrees of freedom of the agents and of the object. This system is automatically built by computing a so-called "reachable cooperative space" [3,5]. Coordinated motions are produced by applying a three-stage algorithm:
- Plan a collision free trajectory for a reduced model of the system.
- Animate locomotion and manipulation behaviors independently.
- Tune the generated motions to avoid residual collisions.
These steps rely on a geometric and kinematic decoupling of the system and use different techniques such as path planning, locomotion controllers, inverse kinematics, and path planning for closed kinematic mechanisms [2].
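The third stage can be illustrated with a deliberately simplified tuning rule: keep the planned motion geometry fixed and retime one agent (here, by delaying its start) until residual collisions with the other agent disappear. The delay rule and discrete-time collision test are hypothetical simplifications invented for this sketch:

```python
def tune_by_delay(path_a, path_b, collide, max_delay=20):
    """Toy residual-collision tuning: find the smallest start delay for
    agent B such that, replaying both paths in lockstep (each agent
    holds its last configuration after finishing), no time step has
    collide(qa, qb) true. Returns the delay, or None if none works."""
    for delay in range(max_delay):
        padded = [path_b[0]] * delay + list(path_b)  # B waits at start
        horizon = max(len(path_a), len(padded))
        ok = True
        for t in range(horizon):
            qa = path_a[min(t, len(path_a) - 1)]
            qb = padded[min(t, len(padded) - 1)]
            if collide(qa, qb):
                ok = False
                break
        if ok:
            return delay
    return None

# Example: A crosses along the x-axis, B along the y-axis; without
# tuning they meet at the origin at the same time step.
path_a = [(float(x), 0.0) for x in range(-3, 4)]
path_b = [(0.0, float(y)) for y in range(-3, 4)]
```

Retiming is only one of the tuning operations one can apply; the point is that the planned geometry from the first two stages is preserved while the residual interaction between agents is resolved.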
Figure 4: A virtual mannequin and two robots move a piano
Assembly planning with virtual mannequins
Manipulation planning is currently being extended to the context of mechanical part assembly planning. The goal here is to automatically compute a collision-free path for both the part to be disassembled and the mannequin manipulating it. Two approaches are proposed, depending on the difficulty of the problem. Both are based on the general probabilistic diffusion algorithm, working in the configuration space of the considered system, developed in the previous period. The first approach consists in planning a path for the part alone and then checking the feasibility of the solution once the mannequin is added. The second one considers the part grasped by the mannequin as a single system. While the first approach is fast, the second one is able to solve more constrained and difficult cases. Both solutions are based on the same path planning library, allowing the user to easily evaluate the proposed solutions. Preliminary experimental results are based on feedback from the automotive industry.
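The two-approach strategy can be sketched with a toy diffusion planner. The RRT-style sampler below stands in for the project's diffusion algorithm, and the 2-D configuration space, obstacle models, and parameters are invented for illustration; the coupled part-plus-mannequin system simply appears as a stricter collision test:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two planar configurations."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rrt_plan(start, goal, collides, step=0.05, iters=20000, seed=1):
    """Minimal probabilistic diffusion (RRT-style) planner on [0,1]^2.
    `collides(q)` reports whether configuration q is in collision."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        q = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        i = min(range(len(nodes)), key=lambda j: dist(nodes[j], q))
        d = dist(nodes[i], q)
        if d == 0.0:
            continue
        s = min(step, d)                       # steer toward the sample
        new = (nodes[i][0] + s * (q[0] - nodes[i][0]) / d,
               nodes[i][1] + s * (q[1] - nodes[i][1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if dist(new, goal) < step:             # goal reached: backtrack
            path, j = [goal], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

def plan_assembly(start, goal, part_collides, coupled_collides):
    """The two approaches of the text: plan the part alone and validate
    with the mannequin added; if that fails, plan the coupled system."""
    path = rrt_plan(start, goal, part_collides)
    if path is not None and all(not coupled_collides(q) for q in path):
        return path
    return rrt_plan(start, goal, coupled_collides)

# Example: a wall at 0.44 < x < 0.56; the coupled system fits through
# a narrower gap in the wall than the part alone.
def make_wall(gap_lo, gap_hi):
    return lambda q: 0.44 < q[0] < 0.56 and not (gap_lo < q[1] < gap_hi)

path = plan_assembly((0.1, 0.5), (0.9, 0.5),
                     make_wall(0.35, 0.65),    # part alone
                     make_wall(0.45, 0.55))    # part + mannequin
```

The cheap first attempt mirrors the report's observation: when the part-alone path happens to remain feasible with the mannequin added, the expensive coupled search is skipped entirely.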
Figure 5: Manipulation planning for assembly.
Deviations
None
Publications
[1] G. Arechavaleta, C. Esteves, J.-P. Laumond, Planning fine motions for a digital factotum, Proceedings IROS, Sendai, 2004.
[2] J. Cortes, T. Simeon, Sampling-based motion planning under kinematic loop-closure constraints, Proceedings WAFR, Utrecht, 2004, pp. 59-74.
[3] C. Esteves, G. Arechavaleta, J.-P. Laumond, Animation planning for virtual mannequins cooperation, Poster at Eurographics/ACM Symposium on Computer Animation, Grenoble, 2004.
[4] C. Esteves, G. Arechavaleta, J.-P. Laumond, Planning cooperative motions for animated characters, International Symposium on Robotics and Automation, Mexico, 2004.
[5] J.-P. Laumond, E. Ferré, G. Arechavaleta, C. Esteves, Mechanical part assembly planning with virtual mannequins, submitted to ISATP, Montreal, 2005.
[6] J. Pettré, J.-P. Laumond, A motion capture based control-space approach for walking mannequins, submitted to International Journal on Visualisation and Computer Animation, 2004.
[7] C. Esteves, G. Arechavaleta, J.-P. Laumond, Motion planning for virtual mannequins cooperation, LAAS-CNRS Report 04574, 2004.
[8] C. Esteves, G. Arechavaleta, J. Pettré, J.-P. Laumond, Animation planning for virtual mannequins cooperation, submitted to ACM Trans. on Graphics, 2004.