Proposal Submitted to
Dr. Jane Shi
Dr. Roland Menassa
MSR Lab, GM R&D Center
Autonomy for Automotive General Assembly
Principal Investigators:
Prof. Reid Simmons and Prof. Sanjiv Singh
Robotics Institute
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh PA 15213
412-268-2621; 412-268-7350 (fax)
{reids,ssingh}@ri.cmu.edu
DUNS: 05-218-4116
Submitted: May 18, 2007
Proposed duration: July 1, 2007 through June 30, 2009
Proposed budget: $422,000
Reid Simmons, PI
Sanjiv Singh, PI
Randall Bryant, Dean, School of Computer Science
Susan Burkett, Associate Provost
Autonomy for Automotive General Assembly
Principal Investigators:
Prof. Reid Simmons and Prof. Sanjiv Singh
Robotics Institute
Carnegie Mellon University
Abstract
Given an assembly task specified as a sequence of subtasks, an autonomous assembly system is capable of achieving the specified goal in a known or unknown environment, without human interaction, starting from an unknown initial location, and in the presence of uncertain sensing, uncertain execution, and uncertain actuation.
Given an automotive general assembly task and its associated environment, this autonomous assembly research project seeks to investigate, identify, and solve the fundamental problems that prevent achieving 100% task autonomy. The research builds on our experience over the last ten-plus years developing a wide range of robotic systems, including walking robots, indoor mobile robots, social robots, and, more recently, multi-robot systems and robotic assembly systems.
For this project, we will apply and extend the ideas we have developed to the area of autonomous automotive general assembly. In particular, we will focus on low-payload, high-dexterity tasks, such as wiring harness installation or dashboard assembly. We will investigate autonomous performance in tasks that involve a reasonably high degree of uncertainty. At the behavioral level, we will pursue force control for reliable placement and validation of task completion. We will also investigate integrated mobility and manipulation while dealing with a moving target, extending our previous work in combined mobility and manipulation. At the executive level, we will investigate how to structure such low-payload, high-dexterity tasks in terms of decomposition into primitive actions and trajectory planning in confined spaces. We will identify and investigate the factors that prevent achieving 100% autonomous assembly. We will also focus on the need for very high reliability, especially in detecting anomalous situations and recovering from errors.
Introduction
Opportunities for automation in automotive manufacturing fall into two general categories – precise, repetitive operations (such as welding and painting) and flexible, dexterous operations (such as cabling). While much progress has been made in automating the former types of operations, relatively little has been done with the latter. In large part, this is because flexible, dexterous tasks need a level of autonomy that is not currently available in commercial robotic systems. In particular, to perform such operations reliably, robot systems need to plan in real time, use sensor-based feedback to deal with uncertainty, and monitor the situation and react and/or replan as the situation warrants.
Much research has been performed in recent years in the area of architectures to support autonomous operations in complex, uncertain, and dynamic domains. In particular, layered architectures have been used to great advantage [Simmons 94; Bonasso 97; Albus 97; Muscettola et al. 98; Musliner et al. 93; Nesnas et al. 06]. The most popular type has three layers (Figure 1) – the top (planning) layer is responsible for receiving high-level goals and deciding how to perform the task, at a fairly abstract level. The middle (executive) layer is responsible for hierarchically decomposing tasks into executable actions, sequencing the execution of tasks, and monitoring execution. The bottom (behavior control) layer is responsible for controlling hardware, dealing with sensors, and reacting to events in real time. Typically, each layer uses representations and algorithms that are specific to the functionality needed by the particular layer. The layers communicate by having the upper layers send goals/commands to lower layers and having the lower layers send back information and status signals to the upper layers.
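To make the interfaces implied by Figure 1 concrete, the following C++ sketch shows one possible shape for the goal/status traffic between the layers. It is a minimal illustration under our own naming assumptions (Goal, Status, PlanningLayer, ExecutiveLayer, and BehaviorLayer are all hypothetical), not an excerpt of an existing system.

    // Hypothetical sketch of the goal/status flow in a three-layer architecture.
    // All class and member names are illustrative only.
    #include <functional>
    #include <string>
    #include <vector>

    struct Goal   { std::string name; std::vector<double> params; };
    struct Status { std::string task; bool succeeded; std::string detail; };

    // Bottom layer: real-time behaviors that touch sensors and actuators.
    class BehaviorLayer {
    public:
        void execute(const Goal& primitive, std::function<void(const Status&)> report) {
            // ... run a closed-loop behavior (e.g., a guarded move), then report.
            report({primitive.name, true, "completed"});
        }
    };

    // Middle layer: decomposes tasks, sequences primitives, monitors execution.
    class ExecutiveLayer {
    public:
        explicit ExecutiveLayer(BehaviorLayer& b) : behaviors_(b) {}

        void achieve(const Goal& task, std::function<void(const Status&)> report) {
            for (const Goal& primitive : decompose(task)) {
                bool ok = false;
                behaviors_.execute(primitive, [&](const Status& s) { ok = s.succeeded; });
                if (!ok) {
                    report({task.name, false, "primitive failed: " + primitive.name});
                    return;   // failure propagates upward for monitoring/replanning
                }
            }
            report({task.name, true, "all primitives completed"});
        }

    private:
        std::vector<Goal> decompose(const Goal& task) {
            // ... task-specific decomposition into primitive actions.
            return {{"approach", {}}, {"insert", {}}, {"verify", {}}};
        }
        BehaviorLayer& behaviors_;
    };

    // Top layer: accepts abstract goals and decides which tasks to perform.
    class PlanningLayer {
    public:
        explicit PlanningLayer(ExecutiveLayer& e) : executive_(e) {}
        void submit(const Goal& abstractGoal) {
            executive_.achieve(abstractGoal, [](const Status& s) {
                if (!s.succeeded) { /* replan or escalate */ }
            });
        }
    private:
        ExecutiveLayer& executive_;
    };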
We have been developing a three-layered architecture for autonomous robotic systems for quite some time. The architecture has been applied to walking robots [Simmons 94], indoor mobile robots [Simmons et al. 98], social robots [Simmons et al. 03] and, more recently, to multi-robot systems [Goldberg et al. 03] and robotic assembly [Sellner et al. 06]. As part of that research, we have developed tools to facilitate building robot systems based on such an architecture, including a package for flexible interprocess communication [Simmons & Whelan 97], a language that extends C++ with syntax for specifying task-level control [Simmons & Apfelbaum 98], and a C++-based implementation of the skill manager [Bonasso 97], which provides a framework for defining real-time behaviors based on a data-flow model.
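The task-level control support mentioned above is organized around task trees: a parent task spawns child tasks, imposes ordering constraints among them, and the tree is expanded and executed incrementally. The sketch below approximates that idea in plain C++; it does not use TDL syntax, and the TaskNode class and the harness-clip example are illustrative assumptions only.

    // Illustrative task-tree sketch in plain C++ (not TDL syntax): a parent
    // task spawns children that execute under a sequential ordering constraint.
    #include <functional>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    class TaskNode {
    public:
        TaskNode(std::string name, std::function<bool()> body = {})
            : name_(std::move(name)), body_(std::move(body)) {}

        TaskNode& spawnChild(std::string name, std::function<bool()> body = {}) {
            children_.push_back(
                std::make_unique<TaskNode>(std::move(name), std::move(body)));
            return *children_.back();
        }

        // Run this node's body, then its children in order; stop on first failure.
        bool run() {
            std::cout << "starting task: " << name_ << "\n";
            if (body_ && !body_()) return false;
            for (auto& child : children_)
                if (!child->run()) return false;
            return true;
        }

    private:
        std::string name_;
        std::function<bool()> body_;
        std::vector<std::unique_ptr<TaskNode>> children_;
    };

    int main() {
        TaskNode install("install-harness-clip");
        install.spawnChild("move-to-approach", [] { return true; });
        install.spawnChild("guarded-insert",   [] { return true; });
        install.spawnChild("verify-seated",    [] { return true; });
        return install.run() ? 0 : 1;
    }

TDL itself adds richer constructs (monitors, exception handlers, and non-sequential ordering constraints) on top of this basic task-tree idea.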
In addition, we have done significant work in robotic assembly, especially using multiple coordinating robots [Simmons et al. 00; Hershberger et al. 00; Hershberger et al. 02; Sellner et al. 06]. Using our layered architecture, we have developed a system to perform large-scale assembly using a team of autonomous robots consisting of a high-payload, but relatively inaccurate, manipulator, a low-payload, dexterous mobile manipulator, and a mobile robot that provides visual estimates of the relative positions of objects in the environment (Figure 2) [Simmons et al. 00]. The manipulator robots use the visual position information to close a behavioral servo loop [Hershberger et al. 00], enabling them to do assembly with fairly tight tolerances. We have developed a novel method for controlling the mobile manipulator that controls both the arm and the base simultaneously [Shin et al. 03]. A key feature is that the approach uses commands specified as motion of the end effector and chooses actions that optimize the manipulability of the arm and, thus, the range of tasks that can be performed. Finally, we have investigated methods for increasing the reliability of the system, both through autonomous execution monitoring and error recovery, and through sliding autonomy [Sellner et al. 06], which enables humans and robots to seamlessly share the achievement of complex tasks. The resulting assembly system can perform tasks much faster than can a remote teleoperator, yet has the reliability of a purely human-controlled system.
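For reference, a common way to quantify the manipulability mentioned above is Yoshikawa's measure w = sqrt(det(J J^T)), and a standard way to turn an end-effector motion command into joint motion is damped least-squares resolved-rate control. The sketch below (using the Eigen linear-algebra library) illustrates both. It is a generic illustration, not the controller of [Shin et al. 03]; jacobian() and poseError() are hypothetical placeholders for the robot-specific kinematics and the visual pose estimate.

    // Sketch of a proportional end-effector servo step plus Yoshikawa's
    // manipulability measure w = sqrt(det(J * J^T)).
    #include <Eigen/Dense>
    #include <cmath>

    using Eigen::MatrixXd;
    using Eigen::VectorXd;

    // Placeholder: 6 x n geometric Jacobian at joint configuration q.
    MatrixXd jacobian(const VectorXd& q);

    // Placeholder: 6-DOF pose error between the visually estimated target
    // and the current end-effector pose.
    VectorXd poseError(const VectorXd& q, const VectorXd& targetPose);

    double manipulability(const MatrixXd& J) {
        return std::sqrt((J * J.transpose()).determinant());
    }

    // One servo step: map the Cartesian error to joint velocities using the
    // damped pseudo-inverse, qdot = J^T (J J^T + lambda I)^-1 * xdot.
    VectorXd servoStep(const VectorXd& q, const VectorXd& targetPose,
                       double gain = 1.0, double damping = 1e-3) {
        const MatrixXd J = jacobian(q);
        const VectorXd xdot = gain * poseError(q, targetPose);
        const MatrixXd JJt = J * J.transpose();
        const MatrixXd damped =
            JJt + damping * MatrixXd::Identity(JJt.rows(), JJt.cols());
        const VectorXd y = damped.ldlt().solve(xdot);
        return J.transpose() * y;
    }

In a redundancy-resolution scheme of the kind used in [Shin et al. 03], the extra degrees of freedom contributed by the mobile base would be used to keep the manipulability measure high while the end effector tracks the commanded motion.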
Work Plan
For this project, we will apply and extend the ideas we have developed to the area of autonomous automotive general assembly. In particular, we will focus on low-payload, high-dexterity manipulation tasks performed from a mobile manipulator, where the part being worked on is itself moving at slow speed, such as wiring harness installation or dashboard assembly. We will investigate autonomous performance in tasks that involve a reasonably high degree of uncertainty. At the behavioral level, we will pursue force control for reliable placement and validation of task completion. We will also investigate integrated mobility and manipulation while dealing with a moving target, extending our previous work in combined mobility and manipulation. At the executive level, we will investigate how to structure such low-payload, high-dexterity tasks in terms of decomposition into primitive actions and trajectory planning in confined spaces. We will identify and investigate the factors that prevent achieving 100% autonomous assembly. We will also focus on the need for very high reliability, especially in detecting anomalous situations and recovering from errors.
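As a simplified illustration of what we mean by force control for reliable placement and validation of task completion, the sketch below shows a guarded insertion: advance along the insertion axis until contact is sensed, push up to a seating force, and use the resulting force signature (or a timeout) to decide whether the clip seated. The RobotIO interface and the numeric thresholds are assumptions for illustration only.

    // Simplified guarded-insertion sketch.  RobotIO is a hypothetical interface
    // to the wrist force/torque sensor and the Cartesian velocity command channel.
    #include <cmath>

    struct Wrench { double fx, fy, fz; };   // tool-frame forces [N]

    class RobotIO {
    public:
        virtual ~RobotIO() = default;
        virtual Wrench measuredWrench() const = 0;
        virtual void commandVelocity(double vz_mm_per_s) = 0;  // insertion axis
        virtual void sleepMs(int ms) = 0;
    };

    enum class InsertResult { Seated, NoContact, Jammed };

    InsertResult guardedInsert(RobotIO& io,
                               double approachSpeed = 5.0,   // mm/s
                               double contactForce  = 3.0,   // N, first contact
                               double seatForce     = 15.0,  // N, clip fully seated
                               int    timeoutMs     = 5000) {
        int elapsed = 0;
        // Phase 1: approach until a contact force is sensed.
        while (std::fabs(io.measuredWrench().fz) < contactForce) {
            if (elapsed >= timeoutMs) { io.commandVelocity(0.0); return InsertResult::NoContact; }
            io.commandVelocity(approachSpeed);
            io.sleepMs(10); elapsed += 10;
        }
        // Phase 2: push slowly until the seating force is reached.
        while (std::fabs(io.measuredWrench().fz) < seatForce) {
            if (elapsed >= timeoutMs) { io.commandVelocity(0.0); return InsertResult::Jammed; }
            io.commandVelocity(approachSpeed * 0.2);
            io.sleepMs(10); elapsed += 10;
        }
        io.commandVelocity(0.0);
        return InsertResult::Seated;   // force signature consistent with a seated clip
    }

In practice the thresholds would be tuned per connector type, and validation would check a richer force/position signature than a single axial threshold.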
More specifically, our work plan includes the following:
- Work with GM to define precisely the demonstration scenario, for both the first and second years of the project.
- Develop a “task board” with fiducials, attachment points for clips, and cut-outs for having the robot reach behind the board. For the first year, the task board will be stationary; we will develop a mechanism to move it linearly in the second year.
- Design and fabricate an end effector.
- Develop behaviors and task-level procedures for achieving the task.
- Integrate force feedback into the behavioral control layer.
- Develop the ability to coordinate the mobile manipulator while tracking a moving target.
- Identify points of failure in the system and work to develop reliable execution monitoring and error recovery strategies (a simple monitoring and recovery pattern is sketched after this list).
- Demonstrate a succession of increasingly complex task scenarios: first with the robot and task board both stationary at a (relatively) known, fixed position; then with the robot starting at an (unknown, variable) distance from the task board; and finally with the task board moving.
- Evaluate the robot assembly system in terms of performance (primarily speed of performing the task) and reliability (primarily the percentage of successful task achievements).
- Analyze the approach taken to identify the key long-term drivers for autonomous general assembly, especially with regard to handling task and environmental uncertainty.
In addition, we will write reports and create videos documenting our efforts, in accordance with the tasks described below.
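For the execution monitoring and error recovery item above, the following sketch illustrates the basic pattern we have in mind: wrap each primitive action with a monitor that checks a postcondition after execution and dispatches a bounded number of recovery attempts before escalating. The ActionSpec structure and the retry policy are illustrative, not a committed design.

    // Illustrative monitor/recovery wrapper around a primitive action.
    #include <functional>
    #include <iostream>
    #include <string>

    struct ActionSpec {
        std::string name;
        std::function<bool()> execute;        // run the primitive action
        std::function<bool()> postcondition;  // verify the expected outcome
        std::function<void()> recover;        // e.g., retract and re-approach
        int maxAttempts = 3;
    };

    // Execute an action under monitoring: verify the postcondition after each
    // attempt and invoke the recovery procedure before retrying.
    bool monitoredExecute(const ActionSpec& a) {
        for (int attempt = 1; attempt <= a.maxAttempts; ++attempt) {
            bool ran = a.execute();
            if (ran && a.postcondition()) return true;
            std::cout << a.name << ": attempt " << attempt << " failed, recovering\n";
            if (a.recover) a.recover();
        }
        return false;   // escalate to the executive (replan) or to a human operator
    }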
Tasks
- September 20, 2007 (first portion of work on the Executive Layer for task decomposition)
(1a) Investigate and develop new capabilities at the Executive Layer to take a high-level goal (an end-to-end assembly task) and decompose it into primitive actions/steps for trajectory planning. Document the investigative results, the approach taken, and the preliminary software design.
- October 20, 2007
(2) Design and fabricate the end-of-arm tool (EOAT) and “task board” to achieve a basic, functional hardware environment.
- December 10, 2007
(3) Use our three-layer architecture to achieve a preliminary assembly task at a stationary, known position. Document any changes and the major reasons for the changes. Document methods of specifying parameters needed for the assembly tasks. Video of robot performing task from stationary position without force feedback.
- February 20, 2008 (second portion of work on Executive layer for task decomposition)
(1b) (see 1a) Preliminary software implementation of the new capabilities for task decomposition. Evaluate overall functionality and limitations. Document the capability assessment and related limitations.
- March 30, 2008
(4) Develop a new force-feedback control function at the Behavior Layer to achieve the assembly task and verify/confirm successful assembly at a stationary, known position. Document the force control algorithm and a preliminary evaluation of its performance. Document methods of specifying parameters needed for the assembly tasks. Video of robot performing task from stationary position with force feedback.
- June 20, 2008 (third portion of work on Executive layer dealing with subtask execution control, possible replanning and error recovery)
(1c) (see 1a and 1b) Investigate subtask execution control using TDL declarative methods; investigate possible re-planning scenarios and error detection methods. Preliminary software implementation. Document the investigated TDL specification and subtask constraints, and possible scenarios for re-planning and recovery methods.
- July 31, 2008
(5) Demonstrate autonomous assembly in a static environment from an arbitrary position with a reasonably high amount of positional uncertainty. Document the required task specification. Identify failure scenarios and possible recovery methods. Document the investigative results and identify the main factors that prevent 100% autonomy. Video of robot performing task from arbitrary position. Combine all previous documents into a summary report describing the general framework and methods, and their limitations, for achieving autonomous assembly with a stationary target.
- October 31, 2008
(6) Develop new capabilities at the Behavior Layer to detect and track a moving workpiece with adequate vision sensing speed. Enhance the Executive Layer’s capability to deal with a moving target (related to 1c). Document the Behavior Layer’s virtual sensor and its corresponding action for achieving tracking and reactive servoing capability. Document the required task specification.
- January 31, 2009
(7) Extend resolved motion rate control (RMRC) to accomplish simultaneous tracking, manipulation, and assembly. Demonstrate autonomous assembly in a moving environment with reasonably high uncertainty. Document the predictive algorithm (a simple candidate predictor is sketched after this task list). Document the real-time behaviors based on a data-flow model. If needed, enhance the Executive Layer’s capability to deal with moving assembly (related to 1c). Preliminary evaluation of system performance and exceptions. Preliminary evaluation of the system’s autonomous capability. Document evaluation results. Document required task specification. Video of robot performing task with moving target.
- March 31, 2009
(8) Identify failure scenarios and possible recovery methods. Document the investigative results and identify the main factors that prevent 100% autonomy. Develop execution monitoring, failure detection, and recovery methods. Implement several of the important error recovery procedures. Document the methods for monitoring and error recovery for improved reliability. Prototype software for the entire autonomous system – source code as well as instructions for setting up the compilation environment to produce valid executables.
- June 30, 2009
(9) Final report describing project, including quantitative evaluation of system performance and reliability and analysis of key long-term drivers for autonomous assembly.
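One simple candidate for the predictive element in task (7) is a constant-velocity extrapolator over recent timestamped visual pose estimates of the workpiece; the predicted pose would replace the static target pose in the end-effector servo loop. The sketch below illustrates that idea only; the window length and the planar pose representation are assumptions, not a committed design.

    // Constant-velocity predictor over timestamped visual pose estimates of the
    // moving workpiece.  A minimal illustration, not a committed algorithm.
    #include <array>
    #include <deque>

    using Pose = std::array<double, 3>;   // x, y, yaw of the workpiece

    struct Stamped { double t; Pose p; };

    class ConstantVelocityPredictor {
    public:
        void addMeasurement(double t, const Pose& p) {
            history_.push_back({t, p});
            if (history_.size() > 10) history_.pop_front();   // keep a short window
        }

        // Extrapolate the latest estimate forward to time t using the average
        // velocity over the stored window.
        Pose predict(double t) const {
            if (history_.empty()) return Pose{};   // no estimate yet
            const Stamped& first = history_.front();
            const Stamped& last  = history_.back();
            Pose out = last.p;
            const double span = last.t - first.t;
            if (history_.size() >= 2 && span > 1e-6) {
                for (int i = 0; i < 3; ++i) {
                    const double vel = (last.p[i] - first.p[i]) / span;
                    out[i] += vel * (t - last.t);
                }
            }
            return out;
        }

    private:
        std::deque<Stamped> history_;
    };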
Deliverables
- September 20, 2007
Report on design of End of Arm Tool (EOAT) and task board. Report on Executive Layer decomposition function and preliminary software design. ($50,000)
- December 10, 2007
Report on adaptation of three layer architecture and required methods of specifying parameters needed for the assembly tasks. Video of robot performing task from stationary position without force feedback. ($50,000)
- March 30, 2008
Report on the Executive Layer capability assessment and related limitations. Report on the force control algorithm, experimental results, and required methods of specifying parameters needed for the assembly tasks. Video of robot performing task from stationary position with force feedback. ($70,000)
- June 30, 2008
Report on the Executive TDL specification and subtask constraints, and possible scenarios for re-planning and recovery methods. ($42,000)
- July 31, 2008
Report on the main factors that prevent 100% autonomy. Video of robot performing task from arbitrary position. First-year summary report describing the general framework and methods, and their limitations, for achieving autonomous assembly with a stationary target. ($40,000)
- October 31, 2008
Report on the Behavior Layer’s new capability for detecting and tracking the workpiece and the required task specification. ($40,000)
- January 31, 2009
Report on the predictive algorithm, investigative results, and required task specification. Video of robot performing task with moving target. ($50,000)
- March 31, 2009
Report on the methods for monitoring and error recovery and the related investigative results. Prototype software for entire autonomous system – source code as well as instructions for setting up the compilation environment to produce valid executables. ($40,000)
- June 30, 2009
Final report including quantitative evaluation of system performance and reliability, and analysis of key long-term drivers for achieving 100% task autonomy for automotive general assembly. ($40,000)
References
[Albus 97] J. S. Albus, “The NIST Real-time Control System (RCS): an approach to intelligent systems research”. Journal of Experimental and Theoretical Artificial Intelligence, 9(2-3): 157-174, 1997
[Bonasso 97] R.P. Bonasso, D. Kortenkamp, D.P. Miller and M.G. Slack. “Experiences with an Architecture for Intelligent, Reactive Agents”. Journal of AI Research, 9:1, 1997
[Goldberg et al. 03] D. Goldberg, V. Cicirello, M. B. Dias, R. Simmons, S. Smith, and A. Stentz, “Market-Based Multi-Robot Planning in a Distributed Layered Architecture”. In Proceedings of the Multi-Robot Systems Workshop, Washington, D.C., March 17-19, 2003
[Hershberger et al. 00] D. Hershberger, R. Burridge, D. Kortenkamp, and R. Simmons, “Distributed Visual Servoing with a Roving Eye”. In Proceedings of the Conference on Intelligent Robots and Systems (IROS), Takamatsu, Japan, October 2000
[Hershberger et al. 02] D. Hershberger, R. Simmons, S. Singh, J. Ramos, and T. Smith, “Coordination of Heterogeneous Robots for Large-Scale Assembly”. In Robot Teams: From Diversity to Polymorphism, T. Balch, L. Parker (eds.), AK Peters, 2002
[Muscettola et al. 98] N. Muscettola, P. P. Nayak, B. Pell and B. Williams, “Remote Agent: To Boldly Go Where No AI System Has Gone Before”. Artificial Intelligence, 103(1-2):5-48, August 1998
[Musliner et al. 93] D. J. Musliner, E. H. Durfee, and K. G. Shin, “CIRCA: A Cooperative Intelligent Real-Time Control Architecture”. IEEE Transactions on Systems, Man, and Cybernetics, 23:6, pp. 1561-1574, 1993
[Nesnas et al. 06] I. A. Nesnas, R. Simmons, D. Gaines, C. Kunz, A. Diaz-Calderon, T. Estlin, R. Madison, J. Guineau, M. McHenry, I. Shu, and D. Apfelbaum, “CLARAty: Challenges and Steps Toward Reusable Robotic Software”. International Journal of Advanced Robotic Systems, 3:1, pp. 23-30, 2006
[Sellner et al. 06] B. Sellner, F. W. Heger, L. M. Hiatt, R. Simmons, and S. Singh, “Coordinated Multiagent Teams and Sliding Autonomy for Large-Scale Assembly”. Proceedings of the IEEE, 94:7, July 2006
[Shin et al. 03] D. H. Shin, B. S. Hamner, S. Singh, and M. Hwangbo, “Motion Planning for a Mobile Manipulator with Imprecise Locomotion”. In Proceedings IROS, Las Vegas, October 2003
[Simmons 94] R. Simmons, “Structured Control for Autonomous Robots”. IEEE Transactions on Robotics and Automation, 10:1, pp. 34-43, February 1994
[Simmons & Whelan 97] R. Simmons and G. Whelan, “Visualization Tools for Validating Software of Autonomous Spacecraft”. In Proceedings of International Symposium on Artificial Intelligence, Robotics and Automation in Space, Tokyo, Japan, July 1997
[Simmons & Apfelbaum 98] R. Simmons and D. Apfelbaum, “A Task Description Language for Robot Control”. In Proceedings Conference on Intelligent Robotics and Systems, Vancouver, Canada, October 1998
[Simmons et al. 98] R. G. Simmons, R. Goodwin, K. Zita Haigh, S. Koenig, J. O'Sullivan, M. M. Veloso, “Xavier: Experience with a Layered Robot Architecture”. Intelligence, 1998
[Simmons et al. 00] R. Simmons, S. Singh, D. Hershberger, J. Ramos, and T. Smith, “First Results in the Coordination of Heterogeneous Robots for Large-Scale Assembly”. In Proceedings of the International Symposium on Experimental Robotics (ISER), Honolulu, Hawaii, December 2000
[Simmons et al. 03] R. Simmons, et al., “GRACE: An Autonomous Robot for the AAAI Robot Challenge”. AI Magazine, 24:2, pp. 51-72, Summer 2003