1.1.1 Mobility and Manipulation Thrust

1.1.1.1 Research Program

A unifying theme that pervades the Mobility and Manipulation Thrust (MoMaT) is the provision of safe mobility and manipulation. Assuring safety has been identified by both our end-user focus groups and practitioner partners as one of the important and difficult challenges of developing usable QoLT systems. The primary concern is the safety of the user, in addition to concerns for the safety of others such as caregivers, damage to the user's environment, and modes of failure of assistive devices. We are focusing on two major barriers to developing safe systems that assist and physically interact with a user: 1) planning appropriate behavior in complex environments, and 2) safe physical interaction, even in cases where substantial power is transferred.

The performance of existing assistive devices is limited by their lack of intelligence. We have chosen the QoLT Systems Active Home and Personal Mobility and Manipulation Appliance (PerMMA) to drive us to develop real-life-ready planning algorithms, with a 3-year goal of developing planning algorithms that succeed with clean sensor data, a 5-year goal of working with real sensor data in known situations, and a 10-year goal of success in unstructured environments and tasks. Though these are significant challenges, they are not totally open-ended, because we are coupling our systems to the needs of individual users and taking advantage of regularities in their lifestyles.

To address safety, we are developing soft interaction capabilities to support safe physical interaction. The technologies we develop will be used in the Active Home and PerMMA systems. Our strategy is to first develop soft and safe control on existing robots. By the end of Year 5, our target is new robot designs that are inherently safe and tactile sensing surfaces that can be applied to robots, furniture, and other contact surfaces. Our 10-year goal is a complete soft interaction system that supports applications ranging from low-force tasks, such as hygiene, to the high-force task of transferring a person in and out of beds and chairs.

In collaboration with the Person and Society Thrust (PST), we match human needs to possible technologies. In the problem formulation and design process, we have been working with potential users to articulate needs and to envision and evaluate possible solutions. Rather than committing to one specific solution, such as a humanoid robot, we have adopted a systems-level approach. Rather than separately considering how to move the user and how to assist him with manipulation, we consider mobility and manipulation as an integrated problem. Our discussions with end-users highlighted the need for assistance with transfer between bed, wheelchair, toilet, and bathing area. An earlier review of everyday living activities revealed the need for manipulation of relatively lightweight objects: fetching objects; opening wrapped packages and boxes; preparing meals; and household chores. More recent stakeholder meetings focused on the need to improve our user interfaces. One surprising finding is that users are much less concerned with the speed with which assistive robots perform tasks than they are with functionality at any speed, even if far slower than that of a person.

A major intellectual contribution of MoMaT is planning, control, and other decision-making algorithms that work well and safely in complex environments like homes. A qualitative improvement in autonomy of assistive devices is one of the transformative elements of our QoLT ERC strategic plan. New approaches to planning and control are key components.

Another major intellectual contribution is high performance force control and fail-safe performance of mobile manipulators that physically interact with humans. This contribution includes whole body force control based on skin and joint load sensing, back-drivable actuation, soft structures, and human-like balance shifts and shifts of support so that reliable force control can be provided from a mobile base.

Achievements: Summary

Major achievements in the reporting period include:

  • We developed a new framework, Task Space Regions (TSRs), for planning to manipulate objects under task constraints, such as heavy weights or full water pitchers. The approach extends traditional bidirectional random-sampling planning algorithms to constraint manifolds and is also capable of dealing with closed-chain kinematic constraints. Uncertainty in object and environment pose estimates is taken into account explicitly at planning time, enabling the planner to produce worst-case feasible plans driven by the sensor uncertainty and the task requirements.
  • We improved CHOMP, our previously developed trajectory optimization algorithm, which produces smooth trajectories that optimize a dynamic criterion while avoiding obstacles.
  • We developed behavior design algorithms for compliant robots based on optimal control and dynamic programming.
  • We constructed a multi-link soft robot prototype using inflatable technology, as well as a prototype continuum robot that moves using elastic deflection. We also further developed our theory of optimal design of soft robots and continued our force control experiments.
  • We developed several prototypes of direct manipulation and teleoperation interfaces for programming assistive robot arms using a skin for contact and force sensing.

Project Descriptions

Mobile Manipulation

The goal of this project is to develop the algorithms that underlie mobile manipulation found in the Active Home and PerMMA systems. These algorithms must facilitate easy operation, safety, robustness, and high performance. A central effort is to develop efficient motion planning algorithms for autonomous grasping and manipulation of household objects, and integrate them into reliable robotic platforms.

Figure MoMaT-1: (Left) Configuration space of a 3-DOF manipulator generated by exhaustive sampling. (Center) 3-link manipulator configurations corresponding to several points along a path that moves the weight from one table to the other. (Right) Snapshots of a 7-DOF WAM arm with an 8.17 kg end-effector mass executing a path found by CBiRRT to move the dumbbell from one table to the other.

Our primary research efforts over the past year focused on developing fast, general algorithms for grasping and arm-motion generation in constrained situations. Everyday life is full of tasks that constrain our movement. For example, carrying a coffee mug, lifting a heavy object, and sliding a milk jug out of a refrigerator are tasks that impose constraints on our bodies as well as on the manipulated objects. Creating algorithms for general-purpose robots to perform these kinds of tasks likewise involves computing motions subject to multiple simultaneous task constraints. For example, a robotic manipulator lifting a heavy milk jug while keeping it upright must satisfy a constraint on the pose of the jug as well as constraints on the arm configuration due to the weight of the jug. In general, a robot cannot assume arbitrary joint configurations when performing constrained motions. Instead, the robot must move within some manifold embedded in its configuration space that satisfies both the constraints of the task and the limits of the mechanism. To create plans for such constrained tasks, we have developed the Constrained Bi-directional Rapidly-exploring Random Tree (CBiRRT) motion planning algorithm, which uses Jacobian-based projection methods as well as efficient constraint checking to explore constraint manifolds in the robot's configuration space. CBiRRT can solve many problems that standard sampling-based planners, such as a generic RRT or Probabilistic Roadmaps (PRM), cannot. Our framework for handling constraints allows us to plan for manipulation tasks that were previously unachievable in the general case, such as solving complex puzzles and sliding and lifting heavy objects.

We have also developed a set-based representation to handle uncertainty: Task Space Regions (TSRs). TSRs allow the specification of continuous regions in the six-dimensional space of poses as goals for a manipulator's end-effector. They allow us to plan for manipulation tasks in the presence of pose uncertainty by ensuring that the given task requirements are satisfied for all hypotheses of the object's pose, and they provide a way to quickly reject tasks that cannot be guaranteed to be accomplished given the current pose uncertainty estimates.
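To make the two core operations concrete, the following minimal sketch (in Python, with hypothetical function arguments; it is not our released planner) shows Jacobian-based projection of a sampled configuration onto a constraint manifold, and a CBiRRT-style tree extension that keeps every stored node on that manifold. In the actual planner, the constraint error would measure, for example, how far the end-effector pose lies outside a TSR.

```python
# Sketch of CBiRRT-style constrained extension (hypothetical API, illustrative only).
import numpy as np

def project_to_manifold(q, constraint_error, constraint_jacobian,
                        tol=1e-3, max_iters=50):
    """Iteratively project configuration q onto the constraint manifold
    using the Jacobian pseudo-inverse (Newton steps in task space)."""
    q = q.copy()
    for _ in range(max_iters):
        err = constraint_error(q)          # e.g. deviation of end-effector pose from a TSR
        if np.linalg.norm(err) < tol:
            return q                       # q now satisfies the task constraint
        J = constraint_jacobian(q)         # d(err)/dq
        q = q - np.linalg.pinv(J) @ err    # task-space Newton step
    return None                            # projection failed; reject the sample

def constrained_extend(tree, q_target, constraint_error, constraint_jacobian,
                       step=0.05, collision_free=lambda q: True):
    """Grow one tree toward q_target, projecting every intermediate
    configuration back onto the constraint manifold (CBiRRT-style)."""
    q_old = min(tree, key=lambda q: np.linalg.norm(q - q_target))
    while np.linalg.norm(q_target - q_old) > step:
        direction = (q_target - q_old) / np.linalg.norm(q_target - q_old)
        q_new = project_to_manifold(q_old + step * direction,
                                    constraint_error, constraint_jacobian)
        if q_new is None or not collision_free(q_new) \
                or np.linalg.norm(q_new - q_old) < 1e-6:
            break                          # invalid node or no progress: stop extension
        tree.append(q_new)
        q_old = q_new
    return q_old
```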

This work has been published at the ICRA 2009, IROS 2009, and Humanoids 2009 conferences.

Figure MoMaT-2: Snapshots from example plans. (Left) Simultaneously opening a door and putting a bottle into a refrigerator. (Right) A closed-chain kinematic constraint for lifting a box.

Soft Interaction

The goal of Soft Interaction is to develop ways for robots to physically interact safely with humans. Many aspects of caregiving demand the ability to gently and safely manipulate humans, in particular to transfer between bed, wheelchair, toilet, and bathing area; likewise, several instrumental activities of daily living, such as feeding, dressing, grooming, and housework, involve physical interaction with the user. The requirements for a manipulation system that touches people are quite different from those for a system that handles only inanimate objects, which is why we distinguish between the two. Furthermore, soft interaction should be available from a mobile robot, not just from a device rigidly mounted to a floor or wall.

Soft manipulation is a transformative capability for safely interacting with people, and it is central to the Active Home and PerMMA systems. Mobile soft physical interaction with humans is a relatively undeveloped area. No current humanoid robots are fully back-drivable from any contact point, and those that implement soft physical interaction do so only at selected sites using localized force sensing. Soft manipulation of fragile humans is a new and challenging area of robotics that we are pioneering. In our discussions with practitioners, transfers have been identified as an important area for improved assistance because they are ubiquitous tasks and a cause of injuries to front-line providers; hence we expect our work to produce a range of soft and safe human-robot physical interaction techniques that will be of great benefit to member organizations.

A fundamental barrier is current actuation technology. It is difficult to generate large forces with low impedances and high speeds. Another fundamental barrier is the poor reliability of software. Hardware reliability is also an issue, but it is better understood than software reliability.

In the past year we developed approaches to automatically design force control algorithms for complex systems that must balance and exert large forces. Optimal control was used to develop controllers for expected load conditions, and a load estimator was developed to recognize loads and select the appropriate controller. We implemented this system on our Sarcos robot. We have also developed large-scale dynamic programming algorithms (a form of optimal control) that plan compliant behavior, and we are exploring the use of cluster supercomputers for this work. Papers on this work were presented at IROS 2009 and Humanoids 2009.
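As a concrete illustration of the dynamic programming approach, the sketch below runs value iteration on a toy one-degree-of-freedom joint with a torque-heavy cost, which yields compliant behavior. The dynamics, grid sizes, and cost weights are assumptions for illustration, not the cluster-scale implementation used on the Sarcos robot.

```python
# Toy value-iteration sketch for compliant control of a one-DOF joint
# (illustrative dynamics and grids; not the cluster-scale implementation described above).
import numpy as np

dt = 0.05
positions  = np.linspace(-1.0, 1.0, 21)   # joint angle grid (rad)
velocities = np.linspace(-2.0, 2.0, 21)   # joint velocity grid (rad/s)
torques    = np.linspace(-4.0, 4.0, 9)    # candidate motor torques (Nm)

def step(x, v, u):
    """Double-integrator joint model (unit inertia)."""
    return x + v * dt, v + u * dt

def cost(x, v, u):
    """Penalize torque heavily relative to tracking error to favor compliant behavior."""
    return (x**2 + 0.1 * v**2 + 1.0 * u**2) * dt

def nearest(grid, value):
    return int(np.argmin(np.abs(grid - value)))

V = np.zeros((len(positions), len(velocities)))   # value function over the state grid
for _ in range(60):                               # value-iteration sweeps
    V_new = np.empty_like(V)
    for i, x in enumerate(positions):
        for j, v in enumerate(velocities):
            V_new[i, j] = min(
                cost(x, v, u) + 0.95 * V[nearest(positions, xn), nearest(velocities, vn)]
                for u in torques
                for xn, vn in [step(x, v, u)]
            )
    V = V_new
# The greedy policy with respect to V gives a compliant torque command for each state.
```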

We developed a prototype multi-link inflatable robot arm, as well as a soft robot arm that controls its kinematic structure by creating buckling points. The characteristics of inflatable links were analyzed and force control experiments were conducted. Theoretical and experimental kinematics of a continuum robot were studied. To gain further insight into the design of robots that are safe for physical human-robot interaction (pHRI), integrated control-structure design was used, which helped evaluate the role of distributed compliance in safety and its influence on the performance of the system. Our research has shown that flexible links, which have so far been ignored in prior pHRI research, allow safe interaction while demonstrating good performance. A multi-link inflatable robot prototype was constructed using a novel mechanism to couple two links.

We also explored the use of variable-morphology structures for safe robots. The idea is that the robot changes its form (length of links, placement of joints, etc.) to adapt its safety characteristics (reflected mass, inertia, stiffness) according to the nature of its motion. We developed a system capable of controlling the location of a joint and the length of the link connected to that joint, using a flexible structure capable of buckling at controllable locations along its length (the Variable Buckling Hinge Mechanism). A paper on this work was presented at IROS 2009.
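For reference, the sketch below gives the standard constant-curvature forward-kinematics model commonly used for continuum arms. We note it only as illustrative background; the kinematic model actually fitted to our prototype may differ.

```python
# Constant-curvature forward kinematics for a single continuum section
# (standard textbook model; parameters are illustrative, not the prototype's).
import numpy as np

def continuum_section_pose(kappa, phi, length):
    """Homogeneous transform from base to tip of one section with
    curvature kappa (1/m), bending-plane angle phi (rad), and arc length (m)."""
    if abs(kappa) < 1e-9:                        # straight-section limit
        T = np.eye(4)
        T[2, 3] = length
        return T
    theta = kappa * length                       # total bend angle
    r = 1.0 / kappa                              # bend radius
    # Arc in the x-z plane: rotate about y by theta, translate along the arc.
    T_plane = np.array([
        [ np.cos(theta), 0.0, np.sin(theta), r * (1 - np.cos(theta))],
        [ 0.0,           1.0, 0.0,           0.0],
        [-np.sin(theta), 0.0, np.cos(theta), r * np.sin(theta)],
        [ 0.0,           0.0, 0.0,           1.0]])
    Rz = np.array([
        [np.cos(phi), -np.sin(phi), 0.0, 0.0],
        [np.sin(phi),  np.cos(phi), 0.0, 0.0],
        [0.0,          0.0,         1.0, 0.0],
        [0.0,          0.0,         0.0, 1.0]])
    # Conjugate by Rz(phi) to place the bending arc in the plane at angle phi.
    return Rz @ T_plane @ np.linalg.inv(Rz)
```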

We are investigating intuitive user interfaces for devices like the PerMMA, while ensuring safety for the user. We have created a skin that detects when any part of the robot contacts an individual or an object, which helps avoid injury or property damage. This skin is compatible with any robotic arm, which could enable powerful commercial robots to be utilized in QoLT applications. The skin enables a user to interact directly with the robot. When the user grasps the robot arm (Figure MoMaT-7), it is placed in a compliant mode in which it behaves as though it is light and has low friction. The user is thus able to position the robot arm to augment her strength or range of motion. When she releases the robot, it holds position, allowing her to adjust her grasp point. For example, she might position the robot to grasp a bowl, move it, and stabilize it while she adds and mixes ingredients. This might be an appropriate method of assistance for an individual with hemiplegia. Similarly, with this method of interaction, a powerful robot arm could enable an individual with weakened upper extremities to augment his or her strength, for instance to retrieve a heavy roasting pan from the oven and move it with relatively little exertion. This method of directly interacting with the robot is most appropriate for individuals who have a large range of motion in the upper extremities but whose upper-extremity pain or weakness, for example due to paraplegia or multiple sclerosis, limits their ability to manipulate objects. Because the user physically interacts directly with the robot, this interface is intuitive, requires little training, and places low cognitive demands on the user. A paper on this work was presented at the International Conference of the IEEE Engineering in Medicine and Biology Society.
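The interaction logic can be summarized by the sketch below, in which a skin-detected grasp switches the arm into a low-impedance, gravity-compensated mode and a release causes it to hold position. The robot and skin interfaces shown are hypothetical placeholders, not our actual APIs.

```python
# Sketch of the skin-driven direct-manipulation logic described above
# (hypothetical robot/skin interfaces; illustrative only).

class DirectManipulationInterface:
    def __init__(self, arm, skin, grasp_force_threshold=2.0):
        self.arm = arm                      # assumed to accept impedance-control commands
        self.skin = skin                    # assumed to report contact force and location
        self.grasp_force_threshold = grasp_force_threshold  # N, above which contact = user grasp
        self.compliant = False

    def update(self):
        """Called at the control rate: switch between compliant and hold modes."""
        contact = self.skin.read()          # e.g. {'force': float, 'location': ...} or None
        grasped = contact is not None and contact['force'] > self.grasp_force_threshold

        if grasped and not self.compliant:
            # User grabbed the arm: feel light and low-friction (gravity compensation
            # plus low joint stiffness/damping) so the user can reposition it freely.
            self.arm.set_impedance(stiffness=0.0, damping=0.1, gravity_comp=True)
            self.compliant = True
        elif not grasped and self.compliant:
            # User let go: hold the current pose so they can re-grasp elsewhere.
            self.arm.hold_position(stiffness=1.0)
            self.compliant = False
```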

Plans For The Future

In terms of robot behavior planning, the next steps are to handle lower quality data, less prior knowledge, environments that are more dynamic, and deformable objects. We will develop planning algorithms for multiple assistive systems to work together, such as supporting 2-arm tasks on HERB2.0 and the PerMMA device. In terms of compliant interaction, our next goal is to develop transfer devices that can be easily controlled by an older adult or attendant.

In terms of mobile manipulation, a major question we will focus on in the next few years is "How much information do we really need to manipulate an object?" We want to be able to handle objects and environments we have not modeled in advance, and to refine incorrect or partial models. We also want to develop manipulation techniques that do not critically depend on accurate models; we call these "model-reduced" manipulation strategies. As an example of how we will reduce the need for complete knowledge, consider incremental viewing. We have obtained promising preliminary results on next-best-view planning using cameras mounted on the palm of the robot's hand. Instead of taking multiple views of the entire workspace to remove all uncertainty, we intend to bootstrap our perception algorithm with our grasp-planning algorithm, removing just enough uncertainty to be able to successfully grasp the object. Metrics that focus on graspability, rather than on reconstruction, will be crucial for successful manipulation under imperfect information.
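A minimal sketch of the grasp-driven view-selection idea follows. The helper objects and their methods are hypothetical; the sketch is intended only to show how graspability, rather than full reconstruction, can drive the choice of the next view.

```python
# Sketch of grasp-driven next-best-view selection (hypothetical helper objects;
# illustrative of the idea, not our implemented perception/grasping pipeline).

def plan_next_view(candidate_views, pose_hypotheses, grasp_planner,
                   graspability_threshold=0.95):
    """Choose the palm-camera view expected to raise graspability the most,
    stopping once a single grasp is predicted to work for the surviving hypotheses."""
    best_view, best_score = None, -1.0
    for view in candidate_views:
        # Estimate which object-pose hypotheses this view would leave plausible.
        surviving = [h for h in pose_hypotheses if h.would_survive(view)]
        # Graspability: fraction of surviving hypotheses covered by the best single grasp.
        score = grasp_planner.best_grasp_coverage(surviving)
        if score > best_score:
            best_view, best_score = view, score
    enough_certainty = best_score >= graspability_threshold  # good enough to grasp now
    return best_view, enough_certainty
```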

We will also extend our model-reduced manipulation techniques so we can more effectively manipulate deformable objects. These model-reduced manipulation techniques will make use of behavioral primitives, such as the caging grasp we have already developed. One can push open a door with a finger, given the dynamics of the task: one can rely on the hinges to guide the door and on friction to stop its movement. Wiping a counter with a rag is best described as a behavioral primitive that implements a particular control policy, rather than as a planning problem that has to locate all parts of the rag.
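As an illustration of what such a behavioral primitive might look like, the sketch below expresses door pushing as a guarded, force-limited control policy rather than a planned trajectory. The robot interface is hypothetical.

```python
# Sketch of a "push the door open" behavioral primitive as a control policy
# (hypothetical robot interface; the hinge and friction do most of the work).

def push_door_open(robot, push_direction, max_force=15.0, opened_angle=1.2):
    """Advance the end-effector along push_direction with a force cap,
    relying on the hinge to guide the door and friction to stop it."""
    robot.set_force_limit(max_force)             # stay gentle: cap contact force
    while robot.estimated_door_angle() < opened_angle:
        robot.move_ee_incremental(push_direction, step=0.01)  # small compliant step
        if robot.contact_force() > max_force:    # unexpected resistance: stop safely
            robot.stop()
            return False
    robot.stop()
    return True
```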

Another major issue that will dominate our work in the next few years is performing tasks in a dynamic environment. In our current work, the environment is static and the robot stops to manipulate. In everyday homes, people are always on the move, and objects move on their own or are moved by other agents (humans or other robots). For a robot to perform useful tasks in such an environment, it needs to possess a deeper semantic understanding of the obstacles it sees in the world. This capability is crucial for safe and meaningful interaction in a human environment.

We will also focus on multi-arm manipulation, including the use of more than one robot arm as well as human-robot cooperation, in which the human helps the robot by holding, pushing, or manipulating part of the task. To that end, we are generalizing our existing manipulation planning algorithms to multiple arms. We have already obtained promising results for the first case, extending our constraint planner to plan for a humanoid robot manipulating a heavy box with two arms while simultaneously maintaining balance.

In the next few years we will consider a number of paradigms to assist transfer. The first paradigm is a device with human-like arms that manipulates a human in a way similar to how another human would. Our Sarcos Primus Humanoid will serve as an experimental tool for this approach. We will also consider other paradigms. One paradigm is to use inflatable actuation. Inflatable actuation can be very powerful if it is interposed between a support surface and the object to be lifted. For example, inflatable actuation is used to lift cars. Compartments and controllable inflation can be used to achieve desired force distributions and avoid focused pressure points. Strain gages printed on the inflatable surface can be used to monitor contact forces. We are currently working with Bayer Material Sciences to explore appropriate materials for durability and sensing; that work includes a new associated project to develop pressure sensitive material based on carbon nanotubes for QoLT applications ranging from wheelchair seats to robot skins. Inflatable actuators can be very cheap and widely distributed. They can be built into beds and chairs, they can be inserted by an active system, or they can self-insert where needed. We will also continue to explore new interfaces to control force-based manipulation of humans. We have discussed direct manipulation and teleoperation style interfaces. We will also explore coaching based interfaces, in which the controlling user is able to give advice to a transfer system much like a human would train another human.
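As a back-of-the-envelope illustration of compartmentalized inflation, the sketch below computes the gauge pressure each compartment would need to supply a desired supporting force, under a quasi-static force = pressure * contact-area assumption. The compartment areas and target forces are made-up numbers for illustration only.

```python
# Toy force-distribution sketch for compartmentalized inflatable actuation
# (illustrative numbers; assumes quasi-static support where force = pressure * contact area).
compartments = {                      # contact area of each compartment, m^2 (assumed)
    "shoulders": 0.06,
    "torso":     0.12,
    "hips":      0.10,
    "legs":      0.08,
}
desired_force = {                     # desired supporting force per compartment, N (assumed)
    "shoulders": 120.0,
    "torso":     300.0,
    "hips":      280.0,
    "legs":      150.0,
}

for name, area in compartments.items():
    pressure_pa = desired_force[name] / area        # p = F / A
    print(f"{name}: {pressure_pa / 1000:.1f} kPa")  # inflate to this gauge pressure
```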