
Issues with Real-Time Sensing and Control Systems in Real-Time Robotic Applications

By Steve Harvey

SE 545 Specification and Design of Real-Time Systems

Dr. Kornecki

Individual Research Paper Assignment

October 31, 2007

Abstract

Sensing and control is a fundamental part of all robotic systems. In most cases these embedded sensing and control systems must operate in a real-time environment. By examining how these systems are designed and implemented, we can identify design characteristics that can be reapplied in the future development of any real-time sensing and control system to improve reliability and safety. These characteristics include functional and non-functional constraints from the high level down to the low level. In addition, decomposing complex control systems can decrease complexity during design and implementation, helping to ensure that the implementation meets its required deadlines.

1. Introduction and Objective

With the advances in machine automation in business operations, most notably in manufacturing processes, more and more robotic systems are being developed and deployed. In industry, the main goal of these robots is to make money. To ensure this, they are carefully designed to perform a closed set of necessary tasks with controllable, predictable, and reliable outcomes. All of these robotic systems are essentially an embedded set of sensors and actuators with minimal processing, related only to the small set of tasks they are required to perform. The most important characteristic of these systems is their temporal determinism. In order to guarantee that their operations are effective and predictable, they must adhere to a set of timing constraints. These constraints can be defined from the hardware level all the way up to the overlying tasks. No matter where they are explicitly defined, timing characteristics are critical at all levels and therefore must be understood at each one. We shall examine the role that sensors, actuators, data processing, and fault tolerance play in the real-time characteristics of embedded robotic design, as well as the issues involved. When these elements are controlled and monitored to real-time requirements, the high-level tasks built on them can ultimately be depended upon to minimize cost. This means that business resources are reduced, embedded resources are reduced, and the risk of loss of life or property is reduced as well.

2. Embedded Sensing and Control

In the area of sensing and control, the majority of the responsibility lies in sensing. As robots perform their defined set of operations, they rely on their sensors at many levels. The initial purpose of the sensors is to sense the environment or a particular stimulus. In many cases this occurs amid unknown conditions. Due to the non-ideality of the world, sensor readings are uncertain. This is mostly because internal and external sources add noise to the readings. There is also the case where unstable data can cause a malfunction of the sensor altogether. No sensor will deliver accurate information at all times. Inconsistent sensor readings can also be the result of a system or process failure. More information on sensor validation and fault tolerance is covered later in this paper.

By using multiple sensors, the robot's perception gains fidelity, readings can be filtered across multiple dimensions, and reliable data can be polled. Buttazzo notes that by using multiple sensors, several different properties can be extracted from an explored object, and the probability of a correct recognition increases substantially. These properties may include geometric features (such as shape, contours, holes, edges, protruding regions), mechanical characteristics (such as hardness, flexibility, elasticity), or thermal properties (such as temperature, thermal conductivity). [2]

Agogino notes that the potential advantages of utilizing multisensory information can be decomposed into a combination of four fundamental aspects:

1. Redundancy: reduced uncertainty and increased reliability in case of sensor error or failure (a small voting sketch follows this list).

2. Complementary: multiple sensors allow features in the environment to be perceived that are impossible to perceive using just the information from each individual sensor operating separately.

3. Timeliness: more timely information as compared to the speed at which it would be provided by a single sensor, due to either the actual speed of operation of each sensor or the processing parallelism that may be achieved as part of the integration process.

4. Less Costly Information: in the context of a system with multiple sensors, information is obtained at a lesser cost when compared to equivalent information obtained from a single sensor. [1]
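To make the redundancy aspect concrete, the following is a minimal sketch in C of a triple-modular voter: three redundant sensors are polled and the median reading is taken, so a single faulty sensor cannot corrupt the result. The sensor values and names here are hypothetical, not taken from [1].

    #include <stdio.h>

    /* Return the median of three readings; one outlier (a failed
     * sensor) cannot pull the result away from the two agreeing values. */
    static double median3(double a, double b, double c)
    {
        if ((a >= b && a <= c) || (a >= c && a <= b)) return a;
        if ((b >= a && b <= c) || (b >= c && b <= a)) return b;
        return c;
    }

    int main(void)
    {
        /* Hypothetical readings: the second sensor has failed high. */
        double s0 = 20.1, s1 = 97.3, s2 = 19.8;
        printf("voted reading: %.1f\n", median3(s0, s1, s2));
        return 0;
    }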

Beyond environmental or external unknowns, sensors can be applied to actuators to determine control accuracy or error. Reading signals and processing sensor data is crucial to robotic operation; however, in most cases this is not enough. In most cases sensors have to be adaptive and intelligent, taking sensory data one step further. This requires that sensory data be analyzed to ultimately create a symbolic representation of the sensed environment through classification of details and recognition of objects. We call this complex activity a "perception process". Depending on whether the perception process is, or is not, strictly related to a motor activity, we distinguish between passive and active perception. As we shall see later, the use of passive or active perception makes a great deal of difference in terms of real-time processing requirements. [2] Finally, it is noted that there is no deterministic relationship between the sensor readings and the stimuli being monitored.

2.1 Passive Perception

In the case of passive perception, sensors are statically fixed, with no feedback loops between sensing and control. The data is read, processed, and perceived, and a predetermined trajectory is fed to the actuators. Once the control task is issued, it is not interrupted or adjusted until the task is completed. An example of this would be the case of stationary object picking. In this case a fixed camera would capture imagery, the object would be recognized, and its location would be determined. This location data would then be translated into a robotic arm trajectory to pick the object up. Since the movement of the arm does not need modification from its original plan, there is no need for real-time processing.
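A minimal sketch of this sequential structure, with hypothetical stubs standing in for the vision and arm-control modules: sense, plan, and act run strictly in order, and once the trajectory is issued no sensor feedback modifies it.

    #include <stdio.h>

    typedef struct { double x, y; } location_t;

    /* Hypothetical stubs for the vision and arm-control modules. */
    static location_t locate_object(void)
    {
        location_t loc = { 0.30, 0.12 };   /* from one captured image */
        return loc;
    }

    static void plan_trajectory(location_t goal)
    {
        printf("planning arm path to (%.2f, %.2f)\n", goal.x, goal.y);
    }

    static void execute_blind_motion(void)
    {
        printf("executing stored trajectory to completion\n");
    }

    int main(void)
    {
        location_t goal = locate_object();  /* sense once   */
        plan_trajectory(goal);              /* plan once    */
        execute_blind_motion();             /* act, no loop */
        return 0;
    }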

2.2 Active Perception

In the case of active perception, the sensing is dynamic, adding movement to increase or drive sensory data. This perception process involves tying sensing and control together for more complex tasks in unstructured environments, like searching or probing as opposed to just seeing and touching. Sensors are often mounted on actuators and are used by the robot system to probe the environment and continuously adjust fine movements based on actual data. Active perception is a problem of intelligent control strategies applied to data acquisition processes, which depend on the current state of the data interpretation [2]. An example of this would be the case of object following. In this case one or more cameras would capture a series of images to detect the object as well as its speed and direction. This data would be processed to calculate a route to the object; however, this loop would repeat continuously while the control was operating, readjusting the actuators as new data arrived.
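The object-following case might be sketched as the closed loop below: each cycle re-reads the (simulated) object position and corrects the actuator toward it. The target motion and proportional gain are made-up values for illustration.

    #include <stdio.h>

    int main(void)
    {
        double target = 1.00;      /* object position, as a camera would report it */
        double arm    = 0.00;      /* current actuator position                    */
        const double gain = 0.5;   /* proportional correction applied per cycle    */

        for (int cycle = 0; cycle < 10; cycle++) {
            target += 0.05;                /* the object keeps moving            */
            double error = target - arm;   /* perceive: where is the object now? */
            arm += gain * error;           /* act: correct toward the new target */
            printf("cycle %d: target=%.2f arm=%.2f\n", cycle, target, arm);
        }
        return 0;
    }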

Figure 1. Difference between passive (a) and active (b) perception, in terms of control structures and processing requirements. [2]

Figure 1 schematically illustrates the difference between passive and active perception in terms of control structures and data processing requirements. As we can see, the difference between the two processes causes a radical change in the system architecture. In active perception, the influence that the actuator movements have on the sensor responses forces the system to react in real-time, causing the control architecture to be hierarchically organized in a multilevel structure of feedback loops. In general, to support a wide range of sensory-motor capabilities, ranging from low level reactions to complex exploratory procedures, the system architecture must be able to handle hierarchical control loops operating at different frequencies. Efficient and time bounded communication mechanisms are also required to close real-time control loops at each level of the hierarchy, including short range reflex arcs, for effective support of guarded movements. [2]
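One way to picture loops operating at different frequencies is the rate-divided scheme sketched below: a fast reflex step runs on every tick of the base rate, while a slower planning step runs on every tenth tick. The rates and handler names are illustrative assumptions, not taken from [2].

    /* Two hierarchical loops driven from one 1 kHz base rate; in a real
     * system each tick would be released by a hardware timer. */
    static void reflex_step(void)   { /* e.g., guard against excess contact force */ }
    static void planning_step(void) { /* e.g., update the exploratory path        */ }

    int main(void)
    {
        for (unsigned tick = 0; tick < 1000; tick++) {
            reflex_step();            /* inner loop: every 1 ms  */
            if (tick % 10 == 0)
                planning_step();      /* outer loop: every 10 ms */
        }
        return 0;
    }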

3. The Need For Real-Time

It is not always clear whether real-time computing is required to achieve adequate timing behavior in control application development. Therefore, a crucial question that one should keep in mind when developing a control task is whether the application requires time constraints. Unfortunately, answering this question is not always obvious. In fact, there are control applications in which the goal is specified in terms of explicit time requirements, but task execution does not need real-time support.

Imagine an example similar to the passive perception case of stationary object picking, only now there is a set of stationary objects that must be sorted. Suppose that each object has a firm deadline within which it has to be sorted. Decomposing the sorting operation into object recognition, action planning, and robot control helps us understand how the timing constraint fits in. Once the objects are recognized and their deadlines are derived, the purpose of the planning task is to construct a sequence of actions so that each object is sorted by its deadline. The action plan is the obvious time-constrained element, in which picking the object and placing it in its proper location is time critical. However, once the plan is completed, the robot trajectory is determined, and the arm can start its blind motion in a table-driven fashion. The deadlines associated with the objects therefore do not impose any time constraints on the execution of the robot tasks. Meeting the deadlines depends only on the action plan and on the robot speed, which must be known in advance. Because the processing structure of the sorting application is similar to the typical scheme of passive perception, where sensing, planning, and control are separated, it is easy to recognize that each sub-operation can be executed sequentially. Essentially, since there is no need for feedback in the control operation, there is no need for real-time requirements.

There are many cases in robotics applications where timing is not explicitly defined in the requirements, but real-time computing is needed. Consider, for example, a polishing operation, in which a robot arm has to buff the surface of an object with a grinding tool mounted on its side. This task may be specified with a requirement for constant speed alongside the object and a requirement that the force exerted stay within a certain range. In order to maintain a constant contact force against the object surface, the robot must be equipped with a force sensor near the buffing tool. Moreover, to keep the normal force within the specified maximum value, the force sensor must be sampled periodically at a constant frequency, which depends on the environment characteristics and on the task requirements. Since the robot's route is unknown beforehand, the robot must be designed to correct its movement at each step to ensure that its contact force stays within its specified range. While time constraints are not explicitly given, they must be imposed on the tasks' execution to guarantee that the application requirements are met.
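A sketch of how such an implicit constraint surfaces in code: the force sensor is sampled at a fixed period using an absolute-time sleep (POSIX clock_nanosleep) so the sampling rate does not drift, even though no deadline appears in the task specification. The 1 kHz rate, the read_force() stub, and the 5 N limit are assumptions for illustration.

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define PERIOD_NS 1000000L            /* 1 ms period -> 1 kHz sampling */

    static double read_force(void) { return 4.9; }   /* hypothetical sensor stub     */
    static void   correct_arm(void) { }              /* hypothetical trajectory fix  */

    int main(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 1000; i++) {
            if (read_force() > 5.0)       /* normal force out of range? */
                correct_arm();            /* adjust movement this step  */

            /* Advance the release time by one period and sleep until that
             * absolute instant, avoiding drift from processing jitter. */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }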

4. Decomposition of Sensing and Control Tasks

In many robotic applications, sensor-control loops can become overwhelmingly complex, making real-time control strategies difficult for developers to embrace and understand. Given the decision to utilize a real-time kernel for the development of embedded robotic sensing and control systems, it is recommended that all tasks be decomposed at several levels so real-time control strategies can be tackled manageably and flexibly. The following proposes a hierarchical programming environment that simplifies sensing and control tasks into conceptual robotic system capabilities.

Figure 2. Hierarchical software environment for programming complex robotic applications. [2]

As shown in Figure 2, the control architecture is organized in a hierarchical structure of layers, each of which provides the robot system with new functions and more sophisticated capabilities. The importance of this approach is not simply that one can divide the program into parts; rather, it is crucial that each procedure accomplishes an identifiable task that can be used as a building block in defining other procedures. [2]

At the Device level, all low-level I/O operations and hardware management are defined in separate modules. By defining a library of basic hardware control and interaction, we can manage each device, and any constraints requiring specific understanding of that peripheral, separately. These operations would include reading data from each sensor, engaging or disengaging actuators, and driving output displays.
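As a sketch of what a Device-level module boundary might look like in C, each peripheral could be wrapped in a small interface of function pointers, so the higher layers never touch registers or bus protocols directly. The interface shape and the force-sensor stub below are assumptions, not an API from [2].

    #include <stdio.h>

    /* A generic Device-level interface: initialize the peripheral and
     * read one calibrated sample. Higher layers see only this contract. */
    typedef struct {
        int    (*init)(void);
        double (*read)(void);
    } sensor_dev_t;

    /* Hypothetical driver for a wrist force sensor implementing it. */
    static int    force_init(void) { return 0; }     /* would configure the ADC */
    static double force_read(void) { return 4.9; }   /* would scale raw counts  */

    static const sensor_dev_t force_sensor = { force_init, force_read };

    int main(void)
    {
        if (force_sensor.init() == 0)
            printf("force = %.1f N\n", force_sensor.read());
        return 0;
    }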

At the next level, called the Behavior level, we implement a collection of all sensor-based control strategies. This defines second-order robot behavior. While still fairly low level, this combines some Device-level functionality into manageable chunks. This level holds the closed real-time control loops essential for executing autonomous tasks in unknown conditions, which serve as building blocks for more skilled actions at the next level. Examples of Behavior-level functionality include planned trajectories or force application that can be modified on-line by sensor feedback, or hybrid control schemes.
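A Behavior-level primitive might then look like the guarded move sketched below: Device-level calls (read the contact force, step the actuator) are combined into one closed sense-act loop that stops on contact. The names, threshold, and simulated force ramp are illustrative.

    #include <stdio.h>

    /* Hypothetical Device-level calls; the force rises as the arm advances. */
    static double dev_read_force(void) { static double f = 0.0; return f += 0.7; }
    static void   dev_step_arm(void)   { printf("arm advances one step\n"); }

    /* Behavior: advance until contact force reaches the limit, then stop.
     * Unlike a blind motion, every iteration closes a sensing loop. */
    static void guarded_move(double max_force)
    {
        while (dev_read_force() < max_force)
            dev_step_arm();
    }

    int main(void)
    {
        guarded_move(5.0);   /* stop at roughly 5 N of contact force */
        return 0;
    }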

With the low-level skills of the robot defined, the next level of the hierarchy enhances capability with sophisticated sensory-control activities at the Action level. The goal of Action-level functionality is to provide all complex sense-and-control capabilities while remaining a level of abstraction above the specific tasks required of the system. This level is essentially the highest-level programming interface for complex tasks in unstructured environments. Developed actions may include contour following, reflexive obstacle avoidance, visual tracking, or object following. Using Device-level sensory information directly and/or combining Behavior-level modules allows many different actions to be implemented easily.

The highest level, the Application level, is reserved for grouping and specializing robot actions to accomplish the required application tasks. While these tasks are sophisticated and complex in terms of control, their implementation is now simple because all sub-task primitives have already been clearly defined and implemented at the lower levels of the hierarchical control structure. This level might contain functionality like mechanical parts assembly, exploring unknown objects, or handling delicate materials.
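To show how the full hierarchy composes, the sketch below has an Application-level task call an Action, which in turn sequences Behavior-level primitives; every identifier is illustrative rather than an interface from [2].

    #include <stdio.h>

    /* Behavior level: closed-loop primitives (bodies elided to prints). */
    static void behavior_guarded_move(void)  { printf("  behavior: guarded move until contact\n"); }
    static void behavior_track_contour(void) { printf("  behavior: track contour with force feedback\n"); }

    /* Action level: a sensory-control activity built from behaviors. */
    static void action_follow_contour(void)
    {
        behavior_guarded_move();    /* approach until the surface is touched */
        behavior_track_contour();   /* then follow its edge                  */
    }

    /* Application level: the task the system was actually built for. */
    static void application_explore_object(void)
    {
        printf("application: explore unknown object\n");
        action_follow_contour();
    }

    int main(void)
    {
        application_explore_object();
        return 0;
    }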