Medium-Sized Mobile Robot for Visitor Guidance

Alex McMahon

MEng Cybernetics

Abstract

Mobile autonomous robots are a topic of regular interest in the field of cybernetics. This paper discusses the development of a robot intended to guide visitors around sites of interest, in particular giving guided tours of the Department of Cybernetics. The robot makes use of several technologies: sonar for mapping and obstacle avoidance, a CANbus network for distributed processing, an on-board PC for mapping, navigation and user interfacing, and a beacon identification system for identifying locations and re-localising the robot's position. The paper discusses the design decisions and the various aspects of the final design. As this is a group project, particular attention is given to the author's main area of responsibility, namely frontier-based exploration and mapping.

1.  Introduction

Robots, of a sort, have been used in many applications for some time now; where the environment is relatively simple and the need for 'intelligence' is small, a robot can save both time and money by replacing humans (on factory production lines, for example).

In dynamic or more complex situations the use of robotics has had less impact. There has been research into individual aspects such as reactive behaviour, mapping methods, fault-tolerant systems and user interfaces; however, this is still not a commercially viable technology and most systems have only limited scope.

The aim of the project is to bring together several fields of research and construct a user-friendly and useful robot, with the possibility of a commercial application. The project is run in an industrial-style situation with the academic staff acting as the customer: there is a given budget, there are administrative responsibilities, and the project must follow a specific plan and meet various deadlines. The project work is split between a five-person team, with each member responsible for different aspects of the design and construction.

Figure 1. Conceptual Robot Design

2.  The Level of AI

There are, broadly speaking, two schools of thought on the topic of AI. The classical AI approach is based on the 'Symbol System Hypothesis', where many modules need to be combined to generate any sort of behaviour. The nouvelle AI approach, based on the physical grounding hypothesis, builds up an AI system from independent modules, each of which displays simple behaviour, without the use of symbolic data [1].

Classical and nouvelle AI have each been applied, with varying degrees of success, to many applications, with each seeming better suited to different fields. There has also been work to combine the two approaches, so that the robot has simple reactive behaviour combined with a world model; this has particularly been used for anticipatory agents [2] in applications such as sport-playing robots, where a prediction needs to be made.

There is clearly a degree of uncertainty as to which method of AI is the most likely to succeed, and many projects do indeed take inspiration from both arguments. In terms of this project there have been numerous decisions as to how autonomous the robot should be, and how much information we should provide it with, thereby restricting the potential domain of the robot. It would have been possible, for example, to make a robot which did not build its own map and simply followed a series of beacons around a set path. However, it was decided that the robot should be as autonomous as possible, with the only external aid being the use of beacons for location identification.

3.  Sensors and Actuators

Any system can be separated into its inputs, processes and outputs; in the case of the robot the inputs are provided by various sensors, and the outputs are enacted using actuators.

3.1.  Short-Medium Range Sensing

Figure 2. Sonar Layout

The sensor used for short- to medium-range sensing is a sonar time-of-flight sensor. This works by transmitting a sonic pulse, which is reflected from the obstacles in the pulse's path and detected by the sonar receiver; the time between transmission and detection gives the distance to the obstacle. The sonar layout can be seen in Figure 2. The bottom layer has sonars pointing in perpendicular directions; these will be the main sensors used for map construction. The top layer adds sonars at the areas poorly covered by the bottom layer, which should result in better obstacle avoidance.
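As a rough illustration, the time-of-flight conversion amounts to a single line of arithmetic; the constant and function name below are illustrative, not taken from the robot's code:

// Convert a sonar echo time into an obstacle distance. The pulse
// travels to the obstacle and back, hence the division by two.
const double SPEED_OF_SOUND = 343.0; // m/s in air at roughly 20 °C

double echoTimeToDistance(double echoTimeSeconds)
{
    return SPEED_OF_SOUND * echoTimeSeconds / 2.0;
}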

3.2.  Tactile Sensing

If the short-range sensors fail to detect an obstacle and a collision occurs, the robot needs to be able to recognise this and stop the motors; this is both a safety and a mapping consideration. The bump sensors use micro-switches behind spring-loaded shields which skirt the base of the robot. The switches are wired to a custom-designed and custom-built PCB carrying a microcontroller (PIC18F248) and a CAN transceiver; if any of the switches is triggered, a message is sent over the CANbus to stop the motors.
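A minimal sketch of the bump-handler logic follows; the message ID and the hardware-access functions are hypothetical stand-ins for the PIC's actual CAN and I/O routines:

#include <cstdint>

// Hardware-access stubs: the real implementations depend on the
// PIC18F248 peripherals, so they are only declared here.
bool anySwitchClosed();
void canSend(uint16_t id, const uint8_t* data, uint8_t length);

// On a CANbus a lower identifier wins arbitration, so the stop message
// is given a very low (i.e. high-priority) ID; the value is illustrative.
const uint16_t MSG_EMERGENCY_STOP = 0x001;

void pollBumpSwitches()
{
    for (;;) {
        if (anySwitchClosed())
            canSend(MSG_EMERGENCY_STOP, nullptr, 0); // zero-byte payload
    }
}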

3.3.  Motor Encoders

So that the robot knows how far it has travelled, encoders are attached to the drive motors; these detect how far the motors have turned and allow the distance travelled to be calculated.
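The conversion from encoder counts to distance is straightforward; the constants below are placeholders rather than the robot's actual gearing:

// Distance travelled by one wheel for a given encoder count.
// Both constants are placeholders, not the robot's actual values.
const double COUNTS_PER_REV   = 500.0;  // encoder counts per wheel revolution
const double WHEEL_DIAMETER_M = 0.15;   // wheel diameter in metres
const double PI = 3.14159265358979;

double countsToDistance(long counts)
{
    return (counts / COUNTS_PER_REV) * PI * WHEEL_DIAMETER_M;
}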

3.4.  Beacon Detection

The use of encoders for position estimation will accumulate errors, as there will always be wheel slippage and mechanical tolerances in the drive train. To overcome these problems, uniquely identifiable beacons will be used so that the robot can re-localise whenever it detects a beacon it has already seen. To detect the beacons a simple USB camera will be used.

3.5.  User Interaction

As the robot needs to interact with a range of different people, the user interface is of utmost importance, and the method of interacting with the robot needs to be simple. To avoid traditional computer interface devices such as a mouse or keyboard, a touch-screen interface has been chosen; this means the interface can be used directly, resulting in a more transparent interface [3].

3.6.  Motors and Motor Drivers

Figure 3. Motor and Wheel layout

The output of the system will be through two motors in a differential-drive setup (Figure 3); this allows the robot to turn on the spot, which improves the reliability of the mapping process. To control the motors locally a motor driver board is used, which is also connected to the CANbus network.
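For reference, the standard differential-drive kinematics are sketched below; the wheel separation is an illustrative value, and setting the wheel speeds equal and opposite gives the on-the-spot turn:

// Standard differential-drive kinematics; the wheel separation is an
// illustrative value, not a measurement of this robot.
const double WHEEL_SEPARATION_M = 0.4;

// Forward speed of the robot's centre (m/s).
double forwardSpeed(double vLeft, double vRight)
{
    return (vLeft + vRight) / 2.0;
}

// Turn rate (rad/s); with vLeft = -vRight the robot turns on the spot.
double turnRate(double vLeft, double vRight)
{
    return (vRight - vLeft) / WHEEL_SEPARATION_M;
}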

4.  Parallel Distributed Processing

4.1.  Why use Parallel Processing?

There are several distinct tasks which the robot needs to perform, some of which require a relatively large amount of processing. A single processor could be used to perform all these tasks; however, this could cause delays in key tasks and potentially cause safety issues. It was therefore decided that there should be dedicated processors for many tasks, performing the local processing in parallel and then communicating using commands and variables only.

As well as being more economical in both time and money, this layout can result in a more fault-tolerant system; if a single task (such as mapping) fails, the rest of the system can continue to function in a diminished capacity, rather than failing completely.

4.2.  Network Architecture

The network architecture chosen is a Controller Area Network (CAN) [4]. Originally developed by Bosch for in-car communications, the CAN protocol is now used in several applications where sensors, actuators and 'intelligent devices' need to communicate.

On a CANbus network each device has access to a common bus. Messages consist of a header and the message itself; the header is used to identify either where the message is sent from or where it is being sent to. Each device can be set up so that only messages with certain headers are received.
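A sketch of this acceptance filtering is given below, assuming a simplified frame layout; the field names, filter and mask values are illustrative rather than the project's actual configuration:

#include <cstdint>

// Simplified CAN frame: an 11-bit identifier (the header) plus up to
// eight data bytes.
struct CanFrame {
    uint16_t id;       // identifies the sender or the intended receiver
    uint8_t  length;   // number of valid data bytes (0..8)
    uint8_t  data[8];
};

// Acceptance filtering: a device keeps only those frames whose header
// matches its configured filter under the mask.
bool accept(const CanFrame& frame, uint16_t filter, uint16_t mask)
{
    return (frame.id & mask) == (filter & mask);
}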

Figure 4. CANbus Network Layout

Each device uses a CAN transceiver, which connects to the two bus lines; by taking two taps from each line, a connection to two other devices can be made, so that even if the device fails the connection between the adjacent devices still exists. Figure 4 shows how the devices are connected over the CANbus network; note that the CANBridge is simply a small device which provides the link between the PC and the CANbus network.

5.  Mapping

As the robot moves around its environment it builds up a map; this means that the robot is able to react to new and unexplored environments. The map is used both to allow navigation from one point to another and for display to the visitor. The robot has two main modes: in mapping mode it explores the environment using the method detailed in section 5.2, and in tour mode the stored map is used to navigate from one point to another.

5.1.  Map Structure

There are two main types of map used in robotic applications: topological and grid-based. A topological map describes the connections between points but gives no indication of distances; a grid-based map, in contrast, explicitly represents distance by dividing the environment into a grid of cells.

The map structure used for the project is a grid-based approach, where each cell is linked to each of its four neighbours by a 'risk' value: a figure representing the likelihood that travel between the two cells is blocked. The cells are stored as a linked-list data structure; this means that, potentially, an environment of any size could be mapped, from any starting position. A static array of a very large size could have been used, starting in the middle; however, this would be less efficient and would require extra programming so that an exhaustive search of the cells did not check unexplored cells.

Each cell consists of a position (x and y values), an array of pointers to adjacent cells and an array of risks, both indexed by direction; each cell can also potentially contain a beacon.
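A cell along these lines might be declared as follows (a sketch in C++; the names are illustrative, not taken from the project code):

// Directions index both the neighbour pointers and the per-link risks.
enum Direction { NORTH, EAST, SOUTH, WEST, NUM_DIRECTIONS };

struct Beacon; // beacon details omitted

struct Cell {
    int     x, y;                       // grid position
    Cell*   neighbour[NUM_DIRECTIONS];  // adjacent cells (null if unexplored)
    double  risk[NUM_DIRECTIONS];       // likelihood each link is blocked
    Beacon* beacon;                     // null if no beacon in this cell
};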

5.2.  Frontier Based Exploration

As the robot is not given any predefined map, it needs to explore its environment. There are several possible exploration algorithms; the aim is to explore the environment as quickly as possible without the algorithm being overly complex.

The chosen exploration method is 'frontier-based exploration'. The main idea behind this method is “To gain the most new information about the world, move to the boundary between open space and uncharted territory.” [5]

Figure 5. Frontier detection: (a) evidence grid, (b) frontier edge segments, (c) frontier regions [5]

In Figure 5(a) white cells are those of low risk, dark cells are those of high risk, and light grey cells are those of unknown risk. An edge-detection algorithm is then applied, searching for boundaries between empty space and unexplored space (b). Finally, the edge segments are separated into distinct frontiers, each of which must consist of a minimum number of cells (c).
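A minimal sketch of the frontier-edge test is given below, assuming a flat array of three-state cells; the representation and names are illustrative, not taken from the project code:

enum CellState { OPEN, OCCUPIED, UNKNOWN };

// Look up a cell's state; out-of-range cells count as unexplored.
CellState stateAt(const CellState* grid, int w, int h, int x, int y)
{
    if (x < 0 || y < 0 || x >= w || y >= h) return UNKNOWN;
    return grid[y * w + x];
}

// A cell lies on a frontier edge if it is open space with at least one
// unexplored 4-neighbour. Grouping adjacent edge cells into regions and
// discarding regions below a minimum size then yields the frontiers.
bool isFrontierEdge(const CellState* grid, int w, int h, int x, int y)
{
    if (stateAt(grid, w, h, x, y) != OPEN) return false;
    return stateAt(grid, w, h, x + 1, y) == UNKNOWN
        || stateAt(grid, w, h, x - 1, y) == UNKNOWN
        || stateAt(grid, w, h, x, y + 1) == UNKNOWN
        || stateAt(grid, w, h, x, y - 1) == UNKNOWN;
}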

Once the frontiers have been detected, the method in [5] drives the robot to the centre of the nearest accessible frontier and performs a 360° sonar sweep. As this robot cannot perform a 360° sonar sweep, an alternative method has been chosen: the robot navigates to the nearest end of a frontier and then traces the frontier using the side-mounted sonar. Once the frontier has been fully traced, the frontier detection process is repeated.

5.3.  Navigation

Once the environment has been explored the robot can be used in tour mode; this allows the robot to navigate from its current position to another point in the environment, with the aim of getting from one point to the other in the minimum amount of time with the least likelihood of obstruction.

The method used for navigation is a variation on Dijkstra's algorithm, in which the shortest route to any (and all) nodes in a connected graph is found. Each cell in the map acts as a node, with the connections between adjacent cells dictating the connections (arcs) between nodes. The weight of each arc is a weighted sum of the risk in travelling between the two cells and of whether the travel will bring the robot closer to the final destination; the weighting can hopefully be adjusted so that a compromise between the shortest and the least obstructed route is found. Once the distances to each cell have been calculated, a recursive search from the destination traces back the best route; the motor commands required to move the robot along the path are then stored in the robot's instruction queue, ready to be sent to the motors.
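A condensed sketch of this search is shown below, using a standard priority-queue formulation of Dijkstra's algorithm over an adjacency-list graph; the weighting constants alpha and beta are illustrative assumptions, not the project's tuned values:

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Arc weight: a weighted sum of the link risk and of whether the move
// heads towards the goal; alpha and beta are illustrative constants.
double arcWeight(double risk, bool towardsGoal)
{
    const double alpha = 1.0, beta = 0.5;
    return alpha * risk + beta * (towardsGoal ? 0.0 : 1.0);
}

// Dijkstra over a graph given as adjacency lists of (neighbour, weight)
// pairs, where the weights come from arcWeight above. Returns the best
// cost from the start node to every node.
std::vector<double> dijkstra(
    const std::vector<std::vector<std::pair<int, double>>>& adj, int start)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), INF);
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> pq;   // min-heap of (cost, node)
    dist[start] = 0.0;
    pq.push({0.0, start});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;            // stale queue entry
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;        // found a better route to v
                pq.push({dist[v], v});
            }
        }
    }
    return dist; // tracing back from the destination then yields the path
}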

6.  Beacon Design

Figure 6. Beacon Design

As has been mentioned, beacons will be used for re-localisation. These beacons consist of a red outer circle, used to find the beacon itself, and three coloured regions, which provide a unique beacon ID. The horizontal line (Figure 6) indicates east. The beacons will be printed onto paper or card and attached to the ceiling; a camera pointing upwards will then be used to scan for beacons and interpret the beacon ID and direction.

6.1.  Beacon IDs

The number of unique IDs will depend on the number of different colours that the camera can recognise, bearing in mind that varying lighting conditions will hinder colour recognition. The three regions therefore make up a base-n ID (where n is the number of distinguishable colours).
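Reading the three regions as digits then gives the ID directly; a sketch, assuming the colours have already been classified into indices 0 to n-1:

// Combine three colour readings (each an index in 0..n-1, in a fixed
// region order) into a single base-n beacon ID. With n = 4 distinguishable
// colours, for example, this gives 4 * 4 * 4 = 64 unique IDs.
int beaconId(int colour0, int colour1, int colour2, int n)
{
    return (colour0 * n + colour1) * n + colour2;
}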

6.2.  Beacon Types

To improve the autonomy of the robot, types are assigned to different beacons; this allows different beacons to be treated in different ways. For example, a door beacon indicates that there is a door in the direction indicated by the horizontal line. This will aid the robot in getting through doors, which can be a problem.

6.3.  Customizable Tours

Instead of hard-coding a specific tour into the robot, a customizable tour will be used: each beacon can have various tour resources associated with it. For example, the tour-start beacon can have a link to a series of questions to be asked and/or video clips to display. This means that the robot could quite easily be used in alternative settings such as a museum.

7.  Testing

Although the robot is not finished, the completed modules have been tested. The mapping software has been unit tested, along with the links between several of the units. Once the robot is complete, the remaining modules and the system as a whole will be tested.