Faculty of Science and Engineering

Distributed Operational Object Mobilizer (DOOM)

Group 11

ENG 4000

Presented to: Dr. Eshrat Arjomandi

April 30, 2007

Group Members:

Amir Saeidi / 207025653
Derek Poon / 205542162
Leo Chan / 206850523
Pierre Malavoy / 206952030

Advisors:

Professor JinJun Shan
Professor Minas Spetsakis

Abstract

The Distributed Operational Object Mobilizer (DOOM) is a system that coordinates the movement of two mobility devices in order to lift an object in unison. The information necessary for the control of the devices is provided entirely by the analysis of overhead images of the object and mobility device assembly. The analysis of these images yields relative position and orientation measurements of the devices with respect to each other and to the object. These, in turn, are processed to produce the appropriate commands for the mobility devices.

Figure 1: Mobility Devices Lift Object in Unison

DELIVERABLES

Our final product consists of an overhead camera connected by cable to a laptop. The laptop contains the image processing software and the C program used to issue commands to the mobility devices’ microcontrollers via the laptop’s serial port. The mobility devices themselves are two differential-drive vehicles, each fitted with a forklift device. The system as a whole functions autonomously, from the image acquisition phase to the positioning of the mobility devices and the lifting of the object.

APPLICATIONS

Automated forklift systems are already being manufactured and used in commercial applications. Corecon is one such manufacturer, producing Automated Guided Vehicles (AGVs) of all kinds. Their automated forktruck, the Falcon F150, featured in Figure 2, is lauded as a key component of the future of warehouse automation.

Figure 2: Falcon F150 Forktruck

The advantages of automation are many: human error is eliminated, paving the way for an incident- and accident-free workplace, and there is a significant long-term financial advantage owing to a marked increase in efficiency and a decrease in manpower.

The key difference between the current trend and our system lies in the guidance system of the forklift. While current automated vehicles rely on sensors integral to each unit, our system’s operation is based on image analysis of the entire scene: forklifts, payloads and environment. This type of setup ensures cohesion among the multiple automated vehicles, since their movements are all controlled through a single command system. The use of an overhead camera would also enable secondary applications such as security surveillance and the tracking of inventory in the warehouse.

The increase in efficiency offered by such a system would also translate into lower power consumption and therefore reduce an industry’s overall impact on the environment. Its autonomous nature would also undoubtedly produce a safer workplace.

Table of Contents

1 Abstract
1.1 Deliverables
1.2 Applications
Acknowledgments
2 Introduction
3 Technical Description
3.1 Image Analysis
3.1.1 Software
3.1.2 Trajectory Accuracy
3.1.3 High-Level Algorithm
3.1.4 Markers
3.2 System Control
3.2.1 Programming the Microcontroller
3.2.2 Control Algorithm
3.2.3 Servo Control and Operation
3.2.4 Bluetooth
3.4 Mobility Device
3.4.1 Microcontroller
3.4.2 Motor Controller
3.4.3 Gearbox and Motors
3.4.4 Wheels and Caster
3.4.5 Chassis and Proto-board
3.4.6 Power Supply
3.4.7 Forklift Mechanism
3.4.8 Circuit Design
4 Constraints
4.1 Operational Constraint
4.2 Health, Environmental and Safety Constraints
5 Budget
6 Future Work
7 Conclusion
8 Bibliography

Table of Figures

Figure 1: Mobility Devices Lift Object in Unison
Figure 2: Falcon F150 Forktruck
Figure 3: Protocol Diagram
Figure 4: Image Analysis Sequence
Figure 5: Mobility Device and Object Assembly
Figure 6: Mobility Device Angular Orientation
Figure 7: Servo Pulse Parameters
Figure 8: Servo Operation
Figure 9: Gear Rack
Figure 10: Fast PWM Mode Timing Diagram
Figure 11: Motor Controller Setup
Figure 12: Chassis Modification
Figure 13: Forklift Mechanism
Figure 14: Mobility Device Circuit on Protoboard
Figure 15: Circuit Layout
Figure 16: List of Expenses

Acknowledgments

Ator Sarkisoff was extremely helpful, aiding us in implementing our vision of the forklift mechanisms for both mobility devices.

Introduction

The objective of our group project was to make two individually operated mobility units lift an object in unison, based on positional information derived from an overhead image of the object and mobility devices. A camera was used to take the aerial snapshots. The analysis of these images determined the two positions around the object from which it is to be lifted, as well as the direction and relative position of the mobility devices with respect to these two positions.

Image processing software is needed to perform the analysis on the image and retrieve the desired information. This program first isolates the object from its background and places it in a coordinate system. Assuming that the density and thickness of the object are constant, its centre of gravity is determined. Using this measurement, two positions are calculated from which the object must be lifted to keep it balanced. Finally, the program correlates these positions with those of the mobility devices. To unambiguously identify each mobility device, as well as its direction and relative position, each device is fitted with visual markers that the image analysis program can manipulate. The mobility devices receive commands through connections to the RS232 port of the central control unit (laptop) and physically lift the object with a forklift device fitted to their front end.

Figure 3 shows the general protocol of the system. The camera sends the images it takes to the computer, which analyzes them and processes the coordinates into commands. These commands are then sent to the mobility devices through the MAX232 chip, which converts the serial signal into TTL levels for the microcontroller. The microcontroller then implements the commands by operating the servo, through Pulse-Width Modulation (PWM), and the motors, through the motor controller, with a serial signal from its USART (Universal Synchronous/Asynchronous Receiver/Transmitter) port.
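As a rough illustration of the laptop side of this link, the sketch below opens a serial port and writes a single command byte. The device path, baud rate and command byte are assumptions for illustration only; the project’s own C program may use a different serial API (for example, on Windows).

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Hedged sketch: open an RS-232 port, configure it for raw 9600-baud
   transfers and send one command byte toward the MAX232/microcontroller.
   "/dev/ttyS0", 9600 baud and the 'F' (forward) byte are assumptions. */
int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0)
        return 1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* raw 8-bit bytes, no line processing */
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char cmd = 'F';         /* e.g. a "forward" command byte */
    write(fd, &cmd, 1);

    close(fd);
    return 0;
}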

Technical Description

IMAGE ANALYSIS

Software

The image analysis is performed with MATLAB (Matrix Laboratory). MATLAB comes with a variety of pre-defined functions that can easily be used to manipulate images. The software package implemented for image acquisition and processing incorporates the following functions:

  • cMass(bimage): Finds the centre of mass of any object given a binary image. The object can be regular or irregular, as long as its shape is uniform. The function is written so that it can detect the centres of mass of multiple objects within a binary image and return their coordinates as a 2×N matrix (where N is the number of objects in the image); a small sketch of the underlying computation follows this list.
  • fitCircle(image): Receives an image in any format (RGB in our case), converts it to binary and returns the radius of a circle fitted onto an object. This function was initially designed strictly for circular objects, but it works just as well with any other shape; the outcome for any shape is the best-fitting circle for the given image. It can be modified to handle multiple shapes within an image.
  • ColourCode(bimage,image): Receives a binary and a regular image and colour-codes each shape, ranging from black to white (the order is black = ‘0 0 0’, blue = ‘0 0 1’, green = ‘0 1 0’, red = ‘1 0 0’ and white = ‘1 1 1’). This function was originally designed to colour-code an image containing no more than 4 shapes, but by adding hybrid colours we can code up to 7 shapes (a maximum of 3 bits). The same could be done with intensity levels, allowing many more shapes to be coded within one image, but the initial idea was to use simple colours (i.e. white in our case).
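To make the centre-of-mass step concrete, here is a small, self-contained C sketch of the computation that cMass() performs in MATLAB: the row and column indices of all foreground pixels of a binary image are averaged. The hard-coded 8×8 image stands in for an acquired frame; the real function additionally handles multiple objects and returns a 2×N matrix.

#include <stdio.h>

#define H 8
#define W 8

/* Average the coordinates of all non-zero (foreground) pixels. */
void centre_of_mass(const unsigned char img[H][W], double *cx, double *cy)
{
    long sum_x = 0, sum_y = 0, count = 0;

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (img[y][x]) {            /* foreground pixel */
                sum_x += x;
                sum_y += y;
                count++;
            }

    if (count) {
        *cx = (double)sum_x / count;
        *cy = (double)sum_y / count;
    }
}

int main(void)
{
    /* a small square "object" near the upper-left corner of the frame */
    unsigned char img[H][W] = {0};
    for (int y = 1; y <= 3; y++)
        for (int x = 1; x <= 3; x++)
            img[y][x] = 1;

    double cx = 0, cy = 0;
    centre_of_mass(img, &cx, &cy);
    printf("centre of mass: (%.1f, %.1f)\n", cx, cy);   /* prints (2.0, 2.0) */
    return 0;
}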

A simple demo of our software is shown in the adjacent figure. This very basic program acquires an image through a webcam and then isolates the object using an image-subtraction algorithm together with pre-written functions such as those listed above. Following the isolation of the object, the cMass() function is used to obtain the coordinates of the object’s centre of mass. This information is then passed on to the command algorithm, which translates relative positions into commands for the mobility devices.

Trajectory Accuracy

The Logitech QuickCam used captures 30 frames per second (FPS). The operating speed of the mobility devices, averaged over several test runs, is 5.01 cm/s (about 0.05 m/s), with an average error tolerance of 5 mm per metre travelled and an average travel distance of 1.5 m. We then have:

Total error = 1.5 m × 5 mm/m = 7.5 mm

Error duration = 0.75 cm / 5 cm/s = 0.15 s

So, in order to keep the error tolerance at 5 mm for every metre travelled, the camera must supply at least:

FPS = 1 / 0.15 s = 6.67 frames/s ≈ 7 FPS

This is well within the hardware limit of our imaging system (30 FPS). The average run time per frame of the tested code has been estimated at 0.1 s (a maximum of 10 FPS), so the software has enough time to acquire and process the images without any real-time shortage.
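The same figures can be checked mechanically; the short C program below recomputes the frame-rate requirement from the measured quantities quoted above.

#include <stdio.h>

int main(void)
{
    double speed_cm_s   = 5.0;    /* average device speed (5.01 cm/s, rounded as in the text) */
    double tol_mm_per_m = 5.0;    /* allowed drift per metre travelled                        */
    double distance_m   = 1.5;    /* average travel distance                                  */

    double total_error_mm = distance_m * tol_mm_per_m;            /* 7.5 mm   */
    double error_duration = (total_error_mm / 10.0) / speed_cm_s; /* 0.15 s   */
    double required_fps   = 1.0 / error_duration;                 /* 6.67 FPS */

    printf("total error   : %.2f mm\n", total_error_mm);
    printf("error duration: %.2f s\n", error_duration);
    printf("required FPS  : %.2f\n", required_fps);
    return 0;
}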

High-Level Algorithm

The imaging software is able to perform the following in a short period of time:

  • Acquire the image
  • Remove the non-reflective background
  • Locate the designated (colour-coded) symbols
  • Calculate the centre of each symbol and draw a vector between the centres of mass of two corresponding symbols
  • Use this information to calculate the position and orientation of each mobility device, and compare them with the target points on the object to be lifted.

The following diagram should give a better impression of what the image processing software is “seeing” after removing the background and drawing the corresponding vectors:

Figure 5: Mobility Device and Object Assembly

Using this simple algorithm, the central computer can process and calculate an updated trajectory for each frame taken. Some frames are used only for comparison, to check whether the mobility device has followed the trajectory to within the average error tolerance; if it has not, the software interrupts the process and corrects the mobility devices’ positions by sending out updated commands.

Markers

The markers used for each mobility device are two circles of different sizes lined up along the central axis of the device. They are white so that the camera can easily distinguish them from the black background. For the same reason, the circles are placed on a black piece of bristol board that is just large enough to mask the mobility device from the camera.

The rear and front of each mobility device are distinguished using a differential-area method: the smaller shape (by area) marks the front of the device. Mobility device 1 is distinguished from mobility device 2 by making both marker circles of one device smaller than those of the other.
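The following C fragment is a hypothetical sketch (the structure, helper name and size threshold are ours, not project code) of how the differential-area rule can be applied once the two marker blobs of a device have been measured.

#include <stdio.h>

struct marker { double x, y, area; };   /* blob centre and pixel area */

/* Smaller circle = front, larger = rear; the absolute size of the rear
   circle then identifies the device. The 900-pixel threshold is illustrative. */
int identify_device(struct marker a, struct marker b,
                    struct marker *front, struct marker *rear)
{
    if (a.area < b.area) { *front = a; *rear = b; }
    else                 { *front = b; *rear = a; }
    return (rear->area < 900.0) ? 1 : 2;
}

int main(void)
{
    struct marker m1 = { 120.0, 80.0, 450.0 };   /* example blob measurements */
    struct marker m2 = { 140.0, 95.0, 700.0 };
    struct marker front, rear;

    int id = identify_device(m1, m2, &front, &rear);
    printf("device %d: front at (%.0f, %.0f), rear at (%.0f, %.0f)\n",
           id, front.x, front.y, rear.x, rear.y);
    return 0;
}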

SYSTEM CONTROL

Command Algorithm

The algorithm that guides the two robots (described here for a single robot) is really quite simple. Any robot, in any part of the image, can be guided to the target in exactly four maneuvers. Once the image analysis has determined the centre of each shape in the picture, the difference vectors between the target point and the robot orientation are used to work out the initial turn. After this turn is made, the robot is oriented perpendicular to the target point, aligned with the “y” component of the target point.

The algorithm for figuring out the initial turn is mentioned below. The following figure gives a thorough explanation of the orientation of the robot in the initial step.

Figure 6: Mobility Device Angular Orientation

To figure out the turn angle the following code is used:

theta1=atan(-(y2-y1)/(x2-x1));   % heading angle of the device from its two marker centres

The reason –(y2-y1) is used is that the pixel coordinate system of an image differs from the conventional Cartesian coordinate system: the (Xmax, Ymax) point is at the lower-right corner of the picture, making the image’s Y-axis the negative of the conventional Y-axis.

To determine whether the robot needs to turn right or left at the start, we visualize four separate quadrants in the image and use if-statements to check which maneuver is correct. We determine the orientation of the robot and then check two test cases to see whether a left or right turn is needed, depending on whether the mobility device is above or below the object. The if-statement below handles the case where the robot’s orientation lies in the 1st or 4th quadrant and then checks whether the robot is facing towards or away from the object.

if((x2-x1)>0)                      % front marker to the right: 1st or 4th quadrant
    if(y1<py)                      % device above the target point (smaller image y)
        thetat=theta1+(pi/2);      % total turn angle
        t=thetat/omega;            % turn duration from the rotational rate omega
        t=round(t*666);            % scale and round the duration for the serial command
        turn('R',t,thetat,id);     % turn right
    elseif(y1>py)                  % device below the target point
        thetat=(pi/2)-theta1;
        t=thetat/omega;
        t=round(t*666);
        turn('L',t,thetat,id);     % turn left
    end

Each of the nested if-statements triggers the appropriate right or left turn, leaving the robot perpendicular to the object, i.e. facing away from it by 90 degrees. The next if-statement follows the one shown above and describes the maneuver for the 2nd and 3rd quadrants.

elseif((x2-x1)<0)                  % front marker to the left: 2nd or 3rd quadrant
    if(y1<py)                      % device above the target point
        thetat=(pi/2)-theta1;
        t=thetat/omega;
        t=round(t*666);
        turn('L',t,thetat,id);     % turn left
    elseif(y1>py)                  % device below the target point
        thetat=theta1+(pi/2);
        t=thetat/omega;
        t=round(t*666);
        turn('R',t,thetat,id);     % turn right
    end
end

The duration of the turn is determined by the rotational rate of the robot in pixels per second, which depends on the distance of the camera from the ring, as shown in the algorithm (t = thetat/omega).

The next maneuver brings the robot as close as possible to the target point while remaining perpendicular to it. It is a simple forward movement whose duration follows from the robot’s constant velocity across the screen: using v = d/t and knowing the average velocity of the robot, we can work out the movement duration.

d=abs((y1-(6.11*3))-py);           % y-distance to the target, offset by 6.11*3 pixels
t2=(d/(6.11))/(2);                 % movement duration from the average speed
serialcon(id,'F',t2);              % drive device id forward for duration t2
theta2=pi/2;                       % the next turn is exactly 90 degrees
t3=theta2/omega;                   % duration of that turn
t3=round(t3*666);                  % scaled for the serial command

After this step the robot only needs a 90-degree turn to face the target completely, and then a single forward movement puts it at the desired point. The code for the 90-degree turn is very similar to that of the initial turn, because the direction of the turn (left or right) must again be determined. Again, an if-statement works out the direction of the turn, but this time the turn angle is known exactly.

if((x1-px)>0)                      % robot to the right of the target
    if(y1<py)                      % robot above the target
        turn('R',t3,theta2,id);
    elseif(y1>py)                  % robot below the target
        turn('L',t3,theta2,id);
    end
elseif((x1-px)<0)                  % robot to the left of the target
    if(y1<py)
        turn('L',t3,theta2,id);
    elseif(y1>py)
        turn('R',t3,theta2,id);
    end
end

After the 90-degree turn the robot faces the target, and a simple forward movement now puts it on the target. This part of the program is again very similar to the first forward movement.

d=abs((x1-(6.11*3))-px);           % x-distance to the target, offset by 6.11*3 pixels
t4=(d/(6.11))/(2);                 % movement duration from the average speed
serialcon(id,'F',t4);              % drive device id forward for duration t4

With simple algebra, then, we can guide each robot to the desired point in the image in only four maneuvers.

Programming the Microcontroller

For this project, we used AVR Studio 4 to program the microcontroller unit (MCU). It is a professional Integrated Development Environment (IDE) for use with Atmel’s AVR microcontrollers, distributed freely by Atmel and downloadable from the Atmel website. AVR Studio 4 supports both assembly and C, and provides a chip simulator and an in-circuit emulator interface. It can also apply a predefined input pattern to the pins of any port at a specified clock cycle through a stimulus file. For example, to present the pattern 01011011 (0x5B) at port B at cycle 200, we put the following in the .sti file:

000000001:00        (cycle 1: all port B pins low)
000000200:5B        (cycle 200: pattern 01011011 = 0x5B)
...                 (further cycle:value entries as needed)
999999999:FF        (every .sti file must end with this line)

To connect the microcontroller to the motor controller and to the computer’s serial port, we linked the two devices to the USART (Universal Synchronous/Asynchronous Receiver/Transmitter) ports on the microcontroller. The main feature of the USART is to provide asynchronous serial reception and transmission at a given baud rate (bps). In our case, USART0 on the MCU is set to receive signals from the MAX232 serial conversion chip and USART1 to transmit signals to the motor controller.
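For illustration, the following C sketch initializes both USARTs for asynchronous 8-N-1 operation and provides blocking send/receive routines. It assumes an ATmega128-class AVR with two USARTs, a 16 MHz clock and 9600 baud; the actual clock, baud rate and register/bit names depend on the device and toolchain used.

#include <avr/io.h>
#include <stdint.h>

#ifndef F_CPU
#define F_CPU 16000000UL                       /* assumed system clock          */
#endif
#define BAUD 9600UL                            /* assumed baud rate, both links */
#define UBRR_VALUE (F_CPU / (16UL * BAUD) - 1)

/* Initialize USART0 (link to the MAX232/laptop) and USART1 (link to the
   motor controller). The reset default of UCSRnC already selects
   8 data bits, no parity, 1 stop bit. */
void usart_init(void)
{
    UBRR0H = (uint8_t)(UBRR_VALUE >> 8);
    UBRR0L = (uint8_t)UBRR_VALUE;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);      /* enable USART0 receiver and transmitter */

    UBRR1H = (uint8_t)(UBRR_VALUE >> 8);
    UBRR1L = (uint8_t)UBRR_VALUE;
    UCSR1B = (1 << RXEN1) | (1 << TXEN1);      /* enable USART1 receiver and transmitter */
}

/* Blocking transmit of one byte to the motor controller on USART1. */
void usart1_send(uint8_t byte)
{
    while (!(UCSR1A & (1 << UDRE1)))
        ;                                      /* wait for an empty transmit buffer */
    UDR1 = byte;
}

/* Blocking receive of one byte from the laptop (via the MAX232) on USART0. */
uint8_t usart0_receive(void)
{
    while (!(UCSR0A & (1 << RXC0)))
        ;                                      /* wait for a byte to arrive */
    return UDR0;
}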

Before controlling the two motors on the motor controller, we had to define the motors and give each of them a unique identifier number (UID). Configuration is achieved by sending a three-byte packet consisting of a start byte, a configuration command byte and the new configuration byte:

Start byte = 0x80 / Change configuration = 0x02 / New setting = 0x00–0x7F

In our project, we configured the motor controller with the following three-byte packets:

[0x80 | 0x02 | 0b01000000] ; 1-motor mode, controlling motor 0

[0x80 | 0x02 | 0b01000001] ; 1-motor mode, controlling motor 1

[0x80 | 0x02 | 0b00000010] ; 2-motor mode, controlling motors 0 and 1

The first two settings are already predefined in the motor controller, so only the last command is actually sent, assigning the number two as the UID that controls both motors at the same time.
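A minimal C sketch of sending this configuration packet from USART1 is shown below; it reuses the usart1_send() routine from the USART sketch above (declared here so the fragment stands on its own).

#include <stdint.h>

void usart1_send(uint8_t byte);   /* blocking transmit, defined in the USART sketch above */

/* One-time configuration packet: assigns UID 2 so that a single command
   byte can drive both motors, as described in the text. */
void configure_motor_controller(void)
{
    usart1_send(0x80);   /* start byte                                  */
    usart1_send(0x02);   /* change-configuration command                */
    usart1_send(0x02);   /* new setting: 2-motor mode, motors 0 and 1   */
}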

In order to set the speed and direction of a motor, we send a four-byte command as follows:

Start Byte = 0x80 / Device type = 0x00 / Motor # and direction / motor speed

In byte number three, bit 0 specifies the direction of the motor: setting it to 1 makes the motor go forward, and 0 makes it go backward. Bits 1–6 specify the motor number, which in our case ranges only from 0 to 2. Byte number four sets the speed of the motor: a value of 0 turns the motor off, and the possible range is 1 to 127 (decimal).

Here is an example of setting the motors to full speed and then decelerating them to a stop:

For k = 1 to 127:

[0x80 | 0x00 | 0x05 | k ] ; this loop starts both motors (UID 2) at speed 1 and ramps them up to 127

For k = 127 down to 0:

[0x80 | 0x00 | 0x05 | k ] ; this loop decelerates the motors until they stop

To make a left or right turn, a command is sent to the motor controller to make one motor run while the other stops.

[0x80 | 0x00 | 0x01 | k ] ;This will make a right turn

[0x80 | 0x00 | 0x03 | k ] ;This will make a left turn
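The sketch below assembles these four-byte packets in C, again building on usart1_send() from the USART sketch; the helper names set_motor(), ramp_demo(), turn_left() and turn_right() are ours, not from the project code.

#include <stdint.h>

void usart1_send(uint8_t byte);   /* blocking transmit, defined in the USART sketch above */

/* Build and send one speed/direction packet: bit 0 of the third byte is the
   direction (1 = forward) and bits 1-6 are the motor number, as described above. */
void set_motor(uint8_t motor, uint8_t forward, uint8_t speed)
{
    usart1_send(0x80);                               /* start byte              */
    usart1_send(0x00);                               /* device type             */
    usart1_send((uint8_t)((motor << 1) | (forward & 1u)));
    usart1_send(speed & 0x7F);                       /* 0 = off, 1..127 = speed */
}

/* Ramp both motors (UID 2) up to full speed and back down to a stop,
   mirroring the two example loops in the text. */
void ramp_demo(void)
{
    for (int k = 1; k <= 127; k++)
        set_motor(2, 1, (uint8_t)k);                 /* accelerate              */
    for (int k = 127; k >= 0; k--)
        set_motor(2, 1, (uint8_t)k);                 /* decelerate, end stopped */
}

/* The turn commands from the text: run one motor, leave the other stopped. */
void turn_right(uint8_t speed) { set_motor(0, 1, speed); }   /* 0x80 0x00 0x01 k */
void turn_left(uint8_t speed)  { set_motor(1, 1, speed); }   /* 0x80 0x00 0x03 k */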

The most integral part of the MCU (microcontroller unit) programming relies on interrupts, whose vectors are declared in the MCU header files. An interrupt acts as a “go to” event: it triggers a handler function that is executed every time a certain condition is met during the program cycle of the MCU.
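As a sketch of what such an interrupt handler can look like (again assuming an ATmega128-class device and avr-libc vector names; the variable and flag names are ours), each byte arriving from the laptop on USART0 fires a receive interrupt whose handler hands the command byte to the main loop.

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

void usart_init(void);            /* from the USART sketch above */

volatile uint8_t last_command;    /* most recent byte from the laptop         */
volatile uint8_t command_ready;   /* set by the ISR, cleared by the main loop */

/* Receive-complete interrupt for USART0: fires whenever the MAX232 delivers
   a byte from the laptop. Reading UDR0 also clears the receive flag. */
ISR(USART0_RX_vect)
{
    last_command = UDR0;
    command_ready = 1;
}

int main(void)
{
    usart_init();
    UCSR0B |= (1 << RXCIE0);      /* enable the USART0 receive interrupt */
    sei();                        /* global interrupt enable             */

    for (;;) {
        if (command_ready) {
            command_ready = 0;
            /* decode last_command here and drive the servo / motor controller */
        }
    }
}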