SE423 Laboratory Assignment 8
Robot Vision
Recommended Due Date: By the end of your lab time the week of April 9th
Possible Points: If checked off before 9:00pm April 16th … 14 points
If checked off after 9:00pm April 16th … 0 points
Goals for this Lab Assignment: Learn low level vision processing algorithms in order to use the robot’s color camera for object recognition.
DSP/BIOS Objects Used: SWI, TSK
New Library Functions: init_colorvision, userProcessColorImageFunc_laser.
Prelab: Continue updating your LabVIEW application. If you haven't already, start modifying the LabVIEW program so that you can download new values to variables in your DSP program. This will be very useful when you are tuning your algorithm and controllers for the final contest.
Laboratory Exercise
Introduction:
The CMOS color camera on the DSP returns an image every 1/25 of a second. The image coming from the camera is 288 rows by 352 columns of 8-bit pixel data, 101376 bytes in all, formatted in the Bayer pattern. The DSP has 224K bytes (224*1024 = 229376 bytes) of internal RAM that we have dedicated to the DSP's code. There is another 128K bytes of internal memory that we have previously used as the location for our shared memory between the ARM and DSP cores. We will also use this memory to store our 176x144 RGB image, but that takes up 76032 bytes of the 131072 bytes (128K) of internal shared memory, so external memory is needed to store the raw Bayer-patterned vision data. There are 128 Mbytes of external DDR RAM. Up to this point, our DSP programs have only used the internal memory of the DSP because this RAM is much faster: the internal RAM runs at the CPU's clock speed of 300 MHz while the external DDR runs at 132 MHz. The DSP does have cache memory that speeds up most accesses to the external DDR, but access is still quite a bit slower than with the internal RAM. Due to the size of our Bayer-pattern image, we have to store the image in external DDR and take the hit of slower memory access time.
Each pixel of this 144 by 176 image has a red, green, and blue 8-bit intensity value associated with it; 16 indicates no color intensity and 240 indicates full color intensity. In the file ColorVision.c, you will be adding code to the function userProcessColorImageFunc_laser(volatile bgr *ptrImage). This function is called each time a new image is received from the camera. The parameter ptrImage is a pointer to an array of the type-defined structure bgr. bgr is defined in the include file ColorVision.h to be:
typedef struct bgr {
    unsigned char blue;
    unsigned char green;
    unsigned char red;
} bgr;
For example, a C statement to access the green component of the 40th row and 30th column could be: value = ptrImage[39*IMAGE_COLUMNS+29].green.
Why is “laser” in the name of this function? The short answer is that I don’t want to change the DSP/BIOS default setup. The long answer is that I used to pass a sliver of the full image to this function, which was used in conjunction with a laser pointer to sense the distance from the camera to an object. This became obsolete when I purchased the LADARs. Please ignore “laser” in the name.
Exercise 1: Find the X Y pixel location of the center of a bright light
For this first exercise, you will assume that there is only one bright object seen by the camera. You are to find the center of this one bright object and display its X and Y pixel coordinates on the text LCD screen. The robot can see the room lights when it is on the floor, so you will need to ignore the top section of the image so the bright pixels of the lights are not added to your centroid calculation. In Exercise 3 you will be asked to have the robot follow this bright light, so the centroid values need to be communicated to your robot control process. Set up global variables to communicate the new centroid and the number of object pixels to your control process (RobotControl()). To perform this communication of data between the vision process and the control process, use a set of global variables as a type of shared memory between the two. One of these variables should be a flag that indicates to the control process that there is new data to use. Since the code for these two processes is located in different C files, you will have to define and initialize the global variables in one file and “extern” them in the other file, as sketched below. In the control process, print the centroid and the number of pixels to the LCD screen every 100 ms. Use the Lab678OMAPL138ProjCreator.exe project creator to create a new project for these exercises.
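A minimal sketch of this shared-variable pattern follows; the variable names are only suggestions, not names required by the library:

/* In the vision file (ColorVisionLab8parts1_2_3.c): define the globals */
int newVisionData = 0;   /* flag: 1 = new centroid ready for RobotControl() */
int objectCol = 0;       /* X (column) pixel centroid of the bright object */
int objectRow = 0;       /* Y (row) pixel centroid of the bright object */
int numObjPixels = 0;    /* number of pixels above the threshold */

/* In the control file (where RobotControl() lives): declare, do not define */
extern int newVisionData;
extern int objectCol, objectRow, numObjPixels;

/* Inside RobotControl(): check the flag, use the data, clear the flag */
if (newVisionData == 1) {
    /* use objectCol, objectRow, numObjPixels here */
    newVisionData = 0;
}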
For exercises 1, 2, and 3 you will be programming your own single-centroid algorithm. In exercises 4, 5, and 6 you will be using a multiple-centroid algorithm that is given to you. So for exercises 1, 2, and 3 I would like you to remove (delete) the “ColorVision.c” file from your project and add (Add Files) “ColorVisionLab8parts1_2_3.c” (this file is also located in the “src” directory of your project directory). When adding “ColorVisionLab8parts1_2_3.c” to your project, make sure to select “Link to File” NOT “Copy File.” Ask your instructor if you have trouble removing and adding these files.
To look for the bright white light of the flashlight, check when the red, green, and blue pixel values are all greater than a certain threshold value. Your algorithm will then find the centroid of all bright white pixels. Inside the function “userProcessColorImageFunc_laser”, create a dual (nested) loop to pass through the image once. Threshold the image and find the center of all pixels greater than the threshold value. You will need to experiment to find a good threshold value. To display your threshold image on the robot’s color LCD screen, set the blue and green components to 0, and set the red component to 0 for pixels below the threshold and 255 for pixels above it. Remember not to process the upper rows of pixels so the overhead lights do not cause you problems. Two #defines, IMAGE_ROWS 144 and IMAGE_COLUMNS 176, will be helpful in writing this code.
One additional item to add to this code: display a green cross hair at the centroid of the found object on the color LCD screen. A sketch combining these steps follows.
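Here is a minimal sketch of the whole Exercise 1 vision function. The threshold of 180 and the 30 skipped rows are starting guesses you will need to tune, and the cross-hair bounds checking is omitted for brevity:

void userProcessColorImageFunc_laser(volatile bgr *ptrImage) {
    int row, col, idx;
    long sumRow = 0, sumCol = 0, count = 0;
    int threshold = 180;   /* starting guess; tune experimentally */
    int skipRows = 30;     /* ignore top rows (ceiling lights); tune */

    for (row = 0; row < IMAGE_ROWS; row++) {
        for (col = 0; col < IMAGE_COLUMNS; col++) {
            idx = row*IMAGE_COLUMNS + col;
            if (row >= skipRows &&
                ptrImage[idx].red   > threshold &&
                ptrImage[idx].green > threshold &&
                ptrImage[idx].blue  > threshold) {
                sumRow += row;  sumCol += col;  count++;
                ptrImage[idx].red = 255;   /* found pixel shown in red */
            } else {
                ptrImage[idx].red = 0;
            }
            ptrImage[idx].green = 0;       /* display the threshold image */
            ptrImage[idx].blue  = 0;
        }
    }
    if (count > 0) {
        int cRow = sumRow/count, cCol = sumCol/count;
        for (col = cCol-5; col <= cCol+5; col++)   /* green cross hair */
            ptrImage[cRow*IMAGE_COLUMNS + col].green = 255;
        for (row = cRow-5; row <= cRow+5; row++)
            ptrImage[row*IMAGE_COLUMNS + cCol].green = 255;
        /* copy cRow, cCol, count to the shared globals and set the flag */
    }
}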
Exercise 2: Watching the Image on the PC.
It can also be useful to display your image on the PC. A LabVIEW application has been developed that allows you to display the robot’s camera data. The LabVIEW project is called “LVImageDisplay.lvproj” and is located in the directory <your_repository>\LVImageDisplay. Open this LabVIEW project and open LVImageDisplay.vi.
The flow of picture data from the DSP to LabVIEW is as follows: your DSP project already has the code for sending the vision data to Linux. You will need to run the Linux application “LVimageTX” from the Linux console. This “LVimageTX” application receives the vision data from the DSP and sends each new picture over TCP/IP to LabVIEW. The procedure to get the LabVIEW project working is: first start your DSP program. Then in embedded Linux start the LVimageTX application (type ./LVimageTX). Once the Linux application is running and listening for a TCP/IP connection, type your robot’s IP in the “Robot’s Address” text box and run the LabVIEW program. After a short delay you should start seeing the robot camera’s images. When you are finished, press the “Stop” button. This should also cause the Linux application to stop. (If not, use ‘Ctrl-C’ to stop the Linux app.)
Because exercise 1 asked you to change each pixel value to either 0 (black) if the pixel was not above the threshold or 255 (red only) if it was, the image you are viewing in LabVIEW is the threshold image. To view the unedited image, change your code so that it does not modify the pixel values. Reload your DSP code and repeat this process to display the full-color image in LabVIEW.
Exercise 3: Go toward the light
Using a laser shining on a nearby wall (or a flashlight pointed at the robot), design a P controller to make the robot drive toward the light. Use the ultrasonic sensors to detect when you get close to the wall, and slow the robot to a stop when it gets within 1 tile of a wall. A good starting Kp for vision error measured in pixels is 0.05; see the sketch below. The IR beam from the IR distance sensors is seen by the robot’s camera as a bright white dot, which can spoil your white-light detection when the robot is close to a front wall. So for this exercise position the IR sensors so that they are not pointed toward the front of the robot, and use the ultrasonic sensors (or the LADAR) to sense when the robot needs to stop for an obstacle in its way.
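A minimal sketch of the P steering law, assuming the shared globals from Exercise 1; setTurn, setSpeed, and frontDistance are placeholders for whatever motor-command and wall-distance code your project already uses:

/* inside RobotControl(), each time newVisionData is set */
float Kp = 0.05;                               /* starting gain */
float error = objectCol - IMAGE_COLUMNS/2.0;   /* pixels off image center */
float turn  = Kp*error;                        /* steer toward the light */
float speed = 1.0;                             /* tiles/s, for example */

if (frontDistance < 1.0)   /* ultrasonic (or LADAR) reading, in tiles */
    speed = 0.0;           /* stop within 1 tile of the wall */

setTurn(turn);             /* placeholder motor-command calls */
setSpeed(speed);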
Exercise 4: Following an object of a certain color
Following an object of a certain color is obviously very similar to following a bright light; the only difference is that in exercises 1-3 we assumed there was only one bright light in the robot’s field of view. If there were two light sources, our light-following code would not have worked as well. So for the remaining exercises we are going to introduce some new code that adds a segmentation algorithm to the vision processing. This new code is given to you in the original ColorVision.c file. Remove the “ColorVisionLab8parts1_2_3.c” file from your project and add back the “ColorVision.c” file. If you look in this source file you will find a more extensive vision algorithm in userProcessColorImageFunc_laser. The code in userProcessColorImageFunc_laser implements the algorithm discussed in Spong, Hutchinson, and Vidyasagar, Robot Modeling and Control. This code will be discussed some in class, but part of this exercise is for you to become familiar with it. Read through the code and explain to your TA the “big picture” of what this code is doing.
For this exercise, program the robot to follow a light blue golf ball or any other (non-orange) object that you choose. To help you find the blue golf ball’s HSV color thresholds, use Matlab and the ColorThreshold.m M-file. See the document for an introduction to using this M-file.
Exercise 5: Find the distance between a golf ball and the center of the robot.
Using the centroid information of the light blue golf ball, calibrate a distance measurement to the golf ball. The golf ball is always assumed to be lying on the floor. Approximate the error at several different distances. Make sure to have the robot center the golf ball in its field of view before calculating the distance. A calibration sketch follows.
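One possible approach is a small calibration table mapping the centroid row to distance, with linear interpolation between measured points. The numbers below are placeholders; you must replace them with your own measurements:

#define NCAL 4
/* centroid row (pixels) at several measured golf-ball distances (tiles).
   These values are made up; collect your own with a tape measure. */
float calRow[NCAL]  = { 70.0, 90.0, 110.0, 130.0 };
float calDist[NCAL] = {  6.0,  4.0,   2.5,   1.5 };

float rowToDistance(float row) {
    int i;
    if (row <= calRow[0]) return calDist[0];   /* farther than farthest point */
    for (i = 1; i < NCAL; i++) {
        if (row <= calRow[i]) {   /* interpolate between neighboring points */
            float t = (row - calRow[i-1])/(calRow[i] - calRow[i-1]);
            return calDist[i-1] + t*(calDist[i] - calDist[i-1]);
        }
    }
    return calDist[NCAL-1];       /* closer than the nearest calibration */
}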
Exercise 6: Use Landmarks to Update your Dead-Reckoned position.
In this exercise you are going to use color recognition to correct the drift errors of your robot position calculations. The course is set up with two green-colored corners. Your final goal will be to determine when you are at a certain colored corner and update your robot’s X, Y, and theta position to that corner’s coordinates.
As a first step, add your wall-following code to this new project. Set the robot’s speed to 1.0 tiles/second. Once you have the wall following working, add code to determine when the robot is in any corner. Then use the compass to determine which corner you are in.
The above step works well if the robot is only wall-following around our square course. If the robot finds itself at a different corner (say, in the center of the course if there were a corner there), it will be tricked into thinking it is at one of the outside corners. For that reason we need to use the camera to determine when we are definitely at an outside-edge corner. To this point your vision algorithm only looks for one color when processing the image. You will need to modify your code so that multiple colors can be recognized, in order to search for both the green landmarks and the light blue golf balls. It may be possible to search for both colors in each image sample, but that would add a lot of processing load on the DSP. (Have your instructor show you how to use a digital output and the oscilloscope to check your time loading.) For this exercise, alternate which color is being searched for each time you enter the vision processing function, as sketched below.
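A minimal sketch of the alternating search, assuming the given segmentation code can be pointed at one set of HSV thresholds per frame (how the thresholds are passed to the given code is up to you; the comments below just mark where that happens):

/* 0 = search for the green landmark, 1 = search for the blue golf ball */
static int searchColor = 0;

void userProcessColorImageFunc_laser(volatile bgr *ptrImage) {
    if (searchColor == 0) {
        /* load the green HSV thresholds into the segmentation code */
    } else {
        /* load the light-blue HSV thresholds */
    }

    /* ... run the given segmentation/centroid algorithm here ... */

    searchColor = 1 - searchColor;   /* switch colors for the next frame */
}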
When you determine that you are at a landmark, update your robot’s X, Y coordinates to the approximate coordinates of that corner. Also increment an integer variable indicating the number of times you have visited this landmark, and display this number on the text LCD screen. Now your LabVIEW application should be able to display the robot always staying inside the square of the course.
Lab Check Off:
- For three of these check-off items (those for Exercises 3, 4, and 6), your LabVIEW application needs to be working and displaying the location of your robot on the screen.
- Demo your robot following a bright light. If the robot gets close to a wall it should stop.
- Demo your robot following a colored object and stopping if too close to an obstacle.
- Demo your robot displaying the distance to a light blue golf ball lying on the floor.
- Demo your robot performing wall-following and correcting its dead-reckoned position when it is at a colored corner landmark.