BCI CONTROL OF A ROBOTIC ARM

Hardik Shah, Paul Vandadi

The main objective of this project is to create a virtual representation of a robot's working environment. This virtual space lets the user test the physical system without ever having to set up the physical environment, and lets the user practice without being on site. Another benefit of a virtual space is that we can create any representation the user needs. To control the robot in both the real world and the virtual world, we use MATLAB/Simulink to numerically solve the inverse kinematics of the system: we specify the desired position of the robot and then calculate the joint angles that will move it there. The robot will be used to manipulate a set number of objects with known positions within the system, real or virtual. We use trajectory planning techniques to move the robot within the environment; this gives the robot a level of semi-autonomy, so the user only has to initiate a single command instead of controlling the robot the entire time it is in use.
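
As a concrete illustration of that inverse-kinematics step, the sketch below recovers the joint angles of a hypothetical two-link planar arm by Newton iteration on the kinematic equations. The link lengths, the solver, and the target point are illustrative assumptions; the project's actual equations come from Dynaflex Pro and MATLAB/Simulink.

    import numpy as np

    L1, L2 = 0.4, 0.3  # assumed link lengths (metres)

    def forward(theta):
        # End-effector (x, y) for joint angles theta = [t1, t2].
        t1, t2 = theta
        return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                         L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

    def jacobian(theta):
        t1, t2 = theta
        return np.array([
            [-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
            [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

    def solve_ik(target, theta=np.array([0.3, 0.3]), iters=100, tol=1e-6):
        # Newton iteration: adjust the joint angles until the hand reaches target.
        for _ in range(iters):
            err = target - forward(theta)
            if np.linalg.norm(err) < tol:
                break
            theta = theta + np.linalg.solve(jacobian(theta), err)
        return theta

    print(solve_ik(np.array([0.5, 0.2])))  # joint angles that reach (0.5, 0.2)

In the full system the same idea would extend to all the joints of the arm, with the kinematic equations generated symbolically rather than written by hand.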

The robot arm and office environment have been set up at 307 Clarkson Hall, Clarkson University, Potsdam. Work is in progress on extracting the P300 wave from a subject's brain signals, which in turn will control the robotic arm. Subjects are being tested to determine the accuracy of the BCI2000 software. Because the robotic arm will ultimately be controlled directly by a brain signal, the interface will be more interactive than any conventional input device.
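
One common way to detect the P300 is to average EEG epochs time-locked to each stimulus and pick the choice whose average shows the largest positive deflection around 300 ms after the stimulus. The sketch below illustrates that idea on synthetic data; the sampling rate, detection window, and signals are assumptions, and in the project this role is played by BCI2000.

    import numpy as np

    FS = 256                 # assumed sampling rate (Hz)
    EPOCH = FS               # one-second epochs, stimulus onset at t = 0
    rng = np.random.default_rng(0)

    def p300_score(epochs):
        # Mean amplitude in a 250-450 ms window of the epoch average.
        avg = epochs.mean(axis=0)
        lo, hi = int(0.25 * FS), int(0.45 * FS)
        return avg[lo:hi].mean()

    # Synthetic data: choice 2 carries a P300-like bump, the others are noise.
    choices = [rng.normal(0.0, 1.0, (20, EPOCH)) for _ in range(4)]
    bump = np.exp(-0.5 * ((np.arange(EPOCH) / FS - 0.3) / 0.05) ** 2)
    choices[2] += 2.0 * bump

    selected = max(range(4), key=lambda i: p300_score(choices[i]))
    print("selected target:", selected)       # -> 2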

Application

Non-invasive brain-recording technologies such as the electroencephalogram (EEG) have been used effectively to let a user control a mouse cursor and select letters to form words. The primary benefit of this brain-computer interface (BCI) is that it allows individuals with movement disorders to communicate their needs to a care provider. Our research focuses on extending the capabilities of the BCI beyond cursor control to control of a robotic arm. This will allow users not only to communicate their needs, but to manipulate objects in their environment themselves, greatly improving their quality of life.

Steps in our Project:

1.  Designing the environment in 3ds Max.

2.  Importing it into Virtools.

3.  Scaling the environment.

4.  Adding scripts to the composition.

5.  Developing the equations in Dynaflex Pro.

6.  Exporting the equations to MATLAB/Simulink using Block Builder.

7.  Connecting MATLAB to Virtools (a sketch of one possible link follows this list).
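
The MATLAB-to-Virtools link in step 7 can be pictured as a simple message channel: one side streams joint angles, the other applies them to the frames. The sketch below (in Python, standing in for both ends) uses a TCP socket and a comma-separated text format; the transport, port, and message format are illustrative assumptions, not the project's actual interface.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5005            # assumed address for the Virtools side

    srv = socket.create_server((HOST, PORT))  # stand-in for the Virtools listener

    def receive_once():
        # Virtools end: accept one connection and read one line of joint angles.
        conn, _ = srv.accept()
        with conn:
            print("Virtools side received:", conn.recv(1024).decode().strip())

    t = threading.Thread(target=receive_once)
    t.start()

    # MATLAB end: stream one set of joint angles as a comma-separated line.
    angles = [0.0, 0.52, -0.31, 0.78, 0.0]    # base, shoulder, upper arm, forearm, gripper
    msg = ",".join(f"{a:.4f}" for a in angles) + "\n"
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(msg.encode())

    t.join()
    srv.close()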

Block Diagram: BCI control of a robotic arm


Description of the virtual environment:

The virtual environment consists of a robotic arm in an office environment with multiple objects. The user is given two control options when the cmo starts: pressing the ‘V’ key enables control of the robotic arm from the keyboard.

Pressing the ‘M’ key instead hands control to MATLAB, which drives the robot automatically using input from the BCI2000 software. Keyboard input is for demonstration purposes only; the ultimate goal of the project is to control the robotic arm through MATLAB. The path planning of the robot is carried out as explained in the following block diagram.
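
The choice between the two modes can be sketched as a small dispatcher that routes either the keyboard command or the MATLAB command to the arm on each frame. The class and handler names below are illustrative assumptions; only the ‘V’ and ‘M’ keys come from the composition.

    class ArmController:
        # Dispatches each frame's joint command based on the selected mode.
        def __init__(self):
            self.mode = None                 # no control until the user chooses

        def on_key(self, key):
            if key == "V":
                self.mode = "keyboard"       # manual control, demonstration only
            elif key == "M":
                self.mode = "matlab"         # autonomous control via MATLAB/BCI2000

        def update(self, keyboard_cmd, matlab_cmd):
            if self.mode == "keyboard":
                return keyboard_cmd
            if self.mode == "matlab":
                return matlab_cmd
            return None

    ctrl = ArmController()
    ctrl.on_key("M")
    print(ctrl.update(keyboard_cmd=None, matlab_cmd=[0.1, 0.2, 0.0, 0.0, 0.0]))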

Modeling and Scripting:

The environment is modeled in 3D Studio Max and the robotic arm is created in Pro/ENGINEER; both are then imported into Virtools. We used the following sequence for scripting (a sketch of the frame-following idea appears after the list).

1.  Physicalize the robot and the environment.

2.  Attach a frame at the end of each link of the robotic arm.

3.  Set up constraints between the links of the robot.

4.  Use a motion controller BB at each link to make it follow the frame attached to it.

5.  Add a script to get input either from the keyboard or from MATLAB.

6.  Move the frames according to the input received.
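
A minimal sketch of the frame-following idea in steps 4 and 6: each joint chases the angle of the frame attached to it at a capped rate per tick. In Virtools this is the job of the motion controller BB; the joint names and rate limit below are assumptions.

    FRAME_TARGETS = {"shoulder": 0.6, "upper_arm": -0.4, "forearm": 0.9}
    joints = {name: 0.0 for name in FRAME_TARGETS}   # current joint angles
    MAX_STEP = 0.05                                  # assumed per-tick change (radians)

    def tick():
        # Each joint chases the angle of the frame attached to it, rate-limited.
        for name, target in FRAME_TARGETS.items():
            delta = max(-MAX_STEP, min(MAX_STEP, target - joints[name]))
            joints[name] += delta

    for _ in range(20):   # after enough ticks every joint settles on its frame
        tick()
    print(joints)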

Controls:

Left/Right Arrow: translation of the base plate in the X direction.

Num 8 and Num 5: rotation of the shoulder about the Z axis.

Num 7 and Num 4: rotation of the upper arm about the Z axis.

Num 9 and Num 6: rotation of the gripper about the Z axis.

Num 1 and Num 3: rotation of the forearm about the Z axis.

Num . and Num 0: opening and closing of the gripper.

These bindings are written out as a key map in the sketch below.
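
The encoding of each action as a (part, action, direction) triple is an illustrative assumption, including which key of each pair is the positive direction; the keys and the parts they move are the ones listed above.

    KEY_MAP = {
        "Left":  ("base plate", "translate X", -1),
        "Right": ("base plate", "translate X", +1),
        "Num 8": ("shoulder",   "rotate Z",    +1),
        "Num 5": ("shoulder",   "rotate Z",    -1),
        "Num 7": ("upper arm",  "rotate Z",    +1),
        "Num 4": ("upper arm",  "rotate Z",    -1),
        "Num 9": ("gripper",    "rotate Z",    +1),
        "Num 6": ("gripper",    "rotate Z",    -1),
        "Num 1": ("forearm",    "rotate Z",    +1),
        "Num 3": ("forearm",    "rotate Z",    -1),
        "Num .": ("gripper",    "open/close",  +1),
        "Num 0": ("gripper",    "open/close",  -1),
    }

    def handle(key):
        part, action, sign = KEY_MAP[key]
        print(f"{part}: {action} {'+' if sign > 0 else '-'}")

    handle("Num 8")   # -> shoulder: rotate Z +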

Interaction:

For MATLAB input we use an electrode cap, which records signals directly from the user's brain. The system is therefore autonomous and target-driven: specific objects are kept at specific places in the environment, and when the user wants to pick up an object, they simply initiate the signal for that object; path planning takes care of the rest.
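
A minimal sketch of this target-driven behaviour: a decoded selection names one of the known objects, and a planner produces a path to it with no further input from the user. The object names, their positions, and the straight-line planner are illustrative assumptions.

    import numpy as np

    OBJECTS = {                      # assumed objects at known positions (metres)
        "cup":   np.array([0.5,  0.2, 0.1]),
        "book":  np.array([0.3, -0.4, 0.0]),
        "phone": np.array([0.6,  0.0, 0.2]),
    }

    def plan_path(start, target_name, steps=5):
        # Straight-line Cartesian waypoints from the hand to the chosen object.
        goal = OBJECTS[target_name]
        return [start + (goal - start) * t for t in np.linspace(0.0, 1.0, steps)]

    # The user only initiates the selection; the rest runs automatically.
    hand = np.array([0.0, 0.0, 0.3])
    for waypoint in plan_path(hand, "cup"):
        print(np.round(waypoint, 3))   # each waypoint would be passed through IK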

Figure: Input from the electrode cap.

Cameras:

We have two cameras in our cmo: one gives the front view and the other the top view. The user can switch between them using the ‘Switch On Key’ BB. The controls are as follows.

Key ‘F’ - Front View

Key ‘T’ - Top View
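
A minimal sketch of the camera switch; the two views are the ones above, while the dispatch itself is an illustrative stand-in for the ‘Switch On Key’ BB.

    CAMERAS = {"F": "front view", "T": "top view"}
    active = "front view"

    def on_key(key):
        # Switch the active camera when 'F' or 'T' is pressed.
        global active
        if key in CAMERAS:
            active = CAMERAS[key]
            print("active camera:", active)

    on_key("T")   # -> active camera: top view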

Difficulties and Limitations:

Initially we had difficulty placing the frames at the exact positions and making the robotic joints follow their corresponding frames exactly. Through trial and error we eventually found the correct frame positions, and the joints now follow the frames exactly.

One difficulty remains: a conflict between collision detection on the robotic arm and the joints' frame-following behaviour. Because the frames are not set up for collision detection, they move past obstacles, pulling the robotic arm after them; the arm itself has collision detection, which prevents it from passing through the obstacles, so the arm and its frames fall out of step. We tried adding collision detection to the frames, but it did not fix the issue completely. Once the arm is controlled by more advanced path planning this should no longer be a problem, because the frames will then be moved autonomously around the obstacles.
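
One possible workaround, sketched below, is to test a frame's next position against the obstacles before applying it, so that the frames stop exactly where the arm would stop. The spherical obstacle and step size are assumptions; in the composition, Virtools' own collision detection would play the role of collides().

    import numpy as np

    OBSTACLE_CENTER = np.array([0.4, 0.0, 0.1])   # assumed spherical obstacle
    OBSTACLE_RADIUS = 0.15

    def collides(point):
        return np.linalg.norm(point - OBSTACLE_CENTER) < OBSTACLE_RADIUS

    def move_frame(frame_pos, direction, step=0.02):
        # Advance the frame only if its next position is collision-free,
        # so the frame stops exactly where the arm would stop.
        candidate = frame_pos + step * direction
        return candidate if not collides(candidate) else frame_pos

    pos = np.array([0.0, 0.0, 0.1])
    for _ in range(40):                       # push the frame toward the obstacle
        pos = move_frame(pos, np.array([1.0, 0.0, 0.0]))
    print(np.round(pos, 3))                   # halts at the obstacle boundary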