International Journal of Science, Engineering and Technology Research (IJSETR)

Volume 1, Issue 1, July 2012


Classification of Items for Machine Automation System (MAS)

Ei Ei Wai Myo Win1, Wut Yi Win2

Abstract—Industrial applications require some form of automated visual processing and classification of items placed on a moving conveyor. In our research work, we classify the shapes of the items on the moving conveyor of a Machine Automation System (MAS) and collect items of the same shape into groups. Here the visual sensor is a digital camera that takes a suitable image of the items for further processing. Edge detection is one of the key research areas in visual processing, because the edges of an image contain a wealth of its internal information and have higher pixel intensity values than the surrounding pixels. The most common methods used in the detection of edges are Roberts, Sobel, Prewitt, Laplacian, Canny, etc. In our research work, we classify the shapes of items into three types (circle, rectangle and irregular shape) with the help of the Sobel edge detection algorithm in the MATLAB image processing blockset. In this system, several feature sets and neural network topologies are evaluated to obtain better classification performance.

Index Terms—Sobel edge detection algorithm, Visual processing, Machine Automation System, Artificial neural networks

I. INTRODUCTION

With the development of many techniques in data acquisition, processing, and process control systems, the efficiency of many industrial applications has been improved with the aid of PLC, SCADA and some form of automated visual processing [1]. In this work, computers are used for interfacing between the visual processing and PLC systems. This industrial application requires some form of automated visual processing. Visual processing on the moving conveyor belt is used to detect the image of the object for a variety of tasks [2]. Visual processing is the sequence of steps that provides digitized information, based on the data flowing from visual sensors, to classify the shapes of the objects. Here the visual sensor is a digital camera that takes a suitable image of the items for further processing [3]. Edge detection is one of the key research areas in visual processing [4].

In this research work, the shapes of objects are classified into three types (circle, rectangle and irregular shape) with the help of the Sobel edge detection algorithm in the MATLAB image processing blockset. An interfacing circuit is required to connect the personal computer (PC) and the PLC training set. The object on the conveyor belt is captured using a web camera. The captured RGB image of the object is transformed into a grayscale image in order to detect the edges of the object. The image of the object is processed by neural network training [9], which gives the expected outcome for the assigned object. The classification result is sent through the parallel port, which is connected to the interfacing circuit. This circuit drives a G7M-DR40A Programmable Logic Controller [8]. Programmable Logic Controllers (PLCs) are widely used for the control of technological processes. A 5/2-way single-acting cylinder is used as the pushing device. A defined object, such as a square or circle shape, is pushed by the cylinder, and an undefined object is allowed to fall into the respective tray at the end of the moving conveyor.
This industrial control system can be used robustly for classifying objects in any industry where visual processing and accurate process results are needed.


Figure1.Block diagram of electro-mechanical conveyor belt system for object classification

A typical classification process comprises five main steps:

1)  Locating or recognizing the items on the conveyor belt via some type of sensor, such as a camera, scanner, etc.

2)  Acquiring the necessary data from the item (i.e. taking pictures, or measuring reflected light, electromagnetic waves, or another type of signal). The acquisition device is usually located above the conveyor belt to view the items orthographically.

3)  Processing the data to extract several useful features.

4)  Classification of the item using the extracted features and a classifier.

5)  Performing the necessary action following the classification result of the classifier.
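Under the assumption of generic sensor, camera, classifier and actuator interfaces (hypothetical names, not the paper's MATLAB implementation), the five steps above can be sketched as one pass of a processing loop:

```python
# Illustrative sketch of the five-step classification process.
# Every object passed in (sensor, camera, classifier, actuator) is a
# hypothetical stand-in for the real hardware/software interface.
def classify_item(sensor, camera, extract_features, classifier, actuator):
    if not sensor.item_detected():        # 1) locate the item on the belt
        return None
    image = camera.capture()              # 2) acquire the data (take a picture)
    features = extract_features(image)    # 3) extract useful features
    label = classifier.predict(features)  # 4) classify with the extracted features
    actuator.sort(label)                  # 5) act on the classification result
    return label
```

Each step maps onto one component of the electro-mechanical system in Figure 1.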

II. Sobel Edge Detection Algorithm

In general, an edge detection algorithm finds the sharp intensity variations of an image and in this way obtains the edges of the objects contained in the image. There are various methods to detect edges, which use discrete gradients, Laplacians, etc. The most common methods used in the detection of edges are Roberts, Sobel, Prewitt, Laplacian, Canny, etc. Their operators are masks of 3×3 windows (2×2 windows in the Roberts algorithm) which are convolved with the incoming image to assign each pixel a value of 0 or 255. To obtain better results, each method applies between two and four masks to find edges in the image [10].

A digital image can be represented as an M × N matrix of gray values:

$$f(x,y)=\begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix} \qquad (1)$$

Each element in the matrix is called a pixel or image element; f(x, y) represents the light intensity of the pixel, also known as the gray value (i.e. the brightness value). It is a form of energy, and f(x, y) values range from 0 (black) to 255 (white); different values between 0 and 255 stand for different gray levels. The edge function takes a grayscale or binary image I as its input and returns a binary image BW of the same size as I, with 1's where the function finds edges in I and 0's elsewhere.

In its extended form, the Sobel edge detection algorithm uses four operators (also called masks or kernels) of 3×3 windows, which measure the intensity variation of the image when they are convolved with it in four directions: horizontal, vertical, right diagonal and left diagonal.

A.  An Edge Detection Model Based on the Sobel Operator

The directional derivative estimate vector G is defined as the density difference divided by the distance to the neighbor. This vector is determined such that the unit vector gives the direction of G towards the approximate neighbor [6].

a   b   c
d   e   f
g   h   i

Figure.2. A point and its 8 neighboring values

The neighbors group into antipodal pairs: (a,i), (b,h), (c,g), (d,f). The operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives, one for horizontal changes and one for vertical. If we define A as the source image, and Gx and Gy as two images which at each point contain the horizontal and vertical derivative approximations, the computations are as follows:

$$G_x=\begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}*A \quad\text{and}\quad G_y=\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix}*A \qquad (2)$$

where * denotes the 2-dimensional convolution operation. Since the Sobel kernels can be decomposed as the products of an averaging kernel and a differentiation kernel, they compute the gradient with smoothing. For example, Gx can be written as

$$G_x=\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} -1 & 0 & +1 \end{bmatrix}*A \qquad (3)$$

The x-coordinate is defined here as increasing in the "right"-direction, and the y-coordinate is defined as increasing in the "down"-direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude, using:

$$G=\sqrt{G_x^{2}+G_y^{2}} \qquad (4)$$

Using this information, the gradient's direction can also be calculated:

$$\Theta=\operatorname{atan2}(G_y, G_x) \qquad (5)$$

where Θ is 0 for a vertical edge which is darker on the right side.

The operator consists of a pair of 3×3 convolution masks, one mask simply being the other rotated by 90°. These masks are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one mask for each of the two perpendicular orientations.
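Equations (2), (4) and (5) can be checked with a small NumPy sketch. The paper's implementation uses MATLAB's image processing blockset; this Python version, including the edge threshold value, is an illustrative assumption:

```python
import numpy as np

# Sobel kernels of Equation (2): x increases rightward, y downward,
# so theta is 0 for a vertical edge that is darker on the right side.
KX = np.array([[-1, 0, +1],
               [-2, 0, +2],
               [-1, 0, +1]], dtype=float)
KY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [+1, +2, +1]], dtype=float)

def convolve2d(image, kernel):
    """Plain 2-D convolution (kernel flipped), 'valid' region only."""
    k = np.flipud(np.fliplr(kernel))
    h, w = k.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * k)
    return out

def sobel_edges(image, threshold=100.0):
    gx = convolve2d(image, KX)               # horizontal derivative, Gx
    gy = convolve2d(image, KY)               # vertical derivative, Gy
    magnitude = np.sqrt(gx**2 + gy**2)       # Equation (4)
    theta = np.arctan2(gy, gx)               # Equation (5)
    edges = (magnitude > threshold).astype(np.uint8)  # binary BW output
    return edges, magnitude, theta
```

Because KX is the outer product of the averaging column [1, 2, 1]ᵀ and the differencing row [−1, 0, +1], as in Equation (3), the same gradients could also be computed with two separable 1-D passes.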

Figure.3. Three different object groups: 3(a), 3(b), 3(c)

The Sobel edge detection method is utilized in this process to classify the shape of the object, whether it is round, square or undefined. Figure 3 shows the objects which are used for this control system.

Figure.4. Sobel Algorithm used in MATLAB

Figure 4 shows how the Sobel edge detection algorithm is used in this object classification system.

III.  Object Classification System

A.  Image Acquisition and Preprocessing

Image processing starts with acquiring the digital image. The image acquisition part of the system is designed to capture the surface of the object. The system consists of a camera, a connection cable, a light source and a personal computer. The image acquisition setup is shown in Figure 5.

The camera is mounted on top of a stand with fixed lighting conditions. There have been hundreds of articles describing various methods for 2D object recognition in industrial applications [3]. Systems with extremely robust performance are available commercially for a wide variety of tasks, including the automobile, electronics and metal industries. This automated visual processing system uses an edge detection algorithm to identify defined objects (i.e. circle, square and irregular), formulated as 2D object recognition.

Figure.5. Web-Camera for image acquisition

B.  Artificial Neural Network

A neural network is used for training on the images. An Artificial Neural Network (ANN) classifier can achieve very high classification rates. A training set is needed for adjusting the weights of an ANN; this data set should contain a sufficient number of samples and represent as much variation as possible for effective learning. Because of these advantages, this system utilizes the perceptron neural network algorithm.

The training set is simulated using the Perceptron Neural Network (newp) [6] with the control algorithm in MATLAB, and is utilized for comparing the snapshot image with the trained images.

In this research, 30 images are trained for each class. The image of the object is processed by neural network training, which gives the expected outcome for the assigned object. The ANN algorithm is written with a graphical user interface (GUI). Figure 7 shows the program of this classification system.
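MATLAB's newp builds a single-layer perceptron with a hard-limit transfer function. A minimal NumPy sketch of that learning rule is shown below; the toy features, targets and epoch count are illustrative assumptions, not the paper's 30-image training set:

```python
import numpy as np

# Minimal perceptron (hardlim) learning rule, analogous to MATLAB's newp.
# Feature vectors and targets here are illustrative, not the paper's data.
def train_perceptron(X, y, epochs=20):
    """X: (n_samples, n_features) array, y: 0/1 target labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0  # hard-limit transfer function
            err = ti - out                    # perceptron error
            w += err * xi                     # weight update rule
            b += err                          # bias update rule
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)
```

On linearly separable data the perceptron rule is guaranteed to converge, which is why enough representative training samples per class matter.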

Figure.6. ANN written in GUI

Figure.7. ANN is used in MATLAB.

Figure.8. Result

C.  Overall Flowchart

The operation of the system is described as follows.

Figure.9. Flowchart of the overall system

1) When the conveyor starts, the sensor detects whether the object has reached the predefined location.

2) When the object has reached the defined location, the camera captures the object image and sends the input RGB image to the computer through the USB port. The moving conveyor is stopped in this condition.

3) After the image classification process finishes, the result, consisting of the object information from the object image, is sent to the relay circuit through the parallel port. This result signal triggers the PLC to execute the following instruction.

4) Depending on the incoming data from the parallel port, one of the PLC input pins activates the output of the single-acting cylinder at the defined position. In accordance with the incoming input data, the PLC restarts the conveyor and, after a pre-determined duration, stops it again. The purpose of this operation is to bring the object into the proper position for the pneumatic cylinder.

5) When the object falls into the correct tray, the conveyor is started again and the system waits for the next object.
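One pass through the flowchart can be written as a control cycle. All hardware objects below (sensor, camera, conveyor, parallel port) are hypothetical stubs, and the PLC-side positioning of step 4 is reduced to a single conveyor restart:

```python
# Illustrative sketch of one cycle of the overall flowchart.
# The hardware objects are hypothetical stand-ins, not real drivers.
def run_cycle(sensor, camera, classify, parallel_port, conveyor):
    conveyor.start()
    while not sensor.object_at_position():   # 1) wait for the object
        pass
    conveyor.stop()                          # 2) stop, then capture the image
    image = camera.capture()
    result = classify(image)                 # 3) classify the captured image
    parallel_port.write(result)              #    send result to the relay circuit
    conveyor.start()                         # 4) restart so the PLC can position
    return result                            # 5) caller waits for the next object
```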

IV.  Parallel Port with Interfacing Circuit

Accessing the individual pins of the parallel port under Windows 2000 and Windows XP is a privileged operation. The Data Acquisition Toolbox installs a driver called winio.sys that provides access to the parallel port pins. After connecting a 330 Ω resistor between a BC547 transistor and the 5 V supply of the PC, an LED can be lit up through the printer port.

Figure.10. Parallel port connects with interfacing circuit

A single parallel port, labeled LPT1, is chosen to receive data from the PC and activate the defined cylinder. A DIO object is created for this port with parport = digitalio('parallel','LPT1').

Figure.11. Interfacing circuit diagram

V. Machine Automation System

The MAS training set has the following main features.

Figure.12. MAS training set

They are available in standard and modular types, with a selectable controller such as a PLC or PC-based controller, durability and safety with a short-circuit protector, a changeable sequence with the modular type, industrial sensor and actuator control, and easy usage with an exciting challenge.

A.  PLC Programming and Tables

The MAS set uses GMWIN PLC programming. A designated serial port connects the PC running GMWIN to the PLC with an RS-232C cable. GMWIN is a programming and debugging tool for the full range of GLOFA PLCs.

It is very easy to create and test a program, because several programs can be included in one PLC system and it is possible to compile and debug several programs at the same time. A program can be created using symbols for easy understanding; memory addresses are assigned automatically, and various data types are supported.

Figure.13. GMWIN PLC and Ladder Diagram

Figure.14. Solenoid Valve Unit and Conveyor Motor

The ladder diagram is written for the GM-7 PLC type. The operation of this ladder diagram is stated systematically. When the start button (SW1) is pressed, the green lamp (L1) for the output signal is lit. After the optical fiber sensor (S1) detects the object, the distribution cylinder output (Y1) advances. After the distribution cylinder advances, the limit switch (LS1) is opened. LS1 stands for an input signal. A 5/2-way double solenoid valve is used for the Y1 cylinder advance direction and the Y2 cylinder retract direction. When LS1 is on, (Y2) is activated in the retract direction. After the limit switch (LS2) opens, the transfer cylinder (Y4) advances and retracts to move the object onto the conveyor. The object on the conveyor is transferred with the aid of the transfer cylinder.

The web-cam sensor captures the object image. An on-delay timer (TON) is used for the image operation processes. After waiting for 20 s, the conveyor motor runs. The motor speed must be adjusted so that the object reaches the defined location. I00 is the input data received from the parallel port, and it makes the sorting cylinder (Y5) advance into the respective tray. When the limit switch (LS8) input module is opened, the sorting cylinder retracts at once.

Figure.14. Distribution Cylinder

TABLE I
Input/Output Location List

Input
No. / Control Unit / PLC Unit / Remark
1 / SW1 / %IX0.0.0 / Start
2 / SW2 / %IX0.0.1 / Stop
3 / S1 / %IX0.0.4 / Object detection sensor
4 / I00 / %IX0.0.16 / Object result from PC
5 / LS1 (B1) / %IX0.0.8 / Distribution cylinder retract detection
6 / LS2 (B2) / %IX0.0.9 / Distribution cylinder advance detection
7 / LS5 (B5) / %IX0.0.12 / Transfer cylinder retract detection
8 / LS6 (B6) / %IX0.0.13 / Transfer cylinder advance detection
9 / LS7 (B7) / %IX0.0.14 / Sorting cylinder retract detection
10 / LS8 (B8) / %IX0.0.15 / Sorting cylinder advance detection

Output
No. / Control Unit / PLC Unit / Remark
1 / L1 / %QX0.0.0 / Green lamp
2 / Conveyor / %QX0.0.4 / Conveyor motor
3 / Y1 / %QX0.0.5 / Distribution cylinder advance
4 / Y2 / %QX0.0.6 / Distribution cylinder retract
5 / Y4 / %QX0.0.8 / Transfer cylinder
6 / Y5 / %QX0.0.9 / Sorting cylinder

The input/output modules used for complete system processing are displayed in Table I. It contains the required control units for this system.