Road Guidance Sign Recognition in Urban Areas by Structure
Vavilin Andrey and Kang-Hyun Jo
Graduate School of Electrical Engineering, University of Ulsan, Korea
680-749 San 29 Mugeo 2-Dong, Nam-gu, Ulsan, Korea.
{andy/jkh2005}@islab.ulsan.ac.kr
Abstract – This paper considers the problem of localizing and recognizing road guidance signs in cluttered environments. Signs are detected in input images using both color and shape properties: road guidance signs have a specific background color (green, blue or brown) and a rectangular shape. First, color segmentation is applied to detect candidate sign regions. The obtained regions are grouped using the 8-neighbor method, and additional filtering by shape properties discards non-rectangular regions. The symbols inside a road guidance sign can typically be divided into three groups (except in the “sign-in-sign” case): the arrow region, text regions with direction descriptions, and a region with the distance to the crossroad. A crucial step in recognizing a guidance sign is detecting the arrow region and understanding its structure; this region typically has the largest area among the symbols on the sign plate. The colors used for road signs are highly contrasting, which allows symbols to be extracted from the sign background using color information. Two different algorithms were applied to detect arrowheads: a genetic algorithm and a border tracing algorithm. The genetic algorithm uses a deformable arrowhead model with five deformation parameters; an initial population is randomly distributed inside the arrow region and evolved to maximize matching. The border tracing algorithm detects corner points on the outer boundary of the arrow and checks each corner point against the arrowhead parameters. The proposed algorithm localizes road guidance signs under different weather and lighting conditions, in daytime and at night, with a probability higher than 92%. The processing speed is high enough to apply the algorithm in time-critical applications; with the border tracing method, the total processing time for one image was less than 0.08 sec.
I. INTRODUCTION
Road guidance signs provide drivers with information important for efficient navigation, which makes their automatic detection and recognition an important problem for driver assistance systems. Such systems must be fast and provide robust results under different lighting and environment conditions. In recent years, traffic sign recognition has attracted the interest of many researchers; however, the recognition of road guidance signs is still not well studied, and only a few papers address this topic [1,2,4,6]. The proposed framework, based on structural analysis, allows recognition of road guidance signs with complex structure.
II. SIGN DESCRIPTION
Information signs are not as well structured as other types of signs. However, they have a set of specific properties which are important for detection and recognition. Typically an information sign has one of three background colors — green, blue or brown — or it may consist of several regions of different colors (the “sign-in-sign” case), as shown in Fig. 1. Hence color information can be used to detect sign candidates. Usually, white is used to represent the information on the sign.
Fig. 1. Example of information signs (“sign-in-sign” case).
Typically an information sign is composed of a set of components (Fig. 2), such as:
- arrow region;
- text regions with description of direction;
- distance to crossroads;
- road numbers.
The complexity of the arrow region depends on the actual road situation. The key issue in recognizing an information sign lies in understanding the structure of its arrow region. Knowledge of the number, position and orientation of the arrowheads simplifies separating the symbol information into groups by arrow direction. Furthermore, the arrow region can contain information about road numbers.
Fig.2. Sign structure.
III. DETECTION AND RECOGNITION OF SIGNS
The proposed algorithm can be divided into two main parts: detection and recognition. First, we use color properties to detect sign candidates in the image. Second, we split the detected candidates into structural components and analyze them in order to recognize the structure of the sign. The main scheme of the algorithm is shown in Fig. 3.
Fig. 3. Scheme of detection and recognition.
A. Sign Detection
The detection algorithm consists of two main parts: candidate detection and filtering. A color segmentation method in RGB color space was used to detect candidate regions. The result of this step is a binary mask (where 0 means background and 1 means a possible sign) which represents candidate sign locations.
To detect green and blue information signs (brown signs are not considered in this paper) the following criterion was applied:
(1)
where R(x, y), G(x, y) and B(x, y) are the red, green and blue components of the pixel at coordinates (x, y) of the input image.
A pixel belongs to an information sign if
(2)
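The paper's exact thresholds in Eqs. (1)-(2) are not reproduced above, but the idea — a pixel is a candidate if its green or blue channel dominates the other two — can be sketched as follows. The margin value here is an illustrative assumption, not the paper's tuned parameter:

```python
# Illustrative color criterion for green/blue sign background pixels.
# The margin threshold is an assumption, not the paper's Eqs. (1)-(2).

def is_sign_pixel(r, g, b, margin=20):
    """Return True if an RGB pixel looks like a green or blue sign background."""
    green_like = g > r + margin and g > b + margin   # green sign background
    blue_like = b > r + margin and b > g + margin    # blue sign background
    return green_like or blue_like

def sign_mask(image):
    """Binary mask: 1 for candidate sign pixels, 0 for background.
    `image` is a list of rows of (r, g, b) tuples."""
    return [[1 if is_sign_pixel(r, g, b) else 0 for (r, g, b) in row]
            for row in image]
```

The per-pixel test keeps the detection stage cheap, which matters for the real-time constraint stated in the abstract.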
The next step is grouping connected regions. This step is the key to separating sign regions from non-sign regions of the environment. All connected candidate pixels are grouped into one candidate (using 8-neighbors) if the distance in color space between adjacent pixels is less than 30. After this step we obtain a set of candidates c1, …, cN, where N is the number of candidates. Each candidate is a group of connected pixels with similar color properties; the bounding box and area of each candidate are also calculated at this step.
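The grouping step above can be sketched as a breadth-first flood fill over the binary mask. The 8-neighborhood and the color-distance limit of 30 come from the text; the dictionary layout of a candidate is an assumption for illustration:

```python
from collections import deque

def color_dist(c1, c2):
    # Euclidean distance between two RGB triples
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def group_candidates(mask, image, max_dist=30):
    """Group mask pixels into 8-connected candidates; adjacent pixels are
    joined only if their RGB distance is below `max_dist` (30 in the paper).
    Returns a list of dicts with pixel list, area and bounding box."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                pixels, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for dx in (-1, 0, 1):            # 8-neighborhood
                        for dy in (-1, 0, 1):
                            nx, ny = cx + dx, cy + dy
                            if (0 <= nx < w and 0 <= ny < h
                                    and mask[ny][nx] and not seen[ny][nx]
                                    and color_dist(image[cy][cx],
                                                   image[ny][nx]) < max_dist):
                                seen[ny][nx] = True
                                queue.append((nx, ny))
                xs = [p[0] for p in pixels]
                ys = [p[1] for p in pixels]
                candidates.append({"pixels": pixels, "area": len(pixels),
                                   "bbox": (min(xs), min(ys), max(xs), max(ys))})
    return candidates
```

Computing the bounding box during the same pass avoids a second scan over the image, in line with the paper's emphasis on processing speed.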
To reduce the number of false candidates, a set of constraint rules is applied. The following candidate parameters are checked:
- The area should be greater than a predefined threshold value.
- The width, height and width-to-height ratio should satisfy constraint thresholds, in order to discard candidates that are too small or too thin.
- Information signs contain symbol information in white; hence candidates without white pixels inside can be discarded.
- Non-rectangular candidates can also be discarded.
This step reduces the number of sign candidates, which is very important in urban areas. An example of sign detection is shown in Fig. 4.
Fig. 4. Result of sign detection: (a) original image; (b) detected candidates; (c) candidates left after applying constraint rules.
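The constraint rules listed above can be sketched as a single filter function. All numeric thresholds below are illustrative assumptions (the paper does not state its tuned values); the candidate dictionary layout is likewise assumed:

```python
def passes_filters(cand, image, min_area=400, min_w=20, min_h=10,
                   ratio_range=(1.0, 6.0)):
    """Apply the constraint rules of Section III-A to one candidate.
    Threshold values are assumptions, not the paper's parameters."""
    x0, y0, x1, y1 = cand["bbox"]
    w, h = x1 - x0 + 1, y1 - y0 + 1
    if cand["area"] < min_area:                        # too small
        return False
    if w < min_w or h < min_h:                         # too thin
        return False
    if not ratio_range[0] <= w / h <= ratio_range[1]:  # bad aspect ratio
        return False
    # Rectangularity: a rectangular sign fills most of its bounding box.
    if cand["area"] / (w * h) < 0.7:
        return False
    # Signs must contain white (symbol) pixels inside the bounding box.
    return any(min(image[y][x]) > 200
               for y in range(y0, y1 + 1) for x in range(x0, x1 + 1))
```

Cheap tests (area, size) run first so most false candidates are rejected before the per-pixel white-color scan.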
B. Sign Recognition
Some approaches to the recognition of information signs are presented in [4-6]. The proposed recognition method is based on structural analysis of the sign components. Hence one important problem is the extraction of symbol information from a sign candidate. The extraction method is based on color properties: the colors used for information signs are highly contrasting, so pixels whose intensity is more than 1.2 times that of their neighbor pixels are selected as symbols. An example of symbol extraction is shown in Fig. 5. Connected symbol pixels are also grouped into “symbol clusters” at this step.
Fig. 5. Example of symbol extraction: (a) original image; (b) extracted symbols.
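The contrast test can be sketched as follows. As a simplification of the paper's neighbor-based comparison, this version compares each pixel's intensity against the mean intensity of the candidate region; only the 1.2 ratio comes from the text:

```python
def extract_symbols(image, bbox, ratio=1.2):
    """Mark pixels whose intensity exceeds `ratio` times the mean intensity
    of the candidate region (a simplification of the paper's per-neighbor
    contrast test; the 1.2 ratio is from the text)."""
    x0, y0, x1, y1 = bbox
    intensity = lambda p: sum(p) / 3.0
    pixels = [image[y][x]
              for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]
    mean = sum(intensity(p) for p in pixels) / len(pixels)
    # Binary symbol mask over the bounding box: 1 = symbol, 0 = background
    return [[1 if intensity(image[y][x]) > ratio * mean else 0
             for x in range(x0, x1 + 1)] for y in range(y0, y1 + 1)]
```

Because the sign background is a saturated green or blue while symbols are white, a relative intensity threshold is less sensitive to global illumination changes than a fixed one.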
At the next step we use the area of the symbol clusters to find the arrow region. Typically, the arrow region is the largest symbol. We choose all symbol clusters with area greater than 90% of the maximum area. If more than 4 clusters are selected, the sign contains no arrow region or the symbol information was extracted incorrectly.
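This selection rule is simple enough to state directly in code; the 90% fraction and the limit of 4 clusters are both from the text, while the cluster representation is an assumed dictionary:

```python
def find_arrow_clusters(clusters, frac=0.9, max_clusters=4):
    """Select symbol clusters whose area exceeds `frac` of the maximum area.
    Following the paper, more than `max_clusters` selections means the sign
    has no arrow region (or symbol extraction failed), signalled by None."""
    if not clusters:
        return None
    max_area = max(c["area"] for c in clusters)
    big = [c for c in clusters if c["area"] > frac * max_area]
    return big if len(big) <= max_clusters else None
```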
An information sign shows the driver where to go to reach a specific destination. Hence the meaning of an information sign can be represented as a set of “direction-destination” pairs. All possible directions are represented by the arrow region, and all destinations and additional information are described by the text regions. Text regions are grouped near the arrowheads, so localizing the arrowheads inside the arrow region provides information important for separating the text regions. A border tracing algorithm was used to detect arrowheads; arrowheads have specific geometrical properties which can be used for localization.
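The grouping of text regions around arrowheads can be sketched as a nearest-arrowhead assignment. The paper only states that a distance criterion is used; the centre-of-cluster Euclidean distance below is an assumption:

```python
import math

def assign_text_to_arrowheads(text_clusters, arrowheads):
    """Assign each text cluster to its nearest arrowhead, measured as the
    Euclidean distance from the cluster centre to the arrowhead point.
    This simple criterion is an assumption standing in for the paper's
    unspecified distance-based clustering."""
    def centre(cluster):
        xs = [p[0] for p in cluster]
        ys = [p[1] for p in cluster]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    groups = {i: [] for i in range(len(arrowheads))}
    for cluster in text_clusters:
        cx, cy = centre(cluster)
        best = min(range(len(arrowheads)),
                   key=lambda i: math.hypot(arrowheads[i][0] - cx,
                                            arrowheads[i][1] - cy))
        groups[best].append(cluster)
    return groups
```

Each resulting group then yields one “direction-destination” pair for the sign.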
Fig. 6. Arrowhead structure.
As shown in Fig. 6, the problem of arrowhead detection can be reduced to localizing the arrowhead point. This is done by the following algorithm:
- trace the border of the arrow region until a corner point is detected;
- if the corner point satisfies Eqs. (3)-(4), an arrowhead is localized;
- continue tracing the border until a loop is detected.
(3)
(4)
where α is the interior angle and l1, l2 are the lengths of the connected edges. After all arrowheads are detected, the text regions are divided into clusters using a distance criterion; the number of clusters is equal to the number of arrowheads. Each cluster corresponds to one arrowhead and hence describes one possible direction. Results of the recognition step are shown in Fig. 7.
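The corner test can be sketched as follows. The exact thresholds of Eqs. (3)-(4) are not reproduced in this text, so the angle and edge-length limits below are assumptions; only the general form — a sharp interior angle between two sufficiently long edges — follows the paper:

```python
import math

def interior_angle(prev_pt, corner, next_pt):
    """Interior angle (degrees) at `corner` formed by the two boundary edges."""
    v1 = (prev_pt[0] - corner[0], prev_pt[1] - corner[1])
    v2 = (next_pt[0] - corner[0], next_pt[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def is_arrowhead(prev_pt, corner, next_pt, max_angle=60.0, min_edge=5.0):
    """Arrowhead test in the spirit of Eqs. (3)-(4): the tip must form a
    sharp interior angle and both connected edges must be long enough.
    `max_angle` and `min_edge` are illustrative assumptions."""
    n1 = math.hypot(prev_pt[0] - corner[0], prev_pt[1] - corner[1])
    n2 = math.hypot(next_pt[0] - corner[0], next_pt[1] - corner[1])
    if n1 < min_edge or n2 < min_edge:
        return False
    return interior_angle(prev_pt, corner, next_pt) < max_angle
```

Applying this test to each corner found while tracing the outer boundary yields the set of arrowhead points used to cluster the text regions.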
Fig. 7. Recognition results: (a) original image; (b) arrow region; (c) arrow border; (d) arrowheads; (e) text information; (f) text clusters shown by different color; (g) Recognition result.
IV. EXPERIMENTAL RESULTS
Real images with resolution 640x480 were used for the experiments. Images were taken under different lighting and weather conditions in order to analyze the robustness of the proposed algorithm. The final database consists of 95 images with 119 signs. All images are divided into 3 groups depending on lighting conditions. The first group contains images taken in daytime in good weather. The second group contains images taken in rainy and cloudy weather. The last group contains images taken in late evening and in heavy rain and fog. Examples of signs from these groups are shown in Fig. 8. Results of the detection and recognition steps are presented in Table 1. A sign was considered recognized correctly if the algorithm output all direction-description pairs correctly.
Fig. 8. Examples of sign groups: (a) images made in good lighting conditions; (b) images made in bad environment conditions such as rain and fog; (c) images made in bad lighting conditions.
Table 1. Experimental results.
                               Group 1   Group 2   Group 3
  Number of images               23        47        25
  Number of signs                28        62        29
  Not detected                    0         2         1
  Number of recognized signs     23        45        22
All experiments were performed on a Pentium IV 2.6 GHz with 512 MB RAM in the Borland C++ Builder 6.0 environment. Images of size 640x480 were used. Computation time was measured in C++ to a precision of hundredths of a second. The average computation time is shown in Table 2.
Table 2. Average computation time.
  Stage                        Time for stage (sec)
  Detection                      0.01
  Filtering                    < 0.01
  Recognition (for one sign)     0.10
V. CONCLUSIONS
Experimental results indicate that the proposed method can be used for real-time detection and recognition of restricting, warning and information signs, with a processing time of about 0.11 seconds per frame. The proposed system shows a good recognition rate under various lighting and weather conditions; the recognition rate in bad environment conditions is 96%, and the system provides robust results in dark and rainy environments. Nevertheless, the system can be improved to decrease the number of false-positive detections and thereby reduce the total computation time.
ACKNOWLEDGMENT
The authors would like to thank Ulsan Metropolitan City, and MOCIE and MOE of the Korean Government, which partly supported this research through the NARC and post-BK21 projects at the University of Ulsan.
REFERENCES
[1] S. Azami, S. Katahara and M. Aoki, “Route Guidance Sign Identification Using 2-D Structural Description”, Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 153–158, 1996.
[2] S. Azami and M. Aoki, “Route Guidance Sign Recognition”, Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 338–343, 1995.
[3] P. Gil-Jimenez, S. Lafuente-Arroyo, H. Gomez-Moreno, F. Lopez-Ferreras and S. Maldonado-Bascon, “Traffic sign shape classification evaluation. Part II. FFT applied to the signature of blobs”, Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 607–612, 2005.
[4] T. Kato, A. Kobayasi, H. Hase and M. Yoneda, “An Experimental Consideration for Road Guide Sign Understanding in ITS”, Proceedings of the IEEE Intelligent Transportation Systems Conference, pp. 268–273, Singapore, 2002.
[5] S. Lafuente-Arroyo, P. Gil-Jimenez, R. Maldonado-Bascon, F. Lopez-Ferreras and S. Maldonado-Bascon, “Traffic sign shape classification evaluation I: SVM using distance to borders”, Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 557–562, 2005.
[6] J-H. Lee and K-H. Jo, “Traffic sign recognition by division of characters and symbols”, Science and Technology, Proceedings KORUS 2003. The 7th Korea-Russia International Symposium, vol.2, pp. 324-328, 2003.