CHAPTER 1
INTRODUCTION
Blindness is the condition of lacking visual perception due to physiological or neurological factors. Various scales have been developed to describe the extent of vision loss and define blindness. Total blindness is the complete lack of form and visual light perception and is clinically recorded as NLP, an abbreviation for no light perception. Blindness is also frequently used to describe severe visual impairment with residual vision; those described as having only light perception have no more sight than the ability to tell light from dark and the general direction of a light source. Visually impaired people need some assistance to move from one place to another in day-to-day life, either dependently, with the help of others, or independently, with the help of canes, trained guide dogs and similar aids. In both cases the essential objective is to detect the obstacle in front of them and avoid it while moving. With the advent of electronic technologies, self-assistive devices have been built to help them. Some of the present technologies are as follows.
1.1 LASER CANE
This is an electronic cane that uses invisible laser beams to detect obstacles, drop-offs, and similar hazards in the surroundings. Once the cane detects an obstacle or drop-off using the laser beams, it produces a specific audio signal. The cane has three distinct audio signals, each indicating a specific distance; the signal informs the user of the distance of the obstacle or the height of the drop-off. The device can detect objects and hazards up to a distance of 12 feet.
Figure 1.1 BLIND PERSON WITH LASER CANE
A part of the cane's handle also vibrates when there is an object in front of the user. The laser cane is suitable for persons who are blind and persons who are deaf-blind, and it can be used on its own. However, mobility experts strongly recommend that blind persons first learn the use of the long white cane before using the laser cane. The laser cane emits beams of invisible light that produce sounds or vibrations when a beam encounters an object, alerting the user to an obstruction ahead. It weighs one pound and is made of aluminum and steel.
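To make this behaviour concrete, the Python sketch below maps a measured distance to one of three alerts. The specific thresholds (4, 8 and 12 feet) are illustrative assumptions; the description above only states that there are three distinct signals, a vibrating handle, and a 12-foot maximum range.

def cane_alert(distance_ft):
    """Map a measured distance (in feet) to an alert for the user.
    Thresholds are hypothetical; the source gives only the 12 ft limit."""
    if distance_ft > 12:
        return "silent"                      # beyond sensing range
    if distance_ft > 8:
        return "low tone"                    # distant obstacle
    if distance_ft > 4:
        return "mid tone"                    # approaching obstacle
    return "high tone + handle vibration"    # immediate hazard

for d in (15.0, 10.0, 6.0, 2.0):
    print(d, "ft ->", cane_alert(d))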
1.2 SONIC MOBILITY DEVICE
This is a device that is generally mounted on the user's head. It uses ultrasonic technology to detect obstacles and other objects located in the user's path. The sonic mobility device uses the eight tones of the musical scale to indicate the distance of the object; each tone signifies a particular distance from the obstruction. The user hears the tone through the device's earpiece.
Figure 1.2 SONIC MOBILITY DEVICES
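As an illustration of the eight-tone encoding described above, the sketch below assigns each distance band a tone of the C-major scale. The 0.5 m band width and the choice of C major are assumptions made for the example; the text does not specify the actual bands or pitches.

C_MAJOR_HZ = [261.63, 293.66, 329.63, 349.23,
              392.00, 440.00, 493.88, 523.25]   # C4 .. C5, one per band
BAND_M = 0.5   # assumed distance covered by each tone, in metres

def tone_for_distance(distance_m):
    """Return the tone frequency (Hz) for an obstacle, or None when
    the obstacle lies beyond the eighth band."""
    band = int(distance_m / BAND_M)
    return C_MAJOR_HZ[band] if 0 <= band < len(C_MAJOR_HZ) else None

print(tone_for_distance(1.2))   # third band -> 329.63 Hz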
1.3 GPS DEVICES FOR THE BLIND
Although mainly used for identifying one's location, GPS (Global Positioning System) devices also help blind persons travel independently. Blind persons can use portable GPS units to determine and verify the correct travel route, whether they are walking or riding a vehicle. GPS devices for the blind include screen readers so the user can hear the information.
Other GPS devices are connected to a Braille display so the user can read the information in Braille. A blind person should still use a conventional mobility device in addition to the GPS system.
Figure 1.3 GPS DEVICES
The Braille devices and software help blind people improve their reading and writing skills. Literacy is very important for these individuals because it allows them to hope for a productive future and, at the same time, to live with confidence. These innovative Braille devices and software help visually impaired individuals print and store information quickly, quietly, and reliably.
1.4 ULTRASOUND BASED DETECTION
Here a wearable system is implemented for visually impaired users that allows them to detect and avoid obstacles. It is based on ultrasound sensors, which acquire range data from objects in the environment by estimating the time-of-flight of the ultrasound signal. Using a hemispherical sensor array, the system detects obstacles and determines which directions should be avoided. The ultrasound sensors, however, only detect whether obstacles are present in front of the user; unimpeded directions are determined by analysing patterns of range values from successive frames. Feedback is presented to users in the form of voice commands and vibration patterns.
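The core range computation is simple: the sensor measures how long an ultrasound pulse takes to reach the obstacle and return, and the distance follows from the speed of sound. The sketch below also shows one plausible way to pick unimpeded directions from successive frames; the 2 m clearance threshold and the "clear in every recent frame" rule are assumptions for illustration, not the system's actual algorithm.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def range_from_tof(tof_s):
    """Distance (m) from time-of-flight (s); the pulse travels to the
    obstacle and back, hence the division by two."""
    return SPEED_OF_SOUND * tof_s / 2.0

def unimpeded_directions(frames, clear_m=2.0):
    """frames: array of shape (n_frames, n_sensors) of range readings.
    Returns indices of sensor directions that stayed clear in every
    frame (assumed rule; the threshold is illustrative)."""
    return np.where((frames > clear_m).all(axis=0))[0]

# Three frames from a five-sensor hemispherical array (ranges in m).
frames = np.array([[0.8, 2.5, 3.1, 2.2, 0.9],
                   [0.7, 2.6, 3.0, 2.4, 1.0],
                   [0.6, 2.4, 2.9, 2.5, 1.1]])
print("clear directions:", unimpeded_directions(frames))   # -> [1 2 3]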
1.4.1 NEW BRAILLE TECHNOLOGY
Using this technology, a visually impaired person can read the emotions or facial expressions of the person with whom he or she is conversing. To make this possible, an ordinary web camera, hardware as small as a coin, and a tactile display are used. This enables the visually impaired to interpret human emotions directly.
Visual information is transferred from the camera into advanced vibrating patterns displayed on the skin. The vibrators are sequentially activated to provide dynamic information about what kind of emotion a person is expressing and the intensity of the emotion itself.
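A minimal sketch of such sequential activation is given below. The four-motor layout, the per-emotion sweep patterns and the intensity-to-speed rule are all hypothetical; they only illustrate the idea of encoding an emotion and its intensity as a dynamic vibration pattern.

import time

PATTERNS = {                     # hypothetical motor sequences
    "happy":    [0, 1, 2, 3],    # sweep left to right
    "sad":      [3, 2, 1, 0],    # sweep right to left
    "surprise": [0, 3, 1, 2],    # jump between the ends
}

def render_emotion(emotion, intensity):
    """Activate the vibrators in sequence; a stronger emotion
    (intensity close to 1.0) produces a faster sweep."""
    delay = max(0.05, 0.3 * (1.0 - intensity))   # seconds between motors
    for motor in PATTERNS.get(emotion, []):
        print("vibrate motor", motor)            # stand-in for hardware
        time.sleep(delay)

render_emotion("happy", intensity=0.8)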
Figure 1.4 BRAILLE DEVICES AND SOFTWARE
The first step for a user is to learn the patterns of different facial expressions, which is done by displaying the emotions in front of a camera that translates them into vibration patterns. In this learning phase the visually impaired person has a tactile display mounted on the back of a chair; when interacting with other people, a sling on the forearm can be used instead.
The main research focus is to characterise different emotions and to find a way to present them by means of advanced biomedical engineering and computer vision technologies. The technology can also be implemented on mobile phones for tactile rendering of, for example, live football games through vibrations, an interesting way of enhancing the experience of mobile users.
1.5 COMPUTER ASSISTIVE TECHNOLOGY FOR THE BLIND
The most important advancement since blind-assistive technology began to appear in the 1970s is screen-reading software, which simulates a human voice reading the text on a computer screen or renders hard-copy output into Braille. Screen readers are designed to pick out things that would catch a sighted person's attention, such as colors and blinking cursors, and can be configured to include or exclude the areas the user wants.
Figure 1.5 VISUALLY IMPAIRED ASSISTIVE DEVICES
1.6 CANE WITH SENSOR
The cane is essential for the safe mobility of vision-impaired people. With this device, they are able to stroll around without worrying about bumping into things. With the innovations made in technology, the canes used by blind people have been further improved in terms of safety and functionality.
Figure 1.6 CANE WITH SENSOR
1.7 BATTERY-OPERATED SPHYGMOMANOMETER
Blind persons can also suffer from hypertension. With the availability of beeping or talking sphygmomanometers, vision-impaired individuals can now accurately take and monitor their blood pressure. This type of medical equipment is battery-operated; the blood pressure and pulse readings are announced in a clear voice and shown simultaneously on a digital display.
Figure 1.7 BATTERY-OPERATED SPHYGMOMANOMETER
1.8 NAVIGATIONAL AID HEADSET
This device is still at the concept stage. If successfully launched, the aid headset will help blind persons walk confidently, independently and safely through city streets. The navigational aid comes with a built-in microphone and audio transducer, and incorporates a GPS system, speech recognition, and obstacle-detection technology. Using the microphone, the user states his destination; from the audible information, the GPS system directs the user to the desired location, and the obstacle-detection technology helps him reach the place safely by informing him of any impediments he might encounter.
Figure 1.8 NAVIGATIONAL AID HEADSET
It is estimated that 7.4 million people in Europe are visually impaired [11]. For many, known destinations along familiar routes can be reached with the aid of white canes or guide dogs. By contrast, for new or unknown destinations along unfamiliar routes (which may change dynamically), the limitations of these aids become apparent [12, 13, 14]; for example, white canes are ineffective for detecting obstacles beyond 3-6 feet. Such mobility aids are only useful for assisting visually impaired people through the immediate environment (termed micro-navigation), but do not assist the traveller in more distant environments (termed macro-navigation).
Figure 1.9 ELECTRONIC TRAVEL AIDS (ETAS)
With the proliferation of context-aware research and development, Electronic Travel Aids (ETAs) such as obstacle-avoidance systems (e.g. the Laser Cane and ultrasonic obstacle avoiders) have been developed to assist visually impaired travellers with micro-navigation, whereas Global Positioning Systems (GPS) and Geographical Information Systems (GIS) have been developed for macro-navigation (e.g. the MOBIC Travel Aid and the Personal Guidance System).
However, despite recent technological advancements, there is still considerable scope for Human-Computer Interaction (HCI) research. Previous work has predominantly focused on developing technologies and testing their functionality, as opposed to utilizing HCI principles (e.g. task analysis) to actively assess the impact on the user. For instance, Dodson et al. [12] make the assumption that 'since a blind human is the intended navigator, a speech user-interface is used to implement this'.
However, despite the contextual complexity of a visually impaired traveller interacting with various mobility aids (i.e. a navigational system and a guide dog or white cane), existing research has failed to fully address the interaction of contextual components and how usability is influenced. Further, as more contextual sources are used to identify and discover a user's context, it becomes increasingly important that information is managed appropriately and displayed in a way that is tailored to the visually impaired traveller's task, situation and environment.
1.9 WHITE CANE
A white cane is used by many people who are blind or visually impaired, both as a mobility tool and as a courtesy to others. Not all modern white canes are designed to fulfil the same primary function, however: there are at least five varieties of this tool, each serving a slightly different need.
TYPES:
Long cane: This "traditional" white cane, also known as a "Hoover" cane after Dr. Richard Hoover, is designed primarily as a mobility tool used to detect objects in the path of a user. Cane length depends upon the height of the user, and traditionally extends from the floor to the user's sternum. Some organizations favour the use of much longer canes.
Figure 1.10 LONG WHITE CANE
"Kiddie" cane: This version works in the same way as an adult's long cane, but is designed for use by children.
Figure 1.11 KIDDIE CANE
Identification cane ("Symbol Cane" in British English): The ID cane is used primarily to alert others as to the bearer's visual impairment. It is often lighter and shorter than the long cane, and has no use as a mobility tool.
Figure 1.12 IDENTIFICATION CANE
Support cane: The white support cane is designed primarily to offer physical stability to a visually impaired user. By virtue of its colour, the cane also works as a means of identification. This tool has very limited potential as a mobility device.
Figure 1.13 SUPPORT CANE
CHAPTER 2
PROBLEM DESCRIPTION
Visually impaired people cannot navigate easily in their day-to-day lives. They need the help of others, a cane, other electronic mobility devices, or a guide dog to guide them appropriately. They therefore need a self-assistive device that guides them and makes them independent of others for navigation. The first and most significant task is detecting the obstacles in front of them and avoiding them. In this project we classify objects, recognize and identify obstacles, and track objects through image processing techniques, suggesting an alternative path to the user.
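As a rough sketch of the processing loop this calls for, the Python/OpenCV fragment below grabs camera frames, finds large moving regions, and announces which way to move. The background-subtraction detector, the area threshold, and the left/right rule are placeholder choices for illustration, not the method developed in this project.

import cv2

cap = cv2.VideoCapture(0)                        # camera worn by the user
detector = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = detector.apply(frame)                 # foreground = moving objects
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 500]
    if big:
        x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
        side = "left" if x + w / 2 > frame.shape[1] / 2 else "right"
        print("obstacle ahead; move", side)      # would be spoken aloud
    if cv2.waitKey(1) == 27:                     # Esc key quits
        break
cap.release()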
CHAPTER 3
LITERATURE SURVEY AND RELATED WORKS
3.1 VISUAL ATTENTION
The visual system is not capable of fully processing all of the visual information that arrives at the eye. To get around this limitation, a mechanism that selects regions of interest for additional processing is used. This selection is done bottom-up, using saliency information, and top-down, using cueing.
The processing of visual information starts at the retina. The neurons in the retina have a center-surround organization of their receptive fields. The shapes of these receptive fields are, among others, modelled by the difference of Gaussians (DoG). This function captures the "Mexican hat" shape of the retinal ganglion cells' receptive field; these cells emphasize boundaries and edges. Further up the visual processing pathway is visual cortex area V1, which contains cells that are orientation selective; these cells can be modelled by a 2D Gabor function. Itti and Koch's implementation of Koch and Ullman's saliency map is one of the best-performing biologically plausible attention models [1][2][3]. Itti et al. [3] implemented bottom-up saliency detection by modelling specific feature-selective retina cells and cells further up the visual processing pathway. The retina cells use a center-surround receptive field, which is modelled in [2] by taking the DoG. They also model orientation-selective cells using 2D Gabor filters. For each receptive field there is an inhibitory variant: for example, if an on-center off-surround receptive field shows excitation on certain input, then the same input will cause the opposite off-center on-surround receptive field to inhibit.
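Both receptive-field models named above are easy to state in code. The sketch below builds a DoG kernel from two Gaussians and a 2D Gabor kernel with OpenCV, then applies them to a synthetic step-edge image; the kernel sizes, sigmas and Gabor parameters are illustrative values, not those used in [2][3].

import cv2
import numpy as np

def dog_kernel(size=21, sigma_c=2.0, sigma_s=4.0):
    """Difference of Gaussians: narrow excitatory center minus wide
    inhibitory surround, the 'Mexican hat' of retinal ganglion cells."""
    center = cv2.getGaussianKernel(size, sigma_c)
    surround = cv2.getGaussianKernel(size, sigma_s)
    return center @ center.T - surround @ surround.T

# An orientation-selective V1 cell modelled by a 2D Gabor filter (45 deg).
gabor = cv2.getGaborKernel((21, 21), 4.0, np.pi / 4, 10.0, 0.5)

img = np.zeros((64, 64), np.float32)
img[:, 32:] = 1.0                                # a vertical step edge
edges = cv2.filter2D(img, -1, dog_kernel())      # responds at the boundary
oriented = cv2.filter2D(img, -1, gabor)          # responds to 45-deg structure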
The sub-modalities that Itti et al. [3] use for creating a saliency map are intensity, color and orientation. For each of these sub-modalities a Gaussian scale pyramid is computed to obtain scale-invariant features, and for each image scale, feature maps are created with a receptive field and its inhibitory counterpart. For the intensity sub-modality, on-center off-surround and off-center on-surround feature maps at different scales are computed based on the pixel intensity. For the color sub-modality, feature maps are computed with center-surround receptive fields using a color pixel value as the center with its opponent color as the surround; the color combinations used for this are red-green and blue-yellow. The feature maps for the orientation sub-modality are created using 2D Gabor filters at orientations of 0, 45, 90 and 135 degrees.