SIXTH SENSE TECHNOLOGY
Nowadays, when we encounter something or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. The SixthSense prototype comprises a pocket projector, a mirror and a camera, connected to a mobile computing device such as a phone.
The aim of SixthSense technology is to rethink the ways in which humans and computers interact, partly by redefining both the human and the computer. If we achieve this goal, we can continually learn from our surroundings. Today there is no link between our digital devices and our interactions with the physical world; SixthSense bridges this gap.
SixthSense is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information, much as we would with a touch screen.
SixthSense is currently used on a very small scale, with efforts under way to make it practical for a wider range of everyday situations, from healthcare to supermarkets. Some of the more practical uses of this technology include:
Reading a newspaper and watching videos in place of the printed photographs.
Getting live sports updates while reading the newspaper.
Checking the arrival, departure or delay time of a flight, projected directly onto the ticket.
For book lovers: open any book to see its Amazon rating; pick any page and the device projects additional information about the text, reader comments and other add-on features.
Brief Summary
A 28-year-old MIT student named Pranav Mistry has invented a new technology, which he calls SixthSense. In short, Mistry has created a device that you can take with you anywhere and that aids you in day-to-day activities, from simply projecting an image on the wall in front of you to taking a picture just by framing it with your hands.
Pranav Mistry
Pranav Mistry is the inventor of SixthSense, a wearable device that enables new interactions between the physical world and the world of data. He is a PhD student in the Fluid Interfaces Group at MIT's Media Lab; before his studies at MIT, he worked at Microsoft as a UX researcher. Mistry's objective is to integrate digital information with our real-world interactions.
ABOUT
At TED (Technology, Entertainment, Design), Mistry demonstrated a working prototype of a multifunctional device that could become part of our lives within five to ten years. The SixthSense set consists of a camera (which captures the movement of the hands), a projector (which produces an image on any surface), a mobile phone (which stays in the pocket and is needed only to communicate with a remote database server) and colourful caps worn on four fingers, whose movements are detected by the camera.
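To make the tracking step concrete, the following is a minimal sketch, not Mistry's actual implementation, of how a camera can follow coloured fingertip caps, written here in Python with OpenCV. The HSV colour ranges and the marker-to-finger assignments are illustrative assumptions and would need calibration for the real markers, camera and lighting.

    import cv2
    import numpy as np

    # Hypothetical HSV ranges for two coloured caps; real values must be
    # calibrated for the actual markers, camera and lighting.
    MARKERS = {
        "index (blue cap, assumed)": ((100, 120, 70), (130, 255, 255)),
        "thumb (green cap, assumed)": ((40, 80, 70), (80, 255, 255)),
    }

    cap = cv2.VideoCapture(0)          # stands in for the worn camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        for name, (lo, hi) in MARKERS.items():
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                # Treat the largest blob of the marker colour as the fingertip.
                c = max(contours, key=cv2.contourArea)
                x, y, w, h = cv2.boundingRect(c)
                cx, cy = x + w // 2, y + h // 2
                # (cx, cy) is the fingertip position; a gesture recogniser
                # would consume this stream of positions over time.
                cv2.circle(frame, (cx, cy), 8, (0, 0, 255), 2)
        cv2.imshow("marker tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

A full system would layer gesture recognition on top of these tracked positions and drive the projector accordingly.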
Reference:
Blind people and the World Wide Web
Alasdair King, Gareth Evans, Paul Blenkhorn, UMIST, Manchester, UK. Links may be different from the original 2004 article.
1 Blind people and the World Wide Web
Perhaps you've read a book recently? Perhaps when you finished you picked up a newspaper and got the sports headlines, or went online and surfed some travel sites to book next year's summer holiday? Your local newsstand easily has a hundred newspapers and magazines. If you have web access, you have billions of sites available to you. Unless, of course, you're blind, when accessing printed or net resources suddenly becomes a very different proposition.
Traditionally, blind people have had only limited means of accessing printed material. Braille is the most famous access method, but only a tiny proportion of blind people can read Braille - some 2% in the UK. Recent years have seen the wider adoption of audio recordings, but like Braille these suffer from a lack of immediacy - you want the news today, not to wait a week for it to be translated - and a blind user is usually reliant on sighted people, often volunteers, to produce the material. This reliance and the higher costs of producing alternative format materials such as audiotapes necessarily reduce the material available. This is a poor comparison with what is available to sighted users and their choice of material.
The rise of affordable personal computing in general and the Internet in particular promised an incredible improvement in access to written materials. With a personal computer, some easily-available technology, and a web browser, you are no longer restricted to tapes sent through the post or the passive technology of the radio: you now have access to billions of web pages, personal, corporate, educational, entertaining, all available from your home. And there is no better time for this huge revolution: the great majority of blind people in developed countries become so because of the effects of age. With average life expectancies increasing the number of potential blind Internet users grows and grows. There are some one million registered visually-impaired people in the UK, of whom 750,000 are over 75. They want access to the same material they've always had, whether it's the London Times or the National Enquirer, but the material may not be available in an alternative format. Relying on what other people choose to translate for your benefit reduces your choice and freedom. Besides, sighted people have taken to the Internet in their millions for booking holidays, researching family trees and countless other uses: blind people need the same opportunities, and the technology makes it possible.
This is not to say, alas, that the web is a happy land where a blind person can surf and browse with all the freedom and ease of a sighted person. To understand why, we need to examine how blind people access computers in the first place.
2 How blind people access computers
The last decade has seen the triumph of the rich graphical desktop, replete with colourful icons, controls and buttons all around the screen, controlled by the mouse pointer moving about the screen clicking and dragging. This is not, on the face of it, a usable environment for blind people, but use it they must.
Many people with a significant visual impairment have some degree of residual vision. There are assistive technology solutions for them: a screen magnifier application, such as ZoomText from Ai Squared, magnifies a small area of the display, potentially filling the entire computer screen. The user can move the area being magnified around the desktop. This allows the user to control the computer interface directly, and is a good solution for people with gradually-degrading vision, especially those who are already familiar with their computer interface but are starting to have trouble seeing it. However, for those with a significant visual impairment or complete blindness, there are two different options.
The first is to use a screen reader. This is an application that attempts to describe to the blind user in speech what the graphical user interface is displaying. It turns the visual output of the standard user interface into a format that is accessible to a blind user. In practice this means driving a Braille output device - a row of Braille cells with mechanical pins that pop up and simulate Braille characters under the user's fingers - or, more commonly, a text-to-speech synthesizer. We will deal exclusively with these text-to-speech users in the rest of this article because they form the great majority of users, actual and potential. The screen reader acts almost as a sighted companion to the blind user, reading out what is happening on the screen - popup boxes, command buttons, menu items, and text. Ultimately screen readers have to access the raw video output from the operating system to the screen and analyse it for information that should be presented to the user. This is a complex process, as you would expect from an application that is attempting to communicate the complicated graphical user interface in a wholly non-visual way. There are many screen readers available, including JAWS from Freedom Scientific, Window Eyes from GW Micro, or Thunder from Screenreader.net. If you have Windows 2000 or XP, you'll find that Microsoft have included a basic screen reader in the operating system, called Narrator: try activating it, opening Notepad and typing some text or checking your email without looking at your screen.
The goal of a screen reader is to make it appear to the user as if the current application was itself a talking application designed specifically for blind users. This is difficult to accomplish. Applications often have particular user controls or methods of operation that must be supported by the screen reader. For example, a spreadsheet program operates very differently from an email client. This forces screen reader developers to adapt their programs to support specific applications, typically the market leaders like Microsoft Word. It also means that applications that utilise simple interface components like menus and text boxes will work best with screen readers. Those with non-standard interface components like 3D animations may be difficult for a screen reader to access.
The second way for a blind person to use a computer is to take advantage of self-voicing applications. These are usually applications written specifically for blind people that provide their output through synthesised or recorded speech. The obvious advantage is that the application designer can ensure that what is communicated to the user is exactly what the designer wants communicated - although this assumes that the designer's conception of what the user needs or wants to hear is correct! Aside from the extra design and development required to produce a self-voicing application, the main drawback is that the application cannot be used at the same time as the user's screen reader. If the application usurps the screen reader, the user's customary interface to the computer, it takes upon itself the responsibility for being at least as comfortable and usable for the user as their screen reader. Users become accustomed to their particular screen reader and its operation and will have it configured just as they want it. The hotkeys of a self-voicing application may be different; the voice may be different, and have different characteristics. For example, many screen reader users set them to read out as fast as possible, which sounds odd if you have never heard it before but makes sense if you are accustomed to it. With a self-voicing application, the user may even have to switch off their screen reader, which is most undesirable if they want to use another non-self-voicing application at the same time.
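As an illustration of what "self-voicing" means in practice, here is a minimal sketch of an application that speaks its own output through a text-to-speech engine, using the Python pyttsx3 library. The choice of library and the rate value are assumptions made for the example, not something taken from the article.

    import pyttsx3

    # A self-voicing application drives the speech engine itself,
    # rather than relying on the user's screen reader.
    engine = pyttsx3.init()

    # Many experienced users prefer very fast speech; 300 words per
    # minute here is an illustrative setting, not a recommendation.
    engine.setProperty("rate", 300)

    engine.say("You have three unread messages.")
    engine.runAndWait()   # blocks until the utterance has been spoken

Note how the application, not the screen reader, now owns the voice, the rate and the interaction style; this is exactly the mismatch with the user's configured screen reader that the paragraph above warns about.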
Whether using a screen reader or a self-voicing application, the use of the sense of hearing rather than vision has great implications for the design of the interface. The visual sense, or visual modality, has an enormous capacity for communicating information quickly and easily. If you look at an application on your computer display, you will immediately notice the menus, icons, buttons and other interface controls arrayed about the screen. Each represents a function that is available to you, and a quick glance allows you to locate the function you want and immediately activate it with the mouse. Say the application is a word processor: you can go straight from reading the text of your document to any one of the functions offered by the interface. Now imagine that to find the print function you have to start at the top left-hand corner of the screen and go through each control in turn, waiting until its function is described to you, until you find the function you require. Of course, experienced blind computer users will not rely on navigating through menus for every function. They will utilise shortcut keys, such as "CTRL-P" to print a document, develop combinations of keystrokes to complete their most common tasks, and learn the location of commonly-used functions in menus and applications. This requires, however, a consistent user interface, where shortcut keys and keystroke combinations can be relied upon to perform the same function each time and menu items are always located in the same place.
The important constraint on the use of computers by blind users is that they rely on hearing, rather than sight. Why is this such a problem? First, blind users are constrained into examining one thing at a time in an order not of their own making - they do not know the structure of things before they explore them. This is the problem with unfamiliar, rich, new interfaces. Second, blind users have to listen to a surprising amount of text to give them the same amount of information as a sighted user might be able to gain in a quick glance. Sighted users might be able to glance through a large document, scanning the chapter and paragraph headings for a key word or phrase, because they can see the headings instantly distinct from the body text and what words they contain. A blind user, even if they can jump from heading to heading, has to wait for the slower screen reader to speak the heading: setting it to read as fast as possible might seem more sensible now.
These two constraints, fixed order of access and time to obtain information, mean that interfaces that rely on hearing must comply with a principle of maximum output in minimum speech. This greatly changes usability: superfluous information is not just a distraction, as a page with lots of links might be for a sighted user, but a real barrier to using the interface. Blind users must not be asked to use a complex interface with many options. If a user misses some output, it will need to be read out again, so an explicit way to repeat things is required. Most importantly, users need control over what is being said: sighted users can move their gaze wherever they want whenever they want, and blind users need some similar control of the focus. Imagine reading something where you can only see one word at a time with no way to go back or forwards. Non-visual interfaces need to provide means to navigate through the document, stop, go back, skip items, repeat and explore the text available. This affects how blind people browse web pages, as we will find out next.
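As a concrete sketch of these navigation requirements (illustrative code, not anything from the article), the following Python class models a document as an ordered list of spoken items with the explicit next/back/repeat controls described above:

    class SpokenDocument:
        """A document exposed one item at a time, in order, with the
        explicit navigation controls a non-visual interface needs."""

        def __init__(self, items):
            self.items = items   # e.g. headings and paragraphs, in order
            self.pos = 0

        def current(self):
            return self.items[self.pos]

        def next(self):
            if self.pos < len(self.items) - 1:
                self.pos += 1
            return self.current()

        def back(self):
            if self.pos > 0:
                self.pos -= 1
            return self.current()

        def repeat(self):
            # Speech, unlike a screen, cannot be re-read at a glance,
            # so repeating the current item must be an explicit command.
            return self.current()

    doc = SpokenDocument(["1 Introduction", "Some body text...", "2 Method"])
    print(doc.current())   # "1 Introduction"
    print(doc.next())      # "Some body text..."
    print(doc.repeat())    # "Some body text..." again, on demand

Every item the interface speaks that is not reachable through controls like these is, per the principle above, a barrier rather than a feature.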
3 Access to the web
So, knowing some of the problems that blind people have with accessing computers in general, how can they access the wonders of the World Wide Web in particular? What are the particular characteristics of browsers and more especially web pages themselves?
Websites vary enormously, but with a quick browse around the most popular sites you will quickly notice a common characteristic: a very heavily visual graphical interface: images, including animated advertising banners; non-linear page layouts, like a newspaper front page with items and indices arranged around the screen; navigation menus and input controls for search functions and user input. And these are simple static items: advanced sites now take advantage of dynamic web page features like whole user interfaces written in Flash. For every Google, applauded for a simple and accessible user interface, there is another website with tabs, buttons, pop-ups and other great features for sighted users.
It is important to realise that not only are web pages full of rich features, but that their arrangement in the pages is completely non-standard. We have described how blind users can use complex graphical applications through hotkeys and by learning the user interface. This required a consistent user interface. Surf around some more websites, and you will quickly realise that no such consistent user interface exists for web pages. In fact, a single web page can be as rich a user interface as a standalone application. Imagine arriving at an online bookshop's website, with all those images, links, titles and text paragraphs, and having to start at the top left-hand corner and progress one item at a time through the page to find the login to check your last order. No shortcut keys are available for useful functions like "search this website" or "contact the website owner" that might be available on the page. Every website has a different user interface which must be explored and understood before it can be used, which places great demands on blind users to make the necessary effort. So, how does a blind user start to get to grips with these pages?
The immediate response might be to use the user's screen reader to access a conventional browser like Internet Explorer. This has problems: we know that each application makes different demands on the screen reader, and the heavily-visual and non-standard interfaces of web pages pose considerable difficulties to a screen reader. Navigating the web can be compared to trying to use the largest and most complex application that a blind person will ever attempt. A specific problem with Internet Explorer is that the need we have described to allow the user to move around the document is complicated by the lack of a caret on a web page - an indicator of the position at which you will enter or delete text, usually shown as a flashing vertical bar in a text editor. Sighted users can simply glance at a different area to change their focus, but screen reader users need to move the focus of the screen reader to the area of interest, and this is normally done by moving the caret. Browser windows, however, do not have carets - you can only scroll the whole page up and down and look for the text of interest. The only items you can select individually are links or form items. A screen reader could simply choose to read a web page displayed in Internet Explorer from the very top of the page to the bottom, but this would be immensely time-consuming for the user. Tables and frames and forms further complicate a web page. This is not to say that using a screen reader is impossible: advanced screen readers do provide special navigation modes for web pages with a great deal of success. After all, web browsing is one of the common applications which a screen reader developer will try to support. However, complex navigation mechanisms are the result, and whilst these are excellent for experienced and highly skilled users, they are not necessarily ideal for the newly blind user who may be coming to the technology late in life. Web access is a general, not specialist, need, and must support a general, non-specialist group of users.
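To illustrate what such a navigation mode rests on, here is a minimal sketch, an assumption for illustration rather than how any particular screen reader is implemented, that extracts the headings and links from a page's HTML using Python's standard html.parser, giving a blind user landmarks to jump between instead of a top-to-bottom read:

    from html.parser import HTMLParser

    class OutlineParser(HTMLParser):
        """Collects headings and links so a user can jump between
        them instead of hearing the whole page in order."""

        def __init__(self):
            super().__init__()
            self.outline = []     # (kind, text) pairs in document order
            self._kind = None
            self._buf = []

        def handle_starttag(self, tag, attrs):
            if tag in ("h1", "h2", "h3", "a"):
                self._kind = "link" if tag == "a" else tag
                self._buf = []

        def handle_data(self, data):
            if self._kind:
                self._buf.append(data)

        def handle_endtag(self, tag):
            if self._kind and tag in ("h1", "h2", "h3", "a"):
                text = "".join(self._buf).strip()
                if text:
                    self.outline.append((self._kind, text))
                self._kind = None

    parser = OutlineParser()
    parser.feed("<h1>News</h1><p>Body text...</p><a href='/sport'>Sport</a>")
    print(parser.outline)   # [('h1', 'News'), ('link', 'Sport')]

Real screen readers handle far more than this (tables, frames and forms, as noted above), but the principle is the same: recover the structure of the page so that the user can navigate it rather than merely listen to it.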