e-Proceeding of the Social Sciences Research ICSSR 2017 /

BEYOND IMAGINATION

Deepak T and Aishwarya Ayachit

Department of Computer Science and Engineering

Jyothy Institute of Technology

Bangalore, India


Guru Charan R

Department of Computer Science and Engineering

Vivekananda Institute of Technology

Bangalore, India

ABSTRACT

This paper brings information into real-time experience through Mixed Reality (MR) technology. The ability to merge real and virtual spaces is applied to merging different knowledge types, such as abstract knowledge that exists in thought and concrete knowledge. To evaluate whether merging knowledge types can benefit learning, MR is applied to an innovative problem in the field of education. We present an approach that requires only a single RGB-D camera image to generate glossy reflections on virtual objects. Our approach is based on a partial 3D reconstruction of the real environment combined with a screen-space ray-tracing mechanism. Learners can experience information in a 3D (animation) format that is incorporated into a virtual environment and then linked to real-time data associated with that asset.

Field of Research: Augmented Reality, 3D reconstruction, HoloLens, Mixed Reality, OCR technology, Ray Tracing, Virtual Reality

------

1. Introduction

“Today we focused on the next frontier: mixed reality. Providing devices with the ability to perceive the world, breaking down the barriers between virtual and physical reality, is what we call mixed reality,” Terry Myerson, executive vice president of Microsoft's Windows and Devices Group, said in a blog post. That sounds great, but real-world applications of MR are still largely happening behind closed doors. According to the insight of this paper, a mixed reality headset would place virtual objects in the real world and allow users to interact with them through gestures and voice commands. The purpose of this research is to investigate whether MR technology can help the user bridge the difference between a metaphysical thinker and an operational thinker. The world around you becomes an entirely new canvas on which to play, learn, communicate and interact. Mixed reality works by scanning your physical environment and creating a 3D map of your surroundings, so the device knows exactly where and how to place digital content into that space realistically while allowing you to interact with it using gestures.

“Walking into living memory” is the main motto of mixed reality. It makes use of sensors, custom cameras, advanced optics, 3D capture technology, the HoloLens tracking system, and the ability to record and play back an entire session through gestures or voice commands. Mixed reality can define the next generation of computing. In short, MR is a magical way of experiencing live-captured memories in a single pair of lenses.

MR aims to take the best aspects of VR and AR and solder them together, and it also happens to be the most exciting technology of the three. MR allows physical and virtual objects to co-exist and interact in real time, creating an entirely new environment altogether. The following sections describe how it works and its advantages in many fields.

2. SYNERGISTIC GRAPHICS - shaping the future of computing

Computer graphics is concerned with the pictorial synthesis of real and imaginary objects from their computer-based models, whereas the related field of image processing treats the converse process: the analysis of scenes, or the reconstruction of models of 2D or 3D objects from their pictures.

Image processing can be classified into:

Image enhancement

Pattern detection and recognition

Scene analysis and computer vision

Image enhancement deals with improving image quality by eliminating noise or increasing image contrast. Pattern detection and recognition deal with the detection and classification of standard patterns; Optical Character Recognition (OCR) is a practical example. Scene analysis deals with the recognition and reconstruction of a 3D model of a scene from several 2D images.
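To make the image-enhancement category concrete, the sketch below performs linear contrast stretching, mapping the darkest pixel to 0 and the brightest to 255. This is a minimal illustration assuming only NumPy; the `stretch_contrast` name is ours, not from any library.

```python
import numpy as np

def stretch_contrast(image, low=0, high=255):
    """Linearly rescale pixel intensities to [low, high] to increase contrast."""
    img = image.astype(np.float64)
    i_min, i_max = img.min(), img.max()
    if i_max == i_min:                      # flat image: nothing to stretch
        return np.full_like(image, low)
    out = (img - i_min) / (i_max - i_min) * (high - low) + low
    return out.astype(np.uint8)

# A low-contrast 2x2 image whose intensities occupy only [100, 140]
dull = np.array([[100, 120], [130, 140]], dtype=np.uint8)
print(stretch_contrast(dull))  # now spans the full [0, 255] range
```

The same rescaling idea underlies histogram-based enhancement methods; only the mapping function differs.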

Synergistic graphics provides a tool called motion dynamics. With this tool the user can move and tumble objects with respect to a stationary observer, or can hold the objects immobile while the viewer moves around them. With the recent development of digital signal processing (DSP) and audio synthesis chips, interactive graphics can now provide audio feedback along with graphical feedback to make the simulated environment even more realistic.
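The equivalence between tumbling an object before a fixed observer and moving the viewer around a still object can be sketched in a few lines. This is an illustrative NumPy example, not tied to any particular graphics package:

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

point = np.array([1.0, 0.0, 0.0])
theta = np.pi / 2

# Motion dynamics, option 1: tumble the object before a fixed observer.
moved_object = rot_z(theta) @ point

# Option 2: keep the object still and orbit the viewer the opposite way;
# the still object seen from the rotated camera is the point expressed in
# camera coordinates, i.e. the inverse of the camera's rotation applied.
camera_view = rot_z(-theta).T @ point

print(np.allclose(moved_object, camera_view))  # True
```

The two renderings coincide, which is why graphics systems implement viewer motion as just another modeling transformation.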

OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many hardware platforms. To achieve these qualities, no commands for performing windowing tasks or obtaining user input are included in OpenGL; instead, you must work through whatever windowing system controls the particular hardware you are using. With OpenGL, you build your desired model from a set of geometric primitives: points, lines and polygons. A more refined library that provides higher-level features could certainly be built on top of OpenGL; the OpenGL Utility Library (GLU) provides many such modeling features, namely quadric surfaces and NURBS curves and surfaces.

3. RAY TRACING - a realistic approach

Ray tracing offers a more realistic method than either ray casting or scan-line rendering for producing images constructed in 3D computer graphics environments. It works by tracing, in reverse, the path that a ray of light would have taken to intersect the imaginary camera lens. As the scene is traversed by following the paths of a very large number of such rays in reverse, visual information about the appearance of the scene, as viewed from the camera under the lighting conditions specified to the software, is built up. A ray's reflection, refraction or absorption is calculated when it intersects objects and media in the scene.
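Per ray, the backward tracing described above reduces to an intersection test against each object in the scene. The following sketch, in plain Python with a sphere as the assumed scene object and illustrative names throughout, finds the nearest hit of a primary ray fired from the camera:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray hits the
    sphere, or None. Ray: P(t) = origin + t*direction (direction normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c                 # a == 1 for a unit direction
    if disc < 0:
        return None                        # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0       # nearer of the two roots
    return t if t > 0 else None

# Primary ray traced "in reverse" from a camera at the origin looking down +z,
# toward a sphere of radius 1 centred 5 units away.
t_hit = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t_hit)  # 4.0: the ray first meets the sphere's near surface
```

A full renderer repeats this test for every pixel's ray, then spawns reflection, refraction and shadow rays at each hit point.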

Scenes in ray tracing are described mathematically, usually by a programmer or by a visual artist using intermediary tools, but they may also incorporate data from images and models captured by various technological means.

In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can think of this “ray” as a stream of photons traveling along the same path. In a perfect vacuum this ray is a straight line. When the ray strikes a surface, any combination of three things might happen, in one or more directions: the surface may reflect part of the light ray; if it has any transparent or translucent properties, it may refract a portion of the beam into itself in a different direction while absorbing some of the spectrum (possibly altering the color); and it may absorb part of the ray, resulting in a loss of intensity of the reflected and/or refracted light. Between absorption, reflection and refraction, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene, and so contribute to the final rendered image.

Attempting to simulate this real-world process of tracing light rays with a computer can be considered extremely wasteful, as only a minuscule fraction of the rays in a scene would actually reach the eye. But that is still not the whole picture: to achieve even more realistic rendering, the indices of reflection and refraction of the material must be taken into consideration. In other words, the amount of light reflected at the point of impact of the primary ray, and the amount of light that passes through the material, have to be accounted for. Here again, further rays are emitted to determine the final color of the pixel.
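The energy-accounting rule above (reflection, refraction and absorption must together account for exactly the incoming light, and no more) can be expressed as a small helper. This is an illustrative sketch; the function name and the coefficient values are assumptions, not taken from any renderer:

```python
def split_ray_energy(incoming, reflectance, transmittance):
    """Split an incoming ray's energy into reflected, refracted and absorbed
    parts. The coefficients may not sum past 1: a surface cannot reflect 66%
    of a ray and refract 50%, since that would account for 116% of the light."""
    if reflectance + transmittance > 1.0:
        raise ValueError("reflectance + transmittance exceeds incoming energy")
    reflected = incoming * reflectance
    refracted = incoming * transmittance
    absorbed = incoming - reflected - refracted    # whatever remains is absorbed
    return reflected, refracted, absorbed

r, t, a = split_ray_energy(100.0, 0.6, 0.3)
print(r, t, a)  # 60.0 30.0 10.0 -- conserves the original 100 units
```

In a physically based renderer these coefficients would come from the material's Fresnel terms rather than fixed constants, but the conservation constraint is the same.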

4. INFLUENCE OF RAY TRACING

Ray tracing’s popularity stems from its basis in a realistic simulation of lighting, compared with other rendering methods such as scan-line rendering or ray casting. Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray-tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. Reflections are one area where ray tracing excels. With the 3D engines used by modern games, reflections of the environment located “at infinity” work well enough, as the name indicates, but for close objects the approach shows its limitations. With ray tracing, reflections are perfectly managed without complicated algorithms; everything is handled directly by the rendering algorithm.

Another advantage is that inter-reflections, such as the reflection of a side-view mirror on the car body, which are extremely difficult to reproduce using rasterization, are handled in exactly the same way as any other reflection.

Another important advantage is shadow calculation. The technique that became the standard in the rasterization world is shadow mapping. But it suffers from several problems, such as aliasing and the amount of memory space it takes up. Ray tracing can solve the problem of shadows elegantly, again without introducing a complicated algorithm, while still using the same basic primitive object and without using additional memory space.
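The shadow computation described above can be sketched as a single shadow-ray test. This is illustrative plain-Python code; the sphere scene and all names are our assumptions:

```python
import math

def in_shadow(point, light, spheres):
    """Shadow test by ray tracing: shoot one ray from the shaded point toward
    the light and report whether any sphere blocks it first. No shadow map and
    no extra memory -- just the same ray/sphere primitive the renderer uses."""
    d = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(v * v for v in d))
    d = [v / dist for v in d]                      # unit direction to the light
    for center, radius in spheres:
        oc = [p - c for p, c in zip(point, center)]
        b = 2.0 * sum(u * v for u, v in zip(d, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-6 < t < dist:                    # occluder before the light
                return True
    return False

# A unit sphere at (0, 0, 5) sits between the point and a light at (0, 0, 10)
print(in_shadow((0, 0, 0), (0, 0, 10), [((0, 0, 5), 1.0)]))  # True
```

The small 1e-6 offset avoids the classic self-shadowing artifact where the point "occludes" itself due to floating-point error.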

Finally, another of ray tracing’s strong points is its ability to manage curved surfaces natively. For several years now, modern GPUs have included support for curved surfaces (intermittently appearing and disappearing with driver versions and new architectures). But while rasterizers have to make an initial tessellation pass to generate triangles, which are the only primitive objects they can manage internally, a ray tracer can test the intersection of rays directly against the true mathematical definition of the surface.
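Testing a ray directly against the mathematical definition of a surface, rather than a tessellated approximation, can be sketched for any implicit surface f(x, y, z) = 0. The root-finding approach below is illustrative; a production ray tracer would use analytic intersections where available:

```python
def hit_implicit(f, origin, direction, t_max=100.0, steps=1000):
    """Find the first ray parameter t with f(P(t)) == 0 for an implicit
    surface f(x, y, z) = 0, by sign-change search plus bisection. The test
    uses the true mathematical definition -- no tessellation into triangles."""
    def along(t):
        return f(*(o + t * d for o, d in zip(origin, direction)))
    dt = t_max / steps
    t_prev, f_prev = 0.0, along(0.0)
    for i in range(1, steps + 1):
        t = i * dt
        f_cur = along(t)
        if f_prev * f_cur <= 0:           # sign change brackets a root
            lo, hi = t_prev, t
            for _ in range(60):           # bisect down to double precision
                mid = 0.5 * (lo + hi)
                if along(lo) * along(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        t_prev, f_prev = t, f_cur
    return None

# Unit sphere centred at (0, 0, 5): x^2 + y^2 + (z - 5)^2 - 1 = 0
sphere = lambda x, y, z: x * x + y * y + (z - 5) ** 2 - 1
t = hit_implicit(sphere, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(round(t, 6))  # 4.0 -- matches the analytic near-hit distance
```

The same routine works unchanged for tori, quadrics or any other implicit form, which is the flexibility the paragraph above refers to.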

5. MIXED REALITY USING AN RGB-D CAMERA - a cost-efficient smart device

In recent times, depth cameras have been used extensively for camera tracking in augmented and mixed reality. MR allows you to go beyond the traditional screen; so far beyond, in fact, that it renders the screen completely obsolete. The space around you becomes your screen. When dealing with complex surveillance scenarios, two-dimensional information is not always sufficient to obtain reliable detection and tracking results in real time. Furthermore, passive 3D solutions such as stereo cameras require additional processing to compute depth information, and they cannot estimate depth for poorly textured areas. For these reasons, the advent of active, low-cost 3D sensors such as the Microsoft Kinect has significantly advanced research on mobile applications and computer vision.

RGB-D cameras rely on continuous-wave time-of-flight technology to infer depth: an array of emitters sends out a modulated signal that travels to the measured point, is reflected, and is received by the photosensitive element of the sensor. The sensor provides a 512x424 depth map and a 1920x1080 color image at 15 to 30 fps, depending on the lighting conditions, since the sensor uses an auto-exposure algorithm.
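The continuous-wave principle can be illustrated with the standard phase-to-depth relation, depth = c * phase / (4 * pi * f_mod). The 16 MHz modulation frequency below is an assumed example value for the sketch, not a published sensor specification:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Continuous-wave time-of-flight: the emitted signal is modulated at
    mod_freq_hz, travels to the measured point and back, and returns with a
    measurable phase shift. depth = c * phase / (4 * pi * f); the extra
    factor of 2 hidden in the 4*pi accounts for the round trip."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Assume a 16 MHz modulation frequency. A phase shift of pi radians then
# corresponds to a quarter of the modulation wavelength:
d = tof_depth(math.pi, 16e6)
print(round(d, 3))  # 4.684 (metres)
```

The relation also shows the classic trade-off: a higher modulation frequency gives finer depth resolution but a shorter unambiguous range before the phase wraps past 2*pi.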

Figure 2: Headset presenting a video guide or map when triggered by elevated stress levels

6. ALGORITHM

A word list can be loaded from a file and may be extended with additional words that are specific to the application use case. Word lists can also be filtered using filter lists, either to exclude certain words from being detected (using black-list filters) or to allow only specific words to be detected (using white-list filters).
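The word-list mechanism described above can be sketched in a few lines; the function and parameter names here are illustrative, not an actual OCR API:

```python
def load_word_list(words, extra=(), blacklist=(), whitelist=None):
    """Build the set of detectable words: start from a base list, extend it
    with application-specific words, then apply the filter lists. A black-list
    filter excludes words; a white-list filter allows only the listed words."""
    detectable = set(words) | set(extra)
    detectable -= set(blacklist)                    # black-list: exclude these
    if whitelist is not None:                       # white-list: allow only these
        detectable &= set(whitelist)
    return detectable

base = ["alpha", "beta", "gamma", "delta"]
print(sorted(load_word_list(base, extra=["epsilon"], blacklist=["beta"])))
# ['alpha', 'delta', 'epsilon', 'gamma']
```

Restricting the detectable vocabulary this way both speeds up recognition and reduces false matches against visually similar words.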


CONCLUSION

In education, AR has been used to complement a standard curriculum. Text, graphics, video and audio are superimposed onto a student's real-time environment. Textbooks, flashcards and other educational reading material contain embedded “markers” or triggers that, when scanned by an AR device, produce supplementary information rendered for the student in a multimedia format.


E-Proceedings of the 5th International Conference On Social Sciences Research 2017 (e-ISBN: 978-967-0792-14-9). 27th & 28th March 2017, Berjaya Times Square Hotel, Kuala Lumpur, Malaysia.