T-800
Project Description

Project by

Asaf Shamir / 039433396
Baruch Segal / 036557015
Jonatan Russo / 038747523

Introduction

Our project goal was to give everyday people the ability to see the world as the T-800, the legendary cyborg from Arnold Schwarzenegger’s Terminator movie, would.

T-800’s view, as seen in the original movie

The T-800 app simulates the cyborg’s view of reality, displays acquired information and allows the user to interact with it.

Application Interface

T-800 is an augmented reality app that displays an alternative reality to the user.

The main user interface is a live camera feed which is displayed on the phone’s screen after a few transformations and modifications:

  • Color adjustment – Real-life colors captured by the phone’s camera are transformed to a red color palette to mimic the Terminator’s infra-red vision.
  • Textual information – All information relevant to the user is displayed as a textual layer on top of the world view.
  • Threat recognition, tracking and classification – Each face in sight is recognized, tracked and classified as a threat or innocent (the whole process is described in the following sections). The results are shown on screen as well.
  • Image manipulation – Eliminated threats are shown differently on screen to give a ghost-like illusion.

Pressing the phone’s screen allows the user to mark significant regions in sight. Those regions are used for ‘Threat elimination’ and/or ‘Color adjustment’ (both described below).

App’s Button Menu:

  • Color vision (on) – Turn on the Terminator’s infra-red vision.
  • Color vision (off) – Turn off the Terminator’s infra-red vision.
  • Threat Elimination – Enable threat elimination: touching the screen will eliminate a threat in the touched region.
  • Color Adjustment – Enable color adjustment: touching the screen will adjust the marker color to the touched color.
  • Marker (Triangle) – Use red triangles as a threat marker.
  • Marker (Circle) – Use two-circle targets as a threat marker.
  • Color Picker – Adjust color by picking the required color.
  • White Balancing – Adjust color by balancing all colors using a given white spot as a reference.

Implementation

As with most augmented reality apps, the main challenges we faced in creating T-800 were the implementation and adaptation of image processing algorithms and techniques. Those challenges were:

  • Color vision – transforming a given image to a red color palette can be done with a simple matrix multiplication. However, we encountered an unexpected problem when we realized this simple process isn’t efficient enough (especially on older phones with a weaker CPU). Therefore, after several experiments with different ways of multiplying the matrices, we adopted a different approach of transforming each color channel separately, achieving much better results (a sketch of the per-channel idea appears after this list).
  • Face detection – experimenting with both face detection methods implemented in OpenCV, we ended up using LBP after it proved to be a few times faster than Haar. As mentioned before, optimizing our code was a major concern in order to achieve real-time results (a detector sketch appears after this list).
  • Tracking – our face detection isn’t perfect, and detecting moving faces proved to be even more challenging. However, the data accumulated over multiple frames holds an opportunity as well.

We implemented our own algorithm to utilize this data: to minimize false positives, no decision is made before a few consecutive detections occur; moreover, the high reputation earned over several detections lets the algorithm avoid false negatives in future frames (sketched below).
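Our original code is not reproduced in this document, so the following is only a minimal sketch of the per-channel color vision idea, assuming the frame arrives as an RGBA Mat (as OpenCV’s Android camera bridge delivers it); the class and method names and the 0.25 attenuation factor are illustrative.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public class RedVision {
        // Tint a camera frame red by operating on whole channels at once
        // instead of multiplying every pixel by a color matrix.
        public static Mat applyRedPalette(Mat rgbaFrame) {
            // Use the overall intensity as the red channel.
            Mat gray = new Mat();
            Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

            // Attenuated copies serve as the green and blue channels,
            // which pushes the whole image toward red.
            Mat dim = new Mat();
            Core.multiply(gray, new Scalar(0.25), dim);

            Mat alpha = new Mat(gray.size(), gray.type(), new Scalar(255));
            Mat out = new Mat();
            Core.merge(java.util.Arrays.asList(gray, dim, dim, alpha), out);
            return out;
        }
    }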
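The face detector itself is standard OpenCV usage; the sketch below assumes the LBP cascade file (lbpcascade_frontalface.xml, bundled with OpenCV) has been copied to app storage, and the minimum-size parameter is an illustrative performance knob.

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Size;
    import org.opencv.objdetect.CascadeClassifier;

    public class FaceDetector {
        private final CascadeClassifier lbpCascade;

        // cascadePath points to lbpcascade_frontalface.xml on local storage.
        public FaceDetector(String cascadePath) {
            lbpCascade = new CascadeClassifier(cascadePath);
        }

        // Run LBP detection on a grayscale frame; faces smaller than
        // minSize pixels are skipped to keep per-frame cost low.
        public MatOfRect detect(Mat grayFrame, int minSize) {
            MatOfRect faces = new MatOfRect();
            lbpCascade.detectMultiScale(grayFrame, faces, 1.1, 3, 0,
                    new Size(minSize, minSize), new Size());
            return faces;
        }
    }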
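The reputation logic can be sketched as follows; the thresholds (three consecutive hits to confirm a face, five missed frames tolerated afterwards) are illustrative stand-ins for the values we actually tuned.

    // Reputation-based state for a single tracked face region.
    public class FaceTrack {
        private static final int HITS_TO_CONFIRM = 3; // consecutive detections before the face is trusted
        private static final int MAX_MISSES = 5;      // frames a trusted face may go undetected

        private int consecutiveHits = 0;
        private int reputation = 0;
        private int misses = 0;

        public void onDetected() { consecutiveHits++; reputation++; misses = 0; }

        public void onMissed()   { consecutiveHits = 0; misses++; }

        // Report the face only after enough consecutive hits (fewer false
        // positives), and keep reporting a well-established face through
        // short detection gaps (fewer false negatives).
        public boolean isVisible() {
            return consecutiveHits >= HITS_TO_CONFIRM
                    || (reputation >= HITS_TO_CONFIRM && misses <= MAX_MISSES);
        }
    }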

  • Marker detection – the first marker detector we tried to implement was a polygon detector. We filtered all shapes of a certain color (default: red) and extracted their contours. Then we tried to convert each contour to a specific polygon by removing excess points (contour points whose normals are similar to those of their neighbors) and matched the resulting polygon – we specifically searched for triangles.

Our detection worked quite well, especially after we fine-tuned the polygon conversion, but it still had flaws, which became even more frequent in poor lighting conditions. A sketch of the pipeline follows.
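The sketch below follows the pipeline described above (color filtering, contour extraction, polygon matching). For brevity it stands in OpenCV’s approxPolyDP for our custom excess-point removal, and the HSV thresholds and 2% epsilon are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public class TriangleDetector {
        // Find red, roughly triangular contours in an RGBA camera frame.
        public static List<MatOfPoint> findTriangles(Mat rgbaFrame) {
            Mat rgb = new Mat(), hsv = new Mat();
            Imgproc.cvtColor(rgbaFrame, rgb, Imgproc.COLOR_RGBA2RGB);
            Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

            // Red wraps around hue 0, so threshold both ends and combine.
            Mat lowRed = new Mat(), highRed = new Mat(), redMask = new Mat();
            Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lowRed);
            Core.inRange(hsv, new Scalar(170, 100, 100), new Scalar(180, 255, 255), highRed);
            Core.bitwise_or(lowRed, highRed, redMask);

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(redMask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            List<MatOfPoint> triangles = new ArrayList<>();
            for (MatOfPoint contour : contours) {
                MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
                MatOfPoint2f approx = new MatOfPoint2f();
                double epsilon = 0.02 * Imgproc.arcLength(curve, true);
                Imgproc.approxPolyDP(curve, approx, epsilon, true);
                if (approx.total() == 3) { // exactly three corners remain
                    triangles.add(new MatOfPoint(approx.toArray()));
                }
            }
            return triangles;
        }
    }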

Aiming for better results, we implemented a new marker detector that searches for circle-shaped targets. The new detector filtered all shapes in two different colors and then looked for circles with overlapping centers. This detector proved to be far more accurate and resistant to changes in camera view and lighting (a sketch follows).
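A possible sketch of the two-color, overlapping-center idea: each color is thresholded separately, blob centers are taken from contour moments, and a target is reported where an inner-color blob and an outer-color blob share (nearly) the same center. The color ranges and the pixel tolerance are left here as assumed parameters.

    import java.util.ArrayList;
    import java.util.List;

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.Point;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;

    public class CircleTargetDetector {
        // Centers of the blobs found in one color range of an HSV frame.
        private static List<Point> blobCenters(Mat hsv, Scalar lower, Scalar upper) {
            Mat mask = new Mat();
            Core.inRange(hsv, lower, upper, mask);
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            List<Point> centers = new ArrayList<>();
            for (MatOfPoint c : contours) {
                Moments m = Imgproc.moments(c);
                if (m.get_m00() > 0) {
                    centers.add(new Point(m.get_m10() / m.get_m00(),
                                          m.get_m01() / m.get_m00()));
                }
            }
            return centers;
        }

        // A target is reported wherever a blob of the inner color and a blob
        // of the outer color have centers within 'tolerance' pixels.
        public static List<Point> findTargets(Mat hsv,
                                              Scalar innerLo, Scalar innerHi,
                                              Scalar outerLo, Scalar outerHi,
                                              double tolerance) {
            List<Point> targets = new ArrayList<>();
            List<Point> outers = blobCenters(hsv, outerLo, outerHi);
            for (Point inner : blobCenters(hsv, innerLo, innerHi)) {
                for (Point outer : outers) {
                    double dx = inner.x - outer.x, dy = inner.y - outer.y;
                    if (Math.sqrt(dx * dx + dy * dy) < tolerance) {
                        targets.add(inner);
                    }
                }
            }
            return targets;
        }
    }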

Today, the app supports the detection of both markers.

  • Color picking – colors may appear different under different lighting conditions, which may prevent the markers from being detected. Therefore, we implemented a color adjustment method allowing the user to pick the desired color under any lighting conditions (see the sketch after this list).
  • White balancing – for the same problem we implemented a second solution using white balancing. The user can adjust the app’s color scale by taking a picture of a white surface under any lighting conditions. The app in turn uses the given image to define the new white as the baseline for its color scale (see the sketch after this list).
  • Canny edge detector – we used the Canny edge detector to transform the faces of “eliminated” threats. Viewing only the edges creates a ghost-like illusion (see the sketch after this list).
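A minimal sketch of the color-picking idea, assuming the frame has already been converted to HSV and that a fixed tolerance window around the touched pixel is acceptable; the window sizes are illustrative, and hue wrap-around near red is ignored here.

    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.core.Scalar;

    public class ColorPicker {
        // Sample the HSV value under the user's touch and build a tolerance
        // window around it, to be used as the new marker color range.
        public static Scalar[] rangeAround(Mat hsvFrame, Point touch) {
            double[] hsv = hsvFrame.get((int) touch.y, (int) touch.x);
            return new Scalar[] {
                new Scalar(Math.max(hsv[0] - 10, 0),
                           Math.max(hsv[1] - 60, 0),
                           Math.max(hsv[2] - 60, 0)),
                new Scalar(Math.min(hsv[0] + 10, 180),
                           Math.min(hsv[1] + 60, 255),
                           Math.min(hsv[2] + 60, 255))
            };
        }
    }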
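For white balancing, a common realization of the white-reference idea is to scale each channel so the sampled patch becomes neutral. The sketch below uses that per-channel gain approach directly in RGB, which may differ from the color space we ultimately preferred, and assumes the reference patch is not pure black.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Rect;
    import org.opencv.core.Scalar;

    public class WhiteBalance {
        private Scalar gains = new Scalar(1, 1, 1, 1);

        // Average the region the user photographed as "white" and compute
        // per-channel gains that map it to a neutral gray.
        public void calibrate(Mat rgbaReference, Rect whitePatch) {
            Scalar mean = Core.mean(rgbaReference.submat(whitePatch));
            double target = (mean.val[0] + mean.val[1] + mean.val[2]) / 3.0;
            gains = new Scalar(target / mean.val[0],
                               target / mean.val[1],
                               target / mean.val[2],
                               1.0);
        }

        // Apply the stored gains to every subsequent frame.
        public Mat apply(Mat rgbaFrame) {
            Mat balanced = new Mat();
            Core.multiply(rgbaFrame, gains, balanced);
            return balanced;
        }
    }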
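The ghost effect reduces to running Canny on the eliminated face region and painting the edge map back into the displayed frame; the thresholds below are illustrative.

    import org.opencv.core.Mat;
    import org.opencv.core.Rect;
    import org.opencv.imgproc.Imgproc;

    public class GhostEffect {
        // Replace an "eliminated" face region with its Canny edges so only
        // an outline of the face remains visible.
        public static void apply(Mat grayFrame, Mat displayFrame, Rect faceRegion) {
            Mat edges = new Mat();
            Imgproc.Canny(grayFrame.submat(faceRegion), edges, 80, 160);

            Mat edgesRgba = new Mat();
            Imgproc.cvtColor(edges, edgesRgba, Imgproc.COLOR_GRAY2RGBA);
            edgesRgba.copyTo(displayFrame.submat(faceRegion));
        }
    }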

Results

Below, the app’s features are demonstrated using different images captured by the phone.

A demo video showing real-time performance and usability is attached to this document.

Figure 1 – Full display: Color vision, Textual information, Face and Marker detection
Figure 2 – Triangle Marker detection challenges under different lighting
Figure 3 – Circle Target Marker achieving better results
Figure 4 – Multiple Face detection and Tracking (with / without markers)
Figure 5 – Eliminated “enemy”
Figure 6 – Eliminated “enemy”

Limitations and failures

Hoping to improve the app’s performance, we tried to use OpenGL for graphics. However, we weren’t able to integrate our OpenGL code with the rest of the app, because using OpenCV and OpenGL together proved to be quite a challenge (especially when both libraries attempt to use the camera).

Without OpenGL, adapting the app for Google Cardboard, as we initially intended, wasn’t feasible. Since this was our least important feature (it poses no image processing challenge), it was left for last and eventually dropped.

Bibliography

Halil Demirezen, Mehmet Baran “Triangle detection with color information” (2010)

Feng Xiao, Joyce E. Farrell, Jeffrey M. DiCarlo, Brian A. Wandell “Preferred Color Spaces for White Balancing” (2003)