Team G: Registering Retinal Images

Project Documentation

1. Introduction

The "Retinal Images Registration" project tries to solve a problem of registering and merging two retinal images from the same person.

The problem

We are given two retinal images of the same eye that cover partially overlapping regions. Our objective is to find the best possible registration between the two images and merge them into one.

Fig. 1: An example of two retinal images from the same eye

Several registration techniques exist, most of them based on features extracted from the images. In our case it is natural to match features derived from the retinal vessels: we enhance and skeletonize the vessels, find the bifurcation points and structures, and use them for the registration.

Main solution steps

The main algorithm steps are as follows:

  1. Background masking
  2. Vessel extraction and skeletonization
  3. Image registration

2. Algorithm description

Background masking

Background masking is a preprocessing step whose output is used by the subsequent algorithm stages.

The background mask is found in the following simple manner:

  1. Mark all pixels with intensity below 10 (corresponding to the black background and the text subtitle) and all pixels with intensity above 245 (corresponding to the bright text subtitle). This is the initial mask.
  2. The initial mask is dilated by 20 pixels (4-connectivity) to compensate for intermediate shades and false negatives in the initial mask (a sketch of the whole procedure is given below).
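
A minimal sketch of this procedure in Python, assuming an 8-bit grayscale input and using SciPy's binary dilation (whose default structuring element is 4-connected); the function name background_mask is illustrative, while the thresholds and the 20-pixel dilation follow the values above.

import numpy as np
from scipy import ndimage

def background_mask(gray: np.ndarray) -> np.ndarray:
    """Rough background mask for an 8-bit grayscale retinal image."""
    # Step 1: initial mask of very dark (background) and very bright
    # (text subtitle) pixels
    initial = (gray < 10) | (gray > 245)
    # Step 2: dilate by 20 pixels (4-connectivity) to absorb intermediate
    # shades and small false negatives in the initial mask
    return ndimage.binary_dilation(initial, iterations=20)  # True = background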

Fig. 2: Results of background masking. Left: original image. Right: masked image (white = background; black = foreground).

Vessel extraction and skeletonization

The vessel extraction algorithm is explained in detail in [1]. It enhances vessels of varying widths while suppressing other structures such as stains and blobs.
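
The sketch below illustrates this step with scikit-image, whose frangi filter implements the multiscale vessel enhancement of [1] and whose skeletonize performs the thinning; the vesselness threshold of 0.05 is an illustrative assumption rather than a value from this report.

import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def extract_vessel_skeleton(gray: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Enhance the vessels and thin them to a one-pixel-wide skeleton."""
    vesselness = frangi(gray.astype(float))  # multiscale vessel enhancement [1]
    vesselness[background] = 0               # ignore the masked background
    binary = vesselness > 0.05               # assumed threshold, tune per data
    return skeletonize(binary)               # vessel centrelines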

Fig. 3: Results of vessel enhancement (above) and thinning (below).

Image registration

The registration algorithm is explained in detail in [2]. It is based on a stable bifurcation structure, which is equivalent to an ordered 10-element vector (see Fig. 4).

Fig. 4: Bifurcation structure.

In the first step of the algorithm, the bifurcation structures are found in both images. The most similar pair of bifurcation structures across the two images is then selected using the L1 distance. The four matched points allow us to estimate an initial affine transformation. This transformation is then used to locate further corresponding bifurcation structures and thereby refine the registration, as sketched below.
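
A minimal sketch of the matching and estimation steps, assuming the 10-element structure vectors and the bifurcation-point coordinates have already been extracted; the function names and the plain least-squares fit are illustrative and do not reproduce the exact formulation of [2].

import numpy as np

def match_structures(desc_a: np.ndarray, desc_b: np.ndarray) -> tuple:
    """Indices (i, j) of the most similar pair of bifurcation structures.

    desc_a: (Na, 10) and desc_b: (Nb, 10) arrays of the 10-element vectors;
    similarity is measured with the L1 distance, as described above.
    """
    d = np.abs(desc_a[:, None, :] - desc_b[None, :, :]).sum(axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def estimate_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine matrix mapping src_pts onto dst_pts (N >= 3)."""
    X = np.hstack([src_pts, np.ones((len(src_pts), 1))])  # homogeneous coords
    A_t, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    return A_t.T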

References

[1] “Multiscale vessel enhancement filtering”, Frangi et al, 1998

[2] “Feature-Based Retinal Image Registration Using Bifurcation Structures”, Chen & Zhang, 2009
