Using image processing to monitor coffee usage

Image processing techniques have been applied in many different fields, such as identification in security systems, space exploration and automated machines in factories. This project applies image processing to monitor the level of a non-transparent fluid in a transparent container, in this case coffee in a glass pot at a coffee machine. If the level of coffee is measured over a period of time, valuable statistical information can be extracted. This information can be used to monitor the workload of the coffee machine and determine the current status of the coffee pot. It can even be used to predict when the next fresh pot of coffee will be made, by examining the intervals between refills!

Image processing is chosen over methods involving the weight of the container or other methods requiring mechanical hardware. The system developed in this project takes the onus off coffee machine manufacturers to implement monitoring systems in their machines. Image processing also makes the system more adaptable, as it overcomes many of the physical obstacles that mechanical methods may encounter.

Three stages are involved in the use of the system, namely setup, calibration and operation. The setup stage involves the physical installation of a computer and attached camera to monitor the coffee machine. The calibration stage requires a user to supply the system with a background image and an image of a full pot. Since these images change each time the camera or the coffee machine is moved, recalibration may be necessary from time to time. After calibration the system enters the operation stage, where it continually captures images of the coffee pot and takes measurements. These measurements and the times at which they were made are stored in a database. The data in the database can then be accessed and processed to determine any statistical information a user may require.

1.2 The work this project is based on

Image and signal processing techniques are widely used in several communication and automation systems. Most of the techniques used during the development of this project are already standardised and well established in the industry. This project builds upon standardised techniques as far as possible to help ensure a high quality end result.

Literature explaining these techniques is abundant; the text chosen for this project is Gonzales and Woods’ Digital Image Processing. It was found to be the most comprehensive treatment available, and readers interested in image processing are encouraged to acquire a copy.

1.3 Project objectives

A number of objectives were pursued in the course of this project and can be summarised as follows:

·  Locate a coffee pot in an image containing other objects in the background.

·  Measure the amount of coffee in a pot using a camera attached to a computer and applying image processing techniques.

·  Create a system which can monitor coffee usage and calculate statistics.

·  Create a web interface to control the measuring system and view its results.

This thesis will examine the suitability, effectiveness and practicality of different methods of achieving these objectives.

1.4 Contributions made by this work

The contributions made by this project are the following:

·  A method for determining the level of fluid in a container from an image in the presence of background noise, changing light conditions and other practical problems.

·  An example of the practical application of selected image processing techniques.

·  A demonstration of how a web interface can be used effectively to control a remote system by means of CGI.

1.5 Thesis overview

Chapter 2 explains the techniques used in this project. Some are standard image processing techniques, while others were developed during the course of this project.

Chapter 3 demonstrates the application of the techniques detailed in Chapter 2 to the problem of measuring coffee levels with a camera.

Chapter 4 gives an overview of the entire coffee usage monitoring system, which combines the techniques discussed in Chapter 3 with a web interface. The programming languages and software used throughout the system development are mentioned. Some practical complications encountered during implementation are also discussed.

Chapter 5 discusses the main experiments and tests conducted during the project development.

Chapter 6 draws some conclusions from this project and makes recommendations for future development.


Chapter 2

Theoretical work and techniques

2.1  Introduction

This chapter explains all the techniques used in this project in order to clarify the ensuing discussions in Chapter 3. The explanations are accompanied by simple examples.

The standardised techniques covered in this chapter are two-dimensional correlation as well as the subtraction, thresholding and edge detection of images. Some non-standard techniques were developed during this project and are discussed in this chapter as well. These are an algorithm for finding the largest area of contiguous pixels in a binary image and an algorithm for finding the point of maximum symmetry in an image.

2.2 Standardised techniques

This section discusses the standardised techniques used in the project.

2.2.1 Subtraction of images

An image can be represented as a matrix where each element represents the grey level of the corresponding pixel. Given this representation, image subtraction becomes a simple matrix operation. If two images are represented by As and At then this operation can be written as

IMAGE1 – IMAGE2 = As – At. (2.1)

This operation is discussed at greater length by Gonzales and Woods [1].
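As a minimal sketch of this operation (in NumPy; the thesis does not prescribe a particular language or library, so this choice is an assumption), image subtraction is a single element-wise matrix difference:

```python
import numpy as np

# Two small grey-level test "images" as matrices. A signed type is used
# so that the difference can hold negative values.
As = np.array([[100, 150],
               [200, 250]], dtype=np.int16)
At = np.array([[ 90, 160],
               [200, 100]], dtype=np.int16)

# Equation 2.1: image subtraction is an element-wise matrix difference.
diff = As - At

# Pixels with a large absolute difference changed between the two images.
changed = np.abs(diff) > 20
print(diff)
print(changed)
```

Subtracting a stored background image from a newly captured one in this way highlights the regions where the scene changed.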

2.2.2 Thresholding of images

Image thresholding is the process of applying a hard limiter to the grey level of each pixel in an image. If matrix A represents an image then thresholding, producing a binary image B, can be written as

B(x,y) = { 1  if A(x,y) > m
         { 0  if A(x,y) <= m (2.2)

where m is the grey level threshold. Figure 2.1(a) shows a test image with the grey levels indicated on the figure. Figure 2.1(b) shows the limiter function. Setting m = 150 and applying the limiter to the test image produces the binary image in figure 2.1(c).


Figure 2.1: The result of thresholding an image. (a) A test image with grey levels indicated by the numbers on the figure. (b) The thresholding function. (c) Thresholded image with white indicating 1 and black indicating 0.

Image thresholding is discussed by Gonzales and Woods [2].
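The hard limiter of equation 2.2 can be sketched as follows (a NumPy illustration; the grey levels below are invented for the example, not taken from figure 2.1):

```python
import numpy as np

def threshold(image, m):
    """Equation 2.2: output 1 where the grey level exceeds the
    threshold m, and 0 elsewhere, producing a binary image."""
    return (image > m).astype(np.uint8)

# An illustrative grey-level image (values chosen for the example).
test = np.array([[ 50, 120, 200],
                 [ 30, 160, 220],
                 [ 10, 140, 240]])

binary = threshold(test, 150)   # m = 150, as in figure 2.1
print(binary)
```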

2.2.3 Detection of edges in an image

Edges in images are indicated by sharp jumps in grey level: the sharper the jump, the clearer the edge. For example, if an image depicts a black pot in front of a white wall then the edges of the pot can easily be found, but if the pot were off-white the edges would be less clear.

Figure 2.2(a) below is a test image with its edges shown in figures 2.2(b)-(d). Figure 2.2(b) (edges in horizontal direction) and figure 2.2(c) (edges in vertical direction) are the results of applying the filters in figures 2.2(e) and (f) respectively. Figure 2.2(d) is the sum of figure 2.2(b) and (c) and shows all the edges of the test image.


Figure 2.2: Edge detection. (a) A test image. (b)The horizontal edges of the test image. (c) The vertical edges of the test image. (d) The sum of (b) and (c) shows all the edges of the test image. (e) The filter applied to the test image to obtain (b). (f) The filter applied to the test image to obtain (c).

The filters in figure 2.2(e) and (f) are applied to each pixel in the test image. The way the filter in figure 2.2(e), for example, is applied to a point T(x,y) is clarified by the following equation:

Eh(x,y) = -1·T(x-1,y-1) - 2·T(x,y-1) - 1·T(x+1,y-1)
        + 0·T(x-1,y)   + 0·T(x,y)   + 0·T(x+1,y)
        + 1·T(x-1,y+1) + 2·T(x,y+1) + 1·T(x+1,y+1) (2.3)

where T is the test image in figure 2.2(a) and Eh is the horizontal edges image in figure 2.2(b). Edge detection is discussed in more detail by Gonzales and Woods [3].
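A direct implementation of equation 2.3 might look as follows (a NumPy sketch; the test image is an invented bright square on a dark background, not the image of figure 2.2):

```python
import numpy as np

# The horizontal-edge filter of figure 2.2(e); the vertical-edge filter
# of figure 2.2(f) is its transpose.
Kh = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]])

def apply_filter(T, K):
    """Apply a 3x3 filter as in equation 2.3: each output pixel is the
    weighted sum of the 3x3 neighbourhood of the corresponding input
    pixel. Border pixels are left at zero for simplicity."""
    H, W = T.shape
    E = np.zeros_like(T, dtype=float)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            E[y, x] = sum(K[j + 1, i + 1] * T[y + j, x + i]
                          for j in (-1, 0, 1) for i in (-1, 0, 1))
    return E

# Test image: a bright square on a dark background.
T = np.zeros((8, 8))
T[2:6, 2:6] = 255.0

Eh = apply_filter(T, Kh)     # horizontal edges (top and bottom of square)
Ev = apply_filter(T, Kh.T)   # vertical edges (left and right of square)
edges = np.abs(Eh) + np.abs(Ev)
print(edges)
```

The absolute values are combined so that edges of both polarities appear, just as figure 2.2(d) combines the two edge images.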

2.2.4 Two-dimensional correlation

The convolution of two one-dimensional signals is intuitively described as a function giving the size of the areas that overlap as one signal is slid over a mirror image of the other. For one-dimensional signals convolution is defined as

c(n) = Σk f(k) g(n - k). (2.4)

The values of the resulting function c(n) each correspond to one possible way of overlapping the two signals. Figure 2.3(a) shows a square signal to be convolved with a translated version of itself in figure 2.3(b). The result of this convolution is shown in figure 2.3(c).


Figure 2.3: One-dimensional convolution. (a) A square signal f(t). (b) A translated version of f(t). (c) The result of convolving f(t) with g(t).

Note that the position of the peak value in figure 2.3(c) is the same as the centre position of the square in figure 2.3(b). This result shows that a known signal form can be found in a longer signal using convolution.
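The peak-finding idea of figure 2.3 can be reproduced numerically (a NumPy sketch with an invented pulse and signal):

```python
import numpy as np

# A square pulse (the known signal) and a longer signal containing a
# translated copy of it, as in figure 2.3.
pulse = np.ones(5)
signal = np.zeros(40)
signal[20:25] = 1.0            # copy of the pulse at positions 20..24

# Convolve the signal with the mirrored pulse (a no-op here, since the
# pulse is symmetrical). The output is triangular with a single peak.
c = np.convolve(signal, pulse[::-1])
peak = int(np.argmax(c))

# In discrete full-length convolution the peak is offset from the pulse
# centre by fix(len(pulse)/2); subtracting it recovers the centre.
centre = peak - len(pulse) // 2
print(peak, centre)
```

Note that the discrete, full-length convolution places the peak half a pulse length past the pulse centre; equation 2.8 removes the analogous offset in the two-dimensional case.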

The signals in the above example are symmetrical, but if they were not, one of them would have to be mirrored before convolving to obtain the peak in figure 2.3(c). The reason is that the convolution operation mirrors one of the signals before sliding. A mirrored asymmetrical signal can never match the original under translation alone, so the peak value in the example would never be found. If, however, the signal is mirrored beforehand and then mirrored again by the convolution operation, the signals will match perfectly at some point as they slide over one another.

One may argue that the convolution can then be replaced by correlation

r(n) = Σk f(k) g(n + k) (2.5)

and the twofold mirroring can thus be avoided. After all, correlation is the same as convolution except for the mirroring step that convolution starts with. The reason convolution is used instead, however, is that unlike correlation, the convolution of two signals can be computed efficiently on a computer using the Fast Fourier Transform (FFT). Both convolution and correlation require many multiplications and are therefore computationally expensive when computed directly. Convolution, however, can be calculated in the Fourier domain using

f ∗ g = F^-1{ F{f} · F{g} } (2.6)

which requires much less processing power. The proof of equation 2.6 is found in [5]. From this point onward correlation will be the subject of discussion, but bear in mind that it is implemented using convolution and the FFT.
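The equivalence stated in equation 2.6 is easy to verify numerically (a NumPy sketch; zero-padding to the full convolution length is needed because the FFT computes circular convolution):

```python
import numpy as np

# Two arbitrary test signals.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

direct = np.convolve(f, g)                 # O(N^2) direct convolution
N = len(f) + len(g) - 1                    # full convolution length
via_fft = np.fft.irfft(np.fft.rfft(f, N) * np.fft.rfft(g, N), N)

print(np.max(np.abs(direct - via_fft)))    # agreement to rounding error
```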

One-dimensional correlation is a well-known operation in engineering, but two-dimensional correlation is less intuitive. As in the one-dimensional case, two-dimensional correlation gives the area under the overlapping region of two functions. Here, however, the correlation is a function of two variables, each representing a dimension (or axis) of the functions being correlated. The following equation defines two-dimensional correlation:

c(m,n) = Σx Σy f(x,y) g(x + m, y + n). (2.7)

If an image is represented as a two-dimensional function of x and y coordinates then the position of a known image portion can be found in a larger image using two-dimensional correlation. This works exactly as it did with the one-dimensional case. Correlate the larger image with a centred version of the known image portion and look for the peak value in the resulting function. The position of the peak will reveal the translation of the image portion in the larger image.

The matrices in figure 2.4 represent discrete two-dimensional functions or images. Figure 2.4(a) shows the known matrix portion centred at the origin of its axes. Figure 2.4(b) shows the larger image containing a translated version of the known matrix portion. Figure 2.4(c) shows the result of correlating the matrix in figure 2.4(a) with the matrix in figure 2.4(b).

(a)
1  2  3
7  4  2
9  4  2

(b)
0  0  0  0  0  0
0  0  0  0  0  0
0  1  2  3  0  0
0  7  4  2  0  0
0  9  4  2  0  0
0  0  0  0  0  0

(c)
 0   0   0    0   0   0   0   0
 0   0   0    0   0   0   0   0
 0   2   8   23  30  27   0   0
 0  16  44  104  70  39   0   0
 0  35  88  184  88  35   0   0
 0  39  70  104  44  16   0   0
 0  27  30   23   8   2   0   0
 0   0   0    0   0   0   0   0

Figure 2.4: Two-dimensional correlation. (a) A known matrix portion. (b) A larger matrix containing a translated version of the matrix portion in (a). (c) The two-dimensional correlation of the matrices in (a) and (b).

Note that the peak value of the matrix in figure 2.4(c) is at row 5, column 4. This position can be used to obtain the position of the known image portion in the matrix of figure 2.4(b). Suppose the peak value in the correlation result occurs at column xp and row yp. The centre of the known matrix portion in the larger matrix is then located at

xt = xp - fix(Wc/2)
yt = yp - fix(Hc/2) (2.8)

where Wc and Hc are the width and height respectively of the known image portion. The fix function discards the fractional part of a decimal number in order to make it an integer, so that fix(2.5) = 2.

The reader may investigate two-dimensional correlation as applied to image processing by referring to [7].
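The whole procedure of this section, correlation computed as FFT-based convolution with a mirrored template, followed by equation 2.8, can be sketched on the matrices of figure 2.4 (a NumPy illustration):

```python
import numpy as np

# The known matrix portion of figure 2.4(a) and the larger matrix of
# figure 2.4(b) containing a translated copy of it.
portion = np.array([[1, 2, 3],
                    [7, 4, 2],
                    [9, 4, 2]], dtype=float)
larger = np.zeros((6, 6))
larger[2:5, 1:4] = portion

# Correlation implemented as convolution with the mirrored portion,
# computed in the Fourier domain (equation 2.6). Zero-padding to the
# full output size avoids circular wrap-around.
Hc, Wc = portion.shape
shape = (larger.shape[0] + Hc - 1, larger.shape[1] + Wc - 1)
corr = np.fft.irfft2(np.fft.rfft2(larger, shape) *
                     np.fft.rfft2(portion[::-1, ::-1], shape), shape)

# The peak of the correlation reveals the portion's position.
yp, xp = np.unravel_index(np.argmax(corr), corr.shape)
yp, xp = yp + 1, xp + 1        # 1-based row/column, as in the text

# Equation 2.8: remove the half-size offset to locate the portion.
yt = yp - Hc // 2              # Hc // 2 plays the role of fix(Hc/2)
xt = xp - Wc // 2
print(round(corr.max()), (yp, xp), (yt, xt))
```

The peak value 184 at row 5, column 4 matches figure 2.4(c), and removing the offset gives row 4, column 3, the centre of the portion in figure 2.4(b).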

2.3 Techniques developed during this project

The following techniques have been developed in the course of this project.

2.3.1  Finding the largest area of contiguous pixels in a binary image

The idea of the algorithm described in this section is to find the largest area of non-zero pixels in an image and discard the smaller areas.
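One straightforward way to realise this idea is breadth-first flood fill over connected components (a NumPy/Python sketch illustrating the stated goal, not necessarily the algorithm developed in the following pages):

```python
import numpy as np
from collections import deque

def largest_area(binary):
    """Keep only the largest 4-connected area of non-zero pixels in a
    binary image; all smaller areas are set to zero."""
    H, W = binary.shape
    labels = np.zeros((H, W), dtype=int)   # 0 means "not yet labelled"
    sizes = {}
    label = 0
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] and not labels[sy, sx]:
                # New area found: flood-fill it breadth-first.
                label += 1
                labels[sy, sx] = label
                queue = deque([(sy, sx)])
                size = 0
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < H and 0 <= nx < W and
                                binary[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = label
                            queue.append((ny, nx))
                sizes[label] = size
    if not sizes:
        return np.zeros_like(binary)
    biggest = max(sizes, key=sizes.get)
    return (labels == biggest).astype(binary.dtype)

# Two areas of sizes 4 and 3; only the larger one survives.
img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 1]], dtype=np.uint8)
print(largest_area(img))
```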