We Limit Our Discussion to Memoryless Quantizers

Section 3.3

Quantization

Quantization represents the sampled data by a finite number of levels, chosen according to some meaningful criterion such as minimization of the quantizer distortion.

Quantizer design specifies the input (decision) levels, the output (reconstruction) levels, and the number of levels. The design can be enhanced by exploiting psychovisual or psychoacoustic perception.

Quantizers can be classified as memoryless (each sample is quantized independently) or with memory (the quantizer takes previous samples into account).

We limit our discussion to memoryless quantizers.

An alternative classification of quantizers is based on uniform versus non-uniform quantization. They are defined as follows.

Uniform quantizers: completely defined by (1) the number of levels, (2) the step size, and (3) whether the quantizer is midriser or midtread. We will consider only symmetric quantizers, i.e. the input and output levels in the 3rd quadrant are the negatives of those in the 1st quadrant.

Non-uniform quantizers: the step sizes are not constant. Hence non-uniform quantization is specified by the input and output levels in the 1st and 3rd quadrants.

Figure (3.6) shows the uniform and non-uniform quantizers.

Quantizer design

Given the number of input or output levels, quantizer design involves minimizing any meaningful quantizer distortion measure such as:

  1. Mean square quantization error (MSQE), defined as

     MSQE = E[(f − f′)²] = ∫ (f − f′)² p(f) df

     where the input f ranges from f_min to f_max, f′ is the quantizer output, and p(f) is the probability density function of f.

  2. Mean absolute quantization error (MAQE): MAQE = E[|f − f′|].

  3. Mean m-th norm quantization error: E[|f − f′|^m], where m is any positive integer.

  4. Weighted quantization error: E[w(f)(f − f′)²], where w(f) is the weighting function.
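As an illustration (not from the text), the sketch below estimates the MSQE and MAQE of a simple mid-riser uniform quantizer empirically; the step size and the uniform input density are assumptions for the example.

```python
import numpy as np

def quantize_uniform(f, step):
    # Mid-riser uniform quantizer: each sample maps to the centre of its cell.
    return (np.floor(f / step) + 0.5) * step

rng = np.random.default_rng(0)
f = rng.uniform(-1.0, 1.0, 200_000)      # samples with an assumed uniform p(f)
fq = quantize_uniform(f, step=0.25)

msqe = np.mean((f - fq) ** 2)            # mean square quantization error
maqe = np.mean(np.abs(f - fq))           # mean absolute quantization error
print(msqe, maqe)                        # near step**2/12 and step/4
```

For a uniform error over each cell these estimates approach step²/12 and step/4, which is consistent with the uniform-quantizer results derived later in the section.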

Section-1

Non-Uniform Quantization

  • Lloyd-Max Quantizer

In this case quantizers are designed by minimizing the MSQE, and tables have been developed for inputs governed by standard distribution functions such as the gamma, Laplacian, Gaussian, Rayleigh, and uniform distributions. Quantizers can also be designed tailored to histograms. For a well designed predictor, the histogram of the prediction errors tends to follow a Laplacian distribution.

Given the range of the input f as (f_min, f_max) and the number of output levels as J, we design the quantizer such that the MSQE is minimum.

Let t_k = decision level (k-th level)

and r_k = reconstruction level (k-th level).

This is shown in the figure above.

If t_k ≤ f < t_(k+1), then f′ = r_k, and the mean square error is

ε = E[(f − f′)²] = ∫ from t_1 to t_(J+1) of (f − f′)² p(f) df

where p(f) = probability density function of the random variable f.

The error can also be written as

ε = Σ_(k=1..J) ∫ from t_k to t_(k+1) of (f − r_k)² p(f) df

The t_k and r_k are the variables. To minimize ε we set ∂ε/∂t_k = 0 and ∂ε/∂r_k = 0:

∂ε/∂t_k = (t_k − r_(k−1))² p(t_k) − (t_k − r_k)² p(t_k) = 0

Since p(t_k) ≠ 0 and r_(k−1) < t_k ≤ r_k, the solution that is valid is

t_k − r_(k−1) = −(t_k − r_k)

or

t_k = (r_k + r_(k−1))/2

i.e. the decision level t_k is the average of the two adjacent output levels. Also

∂ε/∂r_k = −2 ∫ from t_k to t_(k+1) of (f − r_k) p(f) df = 0

or

∫ from t_k to t_(k+1) of (f − r_k) p(f) df = 0

Hence

r_k = ( ∫ from t_k to t_(k+1) of f p(f) df ) / P_k,   where   P_k = ∫ from t_k to t_(k+1) of p(f) df

The output level r_k is the centroid of p(f) between the adjacent decision levels. This solution is not in closed form: to find the decision level t_k one must know r_k and r_(k−1), and vice versa. However, by iterative techniques both the t_k and the r_k can be found, e.g. using Newton's method.

When the number of output levels J is large, the quantizer design can be approximated as follows. Assume p(f) is constant over each quantization interval, i.e. a piecewise constant function:

As shown in Figure (3.7), we have p(f) ≈ p((t_k + t_(k+1))/2) for t_k ≤ f < t_(k+1).

Substituting this in the centroid condition, we have

r_k = (t_k + t_(k+1))/2

i.e. each reconstruction level lies midway between the two adjacent decision levels.

Section 2

Uniform Quantizer:

When p(f) = 1/(t_(J+1) − t_1), i.e. the input density is uniform, as shown,

then

r_k = (t_k + t_(k+1))/2

Since t_k = (r_k + r_(k−1))/2 (obtained by substituting for r_k),

then

t_(k+1) − t_k = t_k − t_(k−1) = q, a constant step size,

and

r_k − r_(k−1) = q

i.e. all decision and reconstruction levels are equally spaced.

Since the quantization error e = f − f′ is distributed uniformly over each step size q with zero mean, i.e. over (−q/2, q/2), the MSQE is

σ_e² = ∫ from −q/2 to q/2 of e² (1/q) de = q²/12

Let the range of f be A and σ_f² its variance; for the uniform density, σ_f² = A²/12.

A b-bit quantizer can represent J = 2^b output levels, so the quantization step size is

q = A/2^b

Hence MSQE = q²/12 = (A²/12) 2^(−2b), and the SNR for the uniform quantizer is

SNR = 10 log10(σ_f²/σ_e²) = 10 log10(2^(2b)) ≈ 6b dB
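The 6 dB/bit behaviour can be checked numerically; the sketch below (an illustration, with an assumed uniform input of range A = 2) measures the SNR of b-bit uniform quantizers.

```python
import numpy as np

rng = np.random.default_rng(2)
A = 2.0                                   # assumed input range (-A/2, A/2)
f = rng.uniform(-A / 2, A / 2, 500_000)
sigma2 = np.var(f)                        # signal variance, about A**2/12

snrs = []
for b in (4, 6, 8):
    q = A / 2 ** b                        # step size of a b-bit uniform quantizer
    fq = (np.floor(f / q) + 0.5) * q      # mid-riser quantization
    msqe = np.mean((f - fq) ** 2)         # about q**2/12
    snrs.append(10 * np.log10(sigma2 / msqe))

print([round(s, 2) for s in snrs])        # close to 6.02*b for each b
```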

  • Properties of optimum mean square quantizers:

  1. The quantizer output is an unbiased estimate of the input, i.e. E[f′] = E[f].

(Recall that the SNR of the uniform quantizer increases by 6 dB per bit.)

  2. The quantization error is orthogonal to the quantizer output, i.e. E[(f − f′) f′] = 0: the quantization noise is uncorrelated with the quantized output.

  3. The variance of the quantizer output is reduced by the factor 1 − f(B), where f(B) denotes the mean square distortion of the B-bit quantizer for unity-variance inputs, i.e.

σ_f′² = (1 − f(B)) σ_f²

  4. It is sufficient to design mean square quantizers for zero-mean, unity-variance distributions.

Example: Suppose the output f of an image sensor takes values from 0 to 10. If the samples are quantized uniformly to 256 levels, the decision and reconstruction levels are:

t_k = 10(k − 1)/256 for k = 1, 2, ..., 257

and r_k = t_k + q/2 with q = 10/256. Note t_1 = 0 and t_257 = 10, as the range is (0, 10).
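The levels of this example can be computed directly; a small sketch:

```python
import numpy as np

J = 256                          # number of output levels (an 8-bit quantizer)
lo, hi = 0.0, 10.0               # sensor range
q = (hi - lo) / J                # step size, 10/256

t = lo + q * np.arange(J + 1)    # decision levels t_1 ... t_257
r = t[:-1] + q / 2               # reconstruction levels, mid-cell
print(t[0], t[-1], r[0])         # 0.0, 10.0, 10/512
```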

(1) Proof of property 1:

If P_k is the probability that f′ = r_k, i.e.

P_k = ∫ from t_k to t_(k+1) of p(f) df

then

E[f′] = Σ_(k=1..J) r_k P_k

As r_k P_k = ∫ from t_k to t_(k+1) of f p(f) df (the centroid condition), we have

E[f′] = Σ_(k=1..J) ∫ from t_k to t_(k+1) of f p(f) df = ∫ from t_1 to t_(J+1) of f p(f) df = E[f]

(2) Proof of property 2: From ∂ε/∂r_k = 0 we have ∫ from t_k to t_(k+1) of (f − r_k) p(f) df = 0 for every k. This implies

E[(f − f′) f′] = Σ_(k=1..J) r_k ∫ from t_k to t_(k+1) of (f − r_k) p(f) df = 0

An interesting model for the quantizer follows from this, i.e.

f′ = f + n_q

where n_q is the quantizer noise.

As the quantizer noise is uncorrelated with f′, we have E[f f′] = E[f′²], so we can write

E[f′²] = E[f²] − E[(f − f′)²]

Since E[(f − f′)²] ≥ 0, this implies the average power of the quantizer output is reduced by the average power of the quantization error.

Also, the quantizer noise is dependent on the input, since n_q = f′ − f is determined by f.

(3) Proof of property 3:

Since for any mean square B-bit quantizer ε = E[(f − f′)²] = f(B) σ_f² for a zero-mean input of variance σ_f², we get from the previous result

σ_f′² = σ_f² − f(B) σ_f² = (1 − f(B)) σ_f²

  • Non-Uniform Quantization:

Although uniform quantization is straightforward and appears to be a natural approach, it may not be optimal.

Suppose f is much more likely to be in one region than in others. It is then reasonable to assign more reconstruction levels to that region.

If f rarely falls between t_k and t_(k+1), the reconstruction level r_k is rarely used.

Rearranging the reconstruction levels so that they all lie in the heavily used region makes more sense. A quantizer in which the reconstruction and transition levels are not evenly spaced is called a non-uniform quantizer.

The notion that the uniform quantizer is optimal in the MMSE sense when p(f) is uniform suggests another approach. Specifically, we can transform f to g by a nonlinearity chosen in such a way that the density of g is uniform.

We then quantize g with a uniform quantizer and afterwards apply the inverse nonlinearity.

The nonlinearity is called companding. One choice of the companding nonlinearity is the distribution function of f, g = F_f(f). The resulting g is uniform in the interval (0, 1). Non-uniform quantization by companding approximately minimizes the distortion.

Section 4

  • Image Quantization:

In the case of image coding, one has to quantize many scalars. One approach is to quantize each scalar independently; this is called scalar quantization of a vector source. Suppose we have N scalars f_k, 1 ≤ k ≤ N. Each scalar f_k is quantized to L_k reconstruction levels. If L_k can be expressed as a power of 2 and each reconstruction level is coded with an equal number of bits, L_k is related to the required number of bits b_k by

L_k = 2^(b_k)   or   b_k = log2 L_k

The total number of bits B required in coding the N scalars is

B = Σ_(k=1..N) b_k

The total number of reconstruction levels L is

L = Π_(k=1..N) L_k

If we have a fixed number of bits B to code all N scalars using scalar quantization of a vector source, B has to be divided among the N scalars. The optimal strategy for this bit allocation depends on the error criterion used and on the probability density functions of the scalars. The optimal strategy typically assigns more bits to scalars with larger variance and fewer bits to scalars with smaller variance.

As an example, suppose we minimize the mean square error with respect to the b_k:

E = Σ_(k=1..N) E[(f_k − f′_k)²]

where f′_k is the result of quantizing f_k. If the probability density functions are the same for all scalars except for their variances σ_k², and we use the same quantization method for each of the scalars, an approximate solution to the bit allocation problem is:

b_k = B/N + (1/2) log2(σ_k²/ρ²)

where

ρ² = ( Π_(j=1..N) σ_j² )^(1/N)

i.e. the number of reconstruction levels for f_k is proportional to the standard deviation σ_k.

Although the above equation is an approximate solution obtained under certain specific conditions, it is useful as a reference in other bit allocation problems.

As b_k in the above equation can be negative and is not in general an integer, a constraint has to be imposed in solving the bit allocation problem such that each b_k is a non-negative integer.

To prove the above:

We find an expression for the distortion and then find the bit allocation that minimizes it, using the method of Lagrange multipliers. Let the average number of bits/sample used by the vector source be R = B/N, and the average number of bits/sample used by the k-th scalar be b_k; then

R = (1/N) Σ_(k=1..N) b_k    (1)

where N is the number of scalars in the vector source. The MSQE, i.e. the reconstruction error variance of the k-th scalar, is given by

σ_(r,k)² = h_k 2^(−2 b_k) σ_k²    (2)

where h_k is a factor that depends on the input distribution and the quantizer, and σ_k² is the variance of the scalar f_k. The total reconstruction error variance is

σ_r² = (1/N) Σ_(k=1..N) h_k 2^(−2 b_k) σ_k²    (3)

The objective of the bit allocation procedure is to find the b_k that minimize equ (3), subject to the constraint given by equ (1). Let us assume that h_k = h is a constant for all k.

We can set up the minimization problem in terms of Lagrange multipliers as

J = (1/N) Σ_(k=1..N) h 2^(−2 b_k) σ_k² + λ ( R − (1/N) Σ_(k=1..N) b_k )    (4)

Taking the derivative of J with respect to b_k and setting it equal to zero, we obtain

∂J/∂b_k = −(2 ln 2) h 2^(−2 b_k) σ_k²/N − λ/N = 0,   i.e.   2^(−2 b_k) σ_k² = −λ/(2 h ln 2) = c,

so that

b_k = (1/2) log2(σ_k²/c)    (5)

Substituting equ (5) in (1) we get a value for c:

R = (1/2N) Σ_k log2(σ_k²/c)   ⇒   c = 2^(−2R) ( Π_(j=1..N) σ_j² )^(1/N) = 2^(−2R) ρ²    (6)

Substituting (6) in (5) we get

b_k = R + (1/2) log2(σ_k²/ρ²)    (7)

It is to be noted that the b_k values are not guaranteed to be integers, or even positive. The standard approach is to set the negative b_k to zero.

This will increase the average bit rate above the constraint; therefore the nonzero b_k are uniformly reduced until the average rate is again equal to R.

Substituting for ρ² and R in equ (7), in terms of the σ_j² and B, we have

b_k = B/N + (1/2) log2 σ_k² − (1/2N) Σ_(j=1..N) log2 σ_j²
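The closed-form allocation of equ (7) can be sketched as follows (the variances are hypothetical, and the non-negative-integer adjustment described above is omitted for brevity):

```python
import numpy as np

def allocate_bits(variances, B):
    """b_k = B/N + 0.5*log2(sigma_k^2) - (1/2N)*sum_j log2(sigma_j^2)."""
    var = np.asarray(variances, dtype=float)
    return B / var.size + 0.5 * np.log2(var) - 0.5 * np.mean(np.log2(var))

b = allocate_bits([16.0, 4.0, 1.0, 1.0], B=16)   # hypothetical variances
print(b)   # larger-variance scalars get more bits; the b_k sum to B
```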

Section 3.3

  • Visual quantization

In gray scale quantization of monochrome images, if the number of quantization levels is not sufficient, a phenomenon called "contouring" becomes visible. When groups of neighbouring pixels are quantized to the same value, regions of constant gray level are formed, whose boundaries are called "contours". Uniform quantization of common images, where the pixels represent luminance, requires 8 bits (i.e. 256 gray levels). Contouring effects start becoming visible at or below 6 bits/pixel. A mean square quantizer (MSQ) matched to the histogram of the image may need only 5 to 6 bits/pixel without any visible contours. As the histograms of images vary drastically from image to image, uniform quantization with 8 bits/pixel is usually used. In evaluating quantized images, the human eye is quite sensitive to contours and to errors that affect the local structure. Note that contours do not contribute very much to the mean square error. Thus a visual quantization scheme is used to hold the quantization contours below the level of visibility over the range of luminances to be displayed.

Methods of visual quantization:

Contrast quantization: Since visual sensitivity is nearly uniform to just-noticeable changes in contrast, it is more appropriate to quantize a contrast function rather than the luminance itself.

Two nonlinear transformations used to represent contrast are

c = a u^(1/3)

or

c = α ln(1 + β u)

where u is the luminance and a, α, β are constants chosen to match visual sensitivity.

For the given contrast representation, we simply use the MMSE quantizer for the contrast field.

To display the image, the quantized contrast is transformed back to luminance value by inverse transformation.

Experimental studies indicate that a 2% change in contrast is just noticeable,

i.e. Δc/c ≈ 0.02

Therefore, if uniformly quantized, a contrast scale from 0 to 1 needs about 50 levels, i.e. 6 bits.

However, with optimum mean square quantizers, 4 to 5 bits could be sufficient.

Pseudonoise quantization is a method of suppressing contouring effects.

This is done by adding a small amount of uniformly distributed pseudo random noise to the luminance samples before quantization. This pseudo random noise is called dither.

To display the image, the same (or another) pseudo noise is subtracted from quantizer output as shown in Figure (3.8) above.

The effect is that in the regions of low luminance gradients (which are the regions of contours) the added noise causes pixels to go above or below the original decision level, thereby breaking up the contours. However, the average value of the quantized pixels is about the same with and without the additive noise. During display, the noise tends to fill in the regions of contours in such a way that the spatial average is unchanged.

The amount of dither added should be kept small enough to maintain the spatial resolution, but large enough to allow the luminance values to vary randomly about the quantizer decision levels. The noise should usually affect only the least significant bit of the quantizer. Reasonable image quality is achievable with a 3-bit quantizer.
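A sketch of this subtractive-dither scheme on a slowly varying luminance ramp (the 3-bit step, the ramp values, and the noise seed are assumptions): it shows the contour region breaking up while the spatial average is preserved.

```python
import numpy as np

rng = np.random.default_rng(4)
q = 1 / 8                                  # step of an assumed 3-bit quantizer on (0, 1)
x = np.linspace(0.30, 0.34, 10_000)        # slowly varying luminance ramp (assumed)

def quant(v):
    return (np.floor(v / q) + 0.5) * q

plain = quant(x)                           # whole ramp collapses to one level: a contour

d = rng.uniform(-q / 2, q / 2, x.size)     # dither, about one LSB wide
dithered = quant(x + d) - d                # subtract the same noise on display

# The contour is broken up while the spatial average stays close to the input.
print(np.unique(plain).size, np.unique(dithered).size > 1)
```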

Section 3.2

  • Periodic Sampling with Arbitrary Sampling Geometries:

We define two linearly independent vectors v1 and v2 and write the locations of a doubly periodic set of samples in the x-plane as

x = n1 v1 + n2 v2

Using vector notation we express these relations as x = Vn, where

n = (n1, n2)ᵀ is an integer indexing vector with n1, n2 as unitless coordinate variables, and V is a matrix made up of the sampling vectors v1 and v2, i.e. V = [v1 v2].

As v1 and v2 are linearly independent, the determinant of V is non-zero. We refer to V as the sampling matrix. The basis vectors can be interpreted as the columns of V, which maps index values n to actual sampling positions of a spatially continuous signal.

Sampling a continuous signal s_c(x) produces the discrete signal s_d(n) = s_c(Vn). The sampling locations are shown in Fig (3.2.1).
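A small sketch of the mapping x = Vn (the particular sampling vectors are hypothetical; they only need to be linearly independent):

```python
import numpy as np

# Hypothetical sampling vectors v1, v2.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.5, 1.0])                 # staggers alternate rows
V = np.column_stack([v1, v2])             # sampling matrix V = [v1 v2]
assert np.linalg.det(V) != 0              # linear independence

# Sample locations x = V n for a 3x3 grid of integer index vectors n.
n1, n2 = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
n = np.stack([n1.ravel(), n2.ravel()])    # 2 x 9 integer index vectors
x = V @ n                                 # 2 x 9 sampling positions
print(x[:, 4])                            # n = (1, 1) maps to v1 + v2 = (1.5, 1.0)
```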

At this stage we ask the questions:

  1. How are the FTs of s_c(x) and s_d(n) related?
  2. Under what circumstances can we reconstruct s_c(x) from its samples s_d(n)?

We proceed by defining the 2-D FT of s_c(x) as:

S_c(Ω) = ∫ s_c(x) e^(−j Ω·x) dx

where the integral is a double integral, because x and Ω are vectors.

The discrete-time FT, i.e. (DTFT), of s_d(n) can be written as

S_d(ω) = Σ_n s_d(n) e^(−j ω·n)

where ω = (ω1, ω2)ᵀ and n = (n1, n2)ᵀ.

Since s_d(n) is obtained from s_c(x) by sampling, we have:

s_d(n) = s_c(Vn)

Substituting this into the DTFT, we obtain

S_d(ω) = Σ_n s_c(Vn) e^(−j ω·n)

or

S_d(ω) = (1/|det V|) Σ_k S_c(V^(−T)(ω − 2πk))

This can be proved as follows:

We have,

S_d(ω) = Σ_n s_c(Vn) e^(−j ω·n)

i.e., writing s_c(Vn) through the inverse FT,

s_c(Vn) = (1/(2π)²) ∫ S_c(Ω) e^(j Ω·Vn) dΩ

i.e.,

S_d(ω) = (1/(2π)²) Σ_n ∫ S_c(Ω) e^(j (VᵀΩ − ω)·n) dΩ

Hence, changing variables to u = VᵀΩ (so that Ω = V^(−T) u and dΩ = du/|det V|),

S_d(ω) = (1/((2π)² |det V|)) Σ_n ∫ S_c(V^(−T) u) e^(j (u − ω)·n) du

Using this result we can perform the integration over u as an infinite series of integrals evaluated over square areas, i.e.

∫ over all u = Σ_k ∫ over SQ_k

where SQ_k represents the square

ω_i − π + 2π k_i ≤ u_i < ω_i + π + 2π k_i,   i = 1, 2

Replacing u by u + 2πk, we can remove the dependence of the limits of integration on k,

S_d(ω) = (1/((2π)² |det V|)) Σ_k ∫ over SQ_0 of S_c(V^(−T)(u + 2πk)) Σ_n e^(j (u − ω)·n) du

where e^(j 2πk·n) = 1 for all integer values of k and n. A comparison of the above equation with equ.(14) implies that

S_d(ω) = (1/|det V|) Σ_k S_c(V^(−T)(ω − 2πk))    (16)

or equivalently

S_d(VᵀΩ) = (1/|det V|) Σ_k S_c(Ω − Uk)

where

U = 2π V^(−T),   i.e.   UᵀV = 2πI

Equ (16) provides the desired relation between the FTs of s_c(x) and s_d(n). This answers our first question.

Example: For rectangular sampling we choose,

v1 = (T1, 0)ᵀ and v2 = (0, T2)ᵀ

Then equ (16) reduces to

S_d(ω1, ω2) = (1/(T1 T2)) Σ_(k1) Σ_(k2) S_c((ω1 − 2π k1)/T1, (ω2 − 2π k2)/T2)

relating the two 2-D FTs for rectangular sampling. It is observed that (1) S_d can be interpreted as a periodic extension of S_c, but now the periodicity is described by the general matrix U, which can be thought of as a

set of two periodicity vectors u1 and u2, i.e. U = [u1 u2] with UᵀV = 2πI; and (2) since S_d(ω) is periodic in both ω1 and ω2 with a period of 2π, the relation ω = VᵀΩ implies that the periodically extended spectrum is periodic in Ω with periodicity matrix U, i.e.

S̃_c(Ω + Uk) = S̃_c(Ω)

because Vᵀ(Ω + Uk) = VᵀΩ + 2πk.

Example: Consider a continuous bandlimited signal with S_c(Ω) = 0 for Ω ∉ B, where B is the region of finite extent called the baseband.

Suppose we sample with a sampling matrix V, which corresponds to sample locations as shown in figure (3.2.5).

Then S_c will be extended with periodicity matrix

U = 2π V^(−T)

Thus S_d has the corresponding periodic form in the ω-plane.

We observe that:

  1. By varying the sampling matrix V, we can adjust the periodicity matrix U such that there is no overlap among the periodically repeated versions of S_c; and,
  2. By varying V in this manner we ensure there is no aliasing.

Using the bandlimited condition, i.e. S_c(Ω) = 0 for Ω ∉ B, equ(3) becomes

S_d(ω) = (1/|det V|) S_c(V^(−T) ω)

for values of ω lying in a square centered on the origin with sides of length 2π, shown below.

Consequently,

S_c(Ω) = |det V| S_d(VᵀΩ),   Ω ∈ B

can be recovered. This implies s_c(x) can be recovered from s_d(n); i.e.

s_c(x) = (1/(2π)²) ∫ over B of S_c(Ω) e^(j Ω·x) dΩ

Substituting for S_c we get

s_c(x) = (|det V|/(2π)²) ∫ over B of S_d(VᵀΩ) e^(j Ω·x) dΩ

Substituting for S_d(VᵀΩ) in terms of s_d(n),

we have,

s_c(x) = Σ_n s_d(n) (|det V|/(2π)²) ∫ over B of e^(j Ω·(x − Vn)) dΩ

where the integral is over the baseband B in the frequency plane.

For notational ease we write

s_c(x) = Σ_n s_d(n) g(x − Vn)

where

g(x) = (|det V|/(2π)²) ∫ over B of e^(j Ω·x) dΩ

The interpolating function g(x) allows us to construct the values of s_c at points in between the sample locations given by x = Vn.

  • Comparison of Rectangular and Hexagonal Sampling

For any bandlimited waveform, there is an infinite number of possible choices for the periodicity matrix U and the sampling matrix V.

Among these numerous possibilities, two strategies are common. They are rectangular sampling and hexagonal sampling.

For the rectangular sampling case V is diagonal. Hexagonal sampling refers to sampling matrices of the form

V = [v1 v2],   v1 = (T1, T2)ᵀ,   v2 = (T1, −T2)ᵀ

A set of sample locations for a hexagonal sampling matrix is shown below.

The alternate rows of this raster are identical and the odd-indexed rows are staggered one half sample interval w.r.t. the even-indexed rows. The term "hexagonal" is used since each sample location has six nearest neighbours when T2 = √3 T1.

The corresponding periodicity matrix will be

U = 2π V^(−T)

With det V = −2 T1 T2, this implies

V^(−T) = (1/(−2 T1 T2)) [[−T2, −T2], [−T1, T1]]   (rows listed)

i.e.,

U = [u1 u2],   u1 = (π/T1, π/T2)ᵀ,   u2 = (π/T1, −π/T2)ᵀ

or, U has the same hexagonal form with T1, T2 replaced by π/T1, π/T2,

i.e. the periodicity matrix U has the same form as the sampling matrix V.

We are interested in the hexagonal region with parameters as shown in Figure below.

We have,

and

Let us compare the relative efficiencies of the rectangular and hexagonal schemes when they are used to sample a 2-D continuous signal which is circularly bandlimited,

i.e. S_c(Ω) = 0 for Ω1² + Ω2² > W²

* There are several regions which, when periodically extended according to U, cover the entire Ω-plane with no overlap. They are:

The circular baseband shown above can be inscribed in a square with sides of length 2W, or in a hexagon with sides of length 2W/√3. Consequently this signal can be embedded in a square or a hexagonal baseband. Periodic replication of the square baseband corresponds to sampling the continuous signal on a rectangular raster.

This is shown in Figure below.

In this case the sampling matrix is

V = [[π/W, 0], [0, π/W]]

because the periodicity matrix is U = [[2W, 0], [0, 2W]] (adjacent replicas spaced 2W apart).

This is so because UᵀV = 2πI,

or

V = 2π U^(−T) = (π/W) I

and |det V| = π²/W².

For the hexagonal baseband case: the periodicity matrix is

U = [u1 u2]

where u1 = (W, √3 W)ᵀ and u2 = (W, −√3 W)ᵀ, so that the nearest replicas are again 2W apart.

The sampling matrix is

V = 2π U^(−T),   v1 = (π/W, π/(√3 W))ᵀ,   v2 = (π/W, −π/(√3 W))ᵀ

and

|det V| = 2π²/(√3 W²)

Since the sampling density is proportional to 1/|det V|, by taking the ratio of the hexagonal to the rectangular density

we get (π²/W²)/(2π²/(√3 W²)) = √3/2 ≈ 0.866. This implies that hexagonal sampling requires about 13.4% fewer samples than rectangular sampling to represent the same circularly bandlimited continuous signal. No sampling scheme for circularly bandlimited signals is more efficient than hexagonal sampling.
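The density comparison can be checked numerically; the sketch below builds the two frequency-domain periodicity matrices for a circular baseband of radius W and compares |det U|, which is proportional to the spatial sampling density since UᵀV = 2πI.

```python
import numpy as np

W = 1.0   # radius of the circular baseband (assumed)

# Columns of U are the frequency-domain replication (periodicity) vectors.
U_rect = np.array([[2 * W, 0.0],
                   [0.0, 2 * W]])          # square packing, replicas 2W apart
U_hex = np.array([[W, W],
                  [np.sqrt(3) * W, -np.sqrt(3) * W]])   # hexagonal packing, also 2W apart

# Spatial sampling density is 1/|det V| = |det U|/(2*pi)**2, so the
# density ratio is the ratio of the |det U| values.
ratio = abs(np.linalg.det(U_hex)) / abs(np.linalg.det(U_rect))
print(ratio)   # sqrt(3)/2, about 0.866: ~13.4% fewer samples for hexagonal
```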

Example: Marine seismic surveys can produce data which is periodically sampled in an unusual fashion.

In the figure above, a boat towing a line of sensors moves with a speed of B knots in a direction perpendicular to an ocean current of speed C knots. The sensors are uniformly spaced along the towed line, and each is sampled with temporal sampling period T. How is the underlying spatial process sampled?

Answer: To a first approximation, the sensors remain in a straight line, but that line is directed away from the direction of the boat's movement at an angle θ given by

tan θ = C/B

or

θ = arctan(C/B)

The resulting sampling grid corresponds to a sampling matrix:

Section 3

  • Coordinate Systems Mapping

Rectangular coordinate systems are only a special case for the description of two-dimensional signals. They allow the relationship between the spatial domain and the frequency domain to be expressed by a 2-D FT with separable integration over the different dimensions.

Example: The 2-D FT of a spatially continuous signal s_c(x, y) is defined as

S_c(Ωx, Ωy) = ∫∫ s_c(x, y) e^(−j(Ωx x + Ωy y)) dx dy

Alternatively, provided that the coordinate axes establish an orthogonal system, this equation can be represented as,

where