
Chapter 2

Sampling and Quantization

2.1 Analog and Digital Signals

In order to investigate sampling and quantization, the difference between analog and digital signals must be understood. Analog signals are continuous along both axes. Consider an electrical signal whose horizontal axis represents time in seconds and whose vertical axis represents amplitude in volts. The horizontal axis takes on a range of values from zero to infinity, with every possible value in between; this makes the horizontal axis continuous. The vertical axis is likewise continuous, allowing the signal's amplitude to assume any real value. For every possible value in time there is a corresponding amplitude of the analog signal.

Digital signals on the other hand have discrete values for both the horizontal and vertical axes. The axes are no longer continuous as they were with the analog signal. In this chapter, time will be used as the quantity for the horizontal axis and volts will be used for the vertical axis.

2.2 Introduction to Sampling

The motivation for sampling and quantizing is the need to store a signal in a digital format. In order to convert an analog signal to a digital signal, the analog signal must be sampled and quantized. Sampling takes the analog signal and discretizes the time axis. After sampling, the time axis consists of discrete points in time rather than continuous values in time. The resulting signal after sampling is called a discrete signal, sampled signal, or a discrete-time signal. The resulting signal after sampling is not a digital signal. Even though the horizontal axis has discrete values the vertical axis is not discretized. This means that for any discrete point in time, there are an infinite number of allowed values for the signal to assume in amplitude. In order for the signal to be a digital signal, both axes must be discrete.

2.3 Introduction to Quantization

Since a discrete signal has discrete points in time but still has continuous values in amplitude, the amplitude of the signal must be discretized in order to store it in digital format. The values of the amplitude must be rounded off to discrete values. If the vertical axis is divided into small windows of amplitudes, then every value that lies within that window will be rounded off (or quantized) to the same value. For example, consider a waveform with window sizes of 0.5 volts starting at –4 volts and ending at +4 volts. At a discrete point in time, any amplitude between 4.0 volts and 3.5 volts will be recorded as 3.75 volts. In this example the center of each 0.5-volt window (or quantization region) was chosen to be the quantization voltage for that region. Reasons for choosing the center as the quantization voltage will be discussed in section 2.7. In this example the dynamic range of the signal is 8 volts. Since each quantization region is 0.5 volts there are 16 quantization regions included in the dynamic range.

It is important that there are 16 quantization regions in the dynamic range. Since a binary number will represent the value of the amplitude, it is important that the number of quantization regions is a power of two. In this example, 4 bits will be required to represent each of the 16 possible values in the signal’s amplitude.

2.4 Sampling Continuous-Time Signals

Sampling a continuous-time signal generates a discrete-time signal. This is accomplished by multiplying the continuous-time signal (the analog signal) by a series of unit impulses. The result is the original signal’s information only at points in time where an impulse occurs. The process discards information about the original signal at all other values in time.

2.4.1 Dirac Delta Function

The series of unit impulses used to sample a signal is a series of Dirac delta functions. Consider the Dirac delta function:

$\delta(t)$ = Dirac delta function (2.1)

The following two relations define the continuous-time unit impulse, or Dirac delta function:

$\delta(t) = 0 \quad \text{for } t \neq 0$ (2.2)

$\delta(t) \rightarrow \infty \quad \text{for } t = 0, \qquad \int_{-\infty}^{\infty} \delta(t)\,dt = 1$ (2.3)

The Dirac delta function is a unit pulse in which the duration approaches zero but the area remains unity. This means as the width of the pulse approaches zero, the amplitude of the pulse must approach infinity to maintain a unit area [1]. Let g(t) equal a rectangular pulse of width T that is an even function of time:

$g(t) = \begin{cases} \dfrac{1}{T}, & -\dfrac{T}{2} \leq t \leq \dfrac{T}{2} \\ 0, & \text{otherwise} \end{cases}$ (2.4)

then

$\delta(t) = \lim_{T \to 0} g(t)$ (2.5)

The discrete-time version of the unit impulse function is defined by [1]

$\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$ (2.6)

The Dirac delta function is the derivative of the unit step function with respect to time, and therefore the unit step function is the integral of the Dirac delta function with respect to time.
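The discrete-time analogue of this relationship can be checked numerically: first-differencing a unit step yields a unit impulse, and the running sum of that impulse recovers the step. A minimal sketch using NumPy:

```python
import numpy as np

# Discrete-time analogue of the derivative/integral relationship:
# differencing the unit step u[n] gives the unit impulse,
# and cumulatively summing the impulse recovers the step.
n = np.arange(-5, 6)
step = (n >= 0).astype(float)   # unit step u[n]
impulse = np.diff(step)         # a single 1 where the step turns on
recovered = np.cumsum(impulse)  # matches the step (offset by one sample)

print(impulse.sum())            # 1.0 -- unit area, like the Dirac delta
```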

2.4.2 Time-Domain Impulse Sampling

Consider a train of equally spaced unit impulses. This is called the Dirac comb and is defined as follows:

$\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s)$ (2.7)

where Ts is the sampling interval or sampling period.

Impulse sampling is performed by multiplying the continuous-time signal x(t) by the impulse train. The discrete-time signal can be represented mathematically by the following equation [8]:

$x_s(t) = x(t)\,\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s)$ (2.8)

where

x(t) = continuous-time signal

xs(t) = discrete-time signal

fs = sampling frequency or sample rate = $1/T_s$

Equation (2.8) shows that the discrete-time signal is the original signal multiplied by a train of unit impulses equally spaced by the sampling interval [8].
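Numerically, the sampled signal simply keeps the original signal's values at the impulse locations $t = nT_s$. The sketch below samples a sinusoid; the signal frequency and sample rate are example values, not from the text.

```python
import numpy as np

# Sampling x(t) = sin(2*pi*f0*t) at fs = 1/Ts: the discrete-time signal
# x_s[n] = x(n*Ts) keeps x(t) only at the impulse locations t = n*Ts.
f0 = 5.0                    # example signal frequency, Hz
fs = 100.0                  # example sampling frequency, Hz
Ts = 1.0 / fs               # sampling interval
n = np.arange(20)           # sample indices
xs = np.sin(2 * np.pi * f0 * n * Ts)

print(xs.shape)             # 20 samples, one per impulse
```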

To examine xs(t) in the frequency domain, the Fourier transform is performed on xs(t):

$X_s(f) = X(f) * f_s \sum_{k=-\infty}^{\infty} \delta(f - kf_s) = f_s \sum_{k=-\infty}^{\infty} X(f - kf_s)$ (2.9)

where “ * ” denotes convolution [3].

Multiplication in the time domain corresponds to convolution in the frequency domain. As a result, the spectral content of the sampled signal has the same spectral content as the original signal plus copies of the original signal’s spectral content centered at integral multiples of the sampling frequency. This means that in the frequency domain xs(t) looks like a periodic representation of the original signal’s frequency spectrum. These copies of the original signal’s spectral content are called aliases. If the sampling rate is high enough, the aliases can be filtered out by low-pass or bandpass filters with cut-off frequencies outside the spectral content of the original signal. This filtering of the aliases will recover the original signal [3].

2.5 Nyquist Rate

As discussed in section (2.4), the aliases caused by sampling can be removed if the sampling rate is high enough. Let the highest frequency component in the original signal’s magnitude spectrum be called fmax and the lowest frequency component be called fmin. The original signal’s frequency content extends from fmin to fmax. After sampling, the sampled signal’s frequency content has aliases (copies of the original signal’s frequency content) centered at

$k f_s$ (2.10)

where k = 0, 1, 2, 3, etc.

The lowest frequency component in the alias is given by the expression:

$k f_s - f_{max}$ (2.11)

and the highest frequency component in the alias is given by the expression:

$k f_s + f_{max}$ (2.12)

As the sampling frequency is lowered, the first alias (k = 1) is the alias that will merge into the original signal’s spectral content first. In this case

$f_s - f_{max}$ (2.13)

and

$f_s + f_{max}$ (2.14)

This means that the original signal can be recovered without distortion as long as the following relationship holds:

$f_s - f_{max} \geq f_{max}$, or equivalently $f_s \geq 2 f_{max}$ (2.15)

or

Nyquist rate = $2 f_{max}$ (2.16)

Therefore, the sampling rate must be at least twice as high as the highest frequency component in the original signal’s magnitude spectrum. This rate, however, is merely a lower limit for the ideal case. In practical systems it is important to provide guardbands between the original signal’s spectral content and the aliases [3]. Increasing the sampling rate increases the distance between the original signal’s spectral content and the aliases. In this case the Nyquist rate is exceeded in order to provide guardbands between aliases and to take into account the non-ideal frequency response of filters. The practice of exceeding the Nyquist rate is referred to as oversampling [8].
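The consequence of violating relation (2.15) can be seen numerically: a sinusoid sampled below its Nyquist rate produces exactly the same samples as a lower-frequency alias. The frequencies below are example values chosen for illustration.

```python
import numpy as np

# A 70 Hz sinusoid sampled at fs = 100 Hz (below its Nyquist rate of 140 Hz)
# yields exactly the same samples as its 30 Hz alias (fs - f0 = 30 Hz).
fs = 100.0
t = np.arange(50) / fs                 # 50 sample instants
x_under = np.cos(2 * np.pi * 70 * t)   # undersampled 70 Hz signal
x_alias = np.cos(2 * np.pi * 30 * t)   # its 30 Hz alias
print(np.allclose(x_under, x_alias))   # True: the two are indistinguishable
```

Once the samples coincide, no filter can separate the 70 Hz component from the 30 Hz alias, which is why the sampling rate must exceed the Nyquist rate before sampling takes place.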

2.6 Quantization

As discussed in section (2.3), the quantization process takes a signal that is discrete in time and continuous in amplitude and converts it into a signal that is discrete in both time and amplitude. In order for the signal to be stored digitally, the rounded-off values of the signal’s amplitude at discrete points in time must be converted into binary numbers.

In this chapter uniform quantization will be discussed in which each quantization region has the same quantization width. As discussed in section (2.3), the number of quantization regions determines the number of bits required to represent each discrete value in amplitude. The number of bits per sample (n), or quantization bits, required in uniform quantization is calculated as follows:

$n = \log_2(\text{number of quantization regions})$ (2.17)

Consider a digitizer set to sample at 20,000 samples/sec with 4 bits/sample for quantization. Since the signal is sampled at a certain rate with a certain number of quantization bits, the digitizer outputs data at a corresponding rate. The encoding rate is the number of bits per second required to digitally represent the signal. Quantitatively, the encoding rate is the product of the sampling frequency and the number of quantization bits [3].

Encoding Rate = (fs)*(n) (2.18)

In this example the encoding rate is equal to 80,000 bits per second. Increasing the sampling frequency pushes the aliases away from the original signal’s spectral content and allows analysis of higher frequency signals, but it also increases the amount of required memory to store the signal digitally. The same tradeoff applies to increases in the number of quantization bits: a more accurate representation of the signal can be achieved, but the amount of required memory to store the signal also increases.
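The digitizer example above reduces to a single multiplication, shown here for concreteness:

```python
# Encoding rate from equation (2.18): bits per second = fs * n,
# using the text's example of 20,000 samples/sec and 4 bits/sample.
fs = 20_000          # sampling frequency, samples per second
n_bits = 4           # quantization bits per sample
encoding_rate = fs * n_bits
print(encoding_rate) # 80000 bits per second
```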

The dynamic range of the signal is the range of the signal’s amplitude [3]. In the example in section (2.3) the dynamic range was 8 volts because the signal’s amplitude ranged from 4 volts to –4 volts.

Dynamic Range = $(\text{Signal Amplitude}_{max}) - (\text{Signal Amplitude}_{min})$ (2.19)

The width of the quantization region (for uniform quantization) can be defined by the following relationship:

$\text{Quantization Region Width} = \dfrac{\text{Dynamic Range}}{2^n}$ (2.20)

where n = number of bits per sample [3].

The quantization region width is inversely proportional to the resolution of the quantization. It should also be noted that while a decrease in the quantization region width increases resolution, it also increases the number of bits required to digitally represent the signal.
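Equation (2.20) makes the resolution tradeoff concrete: each additional bit halves the region width. A short sketch, using the 8-volt dynamic range from the section 2.3 example:

```python
# Quantization region width from equation (2.20): width = dynamic_range / 2**n.
dynamic_range = 8.0          # volts, from the section 2.3 example (-4 V to +4 V)
for n in (4, 8, 12):
    width = dynamic_range / 2**n
    print(n, width)          # width shrinks (resolution grows) as n increases
```

With n = 4 this reproduces the 0.5-volt windows of the earlier example.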

Since quantization involves rounding-off to the center of the quantization region, the process can cause error to be present between the actual amplitude values and recorded amplitude values.

2.7 Uniform Quantization Error

The reason for choosing the center of the quantization regions to be designated as the rounded-off value is seen by evaluating the quantization error (also called quantization noise) associated with the rounding-off process. Quantization error is the difference in the signal’s actual amplitude and the amplitude assumed by the process of quantization. If the top of the quantization region is chosen as the round-off value, then the minimum quantization error is zero and the maximum quantization error is equal to the quantization region width. The same error is associated with choosing the bottom of the quantization region as the round-off value. If the center of the quantization region is chosen to be the round-off value, then the minimum error is still zero but the maximum error is ± one-half the quantization region width [3].

Error in choosing the center of the quantization region is defined by:

$\text{Maximum Quantization Error} = \pm \tfrac{1}{2}(\text{Quantization Region Width})$ (2.21)

$\text{Quantization Region Width} = \dfrac{\text{Dynamic Range}}{2^n}$ (2.22)

$\text{Maximum Quantization Error} = \pm \dfrac{\text{Dynamic Range}}{2^{n+1}}$ (2.23)

From equation (2.23) it is evident that the accuracy is increased by increasing n (the number of quantization bits).
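This relationship can be tabulated directly; the dynamic range below is the 8-volt example from earlier in the chapter:

```python
# Maximum quantization error from equation (2.23): dynamic_range / 2**(n + 1).
dynamic_range = 8.0          # volts, from the earlier example
for n in (4, 8, 12):
    max_error = dynamic_range / 2**(n + 1)
    print(n, max_error)      # each extra bit halves the maximum error
```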

Quantization error is equally likely to be any value between the minimum and maximum quantization error, so the error is probabilistic: it can be represented as a uniformly distributed random variable, having equal probability of lying at any point in the range of possible quantization error values. The probability density function of this uniformly distributed random variable is defined as follows [3]:

$f_X(x) = \begin{cases} \dfrac{1}{q}, & -\dfrac{q}{2} \leq x \leq \dfrac{q}{2} \\ 0, & \text{otherwise} \end{cases}$ (2.24)

where $q$ is the quantization region width.

Since the quantization error is a uniformly distributed random variable centered at zero, the average value, or mean, of the quantization error is zero.

mx = mean = 0 (2.25)

The variance of the quantization error is the amount that the error varies about the mean. The higher the variance is, the less predictable the random variable is. Variance is given by the following relationship [6]:

$\sigma_x^2 = \int_{-\infty}^{\infty} (x - m_x)^2 f_X(x)\,dx$ (2.26)

For random variables with a mean of zero, the variance is equal to the average normalized power. This is demonstrated as follows [3]:

$\sigma_x^2 = \int_{-\infty}^{\infty} (x - 0)^2 f_X(x)\,dx = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx$ (2.27)

The variance in the above expression is equal to the average value of X2, which is the average normalized power. For uniform quantization, the mean (mx) of the quantization error is zero [3]. Also taking into account the fact that the probability density function is zero outside the range of possible quantization error values, the variance of the quantization error is equal to

$\sigma_x^2 = \int_{-q/2}^{q/2} x^2 \cdot \dfrac{1}{q}\,dx = \dfrac{q^2}{12}$ (2.28)

where $q$ is the quantization region width.
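The $q^2/12$ result can be verified empirically with a Monte Carlo sketch: drawing many error values uniformly from one quantization region and measuring their sample variance. The region width and seed are arbitrary choices for the demonstration.

```python
import numpy as np

# Monte Carlo check: quantization error uniform on [-q/2, q/2] has
# mean 0 and variance (average normalized power) q**2 / 12.
rng = np.random.default_rng(0)
q = 0.5                                    # example region width, volts
err = rng.uniform(-q / 2, q / 2, size=1_000_000)
print(err.mean())    # close to 0
print(err.var())     # close to q**2 / 12 = 0.0208...
```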

2.8 Summary

In order to convert an analog signal to a digital format, the signal must be sampled and quantized. Although quantization introduces error, this error can be reduced by increasing the number of quantization bits. Once a signal has been sampled and quantized, it can be stored in a digital format and manipulated and analyzed by digital signal processing (DSP) algorithms. Chapter 3 discusses applying the discrete-time Fourier series (DTFS) to digital signals in order to gain insight into the frequency content of the original signal.