EE5356 Digital Image Processing

Final Exam

5/13/2010 Thursday

11:00 am – 1:30 pm

ATTENTION: Read and follow these instructions or risk a penalty:

1) Closed books and closed notes.

2) Please print your name and the last four digits of your ID.

3) For multiple choice, write ONE answer in the space provided next to the question.

4) For Part (B), show ALL steps NEATLY and give the range of the variables used wherever appropriate, e.g., 0 < k < N or k = 1, 2, 3, …, N-1.

5) Arrange answers in the correct order.

6) No copying. The exam of any student caught copying will be terminated immediately!

STUDENT NAME: ______

STUDENT ID: ______

PART A: Multiple choices (3 points for each question)

1) In an image compression system, 6686 bits are used to represent a 128×128 image with 256 gray levels. What is the compression ratio of this system?

(A) 19.6

(B) 2.45

(C) 0.41

(D) 0.05
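As a quick check of the arithmetic behind question 1, here is a minimal sketch (not part of the exam), assuming the uncompressed image uses 8 bits per pixel, since 256 gray levels = 2^8:

```python
# Compression ratio = (uncompressed bits) / (compressed bits).
# 256 gray levels require log2(256) = 8 bits per pixel.
uncompressed_bits = 128 * 128 * 8   # 131072 bits for the raw image
compressed_bits = 6686
ratio = uncompressed_bits / compressed_bits
print(round(ratio, 1))
```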

2) Which one of the following is a lossy coding?

(A) Run Length Coding

(B) Uniform Quantizer

(C) Huffman Coding

(D) Predictive Coding without quantizer

3) Which one of the following cannot be adopted as a data compression system?

(A) Transform Coding, followed by DPCM Coding

(B) Transform Coding, followed by Uniform Quantizer, followed by Huffman Coding

(C) DPCM Coding, followed by Huffman Coding

(D) Huffman Coding, followed by Transform Coding.

4) What does the definition of entropy tell us?

(A) The lower bound to encode a source with distortion.

(B) The upper bound to encode a source without distortion.

(C) The average # of bits to encode a source without distortion.

(D) The average # of bits to encode a source given a certain distortion.

5) In Huffman Coding, the size of the code book is L1, while the longest code word can have as many as L2 bits. What is the relationship between L1 and L2?

(A) L1<L2

(B) L1>L2

(C) L1=L2

(D) There is no fixed relationship between them.

6) Comparing Geometrical Zonal Coding with Threshold Coding, for the same number of transmitted samples, which one of the following is not correct?

(A) Threshold Coding has more distortion.

(B) The Threshold Coding mask gives a better choice of transmission samples.

(C) Threshold Coding requires a higher rate.

(D) In Threshold Coding, the addresses of the transmitted samples have to be coded for every image block.

7) Which comment is correct regarding the inverse filter and the Wiener filter? ("Cannot give a reconstruction" means the filter does not yield a useful reconstructed image.)

(A) When the spectrum ratio N/H is small, the inverse filter cannot give a reconstruction while the Wiener filter can.

(B) When the spectrum ratio N/H is small, the Wiener filter cannot give a reconstruction while the inverse filter can.

(C) When the spectrum ratio N/H is large, neither the inverse filter nor the Wiener filter can give a reconstruction.

(D) When the spectrum ratio N/H is large, the Wiener filter cannot give a reconstruction while the inverse filter can.

8) Which one of the following comments on the Geometric Mean Filter (GMF) is not correct?

G_s = (IF)^s (WF)^(1-s)

IF = inverse filter, WF = Wiener filter, GMF = geometric mean filter

(A) For s = 1, the GMF acts as an inverse filter (IF).

(B) For s = 0, the GMF acts as a Wiener filter (WF).

(C) For s > 1/2, the GMF tends more towards the Wiener filter.

(D) For s < 1/2, the GMF tends more towards the Wiener filter.

9) In a DPCM codec, which of the following needs to be quantized?

(A) The reconstruction value.

(B) The difference between prediction value and the original value.

(C) The prediction value.

(D) The transform coefficient.

10) In Run Length Coding, suppose the runs are coded in maximum lengths of M; the run lengths then follow a geometric distribution, with the probability of a '0' being p and the probability of a '1' being 1 − p.

Since a run length of l implies a sequence of (l − 1) '0's followed by a '1', that is, l symbols, the average number of symbols per run will be

(A)

(B)

(C)

(D) Cannot calculate.

PART B:

(1) [10 points] Given

Symbol / Probability
S0 / 0.3
S1 / 0.2
S2 / 0.2
S3 / 0.2
S4 / 0.1

(i) Calculate the entropy.

(ii) Draw the Huffman tree.

(iii) Find the Huffman code and compute its average code length and the redundancy.
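As a way to check the hand computation for (i)–(iii), here is a minimal sketch (not part of the exam) that computes the entropy and the average Huffman code length for the table above. The average length of any Huffman code for a source is the same, even though the code words themselves depend on how ties are broken:

```python
import heapq
import math

probs = {"S0": 0.3, "S1": 0.2, "S2": 0.2, "S3": 0.2, "S4": 0.1}

# Entropy H = -sum p*log2(p): the lower bound on average code length.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Huffman: repeatedly merge the two least probable nodes.
# Heap entries are (probability, tiebreaker, {symbol: code_length}).
heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    p1, _, d1 = heapq.heappop(heap)
    p2, _, d2 = heapq.heappop(heap)
    # Every symbol in a merged node moves one level deeper in the tree.
    merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1
lengths = heap[0][2]

avg_len = sum(probs[s] * lengths[s] for s in probs)
redundancy = avg_len - entropy
print(f"H = {entropy:.3f} bits, avg length = {avg_len:.2f}, "
      f"redundancy = {redundancy:.3f}")
```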

(2) [10 points] List the advantages and disadvantages of the inverse filter, pseudo-inverse filter, Wiener filter, geometric mean filter, and constrained least squares filter.

(3) [20 points] The sequence 150, 152, 170, 170, 168, 164 is to be predictively coded using the previous-element prediction rule, both with DPCM and with the feedforward predictive coder. Assume the 2-bit quantizer shown in Fig. 3 is used, except that the first sample is quantized separately by a 7-bit uniform quantizer, giving ___. Please fill in the table (next page), showing that the reconstruction error builds up with the feedforward predictive coder, whereas it tends to stabilize with the feedback loop of DPCM.


Fig.1: DPCM codec


Fig.2: Feedforward codec

Fig.3: Quantizer

DPCM / Feedforward Predictive Coder
n / / / / / / / / / / /
0 / 150 / --- / --- / --- / --- / --- / ---
1 / 160
2 / 162
3 / 175
4 / 163
5 / 170
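To see the effect the table is meant to demonstrate, here is a minimal simulation sketch (not part of the exam). The 2-bit quantizer of Fig. 3 is not reproduced above, so the four levels {−12, −4, +4, +12} are an assumption, and the first sample is assumed to be reconstructed exactly:

```python
# Sketch contrasting DPCM with feedforward predictive coding, using
# previous-element prediction. Quantizer levels are hypothetical.

def quantize(e, levels=(-12, -4, 4, 12)):
    """Map the prediction error to the nearest quantizer level."""
    return min(levels, key=lambda q: abs(q - e))

u = [150, 152, 170, 170, 168, 164]

# DPCM: the encoder predicts from the *reconstructed* previous sample,
# so quantization errors do not accumulate.
dpcm = [u[0]]
for n in range(1, len(u)):
    e = u[n] - dpcm[-1]          # prediction error
    dpcm.append(dpcm[-1] + quantize(e))

# Feedforward: the encoder predicts from the *original* previous sample,
# but the decoder only has reconstructed samples, so errors build up.
ff = [u[0]]
for n in range(1, len(u)):
    e = u[n] - u[n - 1]          # encoder-side prediction error
    ff.append(ff[-1] + quantize(e))

dpcm_err = [abs(a - b) for a, b in zip(u, dpcm)]
ff_err = [abs(a - b) for a, b in zip(u, ff)]
print("DPCM errors:       ", dpcm_err)   # stays bounded
print("Feedforward errors:", ff_err)     # grows along the sequence
```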

(4) [20 points] Show that the Wiener filter does not restore the power spectral density (PSD) of the object, whereas the geometric mean filter does, when s = 1/2 (see problem 8).

(Derive that for the Wiener filter

S_ûû = S_uu · |H|² S_uu / (|H|² S_uu + S_nn) ≠ S_uu,

and for the GMF with s = 1/2, S_ûû = S_uu.)

Hint: S_ûû = |G|² S_vv for any filter G, where u is the object sequence, v is the observed (degraded) sequence, and û is the restored sequence.

Note that PSD is the Fourier transform of the autocorrelation function.
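The derivation can also be checked numerically. The sketch below (not part of the exam) uses the degradation model v = Hu + n, so S_vv = |H|²S_uu + S_nn, with arbitrary positive PSDs and a random transfer function H as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S_uu = rng.uniform(1.0, 10.0, 64)    # object PSD (arbitrary positive values)
S_nn = rng.uniform(0.1, 1.0, 64)     # noise PSD
H = rng.normal(size=64) + 1j * rng.normal(size=64)  # blur transfer function

S_vv = np.abs(H) ** 2 * S_uu + S_nn  # PSD of the observed sequence

# Wiener filter: G_w = H* S_uu / (|H|^2 S_uu + S_nn)
G_w = np.conj(H) * S_uu / (np.abs(H) ** 2 * S_uu + S_nn)
S_wiener = np.abs(G_w) ** 2 * S_vv   # PSD after Wiener restoration

# Geometric mean filter with s = 1/2: |G|^2 = |1/H| * |G_w|
G2_gmf = np.abs(1 / H) * np.abs(G_w)
S_gmf = G2_gmf * S_vv                # PSD after GMF restoration

print(np.allclose(S_gmf, S_uu))      # GMF restores the object PSD
print(np.allclose(S_wiener, S_uu))   # the Wiener filter does not
```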

(5) [10 points] Write a short note defining SSIM and UIQI.
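For reference when checking answers to (5): UIQI (Universal Image Quality Index) is the special case of SSIM with the stabilizing constants set to C1 = C2 = 0. A minimal single-window sketch (standard SSIM averages this quantity over local windows, which this sketch omits):

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM between two images.
    With C1 = C2 = 0 (i.e., K1 = K2 = 0) this reduces to UIQI."""
    x = x.astype(float)
    y = y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return (((2 * mx * my + C1) * (2 * cov + C2))
            / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

x = np.arange(16, dtype=float).reshape(4, 4)
print(ssim_global(x, x))       # identical images give SSIM = 1
print(ssim_global(x, x + 10))  # a mean shift lowers the index below 1
```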