Lecture 6, Application 1 Addendum 2/8/2008
Learning Objectives:
· Review Huffman coding on a more realistic image
· Become familiar with Matlab syntax for basic operations
· Briefly touch on Huffman decoding
· Introduce useful graphical tools and demos in Matlab
More realistic Huffman/Variable Length Coding example:
Input the image:
> I = [0 1 2 3 4 5 6 7;1 2 3 4 5 6 7 8;2 3 4 15 6 7 8 9;3 4 5 6 7 8 9 10;4 5 6 7 8 9 10 11; 0 1 2 3 4 5 6 7;1 2 3 4 5 6 7 8;7 8 9 10 11 12 13 14]
Matrix Display:
I =
0 1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
2 3 4 15 6 7 8 9
3 4 5 6 7 8 9 10
4 5 6 7 8 9 10 11
0 1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
7 8 9 10 11 12 13 14
Count the number of occurrences – rows correspond to gray level, columns to the image columns:
> h = hist(I, 0:15);
> h
h =
2 0 0 0 0 0 0 0
2 2 0 0 0 0 0 0
1 2 2 0 0 0 0 0
1 1 2 2 0 0 0 0
1 1 1 2 2 0 0 0
0 1 1 0 2 2 0 0
0 0 1 1 1 2 2 0
1 0 0 1 1 1 2 2
0 1 0 0 1 1 1 2
0 0 1 0 0 1 1 1
0 0 0 1 0 0 1 1
0 0 0 0 1 0 0 1
0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0
Calculate the probability: length(I(:)) is the image size in pixels. sum(h') sums the counts in each row of h, giving the total count at each gray level. The prime performs the transpose, converting rows to columns – Matlab operations conventionally operate on matrix columns first. Without the transpose, the sum would give the counts within the image columns rather than the counts at each gray level.
> p = sum(h')./length(I(:));
> p'
ans =
0.0313
0.0625
0.0781
0.0938
0.1094
0.0938
0.1094
0.1250
0.0938
0.0625
0.0469
0.0313
0.0156
0.0156
0.0156
0.0156
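Equivalently (a minimal alternative that avoids the transpose), sum(h,2) sums h along its rows directly:
> p = sum(h, 2).'/numel(I);   % same 1-by-16 probability vector; numel(I) = length(I(:))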
> E = entropy(I)
E =
3.7157
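For reference, the first-order entropy can be computed directly from p as H = -sum(p.*log2(p)). A minimal sketch (an entropy function on the path may bin or scale the data differently, so its value may not match this direct estimate exactly):
> pe = p(p > 0);               % drop any zero-probability levels (none here)
> E1 = -sum(pe .* log2(pe))    % first-order entropy estimate, bits/pixel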
Calculate the maximum achievable lossless compression ratio. The image has 16 gray levels, so the uncompressed representation needs 4 bits/pixel, and the entropy is the lower bound on the average code length:
> Cmax = 4/3.7157
Cmax =
1.0765
For the code-length assignment below, sort p into descending order. First, p in its original gray-level order:
> p'
ans =
0.0313
0.0625
0.0781
0.0938
0.1094
0.0938
0.1094
0.1250
0.0938
0.0625
0.0469
0.0313
0.0156
0.0156
0.0156
0.0156
> p = sort(p,'descend');
> p'
ans =
0.1250
0.1094
0.1094
0.0938
0.0938
0.0938
0.0781
0.0625
0.0625
0.0469
0.0313
0.0313
0.0156
0.0156
0.0156
0.0156
Average code length B for the assigned variable-length code: 3 bits for the 4 most probable levels, 4 bits for the next 6, 5 bits for the next 2, and 6 bits for the 4 least probable:
> B = (3.*sum(p(1:4))+4.*sum(p(5:10))+5.*sum(p(11:12))+6.*sum(p(13:16)))
B = 3.7500
> Ca = 4/B
Ca = 1.0667
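As a cross-check, a sketch assuming the Communications Toolbox is available: huffmandict builds an actual Huffman code from the unsorted probabilities and returns its average code length, which should match B above.
> p0 = sum(h, 2).'/numel(I);               % probabilities in gray-level order 0..15
> [dict, avglen] = huffmandict(0:15, p0);  % construct a Huffman code
> enc = huffmanenco(I(:), dict);           % encode the pixel stream
> dec = huffmandeco(enc, dict);            % decode it back
> isequal(reshape(dec, size(I)), I)        % 1 – the round trip is lossless
> Ca2 = 4/avglen                           % achieved compression ratio (cf. Ca above)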
Decoding:
Huffman: [01111110011101011010011111100100110011101011011…]
Image: [0 1 2 3 …
I =
0 1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
2 3 4 15 6 7 8 9
3 4 5 6 7 8 9 10
4 5 6 7 8 9 10 11
0 1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
7 8 9 10 11 12 13 14
Observations:
1. There will be a unique Huffman code for every possible level, even if that level has p(l) = 0.
2. Therefore, if we know the code and the maximum or minimum value in the image, we have enough information to decode (a sketch follows).
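A sketch of the prefix-decoding idea, with hypothetical names not taken from the lecture code: bits is the received bit string, codes is a cell array of codeword strings for gray levels minval, minval+1, ..., and sz is the image size.
buf = ''; out = [];
for b = bits                           % walk the bit stream one bit at a time
    buf = [buf b];                     % accumulate bits into a candidate codeword
    k = find(strcmp(buf, codes), 1);   % prefix property: at most one codeword matches
    if ~isempty(k)
        out(end+1) = minval + k - 1;   % emit the decoded gray level
        buf = '';                      % start the next codeword
    end
end
Idec = reshape(out, sz);               % back to the original image shape (column order assumed)
For comparison, the help header of the DIPUM decoder huff2mat is reproduced below: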
function x = huff2mat(y)
%HUFF2MAT Decodes a Huffman encoded matrix.
% X = HUFF2MAT(Y) decodes a Huffman encoded structure Y with uint16
% fields:
% Y.min Minimum value of X plus 32768
% Y.size Size of X
% Y.hist Histogram of X
% Y.code Huffman code
%
% The output X is of class double.
%
% See also MAT2HUFF.
% Copyright 2002-2004 R. C. Gonzalez, R. E. Woods, & S. L. Eddins
% Digital Image Processing Using MATLAB, Prentice-Hall, 2004
% $Revision: 1.5 $ $Date: 2003/11/21 13:17:50 $
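A usage sketch, assuming the DIPUM functions mat2huff and huff2mat (Gonzalez, Woods & Eddins) are on the path:
> y = mat2huff(I);    % encode: structure with fields min, size, hist, code
> x = huff2mat(y);    % decode using only the information stored in y
> isequal(x, I)       % 1 – the decoded image matches the original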
Window and Level, Basic Viewing: imtool(I)
(Figure panels a–c: imtool display of I with window/level adjustment.)
Block DCT Demonstrator: dctdemo
(Figure panels a–d: dctdemo display; see captions below.)
Figures a–c: Graphical display of the multi-resolution property of discrete-transform image compression – the resolution of the image improves systematically with each additional coefficient retained. This allows intermediate display during transmission, with the resolution improving as more of the data arrives.
Figure d: The second concept is the “block” artifact that is apparent at large compression ratios. It is due to the block-mapping step – the 8 × 8 pixel blocks on which the DCT is performed. This artifact is typical of high-compression-ratio JPEG images.
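The blocking effect can be reproduced outside the demo. A minimal sketch, assuming the Image Processing Toolbox and its cameraman.tif test image (newer Matlab releases replace blkproc with blockproc, which has a slightly different interface): keep only the upper-left 2 × 2 DCT coefficients of each 8 × 8 block and reconstruct.
> J = im2double(imread('cameraman.tif'));
> mask = zeros(8); mask(1:2,1:2) = 1;            % retain 4 of the 64 coefficients per block
> C = blkproc(J, [8 8], @(x) mask .* dct2(x));   % forward block DCT plus truncation
> K = blkproc(C, [8 8], @(x) idct2(x));          % inverse block DCT
> figure, imshow(J), figure, imshow(K)           % compare original and reconstruction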