University of Manchester
CS3291: Digital Signal Processing 2003
Solutions to selected problems
Section 3
3.1. We can find two simple signals for which the rule for linearity does not apply.
Let x1[n]=1 for all n and x2[n]=-1 for all n.
Response to {x1[n]} is {y1[n]} with y1[n]=1 for all n.
Response to {x2[n]} is {y2[n]} with y2[n] =1 for all n.
As x1[n]+x2[n] = 0 for all n, the response to {x1[n]+x2[n]} will be {0}, i.e. zero for all n.
But this is not {y1[n]+y2[n]} which would be 2 for all n.
3.2. { …,0,… 1, 1, 1, 1, -4, 0, …, 0, … }
3.3. Signal flow graph (i) is non-recursive like fig 3.3 whereas (ii) is recursive like fig 3.4.
3.4. (i) { …,0,…, 1, 1, 0, …, 0, …} stable & causal. It is a finite impulse response.
(ii) { …, 0, …, 1, -1, 1, -1, 1, -1, …, 1, -1, … } causal but unstable. It is an infinite impulse response. Note that not all filters with infinite impulse responses are unstable, but this one is.
3.5. { …, 0, …, 0, 1, 0, 0, …, 0, …}
3.6. 800 Hz.
3.7. H(e^jΩ) = 1 - e^(-jΩ) = e^(-jΩ/2)(e^(jΩ/2) - e^(-jΩ/2)) = 2j sin(Ω/2) e^(-jΩ/2) = 2 sin(Ω/2) e^(-j(Ω/2 - π/2))
G(Ω) = |2 sin(Ω/2)|
When Ω > 0, φ(Ω) = -(Ω/2 - π/2); when Ω < 0, sin(Ω/2) is negative, so an extra π must be added, giving φ(Ω) = π - (Ω/2 - π/2), which is the same angle (modulo 2π) as -(Ω/2 + π/2).
When Ω = 0 the phase is arbitrary as G(Ω) = 0; call it zero.
This is not linear phase.
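As an optional numerical check (not part of the original solution), the short C program below evaluates H(e^jΩ) = 1 - e^(-jΩ) directly on an arbitrary grid of frequencies and prints its magnitude alongside the derived gain 2|sin(Ω/2)|, together with the computed phase.

#include <stdio.h>
#include <math.h>
#include <complex.h>
int main(void)
{
    const double PI = 3.14159265358979;
    /* compare |1 - e^(-jW)| and arg(1 - e^(-jW)) with the gain and phase derived above */
    for (double W = -PI + PI/8; W < PI; W += PI/8) {
        double complex H = 1.0 - cexp(-I*W);
        printf("W=%7.4f  |H|=%7.4f  2|sin(W/2)|=%7.4f  arg(H)=%7.4f\n",
               W, cabs(H), 2.0*fabs(sin(W/2)), carg(H));
    }
    return 0;
}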
3.8. { …, 0, …, 0, 4, -8, 16, -32, … } Unstable.
3.9. We know that {e^(jΩn)} produces {H(e^jΩ) e^(jΩn)} and therefore {e^(-jΩn)} produces {H(e^(-jΩ)) e^(-jΩn)}.
We also know that H(e^(-jΩ)) is the complex conjugate of H(e^jΩ).
So since H(e^jΩ) = G(Ω)e^(jφ(Ω)), it follows that H(e^(-jΩ)) = G(Ω)e^(-jφ(Ω)).
Now {cos(Ωn)} = { 0.5(e^(jΩn) + e^(-jΩn)) } = 0.5{e^(jΩn)} + 0.5{e^(-jΩn)}.
Therefore the response to {cos(Ωn)} is 0.5{G(Ω)e^(jφ(Ω))e^(jΩn)} + 0.5{G(Ω)e^(-jφ(Ω))e^(-jΩn)}
= 0.5 G(Ω){ e^(j(φ(Ω)+Ωn)) + e^(-j(φ(Ω)+Ωn)) } = {G(Ω) cos(Ωn + φ(Ω))}.
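Again as an optional check, the sketch below passes x[n] = cos(Ωn) through the filter of 3.7 (y[n] = x[n] - x[n-1], used here purely as a convenient test case) and compares the output with G(Ω)cos(Ωn + φ(Ω)), using the gain and phase found in 3.7; the test frequency π/3 is an arbitrary choice.

#include <stdio.h>
#include <math.h>
int main(void)
{
    const double PI = 3.14159265358979;
    double W = PI/3;                    /* arbitrary test frequency, 0 < W < pi */
    double G = 2.0*fabs(sin(W/2));      /* gain of the 3.7 filter */
    double phi = -(W/2 - PI/2);         /* phase of the 3.7 filter for W > 0 */
    double xprev = cos(-W);             /* x[-1], so the output is in steady state from n = 0 */
    for (int n = 0; n < 10; n++) {
        double x = cos(W*n);
        double y = x - xprev;           /* y[n] = x[n] - x[n-1] */
        xprev = x;
        printf("n=%2d  y[n]=%9.6f  G cos(Wn+phi)=%9.6f\n", n, y, G*cos(W*n + phi));
    }
    return 0;
}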
3.10. H(e^jΩ) = 1 + 2e^(-jΩ) + 3e^(-2jΩ) + 2e^(-3jΩ) + e^(-4jΩ) = e^(-2jΩ)(e^(2jΩ) + 2e^(jΩ) + 3 + 2e^(-jΩ) + e^(-2jΩ))
= e^(-2jΩ)(3 + 4cos(Ω) + 2cos(2Ω))
It may be shown by various means that 3 + 4cos(Ω) + 2cos(2Ω) ≥ 0 for all Ω.
(For example, show that 3 + 4cos(Ω) + 2cos(2Ω) = 1 + 4cos(Ω) + 4cos²(Ω) = (1 + 2cos(Ω))².)
Therefore the phase response φ(Ω) is -2Ω for all Ω.
This is linear phase with a phase delay of 2 samples.
Note that the impulse response { …, 0, …, 0, 1, 2, 3, 2, 1, 0, …, 0, … } is symmetric about n=2.
A symmetric impulse response gives a linear phase response.
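The linear-phase claim can also be checked numerically. The sketch below (not part of the original solution) evaluates the DTFT of {1, 2, 3, 2, 1} at a few arbitrary frequencies and prints |H|, the real factor (1 + 2cos(Ω))², and arg(H) alongside -2Ω; the two phases agree modulo 2π wherever the gain is non-zero.

#include <stdio.h>
#include <math.h>
#include <complex.h>
int main(void)
{
    const double PI = 3.14159265358979;
    double h[5] = {1, 2, 3, 2, 1};      /* symmetric impulse response of 3.10 */
    for (double W = 0.1; W < PI; W += 0.3) {
        double complex H = 0;
        for (int n = 0; n < 5; n++)
            H += h[n]*cexp(-I*W*n);     /* DTFT at frequency W */
        /* arg(H) should equal -2W modulo 2*pi, since (1 + 2cos W)^2 >= 0 */
        printf("W=%4.1f  |H|=%7.3f  (1+2cosW)^2=%7.3f  arg(H)=%7.3f  -2W=%7.3f\n",
               W, cabs(H), pow(1 + 2*cos(W), 2), carg(H), -2*W);
    }
    return 0;
}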
Section 4
4.1. Cut-off frequency = π/2 radians/sample.
Taking φ(Ω) to be 0, H(e^jΩ) = G(Ω) and the ideal impulse response is, by the inverse DTFT:
h[n] = (1/(2π)) ∫ e^(jΩn) dΩ  (integral over -π/2 ≤ Ω ≤ π/2)  = sin(nπ/2)/(nπ) when n ≠ 0.
It may be checked that h[n] = 0.5 when n=0
Therefore {h[n]} is the following infinite sequence:
{ …, -1/(7π), 0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π), 0, -1/(7π), … }
Rectangularly windowing for -5 ≤ n ≤ 5 gives the following sequence:
{ …, 0, …, 0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π), 0, …, 0, … }
Delaying by 5 samples to make the impulse response causal gives:
{ …, 0, …, 0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π), 0, …, 0, … }
We can now draw the signal-flow graph of the FIR filter.
H(z) = 1/(5π) - (1/(3π))z^-2 + (1/π)z^-4 + 0.5z^-5 + (1/π)z^-6 - (1/(3π))z^-8 + (1/(5π))z^-10
y[n] = (1/(5π))x[n] - (1/(3π))x[n-2] + (1/π)x[n-4] + 0.5x[n-5] + (1/π)x[n-6] - (1/(3π))x[n-8] + (1/(5π))x[n-10]
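The coefficients above can be regenerated mechanically. The following sketch (not required by the question) evaluates h[n] = sin(nπ/2)/(nπ), applies the rectangular window and the 5-sample delay, and prints a[0] to a[10]; the values printed should match those in the difference equation.

#include <stdio.h>
#include <math.h>
int main(void)
{
    const double PI = 3.14159265358979;
    double a[11];                        /* causal FIR coefficients a[0]..a[10] */
    for (int n = 0; n <= 10; n++) {
        int m = n - 5;                   /* undo the 5-sample delay */
        a[n] = (m == 0) ? 0.5 : sin(m*PI/2)/(m*PI);   /* ideal response, rectangularly windowed */
        printf("a[%2d] = %9.5f\n", n, a[n]);
    }
    return 0;
}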
4.2.
h[n] = (1/(nπ))(sin(nπ/2) - sin(nπ/4)) when n ≠ 0, and 0.25 when n=0. Hence etc.
4.3.
y[n] = (1/(5π))x[n] - (1/(3π))x[n-2] + (1/π)x[n-4] + 0.5x[n-5] + (1/π)x[n-6] - (1/(3π))x[n-8] + (1/(5π))x[n-10]
= 0.06366x[n] - 0.1061x[n-2] + 0.3183x[n-4] + 0.5x[n-5] + 0.3183x[n-6] - 0.1061x[n-8] + 0.06366x[n-10]
= ( 637x[n] - 1061x[n-2] + 3183x[n-4] + 5000x[n-5] + 3183x[n-6] - 1061x[n-8] + 637x[n-10] ) / 10000
The following is the “bare bones” of a C program to read 1000 16-bit integer samples of a signal from a binary file, pass these through the 10th order filter above implemented using integer arithmetic only, and store the output samples generated in a binary output file.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main(void)
{ FILE *fpin, *fpout;
long n, y;
short m, ix, iy, x[11], a[11]; /* 16-bit integers */
fpin = fopen("c:\\..\\infilename.dat", "rb");
fpout = fopen("c:\\..\\outfilename.dat", "wb");
a[0]=637; a[1]=0; a[2]=-1061; a[3]=0; a[4]=3183; a[5]=5000;
a[6]=3183; a[7]=0; a[8]=-1061; a[9]=0; a[10]=637;
for (m=1; m<11; m++) x[m]=0; /* clear the delay line */
for (n=0; n<1000; n++)
{ fread(&ix, sizeof(short), 1, fpin); x[0] = ix;
y = (long)x[0]*a[0];
for (m=10; m>0; m--) { y = y + (long)x[m]*a[m]; x[m] = x[m-1]; } /* accumulate and shift */
iy = (short)(y / 10000); /* scale back to 16 bits */
fwrite(&iy, sizeof(short), 1, fpout); }
fclose(fpin); fclose(fpout);
return 0;
}
4.4.
h[n] = -(1/(nπ)) sin(nπ/3) when n ≠ 0, and 2/3 when n=0. Hence etc.
4.5. A linear phase filter must have an impulse response which is symmetric about some point in time, t = D say. This means that for all t, h(D+t) = h(D-t), and if h(t) remains non-zero as t → ∞, h(t) must also remain non-zero as t tends to -∞. This means that the filter must be non-causal and therefore unrealisable. The argument is the same for discrete-time filters where h[n] is symmetric about n = M where M is an integer, i.e. h[M-n] = h[M+n] for all n, or where 2M is an integer and h[M+0.5+n] = h[M-0.5-n] for all n. The argument is a little more complicated where the symmetry is about n = M where 2M is not an integer.
Examples of symmetric impulse responses:
{ …, 0, …, 0, 1, -2, 3, 7, 3, -2, 1, 0, …, 0, … }  M = 3; h[2] = h[4], etc.
{ …, 0, …, 0, 1, -2, 5, 5, -2, 1, 0, …, 0, … }  M = 2.5; h[2] = h[3], h[1] = h[4], h[0] = h[5]
Section 5
5.1. (a) H(z) = 2 - 3z^-1 + 6z^-4
(b) H(z) = z^-1 / (1 + z^-1 + 0.5z^-2)
5.2. Corresponding difference equation is y[n] = x[n-1]
5.3. (i) When the input is x[n] = z^n the output is y[n] = H(z)z^n.
Substituting, H(z)z^n = z^n - 0.9H(z)z^(n-1). Therefore H(z)[z^n + 0.9z^(n-1)] = z^n, so
H(z) = 1 / (1 + 0.9z^-1)
(ii) {h[n]} = { …, 0, …, 0, 1, -0.9, 0.81, -0.9³, 0.9⁴, … }
H(z) = 1 + (-0.9)z^-1 + (-0.9)²z^-2 + (-0.9)³z^-3 + …
= 1 + (-0.9z^-1) + (-0.9z^-1)² + (-0.9z^-1)³ + …
= 1 / (1 + 0.9z^-1), assuming |0.9z^-1| < 1, i.e. |z| > 0.9.
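As a check on the geometric-series argument, the sketch below sums the first 200 terms of h[n]z^-n at an arbitrary point z = e^(jΩ) on the unit circle (where |0.9z^-1| < 1) and compares the result with the closed form 1/(1 + 0.9z^-1).

#include <stdio.h>
#include <math.h>
#include <complex.h>
int main(void)
{
    double W = 1.0;                      /* arbitrary test frequency (rad/sample) */
    double complex z = cexp(I*W);        /* point on the unit circle, so |z| = 1 > 0.9 */
    double complex sum = 0, term = 1;
    for (int n = 0; n < 200; n++) {      /* sum h[n] z^-n = sum of (-0.9 z^-1)^n */
        sum += term;
        term *= -0.9/z;
    }
    printf("truncated sum    = %f %+fj\n", creal(sum), cimag(sum));
    double complex H = 1.0/(1.0 + 0.9/z);  /* closed form 1/(1 + 0.9 z^-1) */
    printf("1/(1 + 0.9 z^-1) = %f %+fj\n", creal(H), cimag(H));
    return 0;
}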
5.4. y[n] = x[n] + 3x[n-1] + 2x[n-2] – 0.9 y[n-1]
Zeros: z = -2 & z = -1; poles at z = 0 & z = - 0.9.
5.5. Difference equation is: y[n] = x[n] + 2 y[n-1].
{h[n]} = { …, 0, …, 0, 1, 2, 4, 8, 16, 32, …} Unstable! (NB not all IIR filters are unstable)
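The doubling is easy to see by iterating the difference equation; this short sketch prints the first few samples of the impulse response.

#include <stdio.h>
int main(void)
{
    /* impulse response of y[n] = x[n] + 2 y[n-1]: doubles at every sample */
    double y = 0.0;
    for (int n = 0; n < 16; n++) {
        double x = (n == 0) ? 1.0 : 0.0;   /* unit impulse input */
        y = x + 2.0*y;
        printf("h[%2d] = %.0f\n", n, y);
    }
    return 0;
}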
5.6.
H(z) = (1 - 0.9z^-1 + 0.81z^-2) / (1 - 0.95z^-1 + 0.9025z^-2)
     = (z² - 0.9z + 0.81) / (z² - 0.95z + 0.9025)
     = (z - 0.9e^(jπ/3))(z - 0.9e^(-jπ/3)) / [ (z - 0.95e^(jπ/3))(z - 0.95e^(-jπ/3)) ]
The gain response will have a peak of amplitude 2 (6 dB) at Ω = π/3. The gain at frequencies not close to π/3 will be approximately one. To find out how sharp the peak is we can do various easy things.
You may choose to estimate the gain at π/3 ± 0.05. A bit of geometry (right-angled triangles as usual) tells us that the gain at π/3 ± 0.05 is √(0.1² + 0.05²) / √(0.05² + 0.05²) = √5/√2 ≈ 1.58, i.e. about 4 dB. The distance to the pole has increased by a factor √2 (decreasing the gain by 3 dB) but the distance to the zero has increased from 0.1 to 0.112, i.e. a factor 1.12 (corresponding to an increase in gain of about 1 dB).
Overall the gain decreases by 2 dB from its value of 6 dB at π/3. This information will allow a reasonable sketch (showing 4 dB points rather than 3 dB points), but if you insist on finding the 3 dB points, you can do it quite easily by finding the increase q in relative frequency such that
(distance to zero) / (distance to pole)
reduces from 2 to approximately √2. As usual we neglect changes to the distances to the complex conjugates of this pole and zero as they are far away.
Distance to zero ≈ √(q² + 0.1²) and distance to pole ≈ √(q² + 0.05²).
Solving √(q² + 0.1²) / √(q² + 0.05²) = √2 gives q = ±0.071 radians/sample.
Hence the 3 dB points are at π/3 ± 0.071.
Follow-up exercise: repeat this problem with the poles at 0.99exp(±jπ/3) and the zeros unchanged. The peak at π/3 now becomes much higher, i.e. 20 dB, and you will find that the points where the gain drops by 3 dB from 20 dB to 17 dB occur very close to π/3 ± 0.01. But where does the gain become approximately 3 dB? Solving √(q² + 0.1²) / √(q² + 0.01²) = √2 gives q = ±0.1 radians/sample; i.e. the gain drops to 3 dB at Ω = π/3 ± 0.1. A gain response sketch for this follow-up exercise can therefore be drawn with minimal calculation.
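If desired, the geometric estimates can be compared with the exact gain. The sketch below evaluates H(e^jΩ) at π/3 and at the offsets used above; the exact figures differ slightly from the estimates because the geometric argument neglects the complex-conjugate pole and zero.

#include <stdio.h>
#include <math.h>
#include <complex.h>
int main(void)
{
    const double PI = 3.14159265358979;
    double offs[7] = { -0.1, -0.071, -0.05, 0.0, 0.05, 0.071, 0.1 };
    for (int k = 0; k < 7; k++) {
        double W = PI/3 + offs[k];
        double complex z1 = cexp(-I*W);                 /* z^-1 on the unit circle */
        double complex H = (1.0 - 0.9*z1 + 0.81*z1*z1)
                         / (1.0 - 0.95*z1 + 0.9025*z1*z1);
        printf("W = pi/3 %+6.3f   gain = %6.2f dB\n", offs[k], 20*log10(cabs(H)));
    }
    return 0;
}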
5.7. H(e^jΩ) = (r - e^(-jΩ)) / (1 - re^(-jΩ)) = (r - (cos(Ω) - j sin(Ω))) / (1 - r(cos(Ω) - j sin(Ω))) = ((r - cos(Ω)) + j sin(Ω)) / ((1 - r cos(Ω)) + j r sin(Ω))
|H(e^jΩ)|² = ((r - cos(Ω))² + sin²(Ω)) / ((1 - r cos(Ω))² + r²sin²(Ω)) = (r² - 2r cos(Ω) + 1) / (1 - 2r cos(Ω) + r²(cos²(Ω) + sin²(Ω))) = 1
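A quick numerical confirmation of the all-pass property, for an arbitrary choice r = 0.8:

#include <stdio.h>
#include <math.h>
#include <complex.h>
int main(void)
{
    const double PI = 3.14159265358979;
    double r = 0.8;                      /* arbitrary value of r */
    for (double W = 0.0; W <= PI; W += PI/6) {
        double complex e = cexp(-I*W);   /* e^(-jW) */
        double complex H = (r - e)/(1.0 - r*e);
        printf("W = %5.3f   |H| = %.12f\n", W, cabs(H));   /* always 1: all-pass */
    }
    return 0;
}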
5.8. See Example 5.4.
5.9. The coefficients a0, a1, a2, b1 and b2 are clearly not integers, and if we just round each to the nearest integer or take its integer part, the result will be rather silly. So we must choose a scaling factor K which is a large integer, say 100 or 1000 or 1024 or maybe 100000. We multiply each coefficient by K and then round to the nearest integer. The effect of rounding is now less drastic, and we can compensate later for the scaling up of the coefficients by dividing by K. Clearly the larger K, the less serious the effect of rounding. However the integers produced must not be too large, as overflow may occur.
In a 16-bit microprocessor like the TMS32010, stored integers representing filter coefficients are limited to the range -32768 to 32767. Similarly, signal values from the ADC lie between -32768 and 32767. The processor can multiply together two 16-bit words (e.g. a signal sample and an integerised filter coefficient) to produce a 32-bit result, and can add 32-bit numbers together. But any resulting 32-bit number must be scaled back to 16 bits before it can be stored as a signal and subjected to further multiplication, or output to a DAC. The scaling back to 16 bits is achieved by dividing by K. It is also much easier to divide by a constant K which is a power of two, such as 1024, than to divide by, say, 100 or 1000. Since second-order IIR section filter coefficients normally lie between -2 and +2, choosing K = 1024 is fairly safe, though not necessarily optimal. Having chosen K, we then calculate the integerised coefficients as follows:
IB1 = int (K*b1); IB2 = int (K*b2); IA0 = int(K*a0); IA1 = int(K*a1); IA2 =int(K*a2);
Now we can write the program:-
IW1:=0; IW2:=0; (these are 16-bit integers)
L: Input IX; (16-bit integer from ADC)
P := IX*K - IB1*IW1 – IB2*IW2; (32-bit result in P)
IW := P / K; (integer divide by shifting to produce the 16-bit IW)
P := IW*IA0 + IW1*IA1 + IW2*IA2; (32-bit result in P)
IY := P / K; (integer divide by shifting to produce the 16-bit IY)
Output IY; (send 16-bit result to DAC)
IW2 := IW1;
IW1 := IW;
Goto L (Go back for next sample)
These program steps are easily understood and converted to a different language such as C or assembly language. Apologies for the “goto” statement.
The program given above may be understood in a more professional way by defining IB1, IB2, etc. to be "Q-12" fixed-point numbers; i.e. a decimal point (strictly a "binary" point) is assumed to exist after the most significant 4 bits. If the programmer chooses to define IX as a Q-12 number also, P becomes a Q-24 number which is scaled back to a Q-12 number by the statement IY := P/K. The programmer must remember the Q-format assumed for each word and keep track of what happens at each stage of the calculation. Thinking about Q-formats rather than scaling by K constants is ultimately more elegant and flexible, but students tend to prefer K constants to begin with. You can generate the same code by either mode of thinking. Clearly good documentation is going to be very important as calculations get complicated. Fixed-point programming is important for mobile communications since it simplifies the hardware and computational complexity (though not the programming effort), and this leads to power savings and longer battery life.
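For illustration only, the pseudocode translates into C roughly as follows. The coefficient values below are hypothetical (the real a0, a1, a2, b1, b2 come from the question), K = 1024 as discussed, the made-up test input stands in for the ADC, and the integer division by K stands in for the arithmetic shift a DSP would use.

#include <stdio.h>
#define K 1024                           /* scaling constant; a power of two so /K is a shift */
int main(void)
{
    /* hypothetical floating-point coefficients, for illustration only */
    double a0 = 0.25, a1 = 0.5, a2 = 0.25, b1 = -0.9, b2 = 0.4;
    /* integerised coefficients, as in the pseudocode above */
    short IA0 = (short)(K*a0), IA1 = (short)(K*a1), IA2 = (short)(K*a2);
    short IB1 = (short)(K*b1), IB2 = (short)(K*b2);
    short IW = 0, IW1 = 0, IW2 = 0;      /* 16-bit delay-line values */
    short input[8] = {10000, 0, 0, 0, 0, 0, 0, 0};   /* made-up test signal */
    for (int n = 0; n < 8; n++) {
        short IX = input[n];                                   /* would come from the ADC */
        long  P  = (long)IX*K - (long)IB1*IW1 - (long)IB2*IW2; /* 32-bit accumulation */
        IW = (short)(P/K);                                     /* scale back to 16 bits */
        P  = (long)IW*IA0 + (long)IW1*IA1 + (long)IW2*IA2;
        short IY = (short)(P/K);                               /* would go to the DAC */
        printf("n=%d  IY=%d\n", n, IY);
        IW2 = IW1; IW1 = IW;                                   /* shuffle the delay line */
    }
    return 0;
}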
5.10. If the order is as in the question, the impulse response of the combination is {h1[n]}⊗{h2[n]}. If the order is reversed, the impulse response becomes {h2[n]}⊗{h1[n]}, which is equal to {h1[n]}⊗{h2[n]} since convolution is commutative.
5.11. Notch frequency is π/2. Place a zero on the unit circle at z = exp(jπ/2) and its complex conjugate at z = exp(-jπ/2). Place poles at z = (1-a)exp(jπ/2) and z = (1-a)exp(-jπ/2). If a is small, the 3 dB bandwidth is approximately 2a radians/sample. Therefore 2a = 3.2×2π/200 and a ≈ 0.05.
H(z) = (z - exp(jπ/2))(z - exp(-jπ/2)) / [ (z - 0.95exp(jπ/2))(z - 0.95exp(-jπ/2)) ] = (z² + 1) / (z² + 0.9025) = (1 + z^-2) / (1 + 0.9025z^-2)
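A minimal sketch of the resulting notch filter, assuming the poles at 0.95exp(±jπ/2) found above: the corresponding difference equation is y[n] = x[n] + x[n-2] - 0.9025y[n-2], and feeding it a cosine at the notch frequency Ω = π/2 (50 Hz if the sampling rate is 200 Hz) shows the output decaying towards zero.

#include <stdio.h>
#include <math.h>
int main(void)
{
    const double PI = 3.14159265358979;
    /* y[n] = x[n] + x[n-2] - 0.9025 y[n-2], from H(z) = (1 + z^-2)/(1 + 0.9025 z^-2) */
    double xm1 = 0, xm2 = 0, ym1 = 0, ym2 = 0;
    for (int n = 0; n < 60; n++) {
        double x = cos(PI/2*n);          /* input at the notch frequency */
        double y = x + xm2 - 0.9025*ym2;
        xm2 = xm1; xm1 = x;              /* shuffle the delay lines */
        ym2 = ym1; ym1 = y;
        if (n >= 50)                     /* print the steady state only */
            printf("n=%2d  y[n]=%10.6f\n", n, y);
    }
    return 0;
}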