14:332:421 Communications Engineering

Lecture Notes for Monday September 18, 2000

Author: Jen Buonanno

Probability Review

If we have a random process X(t), we have the following definitions:

Joint PDF of X(t_1), ..., X(t_k): f_{X(t_1)...X(t_k)}(x_1, ..., x_k) (this is hard to calculate)

Autocorrelation: R_X(t, τ) = E[X(t)X(t + τ)]

Stationary process: f_{X(t_1)...X(t_k)}(x_1, ..., x_k) = f_{X(t_1 + T)...X(t_k + T)}(x_1, ..., x_k)

Wide Sense Stationary (WSS): E[X(t)] = μ_X (a constant) and R_X(t, τ) = R_X(τ)

Covariance: C_X(t, τ) = E[(X(t) - E[X(t)])(X(t + τ) - E[X(t + τ)])]

= R_X(t, τ) - E[X(t)]E[X(t + τ)]

A strictly stationary process is one whose joint PDF is unchanged by any time offset T.

A stationary process is WSS, but not all WSS processes are stationary!!

For WSS X(t): C_X(t, τ) = R_X(τ) - μ_X² = C_X(τ)
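As a quick numerical sanity check (not in the original notes), the sketch below simulates a random-phase cosine, a textbook WSS example with μ_X = 0 and R_X(τ) = ½ cos(2πτ); all numbers are illustrative. The estimated mean is flat in t and the autocorrelation estimate depends only on the lag τ.

    import numpy as np

    rng = np.random.default_rng(0)

    # Random-phase cosine X(t) = cos(2*pi*t + Theta), Theta ~ Uniform[0, 2*pi):
    # a standard WSS example with E[X(t)] = 0 and R_X(tau) = 0.5*cos(2*pi*tau).
    n_paths, n_samples, dt = 10_000, 200, 0.01
    t = np.arange(n_samples) * dt
    theta = rng.uniform(0, 2 * np.pi, size=(n_paths, 1))
    X = np.cos(2 * np.pi * t + theta)

    lag = 10                                   # tau = lag * dt = 0.1
    mean_est = X.mean(axis=0)                  # ~0 for every t (mean does not depend on t)
    R_est = np.mean(X[:, :-lag] * X[:, lag:])  # estimate of E[X(t)X(t + tau)]
    print(np.abs(mean_est).max(), R_est, 0.5 * np.cos(2 * np.pi * 0.1))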

Gaussian Process

The book's definition: X(t) is a Gaussian random process if

Y = ∫_0^{t_1} g(t)X(t) dt is a Gaussian random variable for all well-behaved g(t)

Using matrices, we can express a Gaussian random process as such:

X = X(t1)x = x1m = E[X(t1)]

: : :

X(tk) xk E[X(tk)]

X(t) is a Gaussian random process if X has a multivariate Gaussian PDF for all choices of t_1, ..., t_k

The PDF of X is defined as:

f_X(x) = 1/((2π)^{k/2} |C|^{1/2}) · e^{-(x - m)^T C^{-1} (x - m)/2}

|C| = det(C) and C is the covariance matrix, where C_{i,j} = cov(X(t_i), X(t_j)) = C_X(t_i, t_j - t_i)
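To make the matrix formula concrete, here is a small Python sketch that evaluates f_X(x) directly from the definition (mvn_pdf is my own helper name, and the mean, covariance, and evaluation point are made-up example values):

    import numpy as np

    def mvn_pdf(x, m, C):
        # f_X(x) = (2*pi)^(-k/2) |C|^(-1/2) exp(-(x - m)^T C^{-1} (x - m) / 2)
        k = len(m)
        d = x - m
        quad = d @ np.linalg.solve(C, d)   # (x - m)^T C^{-1} (x - m), without forming C^{-1}
        return np.exp(-quad / 2) / np.sqrt((2 * np.pi) ** k * np.linalg.det(C))

    # k = 2 example with unit variances and covariance 0.5 between the two samples
    m = np.zeros(2)
    C = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
    print(mvn_pdf(np.array([0.2, -0.1]), m, C))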

Let’s see what happens when k = 1 in the f_X(x) equation:

X = [X(t_1)] = X,    m = [E[X(t_1)]] = m,    x = [x_1] = x_1

C = E[(X(t_1) - m)(X(t_1) - m)] = E[(X - m)²] = σ², where σ² is the variance of X, var(X)

C-1 = 1/2

So, this gives us:

f_X(x) = 1/((2π)^{k/2} |C|^{1/2}) · e^{-(x - m)^T C^{-1} (x - m)/2}

becomes

f_X(x) = 1/√(2πσ²) · e^{-(x - m)²/2σ²}

which is the formula we know and love from probability last year…

Let’s look at the case k = 2:

X = X(t1) = X1m = E[X1]

X(t2) = X2 E[X2]

C_{i,j} = cov(X_i, X_j), so

C = [ cov(X_1, X_1)  cov(X_1, X_2) ] = [ σ_1²   ρ    ]   where ρ = cov(X_1, X_2) = cov(X_2, X_1)
    [ cov(X_2, X_1)  cov(X_2, X_2) ]   [ ρ      σ_2² ]

C^{-1} is a 2×2 inverse; the general (math) formula for this is:

[ a  b ]^{-1}  =  1/(ad - bc) · [  d  -b ]
[ c  d ]                        [ -c   a ]

C-1 = 1/(1222) 22-

- 12

If we plug and chug with these values, we’d get f_X(x) = f_{X_1 X_2}(x_1, x_2), which is a bivariate Gaussian PDF.
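A short sketch of that plug-and-chug step (σ_1², σ_2², ρ below are made-up example values): build C, invert it with the 2×2 formula above, check against numpy, and evaluate the zero-mean bivariate PDF.

    import numpy as np

    sigma1_sq, sigma2_sq, rho = 1.0, 2.0, 0.5   # rho = cov(X1, X2), illustrative values

    C = np.array([[sigma1_sq, rho],
                  [rho, sigma2_sq]])
    det = sigma1_sq * sigma2_sq - rho ** 2      # ad - bc for this C
    C_inv = np.array([[sigma2_sq, -rho],
                      [-rho, sigma1_sq]]) / det # the 2x2 inverse formula from above
    print(np.allclose(C_inv, np.linalg.inv(C))) # True

    x = np.array([0.3, -0.4])                   # zero-mean case (m = 0)
    f = np.exp(-x @ C_inv @ x / 2) / (2 * np.pi * np.sqrt(det))
    print(f)                                    # f_{X1X2}(x1, x2)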

So, for a simple example, let’s look at the following:

X(t) is a WSS Gaussian process with zero mean such that X(t_i) and X(t_j) are uncorrelated (covariance = 0) for all t_i ≠ t_j

X = X1m = 0

: :

Xk 0

C = C_{ij} = C_X(t_i, t_j - t_i) = E[(X(t_i) - 0)(X(t_j) - 0)] = E[X(t_i)X(t_j)] = E[X_i]E[X_j] = 0 for i ≠ j

When i = j: C_{ii} = E[X(t_i)²] = σ² (this is the variance). NOTE: σ² does not depend on i because this is a WSS process.

C = 2 0 0 0 … the main diagonal of this matrix is 2

| 0 2 0 0 …| det(C) = 2k

| : ………… |

 0 0 0 …2

C-1 = 1/2 0 0 0 …

| 0 1/2 0 0 …|

| : ………… |

 0 0 0 …1/2

The PDF becomes:

f_X(x) = 1/((2π)^{k/2} (σ^{2k})^{1/2}) · e^{-x^T C^{-1} x / 2}

x^T C^{-1} x = (x_1 ... x_k) [ 1/σ²        ] [ x_1 ]  = (x_1 ... x_k) [ x_1/σ² ]  = (x_1² + ... + x_k²)/σ²
                             [     ...     ] [  :  ]                  [   :    ]
                             [        1/σ² ] [ x_k ]                  [ x_k/σ² ]

The PDF becomes:

f_X(x) = 1/(2πσ²)^{k/2} · e^{-(x_1² + ... + x_k²)/2σ²}

= [1/√(2πσ²) · e^{-x_1²/2σ²}] · ... · [1/√(2πσ²) · e^{-x_k²/2σ²}]

= f_{X_1}(x_1) ... f_{X_k}(x_k)

Two important points:

For a Gaussian process, uncorrelated samples are also independent.

The joint PDF of an uncorrelated Gaussian process is the product of the individual (marginal) PDFs.
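Both points can be checked numerically for the diagonal-C case just derived; in this sketch (σ² and x are arbitrary values of mine) the joint PDF and the product of marginals agree to floating-point precision.

    import numpy as np

    sigma_sq, k = 2.0, 4
    x = np.array([0.5, -1.0, 0.2, 1.5])

    C = sigma_sq * np.eye(k)                     # diagonal covariance: uncorrelated samples
    joint = (np.exp(-x @ np.linalg.inv(C) @ x / 2)
             / np.sqrt((2 * np.pi) ** k * np.linalg.det(C)))
    product = np.prod(np.exp(-x ** 2 / (2 * sigma_sq)) / np.sqrt(2 * np.pi * sigma_sq))
    print(joint, product)                        # equal up to floating-point error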

Gaussian Noise Process N(t)

Our model: E[N(t)] = 0 (we have zero mean!)

Noise is unpredictable; if the mean were not zero, the noise would contain a DC component, and that component would be predictable.

The unpredictability of the noise means:

N(t_1), ..., N(t_k) are useless in predicting N(t_{k+1}) ⇒ N(t_{k+1}) is independent of N(t_1), ..., N(t_k)

For all t_i ≠ t_j, N(t_i) and N(t_j) are independent and uncorrelated

In our noise model, we assume N(t) is a Gaussian process with zero mean.

We also assume the power per unit frequency (the power spectral density) is constant.

N(t) has a power spectral density S_N(f):

So S_N(f) = N_0/2 = CONSTANT

R_N(τ) = (N_0/2) δ(τ)

S_N(f) = ∫_{-∞}^{+∞} R_N(τ) e^{-j2πfτ} dτ = (N_0/2) ∫_{-∞}^{+∞} δ(τ) e^{-j2πfτ} dτ = N_0/2

If a process has a power spectral density, it is a WSS process (the PSD is only defined for WSS processes).

Covariance (the mean is zero, so the covariance equals the autocorrelation):

Cov(N(t), N(t + τ)) = R_N(τ) = (N_0/2) δ(τ) = 0 for τ ≠ 0

One problem: the average noise power of this process is infinite:

∫_{-∞}^{+∞} S_N(f) df = (N_0/2) ∫_{-∞}^{+∞} df = ∞, so this really doesn’t exist!!

In a real physical system the bandwidth is limited, so the noise power we actually see is finite.

If we pass N(t) through a filter H(f), we get N′(t), and we have the following relationship:

S_{N′}(f) = |H(f)|² S_N(f) = (N_0/2) |H(f)|²
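A discrete-time sketch of this relationship (the 3-tap low-pass h below is a made-up example, and circular filtering stands in for the LTI filter): filtering the noise and averaging periodograms reproduces |H(f)|² times the input level.

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 1024, 500
    h = np.array([0.25, 0.5, 0.25])            # made-up low-pass FIR filter
    H = np.fft.fft(h, n)                       # its frequency response on n bins

    psd_out = np.zeros(n)
    for _ in range(trials):
        noise = rng.normal(0, 1, n)            # unit-level white noise (S_N = 1 here)
        filtered = np.fft.ifft(np.fft.fft(noise) * H).real   # circular filtering
        psd_out += np.abs(np.fft.fft(filtered)) ** 2 / n
    psd_out /= trials

    print(np.max(np.abs(psd_out - np.abs(H) ** 2)))   # small: S_N'(f) ~ |H(f)|^2 * S_N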

Amplitude Modulation

We’re referring to DSB-SC (Double Side Band – Suppressed Carrier)

Suppose we have a message x(t) that is bandlimited. (x(t) is also referred to as a baseband signal, which means its Fourier transform is centered at 0.)

On the transmitter end, we have:

x(t) → ⊗ → y(t)
       ↑
  cos(2πf_c t)

y(t) = x(t) cos(2πf_c t) (this is the modulated signal from the diagram above)

Y(f) = ½ X(f - f_c) + ½ X(f + f_c)

Usually, the function Y(f) is very narrow (about 30 kHz in width), centered around a large carrier frequency f_c.
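To see those shifted copies numerically, here is a small sketch (my numbers: a 500 Hz tone as x(t) and a 10 kHz carrier, far below any real carrier but easy to simulate):

    import numpy as np

    fs, fc, fm = 100_000, 10_000, 500          # sample rate, carrier, message freq (Hz)
    t = np.arange(0, 0.1, 1 / fs)
    x = np.cos(2 * np.pi * fm * t)             # message x(t)
    y = x * np.cos(2 * np.pi * fc * t)         # modulated signal y(t)

    f = np.fft.fftfreq(len(t), 1 / fs)
    Y = np.abs(np.fft.fft(y)) / len(t)
    print(np.sort(f[np.argsort(Y)[-4:]]))      # peaks at +/-(fc - fm) and +/-(fc + fm)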

Typical carrier frequencies:

AM radio: 1 MHz

FM radio: 100 MHz

Cell phones: 850 MHz

PCS: 1900 MHz

Wireless LAN: 2.4 GHz

On the receiver end of things, we have:

y(t) → ⊗ → v(t) → LPF → ½ x(t)
       ↑
  cos(2πf_c t)

The cos(2πf_c t) at the receiver comes from a local oscillator. In a perfect world this local oscillator and the modulating cos(2πf_c t) at the transmitter would be perfectly in phase; when the phases are not equal, the amplitude of the recovered signal is reduced.

v(t) = y(t)cos(2fct) = x(t)cos2(2fct) = ½ x(t)[1 + cos(4fct)]

V(f) = ½ X(f) + ½(½)[X(f - 2f_c) + X(f + 2f_c)]: a copy of X(f) at baseband plus scaled copies centered at ±2f_c.

And if we low-pass filter about zero frequency, we finally get an output of ½ x(t).
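Putting the whole chain together, here is a minimal end-to-end sketch (all parameters are made up for simulation: a 100 Hz tone message, a 10 kHz carrier, and a 1 kHz Butterworth filter from scipy standing in for the LPF):

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, fc, fm = 100_000, 10_000, 100          # sample rate, carrier, message freq (Hz)
    t = np.arange(0, 0.05, 1 / fs)
    x = np.cos(2 * np.pi * fm * t)             # message x(t)

    y = x * np.cos(2 * np.pi * fc * t)         # transmitter: y(t) = x(t)cos(2*pi*fc*t)
    v = y * np.cos(2 * np.pi * fc * t)         # receiver mixer: 0.5*x(t)[1 + cos(4*pi*fc*t)]

    b, a = butter(5, 1000 / (fs / 2))          # LPF, 1 kHz cutoff: keeps only 0.5*x(t)
    out = filtfilt(b, a, v)
    print(np.max(np.abs(out[1000:-1000] - 0.5 * x[1000:-1000])))   # ~0 away from edges

Replacing the receiver’s cosine with cos(2πf_c t + φ) scales the recovered signal to ½ x(t) cos(φ), which is the phase-mismatch loss mentioned above.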