Basic Topics

Objectives

Get you comfortable and familiar with the tools of linear algebra and their applications.

There will be 12-14 exercise sets; the best 10 will count (at least 10 must be submitted).

80% of the grade is based on homework!

20% is based on the exam.

Topics Planned to be Covered

-Vector spaces over $\mathbb{R}$ and $\mathbb{C}$

-Basic definitions connected with vector spaces

-Matrices, block matrices

-Gaussian elimination

-Conservation of dimension

-Eigenvalues and eigenvectors

-Determinants

-Jordan forms

-Difference and differential equations

-Normed linear spaces

-Inner product spaces (orthogonality)

-Symmetric, Hermitian, Normal matrices

-Singular Value Decomposition (SVD)

-Vector-valued functions of many variables

-Fixed point theorems

-Implicit function theorem

-Extremal problems with constraints

General Idea of Linear Algebra

Given scalars $a_{ij}$ ($1 \leq i \leq m$, $1 \leq j \leq n$) and $b_1, \dots, b_m$.

Find $x_1, \dots, x_n$ such that $a_{i1}x_1 + \dots + a_{in}x_n = b_i$ for $i = 1, \dots, m$.

A choice of $x_1, \dots, x_n$ that satisfies these equations is called a solution.

1)When can you guarantee at least one solution?

2)When can you guarantee at most one solution?

3)How can you find solutions?

4)Are there approximate solutions?

Short notation: $Ax = b$.
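A minimal numpy sketch of questions (1)-(4); the matrices and right-hand sides below are made-up examples, not the course's:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# A is square and invertible, so there is exactly one solution.
x = np.linalg.solve(A, b)
print(x)                      # the unique solution of A x = b

# An overdetermined system usually has no exact solution;
# np.linalg.lstsq returns the best approximate (least squares) solution.
A2 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
b2 = np.array([1.0, 1.0, 0.0])
x_ls, res, rank, sv = np.linalg.lstsq(A2, b2, rcond=None)
print(x_ls)                   # minimizes ||A2 x - b2||
```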

A vector space $V$ over $\mathbb{F}$ is a collection of objects called vectors on which two operations are defined:

1)Vector addition – $u + v \in V$ for every $u, v \in V$

2)Scalar multiplication – $\alpha v \in V$ for every $\alpha \in \mathbb{F}$ and $v \in V$

Subject to the following constraints:

1)$u + v = v + u$ for every $u, v \in V$

2)$(u + v) + w = u + (v + w)$ for every $u, v, w \in V$

3)There is a zero vector $0 \in V$, s.t. $v + 0 = v$ for every $v \in V$

4)If $v \in V$, then there is a $w \in V$ s.t. $v + w = 0$

5)$\alpha(u + v) = \alpha u + \alpha v$ for every $\alpha \in \mathbb{F}$ and $u, v \in V$

6)If $\alpha, \beta \in \mathbb{F}$, then $(\alpha + \beta)v = \alpha v + \beta v$

7)If $\alpha, \beta \in \mathbb{F}$, then $(\alpha\beta)v = \alpha(\beta v)$

8)$1v = v$ for every $v \in V$

Vector Space Example

Claim: if $V$ is a vector space over $\mathbb{F}$, then there is exactly one zero element.

Proof: Suppose that $0_1$ and $0_2$ are both zero elements.

Let's set $v = 0_1$ in $v + 0_2 = v$: $0_1 + 0_2 = 0_1$

Let's set $v = 0_2$ in $v + 0_1 = v$: $0_2 + 0_1 = 0_2$

Therefore $0_1 = 0_1 + 0_2 = 0_2 + 0_1 = 0_2$

Claim: If $v \in V$, there is exactly one additive inverse.

Suppose $w_1$ and $w_2$ are both additive inverses of $v$.

Let's mix things up: $w_1 = w_1 + 0 = w_1 + (v + w_2) = (w_1 + v) + w_2 = 0 + w_2 = w_2$

We denote the additive inverse of $v$ by $-v$.

Since $u + (-v)$ is clumsy to write, we abbreviate this as $u - v$.

Note: We can replace $\mathbb{R}$ by $\mathbb{C}$!

"Let $V$ be a vector space over $\mathbb{F}$" means we can replace $\mathbb{F}$ by either $\mathbb{R}$ or $\mathbb{C}$ in all that follows this statement.

Definitions

Let $V$ be a vector space over $\mathbb{F}$.

A subset $M$ of $V$ is called a subspace of $V$ if:

1)$u + v \in M$ for every $u, v \in M$

2)$\alpha u \in M$ for every $\alpha \in \mathbb{F}$ and $u \in M$

If (1) and (2) hold, then $M$ is itself a vector space over $\mathbb{F}$.

Example

Let $V = \mathbb{R}^2$ and $M = \{x \in \mathbb{R}^2 : x_1 + x_2 = 0\}$. Sums and scalar multiples of vectors in $M$ stay in $M$; as we can see, this is a subspace…

However…

Let $M = \{x \in \mathbb{R}^2 : x_1 + x_2 = 1\}$. If $u, v \in M$, then $(u + v)_1 + (u + v)_2 = 2$, so $u + v \notin M$.

So this is not a subspace!

Span

The span of $v_1, \dots, v_k \in V$ is the set $\operatorname{span}\{v_1, \dots, v_k\} = \{\alpha_1 v_1 + \dots + \alpha_k v_k : \alpha_1, \dots, \alpha_k \in \mathbb{F}\}$

Check: $\operatorname{span}\{v_1, \dots, v_k\}$ is a subspace of $V$.

Clear: sums and scalar multiples of linear combinations are again linear combinations.

Span

For example: $\operatorname{span}\{(1,0)\}$, $\operatorname{span}\{(2,0)\}$ and $\operatorname{span}\{(1,0), (2,0)\}$ are given by different lists of vectors.

But they are all equal(!!): each one is the $x_1$-axis in $\mathbb{R}^2$.

Linear Dependency

$v_1, \dots, v_k$ are said to be linearly dependent if there is a choice of scalars $\alpha_1, \dots, \alpha_k$ s.t. $\alpha_1 v_1 + \dots + \alpha_k v_k = 0$

Not all of which are zero!

If, say, $\alpha_1 \neq 0$, then $v_1 = -\alpha_1^{-1}(\alpha_2 v_2 + \dots + \alpha_k v_k)$.

$v_1, \dots, v_k$ are said to be linearly independent if the following statement is true: $\alpha_1 v_1 + \dots + \alpha_k v_k = 0$ only when $\alpha_1 = \dots = \alpha_k = 0$.

If $v_1, \dots, v_k$ are linearly independent, then $\alpha_1 v_1 + \dots + \alpha_k v_k = \beta_1 v_1 + \dots + \beta_k v_k$ implies $\alpha_i = \beta_i$

for all $i = 1, \dots, k$.

A set of vectors $\{v_1, \dots, v_n\}$ is said to be a basis for a vector space $V$ over $\mathbb{F}$ if:

1)$\operatorname{span}\{v_1, \dots, v_n\} = V$ (there are enough vectors to span the entire space)

2)$v_1, \dots, v_n$ are linearly independent. (there aren't too many vectors; see the numeric check below)
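A small numpy sketch (the vectors are my own examples, not from the notes): vectors are linearly independent iff the matrix with them as columns has rank equal to the number of vectors, and $n$ independent vectors in $\mathbb{F}^n$ are a basis.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])   # v3 = v1 + v2, so the set is dependent

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))  # 2 < 3: v1, v2, v3 are linearly dependent

B = np.column_stack([v1, v2, np.array([0.0, 0.0, 1.0])])
print(np.linalg.matrix_rank(B))  # 3: these three vectors are a basis of R^3
```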

Matrix Multiplication

If $A$ is a $p \times q$ matrix and $B$ is a $q \times r$ matrix,

$AB$ is a $p \times r$ matrix with $(AB)_{ij} = \sum_{k=1}^{q} a_{ik}b_{kj}$

If $A$ and $B$ are both $n \times n$ matrices, then $AB$ and $BA$ are both defined.

In general – $AB \neq BA$! (checked below)
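A quick numpy check, with two made-up $2 \times 2$ matrices, that matrix multiplication is not commutative:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1]])
B = np.array([[1, 0],
              [3, 1]])

print(A @ B)   # [[7, 2], [3, 1]]
print(B @ A)   # [[1, 2], [3, 7]]
print(np.array_equal(A @ B, B @ A))  # False: the two products differ
```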

$A \in \mathbb{R}^{p \times q}$ or $A \in \mathbb{C}^{p \times q}$.

$A$ is said to be left invertible if there is a $B \in \mathbb{C}^{q \times p}$ s.t. $BA = I_q$

$A$ is right invertible if there's a $C \in \mathbb{C}^{q \times p}$ s.t. $AC = I_p$

If $BA = I_q$ and $AC = I_p$ then $B = B(AC) = (BA)C = C$.

Later we shall show that this also forces $p = q$.

Triangular matrices

$A \in \mathbb{C}^{n \times n}$ is said to be upper triangular if $a_{ij} = 0$ for $i > j$

$A \in \mathbb{C}^{n \times n}$ is said to be lower triangular if $a_{ij} = 0$ for $i < j$

A matrix that is either upper or lower triangular is referred to as simply triangular.

Theorem: Let $A$ be a triangular matrix; then $A$ is invertible if and only if all the diagonal entries are nonzero.

Moreover, in this case, if $A$ is upper triangular its inverse is upper triangular,

and if $A$ is lower triangular its inverse is lower triangular. (See the numeric illustration below.)
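A short numpy illustration of the theorem on a made-up upper triangular matrix: invertibility is read off the diagonal, and the inverse is again upper triangular.

```python
import numpy as np

A = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 6.0]])

print(np.all(np.diag(A) != 0))           # True: all diagonal entries nonzero
Ainv = np.linalg.inv(A)
print(Ainv)
print(np.allclose(Ainv, np.triu(Ainv)))  # True: nothing below the diagonal
```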

Investigate the $2 \times 2$ upper triangular case:
want to find $B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$ with $BA = I_2$ for $A = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix}$.

Want to choose $b_{11}, b_{12}, b_{21}, b_{22}$ so that:

(1)$b_{11}a_{11} = 1$

(2)$b_{11}a_{12} + b_{12}a_{22} = 0$

(3)$b_{21}a_{11} = 0$

(4)$b_{21}a_{12} + b_{22}a_{22} = 1$

(1) forces $a_{11} \neq 0$ and $b_{11} = a_{11}^{-1}$; then (3) gives $b_{21} = 0$, and (4) reduces to $b_{22}a_{22} = 1$.

If $BA = I_2$, it is necessary that $a_{11} \neq 0$ and $a_{22} \neq 0$.

Also we have a formula for $B$: $b_{11} = a_{11}^{-1}$, $b_{21} = 0$, $b_{22} = a_{22}^{-1}$ and, from (2), $b_{12} = -a_{11}^{-1}a_{12}a_{22}^{-1}$.

Can show $AB = I_2$ as well.

Theorem: Let $A \in \mathbb{C}^{n \times n}$ be triangular; then $A$ is invertible if and only if the diagonal entries of $A$ are all non-zero.

Moreover, if this condition is met, then $A$ upper triangular $\implies$ the inverse of $A$ is upper triangular. (lower $\implies$ lower)

$A$ invertible means there exists a matrix $B \in \mathbb{C}^{n \times n}$

such that $AB = BA = I_n$.

Let $A \in \mathbb{C}^{n \times n}$ be an upper triangular matrix.

Plan: Suppose we know that if a matrix is $(n-1) \times (n-1)$ upper triangular, then it is invertible if and only if its diagonal entries are nonzero.

Objective: Extend this to $n \times n$ upper triangular matrices.

Write $A = \begin{pmatrix} A_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix}$, where $A_{11}$ is $(n-1) \times (n-1)$ upper triangular, $a_{12}$ is a column vector and $a_{22}$ is a scalar.

Suppose first that $A$ is invertible, and write its inverse in the same block form, $B = \begin{pmatrix} B_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$. Then $AB = I_n$ gives:

(1)$A_{11}B_{11} + a_{12}b_{21} = I_{n-1}$

(2)$A_{11}b_{12} + a_{12}b_{22} = 0$

(3)$a_{22}b_{21} = 0$

(4)$a_{22}b_{22} = 1$

(4) $\implies a_{22} \neq 0$ and $b_{22} = a_{22}^{-1}$

(3) $\implies b_{21} = 0$

(1) $\implies A_{11}B_{11} = I_{n-1}$, and $BA = I_n$ similarly gives $B_{11}A_{11} = I_{n-1}$

$\implies A_{11}$ is invertible!

If the theorem is true for $(n-1) \times (n-1)$ matrices, then the diagonal entries of $A_{11}$ are nonzero; together with $a_{22} \neq 0$, all diagonal entries of $A$ are nonzero!

Showed: $A$ upper triangular and invertible $\implies$ all diagonal entries nonzero.

Now take $A$ upper triangular with non-zero entries on its diagonal.

$A_{11}$ is an $(n-1) \times (n-1)$ upper triangular matrix with non-zero entries on its diagonal.

The theorem is true for $(n-1) \times (n-1)$ matrices, so there exists a $B_{11}$ with $A_{11}B_{11} = B_{11}A_{11} = I_{n-1}$.

Let's define $B = \begin{pmatrix} B_{11} & -a_{22}^{-1}B_{11}a_{12} \\ 0 & a_{22}^{-1} \end{pmatrix}$

Can show – $AB = BA = I_n$, and $B$ is upper triangular.

$A^T$ – the transpose of $A$: the $ij$ entry of $A^T$ is the $ji$ entry of $A$.

Example: $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$, $A^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$

Hermitian transpose – $A^H = \overline{A^T}$ (take the transpose and the complex conjugate of each entry)

$A \in \mathbb{C}^{p \times q}$. Range of $A$: $\mathcal{R}_A = \{Ax : x \in \mathbb{C}^q\}$ - subspace of $\mathbb{C}^p$

Nullspace of $A$: $\mathcal{N}_A = \{x \in \mathbb{C}^q : Ax = 0\}$

$\mathcal{N}_A$ - subspace of $\mathbb{C}^q$

Need to check all the subspace properties, but this is routine…

Matrix Multiplication

There are matrices that are right invertible but not left invertible.

Similarly - there are matrices that are left invertible but not right invertible.

This does not happen if $A$ is square ($p = q$). A tiny example follows.
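A $1 \times 2$ example (my own) of a matrix that is right invertible but not left invertible: $AC = I_1$ has a solution, but $BA = I_2$ is impossible since $\operatorname{rank}(BA) \leq \operatorname{rank} A = 1 < 2$.

```python
import numpy as np

A = np.array([[1.0, 0.0]])        # 1 x 2
C = np.array([[1.0],
              [0.0]])             # 2 x 1, a right inverse
print(A @ C)                      # [[1.]] = I_1

# The pseudoinverse reproduces this right inverse; no left inverse exists.
print(np.allclose(A @ np.linalg.pinv(A), np.eye(1)))  # True
```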

Gaussian Elimination

Looking for solutions of $Ax = b$…

Gaussian elimination is a method of passing to a new system of equations that is easier to work with.

This new system must have the same solutions as the original one.

$U \in \mathbb{C}^{p \times q}$ is called upper echelon if the first nonzero entry in row $i$ sits to the left of the first nonzero entry in row $i + 1$.

Example: $U = \begin{pmatrix} 2 & 1 & 0 & 3 \\ 0 & 4 & 1 & 1 \\ 0 & 0 & 0 & 5 \end{pmatrix}$

The first nonzero entry in each row is called a pivot. In the example we have 3 pivots: $2$, $4$ and $5$.

Consider the equation $Ax = b$:

Two operations:

1)Interchange rows

2)Subtract a multiple of one row from another row

Each operation corresponds to multiplying on the left by an invertible matrix.

We can revert the process later: the operations are invertible, so the new system has exactly the same solutions.

Steps:

1)Construct the augmented matrix $(A \mid b)$

2)Interchange rows 1 and 2.

3)Subtract 2 times row 1 from row 3

4)Add 2 times row 2 to row 3

5)Solve the resulting upper echelon system $Ux = c$

6)Work from bottom up

Solve for the pivot variables in terms of the free variables.

A sketch of this procedure in code appears below.
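The same bookkeeping can be done via an LU factorization, which packages the row interchanges ($P$), the row subtractions ($L$) and the resulting echelon form ($U$); the $3 \times 3$ system below is a made-up stand-in for the lecture's example.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [2.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

# A = P L U: P permutes rows, L records the subtracted multiples,
# U is upper triangular (echelon).
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))  # True

# Solving A x = b then amounts to one forward and one back substitution.
x = lu_solve(lu_factor(A), b)
print(np.allclose(A @ x, b))      # True
```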

------End of lesson 2

Gaussian Elimination

Try to solve $Ax = b$

Two operations:

1)To permute (interchange) rows

2)Subtract a multiple of one row from another

(1)&(2) are implemented by multiplying on the left by the appropriate invertible matrix.

$A \mapsto PA$, where $P$ is an appropriate permutation matrix

$A \mapsto EA$, where $E$ is a lower triangular matrix (with ones on the diagonal)

Eventually you will have an invertible matrix $G$ (the product of all these matrices)

such that $GA = U$ is an upper echelon matrix.

Now we need to solve $Ux = Gb$.

Work from the bottom up; solve for the pivot variables in terms of the others.

Another example:

Try to solve $Ux = c$ where the upper echelon $U$ has zero rows.

Solve for the pivot variables in terms of the free variables and find that:

this gives a solution of $Ux = c$ for any choice of the free (non-pivot) variables, provided the entries of $c$ matching the zero rows of $U$ are zero.

Let's track the Gaussian elimination from a mathematical point of view:

If $A \in \mathbb{C}^{p \times q}$, then there exists an invertible matrix $G \in \mathbb{C}^{p \times p}$ such that $GA = U$ is upper echelon.

Theorem: Let $U \in \mathbb{C}^{p \times q}$ be upper echelon with $k$ pivots, $k \geq 1$.

Then:

(1)$k \leq \min\{p, q\}$

(2)$k = q \iff U$ is left invertible

(3)$k = p \iff U$ is right invertible

Proof of (2):

Suppose $k = q$.

Then every column contains a pivot and $U = \begin{pmatrix} U_1 \\ 0 \end{pmatrix}$, where $U_1 \in \mathbb{C}^{q \times q}$ is upper triangular with no zero diagonal entries.

Otherwise said, $U_1$ is invertible (if $p = q$ the zero block is absent).

The matrix $B = \begin{pmatrix} U_1^{-1} & 0 \end{pmatrix}$ satisfies $BU = I_q$, so $U$ is left invertible

by definition.

(2a) We have shown that $k = q \implies U$ is left invertible.

(2b) $U$ left invertible $\implies \mathcal{N}_U = \{0\}$:

Let $x \in \mathcal{N}_U$. By definition, it means that $Ux = 0$.

By assumption, $U$ is left invertible, so there is a $B$ with $BU = I_q$; hence $x = B(Ux) = 0$.

(2c) $\mathcal{N}_U = \{0\} \implies U$ has $q$ pivots:

$\mathcal{N}_U = \{0\}$ means the columns of $U$ are linearly independent, so every column must contain a pivot.

That forces $k = q$; the cycle (2a), (2b), (2c) proves (2).

(3a) $k = p \implies U$ is right invertible:

if $k = p$, every row contains a pivot, and the $p \times p$ matrix formed by the pivot columns is upper triangular with nonzero diagonal entries, so it is invertible.

You can find a permutation matrix $\Pi$ such that $U\Pi = \begin{pmatrix} U_1 & U_2 \end{pmatrix}$, where $U_1$ here is upper triangular with nonzero diagonal elements, and therefore $U_1$ is invertible.

Setting $W = \Pi\begin{pmatrix} U_1^{-1} \\ 0 \end{pmatrix}$, we get $UW = \begin{pmatrix} U_1 & U_2 \end{pmatrix}\begin{pmatrix} U_1^{-1} \\ 0 \end{pmatrix} = I_p$. Therefore $U$ is right invertible.

And it doesn't matter what we choose in place of the zero block.

(3b) $U$ right invertible $\implies \mathcal{R}_U = \mathbb{C}^p$. Here $x$ is the input, $Ux$ is the output.

The range is the set of outputs.

Claim: Given any $b \in \mathbb{C}^p$, we can find an $x$ with $Ux = b$.

Let $W$ be a right inverse to $U$, let $x = Wb$. Then $Ux = UWb = b$.

(3c) $\mathcal{R}_U = \mathbb{C}^p \implies k = p$:

If $k < p$, then $U$ must look something like $\begin{pmatrix} U_1 \\ 0 \end{pmatrix}$:

We have $p - k$ zero rows.

So all of our outputs $Ux$ would always have zeroes at the end! Therefore, we don't cover all of $\mathbb{C}^p$.

Theorem: If $A \in \mathbb{C}^{p \times q}$ is left invertible, then $q \leq p$.

Proof: Clear that the number of pivots satisfies $k \leq \min\{p, q\}$.

If we apply Gaussian elimination, we can find an invertible matrix $G$ such that $GA = U$, which is upper echelon.

$A$ is left invertible $\iff U$ is left invertible $\iff U$ has $q$ pivots; hence $q = k \leq p$.

Theorem: Let $V$ be a vector space over $\mathbb{F}$ and let $\{u_1, \dots, u_k\}$ be a basis for $V$.

And let $\{v_1, \dots, v_\ell\}$ also be a basis for $V$. Then $k = \ell$.

Define $A \in \mathbb{F}^{\ell \times k}$ by the expansions $u_j = \sum_{i=1}^{\ell} a_{ij}v_i$

Define $B \in \mathbb{F}^{k \times \ell}$ by the expansions $v_j = \sum_{i=1}^{k} b_{ij}u_i$

Substituting one expansion into the other and using linear independence gives $BA = I_k$; so $A$ is left invertible, and $k \leq \ell$.

But I can do this again symmetrically, with the $u$'s replaced by the $v$'s; I would get that $\ell \leq k$.

So $k = \ell$.

------End of lesson 3

Theorem:

Let $V$ be a vector space over $\mathbb{F}$. Let $\{u_1, \dots, u_k\}$ be a basis for $V$ and $\{v_1, \dots, v_\ell\}$ also a basis for $V$; then $k = \ell$.

That number is called the dimension of $V$, denoted $\dim V$.

The vector space $\{0\}$ has no basis; define its dimension to be 0.

A transformation (mapping, operator, …) $T$ from a vector space $U$ over $\mathbb{F}$ into a vector space $V$ over $\mathbb{F}$ is a rule (algorithm, formula) that assigns a vector $Tu$ in $V$ to each $u \in U$.

Example 1: Let $U = V = \mathbb{R}^2$ and let $T$ rotate every vector by $90°$ counterclockwise.

Example 2: Let $U = \mathbb{C}^q$, $V = \mathbb{C}^p$ and $Tx = Ax$ for a fixed $A \in \mathbb{C}^{p \times q}$

(by matrix multiplication)

Definition: A transformation $T$ from $U$ into $V$ is said to be linear if:

1)$T(u + v) = Tu + Tv$ for every $u, v \in U$

2)$T(\alpha u) = \alpha Tu$ for every $\alpha \in \mathbb{F}$ and $u \in U$

Is $Tx = Ax + b$, with a fixed $b \neq 0$, linear?

No! (A linear transformation must satisfy $T0 = 0$, but here $T0 = b \neq 0$.)

Every linear transformation between finite dimensional vector spaces can be expressed in terms of matrix multiplication, once bases are chosen – see the sketch below.
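A sketch of that fact with an assumed example map (not one from the notes): the representing matrix has $T(e_j)$ as its $j$-th column.

```python
import numpy as np

def T(x):
    # a linear map R^2 -> R^2: rotation by 90 degrees counterclockwise
    return np.array([-x[1], x[0]])

e1, e2 = np.eye(2)                    # standard basis vectors
A = np.column_stack([T(e1), T(e2)])   # columns are the images of the basis
print(A)                              # [[0, -1], [1, 0]]

x = np.array([2.0, 5.0])
print(np.allclose(A @ x, T(x)))       # True: T is multiplication by A
```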

If $T$ is a linear transformation from a vector space $U$ over $\mathbb{F}$ into the vector space $V$ over $\mathbb{F}$, then define:

$\mathcal{N}_T = \{u \in U : Tu = 0\}$ – null space of $T$.

$\mathcal{R}_T = \{Tu : u \in U\}$ – range space of $T$.

$\mathcal{N}_T$ is a subspace of $U$

$\mathcal{R}_T$ is also a subspace, of $V$

Conservation of dimension

Theorem: Let $T$ be a linear transformation from a finite dimensional vector space $U$ over $\mathbb{F}$ into a vector space $V$ over $\mathbb{F}$.

Then $\dim U = \dim \mathcal{N}_T + \dim \mathcal{R}_T$

Proof:

Suppose $\dim U = n$ and $\dim \mathcal{N}_T = k$

Let $\{u_1, \dots, u_k\}$ be a basis for $\mathcal{N}_T$.

Claim: $\mathcal{R}_T$ is also a finite dimensional vector space. Let $\{v_1, \dots, v_\ell\}$ be a basis for $\mathcal{R}_T$.

Since $v_j \in \mathcal{R}_T$, we can find $w_j \in U$ with $Tw_j = v_j$.

Claim: $u_1, \dots, u_k, w_1, \dots, w_\ell$ are linearly independent.

To check: Suppose there are coefficients: $\alpha_1u_1 + \dots + \alpha_ku_k + \beta_1w_1 + \dots + \beta_\ell w_\ell = 0$

Let's activate the transformation on both sides: $\beta_1v_1 + \dots + \beta_\ell v_\ell = 0$

Why did the $\alpha$ terms drop out?

Since $u_1, \dots, u_k \in \mathcal{N}_T$, their transformation is zero!

$\{v_1, \dots, v_\ell\}$ is a basis for $\mathcal{R}_T$ $\implies v_1, \dots, v_\ell$ are linearly independent $\implies \beta_1 = \dots = \beta_\ell = 0$.

$\{u_1, \dots, u_k\}$ is a basis (for $\mathcal{N}_T$) $\implies u_1, \dots, u_k$ are linearly independent $\implies \alpha_1 = \dots = \alpha_k = 0$.

(Linearly independent items must never be zero; otherwise you could multiply the zero one by some value other than zero and get a nontrivial combination equal to zero.)

Claim next: $U = \operatorname{span}\{u_1, \dots, u_k, w_1, \dots, w_\ell\}$, so that $n = k + \ell$.

Let $u \in U$ and consider $Tu$.

$\{v_1, \dots, v_\ell\}$ is a basis for $\mathcal{R}_T$, so $Tu = \gamma_1v_1 + \dots + \gamma_\ell v_\ell = T(\gamma_1w_1 + \dots + \gamma_\ell w_\ell)$; hence $u - (\gamma_1w_1 + \dots + \gamma_\ell w_\ell) \in \mathcal{N}_T$, and $u \in \operatorname{span}\{u_1, \dots, u_k, w_1, \dots, w_\ell\}$.

If $\mathcal{R}_T = \{0\}$, it forces $T$ to be the zero transformation, and then $\dim \mathcal{N}_T = n$, so the formula still holds. (A numeric sanity check follows.)
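A numeric sanity check of the theorem for $Tx = Ax$, with a made-up $3 \times 4$ matrix (scipy's null_space gives a basis of $\mathcal{N}_A$):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # row 3 = row 1 + row 2, so rank 2

rank = np.linalg.matrix_rank(A)        # dim of the range
nullity = null_space(A).shape[1]       # dim of the null space
print(rank, nullity, rank + nullity)   # 2 2 4 = number of columns of A
```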

Last time we showed that:

If $U \in \mathbb{C}^{p \times q}$ is an upper echelon matrix with $k$ pivots:

(1)$k \leq \min\{p, q\}$

(2)$k = q \iff U$ is left invertible

(3)$k = p \iff U$ is right invertible

Objective: Develop an analogous set of facts for $A \in \mathbb{C}^{p \times q}$, not necessarily upper echelon.

(1)If $A \in \mathbb{C}^{p \times q}$, then there exists an invertible matrix $G$ such that $GA = U$ is upper echelon.

If $G_1, G_2$ are invertible matrices, then $G_1G_2$ is also invertible: $(G_1G_2)^{-1} = G_2^{-1}G_1^{-1}$.

(2)Let $A \in \mathbb{C}^{p \times q}$ and let $G \in \mathbb{C}^{p \times p}$ be invertible. Then: $\mathcal{N}_{GA} = \mathcal{N}_A$

Suppose there is an $x$ with $Ax = 0$; then $GAx = G0 = 0$.

But this also means $x \in \mathcal{N}_{GA}$.

i.e. $\mathcal{N}_A \subseteq \mathcal{N}_{GA}$.

Suppose $GAx = 0$, i.e. $x \in \mathcal{N}_{GA}$; then $Ax = G^{-1}(GAx) = 0$, so $\mathcal{N}_{GA} \subseteq \mathcal{N}_A$.

Can we also show $\mathcal{R}_{GA} = \mathcal{R}_A$?

No. Or at least not always…

Let $A = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, so $\mathcal{R}_A = \operatorname{span}\left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right\}$.

Let $G = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.

Let $B = GA = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. $G$ is invertible, yet $\mathcal{R}_B \neq \mathcal{R}_A$.

What we can show is that the dimensions of these spaces are equal.

Let $\{v_1, \dots, v_r\}$ be a basis for $\mathcal{R}_A$

Easy to see: $Gv_1, \dots, Gv_r \in \mathcal{R}_{GA}$.

Claim: they are linearly independent.

Proof: $\alpha_1Gv_1 + \dots + \alpha_rGv_r = 0 \implies G(\alpha_1v_1 + \dots + \alpha_rv_r) = 0 \implies \alpha_1v_1 + \dots + \alpha_rv_r = 0$

But $\{v_1, \dots, v_r\}$ is a basis (therefore independent), so $\alpha_1 = \dots = \alpha_r = 0$.

So we've shown that: $\dim \mathcal{R}_{GA} \geq \dim \mathcal{R}_A$

Now take a basis $\{w_1, \dots, w_s\}$ of $\mathcal{R}_{GA}$

Check $G^{-1}w_1, \dots, G^{-1}w_s \in \mathcal{R}_A$ are linearly independent. Same as before…

So now we have two inequalities, resulting in $\dim \mathcal{R}_{GA} = \dim \mathcal{R}_A$.

Definition: If $A \in \mathbb{C}^{p \times q}$,

say $\operatorname{rank} A = \dim \mathcal{R}_A$

$=$ the number of pivots in the upper echelon form $U = GA$.

Theorem: Let $A \in \mathbb{C}^{p \times q}$ with $\operatorname{rank} A = k$; then

(1)$k \leq \min\{p, q\}$

(2)$k = q \iff A$ is left invertible

(3)$k = p \iff A$ is right invertible

Proof:

If $A = 0$, (1) is clearly true.

If $A \neq 0$: there is an invertible $G$ such that $GA = U$ is upper echelon.

This implies that $\operatorname{rank} A = \dim \mathcal{R}_A = \dim \mathcal{R}_U =$ the number of pivots in $U$, which is $\leq \min\{p, q\}$.

Suppose $A$ is left invertible: there is a $B$ with $BA = I_q$, so $(BG^{-1})(GA) = I_q$.

Same as to say $U = GA$ is left invertible, which holds iff $U$ has $q$ pivots, i.e. iff $\operatorname{rank} A = q$.

(3) is proved similarly.

------End of lesson 4

$T$ a linear transformation from a finite dimensional vector space $U$ over $\mathbb{F}$ into a vector space $V$ over $\mathbb{F}$, then $\dim U = \dim \mathcal{N}_T + \dim \mathcal{R}_T$

If $Tx = Ax$ with $A \in \mathbb{C}^{p \times q}$: $\mathcal{N}_A$ is a subspace of $\mathbb{C}^q$, $\mathcal{R}_A$ (also a subspace, of $\mathbb{C}^p$), and $q = \dim \mathcal{N}_A + \operatorname{rank} A$

Theorem: If $A \in \mathbb{C}^{p \times q}$, then:

(1)rank $A \leq \min\{p, q\}$

(2)rank $A = q \iff A$ is left invertible

(3)rank $A = p \iff A$ is right invertible ($\mathcal{R}_A$ is "everything", i.e. all of $\mathbb{C}^p$)

Exploited fact: if $U$ is upper echelon with $k$ pivots, then:

(1)$k \leq \min\{p, q\}$

(2)$k = q \iff U$ is left invertible

(3)$k = p \iff U$ is right invertible

Gaussian elimination corresponds to finding an invertible $G$ such that $GA = U$

A product of invertible matrices is invertible, and $G$ invertible $\implies G^{-1}$ is invertible.

$\operatorname{rank} A = \dim \mathcal{R}_A = \dim \mathcal{R}_U = \operatorname{rank} U$

If $U$ is upper echelon with $k$ pivots, $\operatorname{rank} U = k$.

Implications

1)System of equations: $Ax = b$

Looking for vectors $x$ (if any) with $Ax = b$.

(a)$A$ left invertible guarantees at most one solution.
To check: Suppose $Ax_1 = b$ and $Ax_2 = b$.

If $B$ is a left inverse of $A$: $x_1 = B(Ax_1) = Bb = B(Ax_2) = x_2$.

(b)$A$ right invertible guarantees at least one solution.
Let $C$ be a right inverse of $A$ and choose $x = Cb$.
Then $Ax = ACb = b$.

2)If $A \in \mathbb{C}^{p \times q}$ and $A$ is both right invertible and left invertible, then $p = q$.
Earlier we showed that if $BA = I_q$ and $AC = I_p$,
then $B = C$.
$\operatorname{rank} A \leq \min\{p, q\}$.
left invertible $\implies \operatorname{rank} A = q$
right invertible $\implies \operatorname{rank} A = p$
So $p = q$.

3)$G$ invertible, $GA = U$ is upper echelon.
And $\operatorname{rank} A$ equals the number of pivots in $U$:
$\operatorname{rank} A = \operatorname{rank} U$.
Claim: The pivot columns of $U$ are linearly independent and form a basis for $\mathcal{R}_U$
The corresponding columns in $A$ are linearly independent and form a basis for $\mathcal{R}_A$

The pivot columns $u_{j_1}, \dots, u_{j_k}$ are linearly independent, and span $\mathcal{R}_U$
If $a_{j_1}, \dots, a_{j_k}$ are the corresponding columns of $A$ (so $u_{j_i} = Ga_{j_i}$),
claim: $a_{j_1}, \dots, a_{j_k}$ are linearly independent and span $\mathcal{R}_A$. Suppose we can find coefficients such that $\alpha_1a_{j_1} + \dots + \alpha_ka_{j_k} = 0$; applying $G$ gives $\alpha_1u_{j_1} + \dots + \alpha_ku_{j_k} = 0$,
so $\alpha_1 = \dots = \alpha_k = 0$, since $G$ is invertible and $u_{j_1}, \dots, u_{j_k}$ are linearly independent.

4)Related application
Given vectors $v_1, \dots, v_m \in \mathbb{C}^p$:
Find a basis for $\operatorname{span}\{v_1, \dots, v_m\}$
Define $A = \begin{pmatrix} v_1 & \cdots & v_m \end{pmatrix}$
Bring $A$ to upper echelon form $U$ via Gaussian elimination.
The number of pivots in $U$ equals $\dim \operatorname{span}\{v_1, \dots, v_m\}$,
and the corresponding columns of $A$ will be a basis (see the sketch below).
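A sketch of this recipe with sympy (the vectors are my own examples); rref reports the pivot columns directly:

```python
from sympy import Matrix

v1, v2, v3 = [1, 0, 1], [2, 0, 2], [0, 1, 1]   # v2 = 2*v1
A = Matrix.hstack(Matrix(v1), Matrix(v2), Matrix(v3))

R, pivots = A.rref()
print(pivots)                      # (0, 2): columns 0 and 2 are pivot columns
basis = [A.col(j) for j in pivots]
print(basis)                       # [v1, v3] is a basis for span{v1, v2, v3}
```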

5)Calculating inverses
Let $A$ be $n \times n$ and it is known to be invertible.
How to calculate its inverse?

Finding $B$ with $AB = I_n$ means solving $Ab_j = e_j$ for each column $b_j$ of $B$.

Gauss-Jordan (not the iterative Gauss-Seidel method):

Do all these calculations in one shot, by row reducing the augmented matrix $(A \mid I_n)$.

Suppose the left block has been brought to upper echelon form $U$; since $A$ is invertible, $U$ is upper triangular with nonzero diagonal, $D = \operatorname{diag}(u_{11}, \dots, u_{nn})$.

(1)Manipulate by permutations and subtracting multiples of rows from lower rows (this produces the upper echelon form).

(2)Multiplying through by the diagonal matrix $D^{-1}$ makes all the pivots equal to 1.

(3)Subtract multiples of rows from higher rows to clear the entries above the pivots; the left block becomes $I_n$, and the right block is the inverse of $A$. (A code sketch follows.)
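A compact sketch of the same recipe in sympy, row reducing $(A \mid I)$ for a made-up $3 \times 3$ matrix:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 4]])

aug = A.row_join(eye(3))   # build the augmented matrix (A | I)
R, _ = aug.rref()          # row reduce all the way to (I | A^{-1})
Ainv = R[:, 3:]            # the right block is the inverse
print(Ainv == A.inv())     # True
```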

$A \in \mathbb{C}^{p \times q}$, $A^T \in \mathbb{C}^{q \times p}$:

the $ij$ entry of $A^T$ is the $ji$ entry of $A$

Claim: $\operatorname{rank} A^T = \operatorname{rank} A$

$GA = U$ with $U$ upper echelon and $G$ invertible $\implies A^TG^T = U^T$, with $G^T$ invertible

$\operatorname{rank} A^T = \operatorname{rank}(A^TG^T) = \operatorname{rank} U^T$

$\operatorname{rank} U^T = \operatorname{rank} U$ (for an upper echelon $U$, the pivot rows and the pivot columns are each linearly independent, so both ranks equal the number of pivots)

number of pivots in $U$ = $\operatorname{rank} U$ = $\operatorname{rank} A$.

If $T$ is a linear transformation from a vector space $V$ over $\mathbb{F}$ into itself ($T : V \to V$):

A subspace $M$ of $V$ is said to be invariant under $T$ if $Tu \in M$ for every $u \in M$.

The simplest non-zero invariant subspaces (if they exist) would be the one dimensional ones.

Suppose $M = \operatorname{span}\{u\}$, $u \neq 0$, is one dimensional and is invariant under $T$; then, since $Tu$ stays in $M$, $Tu = \lambda u$ for some scalar $\lambda$.

In this case, $\lambda$ is said to be an eigenvalue of $T$ and $u$ is said to be an eigenvector.

Important – $u \neq 0$!!!

In other words, a vector $u$ is called an eigenvector of $T$ if:

(1)$u \neq 0$

(2)$Tu = \lambda u$ for some $\lambda \in \mathbb{F}$

That $\lambda$ is called an eigenvalue of $T$.

If $Tu = \lambda u$, then $T(\alpha u) = \lambda(\alpha u)$ for every $\alpha \neq 0$.

So there's a "flexibility" of stretching.

There isn't "the" eigenvector. (A short numeric sketch follows.)
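Numerically, eigenvalues and eigenvectors are computed in one shot; the symmetric $2 \times 2$ matrix here is a made-up example. Each column of V is an eigenvector for the matching entry of w.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eig(A)
print(w)                               # eigenvalues, here 3 and 1
for lam, u in zip(w, V.T):
    print(np.allclose(A @ u, lam * u))  # True: A u = lambda u
```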

Theorem: Let $V$ be a vector space over $\mathbb{C}$, $T$ a linear transformation from $V$ into $V$, and let $M$ be a non-zero finite dimensional subspace of $V$ that is invariant under $T$. Then, there exists a vector $u \in M$, $u \neq 0$, and a $\lambda \in \mathbb{C}$ with $Tu = \lambda u$.

Proof:

Take any $u_0 \in M$, $u_0 \neq 0$. Suppose $\dim M = n$.

Consider $u_0, Tu_0, T^2u_0, \dots, T^nu_0$. That's a list of $n + 1$ vectors (all in $M$, by invariance) in an $n$ dimensional space. Therefore, they are linearly dependent.

Can find $c_0, \dots, c_n$, not all of which are zero, such that $c_0u_0 + c_1Tu_0 + \dots + c_nT^nu_0 = 0$

Suppose that $k$ is the largest index for which $c_k$ is non-zero.

So this expression reduces to $c_0u_0 + c_1Tu_0 + \dots + c_kT^ku_0 = 0$

Consider the following polynomial: $p(x) = c_0 + c_1x + \dots + c_kx^k$. Over $\mathbb{C}$ it factors, $p(x) = c_k(x - \mu_1)\cdots(x - \mu_k)$, and correspondingly $c_k(T - \mu_1I)\cdots(T - \mu_kI)u_0 = 0$.

Possibilities. Either:

(1)$(T - \mu_kI)u_0 = 0$

(2)$(T - \mu_kI)u_0 \neq 0$ but $(T - \mu_{k-1}I)(T - \mu_kI)u_0 = 0$

(3)$(T - \mu_{k-1}I)(T - \mu_kI)u_0 \neq 0$ but $(T - \mu_{k-2}I)(T - \mu_{k-1}I)(T - \mu_kI)u_0 = 0$

(4)and so on, moving left along the product,

(5)until the product vanishes – and it does, by the choice of $p$.

In every case, the last non-zero vector $u$ before the product vanishes satisfies $(T - \mu_jI)u = 0$ for some $j$, i.e. it is an eigenvector.

----- End of lesson 5

Previously – on linear algebra

Theorem: Let $V$ be a vector space over $\mathbb{C}$, $T$ a linear transformation from $V$ into $V$, and let $M$ be a finite dimensional non-zero subspace of $V$ that is invariant under $T$ (i.e. $u \in M$, then $Tu \in M$). Then there exists a non-zero vector $u \in M$ and a $\lambda \in \mathbb{C}$ with $Tu = \lambda u$.

We could have just worked with $V = M$ itself.

Note:

Could have rephrased the conclusion as:

There is a finite dimensional subspace of $V$ that is invariant under $T$ $\implies$ there is a one dimensional subspace of $V$ that is invariant under $T$.

Implication

Let $A \in \mathbb{C}^{n \times n}$. Then there exists a point $\lambda \in \mathbb{C}$ and a non-zero $u \in \mathbb{C}^n$ with $Au = \lambda u$.

Because the transformation from $\mathbb{C}^n$ into $\mathbb{C}^n$ that is defined by the formula $Tx = Ax$ is linear, and $M = \mathbb{C}^n$ is finite dimensional and invariant under it.

Proof: Take any vector $u_0 \neq 0$.

Consider the set of vectors $u_0, Au_0, \dots, A^nu_0$. These are $n + 1$ vectors in $\mathbb{C}^n$ - an $n$ dimensional space.

Since they cannot be independent, there are coefficients $c_0, \dots, c_n$,

such that $c_0u_0 + c_1Au_0 + \dots + c_nA^nu_0 = 0$ and not all coefficients are zero.

Same as to say $p(A)u_0 = 0$ for the polynomial $p(x) = c_0 + c_1x + \dots + c_nx^n$.

Let $k$ be the largest index with $c_k \neq 0$. Claim (trivial): $k \geq 1$, since $c_0u_0 = 0$ with $u_0 \neq 0$ would force $c_0 = 0$.

Claim that if we look at the ordinary polynomial, it factors over $\mathbb{C}$:

Suppose $p(x) = c_k(x - \mu_1)\cdots(x - \mu_k)$, so that $c_k(A - \mu_1I)\cdots(A - \mu_kI)u_0 = 0$.

Either:

(1)$(A - \mu_kI)u_0 = 0$

(2)$(A - \mu_kI)u_0 \neq 0$ and $(A - \mu_{k-1}I)(A - \mu_kI)u_0 = 0$

(3)and so on, moving left along the product until it first becomes zero.

We are looking for the first factor that kills the non-zero vector to its right; that produces an eigenvector.

Questions: Suppose $A \in \mathbb{R}^{n \times n}$.

Can one guarantee that $A$ has at least one real eigenvalue?

No!

Take for instance the rotation $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$.

Looking for $Au = \lambda u$ with $u = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$: this means $-u_2 = \lambda u_1$ and $u_1 = \lambda u_2$, hence $(\lambda^2 + 1)u_2 = 0$.

If $u_2 = 0$, then also $u_1 = 0$: The entire vector is zero! Not acceptable…

So $\lambda^2 + 1 = 0$, i.e. $\lambda = \pm i \notin \mathbb{R}$.

The result for $\mathbb{C}$ says that there is always a one dimensional subspace of $\mathbb{C}^n$ that is invariant under $A$.

For $A \in \mathbb{R}^{n \times n}$ - there always exists at least one subspace of $\mathbb{R}^n$ of dimension at most two that is invariant under $A$.

Implication:

Let $A \in \mathbb{C}^{n \times n}$ - then there exists a point $\lambda \in \mathbb{C}$ and a non-zero $u$ such that $Au = \lambda u$.

Suppose $Au_j = \lambda_ju_j$, $j = 1, \dots, k$, for some distinct points $\lambda_1, \dots, \lambda_k$ in $\mathbb{C}$.

That is to say, there are non-zero vectors $u_1, \dots, u_k$ of $\mathbb{C}^n$ with $Au_j = \lambda_ju_j$.

Claim: $u_1, \dots, u_k$ are linearly independent.

Let's check for $k = 3$:

Suppose $\alpha_1u_1 + \alpha_2u_2 + \alpha_3u_3 = 0$

Apply $(A - \lambda_3I)$ to both sides. Let's break it up: $(A - \lambda_3I)u_j = (\lambda_j - \lambda_3)u_j$

Continue with our calculation: $\alpha_1(\lambda_1 - \lambda_3)u_1 + \alpha_2(\lambda_2 - \lambda_3)u_2 = 0$

We've reduced the problem!

(we know the lambdas are different, so the factors $\lambda_1 - \lambda_3$ and $\lambda_2 - \lambda_3$ are not zero)

We can do it again, applying $(A - \lambda_2I)$, to get

$\alpha_1(\lambda_1 - \lambda_3)(\lambda_1 - \lambda_2)u_1 = 0$

Now $\alpha_1$ must be zero, since the other scalars and vectors in the expression are non-zero.

We can in fact repeat the process to knock out $\alpha_2$ and also $\alpha_3$.

This can also be generalized for $k$ other than 3.

If $A \in \mathbb{R}^{n \times n}$,

then there is at least one (at most two dimensional) subspace of $\mathbb{R}^n$ that is invariant under $A$.

Translate this to: There is a non-zero $u \in \mathbb{R}^n$ and a polynomial $x^2 + bx + c$

with real coefficients such that $(A^2 + bA + cI)u = 0$; then $\operatorname{span}\{u, Au\}$ is invariant under $A$.

Example:

If $A \in \mathbb{C}^{n \times n}$ with $n$ distinct eigenvalues $\lambda_1, \dots, \lambda_n$, then the corresponding eigenvectors $u_1, \dots, u_n$ form a basis for $\mathbb{C}^n$.

Because eigenvectors corresponding to distinct eigenvalues are linearly independent. And they sit inside an $n$ dimensional space $\mathbb{C}^n$.

The matrix of the eigenvectors: $U = \begin{pmatrix} u_1 & \cdots & u_k \end{pmatrix}$

is an $n \times k$ matrix with rank $k$.

If $k = n$, then

$U$ is invertible. (Rank $n$)

$AU = U\Lambda$, where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. So we can rewrite this as: $A = U\Lambda U^{-1}$

If you need to raise some matrix to the power of 100:

Suppose: $A = U\Lambda U^{-1}$

So $A^{100} = (U\Lambda U^{-1})(U\Lambda U^{-1})\cdots(U\Lambda U^{-1}) = U\Lambda^{100}U^{-1}$, and $\Lambda^{100} = \operatorname{diag}(\lambda_1^{100}, \dots, \lambda_n^{100})$ is trivial to compute (see the sketch below).
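A sketch of the $A^{100}$ trick on a small made-up diagonalizable matrix:

```python
import numpy as np

A = np.array([[0.5, 0.25],
              [0.5, 0.75]])           # distinct eigenvalues 1 and 0.25
w, U = np.linalg.eig(A)

D100 = np.diag(w ** 100)               # raise the diagonal entrywise
A100 = U @ D100 @ np.linalg.inv(U)     # A^100 = U D^100 U^{-1}
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))  # True
```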

Sums of Subspaces

Let $M_1, M_2$ be subspaces of a vector space $V$ over $\mathbb{F}$, and let $M_1 + M_2 = \{u_1 + u_2 : u_1 \in M_1, u_2 \in M_2\}$.

Claim: $M_1 + M_2$ is a vector space (a subspace of $V$).

Lemma: $\dim(M_1 + M_2) = \dim M_1 + \dim M_2 - \dim(M_1 \cap M_2)$

Let $M_1, M_2$ be subspaces of a finite dimensional vector space $V$.

Another claim: $M_1 \cap M_2$ is a vector space.

Proof: Suppose $\{w_1, \dots, w_r\}$ is a basis for $M_1 \cap M_2$

(if really $M_1 \cap M_2 \neq \{0\}$; otherwise take the empty list, $r = 0$)

Extend it to a basis $\{w_1, \dots, w_r, x_1, \dots, x_s\}$ for $M_1$

$\dim M_1 = r + s$. Suppose also that

$\{w_1, \dots, w_r, y_1, \dots, y_t\}$ is a basis for $M_2$

Claim: $\{w_1, \dots, w_r, x_1, \dots, x_s, y_1, \dots, y_t\}$ is a basis for $M_1 + M_2$

Need to show:

(a)every $v \in M_1 + M_2$ is a linear combination of these

(b)Show $w_1, \dots, w_r, x_1, \dots, x_s, y_1, \dots, y_t$ are linearly independent.

Claim: (b) is correct.

Denote $z = c_1y_1 + \dots + c_ty_t$.

Suppose: $a_1w_1 + \dots + a_rw_r + b_1x_1 + \dots + b_sx_s + z = 0$

Then $z \in M_2$ by definition, while $z = -(a_1w_1 + \dots + b_sx_s) \in M_1$. We don't know who $z$ is, but it is definitely in $M_1 \cap M_2$!

It has to be expressible this way: $z = d_1w_1 + \dots + d_rw_r$. Comparing with $z = c_1y_1 + \dots + c_ty_t$ and using the independence of the basis of $M_2$ forces all $c_j = 0$ (and all $d_i = 0$); the remaining relation $a_1w_1 + \dots + a_rw_r + b_1x_1 + \dots + b_sx_s = 0$ then gives all $a_i = 0$, $b_j = 0$, since these vectors are a basis of $M_1$. Counting basis vectors now proves the lemma: $\dim(M_1 + M_2) = r + s + t = (r + s) + (r + t) - r$.

----- End of lesson 6

$M_1, M_2$ subspaces of a vector space $V$ over $\mathbb{F}$: $M_1 + M_2$ and $M_1 \cap M_2$.

Both of these are vector spaces.

We established that the dimensions satisfy $\dim(M_1 + M_2) = \dim M_1 + \dim M_2 - \dim(M_1 \cap M_2)$

The sum is said to be direct (called a direct sum) if $M_1 \cap M_2 = \{0\}$

(consequence: the sum is a direct sum $\iff \dim(M_1 + M_2) = \dim M_1 + \dim M_2$)

In a direct sum, the decomposition is unique.

If $v = u_1 + u_2$ such that $u_1 \in M_1, u_2 \in M_2$, and also $v = w_1 + w_2$ such that $w_1 \in M_1, w_2 \in M_2$, then

$u_1 = w_1$ and $u_2 = w_2$.

Why?

So $u_1 - w_1 = w_2 - u_2$: the left side is in $M_1$, the right side is in $M_2$, so both are in $M_1 \cap M_2$. From our assumption, $u_1 - w_1 = 0$.

Similarly, $w_2 - u_2 = 0$.

This means $u_1 = w_1$ and $u_2 = w_2$.

$M_1, \dots, M_k$ subspaces of a vector space $V$, then $M_1 + \dots + M_k = \{u_1 + \dots + u_k : u_j \in M_j\}$

The sum is said to be direct if $\dim(M_1 + \dots + M_k) = \dim M_1 + \dots + \dim M_k$

It's very tempting to jump to the conclusion (for example) that

$M_i \cap M_j = \{0\}$ for all $i \neq j$ $\implies$ the sum is a direct sum

However, this is not enough.

Consider three distinct lines through the origin in $\mathbb{R}^2$, say $M_1 = \operatorname{span}\{e_1\}$, $M_2 = \operatorname{span}\{e_2\}$, $M_3 = \operatorname{span}\{e_1 + e_2\}$: the pairwise intersections are all $\{0\}$.

But $\dim(M_1 + M_2 + M_3) = 2$, while $\dim M_1 + \dim M_2 + \dim M_3 = 3$.

So forget the false conclusions, and just remember $\dim(M_1 + \dots + M_k) = \sum_j \dim M_j$.

Generally we say the sum is direct $\iff$ every collection of non-zero vectors, at most one from each subspace, is linearly independent.

Suppose $u_j \in M_j$, $u_j \neq 0$. If the sum is direct, then all such non-zero $u_1, \dots, u_k$ are linearly independent.

Every $A \in \mathbb{C}^{n \times n}$ has at least one eigenvalue, i.e. there is a point $\lambda \in \mathbb{C}$ and a vector $u \neq 0$ such that $Au = \lambda u$ (same as saying $(A - \lambda I)u = 0$).

That is equivalent to saying $\mathcal{N}_{A - \lambda I} \neq \{0\}$.

Suppose $A$ has distinct eigenvalues $\lambda_1, \dots, \lambda_k$. We showed that if $u_j \in \mathcal{N}_{A - \lambda_jI}$, $u_j \neq 0$, then

$u_1, \dots, u_k$ are linearly independent.

Same as to say $\mathcal{N}_{A - \lambda_1I} + \dots + \mathcal{N}_{A - \lambda_kI}$ is a direct sum.

Same as to say $\dim(\mathcal{N}_{A - \lambda_1I} + \dots + \mathcal{N}_{A - \lambda_kI}) = \sum_{j=1}^{k} \dim \mathcal{N}_{A - \lambda_jI}$

Criterion: A matrix $A \in \mathbb{C}^{n \times n}$ is diagonalizable $\iff$ we can find $n$ linearly independent eigenvectors.

Same as to say $\sum_{j=1}^{k} \dim \mathcal{N}_{A - \lambda_jI} = n$.

$\lambda_1, \dots, \lambda_k$ the distinct eigenvalues.

Additional fact: the chosen eigenvectors $u_1, \dots, u_n$ are linearly independent. Therefore, $U = \begin{pmatrix} u_1 & \cdots & u_n \end{pmatrix}$ is invertible. $AU = UD$.

So $A = UDU^{-1}$

$D$ is a diagonal $n \times n$ matrix (the eigenvalues on the diagonal).

Example: $A$ is $n \times n$ with 2 distinct eigenvalues $\lambda_1, \lambda_2$.

The crucial issue is the dimension of the null spaces: $\mathcal{N}_{A - \lambda_1I}$ and $\mathcal{N}_{A - \lambda_2I}$

Take $\{v_1, \dots, v_r\}$ a basis for $\mathcal{N}_{A - \lambda_1I}$

and $\{w_1, \dots, w_s\}$ a basis for $\mathcal{N}_{A - \lambda_2I}$

(1)Find the eigenvalues of $A$ (the distinct eigenvalues $\lambda_1, \dots, \lambda_k$)

(2)Find a basis for each null space $\mathcal{N}_{A - \lambda_jI}$

(3)Stack the resulting vectors as the columns of $U$
(columns of $U$ are taken from the bases and are always linearly independent)

Question: Do you always have enough columns? No. $U$ could be non-square or non-invertible, and then $A$ is not diagonalizable. (A sketch of this check in code follows.)
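A sketch of the whole recipe in code, assuming numerically well-separated eigenvalues (the helper below is mine, not from the course): count the geometric multiplicities and compare with $n$.

```python
import numpy as np
from scipy.linalg import null_space

def is_diagonalizable(A):
    n = A.shape[0]
    # crude de-duplication of eigenvalues; fine for well-separated spectra
    eigvals = np.unique(np.round(np.linalg.eigvals(A), 9))
    # sum of the dimensions of the null spaces N_{A - lambda I}
    total = sum(null_space(A - lam * np.eye(n)).shape[1] for lam in eigvals)
    return total == n

print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False: Jordan block,
                                                              # one eigenvalue, 1-dim eigenspace
```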