Notes 3 / MATH 223 & MATH 247 / Winter 2009

1. Matrices of linear mappings

Suppose we have:

vector spaces $V$ and $W$;

$\dim V = n$, $\dim W = m$;

a basis $\mathcal{B} = (b_1, \ldots, b_n)$ of $V$; a basis $\mathcal{C} = (c_1, \ldots, c_m)$ of $W$;

and

a linear mapping $T : V \to W$.

We define the matrix of $T$ relative to the bases $\mathcal{B}$ and $\mathcal{C}$. The notation for this matrix is $[T]_{\mathcal{C},\mathcal{B}}$, where, in the subscript position, the two bases $\mathcal{C}$ and $\mathcal{B}$ are indicated.

For short, let us write $A$ for this matrix now. Its definition is as follows.

$A$ is an $m \times n$ matrix ($m$ = dimension of the codomain space $W$; $n$ = dimension of the domain space $V$);

$A = \big[\,[T(b_1)]_{\mathcal{C}} \;\; [T(b_2)]_{\mathcal{C}} \;\; \cdots \;\; [T(b_n)]_{\mathcal{C}}\,\big];$

or, in more detail,

the $j$th column of $A$ $=$ $[T(b_j)]_{\mathcal{C}}$ $=$ the coordinate vector of $T(b_j)$ relative to the basis $\mathcal{C}$ (note: $T$ is applied to $b_j$, the $j$th basis vector in $\mathcal{B}$), for $j = 1, \ldots, n$.

Theorem For any vector $v$ in $V$, we have

$[T(v)]_{\mathcal{C}} = A\,[v]_{\mathcal{B}}.$

(Partly) in words: the coordinate vector of $T(v)$ relative to the basis $\mathcal{C}$ equals the matrix $A$ described above times the coordinate vector of $v$ relative to the basis $\mathcal{B}$.

Proof Let $[v]_{\mathcal{B}}$ be $(x_1, \ldots, x_n)^{\mathrm{T}}$. Then $v = x_1 b_1 + \cdots + x_n b_n$. Therefore, since $T$ is linear, $T(v) = x_1 T(b_1) + \cdots + x_n T(b_n)$. Thus,

$[T(v)]_{\mathcal{C}} = [x_1 T(b_1) + \cdots + x_n T(b_n)]_{\mathcal{C}} = x_1 [T(b_1)]_{\mathcal{C}} + \cdots + x_n [T(b_n)]_{\mathcal{C}} = A\,[v]_{\mathcal{B}}.$

Here, we made use of the fact that “taking coordinate vectors is a linear operation”:

$[u + w]_{\mathcal{C}} = [u]_{\mathcal{C}} + [w]_{\mathcal{C}};$

and

$[\lambda u]_{\mathcal{C}} = \lambda\,[u]_{\mathcal{C}};$

and, in the last step, of the fact that the linear combination of the columns of $A$ with coefficients $x_1, \ldots, x_n$ is exactly the product $A\,(x_1, \ldots, x_n)^{\mathrm{T}}$.
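The definition and the theorem are easy to check numerically. Below is a minimal NumPy sketch; the spaces, the mapping `T`, and the bases are illustrative assumptions, not taken from these notes. It builds $A$ column by column as $[T(b_j)]_{\mathcal{C}}$ and then verifies $[T(v)]_{\mathcal{C}} = A\,[v]_{\mathcal{B}}$ on random vectors.

```python
import numpy as np

# Illustrative setup (an assumption, not from the notes):
# V = R^2, W = R^3, T(x, y) = (x + y, x - y, 2*y).
def T(v):
    x, y = v
    return np.array([x + y, x - y, 2 * y])

# Bases, stored as the columns of a matrix.
B_cols = np.column_stack([[1.0, 0.0], [1.0, 1.0]])                 # basis of V
C_cols = np.column_stack([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                          [1.0, 1.0, 1.0]])                        # basis of W

def coords(w, basis_cols):
    # Coordinate vector of w relative to the basis: solve basis_cols @ t = w.
    return np.linalg.solve(basis_cols, w)

# jth column of A = coordinate vector of T(b_j) relative to C.
A = np.column_stack([coords(T(B_cols[:, j]), C_cols) for j in range(2)])

# Theorem: [T(v)]_C = A [v]_B, for any v.
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    assert np.allclose(coords(T(v), C_cols), A @ coords(v, B_cols))
print(A)
```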

Most common special case: when $V = W$, and $\mathcal{B} = \mathcal{C}$. Now, $T : V \to V$ is a linear operator. We abbreviate $[T]_{\mathcal{B},\mathcal{B}}$ as $[T]_{\mathcal{B}}$. The columns of $[T]_{\mathcal{B}}$ are the coordinate vectors $[T(b_j)]_{\mathcal{B}}$, for $j = 1, \ldots, n$; and the equality of the theorem becomes, for arbitrary $v$ in $V$:

$[T(v)]_{\mathcal{B}} = [T]_{\mathcal{B}}\,[v]_{\mathcal{B}}.$
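A quick sketch of the operator case, with a hypothetical operator and basis (both made up for illustration, not from the notes): reflection of the plane across the line $y = x$, relative to a basis adapted to that line.

```python
import numpy as np

# Hypothetical operator on R^2: T reflects the plane across the line y = x.
def T(v):
    x, y = v
    return np.array([y, x])

# Basis adapted to the reflection: b1 on the mirror line, b2 perpendicular.
B_cols = np.column_stack([[1.0, 1.0], [1.0, -1.0]])

# Columns of [T]_B: coordinates of T(b_j) relative to B itself.
A = np.column_stack([np.linalg.solve(B_cols, T(B_cols[:, j]))
                     for j in range(2)])
print(A)   # [[1, 0], [0, -1]]: b1 is fixed, b2 is flipped
```

A well-chosen basis can thus make the matrix of an operator diagonal, even when the operator looks more complicated relative to the standard basis.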

Example 1. $V = \mathbb{R}^n$, $W = \mathbb{R}^m$; $\mathcal{B} = \mathcal{E}_n = (e_1, \ldots, e_n)$, the standard basis of $\mathbb{R}^n$; $\mathcal{C} = \mathcal{E}_m = (e_1, \ldots, e_m)$, the standard basis of $\mathbb{R}^m$; $T$ defined by an explicit formula.

Now, $T(e_1), \ldots, T(e_n)$ are computed by substituting the standard basis vectors into the formula; and since $\mathcal{C}$ is the standard basis, each coordinate vector $[T(e_j)]_{\mathcal{C}}$ is just the vector $T(e_j)$ itself.

The matrix $A = [T]_{\mathcal{C},\mathcal{B}}$ is

$A = \big[\, T(e_1) \;\; \cdots \;\; T(e_n) \,\big].$

Given, as an example, a particular vector $v$, we have $[v]_{\mathcal{B}} = v$, and the theorem gives $[T(v)]_{\mathcal{C}} = T(v) = A\,v$.

On the other hand, substituting $v$ into the formula defining $T$ computes $T(v)$ directly.

We see that the theorem is verified in this special case (which, of course, is not a proof of the theorem itself). But we see even more: not only are the results of the two calculations the same, but the intermediate expressions are also, essentially, the same. The theorem itself has a very easy proof, one that is not more complicated than the calculations above.

Example 2. The spaces $V$ and $W$, as well as the mapping $T$, are the same as in Example 1; the bases $\mathcal{B} = (b_1, \ldots, b_n)$ and $\mathcal{C} = (c_1, \ldots, c_m)$ consist of vectors defined by explicit formulas.

Now, $T(b_1), \ldots, T(b_n)$ are computed from the formula for $T$, just as in Example 1.

But now we need the coordinate vectors $[T(b_j)]_{\mathcal{C}}$ relative to the new basis $\mathcal{C}$. If $[w]_{\mathcal{C}} = (y_1, \ldots, y_m)^{\mathrm{T}}$, then

$w = y_1 c_1 + \cdots + y_m c_m = \big[\, c_1 \;\; \cdots \;\; c_m \,\big] \begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix};$

which means that (!)

$\big[\, c_1 \;\; \cdots \;\; c_m \,\big]\,[w]_{\mathcal{C}} = w.$

Denoting the matrix $\big[\, c_1 \;\; \cdots \;\; c_m \,\big]$ by $Q$, and accepting the fact that $Q$ is invertible, this means that

$[w]_{\mathcal{C}} = Q^{-1} w.$

[$Q$ is the so-called transition, or change-of-basis, matrix for the change of basis from $\mathcal{E}$, the standard basis in $\mathbb{R}^m$, to the “new” basis $\mathcal{C}$. Later, there will be more on “change-of-basis”.]
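As a small numerical illustration (the basis below is an assumption, not the one in the notes), the transition matrix $Q$ and the rule $[w]_{\mathcal{C}} = Q^{-1} w$ look like this:

```python
import numpy as np

# Hypothetical basis C of R^2, stored as the columns of Q.
Q = np.column_stack([[1.0, 1.0], [1.0, -1.0]])   # c1 = (1,1), c2 = (1,-1)

w = np.array([3.0, 5.0])
w_C = np.linalg.solve(Q, w)        # [w]_C = Q^{-1} w (solve, don't invert)
assert np.allclose(Q @ w_C, w)     # w = y1*c1 + y2*c2
print(w_C)                         # the coordinates (y1, y2) = (4, -1)
```

Using `solve` rather than forming $Q^{-1}$ explicitly is standard numerical practice; mathematically it produces the same coordinate vector.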

We can calculate $Q^{-1}$; and so

$[T(b_1)]_{\mathcal{C}} = Q^{-1}\,T(b_1).$

Similarly,

$[T(b_j)]_{\mathcal{C}} = Q^{-1}\,T(b_j)$ for the remaining basis vectors $b_j$.

Thus, the matrix $A = [T]_{\mathcal{C},\mathcal{B}}$ is

$A = \big[\, [T(b_1)]_{\mathcal{C}} \;\; \cdots \;\; [T(b_n)]_{\mathcal{C}} \,\big] = Q^{-1}\,\big[\, T(b_1) \;\; \cdots \;\; T(b_n) \,\big].$

The last matrix can be used directly to calculate $T(v)$ if $v$ is given in the form $v = x_1 b_1 + \cdots + x_n b_n$; the result will be given in the form $T(v) = y_1 c_1 + \cdots + y_m c_m$.

For instance, let $v = x_1 b_1 + \cdots + x_n b_n$. Then $[v]_{\mathcal{B}} = (x_1, \ldots, x_n)^{\mathrm{T}}$, and thus

$[T(v)]_{\mathcal{C}} = A\,[v]_{\mathcal{B}} = (y_1, \ldots, y_m)^{\mathrm{T}},$ say,

which means that

$T(v) = y_1 c_1 + \cdots + y_m c_m.$ (*)

If we want to check this result, then we calculate $v$ itself as an explicit vector:

$v = x_1 b_1 + \cdots + x_n b_n;$

then we compute $T(v)$ from the definition of $T$ directly.

On the other hand, from the formula (*), we compute the combination

$T(v) = y_1 c_1 + \cdots + y_m c_m;$

unbelievably, the same!
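Here is the whole Example 2 pipeline as a NumPy sketch, with made-up data standing in for the notes' specific mapping and bases: build $Q$, form $A = Q^{-1}\big[\,T(b_1)\ \cdots\ T(b_n)\,\big]$, and check a vector given by its $\mathcal{B}$-coordinates against a direct computation.

```python
import numpy as np

# Made-up stand-ins for the notes' data: T : R^2 -> R^2 linear, and
# non-standard bases B (of the domain) and C (of the codomain).
def T(v):
    x, y = v
    return np.array([2 * x + y, x - y])

P = np.column_stack([[1.0, 1.0], [0.0, 1.0]])    # columns: b1, b2
Q = np.column_stack([[1.0, 1.0], [1.0, -1.0]])   # columns: c1, c2

# A = Q^{-1} [ T(b1) T(b2) ]  (coordinates of the images relative to C).
A = np.linalg.solve(Q, np.column_stack([T(P[:, j]) for j in range(2)]))

# Take v given by its B-coordinates x, compute y = A x ...
x = np.array([3.0, -2.0])        # i.e. v = 3*b1 - 2*b2
y = A @ x                        # [T(v)]_C, by the theorem

# ... and check against the direct computation of T(v).
v = P @ x
assert np.allclose(Q @ y, T(v))  # y1*c1 + y2*c2 equals T(v)
print(y)
```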

Example 3. $V = \mathbb{R}^{2 \times 3}$, the space of all $2 \times 3$ matrices; $W = \mathbb{R}^{2 \times 2}$, the space of all $2 \times 2$ matrices;

$\mathcal{B} = (E_{11}, E_{12}, E_{13}, E_{21}, E_{22}, E_{23})$, the standard basis of $\mathbb{R}^{2 \times 3}$:

$E_{ij}$ is the $2 \times 3$ matrix whose $(i,j)$ entry is $1$ and whose other entries are $0$;

$\mathcal{C} = (F_{11}, F_{12}, F_{21}, F_{22})$, the standard basis of $\mathbb{R}^{2 \times 2}$, defined analogously;

$T : V \to W$ defined by $T(X) = XB$, with the fixed $3 \times 2$ matrix $B$.

First of all, it is easy to see, without calculation, that $T$ so defined is a linear mapping:

$T(X + Y) = (X + Y)B = XB + YB = T(X) + T(Y);$

and similarly for the other condition, $T(\lambda X) = \lambda\,T(X)$.

We have that $\dim(V) = 6$, $\dim(W) = 4$; thus, the matrix $[T]_{\mathcal{C},\mathcal{B}}$ is a $4 \times 6$ matrix.

The columns of $[T]_{\mathcal{C},\mathcal{B}}$ are

$[T(E_{11})]_{\mathcal{C}},\ [T(E_{12})]_{\mathcal{C}},\ [T(E_{13})]_{\mathcal{C}},\ [T(E_{21})]_{\mathcal{C}},\ [T(E_{22})]_{\mathcal{C}},\ [T(E_{23})]_{\mathcal{C}}.$

We need to calculate the six values $T(E_{ij}) = E_{ij} B$ at hand.

Each $T(E_{ij}) = E_{ij} B$ is computed by multiplying out; and since $\mathcal{C}$ is the standard basis of $\mathbb{R}^{2 \times 2}$, the coordinate vector $[T(E_{ij})]_{\mathcal{C}}$ simply lists the four entries of the $2 \times 2$ matrix $E_{ij} B$;

from which, the $4 \times 6$ matrix $[T]_{\mathcal{C},\mathcal{B}}$ is assembled column by column.
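The same computation in NumPy, with a made-up fixed matrix `B` (the notes' actual entries are not reproduced here): flattening a matrix row by row lists its coordinates relative to the standard basis, so the columns of $[T]$ are the flattened images of the basis matrices.

```python
import numpy as np

# Illustrative version of Example 3 (this particular B is an assumption):
# V = 2x3 matrices, W = 2x2 matrices, T(X) = X B with a fixed 3x2 matrix B.
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])

def T(X):
    return X @ B

def E(i, j, shape):
    # Standard basis matrix: 1 in position (i, j), 0 elsewhere.
    M = np.zeros(shape)
    M[i, j] = 1.0
    return M

# Columns of [T]: entries of T(E_ij), read off row by row (standard order).
basis_V = [E(i, j, (2, 3)) for i in range(2) for j in range(3)]
A = np.column_stack([T(M).flatten() for M in basis_V])
print(A.shape)   # (4, 6)

# Check the theorem: coordinates of T(X) equal A times coordinates of X.
X = np.arange(6.0).reshape(2, 3)
assert np.allclose(T(X).flatten(), A @ X.flatten())
```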

2. Composition of linear mappings

Suppose we have vector spaces $U$, $V$ and $W$. Let us also suppose that we have linear mappings $T$ and $S$ as in

$U \xrightarrow{\ T\ } V \xrightarrow{\ S\ } W.$

Then we can form the composite linear mapping

$S \circ T : U \to W,$

defined as follows:

$(S \circ T)(u) = S(T(u)).$

In other words, $S \circ T$ is applied to any $u$ in $U$ by first applying $T$ to $u$, and then applying $S$ to the result $T(u)$. Since the domain of $S$ is supposed to coincide with the codomain of $T$, the space $V$, and $T(u)$ is in $V$, $S$ can be meaningfully applied to $T(u)$, and the result $S(T(u))$ will be in $W$.

It is easy to see that $S$ and $T$ being linear implies that $S \circ T$ is linear.

Let us now also assume that we have respective selected bases $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ of the spaces $U$, $V$ and $W$. Then we can form the following matrices:

$[T]_{\mathcal{B},\mathcal{A}}$, $[S]_{\mathcal{C},\mathcal{B}}$, and $[S \circ T]_{\mathcal{C},\mathcal{A}}$.

(Note carefully the subscripts in these formulas. They are the only ones that are meaningfully applicable in our context.)

Theorem Under the above notation, $[S \circ T]_{\mathcal{C},\mathcal{A}} = [S]_{\mathcal{C},\mathcal{B}}\,[T]_{\mathcal{B},\mathcal{A}}$. In an abbreviated notation: $[S \circ T] = [S]\,[T]$.

Remark: the theorem says that “composition of linear mappings corresponds to matrix multiplication”.

Proof Let $u$ be any vector in $U$. Let $v = T(u)$. Then $(S \circ T)(u) = S(T(u)) = S(v)$. We have $[v]_{\mathcal{B}} = [T(u)]_{\mathcal{B}} = [T]_{\mathcal{B},\mathcal{A}}\,[u]_{\mathcal{A}}$. Therefore,

$[(S \circ T)(u)]_{\mathcal{C}} = [S(v)]_{\mathcal{C}} = [S]_{\mathcal{C},\mathcal{B}}\,[v]_{\mathcal{B}} = [S]_{\mathcal{C},\mathcal{B}}\,\big([T]_{\mathcal{B},\mathcal{A}}\,[u]_{\mathcal{A}}\big)$

$\overset{(!)}{=} \big([S]_{\mathcal{C},\mathcal{B}}\,[T]_{\mathcal{B},\mathcal{A}}\big)\,[u]_{\mathcal{A}};$

at the exclamation mark, we used the associativity of matrix multiplication:

$P(QR) = (PQ)R,$

with $P = [S]_{\mathcal{C},\mathcal{B}}$, $Q = [T]_{\mathcal{B},\mathcal{A}}$, $R = [u]_{\mathcal{A}}$. We have shown that

$[(S \circ T)(u)]_{\mathcal{C}} = \big([S]_{\mathcal{C},\mathcal{B}}\,[T]_{\mathcal{B},\mathcal{A}}\big)\,[u]_{\mathcal{A}}.$

The definition of the matrix $[S \circ T]_{\mathcal{C},\mathcal{A}}$ says that

$[(S \circ T)(u)]_{\mathcal{C}} = [S \circ T]_{\mathcal{C},\mathcal{A}}\,[u]_{\mathcal{A}}.$

Therefore $[S \circ T]_{\mathcal{C},\mathcal{A}}\,[u]_{\mathcal{A}} = \big([S]_{\mathcal{C},\mathcal{B}}\,[T]_{\mathcal{B},\mathcal{A}}\big)\,[u]_{\mathcal{A}}$. Since this holds true for all $u$ in $U$ (in particular for $u = a_j$, the $j$th vector of $\mathcal{A}$, for which $[u]_{\mathcal{A}} = e_j$, so that the two sides are the $j$th columns of the two matrices), we must have that $[S \circ T]_{\mathcal{C},\mathcal{A}} = [S]_{\mathcal{C},\mathcal{B}}\,[T]_{\mathcal{B},\mathcal{A}}$, as asserted.
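A closing NumPy sketch of this theorem, with hypothetical maps $T : \mathbb{R}^2 \to \mathbb{R}^3$ and $S : \mathbb{R}^3 \to \mathbb{R}^2$ (assumptions made for illustration) and standard bases everywhere, so that coordinate vectors are the vectors themselves: the matrix of $S \circ T$, built column by column, coincides with the product $[S]\,[T]$.

```python
import numpy as np

# Hypothetical linear maps: T : R^2 -> R^3 and S : R^3 -> R^2,
# standard bases throughout.
def T(u):
    x, y = u
    return np.array([x + y, x - y, 2 * x])

def S(v):
    a, b, c = v
    return np.array([a + c, b - c])

I2, I3 = np.eye(2), np.eye(3)

# Matrices built column by column from images of the basis vectors.
MT = np.column_stack([T(I2[:, j]) for j in range(2)])        # [T], 3x2
MS = np.column_stack([S(I3[:, j]) for j in range(3)])        # [S], 2x3
MST = np.column_stack([S(T(I2[:, j])) for j in range(2)])    # [S∘T], 2x2

assert np.allclose(MST, MS @ MT)   # [S∘T] = [S][T]
print(MST)
```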
