
Algebra Set 10

E-Content in Mathematics

Adjoint and Inverse of Matrices

Objectives

From this unit a learner is expected to achieve the following

  1. Become familiar with the concept of the determinant of a matrix.
  2. Understand some properties of determinants.
  3. Learn the method to find the cofactors of the elements of a matrix and the cofactor matrix.
  4. Learn that the adjoint of a matrix is the transpose of its cofactor matrix.
  5. Understand that a non-singular matrix possesses an inverse and that an invertible matrix is non-singular.
  6. Learn the method to find the inverse of a non-singular matrix using the adjoint.
  7. Understand the concept of an orthogonal matrix, with examples.

Sections

  1. Determinants of Matrices
  2. Properties of Determinants
  3. Cofactors
  4. Adjoint
  5. Inverse of a Matrix
  6. Orthogonal Matrices

1. Determinants of Matrices

In this session we describe the method to find the inverse of a square matrix, provided the inverse exists. For this discussion the definitions of the determinant of a matrix and the adjoint of a matrix are needed. We also discuss the concept of orthogonal matrices. Let us begin with a brief description of determinants.

Every square matrix has a number associated to it, called its determinant. In this section we define the determinant and discuss some of its properties. The determinant of a square matrix A is denoted by det A or |A|.

  • The determinant of a 1 × 1 matrix is just its unique entry, i.e., if A = [a], then |A| = a.
  • The determinant of a 2 × 2 matrix A = (aij) is defined by |A| = a11 a22 − a12 a21.
  • The determinant of a 3 × 3 matrix A = (aij) is defined by |A| = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31).
  • Proceeding similarly, having defined the determinant of an (n − 1) × (n − 1) matrix, the determinant of an n × n matrix A = (aij) is defined by |A| = a11 |M11| − a12 |M12| + … + (−1)^(1 + n) a1n |M1n|, where M1j denotes the submatrix obtained from A by deleting its first row and jth column. A computational sketch of this recursive definition is given below.
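
For learners who wish to experiment, the following Python sketch illustrates the recursive expansion along the first row described above. It is our own illustrative code (the function name det and the test matrices are assumptions made for this example), not part of the prescribed text.

    def det(A):
        """Determinant of a square matrix A (a list of rows) by expansion along the first row."""
        n = len(A)
        if n == 1:
            return A[0][0]          # 1 x 1 matrix: the determinant is the unique entry
        total = 0
        for j in range(n):
            # M is the submatrix obtained by deleting the first row and the j-th column
            M = [row[:j] + row[j+1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(M)
        return total

    print(det([[1, 2], [3, 4]]))                     # -2
    print(det([[2, 0, 1], [3, 0, 0], [5, 1, 1]]))    # 3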

2. Properties of Determinants

We list some properties of determinants.

  • The determinant of a square matrix and the determinant of its transpose are the same: |A^T| = |A|.
  • If A is a square matrix and B is the matrix obtained from A by interchanging two rows (or columns), then |B| = −|A|.
  • If two rows (or columns) of a square matrix A are identical, then |A| = 0.
  • If A is a square matrix and B is the matrix obtained from A by multiplying every element in any row (or column) of A by k, then |B| = k |A|.
  • If A is a square matrix and B is the matrix obtained from A by adding a multiple of a row (column) to any other row (column), then |B| = |A|.
  • The determinant of a triangular matrix is the product of the entries on the diagonal.

We prove some of the above properties for 3 × 3 matrices (in the case of rows). Proofs for higher-order matrices, for rows as well as columns, are similar and are left as assignments.

Proof of Property 1.

Let

A =
| a1 b1 c1 |
| a2 b2 c2 |
| a3 b3 c3 |

Let Δ be the determinant of A. Then

Δ = a1 (b2 c3 − b3 c2) − b1 (a2 c3 − a3 c2) + c1 (a2 b3 − a3 b2)
  = a1 b2 c3 − a1 b3 c2 − a2 b1 c3 + a3 b1 c2 + a2 b3 c1 − a3 b2 c1
  = a1 (b2 c3 − c2 b3) − a2 (b1 c3 − c1 b3) + a3 (b1 c2 − c1 b2),

by a rearrangement of terms.

The last expression is exactly the expansion of the determinant of A^T, so Δ = |A^T|, proving Property 1.

Proof of Property 3.

Suppose the determinant Δ of a matrix has two identical rows, say the first and the second:

Δ =
| a1 b1 c1 |
| a1 b1 c1 |
| a3 b3 c3 |

Suppose we interchange the first and second rows of the above determinant to obtain the determinant Δ′; then by Property 2,

Δ′ = −Δ. …(1)

But it can be seen that there is no difference between Δ and Δ′, since the interchanged rows are identical, so that

Δ′ = Δ. …(2)

From Eqs. (1) and (2), we have

Δ = −Δ,

which implies

2Δ = 0,

which implies

Δ = 0.

The following is a very useful result.

Theorem 1 (Multiplicative Property) If A and B are square matrices of the same order, then |AB| = |A| |B|.

3. Cofactors

Consider an n-square matrix

A = (aij), i, j = 1, 2, …, n.

Let Mij denote the (n − 1)-square submatrix of A obtained by deleting its ith row and jth column. The determinant |Mij| is called the minor of the element aij. The cofactor of aij, denoted by Cij, is the "signed" minor given by

Cij = (−1)^(i + j) |Mij|.

Then the cofactor matrix of A is given by the matrix (Cij).

A Property of Cofactors

Let A be an n-square matrix. Then the sum of the products of the elements of a row (or column) of A with the corresponding cofactors of the elements of that row (or column) is equal to the determinant of A. But if the elements of any row (or column) are multiplied by the cofactors of any other row (or column), then their sum is zero. That is,

ai1 Ci1 + ai2 Ci2 + … + ain Cin = |A| for each i,

and

ai1 Cj1 + ai2 Cj2 + … + ain Cjn = 0 whenever i ≠ j,

with analogous statements for columns.
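
The property above can be checked numerically. The following Python/NumPy sketch is our own illustration; the matrix A and the helper function cofactor are choices made only for this example.

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [0., 4., 5.],
                  [1., 0., 6.]])

    def cofactor(A, i, j):
        # (-1)^(i+j) times the minor: the determinant of A with row i and column j deleted
        M = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return (-1) ** (i + j) * np.linalg.det(M)

    n = A.shape[0]
    C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])

    # Row 0 against its own cofactors gives |A|; against the cofactors of row 1 it gives 0.
    print(np.dot(A[0], C[0]), np.linalg.det(A))   # both approximately 22
    print(np.dot(A[0], C[1]))                     # approximately 0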

Example 1 Find the cofactor of the element a23 = 6 in the matrix

Solution

The cofactor of a23 = 6 is given by C23 = (−1)^(2 + 3) |M23| = −|M23|, where M23 is the 2-square submatrix of A obtained by deleting its second row and third column.

4. Adjoint

Definition Let A = (aij) be an n-square matrix. The adjoint of A, denoted by adj A, is the transpose of the cofactor matrix of A.

With the notations mentioned above, adj A = (Cij)^T, so that the (i, j)th entry of adj A is Cji.

Example 2. Find the adjoint of

Solution

The adjoint of A is given by adj A = (Cij)^T, where Cij = (−1)^(i + j) |Mij| is the cofactor of aij.

Now

Similarly,

Hence

Theorem 2 If A is an n-square matrix, then

A (adj A) = |A| I_n = (adj A) A,

where I_n is the n-square identity matrix.

Proof

Let A = (aij) be an n-square matrix and let

B = A (adj A),

which also is an n-square matrix.

Now bij, the ijth entry of B, is obtained by multiplying the ith row of A with the jth column of adj A. Since the ith row of A is (ai1, ai2, …, ain) and the jth column of adj A is (Cj1, Cj2, …, Cjn), we have

bij = ai1 Cj1 + ai2 Cj2 + … + ain Cjn,

where the Cjk are cofactors, so that, by the property of cofactors, bij = |A| if i = j and bij = 0 if i ≠ j.

Therefore, B = A (adj A) is the n-square diagonal matrix with each diagonal element |A|. Hence

A (adj A) = |A| I_n.

Similarly,

(adj A) A = |A| I_n.

This completes the proof of the theorem.
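
Theorem 2 can also be verified numerically. The Python/NumPy sketch below is our own illustration (the matrix A is an arbitrary choice): it builds the cofactor matrix, transposes it to obtain the adjoint, and checks that A (adj A) and (adj A) A equal the diagonal matrix |A| I_n.

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [0., 4., 5.],
                  [1., 0., 6.]])
    n = A.shape[0]

    # Cofactor matrix, then the adjoint as its transpose.
    C = np.array([[(-1) ** (i + j) * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                   for j in range(n)] for i in range(n)])
    adj_A = C.T

    print(np.round(A @ adj_A, 6))   # diagonal matrix with |A| (approximately 22) on the diagonal
    print(np.round(adj_A @ A, 6))   # the same diagonal matrix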

The following corollary is immediate from the theorem.

Corollary If A is an n-square matrix with |A| = 0, then

A (adj A) = 0 = (adj A) A,

where 0 is the n-square zero matrix.

Theorem 3 If A is an n-square non-singular matrix, then |adj A| = |A|^(n − 1).

Proof

Let A be the given n-square non-singular matrix.

Now

|A (adj A)| = |A| |adj A|,

using the result |AB| = |A| |B| (Theorem 1).

Hence, by Theorem 2,

|A| |adj A| = | |A| I_n | = |A|^n,

since multiplying each of the n rows of I_n by |A| multiplies the determinant by |A|^n. Cancelling the non-zero factor |A| gives |adj A| = |A|^(n − 1).

Similarly, starting from (adj A) A = |A| I_n leads to the same conclusion.

This completes the proof of the theorem.

The following corollary is immediate from the theorem.

Corollary If then

5. Inverse of a Matrix

Definition Let A be an n-square matrix. If there exists an n-square matrix B, such that

AB = I_n = BA,

then B is called the inverse of A. In that case we say that A is invertible.

Remarks

  • If B is the inverse of A, then obviously, A is the inverse of B.
  • We will see that not all square matrices possess inverses.

Theorem 4 If the inverse of a matrix exists, then it is unique.

Proof

Let A be an n × n matrix and let B and C be two inverses of A. Then by the definition of inverse,

AB = BA = I_n … (3)

AC = CA = I_n … (4)

From Eqn. (3),

AB = I_n,

so that

C (AB) = C I_n = C … (5)

From Eqn. (4),

CA = I_n,

so that

(CA) B = I_n B = B … (6)

But

C(AB) = (CA)B

by the associativity of matrix multiplication. So by Equations (5) and (6), we get

C = B.

Hence the inverse of a matrix, if it exists, is unique.

Notation Theorem 4 assures us that if a matrix A has an inverse, then it is unique. We denote "the" inverse of A by A^-1.

Theorem 5 The necessary and sufficient condition for a square matrix A to possess an inverse is that |A| ≠ 0.

Proof

Necessary part

Let A be an n-square matrix which possesses an inverse, say B. Then, by the definition of inverse,

AB = I_n.

Hence

|AB| = |I_n| = 1,

so that

|A| |B| = 1.

Hence |A| ≠ 0.

Sufficiency part

Suppose |A| ≠ 0. Now define a matrix B by the relation

B = (1/|A|) adj A. ... (7)

Claim: B is the inverse of A.

Now, by Theorem 2, AB = A ((1/|A|) adj A) = (1/|A|) A (adj A) = (1/|A|) |A| I_n = I_n.

Similarly,

BA = I_n.

Thus

AB = I_n = BA,

so by the definition of inverse of a matrix, B is the inverse of A.

Note We can define B in Equation (7) if and only if |A| ≠ 0.

In view of Theorem 5, the inverse of A, when |A| ≠ 0, is given by the formula

A^-1 = (1/|A|) adj A.

We note that if |A| = 0, the inverse of the matrix does not exist.
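
The adjoint formula for the inverse can be tested in the same way. The following Python/NumPy sketch is our own illustration (the matrix A is an arbitrary non-singular choice); it compares (1/|A|) adj A with NumPy's built-in inverse.

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [0., 4., 5.],
                  [1., 0., 6.]])
    n = A.shape[0]

    det_A = np.linalg.det(A)        # approximately 22, so A is non-singular
    adj_A = np.array([[(-1) ** (i + j) * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                       for j in range(n)] for i in range(n)]).T

    A_inv = adj_A / det_A           # inverse via the adjoint formula
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True
    print(np.allclose(A @ A_inv, np.eye(n)))      # True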

Non-singular matrix

Definition An n-square matrix A is a singular matrix if its determinant |A| = 0; otherwise A is a non-singular matrix. That is, a matrix A is non-singular if |A| ≠ 0.

In view of this definition, Theorem 5 can be restated as follows.

Theorem 6 The necessary and sufficient condition for a square matrix A to possess an inverse is that A is non-singular; i.e., a square matrix is invertible if and only if it is non-singular.

Now we state and prove the reversal law of inverse of matrices.

Theorem 7 If A and B are any two non-singular matrices of the same order, then AB is also non-singular and

(AB)^-1 = B^-1 A^-1.

Proof

Since |AB| = |A| |B|, and since |A| and |B| are both different from 0, |AB| is also different from 0, so that AB is non-singular and hence, by Theorem 6, (AB)^-1 exists.

Now

(AB)(B^-1 A^-1) = A (B B^-1) A^-1 = A I A^-1 = (A I) A^-1 = A A^-1 = I.

Similarly,

(B^-1 A^-1)(AB) = I.

Thus

(AB)(B^-1 A^-1) = I = (B^-1 A^-1)(AB),

and by the definition of the inverse of a matrix,

(AB)^-1 = B^-1 A^-1.
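
A quick numerical check of the reversal law (our own sketch, with arbitrarily chosen non-singular matrices):

    import numpy as np

    A = np.array([[2., 1.], [1., 1.]])
    B = np.array([[1., 3.], [0., 2.]])

    lhs = np.linalg.inv(A @ B)                    # (AB)^-1
    rhs = np.linalg.inv(B) @ np.linalg.inv(A)     # B^-1 A^-1
    print(np.allclose(lhs, rhs))                  # True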

Corollary If A1, A2, …, Ak are invertible matrices of the same order, then (A1 A2 ⋯ Ak)^-1 = Ak^-1 ⋯ A2^-1 A1^-1.

Proof

By induction on k and using Theorem 7, we have (A1 A2 ⋯ Ak)^-1 = Ak^-1 ⋯ A2^-1 A1^-1.

Theorem 8 If A is an n × n non-singular matrix, then A^T is invertible and

(A^T)^-1 = (A^-1)^T,

i.e., the operations of transposing and inverting are commutative.

Proof

Since |A^T| = |A| and since |A| ≠ 0, |A^T| is also different from 0. So A^T is invertible.

We have the identity

A A^-1 = I = A^-1 A. ... (8)

Taking transposes of the matrices in Eq. (8), we obtain

(A A^-1)^T = I^T = (A^-1 A)^T. ... (9)

Using the reversal law of transposes, Eq. (9) can be written as

(A^-1)^T A^T = I = A^T (A^-1)^T. ... (10)

Now Eq. (10) shows that (A^-1)^T is the inverse of A^T. This completes the proof of the theorem.

Example 3. Find the inverse of the matrix

Solution

By computation, |A| = −11 ≠ 0, so A is non-singular and hence A^-1 exists. Also, from Example 2, we have adj A.

Hence

A^-1 = (1/|A|) adj A = −(1/11) adj A.


6. Orthogonal Matrices

Definition A square matrix A is said to be orthogonal if AA^T = I = A^T A, where I is the identity matrix of the same order as A.

Remark In view of the definition of the inverse of a matrix, the above definition implies that if A is orthogonal, then

A^T = A^-1.

Example 4 Show that A is orthogonal and hence determine its inverse.

Solution

Now, computing the product AA^T and simplifying, we obtain

AA^T = I, the 2 × 2 identity matrix.

Similarly, it can be seen that

A^T A = I.

i.e., AA^T = I = A^T A,

so that A is orthogonal. Since A is orthogonal, by the remark above, we have A^-1 = A^T.
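
As a numerical aside (our own sketch; the 2 × 2 rotation matrix used here is an assumed example and need not be the matrix of Example 4), orthogonality and the relation A^-1 = A^T can be checked with NumPy:

    import numpy as np

    t = np.pi / 6                                # an arbitrary angle
    A = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])      # a 2 x 2 rotation matrix

    print(np.allclose(A @ A.T, np.eye(2)))       # True: A A^T = I
    print(np.allclose(A.T @ A, np.eye(2)))       # True: A^T A = I
    print(np.allclose(np.linalg.inv(A), A.T))    # True: A^-1 = A^T
    print(round(abs(np.linalg.det(A)), 6))       # 1.0, consistent with Theorem 9 below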

Theorem 9 If A is an orthogonal matrix, then |A| = ±1.

Proof

Now AAT =A AT

=AA, since A= AT,

Hence

AAT= A2 . . . (11)

Also, since A is orthogonal, AAT = I, so that

AAT  = I  = 1 . . . (12)

From Equations (11) and (12), we have

A2 = 1

and hence A=  1.

Theorem 10 The product of two orthogonal matrices is orthogonal.

Proof Let A and B be any two orthogonal matrices of the same order. Then we have,

A A^T = I = A^T A and B B^T = I = B^T B.

Hence

(AB)^T (AB) = (B^T A^T)(AB), since (AB)^T = B^T A^T,

= B^T (A^T A) B, by the associativity of matrix multiplication,

= B^T (I) B = B^T (I B) = B^T B = I.

Similarly,

(AB)(AB)^T = I.

Hence

(AB)^T (AB) = I = (AB)(AB)^T,

and so AB is orthogonal. Therefore the product of any two orthogonal matrices is orthogonal.

Theorem 11 The inverse of an orthogonal matrix is orthogonal.

Proof

Let A be an orthogonal matrix. Then,

A A^T = I = A^T A. … (13)

Taking the inverse of all matrices in Equation (13), we get

(A A^T)^-1 = I^-1 = (A^T A)^-1.

Using Theorem 7, the above implies

(A^T)^-1 A^-1 = I = A^-1 (A^T)^-1.

Now using Theorem 8, the above implies

(A^-1)^T A^-1 = I = A^-1 (A^-1)^T.

Hence A^-1 is an orthogonal matrix.

Summary

Now let us conclude the session. In this session we have defined the determinant of a square matrix and discussed some properties of determinants. The methods of finding the cofactor matrix and the adjoint of a square matrix, and of finding the inverse of a non-singular matrix, have been discussed. We concluded the session with a brief description of orthogonal matrices. Now try to solve the following questions.

Assignments

1. Find the cofactor matrix of

2. Find the cofactor matrix of

3. Find the cofactor matrix of the matrix A and hence find the determinant of A.

4. Find the adjoint of

5. Find the adjoint of

6. If A and B are square matrices of the same order, prove that

7. Prove that

8. Find the inverse of the matrix

9. Find the inverse of the matrix

10. Show that

11. Prove that the following matrices are orthogonal.

(a) (b)

QUIZ

1. If A is an n-square non-singular matrix, then

(a)

(b)

(c)

(d)

Ans. (a)

2. Which of the following is true:

(a) is not invertible

(b) is orthogonal

(c) The inverse of is

(d) The inverse of is

Ans. (b)

3. Inverse of the non-singular diagonal matrix is

(a)

(b)

(d)

Ans. (d)

FAQs

1. Are negative integral powers of a non-singular matrix possible?

Ans. Yes. If A is a non-singular matrix and k is a positive integer, then the negative integral power is defined by

A^-k = (A^-1)^k.

Also, we note that

(A^-1)^k = (A^k)^-1.

Hence negative integral powers of a non-singular matrix are possible.
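
A brief NumPy illustration of negative integral powers (our own sketch; numpy.linalg.matrix_power accepts negative exponents for non-singular matrices):

    import numpy as np

    A = np.array([[2., 1.], [1., 1.]])            # non-singular: |A| = 1

    A_inv = np.linalg.inv(A)
    lhs = np.linalg.matrix_power(A, -3)           # A^-3
    rhs = np.linalg.matrix_power(A_inv, 3)        # (A^-1)^3
    print(np.allclose(lhs, rhs))                  # True: A^-k = (A^-1)^k
    print(np.allclose(lhs, np.linalg.inv(np.linalg.matrix_power(A, 3))))   # True: also equals (A^k)^-1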

2. Does every matrix possess a (multiplicative) inverse?

Ans. No. Only non-singular (square) matrices possess inverses.

3. Does every orthogonal matrix possess an inverse?

Ans. Yes. If A is an orthogonal matrix, then its inverse is A^T.

4. If a square matrix possesses an inverse, can you say that the matrix is orthogonal?

Ans. No. For example, a diagonal matrix with diagonal entries 2 and 1 is invertible, but it is not orthogonal, since AA^T ≠ I for that matrix.

