
Inverse of a matrix



For matrices, if AB=I, then does that mean BA=I also?

 

If I have 2 matrices and I have AB=I, is that sufficient to conclude that B is the inverse of A? Or do I have to calculate BA explicitly too?

 

I've tried finding a simple 2x2 counterexample but I can't find any. Every pair of matrices with AB=I that I've conjured up also has BA=I.


I don't know much about matrices, but wikipedia seems to...

It is important to note that commutativity does not generally hold; that is, given matrices A and B and their product defined, then generally AB ≠ BA.
Then again, further investigation reveals that for invertible matrices, AB = BA = I_n does hold. Don't you just love Wikipedia?

Thanks for the replies. Would the following proof be correct?

 

Thm:

Suppose A, B are square matrices and AB=I. Then BA=I also and B=A^-1.

 

Proof:

Suppose AB=I, then the system Bx=b has a solution for any column vector b, since x=(AB)x=A(Bx)=Ab.

Now we have

B(AB)x=Bx=b,

while on the other hand

B(AB)x=(BA)(Bx)=(BA)b. So (BA-I)b=0 for any column vector b, therefore BA=I.
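To spell out the very last step: if (BA-I)b=0 holds for every column vector b, then in particular it holds for the standard basis vectors e_1,...,e_n, so every column of BA-I is zero:

[math] (BA-I)e_i = \vec 0 \text{ for } i=1,\dots,n \;\Rightarrow\; BA-I = 0 \;\Rightarrow\; BA = I. [/math]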

 

I think there are a few gaps in the logic. Can anyone help me prove this?


For matrices, if AB=I, then does that mean BA=I also?

 

If I have 2 matrices and I have AB=I, is that sufficient to conclude that B is the inverse of A?

No, but almost: A and B must additionally be square matrices.

 

Or do I have to calculate BA explicitly too?

I think in most cases it's easier to check if they are square matrices.

 

I've tried finding a simple 2x2 counterexample but I can't find any. Every pair of matrices with AB=I that I've conjured up also has BA=I.

That is because there is none. 2x2 is square and hence AB=1 => BA=1. I'll sketch a proof below, but I cannot guarantee that it will be too helpful to you, as I don't know which things I can assume known and which not; for a formal proof, better check a book on linear algebra.

 

This in general is not true for any invertible matrix A and any other invertible matrix B.

To my own surprise, it is. Assume A is invertible and AB=1. Assume there was another matrix C=B+X such that AC=1. Then AB = 1 = AC = AB + AX, and therefore AX=0. You might already guess that AX=0 => X=0 (and therefore C=B) if A is invertible, but let's make it explicit:

 

[math] AX = \left( \begin{array}{c} A_1^t \\ \dots \\ A_n^t \end{array} \right) \left( X_1 \dots X_n \right) [/math], where I have written A in terms of its row vectors [math] A^t_i [/math] and X in terms of its column vectors [math] X_i [/math]. Since A is an invertible matrix, the vectors [math] A_i [/math] must be linearly independent. Now, since we want [math] C \neq B \Rightarrow X \neq 0 [/math], it can be concluded that there is at least one k such that [math] X_k \neq \vec 0 [/math]. But from AX=0 we know that for all j: [math] \left< A_j | X_k \right> = 0 [/math], where <...|...> denotes the scalar product. Now, the crucial point is that the [math]A_i[/math] form a basis for the n-dimensional vector space which they and the [math]X_i[/math] belong to, because A is invertible (this is the point where I assume knowledge that might not be there: the rows must be linearly independent since det(A) ≠ 0). Hence, to satisfy [math] \left< A_j | X_k \right> = 0 [/math] for all basis vectors [math]A_j[/math], [math]X_k [/math] can only be zero, which was explicitly excluded in the case [math] B \neq C [/math].
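In condensed form, the same argument reads

[math] AC = AB = 1 \;\Rightarrow\; A(C-B) = 0 \;\Rightarrow\; \left< A_j | (C-B)_k \right> = 0 \text{ for all } j,k \;\Rightarrow\; C-B = 0, [/math]

where the last step uses that the rows [math]A_j[/math] of the invertible matrix A form a basis.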

 

So far it was shown that if AB=1 for an invertible A, then B is unique. Since updating showed me that Zareon already replied, I'll first read his post before adding any further thoughts.


Prelude: I am not sure if it's correct, but a math prof of mine once said "a proof is an argument that everyone understands and agrees with", so in this case:

Thm:

Suppose A, B are square matrices and AB=I. Then BA=I also and B=A^-1.

 

Proof:

Suppose AB=I, then the system Bx=b has a solution for any column vector b, since x=(AB)x=A(Bx)=Ab.

I don't understand the "for any vector b" part. Had you said "for any vector x", I would have understood it as: you can multiply any vector x with the matrix B. What I don't understand is why any vector can be constructed by multiplying another vector x with B. There definitely is an additional restriction on B (just let B=0 and try to construct a [math] b \neq \vec 0 [/math]) which you didn't mention.

 

Sadly, you need the "any b"-statement here:

So (BA-I)b=0 for any column vector b, therefore BA=I.

So the "any b" statement must be made more explicit. But I have a bad feeling that it's only true if AB=BA=1...


I don't understand the "for any vector b" part. Had you said "for any vector x", I would have understood it as: you can multiply any vector x with the matrix B. What I don't understand is why any vector can be constructed by multiplying another vector x with B. There definitely is an additional restriction on B (just let B=0 and try to construct a [math] b \neq \vec 0 [/math]) which you didn't mention.

 

What I meant was that, assuming AB=I, for any vector b there exists an x such that Bx=b. The proof being that, if you multiply both sides by A, then you get x=Ab. But I guess that's not a proof at all :-( It sounds ok, but I've got a shaky feeling about it. It probably assumes what I'm trying to prove.

 

I'll go and try to understand your proof.


I think your proof works out fine...though there is an easier way to do it:

 

Let A be an invertible matrix, B is its right inverse, C is its left inverse. Then:

C = CI = C(AB) = (CA)B = IB = B.

 

However, this only works for matrices that are invertible in both directions; a matrix can have a one-sided inverse.

=Uncool-


@Atheist: sorry for being not clear enough in my previous post.

What I meant is that in general it is not true for any matrix A and B, which both are invertible (non-singular), but otherwise have no relation. Of course, I can fully go with you when A and B are each other's inverse.

 

@uncool: Can you give an example of a matrix which only has an inverse on one side? I'm quite sure that every square matrix with elements from a field and with non-zero (invertible) determinant has both a left-sided and a right-sided inverse, and that these are equal.


@Atheist: sorry for being not clear enough in my previous post.

What I meant is that in general it is not true for any matrix A and B, which both are invertible (non-singular), but otherwise have no relation. Of course, I can fully go with you when A and B are each other's inverse.

My statement went further. What I said was that if A and B are square matrices, A is invertible and AB=1, then B is the inverse of A. So I was not assuming that A and B are each other's inverse, I was showing it under the given conditions. What I was trying to show is that for an invertible A, there is only one B such that AB=1 and therefore that B must be the inverse.


You're lucky that I'm in a linear algebra course right now.

Note: If you see a semi-colon in any of the matrices below, it indicates a new row.

 

The most general equation is [math]Ax=b[/math] where A is a square matrix, x is the unknown vector, and b is a vector that matches the inner dimension of the A square matrix.

 

Note: If A is not square, then this equation is invalid.

 

The system [math]A \cdot x=b[/math] can also be written as the augmented matrix [math][A|b][/math].

 

Identity matrix is [math]I_{mxn}[/math].

i.e. [math]I_{2x2}[/math] is

[1 0; 0 1]

 

[math]A_{mxn} \cdot I_{nxm} = A[/math]

 

If A is a SQUARE matrix, i.e. size nxn, then [math]A \cdot I = A[/math].

 

DEFINITION: A matrix A of size nxn is invertible if there is a matrix B of size nxn such that [math]AB=BA=I_{nxn}[/math].

 

Such a B is called an INVERSE of A, denoted by [math]A^{-1}[/math].

 

Theorem: If A has an inverse, then it has exactly one inverse.

 

For 2x2 matrices, you can determine directly whether they are invertible. If A=[a b; c d] and ad-bc is not 0, you can use:

[math]A^{-1} = \frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}[/math]
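As a quick sanity check (multiplying this candidate inverse against A), the formula really does give the identity whenever ad-bc is not 0:

[math] \frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix} = \frac{1}{ad-bc}\begin{bmatrix}ad-bc&0\\0&ad-bc\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} [/math]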

 

If the matrices are 3x3 or larger, you can use Gauss-Jordan elimination.

 

 

If you have any more questions, let me know. :)


Ok, I'll give you an example.

Let [math]A=\begin{bmatrix}1&2\\3&4\end{bmatrix}[/math], where [math]A=\begin{bmatrix}a&b\\c&d\end{bmatrix}[/math].

 

So you have a=1, b=2, c=3, and d=4.

 

Using [math]A^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}[/math], you replace the numbers.

 

It would be: [math]\frac{1}{-2}\begin{bmatrix}4&-2\\-3&1\end{bmatrix}[/math]

 

Since ad-bc = -2 is not equal to 0, you know the matrix has an inverse. You then multiply the scalar 1/(-2) into the matrix.

 

The inverse matrix would be:

[math]\begin{bmatrix}-2&1\\\frac{3}{2}&\frac{-1}{2}\end{bmatrix}[/math]
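A quick check that this really is the inverse:

[math] \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} [/math]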

 

That's it for 2x2 matrices.

 

To find the inverse of a 3x3 matrix it's rather more complicated, but the method is similar to the one above. :cool: OR, you can use Gauss-Jordan elimination on the augmented matrix [math][A|I][/math].
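For the same matrix as above, a sketch of the Gauss-Jordan route (row operations: R2 → R2 - 3R1, then R2 → R2/(-2), then R1 → R1 - 2R2):

[math] \left[\begin{array}{cc|cc}1&2&1&0\\3&4&0&1\end{array}\right] \rightarrow \left[\begin{array}{cc|cc}1&2&1&0\\0&-2&-3&1\end{array}\right] \rightarrow \left[\begin{array}{cc|cc}1&2&1&0\\0&1&\frac{3}{2}&-\frac{1}{2}\end{array}\right] \rightarrow \left[\begin{array}{cc|cc}1&0&-2&1\\0&1&\frac{3}{2}&-\frac{1}{2}\end{array}\right] [/math]

The right-hand block is again [math]A^{-1}[/math].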


It's the 2x2 case of a general formula for calculating matrix inverses.

 

The term ad-bc is the determinant of the matrix; if this is zero then the matrix has no inverse, as you would get a division by zero (this can also be found by row reduction).

 

The term on the right is called the adjoint (or adjugate) of A, and is the transpose of the matrix of cofactors.

 

The (i,j) cofactor of a matrix is [math](-1)^{i+j} M_{i,j}[/math],

 

where [math]M_{i,j}[/math] is the (i,j) minor of the matrix A; a minor is just the determinant of the matrix with the ith row and the jth column removed.

 

Calculating the cofactors for every i and j, arranging them (transposed) into the adjoint matrix, and dividing by the determinant gives you the inverse.
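If it helps to see that procedure spelled out, here is a rough sketch of the cofactor/adjoint method in Python (my own illustration, not from anyone in this thread; it assumes numpy is available and is certainly not an efficient way to invert a matrix):

import numpy as np

def inverse_via_adjoint(A):
    # Sketch of A^{-1} = adj(A)/det(A), where adj(A) is the transpose of the matrix of cofactors.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("determinant is (numerically) zero, so there is no inverse")
    cofactors = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # the (i,j) minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cofactors[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cofactors.T / det

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inverse_via_adjoint(A))   # [[-2.   1. ]  [ 1.5 -0.5]]
print(np.linalg.inv(A))         # same result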

 

Although for anything other than the 2x2 case the formula is worthless for actual calculations, similar to Cramer's rule (although both are useful for proof work).

 

That formula is extremely useful for proving that if the determinant is zero, then the matrix is non-invertible.

 

 

 

Evon, there were a couple of little mistakes in your post:

 

The identity matrix I is always square.

 

AI always equals A whether or not A is square (although you have to choose the right I: if A is mxn then I is nxn and AI=A).

 

Ax=b is just a representation of a system of linear equations, and so is always valid (although it's possible to write down inconsistent systems, or systems without a unique solution).


Evon, there were a couple of little mistakes in your post:

 

The identity matrix I is always square.

 

AI always equals A whether or not A is square (although you have to choose the right I: if A is mxn then I is nxn and AI=A).

 

Ax=b is just a representation of a system of linear equations, and so is always valid (although it's possible to write down inconsistent systems, or systems without a unique solution).

 

Yeah, it was in my notebook, and I got confused when I saw the mxn. I was wondering how an identity matrix could be mxn. Maybe it was from a proof my professor was doing.

 

Anyways.


@uncool: Can you give an example of a matrix which only has an inverse on one side? I'm quite sure that every square matrix with elements from a field and with non-zero (invertible) determinant has both a left-sided and a right-sided inverse, and that these are equal.

I'm referring to non-square matrices, which have an inverse on at most one side. An example:

 

[1 0] has infinitely many right inverses (column vectors of the form [1 x]^T), but no left inverse.
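Written out:

[math] \begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix}1\\x\end{bmatrix} = \begin{bmatrix}1\end{bmatrix} = I_{1\times 1}, \qquad \text{but} \qquad \begin{bmatrix}1\\x\end{bmatrix}\begin{bmatrix}1&0\end{bmatrix} = \begin{bmatrix}1&0\\x&0\end{bmatrix} \neq I_{2\times 2}. [/math]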

=Uncool-


uncool:

 

Matrix inversion is defined only for square matrices. The pseudo-inverse is a generalization to non-invertible square matrices and non-square matrices. Unlike your example, the pseudo-inverse is unique for all matrices over [math]\mathbb R[/math] or [math]\mathbb C[/math]. Your example fails because the [math]I[/math] resulting from [math]\begin{bmatrix}1&x\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}[/math] is [math]1\times1[/math]. It is not a member of the set of [math]2\times1[/math] matrices over the reals.

 

The expression C = CI = C(AB) = (CA)B = IB = B is valid only for matrices over a ring. This doesn't work if matrix multiplication is not associative.

 

Example: matrices over the octonions can have distinct left and right inverses.


uncool:

 

Matrix inversion is defined only for square matrices. The pseudo-inverse is a generalization to non-invertible square matrices and non-square matrices. Unlike your example, the pseudo-inverse is unique for all matrices over [math]\mathbb R[/math] or [math]\mathbb C[/math]. Your example fails because the [math]I[/math] resulting from [math][1 x] [1 0]^T[/math] is [math]1\times1[/math]. It is not a member of the set of [math]2\times1[/math] matrices over the reals.

 

The expression C = CI = C(AB) = (CA)B = IB = B is valid only for matrices over a ring. This doesn't work if matrix multiplication is not associative.

 

Example: matrices over the octonions can have distinct left and right inverses.

 

Right, that's exactly what I was trying to say, but you said it much better...

=Uncool-


Thanks for all the replies. But most posts just show that if A has an inverse, then it is unique. That's somewhat trivial.

What I wanted to know is whether AB=I implies BA=I.

 

I got the answer now, but it's not a (very) beautiful proof. It's all right, though:

 

First we use that the system Ax=b has a solution for any vector b iff A is row reducible to the identity matrix.

Proof:

(<=) Just row reduce the augmented matrix [A|b] to [I|c]. Then c is the solution.

(=>) Every elementary row operation can be done by multiplying A on the left by an elementary matrix. If the reduced echelon form of A is not the identity, then H=(Et...E2E1)A has all zeros in its bottom row (the Ei's are the elementary matrices corresponding to the operations). So let b=(Et...E2E1)^-1 en, where en is the n'th standard basis vector: en^T=(0 0 ... 0 1). Then reduction of [A|b] gives [H|en], which has no solution.

 

This last part is the 'ugly' part of the proof.

 

Now suppose AB=I; then the equation Ax=b has a solution for any vector b: just pick x=Bb, then Ax=A(Bb)=(AB)b=b. So by the above result A is row reducible to I, hence there exist elementary matrices such that C=(Et...E2E1) satisfies CA=I. Since CA=I and AB=I together give C=C(AB)=(CA)B=B, we have B=(Et...E2E1), so BA=CA=I.

 

I think the proof can be made more beautiful by considering A as a linear function from R^n to R^n. I'll see if that gives more insight.
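A first rough attempt at that (just a sketch, I haven't checked it carefully): if AB=I then B is injective, since Bx=0 gives x=(AB)x=A(Bx)=0. An injective linear map from R^n to R^n is also surjective (rank-nullity), so B has a two-sided inverse B^-1 as a linear map. Then

[math] A = A(BB^{-1}) = (AB)B^{-1} = B^{-1}, \qquad \text{so} \qquad BA = BB^{-1} = I. [/math]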


What I wanted to know is whether AB=I implies BA=I.

 

Uncool did just that by saying

Let A be an invertible matrix, B is its right inverse, C is its left inverse. Then:

C = CI = C(AB) = (CA)B = IB = B.

 

There is no need to use that vector stuff. How do you justify removing the vectors at the end?


Uncool did just that

But I want AB=I => BA=I. Uncool assumes the existence of a matrix C for which CA=I. The difficulty in the proof is in showing that AB=I implies that there exists a matrix C such that CA=I. Showing then that B=C is the trivial step Uncool and others showed.

 

There is no need to use that vector stuff. How do you justify removing the vectors at the end?

 

I don't think you can prove it from general group-like properties. I'm sure some knowledgeable mathematician here can show there are group-like structures where elements have a right inverse but no left inverse. You really have to use some special properties of matrices.


Well, 2 things, Zareon:

1) There's a thing called the kernel, etc. It measures the independence of each line of a matrix - that is, if you make the matrix into a set of vectors, it measures how many are independent. It is simple to prove that the dimension of the horizontal kernel is equal to that of the vertical kernel - so that if the matrix has an inverse on the right, then its horizontal kernel has dimension 0, so the vertical kernel has dimension 0, so it has a left inverse (this is from a while back, so anyone with a more correct way of saying it is welcome.)

 

2) Actually, there is no group-like structure with right inverse but no left inverse unless you remove associativity.

=Uncool-


Well, 2 things, Zareon:

1) There's a thing called the kernel, etc. It measures the independence of each line of a matrix - that is, if you make the matrix into a set of vectors, it measures how many are independent. It is simple to prove that the dimension of the horizontal kernel is equal to that of the vertical kernel - so that if the matrix has an inverse on the right, then its horizontal kernel has dimension 0, so the vertical kernel has dimension 0, so it has a left inverse (this is from a while back, so anyone with a more correct way of saying it is welcome.)

Ah, now you're using that the dimension of the row space is equal to the dimension of the column space. If A has a right inverse, the column space has dimension n (A is nxn). And A has a left inverse iff dim(rowspace A)=n.

 

That's a neater way to look at it. But the fact that dim(rowspace)=dim(columnspace) (= rank A) is shown in my book by counting pivots. I hope there's a nicer way to see it apart from counting pivots.

 

2) Actually, there is no group-like structure with right inverse but no left inverse unless you remove associativity.

=Uncool-

 

I forgot what those things were called, but I meant RINGS. The set of nxn matrices forms a ring. I'm pretty sure you cannot prove that if an element A of your ring has a right inverse then it will also have a left inverse. I don't think it's hard to conjure up a counterexample. The fact that it is true for matrices means you have to use some properties of matrices, and that's why I dug into vectors and pivots and whatnot.


I forgot what those things were called, but I meant RINGS. The set of nxn matrices forms a ring. I'm pretty sure you cannot prove that if an element A of your ring has a right inverse then it will also have a left inverse. I don't think it's hard to conjure up a counterexample.

 

Try to find a counterexample. Try real hard. Hint: There are no such beasts for matrices over the reals.


Ah, now you're using that the dimension of the row space is equal to the dimension of the column space. If A has a right inverse, the column space has dimension n (A is nxn). And A has a left inverse iff dim(rowspace A)=n.

 

That's a neater way to look at it. But the fact that dim(rowspace)=dim(columnspace) (= rank A) is shown in my book by counting pivots. I hope there's a nicer way to see it apart from counting pivots.

Heh...sorry if I was talking condescendingly, as it's clear you do know this. I'm not sure if there's a nicer way, though.

 

I forgot what those things were called, but I meant RINGS. The set of nxn matrices forms a ring. I'm pretty sure you cannot prove that if an element A of your ring has a right inverse then it will also have a left inverse. I don't think it's hard to conjure up a counterexample. The fact that it is true for matrices means you have to use some properties of matrices, and that's why I dug into vectors and pivots and whatnot.
I thought you meant something slightly different, in which you had an identity and in which every element had a right inverse. However, we can look at yours this way:

Let us look at the set of elements with a right inverse (they may have a left inverse as well). This has a well-defined multiplication, is closed under multiplication, is associative, and has an identity. It therefore is a quasi-group. It also has a right inverse for every element, as defined - and therefore, it can be proven that every element has a left inverse, which is equal to its right inverse.

=Uncool-

