
Linear independence



I am ashamed to ask this, but here goes.....

 

We know that if a vector space is finite-dimensional, then there exist subsets of finite cardinality that will serve as a basis for this space iff all elements of these sets are linearly independent.

 

We also know that this implies that any subset containing the zero vector cannot be a set of linearly independent vectors, and is therefore not a suitable basis for our space.
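
Spelling out the reason in one line: if [math]\mathbf{0}[/math] denotes the zero vector, then for any other vector [math]v[/math] the relation [math]1\cdot\mathbf{0} + 0\cdot v = \mathbf{0}[/math] is a non-trivial linear relation (the coefficient 1 is non-zero), so no set containing the zero vector can be linearly independent.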

 

I now find that, in the case my vector space is not finite-dimensional, then the basis set may contain "only finitely many non-zero elements".

 

On the assumption that a non-finite vector space has a basis set of non-finite cardinality, then it seems either (or both) of the following must be true:

 

1. In the case of a non-finite vector space, we don't worry about linear independence of the basis vectors, or

 

2. In the case of a non-finite vector space linear independence has no real meaning

 

Or am I gibbering?


Not sure if you are gibbering. But at least, even after a few attempts, I still fail to parse the sentence "I now find that, in the case my vector space is not finite-dimensional, then the basis set may contain 'only finitely many non-zero elements'". You are saying that a vector space of infinite dimension has a basis with a finite number of elements? I believe that is wrong. Strangely enough, you even seem to contradict your dubious statement in your follow-up sentence saying "on the assumption that a non-finite vector space has a basis set of non-finite cardinality ...". I assume there is a typo somewhere in your post?

 

To answer your two questions despite not having understood the context:

1) I think we do care quite a lot. In fact, in QM you often assume your basis to be orthonormal.

2) In an infinite-dimensional vector space you can still pick out three vectors (or two, or four, or ...) and have a notion of them being linearly independent or not.
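
To make point 2 concrete, here is a minimal sketch (using numpy, with R^10 standing in for the ambient space; neither comes from the discussion above): stack the chosen vectors into a matrix and compare its rank with the number of vectors.

[code]
import numpy as np

# Three vectors picked out of a larger ambient space (R^10 here as a
# finite stand-in; the ambient dimension plays no role in the test).
v1 = np.zeros(10); v1[0] = 1.0
v2 = np.zeros(10); v2[3] = 2.0
v3 = v1 + 0.5 * v2               # deliberately a combination of the first two

A = np.vstack([v1, v2, v3])      # one vector per row
independent = np.linalg.matrix_rank(A) == A.shape[0]
print(independent)               # False: v3 depends on v1 and v2
[/code]

The same rank test applies to any finite handful of vectors, however large the space they are drawn from.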


If you have an infinite set of linearly independent vectors and take one away, how many do you now have?

 

So for instance in Fourier analysis, where the vector space is the (infinite-dimensional) space of continuous functions, we do not normally include the zero function (y=0 for all x) in the series.
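
For reference (the interval and normalisation are just the standard choices, not anything from the post above), the expansion in question is over the functions [math]1, \cos nx, \sin nx[/math] on [math][-\pi,\pi][/math]:

[math]f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right)[/math]

and the zero function, being the zero vector of this space, would add nothing to such an expansion while destroying linear independence.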


Ha! You guys are too too tactful - I was being stupid

 

I had mis-read my text. The correct statement is NOT that the basis for an infinite-dimensional space contains only finitely many non-zero elements; rather it is

 

Each element in the basis has only finitely many non-zero entries.

 

Here's my simple example.

 

Consider the set [math]P(x)[/math] of polynomials of arbitrary degree. This is of course a vector space by the usual axioms. Now, since the identity [math]p(x)=0[/math] (zero for every [math]x[/math], as opposed to the equation we solve for roots when not all coefficients are zero) can hold only when every coefficient vanishes, we may infer that, say

 

[math]x^0, x^1, x^2,.........[/math] are linearly independent and may form a basis for [math]P(x)[/math]

 

But elements of this basis must themselves be polynomials, so let's write this basis as, say

 

[math]x^0+0+0+.......[/math]

 

[math]0+x^1+0+0+......[/math]

 

[math]0+0+x^2+0+......[/math]

 

and so on. It is easy to see that, by taking sums of finitely many of these, with scaling if required, we recover any polynomial whatever.
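
A throwaway sketch of the same idea (plain Python; the representation and function names are made up purely for illustration): store a polynomial as a map from exponent to coefficient, so every element automatically has only finitely many non-zero entries, and rebuild one as a finite combination of the monomial basis elements.

[code]
# A polynomial is stored as {exponent: coefficient}, so each element of P(x)
# has only finitely many non-zero entries by construction.
def monomial(n):
    """The basis element x^n."""
    return {n: 1.0}

def scale(c, p):
    return {n: c * a for n, a in p.items()}

def add(p, q):
    out = dict(p)
    for n, a in q.items():
        out[n] = out.get(n, 0.0) + a
    return out

# Recover 5 - 3x + 2x^2 as a finite combination of basis elements.
p = add(add(scale(5.0, monomial(0)), scale(-3.0, monomial(1))),
        scale(2.0, monomial(2)))
print(p)   # {0: 5.0, 1: -3.0, 2: 2.0}
[/code]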



Yes. 1, x, x^2, ... are a basis for P(x). There are infinitely many vectors in the basis. And each element of P(x) can be written as a finite linear combination of basis vectors.

 

It doesn't make a whole lot of sense to point out that x^2 = 0 + 0*x + 1*x^2 + 0*x^3 ...

 

It's true, but so what? In any vector space V, any vector v whatsoever can be written

 

v = v + 0*x1 + 0*x2 + ...

 

where the xi's are all the other vectors in the entire vector space. But so what? What's the significance of this to you?

 

That would be true about any basis vector in any vector space. In the Cartesian plane with standard basis {(1,0), (0,1)} you could certainly make the point that (1,0) = (1,0) + 0 * (0,1) but what would be the point of saying that?

