
Tensors


geordief


It seems to me from my recent entry into this area that a tensor is a mathematical object with direction* and magnitude* that applies to the behaviour (?) of a cell in a manifold that exists over an area of space.

 

These cells are non-pointlike regions corresponding to what, in school-level geometry, would be points in the Cartesian coordinate system, such as (1,0,0) or (7,5,18), just as examples.

 

These cells (and their corresponding tensors) can model physical activity, such as different (?) forces acting in particular on the cell (= local region).

 

My questions are:

 

(1)"Do these tensors all have to use the same units?" **

 

(2) Also is my understanding leading up to my question solid?

 

(3) How many tensors can a cell "accommodate", both in theory and in practice?

 

(4) Is there any (close) connection to this earlier thread?

 

http://www.scienceforums.net/topic/101620-intrinsic-curvature/page-1#entry961025

 

 

 

(EDIT) * rather "multiple directions and magnitudes", since I think one tensor combines more than one element ...

 

** rephrase (1) to "Do all elements of tensors have to use the same units?"

Edited by geordief

 

 

(1)"Do these tensors all have to use the same units?"

 

 

No. Just as examples, the stress tensor, the strain tensor, the dielectric tensor, and the inertia tensor all have 9 elements and the same form, but very different units.

 

Tensors properly don't have units, but their elements may have units.

Some may refer to the units of the elements as the units of the tensor.

 

Some tensor elements are simply coefficients or just plain old numbers, some have units.

 

I think all the elements in a particular tensor must have the same units as each other.

Perhaps someone else will confirm that.

 

Having the same units as each other does not necessarily make the elements of the same type, for instance the stress tensor contains shear and direct stresses which are different, although they enjoy the same units.
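
As a concrete illustration of those 9 elements sharing one unit, here is a minimal sketch (assuming Python with numpy; the masses and positions are made up) computing the inertia tensor of a few point masses. Every entry comes out in kg m^2.

[code]
import numpy as np

# Point masses (kg) at positions (m); the values are illustrative only.
masses = np.array([1.0, 2.0, 0.5])
positions = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [2.0, 1.0, 0.0]])

# Inertia tensor I_jk = sum_i m_i * (|r_i|^2 * delta_jk - r_ij * r_ik).
# All 9 elements carry the same unit: kg m^2.
I = np.zeros((3, 3))
for m, r in zip(masses, positions):
    I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

print(I)  # a symmetric 3x3 array, kg m^2 throughout
[/code]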

 

 

 

 

 

(2) Also is my understanding leading up to my question solid?

 

Well sort of.

 

Tensors are essentially point functions so some tensors do not involve a cell at all.

Those that do involve a differential cell in the calculus sense that shrinks to a point in some limiting process.

 

Some 'cells' are composed of differential elements, dx, dy and dz, in the engineering 'control volume' sense, and actually exist in the same space as the x, y and z axes.

That is, they have measurable lengths dx, dy and dz along these axes.

 

For some quantities, resort has to be made to the phase space referred to in post #88 of the Fields thread.

Edited by studiot

 

 

 

I think all the elements in a particular tensor must have the same units as each other.

Perhaps someone else will confirm that.

 

 

I think we cross-posted - or rather, I edited while you were posting.

 

Perhaps I said the same thing as you in the second part of my EDIT at the bottom of the post.


(2) Also is my understanding leading up to my question solid?

Unfortunately not.

 

1. Tensors are defined quite independently of manifolds; your understanding of manifolds seems shaky

 

2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals

 

3. As such, tensors do not have "units" - they "live" in tensor spaces which have dimensions

 

4. Physicists (and some mathematicians) refer to tensors by their scalar components. This is justified by convenience, but since it is frequently desirable to work in a coordinate-free environment, it can be misleading.

 

If you would like to know more - and if your linear algebra is up to it - I can explain in grisly detail



It is good of you to offer, but I am a very slow, unreliable and perhaps obtuse learner. This is the first time I have come across the area of "linear algebra". It does sound interesting, and I am familiar with some of the concepts involved.

 

But I would need to devote some time to it I guess for any of its consequences to be beneficial to me.

 

To be honest, my main interest in tensors is because I have come across the terminology in regard to general relativity, and so I feel it will be beneficial to me to poke my nose in along that "fault line".

 

My understanding of tensors has gone from practically zero to "unfortunately not ... solid" in the last 24 hours, so I do not feel too bad about it.

 

Perhaps I can hold you to that explanation some time down the line when I may be better equipped to benefit? :)

Edited by geordief

How is your understanding of matrices?

 

I think they are a good place to start for those who want the Physics, but not the detailed maths.

Most tensors in the physical world are second order so can be written as matrices.

Edited by studiot


I think I have a grounding in them (that is why I said to Xerxes that I was familiar with some of the concepts in linear algebra that I saw when I did a quick search on the term, although it seemed daunting otherwise; dot products were also known to me).


The following might help.

 

A space is the set of all possible values, whether we want them or not, of a given condition.

 

So the usual Cartesian 3-dimensional space is the set {x,y,z}, where x, y and z take on every possible numerical value.

 

x, y and z are then said to form a basis for the whole space, since we can generate the entire catalog of triples from them.

 

We can restrict this in two ways.

 

We can select a subspace of the whole space.

For instance, the plane z = 0 is a 2-dimensional subspace of {x,y,z}, since it ranges through every possible value of x and y and does not need or use any values of z.

 

This subspace is also a subset of {x,y,z}, but not all subsets are subspaces.

 

The cube bounded by the six planes x=0, x=1, y=0, y=1, z=0, z=1 is a set of triples, {x,y,z}, where 0 < x < 1, 0 < y < 1 and 0 < z < 1.

 

Naturally there is a difference in some rules for subspaces and subsets, or there would be no point in making the distinction.

 

The difference between a subset and a subspace is important in the definition of real world fields, which can occupy a subset or subspace.
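
One computational way to see the distinction (a toy sketch, assuming Python with numpy; the two membership tests are hypothetical helpers built from the examples above): a subspace must be closed under scaling and addition, while an arbitrary subset need not be.

[code]
import numpy as np

def in_plane_z0(p):
    return np.isclose(p[2], 0.0)          # the subspace z = 0

def in_open_unit_cube(p):
    return all(0 < c < 1 for c in p)      # the subset 0 < x, y, z < 1

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.5, 0.5, 0.5])

# The plane z = 0 survives scaling; the cube does not:
print(in_plane_z0(10 * p))         # True  -- still in the subspace
print(in_open_unit_cube(10 * q))   # False -- scaling leaves the subset
[/code]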


Well, I really cannot see that any of the above has very much to do with the topic at hand.

 

geordief What follows will certainly raise some questions for you - do please ask them, and I will do my best to give the simplest possible answers.

 

First suppose a vector space [math]V[/math] with [math]v \in V[/math]. Then to any such space we may associate another vector space - called the dual space [math]V^*[/math]- which is the vector space of all linear mappings [math]V \to \mathbb{R}[/math], that is [math]V^*:V \to \mathbb{R}[/math].

Obviously then, for [math]\varphi \in V^*[/math] we have [math]\varphi(v) = \alpha \in \mathbb{R}[/math].

 

So the tensor (or direct) product of two vector spaces is written as the bilinear mapping [math]V^*\otimes V^*:V \times V\to \mathbb{R}[/math], where elements in [math]V \times V[/math] are the ordered pairs (of vectors) [math](v,w)[/math], so that, for [math]\varphi,\,\,\phi \in V^*[/math], by definition, [math]\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)[/math]

 

The object [math]\varphi \otimes \phi[/math] is called a TENSOR. In fact it is a rank 2, type (0,2) tensor.

 

Written in full, this is [math]\varphi \otimes \phi = (\sum\nolimits_j A_j \epsilon^j)\otimes (\sum\nolimits_k B_k \epsilon^k) = \sum\nolimits_{jk}A_j B_k \epsilon^j \otimes \epsilon^k[/math] which we can write as [math]\sum\nolimits_{jk}C_{jk} \epsilon^j \otimes \epsilon^k[/math] where the [math]A,\,B,\,C[/math] are scalar and the set [math]\{\epsilon^i\}[/math] are basis vectors for [math]V^*[/math].

 

The scalars [math]C_{jk}[/math] have a natural representation as an [math]n \times n [/math] matrix, where [math]n[/math] is the dimension of these dual spaces, i.e. the cardinality of the set [math]\{\epsilon^i\}[/math]. Most physicists (and some mathematicians) refer to this tensor by its scalar components, i.e. [math]C_{jk}[/math].
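
A quick numerical sketch of that construction (assuming Python with numpy; the component values are arbitrary): the components [math]C_{jk} = A_j B_k[/math] are exactly an outer product, and the tensor acting on a pair of vectors returns [math]\varphi(v)\phi(w)[/math].

[code]
import numpy as np

A = np.array([1.0, 2.0, 3.0])   # components A_j of the first covector
B = np.array([4.0, 5.0, 6.0])   # components B_k of the second covector

C = np.outer(A, B)              # C_jk = A_j * B_k, a 3x3 matrix

v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, 1.0])

# (phi tensor phi')(v, w) = phi(v) * phi'(w):
assert np.isclose(v @ C @ w, (A @ v) * (B @ w))
[/code]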

 

There is more - much more. Aren't you glad you asked!!


Well, Xerxes has certainly spelled it out for you in gory detail (what he said is true).

But one word of warning.

 

Xerxes has not been lazy; he has been kind to you and written out all the summation signs. (He has actually put a lot of work in.)

 

Tensor addicts have a secret convention - the Einstein summation convention - whereby they do not bother with the giant sigma sign: they regard it as 'understood' whenever you see the double suffix.
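
(For what it's worth, numpy's einsum function is named after exactly this convention, so the shorthand can be checked numerically. A minimal sketch with made-up numbers:)

[code]
import numpy as np

C = np.arange(9.0).reshape(3, 3)   # components C_jk
v = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 0.0, 1.0])

# "C_jk v^j w^k" -- the repeated indices j and k are summed
# automatically, with no sigma in sight.
s = np.einsum('jk,j,k->', C, v, w)
assert np.isclose(s, v @ C @ w)
[/code]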

 

Let me know if you need a translation to rough guide English.


Well, I may as well finish off my boring little tutorial.

 

Recall I said that a type (0,2) tensor takes the mathematical form [math]\varphi \otimes \phi[/math] and is an element in the space of linear mapping [math]V^* \otimes V^*: V \times V \to \mathbb{R}[/math]

 

In fact there is no restriction on the "size" of the space thereby created; we may have, say, [math]V^* \otimes V^* \otimes V^* \otimes V^* \otimes.......[/math] for any finite number of dual spaces provided only that they act on exactly the same number of spaces that enter into the Cartesian product.

 

Using the shorthand I alluded to earlier, we may have, say, [math]A_{ijklmn}[/math] as a type (0,6) tensor.

 

Now note that we may define the dual space of a dual space as [math](V^*)^* \equiv V^{**}[/math]. And in the case that these are finite-dimensional vector spaces, we may, by a somewhat tortuous argument, assert that [math]V^{**} = V[/math] (I cheated rather - they are not identical, but they are said to be "naturally isomorphic", so can be treated as the same).

 

So we may have that [math]V \otimes V:V^* \times V^* \to \mathbb{R}[/math] with exactly the same construction as before, so that, again in shorthand, [math]A^{jk}[/math] are the scalar components of a type (2,0) tensor.

 

Furthermore, we can "mix and match"; we may have mixed tensors of the form [math]V^* \otimes V: V \times V^* \to \mathbb{R}[/math], once again with shorthand [math]T^j_k[/math], and so on to higher ranks.

I close this sermon with 3 remarks that may (or may not) be of interest.....

 

1. Tensors have their own algebra, which is mostly intuitive when one realizes, as studiot hinted at, that every tensor has a representation as a matrix - with one exception......

 

2. ....this being tensor contraction. I will say no more than that this operation is equivalent to taking the scalar product of a vector and its dual (there is a small numerical sketch after these remarks).

 

3. The algebra of tensors and that of tensor fields turn out to be identical, so physicists frequently talk of "a tensor" when in reality they are talking of a tensor field.
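
A hedged numerical sketch of remark 2 (assuming Python with numpy; the components are made up): contracting a type (1,1) tensor built from a vector and a dual vector is, in matrix language, the trace, and it returns their scalar product.

[code]
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # components v^j of a vector
phi = np.array([4.0, 0.5, 2.0])   # components phi_k of a dual vector

T = np.outer(v, phi)              # T^j_k = v^j * phi_k, a type (1,1) tensor

# Contraction sets j = k and sums -- the trace -- giving phi(v):
assert np.isclose(np.trace(T), phi @ v)
[/code]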



It is going to take me a (very) long time to "get into and perhaps eventually through" your 2 posts. I will have to first familiarize myself with the notation and symbols, since I never really had much experience with set theory (that is what they are, aren't they?).

 

It is not "boring " but I have to work within my limits- and pace myself.If I take on too much at one go then it will be self defeating.

 

I am beginning to see that perhaps tensors may be a simpler subject than I supposed, but it has been forbidden territory for me for several years now, and I am going to give myself plenty of time to approach the subject :)


A few notes.

 

Firstly, Xerxes has confirmed what I said elsewhere, that there is more than one space associated with some fields.

 

He refers to the tensor space and the dual space; although I did not mean that particular combination for my purposes, I think it proves the point.

 

Secondly all the tensors you will meet can be represented as square matrices, but not all square matrices are tensors.

 

For instance the matrix

 

0 1 0

0 0 1

1 0 0

 

is not a tensor.

 

Second order tensors produce square matrices like the above, third order tensors produce cubical arrays, and so on.

I see a sixth order one was noted.
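
One way to make "not every square matrix is a tensor" concrete (a sketch, assuming Python with numpy and the usual Cartesian transformation law, which is not spelled out in this thread): the components of a second-order tensor must become T' = R T R^T when the axes are rotated by R, precisely so that the number the tensor produces from any two vectors is frame-independent.

[code]
import numpy as np

theta = 0.3                        # an arbitrary rotation angle (radians)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

T = np.diag([2.0, 5.0, 7.0])       # components of some 2nd-order tensor

T_prime = R @ T @ R.T              # the transformation law

# The scalar a tensor produces is the same in either frame:
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])
assert np.isclose((R @ v) @ T_prime @ (R @ w), v @ T @ w)
[/code]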


Firstly, Xerxes has confirmed what I said elsewhere, that there is more than one space associated with some fields.

Actually, that is not what you said

 

A field in Physics can entail two quite distinct and different coordinate systems and usually does.

No transformation exists between these coordinate systems.

In any case, I cannot parse the new claim that "there is more than one space associated with some fields".

 

What does this mean?


Actually, that is not what you said

 

In any case, I cannot parse the new claim that "there is more than one space associated with some fields".

 

What does this mean?

 

Actually it is what I said.

 

I even posted an excerpt from a renowned textbook describing such a situation - the coincidence of two spaces, viz momentum space and position space, at a particle (post #88).


I'm interested in this discussion because I've only ever seen tensor products in abstract algebra. I don't know any formal physics and don't know what tensors are. And the abstract mathematical formulation seems so far removed from the physics meaning of tensor that I've always been curious to bridge the gap.

 

[Feel free to skip this part]

Briefly, if [math]V[/math] and [math]W[/math] are vector spaces over the real numbers, their tensor product [math]V \otimes W[/math] is the free vector space on their direct product [math]V \times W[/math], quotiented out by the subspace generated by the usual bilinearity relationships. The tensor product has a universal property that's generally used to define it, which is that any bilinear map from the direct product [math]V \times W[/math] to any other vector space "factors through" the tensor product.

 

This is a lot of math jargon and sadly if I tried to supply the detail it would not be helpful. It's a long chain of technical exposition. The details are on Wiki:

 

https://en.wikipedia.org/wiki/Tensor_product

 

https://en.wikipedia.org/wiki/Tensor_product_of_modules

 

Also this article is simple and clear and interesting. "How to conquer tensorphobia."

https://jeremykun.com/2014/01/17/how-to-conquer-tensorphobia/

[End of skippable part]

 

This [the algebraic approach to tensor products] is all I know about tensors. It's always struck me that

 

* This doesn't seem to have anything to do with physics or engineering; and

 

* It doesn't say anything about dual spaces, which are regarded as very important by the physicists.

 

What I know about physics and engineering tensors is that they are (loosely speaking I suppose) generalizations of vector fields. Just as a vector field describes the force of a swirling wind or electrical field about a point in the plane, tensors capture more and higher order behavior localized at a point on a manifold.

 

What I wish I understood was a couple of simple examples. When Einstein is analyzing the motion of a photon passing a massive body, what are the tensors? When a bridge engineer needs to know the stresses and strains on a bolt, what are the tensors?

 

Studiot mentioned the stress and strain tensors. Even though I don't know what they are, their names are suggestive of what they do: encode complex information about some force acting on a point. Studiot, can you say more about them?

 

I hope someday to understand what a tensor is in everyday terms (bridge bolts) and how they're used in higher physics, and how any of this relates to the mathematical tensor product. Bilinearity seems to be one of the themes.

 

Along these lines, Xerxes wrote something I hadn't seen before.

 

[math]\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)[/math]

That relates a pair of functionals to the product of two real numbers. The distributive law induces bilinearity. So this looks like something for me to try to understand. It might be a bridge between the physics and the math. If I could understand why the functionals are important it would be a breakthrough. For example I've seen where an [math]n[/math]-fold tensor has some number of factors that are vector spaces, and some number that are duals of those spaces, and these two numbers are meaningful to the physicists. But duals don't even appear in the algebraic approach.

 

This is everything I know about it and perhaps I'll learn more in this thread.

Edited by wtf

Hi wtf. I would be willing to bet you know as much physics and engineering as I do, but let's see if I can give some insight......

 

Physics and engineering would be unthinkable without a metric, although this causes no problems to a mathematician. Specifically, a vector space is called a "metric space" if it has an inner product defined.

 

Now an inner product is defined as a bilinear, real-valued mapping [math]b:V \times V \to \mathbb{R}[/math](with certain obvious restrictions imposed), that is [math]b(v,w) \in \mathbb{R}[/math] where [math]v,\,w \in V[/math].

 

In the case that our vector space is defined over the Reals, we have that [math]b(v,w)=b(w,v)[/math]

 

Turn to the dual space, with [math]\varphi \in V^*[/math]. This means that for any [math]\varphi \in V^*[/math] and any [math]v \in V[/math] that [math]\varphi(v) \in \mathbb{R}[/math].

 

In the case of a metric space there always exists some particular [math]\varphi_v(w) = b(v,w) \in \mathbb{R}[/math] for all [math]v \in V[/math].

 

And likewise by the symmetry above, there exists a [math]\phi_w(v) =b(w,v) = b(v,w)[/math]. But writing [math]\varphi_v(w)\phi_w(v)[/math] as their product, we see this is just [math]\varphi_v \otimes \phi_w(v,w) = b(v,w)[/math], so that [math]\varphi_v \otimes \phi_w \in V^* \otimes V^*[/math].

 

And if we expand our dual vectors as, say [math]\varphi_v=\sum\nolimits_j \alpha_j \epsilon^j[/math] and [math] \phi_w = \sum\nolimits_k \beta_k \epsilon^k[/math], then as before we may write [math]\varphi_v \otimes \phi_w = \sum\nolimits_{jk} g_{jk} \epsilon ^j \otimes \epsilon^k[/math] then, dropping all reference to the basis vectors, we may have that [math]b = \alpha_j \beta_k= g_{jk}[/math].

 

Therefore the [math]g_{jk}[/math] are called the components of a type (0,2) metric tensor.

 

It is important in General Relativity (to say the least!!)
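
A small numerical sketch of that last step (assuming Python with numpy, and taking the simplest case, an orthonormal basis, so that [math]g_{jk}[/math] is just the identity matrix):

[code]
import numpy as np

g = np.eye(3)                 # g_jk in an orthonormal basis

v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, 1.0])

# b(v, w) = g_jk v^j w^k (summation convention):
b = np.einsum('jk,j,k->', g, v, w)
assert np.isclose(b, np.dot(v, w))   # recovers the ordinary dot product
[/code]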

Edited by Xerxes

I've been percolating on the posts in this thread and some online references and I'm making some progress. I just wanted to write down what I understand so far. I think I need to just keep plugging away at the symbology. Thanks to all for the discussion so far.

 

2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals

That's a very clarifying remark. Especially in light of this:

 

First suppose a vector space [math]V[/math] with [math]v \in V[/math]. Then to any such space we may associate another vector space - called the dual space [math]V^*[/math]- which is the vector space of all linear mappings [math]V \to \mathbb{R}[/math], that is [math]V^*:V \to \mathbb{R}[/math].

Obviously then, for [math]\varphi \in V^*[/math] we have [math]\varphi(v) = \alpha \in \mathbb{R}[/math].

 

So the tensor (or direct) product of two vector spaces is written as the bilinear mapping [math]V^*\otimes V^*:V \times V\to \mathbb{R}[/math], where elements in [math]V \times V[/math] are the ordered pairs (of vectors) [math](v,w)[/math], so that, for [math]\varphi,\,\,\phi \in V^*[/math], by definition, [math]\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)[/math]

 

The object [math]\varphi \otimes \phi[/math] is called a TENSOR. In fact it is a rank 2, type (0,2) tensor.

The above is extremely clear. By which I mean that it became extremely clear to me after I worked at it for a while :) And I definitely got my money's worth out of this example. Let me see if I can say it back.

 

Given [math]V[/math] and [math]V^*[/math] as above, let [math]\varphi, \phi \in V^*[/math] be functionals, and define a map [math]\varphi \otimes \phi : V \times V \to \mathbb R[/math]. In other words [math]\varphi \otimes \phi[/math] is a function that inputs a pair of elements of [math]V[/math], and outputs a real number. Specifically, [math]\varphi \otimes \phi(u, v) = \varphi(u) \phi(v)[/math] where the right hand side is just ordinary multiplication of real numbers. Note that [math]\varphi \otimes \phi[/math] doesn't mean anything by itself; it's notation we give to a particular function.

 

It's clear (and can be verified by computation) that [math]\varphi \otimes \phi[/math] is a bilinear map.

 

In other words [math]\otimes[/math] is a function that inputs a pair of functionals, and outputs a function that inputs pairs of vectors and outputs a real number.
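
A throwaway numerical check of that bilinearity (assuming Python with numpy, and representing each functional by its component vector; all numbers are arbitrary):

[code]
import numpy as np

phi = np.array([1.0, -2.0, 0.5])   # a functional, via its components
psi = np.array([3.0,  1.0, 2.0])   # another functional

def tensor(u, v):
    # (phi tensor psi)(u, v) = phi(u) * psi(v)
    return (phi @ u) * (psi @ v)

u, u2, v = np.random.rand(3), np.random.rand(3), np.random.rand(3)
a = 2.5

# Linear in the first slot (and, by symmetry, in the second):
assert np.isclose(tensor(a * u + u2, v), a * tensor(u, v) + tensor(u2, v))
[/code]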

 

So that's one definition of a tensor.

 

I'm not clear on why your example is of rank [math]2[/math]; I'll get to that in a moment.

 

Another way to understand the tensor product of two vector spaces comes directly from the abstract approach I talked about earlier. In fact in the case of finite-dimensional vector spaces, it's especially simple. The tensor product [math]V \otimes V[/math] is simply the set of all finite linear combinations of the elementary gadgets [math]v_i \otimes v_j[/math] where the [math]v_i[/math]'s are a basis of [math]V[/math], subject to the usual bilinearity relationships.

 

Note that I didn't talk about duals; and the tensors are linear combinations of gadgets and not functions. In fact one definition I've seen of the rank of a tensor is that it's just the number of terms in the sum. So [math]v \otimes w[/math] is a tensor of rank [math]1[/math], and [math]3 v_1 \otimes w_1 + 5 v_2 \otimes w_2[/math] is a tensor of rank two. Note that I have a seemingly different definition of rank than you do.

 

In general, a tensor is an expression of the form [math]\sum_{i,j} a_{ij} v_i \otimes v_j[/math]. This is important because later on you derive this same expression by means I didn't completely follow.

 

By the way if someone asks, what does it mean mathematically to say that something is a "formal linear combination of these tensor gadgets," rest assured that there are technical constructions that make this legit.

 

Now if I could bridge the gap between these two definitions, I would be making progress. Why do the differential geometers care so much about the dual spaces? What meaning do the duals represent? In differential geometry, physics, engineering, anything?

 

Likewise I understand that in general tensors are written as so many factors of the dual space and so many of the original space. What is the meaning of the duals? What purpose or meaning do they serve in differential geometry, physics, and engineering?

 

Now one more point of confusion. In your most recent post you wrote:

 

Physics and engineering would be unthinkable without a metric, although this causes no problems to a mathematician. Specifically, a vector space is called a "metric space" if it has an inner product defined.

I think that can't be right, since metric spaces are much weaker than inner product spaces. Every inner product gives rise to a metric but not vice versa. For example the Cartesian plane with the taxicab metric is not an inner product space. I'm assuming this is just casual writing on your part rather than some fundamentally different use of the word metric than I'm used to.

 

Now an inner product is defined as a bilinear, real-valued mapping [math]b:V \times V \to \mathbb{R}[/math](with certain obvious restrictions imposed), that is [math]b(v,w) \in \mathbb{R}[/math] where [math]v,\,w \in V[/math].

 

In the case that our vector space is defined over the Reals, we have that [math]b(v,w)=b(w,v)[/math]

Agreed so far. Although in complex inner product spaces this identity doesn't hold, you need to take the complex conjugate on the right.

 

Turn to the dual space, with [math]\varphi \in V^*[/math]. This means that for any [math]\varphi \in V^*[/math] and any [math]v \in V[/math] that [math]\varphi(v) \in \mathbb{R}[/math].

Yes.

 

In the case of a metric space there always exists some particular [math]\varphi_v(w) = b(v,w) \in \mathbb{R}[/math] for all [math]v \in V[/math].

The correspondence between the dual space and the inner product is not automatic, it needs proof. Just mentioning that.

 

And likewise by the symmetry above, there exists a [math]\phi_w(v) =b(w,v) = b(v,w)[/math]. But writing [math]\varphi_v(w)\phi_w(v)[/math] as their product, we see this is just [math]\varphi_v \otimes \phi_w(v,w) = b(v,w)[/math], so that [math]\varphi_v \otimes \phi_w \in V^* \otimes V^*[/math].

Now here I got lost but I need to spend more time on it. You're relating tensors to the inner product and that must be important. I'll keep working at it.

 

And if we expand our dual vectors as, say [math]\varphi_v=\sum\nolimits_j \alpha_j \epsilon^j[/math] and [math] \phi_w = \sum\nolimits_k \beta_k \epsilon^k[/math], then as before we may write [math]\varphi_v \otimes \phi_w = \sum\nolimits_{jk} g_{jk} \epsilon ^j \otimes \epsilon^k[/math] then, dropping all reference to the basis vectors, we may have that [math]b = \alpha_j \beta_k= g_{jk}[/math].

Aha! The right side is exactly what I described above. It's a finite linear combination of elementary tensor gadgets. And somehow the functionals disappeared!

 

So I know all of this is the key to the kingdom, and that I'm probably just a few symbol manipulations away from enlightenment :)

 

Therefore the [math]g_{jk}[/math] are called the components of a type (0,2) metric tensor.

Right, it's (0,2) because there are 0 copies of V and 2 copies of the dual. But where did the functionals go?

 

It is important in General Relativity (to say the least!!)

Should I be thinking gravity, photons, spacetime? Why are the duals important? And where did they go in your last calculation?

 

I'll go percolate some more. To sum up, the part where you define a tensor as a map from the Cartesian product to the reals makes sense. The part about the duals I didn't completely follow but you ended up with the same linear combinations I talked about earlier. So there must be a pony in here somewhere.

Edited by wtf

Yes, you seem to be getting there. But since I am finding the reply/quote facility here extremely irritating to use (why can I not get ascii text to wrap), I will reply to your substantive questions as follows......

 

1. Rank of a tensor

 

I use the usual notation that the rank of a tensor is equal to the number of vector spaces that enter into the tensor (outer) product. Note the somewhat confusing fact....

 

If [math]V[/math] is a vector space, then so is [math]V \otimes V[/math], and, since elements in a vector space are obviously vectors, the tensor [math]v \otimes w[/math] is a vector!!

 

 

2. Dual spaces

 

The question of "what are they for?" may be answered for you in the following

 

3. Prove the relation between the action of a dual vector (aka linear functional) and the inner product.

 

First note that, since by assumption, [math]V[/math] and [math]V^*[/math] are linear spaces, it will suffice to work on basis vectors.

 

Suppose the subset [math]\{e_j\}[/math] is an orthonormal basis for [math]V[/math]. Further suppose that [math]\{\epsilon^k\}[/math] is an arbitrary subset of [math]V^*[/math].

 

Then [math]\{\epsilon^k\}[/math] will be a basis for [math]V^*[/math] if and only if [math]\epsilon^k(e_j)= \delta^j_k[/math] where [math]\delta^j_k = 1,\,\,j=k,\text{and} \,\,0,\,\,j \ne k[/math]

 

Now note that if [math]g(v,w)[/math] defines an inner product on [math]V[/math], then the basis vectors are orthonormal if and only if [math]g(e_j,e_k)=\delta_{jk}[/math]. This brings the action of a dual basis on vector bases and the inner product of bases into register. Extending by linearity, this must be true for all vectors and their duals.
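
A hedged numerical sketch of the dual-basis relation (assuming Python with numpy): if the basis vectors [math]e_j[/math] are stored as the columns of a matrix E, the dual basis functionals [math]\epsilon^k[/math] can be represented as the rows of E^(-1), and [math]\epsilon^k(e_j)=\delta^k_j[/math] becomes a single matrix product. For an orthonormal basis, E^(-1) = E^T, which is exactly the identification of dual vectors with vectors via the inner product proved above.

[code]
import numpy as np

# Columns of E are basis vectors e_j (deliberately non-orthonormal,
# purely for illustration).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

eps = np.linalg.inv(E)        # row k holds the components of eps^k

# eps^k(e_j) = delta^k_j: every dual basis row applied to every column.
assert np.allclose(eps @ E, np.eye(3))
[/code]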

Edited by Xerxes

OK, so let's talk a bit about tensors in differential geometry. Recall that, in normal usage, differential geometry is the study of (possibly) non-Euclidean geometry without reference to any sort of surrounding - or embedding - space.

 

First it is useful to know what is a manifold. No. Even firster, we need to know what is a topological space.

 

Right.

 

Suppose [math]S[/math] a point set - a set of abstract points. The powerset [math]\mathcal{P}(S)[/math] is simply the set formed from all possible subsets of [math]S[/math]. It is a set whose members (elements) are themselves sets.

Note by the definition of a subset, the empty set [math]\O[/math] and [math]S[/math] itself are included as elements in [math]\mathcal{P}(S)[/math]

 

So a topology [math]T[/math] is defined on [math]S[/math] whenever [math]S[/math] is associated to a subset (of subsets of [math]S[/math]) of [math]\mathcal{P}(S)[/math] and the following are true

 

1. Arbitrary (possibly infinite) unions of elements in [math]T[/math] are in [math]T[/math]

 

2. Finite intersections of elements in [math]T[/math] are in [math]T[/math]

 

3. [math]S \in T[/math]

 

4. [math] \O \in T[/math]

 

The indivisible pairing [math]S,T[/math] is called a topological space. Note that [math]T[/math] is not uniquely defined - there are many different subsets that can be found for the powerset.

 

Now often one doesn't care too much which particular topology is used for any particular set, and one simply says "X is a topological space". I shall do that here.

 

Finally, elements of [math]T[/math] are called the open sets in the topological space, and the complements in [math]S,T[/math] of elements in [math]T[/math] are called closed.
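
These axioms are mechanical enough to test on a finite set. A toy sketch in Python (the set and the candidate collection T are hand-picked for illustration; for a finite T, checking pairwise unions and intersections suffices):

[code]
from itertools import combinations

S = frozenset({1, 2, 3})
T = {frozenset(), frozenset({1}), frozenset({1, 2}), S}

def is_topology(S, T):
    if frozenset() not in T or S not in T:      # axioms 3 and 4
        return False
    for A, B in combinations(T, 2):
        if A | B not in T or A & B not in T:    # axioms 1 and 2
            return False
    return True

print(is_topology(S, T))   # True: this T is a topology on S
[/code]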

 

Ouch, this is already over-long, so briefly: a manifold [math]M[/math] is a topological space for which there exists a continuous mapping from any open set in [math]M[/math] to an open subset of some [math]R^n[/math] which has a continuous inverse. This mapping is called a homeomorphism (it's not a typo!), so that when [math]U \subseteq M[/math] one writes [math]h:U \to R^n[/math] for this, and [math]n[/math] is taken as the dimension of the manifold.

 

Since [math]R^n \equiv R \times R \times R \times.....[/math] the homeomorphic image of [math]m \in U \subseteq M[/math] is, say, [math]h(m)= (u^1,u^2,....,u^n)[/math], a Real n-tuple.

 

And really finally, one defines projections on each n-tuple [math]\pi_j:(u^1,u^2,....,u^n)\to u^j[/math], a Real number.

 

So the composite function is defined to be [math]\pi_j \circ h = x^j:U \to \mathbb{R}[/math]

 

Elements in the set [math]\{x^k\}[/math] are called the coordinates of [math]m[/math]. They are functions, by construction.
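
A hedged sketch of this chart machinery for the simplest interesting manifold, the circle S^1 (assuming Python with numpy): on the open set U = S^1 minus the point (-1, 0), the angle function is a homeomorphism onto the open interval (-pi, pi), and since n = 1 it is itself the single coordinate function x^1.

[code]
import numpy as np

def h(m):
    # Chart on U = S^1 \ {(-1, 0)}: send m = (a, b) to its angle.
    a, b = m
    return np.arctan2(b, a)

def h_inv(u):
    # The continuous inverse a homeomorphism requires.
    return (np.cos(u), np.sin(u))

m = (np.cos(2.0), np.sin(2.0))   # a point on the circle
u = h(m)                         # its coordinate x^1(m), a real number
assert np.allclose(h_inv(u), m)
[/code]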


So far so good at my end. I haven't forgotten this thread, I've been slowly working my way through the universal property of the tensor product applied to multilinear forms on the reals. I can now visualize the fact that [math]\mathbb R \otimes \dots \otimes \mathbb R = \mathbb R[/math], because you can pull out all the coefficients of the pure tensors so that the tensor product is the 1-dimensional vector space with basis [math]1 \otimes \dots \otimes 1[/math]. This is pretty simple stuff but I had to work at it a while before it became obvious.

 

I'm still curious to understand the significance of the duals in differential geometry and physics so feel free to keep writing, you'll have at least one attentive reader.

Edited by wtf

ps -- Let me just say all this back and, being a pedantic type, clarify a couple of fuzzy locutions.

 

The indivisible pairing [math]S,T[/math] is called a topological space. Note that [math]T[/math] is not uniquely defined - there are many different subsets that can be found for the powerset.

Minor expositional murkitude. I'd say this as: For a given set [math]S[/math], various topologies can be put on it. For example if [math]T = \mathcal P(S)[/math] then every set is open. That's the discrete topology. The discrete topology is nice because every function on it to any space whatsoever is continuous. Or suppose [math]T = \{\emptyset, S\}[/math]. This is called the indiscrete topology. No sets are open except the empty set and the entire space. And the everyday example is the real numbers with the open sets being countable unions of open intervals. [This is usually given as a theorem after the open sets have been defined as sets made up entirely of interior points. But this is a more visual and intuitive characterization of open sets in the reals].

 

Now often one doesn't care too much which particular topology is used for any particular set, and one simply says "X is a topological space". I shall do that here.

This is actually interesting to me. Do they use unusual topologies in differential geometry? I thought they generally consider the usual types of open sets. Now I'm trying to think about this. Hopefully this will become more clear. I guess I think of manifolds as basically Euclidean spaces twisted around in various ways. Spheres and tori. But not weird spaces like they consider in general topology.

 

Ouch, this is already over-long

Well if anything it's too short, since this is elementary material (defined as whatever I understand :)) and I'm looking forward to getting to the good stuff. But I hope we're not going to have to go through the chain rule and implicit function theorem and all the other machinery of multivariable calculus, which I understand is generally the first thing you have to slog through in this subject.

 

If you can find a way to get to tensors without all that stuff it would be great. Or should I be going back and learning all the multivariable I managed to sleep through when I was supposed to be learning it? I can take a partial derivative ok but I'm pretty weak on multivariable integration, Stokes' theorem and all that.

 

And really finally, one defines projections on each n-tuple [math]\pi_j:(u^1,u^2,....,u^n)\to u^j[/math], a Real number.

 

So the composite function is defined to be [math]\pi_j \circ h = x^j:U \to \mathbb{R}[/math]

 

Elements in the set [math]\{x^k\}[/math] are called the coordinates of [math]m[/math]. They are functions, by construction.

What you are doing with this symbology is simply putting a coordinate system on the manifold. We started out with some general topological space, and now we can coordinatize regions of it with familiar old [math]\mathbb R^n[/math]. All seems simple conceptually. In fact my understanding is that "A coordinate system can flow across a homeomorphism."

 

I know these things are called charts, but where my knowledge ends is how you deal with the overlaps. If [math]U, U' \subset S[/math], what happens if the [math]h[/math]'s don't agree?

 

Ok well if you have the patience, this is pretty much what I know about this. Then at the other end, I do almost grok the universal construction of the tensor product and I am working through calculating it for multilinear forms on the reals. This is by the way a very special case compared to the algebraic viewpoint of looking at modules over a ring. In the latter case you don't even have a basis, let alone a nice finite one. So anything involving finite-dimensional vector spaces can't be too hard :)

Edited by wtf

For a given set [math]S[/math], various topologies can be put on it.

Yes, this is true.

For example if [math]T = \mathcal P(S)[/math] then every set is open. That's the discrete topology.

Yes, but every set is also closed

 

Or suppose [math]T = \{\emptyset, S\}[/math]. This is called the indiscrete topology. No sets are open except the empty set and the entire space.

Yes, but they are also closed, and they are the only elements in this topology.

 

I had rather hoped I wouldn't have to get into the finer points of topology, but I see now this is unavoidable - and not just as a consequence of the above.

 

If we want a "nice" manifold, we prefer that it be connected and have a sensible separation property, say the so-called Hausdorff property.

 

As to the first, I will assert - I am not alone in this! - that a topological space is connected if and only if the only sets that are both open and closed are the empty set and the space itself. The discrete topology clearly fails this test.

 

I will further assert that, if a topological space [math]M[/math] has the Hausdorff property and there exist open sets [math]U,\,\,V[/math] with, say, [math]x \in U,\,\,y \in V[/math], then I may say that [math]x \ne y[/math] if and only if [math]U \cap V = \O[/math].

The indiscrete (or concrete) topology fails this test.

 

So these 2 topologies, while they undeniably exist, will be of no interest to us.

 

I guess I think of manifolds as basically Euclidean spaces twisted around in various ways. Spheres and tori.

Be careful. Euclidean space has a metric; so do spheres and tori. We do not have one so far - so we do not have a geometry, i.e. a shape.

 

But I hope we're not going to have to go through the chain rule and implicit function theorem and all the other machinery of multivariable calculus,

Point taken, I will try to be as intuitive as I can (though it's not really in my nature)

 

Later.....


Let me just add, by way of clarification, that the Hausdorff property I just referred to is NOT transitive.

 

Specifically, if I have 3 points [math]x,\,y,\,z[/math] with [math]x \ne y[/math] and [math]y \ne z[/math] by the Hausdorff property I gave, and writing [math]U_x[/math] for some open set containing [math]x[/math] etc, then by definition

[math]U_x \cap U_y = \O[/math] and [math]U_y \cap U_z = \O[/math], but this does NOT imply that [math]U_x \cap U_z = \O[/math].

 

But of course if I want [math]x \ne z[/math] then I must find new open sets, say [math]V_x,\,V_z[/math], such that [math]V_x \cap V_z = \O[/math].

 

Clearly [math]x \in V_x,\, x \in U_x[/math] but then [math]V_x \ne U_x[/math].

 

As a consequence, for the point [math]m \in M[/math] (our manifold) with coordinates [math]x^1,x^2,....,x^n[/math] we may extend these coordinates to an open set [math]U_m \subsetneq M[/math].

 

Then [math]U_m[/math] is called a coordinate neighbourhood (of [math]m[/math]). Or just a neighbourhood.

Edited by Xerxes
