Tensors


35 replies to this topic

#1 geordief

Posted 6 February 2017 - 10:59 AM

It seems to me, from my recent entry into this area, that a tensor is a mathematical object with direction and magnitude* that applies to the behaviour(?) of a cell in a manifold that exists over an area of space.

 

These cells are non-pointlike regions corresponding to what, in school-level geometry, would be points in the Cartesian coordinate system, such as (1,0,0) or (7,5,18), just as examples.

 

These cells (and their corresponding tensors) can model physical activity, such as different(?) forces acting in particular on the cell (= local region).

 

My questions are:

 

(1) "Do these tensors all have to use the same units?"**

 

(2) Also, is my understanding leading up to my question solid?

 

(3) How many tensors can a cell "accommodate", both in theory and in practice?

 

(4) Is there any (close) connection to this earlier thread?

 

http://www.sciencefo...e-1#entry961025

 

 

 

(EDIT) * rather "multiple directions and magnitudes", since I think one tensor combines more than one element...

 

** rephrased as: (1) "Do all elements of a tensor have to use the same units?"


Edited by geordief, 6 February 2017 - 02:43 PM.

#2 studiot

Posted 6 February 2017 - 02:45 PM

 

 

(1)"Do these tensors all have to use the same units?"

 

 

No. Just as examples, the stress tensor, the strain tensor, the dielectric tensor, and the inertia tensor all have 9 elements and the same form, but very different units.

 

Tensors properly don't have units, but their elements may have units.

Some may refer to the units of the elements as the units of the tensor.

 

Some tensor elements are simply coefficients or just plain old numbers; some have units.

 

I think all the elements in a particular tensor must have the same units as each other.

Perhaps someone else will confirm that.

 

Having the same units as each other does not necessarily make the elements of the same type: for instance, the stress tensor contains shear and direct stresses, which are different although they share the same units.
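As a concrete sketch of this, here is a hypothetical stress tensor (the numbers are made up purely for illustration): the nine elements sit in a symmetric 3x3 array whose diagonal entries are direct stresses and whose off-diagonal entries are shear stresses, all sharing the same units.

```python
import numpy as np

# A hypothetical stress tensor (values in, say, MPa -- purely illustrative).
# Diagonal entries are direct stresses, off-diagonal entries shear stresses;
# all nine elements share the same units, though they differ in kind.
sigma = np.array([[100.0,  20.0,   0.0],
                  [ 20.0,  50.0,  10.0],
                  [  0.0,  10.0,  75.0]])

assert np.allclose(sigma, sigma.T)  # the stress tensor is symmetric

# Cauchy's relation: the traction (force per unit area) on a plane with
# unit normal n is t = sigma . n
n = np.array([1.0, 0.0, 0.0])
t = sigma @ n                        # -> [100., 20., 0.]
```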

 

 

 

 

 

(2) Also is my understanding leading up to my question solid?

 

Well sort of.

 

Tensors are essentially point functions, so some tensors do not involve a cell at all.

Those that do involve a differential cell, in the calculus sense, which shrinks to a point in some limiting process.

 

Some 'cells' are composed of differential elements dx, dy and dz, in the engineering 'control volume' sense, and actually exist in the same space as the x, y and z axes.

That is, they have measurable lengths dx, dy and dz along these axes.

 

For some quantities, resort has to be made to the phase space referred to in post #88 in the Fields thread.


Edited by studiot, 6 February 2017 - 02:58 PM.

#3 geordief

Posted 6 February 2017 - 02:59 PM

 

 

 

I think all the elements in a particular tensor must have the same units as each other.

Perhaps someone else will confirm that.

 

 

I think we cross-posted, or rather I edited while you were posting.

 

Perhaps I said the same thing as you in the second part of my EDIT at the bottom of the post.


#4 Xerxes

Posted 6 February 2017 - 04:11 PM

(2) Also is my understanding leading up to my question solid?

Unfortunately not.

1. Tensors are defined quite independently of manifolds; your understanding of manifolds seems shaky.

2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals

3. As such, tensors do not have "units" - they "live" in tensor spaces which have dimensions

4. Physicists (and some mathematicians) refer to tensors by their scalar components. This is justified because it is frequently desirable to work in a coordinate-free environment, but can be misleading.

If you would like to know more - and if your linear algebra is up to it - I can explain in grisly detail.
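Point 2 can be checked numerically. A minimal sketch (the matrix M and the vectors are arbitrary choices, not anything from the thread): a bilinear map B : V x V -> R, represented by a matrix, is linear in each argument separately.

```python
import numpy as np

# A hypothetical bilinear map B: V x V -> R represented by a matrix M,
# illustrating "multilinear maps from a Cartesian product of vector
# spaces to the Reals": B(v, w) = v^T M w.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

def B(v, w):
    return v @ M @ w

v  = np.array([1.0,  2.0])
v2 = np.array([3.0, -1.0])
w  = np.array([0.5,  4.0])
a, b = 2.0, -3.0

# Linearity in the first slot (and, symmetrically, in the second):
assert np.isclose(B(a * v + b * v2, w), a * B(v, w) + b * B(v2, w))
assert np.isclose(B(v, a * w), a * B(v, w))
```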
#5 geordief

Posted 6 February 2017 - 04:29 PM

Unfortunately not.

1. Tensors are defined quite independently of manifolds; your understanding of manifolds seems shaky.

2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals

3. As such, tensors do not have "units" - they "live" in tensor spaces which have dimensions

4. Physicists (and some mathematicians) refer to tensors by their scalar components. This is justified because it is frequently desirable to work in a coordinate-free environment, but can be misleading.

If you would like to know more - and if your linear algebra is up to it - I can explain in grisly detail.

It is good of you to offer, but I am a very slow, unreliable and perhaps obtuse learner. This is the first time I have come across the area of "linear algebra". It does sound interesting, and I am familiar with some of the concepts involved.

 

But I would need to devote some time to it, I guess, for any of its consequences to be beneficial to me.

 

To be honest, my main interest in tensors is because I have come across the terminology in regard to general relativity, and so I feel it will be beneficial to me to poke my nose in along that "fault line".

 

My understanding of tensors has gone from practically zero to "unfortunately not ... solid" in the last 24 hours, so I do not feel too bad about it.

 

Perhaps I can hold you to that explanation some time down the line, when I may be better equipped to benefit? :-)


Edited by geordief, 6 February 2017 - 04:29 PM.

#6 studiot

Posted 6 February 2017 - 04:35 PM

How is your understanding of matrices?

 

I think they are a good place to start for those who want the Physics, but not the detailed maths.

Most tensors in the physical world are second order, so they can be written as matrices.
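A sketch of why a second-order tensor is more than just a matrix of numbers: its components change in a prescribed way, T' = R T R^T, when the coordinate axes are rotated. (The example tensor and the rotation below are arbitrary illustrative choices.)

```python
import numpy as np

# An arbitrary symmetric second-order tensor in 3D:
T = np.diag([2.0, 1.0, 1.0])

# Rotate the axes 45 degrees about z:
theta = np.pi / 4
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

# Components of the same tensor in the rotated frame:
T_prime = R @ T @ R.T

# Invariants such as the trace survive the change of basis:
assert np.isclose(np.trace(T_prime), np.trace(T))
```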


Edited by studiot, 6 February 2017 - 04:36 PM.

#7 geordief

Posted 6 February 2017 - 04:50 PM

How is your understanding of matrices?

 

I think they are a good place to start for those who want the Physics, but not the detailed maths.

Most tensors in the physical world are second order so can be written as matrices.

I think I have a grounding in them (that is why I said to Xerxes that I was familiar with some of the concepts in linear algebra that I saw when I did a quick search on the term, although it seemed daunting otherwise; dot products were also known to me).


#8 studiot

Posted 6 February 2017 - 05:16 PM

The following might help.

 

A space is the set of all possible values, whether we want them or not, of a given condition.

 

So the usual Cartesian 3-dimensional space is the set of triples {(x,y,z)} where x, y and z take on every possible numerical value.

 

x, y and z are then said to form a basis for the whole space, since we can generate the entire catalog of triples from them.

 

We can restrict this in two ways.

 

We can select a subspace of the whole space.

For instance, the plane z = 0 is a 2-dimensional subspace of {x,y,z}, since it ranges through every possible value of x and y and does not need or use any values of z.

 

This subspace is also a subset of {x,y,z}, but not all subsets are subspaces. 

 

The cube bounded by the six planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1 is a set of triples (x,y,z) where 0 < x < 1, 0 < y < 1 and 0 < z < 1.

 

Naturally, there are differences in some rules for subspaces and subsets, or there would be no point in making the distinction.

 

The difference between a subset and a subspace is important in the definition of real world fields, which can occupy a subset or subspace.
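The distinction can be checked numerically. A small sketch (the sample points are arbitrary): the plane z = 0 is closed under vector addition and scaling, while the cube is not.

```python
import numpy as np

# Membership tests for the plane z = 0 and the cube
# 0 < x < 1, 0 < y < 1, 0 < z < 1 described above.
def in_plane(p):
    return p[2] == 0.0

def in_cube(p):
    return all(0.0 < c < 1.0 for c in p)

u = np.array([ 3.0, -2.0, 0.0])
v = np.array([-7.0,  1.0, 0.0])
# The plane is closed under addition and scaling: a subspace.
assert in_plane(u + v) and in_plane(10.0 * u)

p = np.array([0.9, 0.9, 0.9])
q = np.array([0.5, 0.5, 0.5])
assert in_cube(p) and in_cube(q)
# But their sum escapes the cube: a subset, not a subspace.
assert not in_cube(p + q)
```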


#9 Xerxes

Posted 7 February 2017 - 09:57 PM

Well, I really cannot see that any of the above has very much to do with the topic at hand.

geordief: What follows will certainly raise some questions for you. Do please ask them, and I will do my best to give the simplest possible answers.

First suppose a vector space V with v \in V. Then to any such space we may associate another vector space - called the dual space V^*- which is the vector space of all linear mappings V \to \mathbb{R}, that is V^*:V \to \mathbb{R}.
Obviously then, for \varphi \in V^* then \varphi(v) = \alpha \in \mathbb{R}.

So the tensor (or direct) product of two vector spaces is written as the bilinear mapping V^*\otimes V^*:V \times V\to \mathbb{R}, where elements in V \times V are the ordered pairs (of vectors) (v,w), so that, for \varphi,\,\,\phi \in V^*, by definition, \varphi \otimes \phi(v,w)=\varphi(v)\phi(w)

The object \varphi \otimes \phi is called a TENSOR. In fact it is a rank 2, type (0,2) tensor

Written in full, this is \varphi \otimes \phi = (\sum\nolimits_j A_j \epsilon^j)\otimes (\sum\nolimits_k B_k \epsilon^k) = \sum\nolimits_{jk}A_j B_k \epsilon^j \otimes \epsilon^k which we can write as \sum\nolimits_{jk}C_{jk} \epsilon^j \otimes \epsilon^k where the A,\,B,\,C are scalar and the set \{\epsilon^i\} are basis vectors for V^*.

The scalars C_{jk} have a natural representation as an n \times n matrix, where n is the dimension of these dual spaces, i.e. the cardinality of the set \{\epsilon^i\}. Most physicists (and some mathematicians) refer to this tensor by its scalar components, i.e. C_{jk}.
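The relationship between the tensor and its components can be checked with numpy (the component values A_j, B_k and the test vectors are arbitrary): the matrix C_{jk} is just the outer product of the two component arrays, and evaluating the tensor on a pair (v, w) multiplies the two functional values.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])   # components A_j of varphi in the dual basis
B = np.array([4.0, 0.0, -1.0])  # components B_k of phi
C = np.outer(A, B)              # C_jk = A_j * B_k, an n x n matrix

v = np.array([1.0,  1.0, 2.0])
w = np.array([2.0, -1.0, 0.0])

# (varphi (x) phi)(v, w) = varphi(v) * phi(w) = sum_jk C_jk v^j w^k
assert np.isclose((A @ v) * (B @ w), v @ C @ w)
```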

There is more - much more. Aren't you glad you asked!!
#10 studiot

Posted 7 February 2017 - 10:31 PM

Well, Xerxes has certainly spelled it out for you in gory detail (what he said is true).

But one word of warning.

 

Xerxes has not been lazy, he has been kind to you and written out all the summation signs. (he has actually put a lot of work in)

 

Tensor addicts have a secret convention that they do not bother with the giant sigma sign; they regard the summation as 'understood' whenever you see a repeated suffix.

 

Let me know if you need a translation to rough guide English.
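The summation convention can be demonstrated with numpy's einsum, which uses exactly this index notation (the arrays here are arbitrary examples):

```python
import numpy as np

C = np.arange(9.0).reshape(3, 3)   # components C_jk
v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, -1.0])

# "C_jk v^j w^k": both j and k are repeated, so both are summed,
# with no sigma written anywhere.
explicit = sum(C[j, k] * v[j] * w[k] for j in range(3) for k in range(3))
implicit = np.einsum('jk,j,k->', C, v, w)
assert np.isclose(explicit, implicit)
```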


#11 Xerxes

Posted 8 February 2017 - 06:40 PM

Well, I may as well finish off my boring little tutorial.

Recall I said that a type (0,2) tensor takes the mathematical form \varphi \otimes \phi and is an element in the space of linear mapping V^* \otimes V^*: V \times V \to \mathbb{R}

In fact there is no restriction on the "size" of the space thereby created; we may have, say, V^* \otimes V^* \otimes V^* \otimes V^* \otimes....... for any finite number of dual spaces provided only that they act on exactly the same number of spaces that enter into the Cartesian product.

Using the shorthand I alluded to earlier, we may have, say, A_{ijklmn} as a type (0,6) tensor.

Now note that we may define the dual space of a dual space as (V^*)^* \equiv V^{**}. And in the case that these are finite-dimensional vector spaces, by a somewhat tortuous argument, assert that V^{**} = V (I cheated rather - they are not identical, but they are said to be "naturally isomorphic", so can be treated as the same).

So we may have that V \otimes V:V^* \times V^* \to \mathbb{R} with exactly the same construction as before, so that, again in shorthand, A^{jk} are the scalar components of a type (2,0) tensor.

Furthermore, we can "mix and match" ; we may have mixed tensors of the form V^* \otimes V: V \times V^* \to \mathbb{R}, once again with shorthand T^j_k and so on to higher ranks.
I close this sermon with 3 remarks that may (or may not) be of interest.....

1. Tensors have their own algebra, which is mostly intuitive when one realizes, as studiot hinted at, that every tensor has a representation as a matrix with one exception......

2. ....this being tensor contraction. I will say no more than that this operation is equivalent to taking the scalar product of a vector and its dual.

3. The algebra of tensors and that of tensor fields turn out to be identical, so physicists frequently talk of "a tensor" when in reality they are talking of a tensor field
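Remark 2 (contraction) in matrix terms, as a minimal sketch with an arbitrary type (1,1) tensor: pairing the upper index with the lower one sums the diagonal components, which for the matrix of components is just the trace.

```python
import numpy as np

# An arbitrary type (1,1) tensor T^j_k, as a matrix of components:
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Contraction sets j = k and sums -- summation convention "T^j_j":
contraction = np.einsum('jj->', T)   # -> 5.0
assert np.isclose(contraction, np.trace(T))
```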
#12 geordief

Posted 8 February 2017 - 07:06 PM

Well, I may as well finish off my boring little tutorial.

Recall I said that a type (0,2) tensor takes the mathematical form \varphi \otimes \phi and is an element in the space of linear mapping V^* \otimes V^*: V \times V \to \mathbb{R}

In fact there is no restriction on the "size" of the space thereby created; we may have, say, V^* \otimes V^* \otimes V^* \otimes V^* \otimes....... for any finite number of dual spaces provided only that they act on exactly the same number of spaces that enter into the Cartesian product.

Using the shorthand I alluded to earlier, we may have, say, A_{ijklmn} as a type (0,6) tensor.

Now note that we may define the dual space of a dual space as (V^*)^* \equiv V^{**}. And in the case that these are finite-dimensional vector spaces, by a somewhat tortuous argument, assert that V^{**} = V (I cheated rather - they are not identical, but they are said to be "naturally isomorphic", so can be treated as the same).

So we may have that V \otimes V:V^* \times V^* \to \mathbb{R} with exactly the same construction as before, so that, again in shorthand, A^{jk} are the scalar components of a type (2,0) tensor.

Furthermore, we can "mix and match" ; we may have mixed tensors of the form V^* \otimes V: V \times V^* \to \mathbb{R}, once again with shorthand T^j_k and so on to higher ranks.
I close this sermon with 3 remarks that may (or may not) be of interest.....

1. Tensors have their own algebra, which is mostly intuitive when one realizes, as studiot hinted at, that every tensor has a representation as a matrix with one exception......

2. ....this being tensor contraction. I will say no more than that this operation is equivalent to taking the scalar product of a vector and its dual.

3. The algebra of tensors and that of tensor fields turn out to be identical, so physicists frequently talk of "a tensor" when in reality they are talking of a tensor field

It is going to take me a (very) long time to "get into, and perhaps eventually through," your 2 posts. I will first have to familiarize myself with the notation and symbols, since I never really had much experience with set theory (that is what they are, aren't they?).

 

It is not "boring", but I have to work within my limits and pace myself. If I take on too much in one go, then it will be self-defeating.

 

I am beginning to see that perhaps tensors may be a simpler subject than I supposed, but it has been forbidden territory for me for several years now, and I am going to give myself plenty of time to approach the subject :-)


#13 studiot

Posted 8 February 2017 - 07:49 PM

A few notes.

 

Firstly, Xerxes has confirmed what I said elsewhere: that there is more than one space associated with some fields.

 

He refers to the tensor space and the dual space; although I did not mean that particular combination for my purposes, I think it proves the point.

 

Secondly all the tensors you will meet can be represented as square matrices, but not all square matrices are tensors.

 

For instance the matrix

 

0  1  0

0  0  1

1  0  0

 

is not a tensor.

 

Second-order tensors produce square matrices like the above, third-order tensors produce cubical arrays, and so on.

I see a sixth-order one was noted.

 
 
 
 
 

#14 Xerxes

Posted 8 February 2017 - 09:44 PM

Firstly Xerxes had confirmed what I said elsewhere, that there is more than one space associated with some fields.

Actually, that is not what you said
 

A field in Physics can entail two quite distinct and different coordinate systems and usually does.
No transformation exists between these coordinate systems.

In any case, I cannot parse the new claim that "there is more than one space associated with some field".

What does this mean?
#15 studiot

Posted 8 February 2017 - 11:29 PM

Actually, that is not what you said
 
In any case, I cannot parse the new claim that "there is more than one space associated with some field".

What does this mean?

 

Actually it is what I said.

 

I even posted an excerpt from a renowned textbook describing such a situation: the coincidence of two spaces, viz. momentum space and position space, at a particle (post #88).


#16 wtf

Posted 9 February 2017 - 02:53 AM

I'm interested in this discussion because I've only ever seen tensor products in abstract algebra. I don't know any formal physics and don't know what tensors are. And the abstract mathematical formulation seems so far removed from the physics meaning of tensor that I've always been curious to bridge the gap.

[Feel free to skip this part]
Briefly, if V and W are vector spaces over the real numbers, their tensor product V \otimes W is the free vector space on their direct product V \times W, quotiented out by the subspace generated by the usual bilinearity relationships. The tensor product has a universal property that's generally used to define it, which is that any bilinear map from the direct product V \times W to any other vector space "factors through" the tensor product.

This is a lot of math jargon and sadly if I tried to supply the detail it would not be helpful. It's a long chain of technical exposition. The details are on Wiki:

https://en.wikipedia.../Tensor_product

https://en.wikipedia...duct_of_modules

Also this article is simple and clear and interesting. "How to conquer tensorphobia."
https://jeremykun.co...r-tensorphobia/
[End of skippable part]

This [the algebraic approach to tensor products] is all I know about tensors. It's always struck me that

* This doesn't seem to have anything to do with physics or engineering; and

* It doesn't say anything about dual spaces, which are regarded as very important by the physicists.

What I know about physics and engineering tensors is that they are (loosely speaking I suppose) generalizations of vector fields. Just as a vector field describes the force of a swirling wind or electrical field about a point in the plane, tensors capture more and higher order behavior localized at a point on a manifold.

What I wish I understood was a couple of simple examples. When Einstein is analyzing the motion of a photon passing a massive body, what are the tensors? When a bridge engineer needs to know the stresses and strains on a bolt, what are the tensors?

Studiot mentioned the stress and strain tensors. Even though I don't know what they are, their names are suggestive of what they do. Encode complex information about some force acting on a point. Studiot can you say more about them?

I hope someday to understand what a tensor is in everyday terms (bridge bolts) and how they're used in higher physics, and how any of this relates to mathematical tensor product. Bilinearity seems to be one of the themes.

Along these lines, Xerxes wrote something I hadn't seen before.
 

\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)


That relates a pair of functionals to the product of two real numbers. The distributive law induces bilinearity. So this looks like something for me to try to understand. It might be a bridge between the physics and the math. If I could understand why the functionals are important it would be a breakthrough. For example I've seen where an n-fold tensor has some number of factors that are vector spaces, and some number that are duals of those spaces, and these two numbers are meaningful to the physicists. But duals don't even appear in the algebraic approach.

This is everything I know about it and perhaps I'll learn more in this thread.

Edited by wtf, 9 February 2017 - 03:20 AM.

#17 Xerxes

Posted 9 February 2017 - 04:55 PM

Hi wtf. I would be willing to bet you know as much physics and engineering as I do, but let's see if I can give some insight......

Physics and engineering would be unthinkable without a metric, although this causes no problems to a mathematician. Specifically, a vector space is called a "metric space" if it has an inner product defined. {edit: "with" to "without"}

Now an inner product is defined as a bilinear, real-valued mapping b:V \times V \to \mathbb{R}(with certain obvious restrictions imposed), that is b(v,w) \in \mathbb{R} where v,\,w \in V.

In the case that our vector space is defined over the Reals, we have that b(v,w)=b(w,v)

Turn to the dual space, with \varphi \in V^* This means that for any \varphi \in V^* and any v \in V that \varphi(v) \in \mathbb{R}

In the case of a metric space there always exists some particular \varphi_v(w) = b(v,w) \in \mathbb{R} for all v \in V.

And likewise by the symmetry above, there exists a \phi_w(v) =b(w,v) = b(v,w). But writing \varphi_v(w)\phi_w(v) as their product, we see this is just \varphi_v \otimes \phi_w(v,w) = b(v,w), so that \varphi_v \otimes \phi_w \in V^* \otimes V^*.

And if we expand our dual vectors as, say \varphi_v=\sum\nolimits_j \alpha_j \epsilon^j and  \phi_w = \sum\nolimits_k \beta_k \epsilon^k, then as before we may write \varphi_v \otimes \phi_w = \sum\nolimits_{jk} g_{jk} \epsilon ^j \otimes \epsilon^k then, dropping all reference to the basis vectors, we may have that b = \alpha_j \beta_k= g_{jk}.

Therefore the g_{jk} are called the components of a type (0,2) metric tensor.

It is important in General Relativity (to say the least!!)
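A small numerical sketch of the metric components g_{jk} = b(e_j, e_k), taking b to be the ordinary dot product on R^2 and a deliberately non-orthonormal basis (an arbitrary choice for illustration):

```python
import numpy as np

# A non-orthonormal basis for R^2:
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
basis = [e1, e2]

# Metric components g_jk = b(e_j, e_k), with b the dot product:
g = np.array([[u @ v for v in basis] for u in basis])
# g = [[1., 1.],
#      [1., 2.]] -- symmetric, and it reduces to the identity matrix
#                   when the basis is orthonormal.
assert np.allclose(g, g.T)
```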

Edited by Xerxes, 9 February 2017 - 05:19 PM.

#18 studiot

Posted 10 February 2017 - 09:19 AM

Not needed now.


Edited by studiot, 10 February 2017 - 09:21 AM.

#19 wtf

Posted 11 February 2017 - 03:25 AM

I've been percolating on the posts in this thread and some online references and I'm making some progress. I just wanted to write down what I understand so far. I think I need to just keep plugging away at the symbology. Thanks to all for the discussion so far.
 

2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals


That's a very clarifying remark. Especially in light of this:
 

First suppose a vector space V with v \in V. Then to any such space we may associate another vector space - called the dual space V^*- which is the vector space of all linear mappings V \to \mathbb{R}, that is V^*:V \to \mathbb{R}.
Obviously then, for \varphi \in V^* then \varphi(v) = \alpha \in \mathbb{R}.

So the tensor (or direct) product of two vector spaces is written as the bilinear mapping V^*\otimes V^*:V \times V\to \mathbb{R}, where elements in V \times V are the ordered pairs (of vectors) (v,w), so that, for \varphi,\,\,\phi \in V^*, by definition, \varphi \otimes \phi(v,w)=\varphi(v)\phi(w)

The object \varphi \otimes \phi is called a TENSOR. In fact it is a rank 2, type (0,2) tensor


The above is extremely clear. By which I mean that it became extremely clear to me after I worked at it for a while :-) And I definitely got my money's worth out of this example. Let me see if I can say it back.

Given V and V^* as above, let \varphi, \phi \in V^* be functionals, and define a map \varphi \otimes \phi : V \times V \to \mathbb R. In other words, \varphi \otimes \phi is a function that inputs a pair of elements of V and outputs a real number. Specifically, \varphi \otimes \phi(u, v) = \varphi(u) \phi(v), where the right hand side is just ordinary multiplication of real numbers. Note that \varphi \otimes \phi doesn't mean anything by itself; it's notation we give to a particular function.

It's clear (and can be verified by computation) that \varphi \otimes \phi is a bilinear map.

In other words \otimes is a function that inputs a pair of functionals, and outputs a function that inputs pairs of vectors and outputs a real number.

So that's one definition of a tensor.

I'm not clear on why your example is of rank 2, I'll get to that in a moment.

Another way to understand the tensor product of two vector spaces comes directly from the abstract approach I talked about earlier. In fact in the case of finite-dimensional vector spaces, it's especially simple. The tensor product V \otimes V is simply the set of all finite linear combinations of the elementary gadgets v_i \otimes v_j where the v_i's are a basis of V, subject to the usual bilinearity relationships.

Note that I didn't talk about duals; and the tensors are linear combinations of gadgets and not functions. In fact one definition I've seen of the rank of a tensor is that it's just the number of terms in the sum. So v \otimes w is a tensor of rank 1, and 3 v_1 \otimes w_1 + 5 v_2 \otimes w_2 is a tensor of rank two. Note that I have a seemingly different definition of rank than you do.

In general, a tensor is an expression of the form \sum_{i,j} a_{ij} v_i \otimes v_j. This is important because later on you derive this same expression by means I didn't completely follow.

By the way if someone asks, what does it mean mathematically to say that something is a "formal linear combination of these tensor gadgets," rest assured that there are technical constructions that make this legit.

Now if I could bridge the gap between these two definitions, I would be making progress. Why do the differential geometers care so much about the dual spaces? What meaning do the duals represent? In differential geometry, physics, engineering, anything?

Likewise I understand that in general tensors are written as so many factors of the dual space and so many of the original space. What is the meaning of the duals? What purpose or meaning do they serve in differential geometry, physics, and engineering?

Now one more point of confusion. In your most recent post you wrote:
 

Physics and engineering would be unthinkable without a metric, although this causes no problems to a mathematician. Specifically, a vector space is called a "metric space" if it has an inner product defined. {edit: "with" to "without"}


I think that can't be right, since metric spaces are much weaker than inner product spaces. Every inner product gives rise to a metric but not vice versa. For example the Cartesian plane with the taxicab metric is not an inner product space. I'm assuming this is just casual writing on your part rather than some fundamentally different use of the word metric than I'm used to.
 

Now an inner product is defined as a bilinear, real-valued mapping b:V \times V \to \mathbb{R}(with certain obvious restrictions imposed), that is b(v,w) \in \mathbb{R} where v,\,w \in V.

In the case that our vector space is defined over the Reals, we have that b(v,w)=b(w,v)


Agreed so far. Although in complex inner product spaces this identity doesn't hold; you need to take the complex conjugate on the right.
 

Turn to the dual space, with \varphi \in V^* This means that for any \varphi \in V^* and any v \in V that \varphi(v) \in \mathbb{R}


Yes.
 

In the case of a metric space there always exists some particular \varphi_v(w) = b(v,w) \in \mathbb{R} for all v \in V.


The correspondence between the dual space and the inner product is not automatic, it needs proof. Just mentioning that.
 

And likewise by the symmetry above, there exists a \phi_w(v) =b(w,v) = b(v,w). But writing \varphi_v(w)\phi_w(v) as their product, we see this is just \varphi_v \otimes \phi_w(v,w) = b(v,w), so that \varphi_v \otimes \phi_w \in V^* \otimes V^*.


Now here I got lost but I need to spend more time on it. You're relating tensors to the inner product and that must be important. I'll keep working at it.
 

And if we expand our dual vectors as, say \varphi_v=\sum\nolimits_j \alpha_j \epsilon^j and  \phi_w = \sum\nolimits_k \beta_k \epsilon^k, then as before we may write \varphi_v \otimes \phi_w = \sum\nolimits_{jk} g_{jk} \epsilon ^j \otimes \epsilon^k then, dropping all reference to the basis vectors, we may have that b = \alpha_j \beta_k= g_{jk}.


Aha! The right side is exactly what I described above. It's a finite linear combination of elementary tensor gadgets. And somehow the functionals disappeared!

So I know all of this is the key to the kingdom, and that I'm probably just a few symbol manipulations away from enlightenment :-)
 

Therefore the g_{jk} are called the components of a type (0,2) metric tensor.


Right, it's (0,2) because there are 0 copies of V and 2 copies of the dual. But where did the functionals go?
 

It is important in General Relativity (to say the least!!)


Should I be thinking gravity, photons, spacetime? Why are the duals important? And where did they go in your last calculation?

I'll go percolate some more. To sum up, the part where you define a tensor as a map from the Cartesian product to the reals makes sense. The part about the duals I didn't completely follow but you ended up with the same linear combinations I talked about earlier. So there must be a pony in here somewhere.

Edited by wtf, 11 February 2017 - 04:40 AM.

#20 Xerxes

Posted 11 February 2017 - 09:45 PM

Yes, you seem to be getting there. But since I am finding the reply/quote facility here extremely irritating to use (why can I not get ascii text to wrap?), I will reply to your substantive questions as follows......

1. Rank of a tensor

I use the usual notation that the rank of a tensor is equal to the number of vector spaces that enter into the tensor (outer) product. Note the somewhat confusing fact....

If V is a vector space, then so is V \otimes V, and, since elements in a vector space are obviously vectors, the tensor v \otimes w is a vector!!


2. Dual spaces

The question of "what are they for?" may be answered for you in the following

3. Prove the relation between the action of a dual vector (aka linear functional) and the inner product.

First note that, since by assumption, V and V^* are linear spaces, it will suffice to work on basis vectors.

Suppose the subset \{e_j\} is an orthonormal basis for V. Further suppose that \{\epsilon^k\} is an arbitrary subset of V^*.

Then \{\epsilon^k\} will be a basis for V^* if and only if \epsilon^k(e_j)= \delta^k_j, where \delta^k_j = 1 when j = k and 0 when j \ne k.

Now note that if g(v,w) defines an inner product on V, then the basis vectors are orthonormal if and only if g(e_j,e_k)=\delta_{jk}. This brings the action of dual basis vectors on vector bases and the inner product of bases into register. Extending by linearity, this must be true for all vectors and their duals.
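The duality relation \epsilon^k(e_j) = \delta^k_j can be made concrete (the basis here is an arbitrary non-orthonormal example): if the basis vectors e_j are the columns of a matrix E, then the dual basis covectors \epsilon^k are the rows of the inverse of E.

```python
import numpy as np

# Columns are the basis vectors e_1, e_2 (deliberately not orthonormal):
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Rows of the inverse are the dual basis covectors epsilon^1, epsilon^2:
dual = np.linalg.inv(E)

# Row k applied to column j gives the Kronecker delta:
assert np.allclose(dual @ E, np.eye(2))
```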

Edited by Xerxes, 11 February 2017 - 10:20 PM.
