# Tensors

## Recommended Posts

I must say that this use of the term Hausdorff is quite different from what I've learned about the term. In my understanding, asking if that property is transitive is meaningless.

A topological space is Hausdorff if it separates points by open sets. That is, given any two points $x, y$, there are open sets $U_x, U_y$ with $x \in U_x$, $y \in U_y$, and $U_x \cap U_y = \emptyset$.

For example, the real numbers with the usual topology are Hausdorff; the reals with the discrete topology are Hausdorff; and the reals with the indiscrete topology are not Hausdorff.
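To make the definition concrete, here is a small brute-force sketch of my own (not part of the original discussion; `is_hausdorff`, `powerset`, and the three-point space are all invented for illustration). The reals are infinite, so this only illustrates the definition on a finite space, where the discrete and indiscrete topologies behave the same way as described above:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def is_hausdorff(points, opens):
    """True if every pair of distinct points has disjoint open neighbourhoods."""
    return all(
        any(x in U and y in V and not (U & V) for U in opens for V in opens)
        for x, y in combinations(points, 2)
    )

X = {0, 1, 2}
discrete = powerset(X)                    # every subset is open: Hausdorff
indiscrete = [frozenset(), frozenset(X)]  # only the empty set and X: not Hausdorff

print(is_hausdorff(X, discrete), is_hausdorff(X, indiscrete))  # True False
```

In the indiscrete case the only open set containing any point is the whole space, so no pair can ever be separated, exactly as the definition requires.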

I confess I have no idea what it means for the Hausdorff property to be transitive. It's not a binary relation. It's a predicate on topological spaces. Given a topological space, it's either Hausdorff or not. It would be like asking if the property of being a prime number is transitive. It's meaningless to ask the question because being prime is a predicate (true or false about any individual) and not a binary relation.

Given a pair of points, they are either separated by open sets or not. Of course for each pair of points you have to find a new pair of open sets, which is what I think you are saying.

Historical note. Felix Hausdorff was a German mathematician in the first half of the twentieth century. In 1942 he and his family were ordered by Hitler to report to a camp. Rather than comply, Hausdorff and his wife and sister-in-law committed suicide. https://en.wikipedia.org/wiki/Felix_Hausdorff

Edited by wtf

##### Share on other sites

So you don't like my use of the term "transitive". I can live with being wrong about that.

Let's move on to the really interesting stuff, closer to the spirit of the OP (remember it?)

The connectedness property mandates that, for every $m \in M$, there exist at least 2 overlapping coordinate neighbourhoods containing $m$. I write $m \in U \cap U'$.

So suppose the coordinates (functions) in $U$ are $\{x^1,x^2,....,x^n\}$ and those in $U'$ are $\{x'^1,x'^2,....,x'^n\}$. Since these are equally valid coordinates for our point, we must assume a functional relation between these 2 sets of coordinates.

For full generality I write

$f^1(x^1,x^2,....,x^n)= x'^1$

$f^2(x^1,x^2,....,x^n) = x'^2$

..............................

$f^n(x^1,x^2,....,x^n)= x'^n$

Or compactly $f^j(x^k)=x'^j$.

But since the numerical value of each $x'^j$ is completely determined by the $f^j$, it is customary to write this as $x'^j= x'^j(x^k)$, as ugly as it seems at first sight*.

This is the coordinate transformation $U \to U'$. And assuming an inverse, we will have quite simply $x^k=x^k(x'^h)$ for $U' \to U$.
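As a concrete instance (my own example; the post above is deliberately fully general): take $n=2$ with $U$ carrying Cartesian coordinates and $U'$ polar coordinates, so that $f^1(x^1,x^2)=\sqrt{(x^1)^2+(x^2)^2}$ and $f^2(x^1,x^2)=\operatorname{atan2}(x^2,x^1)$:

```python
import math

# f^j(x^k) = x'^j : Cartesian coordinates (x^1, x^2) to polar (r, theta).
def f(x):
    x1, x2 = x
    return (math.hypot(x1, x2), math.atan2(x2, x1))  # (r, theta)

# The inverse transformation x^k = x^k(x'^h) : polar back to Cartesian.
def f_inv(xp):
    r, theta = xp
    return (r * math.cos(theta), r * math.sin(theta))

m = (3.0, 4.0)           # Cartesian coordinates of a point
r, theta = f(m)          # r = 5.0
x1, x2 = f_inv((r, theta))  # recovers (3.0, 4.0) up to rounding
```

Note that each $f^j$ genuinely needs the whole pair $(x^1, x^2)$ as input, not a single coordinate, which is the multivariate point made in the post.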

Notice I have been careful up to this point to talk in the most general terms (with the 2 exceptions above). Later I will restrict my comments to a particular class of manifolds.

* Ugly it may be, but it simplifies notation in the calculus.

##### Share on other sites

So you don't like my use of the term "transitive".

You are using the term in a highly nonstandard way and your exposition is unclear on that point.

Let's move on to the really interesting stuff, closer to the spirit of the OP (remember it?)

Very much so. I'm interested in why differential geometers and physicists are so interested in using dual spaces in tensor products when the algebraic definition says nothing about them. The current exposition of differential geometry is very interesting to me but not particularly relevant (yet) to tensor products.

The connectedness property mandates that, for every $m \in M$, there exist at least 2 overlapping coordinate neighbourhoods containing $m$. I write $m \in U \cap U'$.

I hope I may be permitted to post corrections to imprecise statements, in the spirit of trying to understand what you're saying. The indiscrete topology is connected but each point is in exactly one open set. Perhaps you need the Hausdorff property. Again not being picky for the sake of being picky, but for my own understanding. And frankly to be of assistance with your exposition. If you're murky you're murky, I gotta call it out because others will be confused too.

I'm still digesting the rest of your post.

Edited by wtf

##### Share on other sites

Are you talking about the transition maps? I'm working through that now. The Wiki page is helpful. https://en.wikipedia.org/wiki/Manifold

ps ... Quibbles aside I'm perfectly willing to stipulate that the topological spaces aren't too weird. Wiki says they should be second countable and Hausdorff. Second countable simply means there's a countable base. For example in the reals with the usual topology, every open set is a union of intervals with rational centers and radii. There are only countably many of those so the reals are second countable.

Interestingly Wiki allows manifolds to be disconnected. I don't think it makes a huge difference at the moment. I can imagine that the two branches of the graph of 1/x are a reasonable disconnected manifold.

Edited by wtf

##### Share on other sites

I'm interested in why differential geometers and physicists are so interested in using dual spaces in tensor products when the algebraic definition says nothing about them.

We will get to that in due course (and soon). It has to do with the difference between Euclidean geometry (algebra) and non-Euclidean geometry (diff. geom.)

The current exposition of differential geometry is very interesting to me but not particularly relevant (yet) to tensor products.

I will make it so, I hope. Again in due course

I hope I may be permitted to post corrections to imprecise statements,

You are not only permitted, you are encouraged to do so.

The indiscreet topology is connected but each point is in exactly one open set. Perhaps you need the Hausdorff property.

You do. In my defence, I explicitly said in an earlier post that manifolds with the discrete and indiscrete topologies, being respectively not connected and not Hausdorff, were of no interest to us. But yes, I should have reiterated it. Sorry.

Again not being picky for the sake of being picky, but for my own understanding. And frankly to be of assistance with your exposition.

No, you are quite right to correct me if I am unclear or wrong. I welcome it

I'm still digesting the rest of your post.

Take your time - the incline increases from here on!

##### Share on other sites

Let's move on to the really interesting stuff ...

I commented on the first half earlier. Now to the rest of it.

First there's a big picture, which is that if we have a manifold $M$ and a point $m \in M$, then we may have two (or more) open sets $U, U' \subset M$ with $m \in U \cap U'$. So $m$ has two different coordinate representations, and we can go up one and down the other to map the coordinate representations to each other.

My notation in what follows is based on this excellent Wiki article, which I've found enlightening.

The notation is based on this picture.

We have two open sets $U_\alpha, U_\beta \subset M$ with corresponding coordinate maps $\varphi_\alpha : U_\alpha \rightarrow \mathbb R^n$ and $\varphi_\beta : U_\beta \rightarrow \mathbb R^n$. I prefer the alpha/beta notation so I'll work with that.

Also, as I understand it the coordinate maps in general are called charts; and the collection of all the charts for all the open sets in the manifold is called an atlas.

If $m \in U_\alpha \cap U_\beta$ then we have two distinct coordinate representations for $m$, and we can define a transition map $\tau_{\alpha, \beta} : \mathbb R^n \rightarrow \mathbb R^n$ by starting with the coordinate representation of $m$ with respect to $U_\alpha$, pulling back (is that the correct use of the term?) along $\varphi_\alpha^{-1}$, then pushing forward (again, is this the correct usage or do pullbacks and pushforwards refer to something else?) along $\varphi_\beta$.

So we define $\tau_{\alpha, \beta} = \varphi_\beta \varphi^{-1}_\alpha$. Likewise we define the transition map going the other way, $\tau_{\beta, \alpha} = \varphi_\alpha \varphi^{-1}_\beta$.
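For a concrete toy model of these transition maps (my own example, all names invented): take the unit circle $S^1$ with $n = 1$, let $\varphi_\alpha$ project the upper semicircle to its $x$-coordinate and $\varphi_\beta$ project the right semicircle to its $y$-coordinate. On the images of the overlap (the open first-quadrant arc) the transition map works out to $\tau_{\alpha,\beta}(x)=\sqrt{1-x^2}$:

```python
import math

# Two charts on the unit circle S^1:
# phi_alpha: upper semicircle (y > 0) -> its x-coordinate
# phi_beta:  right semicircle (x > 0) -> its y-coordinate
def phi_alpha(p): return p[0]
def phi_alpha_inv(x): return (x, math.sqrt(1 - x * x))  # inverse on the upper half
def phi_beta(p): return p[1]
def phi_beta_inv(y): return (math.sqrt(1 - y * y), y)   # inverse on the right half

# Transition maps, defined only on the images of the overlap (0 < x, y < 1).
def tau_ab(x): return phi_beta(phi_alpha_inv(x))  # x -> sqrt(1 - x^2)
def tau_ba(y): return phi_alpha(phi_beta_inv(y))  # y -> sqrt(1 - y^2)

x = 0.6
roundtrip = tau_ba(tau_ab(x))  # the two transitions are mutually inverse
```

This also makes the domain-restriction issue raised later in the thread visible: `tau_ab` is only meaningful on $\varphi_\alpha(U_\alpha \cap U_\beta) = (0,1)$, not on all of $\varphi_\alpha(U_\alpha)$.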

I found it helpful to work through this before tackling your notation.

So suppose the coordinates (functions) in $U$ are $\{x^1,x^2,....,x^n\}$ and those in $U'$ are $\{x'^1,x'^2,....,x'^n\}$ and since these are equally valid coordinates for our point, we must assume functional relation between these 2 sets of coordinates.

Now I feel equipped to understand this.

We have $m \in U_\alpha \cap U_\beta$. Then I can write

$\varphi_\alpha(m) = (\alpha^i)$ and $\varphi_\beta(m) = (\beta^i)$, where the index in both cases runs up to the $n$ in $\mathbb R^n$. I don't think we talked about the fact that the dimension is the same all over, but that seems to be part of the nature of manifolds.

Question: You notated your ordered n-tuple with set braces rather than tuple-parens. Is this an oversight or a feature? I can't tell. I'll assume you meant parens to indicate an ordered $n$-tuple.

Also you referred to the coordinates as functions, and you did that earlier as well. I'm a little unclear on what you mean. Certainly, for example, $\alpha^i = \pi_i \varphi_\alpha(m)$; in other words the $i$-th coordinate with respect to $\varphi_\alpha$ is the $i$-th projection map composed on $\varphi_\alpha$.

Are you identifying each coordinate with its respective projection map? That's perfectly sensible. You probably said that earlier.

For full generality I write

$f^1(x^1,x^2,....,x^n)= x'^1$

$f^2(x^1,x^2,....,x^n) = x'^2$

..............................

$f^n(x^1,x^2,....,x^n)= x'^n$

Aha. This took me a while to sort out. What is $f^i$? Putting all this in my notation, we have

$f^i(\alpha^1, \alpha^2, \dots, \alpha^n) = \beta^i$.

So we seem to be starting with the $\alpha$-coordinates of $m$, using the transfer map $\tau_{\alpha,\beta}$ to get to the corresponding $\beta$-coordinates; then taking the $i$-th coordinate via the $i$-th projection map.

Therefore we must have $f^i = \pi_i \tau_{\alpha,\beta} = \pi_i \varphi_\beta \varphi_\alpha^{-1}$.

As far as I can tell this is the equation that relates your notation to mine. Have I got this right?

Or compactly $f^j(x^k)=x'^j$.

I understand that. But note that it's ambiguous. Does $f^j$ act on the real number $x^k$? No, actually it acts on the $n$-tuple $(x^k)_{k=1}^n$. So if we are pedants (and that's a good thing to be when we are first learning a subject!) it is proper to write $f^j((x^k)_{k=1}^n)$. Whenever we see $f^j(x^k)$ we have to remember that we are feeding an $n$-tuple into $f^j$, and not a real number.

But since the numerical value of each $x'^j$ is completely determined by the $f^j$, it is customary to write this as $x'^j= x'^j(x^k)$, as ugly as it seems at first sight*.

This is very interesting. Let me say this back to you. $m$ has $\beta$-coordinates $(\beta^i)$. And now what I think you are saying is that we are going to identify the coordinate $\beta^i$ with the map $f^i = \pi_i \varphi_\beta \varphi_\alpha^{-1}$. Is that right? We identify each $\beta$-coordinate with the process that led us to it! Very self-referential

This is what I understand you to be saying, please confirm.

ADDENDUM: No I no longer understand this. $f^i$ doesn't play favorites with some particular $\beta^i$. It makes sense to say that $f^i$ maps $\varphi_\alpha(m)$ to the $i$-th coordinate of $\varphi_\beta(m)$. But it's a different $f^i$ for each $m$.

I think I am confused. I should sort this out before I post but I'll just throw this out there.

This is the coordinate transformation $U \to U'$. And assuming an inverse, we will have quite simply $x^k=x^k(x'^h)$ for $U' \to U$

* Each $f^i$ is a map from $\mathbb R^n$ to the reals. It inputs an $n$-tuple that is the $\alpha$-representation of a point $m$; and outputs a single real number, the $i$-th coordinate of the $\beta$-representation of $m$.

So the only way to make sense of what you wrote is that the collection of all the $f^i$ 's are the coordinate transformations.

Actually what I understood from the Wiki article is that the transfer maps were the coordinate transformations. So maybe I'm confused on this point. Can you clarify?

* There's actually a little swindle going on with $\varphi_\alpha$. At first it was a map from $U$ to some open subset of $\mathbb R^n$. But in order to pull back along $\varphi_\alpha^{-1}$ we have to restrict the domain to the image $\varphi_\alpha(U_\alpha \cap U_\beta)$. So we don't really have a map from $U$ to $U'$ in your notation; but only from their intersection to itself.

Can you clarify?

Notice I have been careful up to this point to talk in the most general terms (with the 2 exceptions above). Later I will restrict my comments to a particular class of manifolds

It doesn't seem to matter at this point what the topological conditions are. It's all I can do to chase the symbols.

* Ugly it may be, but it simplifies notation in the calculus.

I think I'm with you so far. Just the questions as indicated. Two key questions:

* How the transition maps can be said to be from $U$ to $U'$ when in fact they're only defined from the $\alpha$ and $\beta$ images, respectively, of the intersection. I'm just a little puzzled on this.

* Your notation $x'^j= x'^j(x^k)$. First I thought I understood it and now I've convinced myself $x'^j$ depends on $m$.

* And now that I think about it, the transition maps are from Euclidean space to itself, they're not defined on the manifold.

I'm more confused now than when I started working all this out.

Edited by wtf

##### Share on other sites

I think I understand what you're saying. In my notation, you are using $\beta^i$ as both the value of the $i$-th coordinate of the $\beta$-representation of some point $m \in U_\alpha \cap U_\beta$; and also as the function $\pi_i \varphi_\beta \varphi_\alpha^{-1}$ that maps the $\alpha$-representation of some point $m$ to the $i$-th coordinate of the $\beta$-representation of $m$.

That's how I'm understanding this. You're taking the $i$-th coordinate to be both the function and the specific value for a given $m$. It's a little bit subtle. The REAL NUMBER $\beta^i$ changes as a function of $m$; but the FUNCTION $\beta^i$ does not.

Is that right? I want to make sure I'm nailing down this formalism.

Secondly I believe that you are a little confusing or inaccurate when you say the transfer maps (without the extra projection at the end) go from $U$ to $U'$. Rather the transition maps go from $\varphi_\alpha(U_\alpha \cap U_\beta)$ to $\varphi_\beta(U_\alpha \cap U_\beta)$ and back.

Since the charts are homeomorphisms, so are the transfer maps in both directions. And I've read ahead on Wiki and a couple of DiffGeo texts I've found, and I see that if the transfer maps are differentiable or smooth then we call the manifold differentiable or smooth. That makes sense. We already know how to do calculus on Euclidean space.

So I'm a little confused again ... the charts themselves don't have to be differentiable or smooth as long as the transfer maps (on the restricted domain) are. Is that correct? So for example the charts could have corners outside the areas of overlap? Perhaps you can help me understand that point.

##### Share on other sites

Wow wtf, so much to respond to. Sorry for the delay but we have been without power until now. The best I can do for now is to re-iterate my earlier post in a slightly different form

I am aware that, on forums such as this it is considered a hanging offence to disagree with the sacred Wiki, so let us say I have confused you. Specifically FORGET the term "transition function". But I am quite willing to use the Wiki notation, as you say you prefer it......

So.

We have 3 quite different mappings in operation here. The first is our homeomorphism: given some open set $U \subsetneq M$, a map $\varphi:U \to R^n$. Being a homeomorphism it is by definition invertible.

Suppose there exist 2 such open sets, say $U_\alpha,\,\,U_\beta$ with $U_\alpha \cap U_\beta \ne \emptyset$. In fact suppose the point $m \in U_\alpha \cap U_\beta$, so that $\varphi_\alpha:U_\alpha \to V \subseteq R^n$ and $\varphi_\beta:U_\beta \to W \subseteq R^n$.

So the composite function $\varphi_\beta \circ \varphi_\alpha^{-1} \equiv \tau_{\alpha,\beta}:V \to W$, with $V, W \subseteq R^n$. One calls this an "induced mapping" (but no, $\varphi_\alpha^{-1}$ is not a pullback, it's a simple inverse)

Your Wiki calls this a transition, I do not. So let's forget the term.

But note that single points in $V,\,\,W$ are Real n-tuples, say $(\alpha^1,\alpha^2,....,\alpha^n)$ and $(\beta^1,\beta^2,....,\beta^n)$, so that $\tau_{\alpha,\beta}((\alpha^1,\alpha^2,....,\alpha^n))= (\beta^1,\beta^2,....,\beta^n)$

So the second mapping I defined as: for the point $m \in U_\alpha$, say, the image under $\varphi_\alpha$ is the n-tuple $(\alpha^1,\alpha^2,....,\alpha^n)$; likewise $\varphi_\beta(m)= (\beta^1,\beta^2,....,\beta^n)$. Then there always exist projections $\pi_\alpha^1(\alpha^1,\alpha^2,....,\alpha^n)= \alpha^1$ and so on, likewise for the images under $\pi_\beta^j$ of the n-tuple $(\beta^1,\beta^2,....,\beta^n)$.

Note that since the $\alpha^j$, say, are Real numbers this is a mapping $R^n \to R$.

So the composite mapping (function) $\pi_\alpha^j \circ \varphi_\alpha \equiv x^j$ is a Real-valued mapping (function) $U_\alpha \to R$. The $n$ images under these mappings of $m \in U_\alpha$ are simply the set $\{\alpha^1,\alpha^2,....,\alpha^n\}$, and the images of $m \in U_\beta$ are the set $\{\beta^1,\beta^2,....,\beta^n\}$, so that $x^j(m) = \alpha^j$ and $x'^k(m) = \beta^k$

The $x^j,\,x'^k$ are coordinate functions, or simply coordinates
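Here is a minimal sketch of the composite $x^j = \pi_\alpha^j \circ \varphi_\alpha$ (the chart, the point label, and the helper names are all my own invention, purely for illustration): the chart sends a manifold point to an n-tuple, and each coordinate function hands back one real number:

```python
def coordinate_function(phi, j):
    """x^j = pi^j ∘ phi : manifold point -> its j-th real coordinate."""
    return lambda m: phi(m)[j]

# A toy chart: the "manifold point" is just a label, and phi_alpha
# looks up its n-tuple of coordinates (here n = 3).
phi_alpha = {"m": (1.0, 2.0, 3.0)}.__getitem__

x = [coordinate_function(phi_alpha, j) for j in range(3)]  # x[j] plays x^j

values = [x[j]("m") for j in range(3)]  # x^j(m) = alpha^j
```

The point of the sketch is the typing: each $x^j$ takes a manifold point (not an n-tuple) and returns a single real number, which is exactly what "coordinate function" means above.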

The coordinate transformations I referred to are simply mappings from $\{x^1,x^2,....,x^n\} \to \{x'^1,x'^2,....,x'^n\}$; they map (sets of) coordinates (functions) to (sets of) coordinates (functions) if and only if they refer to the same point in the intersection of 2 open sets. This mapping is multivariate - that is, it is NOT simply the case that say $f^1(x^1)=x'^1$, but rather $f^1(x^1,x^2,....,x^n)=x'^1$.

Note that the argument of $f^j$ is a set, not a tuple, appearances to the contrary

I hope this helps.

It also seems I may have confused you slightly with my index notation - but first see if the above clarifies anything at all.

P.S. I am generally very careful with my notation. In particular I will always be careful to distinguish a tuple from a set

Edited by Xerxes

##### Share on other sites

Sorry for the delay but we have been without power until now. The best I can do for now is to re-iterate my earlier post in a slightly different form

Sorry about your power loss but the recent pace is fine for me. It might have been me who pulled the plug

I am aware that, on forums such as this it is considered a hanging offence to disagree with the sacred Wiki, so let us say I have confused you. Specifically FORGET the term "transition function".

I'm just grasping at straws to follow your posts. FWIW here is a screen shot from Introduction to Differential Geometry by Robbin and Salamon. This is from page 59 of this pdf. https://people.math.ethz.ch/~salamon/PREPRINTS/diffgeo.pdf

They use the term transition map exactly as I've used it. But no matter, we can call them something else. But it's clear what they are, you are in agreement even if you prefer to use a different name.

We have 3 quite different mappings in operation here. The first is our homeomorphism: given some open set $U \subsetneq M$, a map $\varphi:U \to R^n$. Being a homeomorphism it is by definition invertible.

Suppose there exist 2 such open sets, say $U_\alpha,\,\,U_\beta$ with $U_\alpha \cap U_\beta \ne \emptyset$. In fact suppose the point $m \in U_\alpha \cap U_\beta$, so that $\varphi_\alpha:U_\alpha \to V \subseteq R^n$ and $\varphi_\beta:U_\beta \to W \subseteq R^n$.

So the composite function $\varphi_\beta \circ \varphi_\alpha^{-1} \equiv \tau_{\alpha,\beta}:V \to W$, with $V, W \subseteq R^n$. One calls this an "induced mapping" (but no, $\varphi_\alpha^{-1}$ is not a pullback, it's a simple inverse)

Your Wiki calls this a transition, I do not. So let's forget the term.

Ok. I agree with all your notation so far. As I say it took me the duration of your power outage for all this to become clear so feel free to pretend the power's out as I work to absorb subsequent posts.

But note that single points in $V,\,\,W$ are Real n-tuples, say $(\alpha^1,\alpha^2,....,\alpha^n)$ and $(\beta^1,\beta^2,....,\beta^n)$, so that $\tau_{\alpha,\beta}((\alpha^1,\alpha^2,....,\alpha^n))= (\beta^1,\beta^2,....,\beta^n)$

Yes, entirely clear.

So the second mapping I defined as: for the point $m \in U_\alpha$, say, the image under $\varphi_\alpha$ is the n-tuple $(\alpha^1,\alpha^2,....,\alpha^n)$; likewise $\varphi_\beta(m)= (\beta^1,\beta^2,....,\beta^n)$. Then there always exist projections $\pi_\alpha^1(\alpha^1,\alpha^2,....,\alpha^n)= \alpha^1$ and so on, likewise for the images under $\pi_\beta^j$ of the n-tuple $(\beta^1,\beta^2,....,\beta^n)$.

Perfectly clear.

Note that since the $\alpha^j$, say, are Real numbers this is a mapping $R^n \to R$.

So the composite mapping (function) $\pi_\alpha^j \circ \varphi_\alpha \equiv x^j$ is a Real-valued mapping (function) $U_\alpha \to R$. The $n$ images under these mappings of $m \in U_\alpha$ are simply the set $\{\alpha^1,\alpha^2,....,\alpha^n\}$, and the images of $m \in U_\beta$ are the set $\{\beta^1,\beta^2,....,\beta^n\}$, so that $x^j(m) = \alpha^j$ and $x'^k(m) = \beta^k$

Yes.

The $x^j,\,x'^k$ are coordinate functions, or simply coordinates

Ok so we are identifying the coordinates with the projection mappings composed on the charts that produce them.

The coordinate transformations I referred to are simply mappings from $\{x^1,x^2,....,x^n\} \to \{x'^1,x'^2,....,x'^n\}$; they map (sets of) coordinates (functions) to (sets of) coordinates (functions) if and only if they refer to the same point in the intersection of 2 open sets. This mapping is multivariate - that is, it is NOT simply the case that say $f^1(x^1)=x'^1$, but rather $f^1(x^1,x^2,....,x^n)=x'^1$.

Yes this is clear to me.

Note that the argument of $f^j$ is a set, not a tuple, appearances to the contrary

I take this to mean that $\{f^j\}_{j=1}^n$ is a set of maps where $f^j = \pi_j \varphi_\beta \varphi_\alpha^{-1}$, is that right?

I hope this helps.

Yes very much.

It also seems I may have confused you slightly with my index notation - but first see if the above clarifies anything at all.

Yes much better. Of course the couple of days I spent working through this in my own mind helped a lot too.

P.S. I am generally very careful with my notation.

Maybe I should leave that remark alone. Let me just say that I sometimes find it productive to work through points of murkiness in your exposition. I'm ready for the next step, and do feel free to take this as slowly as you like. Also if you have any particular text you find helpful feel free to recommend it. There are so many different books out there.

##### Share on other sites

OK, good. We have both worked hard to arrive at a very simple conclusion: if a point in our manifold "lives" jointly in 2 different "regions", then it is entitled to 2 different coordinate representations, and these must be related by a coordinate transformation.

I will say this to our nearly 1000 lurkers: you have seen an example of rigorous mathematics at work, far from the hand waving of my simple (but true) statement above.

wtf. I had planned to say more about the finer points of differentiable manifolds, but on reflection have decided to try and get back to the matter at hand - tensors in the context of differential geometry, since geodief stated his interest was started by an attempt to understand the General Theory.

I will say no more tonight as I collided with a bottle of wine earlier, causing serious (but temporary) brain damage

##### Share on other sites

wtf. I had planned to say more about the finer points of differentiable manifolds, but on reflection have decided to try and get back to the matter at hand - tensors in the context of differential geometry, since geodief stated his interest was started by an attempt to understand the General Theory.

Thanks Xerxes for all your patience.

That is actually my interest too so this direction is perfect for me. My goal is to understand tensors in differential geometry and relativity at a very simple level, but sufficient to understand the connection between them and the tensor product as defined in abstract algebra.

In fact lately I've been finding DiffGeo texts online and flipping to their discussion of tensors. Sometimes it's similar to what I've seen and other times it's different. It's all vaguely related but I think it will all come together for me if I can see an actual tensor in action. And if it's the famous metric tensor of relativity, I'll learn some physics too. That's a great agenda.

That's what I meant the other day when I said I hoped we didn't have to slog through the calculus part. I don't want to have to do matrices of partial derivatives and the implicit function theorem and all that jazz, even if it's the heart of the subject. I just want to know what the metric tensor in relativity is and be able to relate it to the tensor product. Partial derivatives make my eyes glaze over even though I've taken multivariable calculus and could explain and compute them if I had to.

Along the way, maybe I'll figure out where the duals come from. Because with or without the duals you get the same tensor product; but the duals are regarded as important in relativity. That's the part I'm missing ... why we care about the duals when they're not needed in the definition of tensor product.

I will say no more tonight as I collided with a bottle of wine earlier, causing serious (but temporary) brain damage

Was that collision between the glass container and your skull? Or of the wine molecules with your brain cells? Or did you use the latter to mitigate the effects of the former?

Edited by wtf

##### Share on other sites

Partial derivatives make my eyes glaze over

I am very sorry to hear that. I cannot at present see how to proceed without a lot of it. Differential geometry - yes even in the bastard version that physicists use - involves a lot of partial derivatives.

I have been quiet here recently as I have been working overseas. Home tomorrow, when I will try to work out a strategy

##### Share on other sites

I am very sorry to hear that. I cannot at present see how to proceed without a lot of it. Differential geometry - yes even in the bastard version that physicists use - involves a lot of partial derivatives.

I have been quiet here recently as I have been working overseas. Home tomorrow, when I will try to work out a strategy

I'm perfectly happy to have some "character building opportunities", as they say. Partial differentiate away. No hurry on anything.

ps -- In case I'm being too oblique ... just write whatever you want and I'll work through it.

Edited by wtf

##### Share on other sites

I'm perfectly happy to have some "character building opportunities", as they say. Partial differentiate away.

OK, time to "man up" all readers.

First the boring bit - notation. One says that a function is of class $C^0$ if it is continuous. One says it is of class $C^1$ if it is differentiable to order 1. One says it is of class $C^{\infty}$ if it is differentiable to all imaginable orders, in which case one says it is a "smooth function". I denote the space of all Real $C^{\infty}$ functions at the point $m \in M$ by $C^{\infty}_m$

So recall from elementary calculus that, given a $C^1$ function $f:\mathbb{R} \to \mathbb{R}$ with $a \in \mathbb{R}$ then $\frac{df}{da}$ is a Real number.

Recall also that this can be interpreted as the slope of the tangent to the curve $f(a)$ vs $a$.

Using this I make the following definition:

For any point $m \in U \subsetneq M$ with coordinates (functions) $x^1,x^2,....,x^n$, I say a tangent vector at the point $m \in U \subsetneq M$ is an object that maps $C^{\infty}_m \to \mathbb{R}$ so that, for any $f \in C^{\infty}_m$, and since $m = \{x^1,x^2,...,x^n\}$, we may write $v=\frac{\partial}{\partial x^1}f + \frac{\partial}{\partial x^2}f+....+\frac{\partial}{\partial x^n}f$.

Or more succinctly as $v= \sum\nolimits^n_{j=1} \frac{\partial}{\partial x^j}f \in \mathbb{R}$.

As an illustration, recall the mapping (homeomorphism) $h:U \to R^n$ where $h(m)=(u^1,u^2,....,u^n)\in R^n$ and the projections $\pi_1((u^1,u^2,....,u^n))=u^1 \in \mathbb{R}$ and so on. Recall also I defined the coordinate functions in $U \subsetneq M$ by $x^j= \pi_j \circ h$ so the $x^j$ really are functions.

So I may have that $\frac{\partial}{\partial x^j}x^k= \delta^k_j$ where $\delta^k_j = \begin{cases}1\quad j=k\\0\quad j \ne k\end{cases}$.

So in fact, since this defines linear independence, we may take the $\frac{\partial}{\partial x^h}$ to be a basis for a tangent vector space. At the point $m \in U \subsetneq M$ one calls this $T_mM$
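The identity $\frac{\partial}{\partial x^j}x^k= \delta^k_j$ can be checked numerically with nothing more than central differences. A throwaway sketch of my own (the helpers `partial` and `x` are invented for illustration): since the coordinate function $x^k$ just reads off the $k$-th slot of the coordinate tuple, its partials form the Kronecker delta:

```python
def partial(f, j, point, h=1e-6):
    """Central-difference partial derivative of f in the j-th coordinate."""
    p_plus, p_minus = list(point), list(point)
    p_plus[j] += h
    p_minus[j] -= h
    return (f(tuple(p_plus)) - f(tuple(p_minus))) / (2 * h)

def x(k):
    """The k-th coordinate function: it just reads the k-th slot."""
    return lambda p: p[k]

m = (0.3, 0.7)  # the coordinate 2-tuple of some point m (n = 2)
delta = [[round(partial(x(k), j, m)) for k in range(2)] for j in range(2)]
print(delta)  # [[1, 0], [0, 1]] -- the Kronecker delta
```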

Good luck!

##### Share on other sites

Good luck!

<Star Trek computer voice> Working ...

Actually I read through it and it looks pretty straightforward. I'll work through it step by step but I didn't see anything I didn't understand. The tangent space is an n-dimensional vector space spanned by the partials. I understand that, I just need practice with the symbology.

I see at the end you bring in the Kronecker delta. This is something I'm familiar with as a notational shorthand in algebra. I've heard that it's a tensor but at the moment I don't understand why. I can see that by the time I work through your post I'll understand that. This seems like a fruitful direction for me at least.

Edited by wtf

##### Share on other sites

The tangent space is an n-dimensional vector space spanned by the partials.

Yes. In fact these are called differential operators, and are closely related to the directional derivative. They are also the closest we can get, in an arbitrary manifold, to the notion of a directed line segment that is used to define vectors in Euclidean space.

Anyway, recall I wrote the property of linear independence for these bad boys as $\frac{\partial}{\partial x^j}x^k = \begin{cases}1 \quad j=k\\0\quad j \ne k \end{cases}$

Yes, the K. delta is a tensor - it's called a "numerical tensor", a rather special case.

Anyway, from the above, the following is immediate...

If I accept these differential operators as a basis for $T_mM$ then I can write an arbitrary tangent vector as $v=\sum\nolimits_{j=1}^n \alpha^j \frac{\partial}{\partial x^j}$ so that

$v(x^j) = \alpha^j$ which is unique to this vector.
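The identity $v(x^j) = \alpha^j$ can also be checked numerically. A sketch of my own (all names invented): build $v=\sum_j \alpha^j \frac{\partial}{\partial x^j}$ as a derivation using central differences, then feed it the coordinate functions; the components come straight back out:

```python
def tangent_vector(alphas, point, h=1e-6):
    """v = sum_j alpha^j d/dx^j at `point`, acting on functions of the coordinates."""
    def v(f):
        total = 0.0
        for j, a in enumerate(alphas):
            p_plus, p_minus = list(point), list(point)
            p_plus[j] += h
            p_minus[j] -= h
            total += a * (f(tuple(p_plus)) - f(tuple(p_minus))) / (2 * h)
        return total
    return v

v = tangent_vector((2.0, -3.0), (0.5, 0.5))  # alpha^1 = 2, alpha^2 = -3

# Applying v to the coordinate functions x^1, x^2 recovers the components:
c1 = round(v(lambda p: p[0]), 6)  # 2.0
c2 = round(v(lambda p: p[1]), 6)  # -3.0
```

This is why the components are "unique to this vector": the coordinate functions act as probes that read them off.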

Anyway.....

Suppose the point $m \in M$ and the space $C_m^{\infty}$ of all smooth functions $M \to \mathbb{R}$ at $m$.

Recall I defined the tangent space $T_mM$ at $m$ as the space of mappings $v:C_m^{\infty} \to \mathbb{R}$, so that $v(f) \in \mathbb{R}$

For the mapping $f:M \to \mathbb{R}$ I now define the differential $df:T_mM \to \mathbb{R}$. This is sometimes called the pushforward - see my post http://www.scienceforums.net/topic/93098-pushing-pulling-and-dualing/

I insist on a numerical identity $df(v)= v(f)$ for any $f \in C_m^{\infty}$ and any $v \in T_mM$

To see why we care, let me replace the arbitrary function $f$ by the coordinate functions $x^j$ so that $dx^j(v)=v(x^j)$

I now replace the vector $v \in T_mM$ by the basis vectors $\frac{\partial}{\partial x^k}$ so that

$dx^j(\frac{\partial}{\partial x^k})=\frac{\partial}{\partial x^k}(x^j)$

So we know that the RHS is $\frac{\partial}{\partial x^k}(x^j)= \delta ^j_k$, so that the LHS implies that $dx^j$ and $\frac{\partial}{\partial x^k}$ are linearly independent.

But since the basis for $T_mM$ is already complete, we have to say that the $dx^j$ are a basis for another but related vector space.

This is called the dual space and is written $T^*_mM$.

Note the existence of the dual space is thus a mathematical inevitability, not a mere whim
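A finite-dimensional sketch of the pairing $dx^j(\frac{\partial}{\partial x^k})=\delta^j_k$ (my own illustration, not from the post): if we represent each tangent vector by its coefficient tuple in the basis $\{\frac{\partial}{\partial x^k}\}$, then the dual basis covector $dx^j$ is simply the map that reads off the $j$-th coefficient, and pairing the two bases gives the identity matrix:

```python
# A tangent vector is its coefficient tuple in the basis {d/dx^k};
# the dual basis covector dx^j reads off the j-th coefficient.
def dx(j):
    return lambda v: v[j]

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # d/dx^1, d/dx^2, d/dx^3 (n = 3)

# pairing[j][k] = dx^j(d/dx^k) = Kronecker delta
pairing = [[dx(j)(basis[k]) for k in range(3)] for j in range(3)]
```

The fact that `dx(j)` is a function *on vectors* rather than a vector itself is exactly why the $dx^j$ live in a different (dual) space even though the pairing matrix looks like plain linear independence.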

PS Note this is not a unique situation in mathematics. Consider the space of eigenvectors - the eigenspace - obtained by the action of an operator on a vector space.

##### Share on other sites

You're two posts ahead of me FYI. I haven't worked through the earlier one yet. Been a little busy with other things.

Note the existence of the dual space is thus a mathematical inevitability, not a mere whim

I'm thinking that you are intending this remark as a response to my questions about why dual spaces creep into tensor products, but I don't think you are understanding my question then. Of course I understand what dual spaces are. But in the algebraic definition of tensor products, duals NEVER show up; while in diffGeo/physics discussions, they ALWAYS show up. That's the gap I'm trying to bridge. Apparently no algebraist has ever set foot in the same room as a differential geometer, else there would be a clear and simple explanation of this expositional mismatch somewhere.

I hope to get through the earlier post today or tomorrow or the day after.

##### Share on other sites

Ok now that I'm going through this I'm completely confused by where all this is taking place. We don't know how to take derivatives on a manifold yet but your notation is assuming that we can.

First the boring bit - notation. One says that a function is of class $C^0$ if it is continuous. One says it is of class $C^1$ if it is differentiable to order 1.

Picky refinement, my understanding is that a $C^1$ function has a continuous derivative. There are functions with derivatives that fail to be continuous on one or more (even infinitely many) points. http://math.stackexchange.com/questions/292275/discontinuous-derivative/292380#292380

One says it is of class $C^{\infty}$ if it is differentiable to all imaginable orders

More pickiness, this is a trivial point but of course you mean for all positive integer orders. Then it's no longer a function of someone's imagination. I was thinking fractional derivatives, who knows what else.

in which case one says it is a "smooth function". I denote the space of all Real $C^{\infty}$ functions at the point $m \in M$ by $C^{\infty}_m$

Ok here is an expositional problem that confuses me. This is not pickiness, I'm genuinely confused. We've been letting $M$ stand for a manifold. But we don't know how to differentiate a function on a manifold. In fact you said that the charts are only homeomorphisms, so for all we know our manifold $M$ is so full of corners it can't be differentiated at all. In order to get past this point I have to either assume we've defined differentiability on a manifold somehow, or else that we're working in $\mathbb R^n$. I hope you will clarify this point.

So recall from elementary calculus that, given a $C^1$ function $f:\mathbb{R} \to \mathbb{R}$ with $a \in \mathbb{R}$ then $\frac{df}{da}$ is a Real number.

Little point of notational confusion. I'd believe $\frac{df}{dx}\biggr\rvert_{x=a}$ or $\frac{df}{dx}(a)$ but I'm not sure about your notation. Is that a typo or a standard notation?

Recall also that this can be interpreted as the slope of the tangent to the curve $f(a)$ vs $a$.

Yes.

Using this I make the following definition:

For any point $m \in U \subsetneq M$ with coordinates (functions) $x^1,x^2,....,x^n$ then I say a tangent vector at the point $m \in U \subsetneq M$ is an object that maps $C^{\infty}_m \to \mathbb{R}$

Now you see I have the $M$ problem in spades. I see you talking about tangent vectors to a point on a manifold but I have no idea how to define differentiability on a manifold. Rather than look it up I thought I'd just ask.

Of course if we're in $\mathbb R^n$ this is clear.

This is still an interesting point of view even if I imagine that we are talking about Euclidean space and not manifolds. We're fixing a point and letting the functions vary. If we are in single-variable calculus, we can let $x = 1$ for example, and then $\frac{df}{dx}(1) : C^\infty_1 \to \mathbb R$ is a function that inputs $x^2$ and outputs $2$, inputs $x^3$ and outputs $3$, inputs $e^x$ and outputs $e$, and so forth.
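To make the "fix the point, vary the function" idea concrete, here is a sympy sketch of exactly the functional described above (my own illustration, nothing more):

```python
import sympy as sp

x = sp.symbols('x')

def D_at(a):
    """The linear functional f -> f'(a): fix the point a, let the function f vary."""
    return lambda f: sp.diff(f, x).subs(x, a)

v = D_at(1)   # the 'derivative at x = 1' functional on smooth functions

assert v(x**2) == 2          # inputs x^2, outputs 2
assert v(x**3) == 3          # inputs x^3, outputs 3
assert v(sp.exp(x)) == sp.E  # inputs e^x, outputs e

# Linearity, as expected of a functional
assert v(3*x**2 + x**3) == 3*v(x**2) + v(x**3)
```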

You see I'm still bothered by your notation. Did you really want me to write $\frac{df}{d1}$ as you indicated earlier? I have a hard time believing that but I'll wait for your verdict.

It's clear to me that by the linearity of the derivative, $\frac{df}{dx}(1)$ is a linear functional on $C^\infty_1$. But the domain is the real numbers, not some arbitrary one-dimensional manifold that I don't know how to take derivatives on. For one thing don't we need an algebraic and metric structure of some sort so that we can add and subtract vectors and take limits?

So I do sort of see where you're going with this. But I'm totally confused about how we lift the differentiable structure of $\mathbb R^n$ to $M$.

so that, for any $f \in C^{\infty}_m$ and since $m = \{x^1,x^2,...,x^n\}$ we may write $v=\frac{\partial}{\partial x^1}f + \frac{\partial}{\partial x^2}f+....+\frac{\partial}{\partial x^n}f$.

Or more succinctly as $v= \sum\nolimits^n_{j=1} \frac{\partial}{\partial x^j}f \in \mathbb{R}$.

Ok I believe this. Or maybe not. First, you are using those set brackets again and I do not for the life of me see how that can make any sense. There's no order to sets so how do you know which coordinate function goes with which coordinate? Secondly of course there is the manifold problem again, I don't know how to define a differentiable function on a manifold.

Now if I forget manifolds and pretend we're in $\mathbb R^n$ then I suppose we could define the functional $v=\frac{\partial}{\partial x^1}(m)f + \frac{\partial}{\partial x^2}(m)f+....+\frac{\partial}{\partial x^n}(m)f$. I would almost believe this notation as I have written it.

This particular functional is defined at the point $m$. However I see that you've left that part out and you're defining this functional for all points? But then it's not defined correctly. I don't know what is the input to the functional.

Well, like I say, it's more or less clear what you're thinking but I'm lost on the points I've indicated.

ps -- Ah ... slight glimmer ... since $m$ itself has coordinates, we can break up the partials as acting on each coordinate separately, and we'll end up with some Kronecker-fu leading to the rest of your exposition. Is that the right intuition?

I'll push on.

(Later edit) ...

I can see a way to define differentiability.

If $M$ is a manifold and $U \subset M$ is an open set, and if $\varphi : U \to \mathbb R^n$ is a chart, and $f : U \to \mathbb R$ is a function, then we would naturally look at $f \varphi^{-1} : \varphi(U) \to \mathbb R$.

If $f \varphi^{-1}$ is smooth then (since $\varphi(U) \subset \mathbb R^n$) we can take the partials with respect to the coordinate functions and then I think the rest of your notation works.

Is that right?
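For concreteness, here is a sketch of that chart construction in sympy, using the upper unit semicircle as the manifold and projection onto the x-axis as the chart (my own toy example, not from the thread):

```python
import sympy as sp

t = sp.symbols('t')

# Manifold M: the upper unit semicircle. The chart phi projects (x, y) -> x,
# so phi^{-1}(t) = (t, sqrt(1 - t^2)) for t in the open interval (-1, 1).
phi_inv = (t, sp.sqrt(1 - t**2))

# A function f on M: the height of a point, f(x, y) = y
def f(p):
    return p[1]

# Differentiability of f is judged through the chart, via f o phi^{-1}
f_in_chart = f(phi_inv)        # sqrt(1 - t^2), an ordinary real function
df = sp.diff(f_in_chart, t)    # -t/sqrt(1 - t^2), defined on all of (-1, 1)

assert sp.simplify(df + t / sp.sqrt(1 - t**2)) == 0
```

The manifold itself never gets differentiated; only the composite $f \circ \varphi^{-1}$ on $\varphi(U) \subset \mathbb R$ does.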

Edited by wtf

##### Share on other sites

I'm completely confused by where all this is taking place. We don't know how to take derivatives on a manifold yet but your notation is assuming that we can.

We can. This is because of the continuous isomorphism (homeomorphism) $U \simeq R^n$. Or if you prefer, our manifold is locally indistinguishable from an open subset of $R^n$

Picky refinement, my understanding is that a $C^1$ function has a continuous derivative.

Yes, but a $C^0$ function is by definition a continuous function, and $C^1$ subsumes $C^0$. As I said

I'm genuinely confused. We've been letting $M$ stand for a manifold. But we don't know how to differentiate a function on a manifold.

Yes we do - see above

for all we know our manifold $M$ is so full of corners it can't be differentiated at all.

If it is of class $C^{\infty}$ all functions (including coordinate functions) are continuous - no corners!

In order to get past this point I have to either assume we've defined differentiability on a manifold somehow, or else that we're working in $\mathbb R^n$.

Roughly speaking we are working in $R^n$, or something that "looks very like it", namely the open subset of $M$ where the homeomorphism $U \simeq R^n$ holds.

Little point of notational confusion. I'd believe $\frac{df}{dx}\biggr\rvert_{x=a}$ or $\frac{df}{dx}(a)$ but I'm not sure about your notation. Is that a typo or a standard notation?

Its standard (see below)

You see I'm still bothered by your notation. Did you really want me to write $\frac{df}{d1}$ as you indicated earlier? I have a hard time believing that but I'll wait for your verdict.

It's clear to me that by the linearity of the derivative, $\frac{df}{dx}(1)$ is a linear functional on $C^\infty_1$.

I'm afraid I cannot parse this.

Look, suppose that $f(x)=y$. Then I can write $\frac{dy}{dx}=\frac{d(f(x))}{dx}$. But the "x" in the "numerator" MUST be the same as the "x" in the denominator, so I introduce no ambiguity by writing $\frac{df}{dx}$. This is standard

you are using those set brackets again and I do not for the life of me see how that can make any sense. There's no order to sets so how do you know which coordinate function goes with which coordinate?

The superscripts in $x^1,x^2,....,x^n$ are just tracking indices - they do not imply a natural order. I may have $x=x^1,\,y=x^2,\,z=x^3$ or equally I may have $x=x^2,\,y=x^3,\,z=x^1$. It doesn't matter

Now if I forget manifolds and pretend we're in $\mathbb R^n$ then I suppose we could define the functional $v=\frac{\partial}{\partial x^1}(m)f + \frac{\partial}{\partial x^2}(m)f+....+\frac{\partial}{\partial x^n}(m)f$. I would almost believe this notation as I have written it.

Well, you need to be careful. If I write, say, $\frac{d}{dx}(m)$ I really mean $\frac{d(m)}{dx}$, and this is not what you meant. What you write has no meaning. In terms of notation, if you wanted to specify a point of application you could write $\frac{df}{dx}\big|_m$ for $m \in U$

This particular functional is defined at the point $m$. However I see that you've left that part out and you're defining this functional for all points? But then it's not defined correctly. I don't know what is the input to the functional.

The input for any functional is, by definition, a vector. The output is a Real number. What you wrote (sorry, I lost it in transcription) is not a functional.

In my last post I gave you 2 functionals - $df$ and $dx^j$. Please check that they are mappings from a vector space to the Real numbers

ps -- Ah ... slight glimmer ... since $m$ itself has coordinates, we can break up the partials as acting on each coordinate separately, and we'll end up with some Kronecker-fu leading to the rest of your exposition. Is that the right intuition?

Oh yes. Good.

If $M$ is a manifold and $U \subset M$ is an open set, and if $\varphi : U \to \mathbb R^n$ is a chart, and $f : U \to \mathbb R$ is a function, then we would naturally look at $f \varphi^{-1} : \varphi(U) \to \mathbb R$.

If $f \varphi^{-1}$ is smooth then (since $\varphi(U) \subset \mathbb R^n$) we can take the partials with respect to the coordinate functions and then I think the rest of your notation works.

Is that right?

Sort of, but your reasoning escapes me. If on the LHS of the above you mean $f(\varphi^{-1}):\varphi(U) \to \mathbb{R}$ or $f\circ \varphi^{-1}:\varphi(U) \to \mathbb{R}$ (they mean the same) and since $(\varphi^{-1} \circ \varphi)U= U$, then how does your composite function differ from $f:U \to \mathbb{R}$ (which I gave as a definition)?

##### Share on other sites

So, in spite of a sudden lack of interest, I will continue talking to myself, as I hate loose ends.

Recall I gave you in post#27 that, for points in the overlap $U \cap U'$ of open sets, we will have the coordinate transformations $x'^j=x'^j(x^k)$. Notice I am here treating the $x'^j$ as functions, and the $x^k$ as arguments

Suppose some point $m \in U \cap U'$ and a vector space $T_mM$ defined over this point.

Recall also I said in post#41 that for any $v \in T_mM$ that $v(x^j)=\alpha^j$ which are called the components of $v= \alpha^j \frac{\partial}{\partial x^j}$.

Likewise I must have that $v=\alpha'^k \frac{\partial}{\partial x'^k}$. We may assume these are equal, since our vector $v$ is a Real Thing

Since $\alpha^j=v(x^j)$ and $\alpha'^k=v(x'^k)$, we must have that $\alpha'^k= \alpha^j\frac{\partial x'^k}{\partial x^j}$ (summing over the repeated index $j$).

This is the transformation law for the components of a tangent vector, also known (by virtue of the above) as a type (1,0) tensor.

It is no work at all to extract the transformation laws for higher rank tensors, and very little to extract those for type (0,n) tensors.
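As a sanity check, the transformation law $\alpha'^k=\alpha^j\frac{\partial x'^k}{\partial x^j}$ can be worked out explicitly for the Cartesian-to-polar change of coordinates. A sympy sketch (my own example, with made-up components):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Primed coordinates as functions of the unprimed ones: Cartesian -> polar
primed = [sp.sqrt(x**2 + y**2), sp.atan2(y, x)]   # r, theta
unprimed = [x, y]

# Components of v in the unprimed (Cartesian) basis: v = 1*d/dx + 2*d/dy
alpha = [1, 2]

# Transformation law: alpha'^k = alpha^j * (d x'^k / d x^j), summing over j
alpha_primed = [
    sum(alpha[j] * sp.diff(primed[k], unprimed[j]) for j in range(2))
    for k in range(2)
]

# Evaluate at the point (x, y) = (3, 4), where r = 5
vals = [a.subs({x: 3, y: 4}) for a in alpha_primed]
assert vals[0] == sp.Rational(11, 5)   # (1*3 + 2*4)/5
assert vals[1] == sp.Rational(2, 25)   # (-1*4 + 2*3)/25
```

The Jacobian $\frac{\partial x'^k}{\partial x^j}$ does all the work; the vector itself never changes, only its components do.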

PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise

##### Share on other sites

No lack of interest. I'm working through your posts. I've been busy with other things and you're four posts ahead of me now but I intend to catch up.

However you're wrong about differentiability. If I map the graph of the Weierstrass function to the reals by vertical projection, I have a homeomorphism but no possible differentiable structure on the graph because the graph has no derivative at any point. I'll get busy on my next post (which I've drafted but not yet cleaned up) and elaborate on this point.

Well never mind I'll just put this bit up here.

Now the point is that if the map $f \varphi^{-1} : \mathbb R^n \to \mathbb R$ happens to be differentiable (or smooth, etc.) then we say that $f$ is differentiable. Also we need the transition maps to be smooth as well. We talked about them a while back. You can confirm all this in volume one of Spivak's DiffGeo book. I'll add that working through your posts has enabled me to make sense of parts of Spivak; and reading parts of Spivak has enabled me to make sense of your posts. So I am making progress and finding this valuable.

You need to define differentiability this way. Mere homeomorphism is not enough, surely you agree with this point but perhaps forgot? Plenty of continuous functions aren't differentiable. Remember that almost all continuous functions are just like the Weierstrass function, differentiable nowhere.

Likewise your definition of $C^1$ is wrong, you need the function to be continuously differentiable and not just differentiable. There are functions that are differentiable but whose derivative is not continuous, and such functions are not $C^1$. It is of course my curse in life that my ability to be picky and precise exceeds my ability to understand math, and I'm right about these two points despite being ignorant of differential geometry.

I will see if I can focus some attention this week on catching up with your last four posts.

"PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise ..."

Sorry was that for me? I'm paddling as fast as I can. If it's for someone else, personally I welcome any and all posts. This isn't the Royal Society and I'm sure I for one would benefit from trying to understand and respond to any questions about this material at any level.

Edited by wtf

##### Share on other sites

PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise

Would you like me to find someone to dust off your ivory tower? You are a clever guy but your intellectual aloofness and the implicit self-aggrandising I get from the quoted post, does you no favours.

One doesn't know that one might not understand the answer until one asks. Even though one may not understand completely, it may add a useful piece or two to the jigsaw puzzle. At the very least, it gives a person an indication of how far they've got to go and puts signposts in the road ahead. Besides, an answer may not prove useful to the questioner but it will to someone else who is capable, now or in the future; it's never wasted.

I read all your posts in this thread and don't have a clue about most of it but they give me a sense of the scale of what is necessary to be learnt in order to understand this subject. It sets the stage for me, if not the details just yet. With increasing exposure, one becomes familiar with the unfamiliar.

##### Share on other sites

However you're wrong about differentiability. If I map the graph of the Weirstrass function to the reals by vertical projection, I have a homeomorphism but no possible differentiable structure on the graph because the graph has no derivative at any point.

Yes, but at no point did I assert that a continuous function needs to be differentiable. Rather I asserted the converse - a differentiable function must be continuous.

Likewise your definition of $C^1$ is wrong, you need the function to be continuously differentiable and not just differentiable.

Maybe I did not make myself clear. I said that the $C^k$ property for a function "subsumes" the $C^0$ property. If we attach the obvious meaning to the $C$ in $C^k$ we will say that a $C^0$ function is continuous to order zero, a $C^1$ function is continuous to order one..... a $C^k$ function is continuous to order $k$

I am sorry if my language was not sufficiently clear.

##### Share on other sites

PS I do wish that members would not ask questions where either they are not equipped to understand the answers or have no real interest in the subject they raise

I take this to heart and plead guilty. As my philosophy prof once said: The spirit is willing but the flesh is weak. I have the math skills but my interest is drifting. The good news is that your posts enabled me to read parts of Spivak (*) and reading Spivak enabled me to understand parts of your posts. Learning has taken place and this has been valuable. You've moved me from point A to point B and I am appreciative.

I have not given up. I'm going far more slowly than I thought I would. I'll post specific questions if I have any. For the record you have no obligation to post anything. I regret encouraging any expectations that have led to disappointment. No one is more disappointed than me.

(*) Michael Spivak, A Comprehensive Introduction to Differential Geometry, Volume I, Third Edition. PDF here.

Now, all that said ... I have four specific comments, all peripheral to the main line of your exposition. Regarding the main line of your exposition, I pretty much understand all of it, but not well enough to turn it around and say something meaningful in response. The concepts are in my head but can't yet get back out. You should not be discouraged by that. Your words are making a difference.

Question 1) Definition of differentiable structure on $U$

You wrote:

Yes, but at no pint did I assert that a continuous function needs to be differentiable. Rather I asserted the converse - a differentiable function must be continuous.

First I stipulate that this issue is unimportant and if we never reach agreement on it, I'm fine with that.

However this remark was in response to my pointing out that you need the map $f \varphi^{-1} : \varphi(U) \to \mathbb R$ to be differentiable in order to define the differentiability of $f$. It's the only possible thing that can make sense. And yes of course by $f \varphi^{-1}$ I mean $f \circ \varphi^{-1}$, sorry if that wasn't clear earlier.

For whatever reason you seem to have forgotten this. It's true that we think of $U$ as having a differentiable structure. But we have to define it as I've indicated. I verified this in Spivak. Homeomorphism can't be enough because there's no differentiability on an arbitrary manifold till we induce it.

Your not agreeing with this puzzles me. And your specific response about differentiable implying continuous doesn't apply to that at all.

As I say no matter on this issue but wanted to register my puzzlement.

* Question 2) Definition of $C^1$

In response to this issue you wrote:

Maybe I did not make myself clear. I said that the $C^k$ property for a function "subsumes" the $C^0$ property. If we attach the obvious meaning to the $C$ in $C^k$ we will say that a $C^0$ function is continuous to order zero, a $C^1$ function is continuous to order one..... a $C^k$ function is continuous to order $k$

I am sorry if my language was not sufficiently clear.

I apologize but you are still not clear. What does subsume mean? You can't mean subset, because the inclusions go the other way. If a function is $n$-times continuously differentiable then it's certainly $n-1$-times. So $C^n \subset C^{n-1}$. So subsume doesn't mean subset.

Of course it does mean that a $C^n$ function is continuous. Differentiable functions are continuous, we all agree on that (is this what you were saying earlier?). So you are saying that a $C^n$ function must be continuous. Agreed, of course. That's "subsumed."

However this seems to be missing the point. The point is that there exists a differentiable function whose derivative is not continuous.

Therefore it's not good enough to say that $C^1$ is all the differentiable functions. It's all the differentiable functions whose derivative is continuous. There's no way I can fit "subsumes" into this.
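The standard counterexample (the one behind the stackexchange link earlier in the thread) is $f(x)=x^2\sin(1/x)$ with $f(0)=0$: differentiable everywhere, yet $f'$ has no limit at $0$. A quick numerical sketch in Python:

```python
import math

# f(x) = x^2 sin(1/x), f(0) = 0: differentiable everywhere but not C^1.
def f(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

# For x != 0, f'(x) = 2x sin(1/x) - cos(1/x); f'(0) = 0 by the squeeze theorem.
def f_prime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Difference quotients at 0 shrink: |f(h)/h| <= |h|, so f'(0) = 0 exists
assert abs(f(1e-8) / 1e-8) <= 1e-8

# But f' oscillates near 0: along x_n = 1/(2 pi n), f'(x_n) is close to -1,
# while along y_n = 1/((2n+1) pi), f'(y_n) is close to +1, so f' has no limit
xn = 1 / (2 * math.pi * 1000)
yn = 1 / (2001 * math.pi)
assert abs(f_prime(xn) - (-1)) < 1e-2
assert abs(f_prime(yn) - 1) < 1e-2
```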

Again like I say, trivial point, not important, we can move on. But I wanted to be as clear as I could about my own understanding, since like any beginner I must be picky.

* Question 3) The notation $\frac{df}{da}$

Earlier you wrote:

So recall from elementary calculus that, given a $C^1$ function $f:\mathbb{R} \to \mathbb{R}$ with $a \in \mathbb{R}$ then $\frac{df}{da}$ is a Real number.

I have never seen this notation. $a$ is a constant. I asked about this earlier and did not understand your response. If $a = \pi$ would you write $\frac{df}{d\pi}$? I would write $\frac{df(a)}{dx}$ or $\frac{df}{dx}(a)$, which you seem to think are radically different. Or even $\frac{df}{dx} \bigg\rvert_{x = a}$. I'm confused on this minor point of notation.

* Question 4) The real thing I want to know

After glancing through Spivak I realized that I am never going to know much about differential geometry. Perhaps looking at Spivak was a mistake.

I'm trying to refocus my search for the clue or explanation "like I'm 5" that will relate tensors in engineering, differential geometry, and relativity, to what I know about the tensor product of modules over a commutative ring in abstract algebra.

What I seek, which perhaps may not be possible, is the 21 words or less -- or these days, 140 characters or less -- explanations of:

- How a tensor describes the stresses on a bolt on a bridge; and

- How a tensor describes the gravitational forces on a photon passing a massive body; and

- Why some components of these tensors are vectors in a vector space; and why others are covectors (aka functionals) in the dual space.

And I want this short and sweet so that I can understand it. Like I say, maybe an impossible dream. No royal road to tensors.

Ok that is everything I know tonight.

##### Share on other sites

Ha! So I am fired, in the nicest possible way! *wink*

Do not feel bad, wtf. Differential geometry is a hard subject, as you would see if you had all 5 volumes of Michael Spivak's work.

I do not pretend to have his depth of knowledge - I merely took a college course. Moreover his reputation as a teacher is extremely high, whereas mine is ....... (do NOT insert comment here!)

Regarding applications, all I can say is that I am neither an engineer nor a physicist, so as far as bridge bolts etc. you would need to ask somebody else.

On the other hand, it is not possible to study differential geometry without at some point encountering tensor fields, especially metric fields and the curvature fields that arise from them. These are the principal objects of interest in the General Theory of Gravitation.

If I offered to give guidance on this subject, it would be strictly as an outsider, an amateur.

## Create an account

Register a new account