Help wanted with indices


Given

$\Lambda^{\alpha}_{\beta}$

as a Lorentz transformation (LT), which index is the row and which the column?

I have got confused on a problem with a non-symmetric LT where I get a valid answer either way!

Thanks

John


Given $\Lambda^{\alpha}_{\beta}$

as a Lorentz transformation (LT), which index is the row and which the column?

I'm not sure you can really tell without specifying a vector the transformation acts on. However, the standard notation uses $v^\sigma$ for the entries of a column vector, which leads to $v'^\alpha = \Lambda^\alpha {} _\beta v^\beta$ and, by comparison with a matrix multiplication, to the first index labelling rows and the second labelling columns.


Unless the tensor is symmetric, that is a very bad notation. One of the indices should be displaced to the right, to give them an order (then the first index is row, the second column).

If $\Lambda$ is a Lorentz transformation, then it is pseudo-orthogonal with respect to the Minkowski metric $\eta$, i.e. $\Lambda^T \eta \Lambda = \eta$, so its inverse is $\eta \Lambda^T \eta$. In index notation, $(\Lambda^{-1})^\mu{}_\nu = \Lambda_\nu{}^\mu$: swapping the order of the indices (together with their up/down positions) turns the transformation into its inverse. So the ambiguity in the order of the indices is pretty severe!
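A caveat worth checking numerically (a sketch of my own, not part of the thread; units with c = 1 and a boost along x): strictly, a Lorentz transformation is pseudo-orthogonal with respect to the Minkowski metric $\eta$ rather than orthogonal, so its inverse is $\eta \Lambda^T \eta$, and for a pure boost, which is symmetric, the plain transpose is not the inverse.

```python
import numpy as np

def boost_x(v):
    """4x4 Lorentz boost along x with speed v (units where c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)   # gamma factor
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)
L = boost_x(0.6)

# Pseudo-orthogonality: L^T eta L = eta
assert np.allclose(L.T @ eta @ L, eta)
# The inverse is eta L^T eta, not the plain transpose:
assert np.allclose(eta @ L.T @ eta, np.linalg.inv(L))
# A pure boost is symmetric, so its transpose is NOT its inverse:
assert not np.allclose(L.T, np.linalg.inv(L))
```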


Thanks for the replies - bear with me a little further.

So a transformation from fr1 to fr2 can be represented by

Col Matrix [t2,x2,y2,z2] = ColMatrix [t1,x1,y1,z1].$\Lambda^{2}_{1}$ ?

(Have spent a few minutes and cannot find how to draw a column 4x1 matrix)

My problem arose when solving a problem in Schutz concerning three frames: f1 moving relative to f0, and f2 relative to f1. I ended up with the LT from f0 to f2, which I was then asked to prove was indeed an LT by showing that the interval is invariant under it. I did that OK, but then noticed I had done it as above but had switched rows and columns. I then repeated it using the "correct" equations as above, and the result was still invariant. That left me unsure which was the "correct" procedure, and what it means that effectively swapping the rows and columns of an LT still gives an LT. Is that always the case?
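A numerical sketch of the situation (my own illustration, not the actual Schutz problem; boost matrices with c = 1): a product of boosts along different axes gives a non-symmetric $\Lambda$, and both $\Lambda$ and its transpose preserve the interval, since $\Lambda^T \eta \Lambda = \eta$ implies $\Lambda \eta \Lambda^T = \eta$ as well. That would account for getting a valid answer either way.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-)

def boost(v, axis):
    """4x4 boost with speed v (c = 1) along spatial axis 1, 2 or 3."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[axis, axis] = g
    L[0, axis] = L[axis, 0] = -g * v
    return L

# f0 -> f1 along x, then f1 -> f2 along y: a non-symmetric LT
L = boost(0.5, 2) @ boost(0.6, 1)
assert not np.allclose(L, L.T)

# Both L and its transpose preserve the interval:
assert np.allclose(L.T @ eta @ L, eta)
assert np.allclose(L @ eta @ L.T, eta)
```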

I had noticed the offset in the indices in the book, but its relevance was not explained. Obvious I suppose, but I missed it anyway. Thanks for that: leftmost is row, rightmost is column - that helps.

Does that explain the results I got? Is the inverse of an LT always an LT?

Hope this makes some kind of sense. Late at night here - will check over my working in the morning.

John


- Matrices can be written with \begin{array}{cccc} 11 & 12 & 13 & 14 \\ 21 & 22 & ... \end{array}. There's also a more direct command named something like "matrix", but I am not familiar with it, and the "show/hide latex reference" button in the advanced editing options doesn't work properly in my browser.

- No, the indices on Lambda do not represent the number of a coordinate system but the index of the respective coordinate. Your transformation would be $\left( \begin{array}{c} t2 \\ x2 \\ y2 \\ z2 \end{array}\right)$= $\left( \begin{array}{cccc} \Lambda^0{}_0 & \Lambda^0{}_1 & \Lambda^0{}_2 & \Lambda^0{}_3 \\ \Lambda^1{}_0 & \Lambda^1{}_1 &\Lambda^1{}_2 & \Lambda^1{}_3 \\ \Lambda^2{}_0 & \Lambda^2{}_1 &\Lambda^2{}_2 & \Lambda^2{}_3 \\ \Lambda^3{}_0 & \Lambda^3{}_1 &\Lambda^3{}_2 & \Lambda^3{}_3 \end{array} \right) \left( \begin{array}{c} t1 \\ x1 \\ y1 \\ z1 \end{array} \right)$ (sidenote@admins.sfn : could you increase the maximum number of characters in a tex string from 400 to something bigger?).

- Yes, the inverse of an LT is an LT. Let L12 (1 and 2 not being indices but part of the name!) be the transformation that takes coordinates from frame 1 to frame 2. LTs are the transformations between inertial frames, so there is also an LT from frame 2 to frame 1: L21. Transforming coordinates from frame 1 to frame 2 and then back to frame 1 had better return the original coordinates, hence L21*L12 = 1 (understood as a matrix multiplication, with the order of execution read from right to left). Likewise, transforming coordinates from frame 2 into frame 1 and then back into frame 2 should also yield the original numbers: L12*L21 = 1. Therefore L12*L21 = 1 = L21*L12, which is the definition for two algebraic elements (replace "algebraic element" with "matrix" if that makes it easier to understand) to be each other's inverses. If you prefer a more direct construction, multiply the LT matrix for a relative motion v by the one for a relative motion -v.
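The direct construction at the end is easy to verify numerically; a minimal sketch (my own, c = 1, boost along x as commonly written):

```python
import numpy as np

def boost_x(v):
    """4x4 Lorentz boost along x with speed v (c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    return np.array([[g, -g * v, 0, 0],
                     [-g * v, g, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

# L21 * L12 = 1 and L12 * L21 = 1: there and back gives the identity
assert np.allclose(boost_x(-0.6) @ boost_x(0.6), np.eye(4))
assert np.allclose(boost_x(0.6) @ boost_x(-0.6), np.eye(4))
```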

In short: Yes, inverses of Lorentz transformations are also Lorentz transformations. Their physical meaning is that they are the transformation that transforms from the original destination frame back onto the original source frame.

- Thinking of the Lorentz transformations as matrices might help because it's a familiar concept. But the formalism with upper and lower indices is a bit more than (and different from) that, so better stick with $\Lambda^\mu{}_\nu v^\nu := \sum_{\nu = 0 \dots 3} \Lambda^\mu{}_\nu v^\nu$ if in doubt. As I previously said, you can possibly get a representation as a matrix multiplication by comparing the terms that appear. You are on the safe side representing the contraction of an object with two Lorentz indices with an object with one Lorentz index (contraction meaning that one index appears on both objects, once as an upper one and once as a lower one) as a matrix multiplication whenever the former object is given as $\Lambda^\mu{}_\nu$ (note the order and upper/lower position of the indices) and the latter object is given as $v^\nu$ (note the position and that it corresponds to the 2nd index of the "matrix"). $( \Lambda_\mu{}^\nu, v_\nu )$ also works (but is a tad less common). As soon as you get terms like $\Lambda_\mu{}^\nu v^\mu$ or $R_{\alpha \beta \gamma \delta} \cdot \text{something}$ you run into trouble with your graphical "first index is row index, 2nd is column index" rule.

In short: Don't put too much into matrix multiplications even though they might be familiar to you. Better think of matrix multiplications as a visualization tool for certain types of mathematical operations.
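To see the "safe" contraction pattern concretely, here is a small numpy sketch (my own illustration, with random stand-in numbers, not values of a real LT): contracting over the second index of a 4x4 array reproduces matrix-times-column-vector, while contracting over the first index gives the transpose acting instead.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(4, 4))   # stand-in for Lambda^mu_nu
v = rng.normal(size=4)        # stand-in for v^nu

# w^mu = Lambda^mu_nu v^nu: sum over the SECOND index of Lambda
w = np.einsum('mn,n->m', L, v)
assert np.allclose(w, L @ v)          # matches matrix * column vector

# Contracting over the FIRST index instead (Lambda_mu^nu v^mu style)
u = np.einsum('mn,m->n', L, v)
assert np.allclose(u, L.T @ v)        # that's the TRANSPOSE acting on v
```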


Thanks Athiest - that was a real help. I appreciate the effort you put into the reply.

You said :-

"Thinking of the Lorentz transformations as matrices might help because it's a familiar concept."

I understood from my reading so far that $\Lambda^\mu{}_\nu$ was a matrix encapsulation of the LT coefficients. Is this perhaps a blinkered way of looking at it which will not be helpful when I finally get round to tensors?

I understand about the inverse LT - and replacing v with -v etc.

The book I am using uses "Einstein summation" and so there are no $\Sigma$s. I have got used to that.

I had not yet come across the term "contraction of an object ...."

So $( \Lambda_\mu{}^\nu, v_\nu )$ is a summation and $\Lambda_\mu{}^\nu v^\mu$ is not?

The use of the word "object" implies that there are very general implications involved - perhaps I have my nose too close to the page to see the bigger picture at the moment!

I'm rambling a bit here - perhaps I need to read some more before I can ask a decent question. I used to say to my students (I taught engineering at one time) that when you can ask a good question, you are more than half way to its answer!

Thanks again for the help.

John


You said : "Thinking of the Lorentz transformations as matrices might help because it's a familiar concept."

I understood from my reading so far that $\Lambda^\mu{}_\nu$ was a matrix encapsulation of the LT coefficients. Is this perhaps a blinkered way of looking at it which will not be helpful when I finally get round to tensors?

I can't really comment on helpfulness for you. Severian, being a teacher for that kind of stuff, is probably a better advisor on that. There is a lot more to the index notation than the matrix notation $\vec a = M \vec b$ shows.

I'll try breaking my thoughts down into tiny pieces that hopefully sketch why and when you can use matrix notation for Lorentz transformations, why I do not like to do so, and why the original question strictly speaking had no answer. Be aware that the following is really basic and possibly not exactly helpful for your current reading - as I said, stick to thinking in matrices if your course does and it's easier for you.

At a very basic level $\Lambda^\mu{}_\nu$ as such is just 16 real-valued indexed numbers. If you just want to write those numbers down, you can either arrange them in a 4x4 square as a matrix (and for simply writing them down it doesn't matter at all which index you choose as row and which as column) or just in a linear, vector-like style: $( \Lambda^0{}_0, \Lambda^0{}_1, \Lambda^0{}_2, \Lambda^0{}_3, \Lambda^1{}_0, \Lambda^1{}_1, \Lambda^1{}_2, \Lambda^1{}_3, \dots )$. It's all just a way to write down the same 16 indexed numbers. At this point it certainly is not clear which index is row and which is column; it is not even clear that the representation you chose to write the 16 numbers on a piece of paper has rectangular form (a circular arrangement would also be nice).

Now, luckily you added that these 16 numbers are supposed to represent an LT, so the obvious usage of these 16 numbers is to express what happens to the entries of a 4-vector under a coordinate transformation.

Assumption 1:

In the most basic and most common version of expressing special relativity (there is a little ambiguity where exactly you draw the line between SR and GR), 4-vectors are symbolized as $v^\mu$ (upper index), where $v^0$ is a number associated to time (e.g. time-position for position vectors, energy for 4-momenta) and $v^1, v^2, v^3$ are numbers associated to the respective space direction (e.g. position in space or momentum).

Assumption 2:

Your $\Lambda^\mu_\nu$ means $\Lambda^\mu{}_\nu$, and the values of the 16 Lambdas are given such that the transformation law taking the numbers $v^\mu$ to the numbers $w^\mu$ representing the same vector in a different coordinate system is $w^\mu = \Lambda^\mu{}_\nu v^\nu$. That is also standard notation. Explicitly writing that out (i.e. expanding the compressed notation of the Einstein convention) yields four equations, one for each component of $w^\mu$:

$w^0 = \Lambda^0{}_0 v^0 + \Lambda^0{}_1 v^1 + \Lambda^0{}_2 v^2 + \Lambda^0{}_3 v^3$

$w^1 = \Lambda^1{}_0 v^0 + \Lambda^1{}_1 v^1 + \Lambda^1{}_2 v^2 + \Lambda^1{}_3 v^3$

$w^2 = \Lambda^2{}_0 v^0 + \Lambda^2{}_1 v^1 + \Lambda^2{}_2 v^2 + \Lambda^2{}_3 v^3$

$w^3 = \Lambda^3{}_0 v^0 + \Lambda^3{}_1 v^1 + \Lambda^3{}_2 v^2 + \Lambda^3{}_3 v^3$

You can write this more elegantly as

$\left( \begin{array}{c} w^0 \\ w^1 \\ w^2 \\ w^3 \end{array} \right)$= $\left( \begin{array}{cccc} \Lambda^0{}_0 & \Lambda^0{}_1 & \Lambda^0{}_2 & \Lambda^0{}_3 \\ \Lambda^1{}_0 & \Lambda^1{}_1 &\Lambda^1{}_2 & \Lambda^1{}_3 \\ \Lambda^2{}_0 & \Lambda^2{}_1 &\Lambda^2{}_2 & \Lambda^2{}_3 \\ \Lambda^3{}_0 & \Lambda^3{}_1 &\Lambda^3{}_2 & \Lambda^3{}_3 \end{array} \right)$ $\left( \begin{array}{c} v^0 \\ v^1 \\ v^2 \\ v^3 \end{array} \right)$.

From this, the question which index corresponds to rows and which to columns can be answered for that case. But: There were two assumptions that were made, one about how vectors are specified via numbers and one about how vectors and Lorentz transformations are combined.

Intermediate result: As long as you are dealing with coordinate transformations given as numbers $\Lambda^\mu{}_\nu$, vectors are given as numbers $v^\mu$, the transformation rule is $w^\mu = \Lambda^\mu{}_\nu v^\nu$, and the numbers $v^\mu$ are arranged as column vectors, then using matrix multiplication in the sense $w = \Lambda v$ and equating the 1st index of Lambda with row and the 2nd with column is fine. If you feel like filling a few pieces of paper, you could verify for yourself that $w^\mu = A^\mu{}_\sigma B^\sigma{}_\nu v^\nu$ can in the same manner be written as the matrix equation $w = ABv$, with the same identifications, when the same assumptions hold - I'm certainly not gonna tex that in here. Without the two assumptions there would, as far as I can see, have been no answer to your question.
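If you'd rather let the computer fill those pieces of paper, a hedged numpy sketch (my own, with random stand-in numbers rather than real LT entries) of the claim that $w^\mu = A^\mu{}_\sigma B^\sigma{}_\nu v^\nu$ matches $w = ABv$ under the same identifications:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))   # stand-in for A^mu_sigma
B = rng.normal(size=(4, 4))   # stand-in for B^sigma_nu
v = rng.normal(size=4)        # stand-in for v^nu

# w^mu = A^mu_sigma B^sigma_nu v^nu, written via the summation convention...
w = np.einsum('ms,sn,n->m', A, B, v)
# ...equals the matrix product w = A B v under the same identifications
assert np.allclose(w, A @ B @ v)
```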

In short: For practical purposes within your course you are possibly fine with thinking in terms of matrix multiplications.

Problems:

There are several syntactically correct terms that you can - in principle or in practice - encounter that do not satisfy the two assumptions above:

- $w^\mu = \Lambda_\nu{}^\mu v^\nu$. I just made that one up, but if you write that out as I did above, you'll see that 1st and 2nd index are exchanged with respect to the previous example.

- $s = g_{\mu\nu}v^\mu v^\nu$: One of the most important terms in relativity and one you probably already encountered (invariant pseudo-magnitude of a vector v). Obviously, if all numbers for g and v are given you can calculate s via the rules the sum convention dictates (if you don't see that, try it out - getting accustomed to Lorentz-indices is really worth it). But writing that as a matrix equation is not possible without previous transformations or additional assumptions.

- $R_{\alpha \beta \gamma \delta}$: An object with four indices (R is the common name for the curvature tensor in GR). Contraction of indices (e.g. $F_{\alpha \beta \gamma} = R_{\alpha \beta \gamma \delta} v^\delta$) is defined by the summation convention. But how would you even represent such an object in a matrix-vector style? You could use a 2D matrix with each entry being a 2D submatrix, but there's little gain for practical purposes.
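For the $s = g_{\mu\nu}v^\mu v^\nu$ bullet above, a small numerical sketch (my own illustration; the metric convention (+,-,-,-) and the sample numbers are assumptions) shows both how the sum convention spells the term out and that the result is boost-invariant:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_mu_nu, (+,-,-,-) convention
v = np.array([2.0, 0.5, -1.0, 0.3])    # arbitrary components v^mu

# s = g_mu_nu v^mu v^nu, spelled out by the summation convention
s = np.einsum('mn,m,n->', g, v, v)
assert np.isclose(s, v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2)

# s is unchanged by a boost (here 0.6c along x), since L^T g L = g
gamma = 1.0 / np.sqrt(1.0 - 0.6**2)
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * 0.6
w = L @ v
s2 = np.einsum('mn,m,n->', g, w, w)
assert np.isclose(s, s2)
```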

Ok, that was a lot of text; hope I didn't make too many errors.

Bottom line: Thinking in and expressing equations via matrices might suffice for your case. I don't like it. I don't do so (so perhaps it actually is a good idea and I am just not used to it). I think it's very prone to making errors unless you remember some additional rules of how the terms with Lorentz indices transfer to matrix equations. Considering that when you have understood the index notation it takes only a few seconds to deduce the correct form in case you have to write it out that way, I think it's better to understand the index notation rather than remembering stuff like "first index is row, 2nd is column".

I had not yet come across the term "contraction of an object ...."

So $( \Lambda_\mu{}^\nu, v_\nu )$ is a summation and $\Lambda_\mu{}^\nu v^\mu$ is not?

No. The former is simply two collections of numbers (staying in the language used above) separated by a comma and grouped by parentheses. The parentheses do not indicate any mathematical operation. The latter is a summation, though (-> Einstein sum convention).

The use of the word "object" implies that there are very general implications involved - perhaps I have my nose to close to the page to see the bigger picture at the moment!

It doesn't. Especially not in the sense of object meaning a physical object such as a stone or an electron. I more or less mean "variable".


Thanks again.

Unfortunately I'm not doing any kind of course, just self-study as a kind of hobby. It would be great to have someone to "talk things over" with!

I think I get your message - using matrix multiplication works in some cases, but the $\Delta x^\mu = \Lambda^\mu{}_\nu \Delta x^\nu$ notation is always going to be meaningful no matter what the exact expression.

i.e. $\Delta x^0 = \Lambda^0{}_\nu \Delta x^\nu$, summed over the 4 values of $\nu$. The = here means "is the sum of". I think the $\mu$s and the 0s should have bars over them to indicate the frame to which they refer - I have not worked out how to do the bar yet - maybe something in "logic".

So $\Lambda^1{}_2$ is the coefficient multiplying the y (2) value of one frame used in calculating the x (1) value in the other - and so on, fifteen more times.

The upper index here refers to one frame the lower the other. In some respects anyway.

How am I doing?

John.


You certainly shouldn't think of raised and lowered indices as being linked to frames. They are not.

Instead, think of a vector as always having a raised index. So, if we write the coordinates t, x, y and z as a four-vector, we write $x^0=ct$, $x^1=x$, $x^2=y$, $x^3=z$ etc.

The object with lowered indices, e.g. $x_{\mu}$ is a derived object called a co-vector, and is defined by $x_{\mu} = g_{\mu \nu} x^{\nu}$, where $g_{\mu \nu}$ is the metric tensor, $g_{00}=1$, $g_{11}=-1$, $g_{22}=-1$, $g_{33}=-1$ and all other entries zero. So $x_0=ct$, $x_1=-x$, $x_2=-y$, $x_3=-z$.
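A quick numerical sketch of this index-lowering rule (my own illustration; the coordinate values are arbitrary): with $g = \mathrm{diag}(1,-1,-1,-1)$, lowering the index simply flips the signs of the spatial components.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric tensor g_mu_nu

ct, x, y, z = 5.0, 1.0, 2.0, 3.0
x_up = np.array([ct, x, y, z])          # x^mu

# x_mu = g_mu_nu x^nu: lowering the index flips the spatial signs
x_down = np.einsum('mn,n->m', g, x_up)
assert np.allclose(x_down, [ct, -x, -y, -z])
```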


Thanks Severian

I got the idea of upper/lower indices relating to frames from the book I am studying "A First Course in GR" by B F Schutz. Quote :-

"The bars on the indices only serve to indicate the names of the observers involved: they affect the entries in the matrix [ $\Lambda$] only in that the matrix is always constructed using the velocity of the upper index frame relative to the lower index frame"

Hence my idea that $\Lambda$ is a matrix and the meaning of the indices.

I think part of my problem lies in the way we are taught and books are written. I call this "the method of diminishing deception". I am right at the start of the book (Ch 2, Vector algebra). There are many ways to get into a subject and different paths to the end point. Concepts may be presented at the beginning which may not be entirely accurate but may be necessary to avoid too many complications first up. Later we are told "well, what I told you before is not quite true..." etc etc.

There also seems to be no "absolute standardization" of the symbolism used. Schutz mentions, among other things, the signs associated with t, x, y, z, i.e. - + + + or + - - -. In the end these things don't matter, but when you are starting out you (I do, anyway) have to hang on to something.

To give you some idea of where I'm at.

Ch 1 - basically a review of SR stressing spacetime diagrams and invariance.

Ch 2 - vector algebra: definitions, basis vectors, four-velocity and momentum.

There are plenty of problems at the end of each chapter, but unfortunately only some have answers. Next is Ch 3, Tensor Analysis in SR, so at the moment tensors and the metric are pretty vague concepts.

I want to stress that I am not complaining here. If this stuff were easy to really understand, I would not be interested. The joy, for me anyway, is in the struggle. However, I do value any input you are happy to provide. You never know what will bring that "Ahhhhh!!!" moment!

John


You certainly shouldn't think of raised and lowered indices as being linked to frames. They are not.

Instead, think of a vector as always having a raised index. So, if we write the coordinates t, x, y and z as a four-vector, we write $x^0=ct$, $x^1=x$, $x^2=y$, $x^3=z$ etc.

The object with lowered indices, e.g. $x_{\mu}$ is a derived object called a co-vector, and is defined by $x_{\mu} = g_{\mu \nu} x^{\nu}$, where $g_{\mu \nu}$ is the metric tensor, $g_{00}=1$, $g_{11}=-1$, $g_{22}=-1$, $g_{33}=-1$ and all other entries zero. So $x_0=ct$, $x_1=-x$, $x_2=-y$, $x_3=-z$.

As a geometer I cringed a bit when I read this.

$x^{\mu}$ is not (the components of) a vector. Not everything with a raised index transforms in the appropriate way to be called a vector. You should think of this as a collection of coordinate functions, i.e. maps from a point of space-time to a number.

Also, $x_{\mu}$ is not (the components of) a dual or co-vector.

When I get round to it, I will further explain this.


• 4 weeks later...

Would geometer not be a geometrist? Is it really called geometer?

(Sorry, moved away from UK at 7, so this is an honest question )

(Sorry, this of course is so off topic but I just got curious about the term)


Would geometer not be a geometrist? Is it really called geometer?

(Sorry, moved away from UK at 7, so this is an honest question )

(Sorry, this of course is so off topic but I just got curious about the term)

Straight from wiki "A geometer is a mathematician whose area of study is geometry".

I think it is the standard term.


• 9 months later...
Straight from wiki "A geometer is a mathematician whose area of study is geometry".

I think it is the standard term.

haha ajb.. so sorry.. almost a year on.. but thank you for the answer

Geometer it is!


As a geometer I cringed a bit when I read this.

$x^{\mu}$ is not (the components of ) a vector. Not every thing with a raised index transforms in the appropriate way to be called a vector.

I never said everything did. The object I was talking about is a position vector in space-time which is very much a vector due to its transformation under the Poincare group. Obviously if you define $x^\mu=${frog,sheep,cow,cat}, then it will not transform as a four-vector, so is not a vector. But that is not what we were talking about.

You should think of this as a collection of coordinate functions, i.e maps from a point on space-time to a number.

I am not sure what you are meaning here. What is the single number you are mapping to?

I never said everything did. The object I was talking about is a position vector in space-time which is very much a vector due to its transformation under the Poincare group. Obviously if you define $x^\mu=${frog,sheep,cow,cat}, then it will not transform as a four-vector, so is not a vector. But that is not what we were talking about.

I am not sure what you are meaning here. What is the single number you are mapping to?

$x^{\mu}$ is a vector under the Lorentz group (we are explicitly talking about Minkowski space here) which is a subgroup of the diffeomorphism group. It is not a vector under the full diffeomorphism group. Vector should be reserved for the full diffeomorphism group and some quantifier like 4-vector or Poincare vector should be stated. (Same thing applies for Euclidean vectors)

The same should apply for tensors and tensor-like objects.

Coordinates are maps from points on the space-time to $\mathbb{R}^{4}$. You have great flexibility in the choice of functions here.

Anyway, pick a point $p$ on your space-time. A coordinate system assigns the collection of numbers $\{x^{0}, x^{1}, x^{2}, x^{3} \}$ to this point. Thus, we have a collection of functions.


To be a little more specific about why $x^{\mu}$ is not a true vector, we need to consider how it transforms under diffeomorphisms. (Let's work on a smooth manifold without any extra structure.)

Let $f : M \rightarrow M$ be a smooth map. We will use the coordinates $x^{\nu}$ to describe the point $p$ and $y^{\mu}$ to describe $f(p)$. I hope that it is clear what I mean here.

Under this map the coordinates transform via the pull-back,

$f^{*}y^{\mu} = y^{\mu}(x)$

with the standard abuse of notation. This is clearly not a vector.

Let's consider maps that are at most linear:

$y^{\mu}(x) = T^{\mu }_{\: \: \nu} x^{\nu} + a^{\mu}$

(There is some freedom here with conventions on rows and columns, this does not matter much here but it can do in noncommutative geometries.)

This looks like how we expect it to transform under the Poincare group. For now, I have put no conditions on the $T$ nor have I specified any extra structure.

If we set $a=0$ then it looks like the Lorentz group. Again, remembering we have no extra structure.

So what is the transformation law of a vector?

$Y^{\nu} = X^{\mu}\left ( \frac{\partial y^{\nu}}{\partial x^{\mu}} \right).$

Again I hope it is clear what I mean here.

The outcome of this analysis is that $x^{\mu}$ looks like a vector only for "linear transformations". It is not a "true" vector.

(However, the infinitesimal difference is a vector, i.e. $y^{\mu}(x)- x^{\mu} = \epsilon X^{\mu}$ is a vector.)
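A toy numerical check of this point (entirely my own construction, not from the post: a deliberately nonlinear "coordinate change" $y(x) = x + 0.1x^2$ applied componentwise): the coordinates themselves do not transform with the Jacobian, but the infinitesimal difference does.

```python
import numpy as np

# A deliberately nonlinear toy "coordinate change" y(x), componentwise
def y(x):
    return x + 0.1 * x**2

def jacobian(x):
    return np.diag(1 + 0.2 * x)      # dy^mu/dx^nu, diagonal for this toy map

x = np.array([1.0, 2.0, -1.0, 0.5])  # coordinates of a point
X = np.array([0.3, -0.2, 0.1, 0.4])  # a vector attached at that point
eps = 1e-6

# The coordinates themselves do NOT transform linearly with the Jacobian:
assert not np.allclose(y(x), jacobian(x) @ x)

# ...but the infinitesimal difference does: y(x + eps X) - y(x) ~ eps J X
lhs = (y(x + eps * X) - y(x)) / eps
assert np.allclose(lhs, jacobian(x) @ X, atol=1e-5)
```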

(we also have conventions about rows and columns and ordering, but this is not greatly important for manifolds)

You can define vectors and other tensors via their transformation rules. This is not particularly elegant, but it does allow for quick generalisations.

If anyone wants to know more, I suggest the PlanetMath website.


$x^{\mu}$ is a vector under the Lorentz group (we are explicitly talking about Minkowski space here) which is a subgroup of the diffeomorphism group. It is not a vector under the full diffeomorphism group. Vector should be reserved for the full diffeomorphism group and some quantifier like 4-vector or Poincare vector should be stated. (Same thing applies for Euclidean vectors)

Look, you can define the word 'vector' to be whatever you want in your field (it has a very different definition in biology for example), but this is a physics forum so we should use physics definitions. In my field (physics) a four-vector is defined by its transformation under the Lorentz (or Poincare) group.

To suggest that the position vector is not a vector is simply proof of daft naming conventions.

Anyway, pick a point $p$ on your space-time. A coordinate system assigns the collection of numbers $\{x^{0}, x^{1}, x^{2}, x^{3} \}$ to this point. Thus, we have a collection of functions.

That much is clear, but what I was asking is: what do you mean by this being a mapping to a single number? What is the single number, for example, for the position vector of the Earth at 12am New Year's Day 2000, with respect to the Sun on New Year's Day 1900?


Forget it.

Look, you can define the word 'vector' to be whatever you want in your field (it has a very different definition in biology for example), but this is a physics forum so we should use physics definitions. In my field (physics) a four-vector is defined by its transformation under the Lorentz (or Poincare) group.

As you said a 4-vector, you have implied that the vector is a vector with respect to the Lorentz group. Fine, I agree, and that is what I have said.

In mathematics and physics you can define vectors by their representation of Diff or some subgroup thereof.

To suggest that the position vector is not a vector is simply proof of daft naming conventions.

Ok on things like $\mathbb{R}^{n}$ once a basis is chosen.

That much is clear, but what I was asking is, what do you mean by this being amapping to a single number? What is the single number, for example, for the position vector of the Earth at 12am New Year's Day, 2000 with respect to the Sun on New year's day in 1900?

$x^{1}$ for example is a NUMBER.


Amazing how a year on something can wake up again.


As you said a 4-vector, you have implied that the vector is a vector with respect to the Lorentz group. Fine I agree and is what I have said.

You explicitly said "$x^{\mu}$ is not (the components of) a vector". What did you mean by that, and is it not in contradiction with what you are saying in the above quote?

$x^{1}$ for example is a NUMBER.

I am still not getting this. What does $x^1$ have to do with it? That is not enough to express the vector - the vector cannot be considered as a mapping of the coordinates onto this number.


You are being stubborn now, Severian, as many other times. XX


You explicitly said "$x^{\mu}$ is not (the components of) a vector". What did you mean by that, and is it not in contradiction with what you are saying in the above quote?


It is not really a vector, it is a coordinate.

If you restrict yourself to transformations that are strictly linear, then it does look like a vector.

Really, what a vector is, as I have stated already, is the (infinitesimal) difference between the coordinates of a point and a nearby point.

I think this may be what you are talking about by "coordinate vector". If we fix the zero vector and consider a nearby point, then

$x^{\mu}(p) - x^{\mu}(0)= \delta x^{\mu}$

is a vector. I suspect what you are talking about is what I have called $\delta x$.

I am still not getting this. What does $x^1$ have to do with it? That is not enough to express the vector - the vector cannot be considered as a mapping of the coordinates onto this number.

A coordinate is a map from a point of a manifold to $\mathbb{R}^{n}$ for some suitable $n$. Thus, it is a collection of real valued functions from the manifold to the real numbers.

That is all I have said.

The collection of functions, the coordinate, does not transform as a vector under Diff. That statement is true. I am sorry, but that is the case.
