Everything posted by Xerxes

  1. As ajb says, don't worry about it, Xerxes was just showing off, rather than offering genuine help. He is a bit like that, you will have to forgive him.
  2. Well you are certainly bamboozling me! Like... So. I am temporarily divorced from my texts and tutorial notes, accordingly lemme try this: At each and every point on a manifold, we will have a tangent vector space. Each of these spaces is entitled to an arbitrary basis (let's assume these spaces are finite-dimensional), and then there is a set-theoretic union of all such possible bases over all points in the "underlying" manifold which is called, if I recollect correctly, the frame bundle. OK. Of course the basis for each and every tangent space comprises vectors (they are a subset of the tangent space), so that a section of the frame bundle is, essentially by definition, a vector field. Is this what you mean? And what you call the "collection" of all such sections you call a "frame"? Hmm, something seems a bit wrong here, for surely, if our tangent spaces are finite-dimensional, this need not imply that elements in the section are? So how can the "collection of sections" be synonymous with a basis? Moreover, there seems to be no limit to the number of sections I may take of my so-called frame bundle. Not feeling very bright today, sorry (too much travelling!)
  3. Yes. Sticking with connections for now, and remaining true (insofar as this is possible) to the intuitionist nature of this thread, I can offer a couple of definitions (of a connection on a principal bundle) which may appear wildly different, but are in fact complementary. My first is easily the most intuitive. So. Our principal bundle, I called it [math]P[/math], is a manifold, and as such is entitled to a set of tangent vectors at each and every point [math]p \in P[/math]. Let's follow convention and call it [math]T_pP[/math]; it's a vector space. Then a connection can be seen as a (smooth) assignment of a "horizontal" subspace [math]H_pP[/math] such that the whole space at this point can be decomposed as the direct sum [math]T_pP = H_pP \oplus V_pP[/math], where the second term is the "vertical part" of this vector space. This is nice. It simply says that elements in the vertical subspace "point along" the fibres, while the elements in the horizontal subspace "point between" them. It looks childish, but the math is kosher. My second definition is more challenging, but ultimately more rewarding. So. A connection 1-form is a (smooth) mapping [math]\omega: T_pP \to \mathfrak{g}[/math], the codomain being the Lie algebra of the Lie group [math]G[/math]. Huh? What can this mean? Well, it is a fact from (fairly) elementary linear algebra that any vector space that can be decomposed as, say, [math]W = U \oplus V[/math] admits of two projections, [math]p_1:W \to U,\,\,\, p_2: W \to V[/math] with the following rather obvious properties: [math]\ker p_1 = V[/math] and likewise [math] \ker p_2 = U[/math]. So. Let's take as a given that any [math]T_pP= V_pP \oplus H_pP[/math] can be decomposed in this way. Note that each fibre in our bundle is a Lie group, which is a manifold, whose Lie algebra is simply the vector space tangent to the group identity. So we have that, since the elements in [math]V_pP[/math] "run along" the fibre, there is an isomorphism [math]V_pP \simeq \mathfrak{g}[/math]. So now it is easy to see that our connection 1-form (I called it [math]\omega[/math]) is simply the first projection [math]\omega: V_pP \oplus H_pP \to V_pP[/math], up to isomorphism, and whose kernel is the subspace [math]H_pP[/math]. Hence my two definitions are complementary! Yes, but unfortunately I am being sent overseas for 2 weeks (I call it "being transported"). I am enjoying the rather rambling turn this thread has taken. Do not take my silence as indifference.
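
Since the projection argument above is pure linear algebra, here is a minimal numerical sketch (assuming numpy, and a hypothetical toy splitting of [math]W = \mathbb{R}^3[/math] as [math]U \oplus V[/math]) showing that the projection onto the first summand annihilates exactly the second summand, i.e. [math]\ker p_1 = V[/math].

[code]
import numpy as np

# Toy decomposition W = U (+) V with W = R^3,
# U spanned by u1, u2 and V spanned by v1 (an illustrative choice).
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0])
v1 = np.array([0.0, 1.0, -1.0])

# Basis of W adapted to the splitting: columns are (u1, u2, v1).
B = np.column_stack([u1, u2, v1])
B_inv = np.linalg.inv(B)

def p1(w):
    """Projection W -> U along V: keep the u-components, drop the v-component."""
    coeffs = B_inv @ w          # coordinates of w in the adapted basis
    return coeffs[0] * u1 + coeffs[1] * u2

def p2(w):
    """Projection W -> V along U."""
    coeffs = B_inv @ w
    return coeffs[2] * v1

w = np.array([2.0, 3.0, 5.0])
# w decomposes uniquely as a U-part plus a V-part:
assert np.allclose(p1(w) + p2(w), w)
# ker p1 = V: the projection onto U annihilates anything in V.
assert np.allclose(p1(v1), 0.0)
print("p1(w) =", p1(w), " p2(w) =", p2(w))
[/code]
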
  4. Well, Schroedinger's hat was asking for intuition, so let's see if I can find some BUT BE WARNED I am not a physicist. The bundle approach to GR that I dismissed rather cavalierly seems on reflection to make some sort of sense. Let's start here: Let's take it as read that GR does not actually replace SR but is in fact a generalization of it. The reasons are simple: 1) SR assumes "flat" Minkowski spacetime 2) GR models spacetime as a possibly non-flat 4-manifold which, by all possible definitions, is locally Euclidean and locally flat 3) Then it may well be that SR applies locally on our non-flat 4-manifold. 4) SR depends upon (or defines, your choice) the set of Lorentz transformations on that flat spacetime. Picture this: as we "roam" over our spacetime manifold, let's call it [math]M[/math], then at each point [math]m \in M[/math] we may make an arbitrary Lorentz transformation, guaranteeing that SR holds in a "neighbourhood" of [math]m[/math]. The set of all physically realizable Lorentz transformations is the group called [math]SO(1,3)[/math] (assuming a certain metric signature which is not important). Or, to put it another way, at each point [math]m \in M[/math] we may "attach" the Lorentz group. Let's call this group (it's a Lie group BTW) [math]G[/math] for brevity (though it is almost certainly not the [math]G[/math] that SH was referring to). The (disjoint) set union of this group at all points is called a "principal bundle". It is notated (at least by me) as [math] P(G,M)[/math] and has the following properties. 1) [math]P,\,\,G,\,\,M[/math] are all manifolds; 2) for each [math]p \in P[/math] there is a projection [math] \pi: P \to M,\,\,\pi(p) = m \in M[/math] whose preimage [math]\pi^{-1}(m) = G[/math] is called a "fibre" over [math]m[/math]. Hold on to this concept. So. Although we know exactly how to "travel along" a fibre (simply use the group laws - recall each fibre is a group), there is no assured way to "move between" fibres i.e. different copies of our group - recall they are disjoint (mathematicians say no canonical way). For this we need a connection. Finally, while you can think of the 4-manifold spacetime as some sort of "reality" (though it fries my brains), the principal bundle just described is an even higher level of abstraction. Deep-fried brains, maybe? Ummm. Did I promise intuition...........? Duh!
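
As a small concrete check of the claim that the physically realizable Lorentz transformations form the group [math]SO(1,3)[/math], here is a sketch (assuming numpy and the signature (+,-,-,-), which is just a convention) verifying that a boost matrix preserves the Minkowski metric and has unit determinant.

[code]
import numpy as np

# Minkowski metric with signature (+, -, -, -); the choice of signature is a convention.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity v = beta * c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

L = boost_x(0.6)

# Defining property of the Lorentz group: L^T eta L = eta.
assert np.allclose(L.T @ eta @ L, eta)
# Proper (orientation-preserving) transformations have det = +1, hence the "SO".
assert np.isclose(np.linalg.det(L), 1.0)
print("boost preserves the Minkowski metric; det =", np.linalg.det(L))
[/code]
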
  5. I hope I didn't accuse you of arrogance. If it seemed implied, I apologize. And I am sure you do. My doubt was as to whether it is relevant to GR. OK fukkit, you asked about the relation between [math]g[/math] and [math]G[/math]. So suppose that [math]V[/math] is an arbitrary vector space. Then of necessity there exists a dual space [math]V^{*}[/math], the space of linear maps [math]V \to \mathbb{R}[/math]. Let's say [math]\varphi \in V^{*}[/math], so that [math]\varphi(v) = \alpha \in \mathbb{R}[/math]. Now suppose an inner product is defined on [math]V[/math]. This may be defined as a mapping from the Cartesian product of vector spaces to the reals. Since elements in [math]V \times V[/math] are the ordered pairs [math](v,w)[/math], we will need a mapping, say [math]g:V \times V \to \mathbb{R},\,\,\,g(v,w) = \alpha \in \mathbb{R}[/math]. So what is the relation between the [math]\varphi[/math] as defined above and [math]g[/math]? By the definition of an inner product space, one may assign to each [math]w[/math] a unique element in [math]V^{*}[/math], say [math]\varphi_w[/math], such that [math]\varphi_w(v)=g(v,w)[/math], which is to say [math]\varphi_w = g(\cdot, w) \in V^{*}[/math]. So we may therefore make the definition [math]V^{*} \otimes V^{*}: V \times V \to \mathbb{R},\,\,\, g \in V^{*} \otimes V^{*}[/math], which is by definition a type (0,2) tensor. Now it is easily seen that the set of all tensors at a point is a vector space at that point, so using standard notation write [math]g = \sum_{\mu, \nu} g_{\mu \nu}\epsilon^{\mu} \otimes \epsilon^{\nu}[/math] for this element of that vector space, where the "epsilons" are the dual basis vectors and the coefficients [math]g_{\mu \nu}[/math] are called components. But since the power of tensors lies entirely in the fact that any equation using them retains the same form regardless of the choice of coordinates (and hence bases), it is customary to write tensors in component form, hence our tensor is always written as [math]g_{\mu \nu}[/math]. It's called "the metric tensor". Well, is it as simple as you thought? I haven't even started on [math]G[/math], but no, you are not annoying me
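
A minimal numerical sketch of the point above (assuming numpy, with an arbitrarily chosen symmetric positive-definite matrix standing in for the components [math]g_{\mu \nu}[/math]): the same array of components defines both the inner product [math]g(v,w)[/math] and the dual vector [math]\varphi_w = g(\cdot, w)[/math].

[code]
import numpy as np

# Components g_{mu nu} of a (0,2) tensor in some chosen basis
# (a hypothetical symmetric, positive-definite choice).
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(v, w):
    """g: V x V -> R, the inner product determined by the components g_{mu nu}."""
    return v @ g @ w

def flat(w):
    """The dual vector phi_w = g(., w) in V*, represented by its components."""
    return g @ w

v = np.array([1.0, -2.0])
w = np.array([0.5, 4.0])

phi_w = flat(w)
# phi_w(v) agrees with g(v, w), which is the defining relation quoted above.
assert np.isclose(phi_w @ v, inner(v, w))
print("g(v, w) =", inner(v, w))
[/code]
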
  6. I am sorry to appear rude, but I seriously doubt this. GR is very very hard and requires a deep and working knowledge of: point set topology; differentiable manifolds; vector spaces, in particular tangent spaces; and loads of other mathy stuff which you cannot possibly sidestep (life is sooo hard!!) And if you want to display GR as a theory of connexions on bundles, then you need a deep knowledge of these too. (Though I doubt this can be done - or at least I have never seen anyone attempt it. Have you?) Sorry again, but you have so far failed to demonstrate an understanding of such things. To clarify: There IS no shortcut. This is not a failing on your part, merely an attempt to run before you can walk. For example...... .....makes almost no sense in differential geometry, which is the subject at hand. Like it or not, an understanding of this subject is a prerequisite for fully understanding E.'s field equations and what they actually mean Like yes, my point exactly
  7. Does anyone have any experience with this? Recently, I switched to BT Broadband as they were offering a great deal PLUS a free Home Hub router. This is a fantastic piece of kit using Vista wirelessly - I have dual boot - and likewise using Ubuntu when wired. But I cannot get it working wirelessly using Ubuntu. Sadly the BT helpline (in Bangalore, or wherever) merely shrugs and says "we do not support Linux". Meantime I have gone back to my trusty Netgear router, but would like to use the Home Hub if possible. And I categorically refuse to be a regular Microsoft user. Any tips?
  8. Yes, that is the definition I would use. I have seen another (see below), which does not differ significantly from the above, except it has a slight category-theoretic feel to it: the arrow is the tensor! OK, in applications, particularly in differential geometry, it is common to define a tensor in terms of how it transforms under a coordinate change. This is not, I believe, very helpful as a definition, though I grant the transformation rules are essential to understanding tensors in this context. PS I do not understand DrRocket's post at all
  9. Yikes! So moron wants the mapping that assigns to each point [math]p \in M[/math] a single element [math]v \in T_pM[/math] to have the point [math]p \in M[/math] as an image point? Jeez, that's bonkers! Confidence shaken, I ask in trepidation: What does this mean? As far as I am aware, what you wrote is the domain of the tensor aka multilinear map [math]V^{*} \otimes V^{*} \otimes \cdots \otimes V^{*} \otimes V \otimes V \otimes \cdots \otimes V[/math], whose codomain is [math]\mathbb{R}[/math]. How can this be a tensor? Surely it is nothing more (or less) than the Cartesian product of vector spaces. Does this make it a tensor? Or am I wrong again?
  10. Ugh! This is a subject I thought I knew fairly well. Obviously I was mistaken, as I find the following very confusing. It was my understanding that the above is the domain of some multilinear form aka tensor. Simplifying, let's say, [math]V \otimes V^{*} :V^{*} \times V \to \mathbb{R}[/math]. But OK, in your example the rank of your tensor is simply p + q, and in mine it is 1 + 1 = 2. Tensors do not need to be thought of as fields, but no matter. You are, of course, entitled to use whatever notation you choose, but a tangent space at the arbitrary point [math]p \in M[/math] is generally referred to as [math]T_pM[/math], and the tangent bundle, the set-theoretic (disjoint) union of all such tangent spaces, as [math]TM[/math]. It seems (at least to me) that to any manifold one can associate only a single tangent bundle. Is this wrong? Whatever, I cannot make sense of the Cartesian product of bundles, assuming you are using standard notation. Again, I am confused by this. Taking the intuitive view that a vector field on [math]M[/math] assigns (smoothly) to every point [math]p \in M[/math] a single element from its associated tangent space [math]T_pM[/math], this seems to imply that a field is a section, that is, a map from the base manifold to the bundle, not the other way around. Maybe it's me that's the other way around.
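
To keep my own head straight about "the domain of the tensor", here is a toy sketch (assuming numpy): a (1,1) tensor represented by a matrix, regarded as a bilinear map that takes a covector and a vector to a real number, so that the Cartesian product [math]V^{*} \times V[/math] is its domain, not the tensor itself.

[code]
import numpy as np

# Components of a (1,1) tensor T in some basis (an arbitrary illustrative choice).
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])

def T_as_map(phi, v):
    """View T as a bilinear map V* x V -> R: feed it a covector and a vector."""
    return phi @ T @ v

phi = np.array([1.0, -1.0])   # a covector, i.e. an element of V*
v = np.array([2.0, 5.0])      # a vector, i.e. an element of V

# The pair (phi, v) lives in the domain V* x V; the tensor eats it and returns a real.
print("T(phi, v) =", T_as_map(phi, v))

# Bilinearity check in the first slot:
psi = np.array([0.5, 2.0])
assert np.isclose(T_as_map(phi + psi, v), T_as_map(phi, v) + T_as_map(psi, v))
[/code]
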
  11. Math Jokes

    An engineer, a physicist and a mathematician are travelling by train through Scotland for the first time. They all see a single black sheep in a field. Engineer: Ah, so sheep in Scotland are black. Physicist: No, there is one sheep in Scotland that is black. Mathematician: Actually no, there is at least one sheep in at least one field in Scotland that is black on at least one side. By edit: Here's another, that is cruel in the other way, as it were. An engineer, a physicist and a mathematician are stranded on a desert island. They haven't eaten for days. A can of beans is washed onto the shore, and they get to discussing how to open it with the limited resources at their disposal. The physicist and engineer argue about levers, fulcrums, tensile strength of bamboo vs that of tin etc, and finally turn to the silent mathematician. "Without loss of generality, we may assume the existence of a can-opener......"
  12. DrRocket flatters me with his link. I only partly understood it, and even then only after much beard-tugging. Ah well. First forum rule: do not ask questions whose answers you are not equipped to understand!
  13. Sorry to butt in on a very interesting discussion between guys who obviously know their stuff, but I have a slight worry. First, I am not a physicist and I am most emphatically not a philosopher, but I am familiar with Einstein's field equations. So it seems there are solutions to these equations (I believe Goedel found one) that allow CTCs, which you guys are more-or-less dismissing as "non-physical". My question: What is the basis for this dismissal as being non-physical? Is it experimental? Or is it that you just don't think the universe works that way? Is this a good argument? Is there a better one? Or am I just making a fool of myself in an area where I am a complete baby? Don't get me wrong: I am as sceptical about time travel as the next moron, but I am struggling to follow the argument here
  14. khaled: I have made mistakes - even posted nonsense - on more than one forum, more than once. While embarrassing, it is a forgiveable "offence". But to claim as you did that your gibberish "requires logic" looks like arrogance. Arrogance plus mistakes are not an easy mix to digest. Please don't post if you do not really know what you are talking about. Yes. Readers: notice the "a" here, that is, not "any". As I recall this is a famous corollary (due to Dedekind?) of an easy theorem which states that every infinite set, countable or otherwise, has a countably infinite proper subset. So let us assume as discussed that the set of all positive integers is countably infinite, and assign a symbol to the cardinality of any set that can be placed in 1-1 correspondence with this set, namely [math]\aleph_0[/math] (say "aleph-null"). Let's refer to this as the first transfinite cardinal, for want of a better term. So the question arises, what is the next transfinite cardinal? Since we don't yet know (or possibly care), let's call it [math]\aleph_1[/math]. How are these two cardinal numbers related? Now, as shown by Cantor, using an argument identical to the one in the OP, the set [math] \mathbb{R}[/math] of real numbers is uncountably infinite (the term "infinite" is, of course, redundant), and let's assign a symbol to its cardinality, say [math]\mathfrak{c}[/math]. Then it can be shown that [math]\mathfrak{c} = 2^{\aleph_0}[/math], as I hinted in the OP. Question: is it truly the case that [math]\mathfrak{c} = \aleph_1[/math], i.e. that [math]\aleph_1 = 2^{\aleph_0}[/math]? That is, is there or is there not a cardinal that "lies between" [math]\mathfrak{c}[/math] and [math]\aleph_0[/math]? As far as I am aware, nobody knows for sure. Poor old Cantor literally went bonkers trying for a proof of the negative. PS This is the celebrated "continuum hypothesis". Anyone know if it has been proven?
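
The finite analogue of [math]\mathfrak{c} = 2^{\aleph_0}[/math] is easy to check by machine; here is a tiny sketch (plain Python) confirming that a set with n elements has exactly 2^n subsets, which is the pattern Cantor's theorem extends to infinite sets.

[code]
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as tuples."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(6):
    subsets = power_set(range(n))
    # |P(S)| = 2^|S| for finite S; Cantor's theorem says P(S) is strictly
    # bigger than S even when S is infinite.
    assert len(subsets) == 2 ** n
    print(f"|S| = {n}:  |P(S)| = {len(subsets)}")
[/code]
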
  15. My point is, or rather was, that I had had a very "belittling" day, and wanted to appear superior to someone. Anyone. In short I was being an arsehole. I apologize for the implication of mindless plagiarism. I truly do, try to forgive me
  16. Hello wikipedia. This is a verbatim and unattributed quote (note the incorrect "nonnegative" vs the correct "non-negative" in both)
  17. Mass

    Like this, as a shameless paraphrase of Einstein. Consider a material body B with energy content [math] E_{\text{initial}}[/math]. Let B emit a "plane wave of light" for some fixed period of time [math] t[/math]. One easily sees that the energy content of B is reduced by [math]E_{\text{initial}} - E_{\text{final}}[/math], which depends only on [math]t[/math]. Let [math]E_{\text{initial}} - E_{\text{final}} = L[/math], i.e. the light energy "withdrawn" from B. Now, says Einstein, consider the situation from the perspective of some body, say [math]B'[/math], moving uniformly at velocity [math]v [/math] with respect to B. Then from this perspective, the energy withdrawn from [math]B[/math] is [math]L'[/math] so that, as before, [math]L'[/math] depends only on [math]t'[/math], which is [math] t(1 -\frac{v^2}{c^2})^{-\frac{1}{2}}[/math] by Lorentz time dilation. The difference between [math]L[/math] and [math]L'[/math] is simply [math]L' - L = L[(1 - \frac{v^2}{c^2})^{-\frac{1}{2}} - 1][/math]. By expanding [math](1 -\frac{v^2}{c^2})^{-\frac{1}{2}}[/math] as a Taylor series, and dropping terms of order higher than 2 in [math]v/c[/math], he finds that [math]L' - L = L(1 + \frac{v^2}{2c^2} - 1) = L\frac{v^2}{2c^2} = \frac{1}{2}(\frac{L}{c^2})v^2[/math]. With a flourishing hand-wave Einstein now says something like this: the above is an equation for the differential energy of bodies in relative motion; but so is [math]E = \frac{1}{2}mv^2[/math], the equation for kinetic energy - these can only differ by an irrelevant additive constant, so set [math]\frac{1}{2}(\frac{L}{c^2})v^2 = \frac{1}{2}mv^2 \Rightarrow \frac{L}{c^2} = m[/math] and so [math]L= mc^2[/math]. But, says he, [math]L[/math] is simply a "quantity" of energy, light in this case, that now depends only on [math]m[/math] and [math]c^2[/math], so...... [math]E = mc^2[/math].
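
The Taylor-expansion step in the hand-wave above can be checked symbolically; here is a sketch using sympy (assuming it is installed), expanding [math]L[(1 - \frac{v^2}{c^2})^{-\frac{1}{2}} - 1][/math] in [math]v/c[/math] and keeping terms up to second order.

[code]
import sympy as sp

v, c, L, m = sp.symbols('v c L m', positive=True)

# Energy difference between the two frames, before any approximation.
delta = L * ((1 - v**2 / c**2) ** sp.Rational(-1, 2) - 1)

# Expand in v about 0 and drop terms of order higher than 2 in v/c.
approx = sp.series(delta, v, 0, 3).removeO()
print(approx)                      # -> L*v**2/(2*c**2)

# Matching against kinetic energy (1/2) m v^2 forces m = L / c^2, i.e. L = m c^2.
kinetic = sp.Rational(1, 2) * m * v**2
m_value = sp.solve(sp.Eq(approx, kinetic), m)[0]
print(m_value)                     # -> L/c**2
[/code]
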
  18. This is how it is usually presented, and I have no quarrel with it. BUT...... ....it might help to explain what the factor (if that's what it is) [math]g_{\mu\nu}[/math] is, whether we are multiplying or summing, what the [math]\{x^i\}[/math] are, and what would be the consequence of setting [math]x^{\mu} = x^{\nu}[/math]. Of course, I can give chapter and verse, but it would be a very long haul, of little interest to physicists, let alone "protophysicists"
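
For what it is worth, the double sum implicit in an expression like [math]g_{\mu \nu}dx^{\mu}dx^{\nu}[/math] can be made explicit with numpy's einsum; a sketch with an arbitrary symmetric [math]g[/math], just to show that the repeated indices mean "sum", not "multiply components pairwise".

[code]
import numpy as np

# An arbitrary symmetric g_{mu nu} and a displacement dx^mu (illustrative values only).
g = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])
dx = np.array([2.0, 3.0])

# Einstein convention: repeated upper/lower indices are summed over.
ds2_einsum = np.einsum('mn,m,n->', g, dx, dx)

# The same thing written as an explicit double sum.
ds2_loops = sum(g[m, n] * dx[m] * dx[n] for m in range(2) for n in range(2))

assert np.isclose(ds2_einsum, ds2_loops)
print("ds^2 =", ds2_einsum)
[/code]
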
  19. Well, if you are dealing with a mathematical structure where [math]a^{-1} = \frac{1}{a}[/math], you are implicitly assuming that this structure admits of multiplication and a multiplicative identity (here [math]1[/math]), so that [math]1a = a[/math] and [math]aa^{-1} = a \frac{1}{a} = \frac{a}{a} = 1[/math]. This is the definition of the multiplicative inverse. But you should be aware that there are structures where the operation is written additively, so that [math]a^{-1} + a = 0[/math], which implies that [math]a^{-1} = -a[/math]. For these structures the operation in question is called addition, with zero as the additive identity. Maybe you should ignore that point for now, important though it is. So sticking with multiplication, notice that [math]a = a^1[/math], so you simply replace the exponent [math]1[/math] with the exponent [math]m[/math], and it follows that [math]a^{-m} = \frac{1}{a^m}[/math]
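
A trivial numerical sanity check of that last step (plain Python, a few arbitrary nonzero values of a):

[code]
# Check a**(-m) == 1 / a**m for a few nonzero values of a and positive integers m.
for a in (2, 3.5, -4):
    for m in (1, 2, 5):
        assert abs(a ** (-m) - 1 / (a ** m)) < 1e-12
print("a**(-m) agrees with 1 / a**m in every case tested")
[/code]
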
  20. Ya, agreed, but my post was not intended as a "proof", rather some light entertainment. Likewise the entire thread (the clue is in its title!).
  21. How disheartening it is to try and help a poster who then doesn't even acknowledge one's efforts, let alone act on them. Worse, it is downright rude. Ah well.
  22. I am not going to do this for you, as it really isn't that hard. But I will give a few pointers. Note that since [math]A[/math] is [math]n \times n[/math], then [math]\det(tA) = t^n \det(A)[/math]. So since [math]\det(A) \ne 0[/math] and [math]t \ne 0[/math], then [math]t^n \ne 0 \Rightarrow \det(tA) \ne 0[/math]. But you must prove the premise [math]\det(tA) = t^n \det(A)[/math]. Can you do that? For the second part, namely [math](tA)^{-1} = \frac{1}{t}A^{-1}[/math], you need only to prove that [math] (AB)^{-1} = B^{-1}A^{-1}[/math], remembering that you can treat [math]t[/math] as the scalar matrix [math]tI[/math]. Recall that 1. [math]AA^{-1}= A^{-1}A =I[/math] 2. matrix algebra is associative 3. if [math] x[/math] is treated as an element in a commutative ring, here most likely a field, then [math]xA = Ax[/math]. See how you get on
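
Not a proof, of course, but a numerical sanity check of both identities (assuming numpy, with a randomly chosen invertible A):

[code]
import numpy as np

rng = np.random.default_rng(0)
n, t = 4, 2.5

# A random n x n matrix is invertible with probability 1; re-draw the seed if unlucky.
A = rng.standard_normal((n, n))
assert abs(np.linalg.det(A)) > 1e-12

# det(tA) = t^n det(A)
assert np.isclose(np.linalg.det(t * A), t**n * np.linalg.det(A))

# (tA)^{-1} = (1/t) A^{-1}
assert np.allclose(np.linalg.inv(t * A), (1.0 / t) * np.linalg.inv(A))

print("both identities check out numerically for n =", n, "and t =", t)
[/code]
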
  23. No shit? Anyway, here's something weird (or wonderful, depending on how you hang) that emerges when you consider different "sizes" of infinity. The idea is not mine, but the execution is, as far as I know. The set of elements that make up an alphabet, as it is commonly understood, is of finite cardinality. In English this cardinality is 26. Let's now assign to the letter A the natural number 10, to B the number 20 etc., and concatenate these numbers to form a word. So that DOG = 4015070 (I use A = 10 rather than A = 1 to remove the ambiguity: is 12 = AB or is 12 = L?). If we like, we can chuck in for good measure another 26 elements to capitalize, and another few for spaces and punctuation. We see that there is no word, no sentence, no book, no library, no collection of libraries that cannot be represented as a natural number, however humongous. And by definition, the set of natural numbers is countably infinite. Now any complete theory of the numbers [math]\mathbb{N}[/math] MUST include at least one true statement for each subset of [math]\mathbb{N}[/math]. But by Cantor's argument, the set of all subsets of [math]\mathbb{N}[/math], which I called [math]\mathcal{P}(\mathbb{N})[/math], is uncountable, so, by the above, there is no set of words, sentences, books etc (all representable as elements in [math]\mathbb{N}[/math], recall), however large, that will allow a true statement to be made about each and every subset of the natural numbers. Thus our theory can never be complete. This is an example of one of Gödel's incompleteness theorems
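
Here is a throwaway sketch (plain Python) of the letter-to-number scheme described above, reproducing DOG = 4015070 with A = 10, B = 20 and so on; the extra symbols for capitals, spaces and punctuation are left out for brevity.

[code]
# Encode a word over the 26-letter alphabet as a single natural number,
# using A -> 10, B -> 20, ..., Z -> 260 and concatenating the digits.
def encode(word):
    return int("".join(str(10 * (ord(ch) - ord('A') + 1)) for ch in word.upper()))

print(encode("DOG"))        # -> 4015070
print(encode("GODEL"))      # some much larger natural number

# Every word becomes a natural number, so the collection of all words, sentences,
# books, ... is no bigger than the set of natural numbers -- which is the point above.
assert encode("DOG") == 4015070
[/code]
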
  24. Of course, by Cantor himself, whose proof I outlined in the OP. Did my post give the impression I was claiming this proof as mine own? If so, I apologize. Only a madman would do that, as it is one of the most celebrated proofs in all mathematics