Everything posted by wtf

  1. I've been percolating on the posts in this thread and some online references, and I'm making some progress. I just wanted to write down what I understand so far. I think I need to just keep plugging away at the symbology. Thanks to all for the discussion so far. That's a very clarifying remark. Especially in light of this: The above is extremely clear. By which I mean that it became extremely clear to me after I worked at it for a while. And I definitely got my money's worth out of this example. Let me see if I can say it back.

Given [math]V[/math] and [math]V^*[/math] as above, let [math]\varphi, \phi \in V^*[/math] be functionals, and define a map [math]\varphi \otimes \phi : V \times V \to \mathbb R[/math]. In other words [math]\varphi \otimes \phi[/math] is a function that inputs a pair of elements of [math]V[/math] and outputs a real number. Specifically, [math]\varphi \otimes \phi(u, v) = \varphi(u) \phi(v)[/math], where the right hand side is just ordinary multiplication of real numbers. Note that [math]\varphi \otimes \phi[/math] doesn't mean anything by itself; it's just notation we give to a particular function. It's clear (and can be verified by computation) that [math]\varphi \otimes \phi[/math] is a bilinear map. In other words [math]\otimes[/math] is a function that inputs a pair of functionals, and outputs a function that inputs pairs of vectors and outputs a real number. So that's one definition of a tensor. I'm not clear on why your example is of rank [math]2[/math]; I'll get to that in a moment.

Another way to understand the tensor product of two vector spaces comes directly from the abstract approach I talked about earlier. In fact in the case of finite-dimensional vector spaces, it's especially simple. The tensor product [math]V \otimes V[/math] is simply the set of all finite linear combinations of the elementary gadgets [math]v_i \otimes v_j[/math], where the [math]v_i[/math]'s are a basis of [math]V[/math], subject to the usual bilinearity relationships. Note that I didn't talk about duals, and the tensors are linear combinations of gadgets, not functions. In fact one definition I've seen of the rank of a tensor is that it's just the number of terms in the sum. So [math]v \otimes w[/math] is a tensor of rank [math]1[/math], and [math]3 v_1 \otimes w_1 + 5 v_2 \otimes w_2[/math] is a tensor of rank [math]2[/math]. Note that I have a seemingly different definition of rank than you do. In general, a tensor is an expression of the form [math]\sum_{i,j} a_{ij} v_i \otimes v_j[/math]. This is important because later on you derive this same expression by means I didn't completely follow. By the way, if someone asks what it means mathematically to say that something is a "formal linear combination of these tensor gadgets," rest assured that there are technical constructions that make this legit.

Now if I could bridge the gap between these two definitions, I would be making progress. Why do the differential geometers care so much about the dual spaces? What do the duals mean, in differential geometry, physics, engineering, anything? Likewise, I understand that in general tensors are written with so many factors of the dual space and so many factors of the original space. What purpose do the duals serve there?

Now one more point of confusion. In your most recent post you wrote: I think that can't be right, since metric spaces are much weaker than inner product spaces. Every inner product gives rise to a metric but not vice versa. For example the Cartesian plane with the taxicab metric is not an inner product space. I'm assuming this is just casual writing on your part rather than some fundamentally different use of the word metric than I'm used to.

Agreed so far. Although in complex inner product spaces this identity doesn't hold; you need to take the complex conjugate on the right. Yes. The correspondence between the dual space and the inner product is not automatic; it needs proof. Just mentioning that. Now here I got lost, but I need to spend more time on it. You're relating tensors to the inner product, and that must be important. I'll keep working at it.

Aha! The right side is exactly what I described above. It's a finite linear combination of elementary tensor gadgets. And somehow the functionals disappeared! So I know all of this is the key to the kingdom, and that I'm probably just a few symbol manipulations away from enlightenment. Right, it's (0,2) because there are 0 copies of the dual and 2 copies of V. But where did the functionals go? Should I be thinking gravity, photons, spacetime? Why are the duals important? And where did they go in your last calculation? I'll go percolate some more.

To sum up, the part where you define a tensor as a map from the Cartesian product to the reals makes sense. The part about the duals I didn't completely follow, but you ended up with the same linear combinations I talked about earlier. So there must be a pony in here somewhere.
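Here's a quick numeric sanity check of that bilinearity, just to convince myself it's as mechanical as it looks. This is my own toy example, not anything from the thread: I take [math]V = \mathbb R^3[/math], represent the two functionals as dot products against fixed vectors (calling them phi and psi in the code to keep the names distinct), and check linearity in the first slot at random inputs.

[code]
import numpy as np

a = np.array([1.0, 2.0, 0.0])   # phi(u) = a . u, a functional on R^3
b = np.array([0.0, 1.0, 3.0])   # psi(v) = b . v, another functional

def tensor(u, v):
    """(phi tensor psi)(u, v) = phi(u) * psi(v): ordinary multiplication of reals."""
    return np.dot(a, u) * np.dot(b, v)

u1, u2, v = np.random.rand(3), np.random.rand(3), np.random.rand(3)
s, t = 2.0, -3.0

# Linearity in the first argument: T(s*u1 + t*u2, v) == s*T(u1, v) + t*T(u2, v)
lhs = tensor(s * u1 + t * u2, v)
rhs = s * tensor(u1, v) + t * tensor(u2, v)
print(np.isclose(lhs, rhs))     # True, up to floating-point roundoff
[/code]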
  2. I'm interested in this discussion because I've only ever seen tensor products in abstract algebra. I don't know any formal physics and don't know what tensors are. And the abstract mathematical formulation seems so far removed from the physics meaning of tensor that I've always been curious to bridge the gap.

[Feel free to skip this part] Briefly, if [math]V[/math] and [math]W[/math] are vector spaces over the real numbers, their tensor product [math]V \otimes W[/math] is the free vector space on their direct product [math]V \times W[/math], quotiented out by the subspace generated by the usual bilinearity relationships. The tensor product has a universal property that's generally used to define it, which is that any bilinear map from the direct product [math]V \times W[/math] to any other vector space "factors through" the tensor product. This is a lot of math jargon, and sadly if I tried to supply the details it would not be helpful; it's a long chain of technical exposition. The details are on Wiki: https://en.wikipedia.org/wiki/Tensor_product https://en.wikipedia.org/wiki/Tensor_product_of_modules Also this article is simple, clear, and interesting: "How to conquer tensorphobia." https://jeremykun.com/2014/01/17/how-to-conquer-tensorphobia/ [End of skippable part]

This [the algebraic approach to tensor products] is all I know about tensors. It's always struck me that:
* This doesn't seem to have anything to do with physics or engineering; and
* It doesn't say anything about dual spaces, which are regarded as very important by the physicists.

What I know about physics and engineering tensors is that they are (loosely speaking, I suppose) generalizations of vector fields. Just as a vector field describes the force of a swirling wind or an electrical field about a point in the plane, tensors capture more and higher-order behavior localized at a point on a manifold. What I wish I had is a couple of simple examples. When Einstein is analyzing the motion of a photon passing a massive body, what are the tensors? When a bridge engineer needs to know the stresses and strains on a bolt, what are the tensors? Studiot mentioned the stress and strain tensors. Even though I don't know what they are, their names are suggestive of what they do: encode complex information about some force acting on a point. Studiot, can you say more about them?

I hope someday to understand what a tensor is in everyday terms (bridge bolts), how they're used in higher physics, and how any of this relates to the mathematical tensor product. Bilinearity seems to be one of the themes. Along these lines, Xerxes wrote something I hadn't seen before. That relates a pair of functionals to the product of two real numbers. The distributive law induces bilinearity. So this looks like something for me to try to understand. It might be a bridge between the physics and the math. If I could understand why the functionals are important it would be a breakthrough. For example I've seen where an [math]n[/math]-fold tensor has some number of factors that are vector spaces, and some number that are duals of those spaces, and these two numbers are meaningful to the physicists. But duals don't even appear in the algebraic approach. This is everything I know about it, and perhaps I'll learn more in this thread.
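To make "the usual bilinearity relationships" concrete, here's my paraphrase of the standard construction (so treat the details as my wording, not a quote from a reference). Writing [math]v \otimes w[/math] for the image of the pair [math](v, w)[/math], one quotients the free vector space on [math]V \times W[/math] by the subspace generated by all elements of the form

[math](v_1 + v_2) \otimes w - v_1 \otimes w - v_2 \otimes w, \qquad v \otimes (w_1 + w_2) - v \otimes w_1 - v \otimes w_2, \qquad (a v) \otimes w - a (v \otimes w), \qquad v \otimes (a w) - a (v \otimes w)[/math]

for all [math]v, v_1, v_2 \in V[/math], all [math]w, w_1, w_2 \in W[/math], and all scalars [math]a[/math]. Killing exactly these elements is what forces [math]\otimes[/math] to be bilinear in the quotient.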
  3. What an interesting project. I just had a look at the Wiki page for the surreals to refresh my memory, and got confused pretty fast. Then I read through the Talk page. https://en.wikipedia.org/wiki/Talk:Surreal_number There you'll find several experts in the subject arguing about how to define the surreals, what their properties are, whether the definition is circular, whether the surreals are the largest ordered field or not, whether they can properly be called a linear continuum. It's really worth reading. It's totally clear now why these aren't more well known. Even the pros aren't quite sure what they are.
  4. (Wiseguy kid): But isn't multiplication just repeated addition? How can you have apple-many oranges?
  5. The discussion so far has been about the physics; but since this is the math section I'd like to say a word about the math. As a math-trained person, when you tell me that 2 times 6 is 12, I believe that. I could drill it down to the Peano axioms. And it tracks a highly obvious and familiar fact of nature, namely that two rows of six are the same as six rows of two and there are twelve of them altogether. I can see the living proof of this in the world every time I buy a carton of eggs.

However, if you ask me what 2 feet times 6 pounds is, I know that's 12 foot-pounds and I can conceptualize it physically. But if I put on my formalist hat, I confess I have no idea what that means in math. I can't drill foot-pounds down to anything I know in set theory. I actually have no idea what it really is. As someone noted in this Stack Exchange thread, we tell kids you can't add apples to oranges, and then we tell them to multiply feet times pounds. What kind of sense does that make? http://physics.stackexchange.com/questions/98241/what-justifies-dimensional-analysis

No less a genius than Professor Terence Tao has blogged on exactly this subject. https://terrytao.wordpress.com/2012/12/29/a-mathematical-formalisation-of-dimensional-analysis/ I don't have time to read this today, otherwise I'd summarize as much as I understood. Hopefully I'll get to that later. Meanwhile I wanted to toss these links out there because this really is a good question. What is a foot-pound, really?
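Here's a toy formalization I cooked up just to make the question concrete. It is my own toy model, not Tao's construction: a dimensioned quantity is a number together with an integer exponent for each base dimension, and multiplying quantities multiplies the numbers and adds the exponents. In this picture a "foot-pound" is just the dimension tag (1, 1): one power of length times one power of mass (treating pounds loosely as a mass unit for the sake of the toy).

[code]
# Dimensions tracked as (length_exponent, mass_exponent).
def mul(q1, q2):
    (x1, (l1, m1)), (x2, (l2, m2)) = q1, q2
    return (x1 * x2, (l1 + l2, m1 + m2))   # multiply the values, add the exponents

feet2 = (2.0, (1, 0))      # 2 feet: length^1
pounds6 = (6.0, (0, 1))    # 6 pounds: mass^1

print(mul(feet2, pounds6)) # (12.0, (1, 1)): 12 units of length^1 * mass^1, i.e. 12 "foot-pounds"
[/code]

Addition, on the other hand, only makes sense when the exponent tags match, which is the formal version of "you can't add apples to oranges."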
  6. Not sure what you mean by a combination -- sum, product, ordered pair? But a complex function [math]f(z) = w[/math] maps the [math]z[/math]-plane to the [math]w[/math]-plane. So the graph lives in 4-space. What's often done is to show nice color pictures of the real or imaginary parts of a complex function. Here's a page I found but these kinds of pictures are all over the Web. http://www.geom.uiuc.edu/~banchoff/script/CFGExp.html
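For what it's worth, here's roughly how those pictures get made. This is a minimal sketch assuming numpy and matplotlib are available; it plots the real part of [math]f(z) = e^z[/math] as a surface over the [math]z[/math]-plane, which is one of the two "halves" of the full 4-dimensional graph.

[code]
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # only needed on older matplotlib versions

x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
z = x + 1j * y
w = np.exp(z)                      # the complex function f(z) = e^z

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, w.real, cmap='viridis')   # height and color show Re f(z)
ax.set_xlabel('Re z'); ax.set_ylabel('Im z'); ax.set_zlabel('Re f(z)')
plt.show()
[/code]

Swapping w.real for w.imag gives the other half of the graph.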
  7. @Studiot, Please explain to us zeta function regularization. Otherwise retract the nonsense in your last post. You've crossed the line from honest questioning to trolling.
  8. Sure, that makes perfect sense. Studiot's point is that if we apply the rule that "the limit of a sum is the sum of the limits," then we could split that into the difference of two infinite limits, and it would then be undefined. So we have two different answers for the same problem. The answer is that the rule that the limit of a sum is the sum of the limits does not apply if one or both of the limits are infinite.
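A toy example of my own that shows the failure explicitly:

[math]\lim_{n \to \infty} \big( (n+1) - n \big) = 1, \qquad \text{whereas} \qquad \lim_{n \to \infty} (n+1) - \lim_{n \to \infty} n = \infty - \infty,[/math]

and [math]\infty - \infty[/math] is undefined, even in the extended reals. So the splitting rule simply doesn't apply once the individual limits are infinite.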
  9. That's an interesting physical example. I'm sure that the concept of half life is not assumed in physics to go on forever. If you keep halving the quantity of something, at some point you can't divide it any more and the process stops. Whereas in math, you can keep dividing a number in half as much as you like. The sequence [math](\frac{1}{2^n})_{n \in \mathbb N}[/math] contains infinitely many distinct terms in math; but only finitely many in computer math or in any physical experiment that can be done, even in theory. Which, by the way, is why the calculus "explanation" of Zeno's paradox fails. Zeno is giving a thought experiment about the physical world, and not about the real numbers as they are presently understood.
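Here's a concrete illustration of the "computer math" half of that claim, using ordinary double-precision floats. This is just standard IEEE-754 behavior, nothing deep:

[code]
# Repeated halving of 1.0 reaches exactly 0.0 after finitely many steps,
# unlike the mathematical sequence 1/2^n, which is never zero.
x, steps = 1.0, 0
while x > 0.0:
    x /= 2.0
    steps += 1
print(steps)   # 1075 on a typical IEEE-754 double-precision platform
[/code]

So a computer can only ever exhibit finitely many distinct terms of the sequence, exactly as with a physical half-life.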
  10. How is that different than [math]ax^2 + bx + c[/math] ? In that expression, the coefficients are constant for a given polynomial, but as they vary they generate the space of all quadratic polynomials. How do you interpret your equation differently?
  11. @Studiot, Perhaps I'm misunderstanding. Are you making a historical point rather than a mathematical one? Wallis, bless his heart, has been dead for over 300 years. He's hangin' with Hardy now. There is no confusion regarding the practice of using the extended reals in modern math.
  12. @Nedcim, Your comment sparked something in my brain and I realized that @Studiot has actually made a very good point, one that requires a response.

@Studiot, You have made a good point here and I apologize for ignoring it earlier. The way I would interpret your question is as follows: infinite limits appear to violate the theorem that the limit of a sum (or difference) is the sum (or difference) of the limits. And you are absolutely right! What is the resolution? Off the top of my head, I think we must insist that to invoke this theorem, the limits in question must be real numbers and not merely extended real numbers. That is, since the limits in question are infinite, you cannot expect to add or subtract them and get a sensible result.

I was very curious to see how this is handled in the literature. I got out my dog-eared copy of Rudin's Principles of Mathematical Analysis. At one point he proves the theorem that you can add and subtract limits, but he only proves it when the limits are real numbers. Later on, he introduces infinite limits in the extended reals, but never tries to add or subtract them! In effect Rudin is careful on this point but never calls it out explicitly. It would be interesting to Google around and see whether there's any explicit discussion. The usual limit theorems don't necessarily apply to infinite limits in the extended reals, but I don't remember ever seeing that explicitly stated.
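For reference, here's the theorem with its hypothesis spelled out. This is my paraphrase of the standard statement (the one Rudin proves), not a quote:

[math]\text{If } \lim_{n \to \infty} a_n = A \text{ and } \lim_{n \to \infty} b_n = B \text{ with } A, B \in \mathbb R, \text{ then } \lim_{n \to \infty} (a_n \pm b_n) = A \pm B.[/math]

The operative clause is [math]A, B \in \mathbb R[/math]: once one of the limits is [math]\pm \infty[/math], the hypothesis fails and the theorem says nothing at all.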
  13. @Studiot, I admit I'm baffled that you're digging in your heels and denying a totally standard part of math. You're free to do so of course. I have nothing to add to what I've already written. Interested readers should consult the Wiki links I gave, and rest assured that this material is a standard part of math, taught at both the undergrad and graduate level. I'm afraid Professor Hardy is not reachable by mail these days. He died in 1947.
  14. @Studiot, Just to be clear, you personally reject this entirely standard and common piece of math? How do you get measure theory off the ground? What's the measure of the real numbers? https://en.wikipedia.org/wiki/Extended_real_number_line#Measure_and_integration Going further, if the Lebesgue measure of the real line is not [math]\infty[/math], and you still agree that the measure of a line segment of length [math]1[/math] is still [math]1[/math], then you have to abandon countable additivity. And now you just lost measure theory, which means you lost most of functional analysis, quantum physics, and a lot of other good stuff. What say you?
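To spell out the computation I have in mind (the argument is standard, though the particular decomposition is my choice): write the line as a countable disjoint union of unit intervals and apply countable additivity,

[math]\lambda(\mathbb R) = \lambda \left( \bigcup_{n \in \mathbb Z} [n, n+1) \right) = \sum_{n \in \mathbb Z} \lambda \big( [n, n+1) \big) = \sum_{n \in \mathbb Z} 1 = \infty.[/math]

If you refuse to allow [math]\infty[/math] as a value, then one of the ingredients (the measure of a unit interval, countable additivity, or the measurability of [math]\mathbb R[/math]) has to give.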
  15. @Studiot, You know about the extended reals, right? Those are the reals with symbols [math]\infty[/math] and [math]- \infty[/math] adjoined. Their purpose is just to make it possible to talk about limits at infinity and infinite limits. https://en.wikipedia.org/wiki/Extended_real_number_line

In particular, it's perfectly reasonable to write, say, [math]\lim_{x \to 0^+} \frac{1}{x} = \infty[/math]. Wikipedia has an explicit discussion of this here: https://en.wikipedia.org/wiki/Limit_of_a_function#Infinite_limits They make the point that the above limit equation should be read "increases without bound" or some such; and that alternatively, we can introduce the extended reals so that we can legitimately talk about a limit being infinite.

The reason it's convenient to talk about infinite limits is to distinguish the two different meanings of "diverge." The sequence [math]1, 2, 3, \dots[/math] diverges (in the reals) in a very different way than the sequence [math]0, 1, 0, 1, \dots[/math] does. In the former case, we can say that [math]1, 2, 3, \dots[/math] converges in the extended reals to [math]\infty[/math]. It's just a semantic point, but it's (to the best of my knowledge) fairly standard. In other words we're not saying anything profound, we're just introducing some notation and terminology for convenience.

And also, a common point of confusion: these symbols [math]\infty[/math] and [math]- \infty[/math] have absolutely nothing to do with the transfinite ordinals and cardinals of set theory.
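For the record, here's the precise content of that limit statement, in my wording of the standard definition:

[math]\lim_{x \to 0^+} \frac{1}{x} = \infty \quad \text{means} \quad \text{for every } M > 0 \text{ there is a } \delta > 0 \text{ such that } 0 < x < \delta \implies \frac{1}{x} > M.[/math]

No appeal to an actual infinite quantity is needed; the extended reals just give the statement a convenient home.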
  16. I'm sure we have an interesting conversation here if we can figure out what it is. You know that if you ask "what's beyond infinity" that's the kind of bait I can't resist. What happened to the OP? Their question seemed perfectly reasonable.
  17. I need to study CH to understand how nets generalize the concept of limits? Studiot you did not read my post. And if you've been Wiki surfing, surely you know that work on CH is far past Cohen these days. I must say I'm a bit annoyed that you quoted my post without actually engaging with any part of it. CH truly has nothing whatsoever to do with what I wrote nor with any aspect of the generalized theory of limits.
  18. I think OP is asking about functional equations. https://en.wikipedia.org/wiki/Functional_equation I'm afraid I don't know anything about the general theory so I can't help the OP. I'm guessing that you'd have to show you've found all solutions by ad hoc methods on a case by case basis.
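A classic example of the sort of thing I mean is Cauchy's functional equation (a standard textbook example, not something from the OP):

[math]f(x + y) = f(x) + f(y) \text{ for all } x, y \in \mathbb R, \qquad \text{whose continuous solutions are exactly } f(x) = cx \text{ for a constant } c.[/math]

Even this one equation takes some work to classify, and without a regularity assumption like continuity (granting the axiom of choice) there are wildly discontinuous solutions, which is why I suspect the general theory really is largely case by case.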
  19. Can you give an example of what you mean? That doesn't sound like functional analysis to me. Functional analysis is basically linear algebra and calculus on infinite dimensional vector spaces. For example the Hilbert space of quantum physics is studied in functional analysis.
  20. What an interesting question. What do you think is beyond infinity? Do you include the transfinite ordinals and cardinals as being beyond infinity? How about the so-called large cardinals studied in set theory? These are cardinals so big they can't be proved to exist in standard set theory. Hope this isn't too much of a thread jack, but when someone asks what's beyond infinity ... that's a very thought-provoking and interesting question. Actually you can put the order topology on the ordinal numbers and then talk about limits. It's possible to have a limit point of a topological space that can't be reached by any sequence. (For example, in the order topology on the ordinals up to and including the first uncountable ordinal [math]\omega_1[/math], the point [math]\omega_1[/math] is a limit point of the set of countable ordinals, yet no sequence of countable ordinals converges to it.) So this is not completely off topic. There are limits that do go "beyond infinity," or at least far beyond the natural numbers. https://en.wikipedia.org/wiki/Large_cardinal https://en.wikipedia.org/wiki/Order_topology
  21. The idea of a limit is that something (a sequence or a function) gets arbitrarily close to some value. Nothing is said about "reaching" the limit. So for example the sequence [math]\frac{1}{2}, ~~\frac{1}{4}, ~~\frac{1}{8},~\dots ~~[/math] gets arbitrarily close to [math]0[/math]. That means it gets as close as you want to zero. No term of the sequence is ever zero, and we do NOT talk about it "reaching" zero because that makes no mathematical sense in this context. It's true that we might INFORMALLY think that, but that is not the same as formally defining a limit. And until you understand what a limit is, it's counterproductive to think about it as "reaching" zero, because that makes it harder to understand the actual meaning of limit.
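Here's the formal definition that makes "arbitrarily close" precise, stated for this particular sequence (this is just the standard epsilon-N definition):

[math]\lim_{n \to \infty} \frac{1}{2^n} = 0 \quad \text{means} \quad \text{for every } \varepsilon > 0 \text{ there is an } N \text{ such that } n > N \implies \left| \frac{1}{2^n} - 0 \right| < \varepsilon.[/math]

Nothing in this definition says anything about a term of the sequence ever equaling [math]0[/math].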
  22. How do you reuse cough medicine? I don't think I want to know.