Tom Mattson

Senior Members
  • Posts

    772
  • Joined

  • Last visited

Everything posted by Tom Mattson

  1. Nope. I should put a finer point on my statement though: You can't take quantum mechanical spin into account with differential operators (as you can with position, momentum, and energy to get the Schrodinger equation). You can of course use Heisenberg's equation of motion to predict the time evolution of spin operators, but that is a differential equation in the operators themselves. The observation of phase changes of spin-1/2 particles under 2pi rotations is proof positive that you can't use a function of coordinates to describe spin. If you tried, you would be forced to use multiple-valued functions of position, which are physically meaningless.
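To make the 2pi-rotation phase concrete, here is a small numerical sketch (my own illustration, not from the thread): the rotation operator for a spin-1/2 particle about the z-axis is exp(-i*theta*sigma_z/2), and a full 2pi rotation multiplies every state by -1 rather than returning it to itself.

```python
import numpy as np

# Pauli z matrix and the 2x2 identity
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot_z(theta):
    # exp(-i*theta*sz/2) = cos(theta/2)*I - i*sin(theta/2)*sz
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sz

# A full 2*pi rotation does NOT return the identity; it flips the
# sign of every spin-1/2 state -- the phase change noted above.
R = rot_z(2 * np.pi)
print(np.allclose(R, -I2))  # True
```

Only after a 4pi rotation does the operator return to the identity, which is exactly why no single-valued function of the coordinates can represent spin.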
  2. No, they don't, but the violations cannot be observed. The virtual particles don't live long enough. The mathematics from which HUP is derived is quite independent of any interpretation of the result. A straightforward derivation is here: http://www.cbloom.com/physics/heisenberg.html Where is the problem?
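The linked derivation yields the bound Delta-x * Delta-p >= hbar/2 with no interpretational input, and a Gaussian wavepacket saturates it exactly. A rough numerical sketch (illustrative, in units with hbar = 1 so that p = k):

```python
import numpy as np

# Gaussian wavepacket sampled on a grid (hbar = 1, so p = k)
N = 4096
x = np.linspace(-50.0, 50.0, N)
dx = x[1] - x[0]
sigma = 1.0
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Position spread
px = np.abs(psi)**2
mean_x = np.sum(x * px) * dx
dx_unc = np.sqrt(np.sum((x - mean_x)**2 * px) * dx)

# Momentum spread, via FFT
phi = np.fft.fftshift(np.fft.fft(psi))
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dk = k[1] - k[0]
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)   # normalize
pk = np.abs(phi)**2
mean_k = np.sum(k * pk) * dk
dp_unc = np.sqrt(np.sum((k - mean_k)**2 * pk) * dk)

# A Gaussian saturates the bound: Delta-x * Delta-p = 1/2
print(dx_unc * dp_unc)  # ~ 0.5
```

The product comes out to hbar/2 to within grid error, with nothing "interpretational" anywhere in the calculation.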
  3. You've got most of it. I'd add group theory and various algebras (Lie, Grassmann, Clifford,...)
  4. It's both because the manner in which it was derived does not depend on whether the space between two points is filled with matter. If you think about it some, you will see that it wouldn't make any sense if the answer were not "both". Take two planets, separated by a distance L0 in their mutual rest frame. Then Buck Rogers goes zipping by in his starship with speed v in a direction parallel to the line joining the centers of the planets. How far apart are the planets in Buck's frame? Well, if the answer to your question were not "both", then the answer to my question would depend on whether or not there is a giant ruler between the two planets, which is absurd.
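Putting numbers on the Buck Rogers example (an illustrative sketch, not from the thread): in Buck's frame the planets are L0/gamma apart, ruler or no ruler in between.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(L0, v):
    """Separation measured in a frame moving at speed v parallel
    to the line joining the two points (rest separation L0)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return L0 / gamma

# At v = 0.6c, gamma = 1.25, so a rest separation of 10 units
# is measured as 8 units -- matter in between or not.
print(contracted_length(10.0, 0.6 * C))  # ~ 8.0
```

The function never asks whether anything occupies the space between the endpoints, which is the point: the answer is "both".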
  5. Then you have to change your definition of a vector from: [math] \mathbf {V} = (magnitude) \cdot (direction) [/math] to something else. Honestly Johnny5, I don't see the point in carrying this exercise any further, so I'm done with it. Best of luck with it,
  6. It has to be there in order for the set of vectors to be considered a group under multiplication. This is necessary if you want to carry all the properties of multiplication/division on the reals over to vectors, as you seem determined to do. In that case, you can proceed as follows: Let vector A have its tail at (a,b,c) and its head at (x,y,z). Now define the dot multiplicative inverse of vector A to be the vector with its tail at (a,b,c) and its head at (x`,y`,z`) such that: [math] \mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|} \hat A [/math] For any vector A (except the zero vector), there is one and only one vector A[sup]-1[/sup] so that your property three is satisfied, and property 1 is also satisfied. I say that it is nothing more than division on the reals. Going back to Newton's second law, you have: [math] \mathbf {F} = m \mathbf {a} [/math] [math] m=\frac {\mathbf {F}}{\mathbf {a}} [/math] which you define to be: [math] m= \mathbf {F} \cdot \mathbf {a} ^{-1} [/math] [math] m= \mathbf {F} \cdot \frac { \hat { \mathbf {a} } } {| \mathbf {a} |} [/math] [math] m= \frac { \mathbf {F} \cdot \hat {\mathbf {a}}} {| \mathbf {a} |} [/math] which is just the division of one real number by another.
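Numerically, the "dot inverse" recipe does recover the scalar m from F and a. A small sketch with made-up illustrative values:

```python
import numpy as np

m = 2.5
a = np.array([3.0, -4.0, 12.0])    # arbitrary nonzero acceleration
F = m * a                          # Newton's second law

# "Divide" F by a via the dot multiplicative inverse:
# m = (F . a_hat) / |a|
a_hat = a / np.linalg.norm(a)
m_recovered = np.dot(F, a_hat) / np.linalg.norm(a)
print(m_recovered)  # ~ 2.5
```

Note that this only works because F and a are parallel by construction; the dot product collapses everything to real-number division, which is exactly the claim being made above.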
  7. It's not as though the vector is being physically distorted. All we're doing here is multiplying a vector by a scalar, and obtaining a new vector. It's not that complicated. If we're talking about vector division, an equation of the type I posted is the only way to do it. Division is the inverse operation of some multiplicative operation. One is defined in terms of the other, so that's the place to start looking. There is no need to mention time. We're doing a math problem here. Also, as I said, we aren't taking some physical object and stretching it at all. If you really insist on physical examples, look at F=ma again. The factor m doesn't "stretch" the vector a to produce the vector F. What does "stretching acceleration" even mean anyway? It's not like "acceleration" is a rubber band whose ends we can pull apart. No, we just take the quantities and multiply them together to get F. As I said, it should have been obvious that x is a real variable. If not, then why didn't you ask? Also, the fact that it could be positive, negative, or zero is immaterial. Your distinction between stretching, contracting, and reversing direction has no bearing whatsoever on this problem. We're trying to find the analog of division on the reals, with vectors. Yes, vector components are always origin-dependent.
  8. I'm not bothering with it for the purposes of this thread. It's a real number, which is all that matters.
  9. If alchemy made claims that exposed it to falsification by contrary evidence, then by modern standards it was in fact scientific. A theory that opens itself up to the risk of being found false meets Popper's criterion of falsifiability, which is the standard largely accepted today. In the modern view, a theory is not unscientific simply because it is false.
  10. Yes, it is clear. In the quoted section, it's the dot product. For vectors [b]A[/b] and B in R[sup]3[/sup]: [math] \mathbf {A} = A_x \mathbf {i} +A_y \mathbf {j} + A_z \mathbf {k} [/math] [math] \mathbf {B} = B_x \mathbf {i} +B_y \mathbf {j} + B_z \mathbf {k} [/math] the dot product is: [math] \mathbf {A} \cdot \mathbf {B} = A_x B_x + A_y B_y + A_z B_z [/math] Also, why are we talking about moving bodies? We're talking mathematics here.
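The component formula agrees with NumPy's built-in dot product; a quick sketch with illustrative values:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, -5.0, 6.0])

# Component formula: Ax*Bx + Ay*By + Az*Bz
manual = A[0] * B[0] + A[1] * B[1] + A[2] * B[2]

print(manual)          # 12.0
print(np.dot(A, B))    # 12.0
```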
  11. Now the only question is, how do you define vector division?
  12. Mmmmmm.....Clickalicious......
  13. I did it with simple right triangle trigonometry. Theta is the angle that V makes with the z-axis, and phi is the angle that the projection of [b]V[/b] in the xy-plane makes with the x-axis. The derivation is in any calculus book. That's right.
  14. If V=(magnitude).(direction), then I can already tell you what (direction) is. In R[sup]3[/sup] we have: [math] \mathbf {V}=| \mathbf {V} | \cos( \phi ) \sin( \theta ) \mathbf {i} + | \mathbf {V} | \sin( \phi ) \sin( \theta ) \mathbf {j} + | \mathbf {V} | \cos (\theta) \mathbf {k} [/math] Factoring out |V| we have: [math] \mathbf {V}=| \mathbf {V} | ( \cos( \phi ) \sin( \theta ) \mathbf {i} + \sin( \phi ) \sin( \theta ) \mathbf {j} + \cos (\theta) \mathbf {k} ) [/math] Since the |V| on the right hand side is (magnitude), it follows that the other quantity is what you call (direction). As one might expect, it is a unit vector in the direction of V. Furthermore, it is not possible for all 3 components to simultaneously be zero, for any choice of theta or phi.
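It's easy to verify numerically that the factored-out (direction) term has magnitude 1 for arbitrary angles; a short sketch:

```python
import numpy as np

def direction(theta, phi):
    """The (direction) factor: unit vector with polar angle theta
    (measured from the z-axis) and azimuthal angle phi (from x)."""
    return np.array([np.cos(phi) * np.sin(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(theta)])

# Its magnitude is 1 for arbitrary choices of the angles:
for theta, phi in [(0.3, 1.2), (np.pi / 2, 0.0), (2.0, -0.7)]:
    print(np.linalg.norm(direction(theta, phi)))  # ~ 1.0 each time
```

This is just the identity sin^2(theta)(cos^2(phi) + sin^2(phi)) + cos^2(theta) = 1, which also shows the three components can never vanish simultaneously.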
  15. All I know is that you can't define vector division in terms of cross products or dot products. I'm still working on the more general problem.
  16. No, in general a vector doesn't need a direction. You're talking about vectors in Rn. All that is required for an object v to be a 'vector' is that it be a member of some set that satisfies Definition I.1 on page 80 of the following textbook: ftp://joshua.smcvt.edu/pub/hefferon/book/book.pdf But indeed, it would be best to stick with Rn. I regard the vectors in that equation as elements of the vector space R3. edit: Even if you use line vectors, you are going to have the same problems satisfying the requirements for multiplicative identities and inverses.
  17. It's not that we're forcing closure on vector division, it's that there's no way to define vector division in terms of multiplication such that vectors are closed under multiplication [i]and[/i] the necessary properties of multiplicative inverses and identities are satisfied. If you conclude that m=F/a from F=ma, then there has to be a multiplicative inverse a[sup]-1[/sup] such that: F=ma Fa[sup]-1[/sup]=maa[sup]-1[/sup] Fa[sup]-1[/sup]=me Fa[sup]-1[/sup]=m You would then define Fa[sup]-1[/sup]=F/a. But it turns out that you cannot do this with either the dot product or the cross product, as I've shown. So what multiplication rule do you propose we use to define vector division? The "m" in F=ma cannot be a vector. If it were, then neither F nor a could be vectors.
  18. I like simplicity, this formula seems like only three things. curvature = LHS = constant times stress energy tensor. But now you are telling me that the LHS is the Einstein tensor, which is actually the Ricci tensor minus the Ricci scalar times the metric tensor. Did I get all this right?

Yes, that's right (with a factor of one half on the Ricci scalar term). The curvature tensor is actually a rank-4 tensor. By contracting 2 indices together, you get the Ricci tensor, which appears in the field equations of GR.

In principle, yes: in the low-speed, low-energy-density limit. If the energy (I'm lumping mass in with energy) density goes to zero, you recover SR from GR. Space and time are still coupled, but now at least the spacetime is flat. Taking it one more step: if the ratio of typical object speeds to c goes to zero (equivalently, as c goes to infinity), you recover Galilean relativity from SR. So, in a universe in which nothing moves and nothing exists, Galilean relativity is true, and space and time are completely decoupled. In other words, there is no way that space and time can be decoupled in the real universe.

Absolutely not. It's pure crackpottery. That much is obvious from the abstract, found here: http://www.hyperinfo.ca/LivingAtom/ The solar system is intelligent? The atom is intelligent? There are contradictions in relativity? "Living atom theory" can logically explain telepathy? The bozos who wrote that website not only have no real understanding of physics, but they also have serious misunderstandings of cognitive science if they really believe all this crap.

I don't know of any general relativistic treatment of superconductivity, but I do know that superconductivity is understandable in terms of quantum statistical mechanics, and I know that GR is perfectly compatible with QM (that is, their axioms are all consistent). The only problem I can see is that GR might contradict the quantum field theoretic (as opposed to quantum mechanical) description of superconductivity. But we already know that GR and QFT don't get along, so this isn't all that earth-shattering.
  19. Roughly speaking: the left side describes the curvature of spacetime, and the right side describes the matter and energy distribution. In the language of PDEs, the right side is the source term for the left side. That depends on what approach you look at. If you look at string theories, then that is a generalization of QFT, which will contain GR as a special case. If you look at loop quantum gravity, then that is a direct quantization of the GR theory. Among other things, general covariance and background independence will be retained from GR. But of course GR is not a quantum theory, so that aspect will require a major overhaul.
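For reference, the field equations being described, with the Einstein tensor (Ricci tensor minus half the Ricci scalar times the metric) as the curvature side and the stress-energy tensor as the source:

```latex
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
           = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```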
  20. As far as I know, they aren't dealt with at all. Yes, but that's just division on the reals. It's considerably more complicated with tensors of higher rank, as will become apparent when I answer the next part... And if they aren't pointing in the same direction, what then? Even if they are pointing in the same direction, there are problems.

Take the scalar equation ax=b. To solve for x, we divide both sides by a so that x=b/a. As long as a isn't zero, there's no problem here. But what is division anyway? Division by a is really multiplication by the multiplicative inverse of a, which is a[sup]-1[/sup]. Letting e stand for the identity element (e=1 in R) and rewriting our solution in these terms, it becomes: ax=b a[sup]-1[/sup]ax=a[sup]-1[/sup]b ex=a[sup]-1[/sup]b x=a[sup]-1[/sup]b Now one important property of multiplication on the reals is closure. That is, for any elements {a,b} in R, the product ab is also in R. So a[sup]-1[/sup]b is a real number, as expected.

Now let's try this with vectors, as we attempt to solve for x. We'll have to assume that a "multiplicative inverse" a[sup]-1[/sup] exists for a. I put the word multiplicative in quotes, because it is not clear what kind of multiplication we are dealing with. For now, I'll denote it by the pound sign (#). ax=b a[sup]-1[/sup]#ax=a[sup]-1[/sup]#b Now we have 2 ways to multiply vectors: the dot product and the cross product. Let's see if either one could possibly stand for #.

Dot product: a[sup]-1[/sup].ax=a[sup]-1[/sup].b Now we hit our first roadblock. What exactly is a[sup]-1[/sup] with respect to the dot product? A multiplicative inverse is supposed to have 3 properties: 1. It has to return an identity element when multiplied by a. 2. It has to be in the same set as a and the identity. 3. It has to be unique. There is no vector a[sup]-1[/sup] that satisfies all these properties. To prove this, assume that an identity element e exists such that a[sup]-1[/sup]a=e. You could try to define a[sup]-1[/sup] as a vector which, when put in a dot product with a, yields the scalar 1. But there are two problems with this. First, there are an infinite number of such vectors, which violates #3. And second, vectors aren't closed under the dot product, which violates #2. And of course, if we look at the other case, namely that in which an identity element e satisfying a[sup]-1[/sup]a=e does not exist, then we have contradicted condition 1 above. Thus, we cannot define vector division from the dot product. Moving on...

Cross product: Now let # stand for X. ax=b a[sup]-1[/sup]Xax=a[sup]-1[/sup]Xb We immediately run into 2 problems here. First, since a[sup]-1[/sup]Xa is obviously a vector, I can tentatively call that product e, the identity vector. Let us first assume that not all of the components of e are zero. Our first problem comes from the antisymmetry of the cross product. That is, aXb=-bXa. Now look at the action of taking the cross product of a and its inverse from each side: a[sup]-1[/sup]Xa=e aXa[sup]-1[/sup]=-e The problem here is that multiplication of a by its inverse should yield the identity element no matter which side you multiply from. But that's not the case here, unless we define 2 identity elements for this multiplication. In that case there can't be any sense in which this could be called "multiplication" in the usual sense, because in the usual sense there can be only one multiplicative identity. Now consider the second case: the components of e are all zero. That solves the problem above, but now our multiplicative identity is the zero vector, which leads to the division-by-zero problem. Thus, we cannot define vector division from the cross product.

How? No, rank 2 tensors won't get you out of the problems detailed above. The closest thing to "vector division" that I have ever encountered is "phasor division", which necessitates the use of complex numbers. What makes you think that the direction of F can cancel with anything? You're taking an operation on the reals, and applying it to something that is not even a mathematical object. At least, it isn't a mathematical object until you define its properties.
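Both failure modes argued above are easy to exhibit numerically; a small sketch with illustrative vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])

# Dot product: infinitely many vectors b satisfy a . b = 1,
# so a "dot inverse" is not unique (violates property 3):
for s, t in [(0.0, 0.0), (5.0, -2.0), (100.0, 3.0)]:
    b = np.array([1.0, s, t])
    print(np.dot(a, b))  # 1.0 every time

# Cross product: antisymmetry aXb = -(bXa) means no single
# two-sided identity element can exist:
c = np.array([0.0, 1.0, 0.0])
print(np.cross(a, c))  # [0. 0. 1.]
print(np.cross(c, a))  # [ 0.  0. -1.]
```

The loop shows the non-uniqueness directly, and the last two lines show the sign flip that rules out a single identity for the cross product.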