Zareon

Senior Members
  • Posts: 33
  • Joined
  • Last visited

  1. Consider an n-dimensional complex vector space. The corresponding vector space of nxn matrices is n^2-dimensional. I want to find out whether there exists a basis for this space consisting of positive semidefinite (semipositive) matrices. The question seems simple, but I can't find a proof. Any help is appreciated.
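A minimal numerical check (not from the original thread): one candidate is the n^2 rank-one matrices v v* with v = e_i, e_i + e_j and e_i + i e_j (i < j); each is positive semidefinite, and together they turn out to span the nxn matrices. The sketch below just verifies their linear independence for a small n; the helper name candidate_basis and the test size n = 4 are arbitrary choices made here for illustration.

[code]
import numpy as np

# Not from the original thread: build the n^2 rank-one positive semidefinite
# matrices v v* from v = e_i, e_i + e_j, e_i + 1j*e_j (i < j) and check that
# they are linearly independent, i.e. a basis of the nxn complex matrices.
# The helper name candidate_basis and the test size n = 4 are arbitrary.

def candidate_basis(n):
    I = np.eye(n, dtype=complex)
    vecs = [I[:, i] for i in range(n)]              # the e_i
    for i in range(n):
        for j in range(i + 1, n):
            vecs.append(I[:, i] + I[:, j])          # e_i + e_j
            vecs.append(I[:, i] + 1j * I[:, j])     # e_i + i*e_j
    return [np.outer(v, v.conj()) for v in vecs]    # each v v* is PSD

n = 4
mats = candidate_basis(n)
flat = np.array([m.ravel() for m in mats])          # one flattened matrix per row
print(len(mats), np.linalg.matrix_rank(flat))       # expect: 16 16
[/code]

Full rank n^2 means the matrices are linearly independent and hence a basis, since the space itself is n^2-dimensional.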
  2. How would I go about showing that the function defined by f(x)=exp(-1/x^2) for x≠0 and f(x)=0 for x=0 has derivatives of all orders, and that all the derivatives at x=0 are 0? It seems obvious that f is infinitely many times differentiable for x not equal to 0, but I don't know how I would write down a proof. Taylor series come to mind, but nothing in the book deals with that, so there should be another way. I've shown f'(0)=0 by writing down the limit and using L'Hospital. But how would I show it for higher-order derivatives without explicitly calculating the derivatives and evaluating the limits? Would induction work? I've thought of letting g(x)=-1/x^2, so f(x)=exp(g(x)) and: f'=g'e^g, f''=(g''+(g')^2)e^g, f'''=(g'''+3g'g''+(g')^3)e^g, and so on, though I can't see the general pattern. I could explicitly find the relation using induction and then use induction to calculate the limits for all orders, but that doesn't seem to go anywhere. Anyone know of a better way?
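A sketch of the standard inductive route (my wording, not from the original thread): first show by induction that for x≠0 every derivative has the form [math]f^{(n)}(x)=P_n\!\left(\tfrac{1}{x}\right)e^{-1/x^2}[/math] for some polynomial P_n — differentiating such an expression reproduces the same form. Then, assuming f^(n)(0)=0, compute the next derivative at 0 straight from the difference quotient: [math]f^{(n+1)}(0)=\lim_{h\to 0}\frac{f^{(n)}(h)-f^{(n)}(0)}{h}=\lim_{h\to 0}\tfrac{1}{h}\,P_n\!\left(\tfrac{1}{h}\right)e^{-1/h^2}=0,[/math] because e^{-1/h^2} vanishes faster than any power of 1/h blows up (substitute t=1/h and use that t^k e^{-t^2} → 0 for every k). No explicit formula for the derivatives is needed.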
  3. I kinda forgot this topic, but here's the counterexample I promised. Consider the vector space l_2(R) consisting of the sequences (x_1,x_2,...) with the x_i real numbers. Then define the linear operators R and L (the Rightshift and the Leftshift) by: R(x_1,x_2,x_3,...)=(0,x_1,x_2,...) and L(x_1,x_2,x_3,...)=(x_2,x_3,x_4,...). The set of linear operators on l_2 forms a ring of course, and LR=1 but RL is not the identity.
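For the record, the verification (my addition, using the same operators as in the post above): [math]LR(x_1,x_2,x_3,\dots)=L(0,x_1,x_2,\dots)=(x_1,x_2,x_3,\dots),[/math] so LR=1, while [math]RL(x_1,x_2,x_3,\dots)=R(x_2,x_3,\dots)=(0,x_2,x_3,\dots),[/math] which differs from the original sequence whenever x_1≠0, so RL≠1.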
  4. I think you're missing the point I'm making. There's no such thing for matrices (that's what I've just proved), but I can't believe the same holds for a general ring. I will try to find a counterexample.
  5. Ah, now you're using that the dimension of the row space is equal to the dimension of the column space. If A has a right inverse, the column space has dimension n (A is nxn). And A has a left inverse iff dim(rowspace A)=n. That's a neater way to look at it. But that dim(rowspace)=dim(columnspace) (=rank A) is shown in my book by counting pivots. I hope there's a nicer way to look at it apart from counting pivots. I forgot what those things were called, but I meant RINGS. The set of nxn matrices forms a ring. I'm pretty sure you cannot prove that if an element A of your ring has a right inverse then it also has a left inverse. I don't think it's hard to conjure up a counterexample. The fact that it is true for matrices means you have to use some properties of matrices, and that's why I dug into vectors and pivots and whatnot.
  6. But I want AB=I => BA=I. Uncool assumes the existence of a matrix C for which CA=I. The difficulty in the proof is in showing that AB=I implies that there exists a matrix C such that CA=I. Showing then that B=C is the trivial step Uncool and others showed. I don't think you can prove it from general group-like properties. I'm sure some knowledgeable mathematician here can show there are group-like structures where elements have a right inverse but no left inverse. You really have to use some special properties of matrices.
  7. Thanks for all the replies. But most posts just show that if A has an inverse, then it is unique. That's somewhat trivial. What I wanted to know is that AB=I implies BA=I. I got the answer now, but it's not a (very) beautiful proof. It's all right though. First we use that the system Ax=b has a solution for every vector b iff A is row reducible to the identity matrix. Proof: (<=) Just row reduce the augmented matrix [A|b] to [I|c]; then c is the solution. (=>) Every elementary row operation can be done by multiplying A on the left by an elementary matrix. If the reduced echelon form of A is not the identity, then H=(Et...E2E1)A has an all-zero bottom row (the Ei's are the elementary matrices corresponding to the operations). So let b=(Et...E2E1)^-1 en, where en is the n'th standard basis vector: en^T=(0 0 ... 0 1). Then reduction of [A|b] gives [H|en], which has no solution. This last part is the 'ugly' part of the proof. Now suppose AB=I; then the equation Ax=b has a solution for any vector b: just pick x=Bb, then Ax=A(Bb)=(AB)b=b. So A is row reducible to I by the above result, so there exist elementary matrices such that (Et...E2E1)A=I. Writing C=Et...E2E1, we have CA=I and AB=I, which implies B=C (the uniqueness step others showed), so BA=CA=I. I think the proof can be made more beautiful by considering A as a linear function from R^n to R^n. I'll see if that gives more insight.
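Following up on that last remark, here is how the linear-map version could go (my sketch, not from the original post): view A and B as linear maps from R^n to R^n. AB=I means A is surjective, since every b equals A(Bb). By the rank-nullity theorem a surjective linear map from R^n to itself has a zero-dimensional kernel, so it is also injective, hence bijective, and its inverse map A^-1 is again linear (a matrix). Then B=(A^-1 A)B=A^-1(AB)=A^-1, so BA=A^-1 A=I. The row reduction is replaced by rank-nullity; the uniqueness step is the same.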
  8. Thanks, the tree. Anyway, I've found the answer. I just had to evaluate <U(v+iw),U(v+iw)> to find that Im(<Uv,Uw>)=Im(<v,w>) as well.
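Written out (my addition, with the convention used in the question below, i.e. the inner product conjugate-linear in the first slot): [math]\langle v+iw,\,v+iw\rangle=\langle v,v\rangle+\langle w,w\rangle-2\,\mathrm{Im}\langle v,w\rangle,[/math] and the same expansion holds with Uv, Uw in place of v, w. Since <Ux,Ux>=<x,x> for every x, comparing the two expansions gives Im(<Uv,Uw>)=Im(<v,w>); together with the real part obtained from expanding <U(v+w),U(v+w)>, this yields <Uv,Uw>=<v,w> for all v, w, so in particular the columns Ue_i are orthonormal, i.e. U*U=I.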
  9. What I meant was that, assuming AB=I, for any vector b there exists an x such that Bx=b, the 'proof' being that if you multiply both sides by A you get x=Ab. But I guess that's not a proof at all. It sounds OK, but I've got a shaky feeling about it; it probably assumes what I'm trying to prove. I'll go and try to understand your proof.
  10. I've read somewhere that a unitary matrix U can be defined by the property (1) U*=U^{-1} (* = hermitian conjugate), or by the fact that it preserves lengths of vectors: (2) <Ux,Ux>=<x,x>. I have trouble seeing why they are equivalent. It's easy to see that (1) => (2): <Ux,Ux>=(Ux)*(Ux)=x*(U*U)x=x*x=<x,x>. But not the other way around. I CAN prove it for a real vector space, where U is an orthogonal matrix, using the fact that <v,w>=<w,v>. Then I would do: <v+w,v+w>=<U(v+w),U(v+w)>=<Uv,Uv>+<Uw,Uw>+2<Uv,Uw>=<v,v>+<w,w>+2<Uv,Uw>, and working out the left side gives <Uv,Uw>=<v,w>, and from this that the columns of U are orthonormal, since [math]<Ue_i,Ue_j>=<e_i,e_j>=\delta_{ij}[/math]. But for a complex vector space, where <v,w>=<w,v>*, all the above gives is Re(<Uv,Uw>)=Re(<v,w>). EDIT: made some mistakes
  11. Thanks for the replies. Would the following proof be correct? Thm: Suppose A, B are square matrices and AB=I. Then BA=I also and B=A^-1. Proof: Suppose AB=I; then the system Bx=b has a solution for any column vector b, since x=(AB)x=A(Bx)=Ab. Now we have B(AB)x=Bx=b, and on the other hand (BA)Bx=(BA)b. So (BA-I)b=0 for any column vector b, therefore BA=I. I think there are a few gaps in the logic. Can anyone help me prove this?
  12. For matrices, if AB=I, does that mean BA=I also? If I have two matrices and AB=I, is that sufficient to conclude that B is the inverse of A? Or do I have to calculate BA explicitly too? I've tried finding a simple 2x2 counterexample but I can't find any; all the examples with AB=I that I've conjured up also have BA=I.
  13. Zareon

    Gravity

    I see. The reason I asked is that, if it's true, then why am I able to tell that I'm falling? You get that really strange feeling in your stomach (I've never skydived, but I can imagine). But I realize that must be just what it feels like when gravity 'falls away'. So the astronauts in a space station like the ISS must feel like they're constantly falling, even when they're sleeping. That must be so weird! Hmm, another thing got me thinking. In Newtonian gravity, a (rigid) object doesn't exert a net force on itself, just like you can't pull yourself up by your hair or something. But can an object influence its own path in general relativity? That is, can the spacetime distortion caused by an object be such that the region of spacetime the object itself occupies is curved by its own mass?
  14. Zareon

    Gravity

    Hi, I have some questions. I hope you guys can help me out. Gravity affects all objects in the same way, right? The motion an object makes in free fall (no forces except gravity) is independent of its mass. General relativity says an object follows its natural path (a straight line, or geodesic) through curved spacetime if no forces are acting on it. That means nothing is really 'pulling' on an object, right? When I sit in a car making a sharp turn I can feel the acceleration. I get pushed against the side of the car; the car pushes on me and makes me go in the other direction. My internal organs have a tendency to stay behind too, so because of the force my body exerts on them I can feel I'm accelerating even with my eyes closed. Is that correct so far? Now with gravity. All parts of my body are affected by it and accelerate in the same way, so I shouldn't be able to feel any acceleration. Is that also what the equivalence principle says? That an observer in free fall wouldn't be able to know whether he's in free fall by performing local experiments? So it's just as if he's in an inertial frame.
  15. Thanks for the replies. swansont, I believe I've read (I think in Cohen-Tannoudji's Quantum Mechanics) that in this case the wavefunction is spread in space over two parts. It's in a superposition of two wave packets, one going up and one going down according to the spin, and it collapses when it hits the plate. It's fine with me either way; the relevant question is whether there is a measurable difference between the two. I guess two measurements are involved, one of the z-component of spin and the other of the position, but they commute, so you measure both at once and it doesn't matter. But there's this new QM book "Quantum Physics" on the market by M. LeBellac which introduces something like an ideal measurement that does not disturb the state when you measure it (and says the measurement postulate is redundant). There have also been so-called 'quantum nondemolition' experiments which do this. It goes straight against what I learned: that you can't measure a quantum state without disturbing it. Any enlightenment on this is greatly appreciated.