Zareon

Posts posted by Zareon

  1. Consider an n-dimensional complex vector space.

    The corresponding vector space of nxn matrices is n^2 dimensional.

    I want to find out whether there exists a basis for this space consisting of positive semidefinite matrices.

     

    The question seems simple, but I can't find a proof.

    Any help is appreciated.
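    For what it's worth, the answer appears to be yes. Here is a numerical sketch (my own construction, not from the thread) built from rank-one matrices vv*, which are positive semidefinite by construction: the n diagonal ones plus two families for each pair i<j give n + n(n-1) = n^2 matrices.

```python
import numpy as np

n = 3
e = np.eye(n, dtype=complex)
basis = []

# Rank-one matrices v v* are positive semidefinite by construction.
for i in range(n):
    v = e[:, i:i+1]
    basis.append(v @ v.conj().T)          # E_ii
for i in range(n):
    for j in range(i + 1, n):
        v = (e[:, i] + e[:, j]).reshape(-1, 1)
        basis.append(v @ v.conj().T)      # E_ii + E_jj + E_ij + E_ji
        w = (e[:, i] + 1j * e[:, j]).reshape(-1, 1)
        basis.append(w @ w.conj().T)      # E_ii + E_jj - i E_ij + i E_ji

# All n^2 matrices are PSD...
assert all(np.linalg.eigvalsh(M).min() > -1e-12 for M in basis)
# ...and linearly independent: flattening them gives a full-rank n^2 x n^2 matrix.
stacked = np.array([M.flatten() for M in basis])
print(np.linalg.matrix_rank(stacked))     # 9 = n^2
```

    From the three families you can recover every E_ij by taking complex linear combinations, which is why the rank check comes out full.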

  2. How would I go about showing that the function defined by f(x)=exp(-1/x^2) for x≠0 and f(x)=0 for x=0 has derivatives of all orders, and that all its derivatives at x=0 are 0?

     

    It seems obvious that f is infinitely differentiable for x not equal to 0, but I don't know how I would write down a proof. Taylor series come to mind, but nothing in the book deals with that, so there should be another way.

     

    I've shown f'(0)=0 by writing down the limit and using l'Hôpital's rule. But how would I show it for higher-order derivatives without explicitly calculating the derivatives and evaluating the limits?

    Would induction work? I've thought of letting g(x)=-1/x^2, so f(x)=exp(g(x)) and:

    f'=g'e^g

    f''=(g''+(g')^2)e^g

    f'''=(g'''+3g'g''+(g')^3)e^g

    'etc'

    whatever 'etc.' means here. I could find the general relation by induction and then use induction to compute the limits for all orders, but that doesn't seem to go anywhere.

    Anyone know of a better way?
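    A quick symbolic check of the key fact (my own sketch, using sympy): every derivative of exp(-1/x^2) has the form P(1/x)*exp(-1/x^2) for some polynomial P, so each one tends to 0 as x -> 0; combined with an induction using the definition of the derivative at 0, this gives f^(k)(0)=0.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

# Each derivative is P(1/x) * exp(-1/x**2), so the exponential factor
# forces the limit at 0 to vanish for every order.
expr = f
for k in range(1, 5):
    expr = sp.diff(expr, x)
    print(k, sp.limit(expr, x, 0))    # each limit is 0
```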

  3. I kinda forgot this topic, but here's the counterexample I promised.

    Consider the vector space l_2(R) consisting of the sequences (x_1,x_2,...) with x_i real numbers.

    Then define the linear operators R and L (the Rightshift and the Leftshift) by:

    R(x_1,x_2,x_3,...)=(0,x_1,x_2,...)

    L(x_1,x_2,x_3,...)=(x_2,x_3,x_4,...)

     

    The set of linear operators on l_2 of course forms a ring, and LR=1 but RL is not the identity.
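    The two shifts are easy to play with on finite truncations of the sequences (a toy sketch, my own):

```python
# Shifts on finite truncations of l_2 sequences.
def R(x):          # right shift: prepend a zero
    return [0] + x

def L(x):          # left shift: drop the first entry
    return x[1:]

x = [1, 2, 3, 4]
print(L(R(x)))     # [1, 2, 3, 4]  -> LR is the identity
print(R(L(x)))     # [0, 2, 3, 4]  -> RL is not
```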

  4. Try to find a counterexample. Try real hard. Hint: There are no such beasts for matrices over the reals.

     

    I think you're missing the point I'm making. There's no such thing for matrices (that's what I've just proved), but I can't believe the same holds for a general ring. I will try to find a counterexample.

  5. well, 2 things, Zhareon:

    1) There's a thing called kernel, etc. It measures the independence of each line of a matrix - that is, if you make the matrix into a set of vectors, it measures how many are independent. It is simple to prove that the dimension of the horizontal kernel is equal to that of the vertical kernel - so that if the matrix has an inverse on the right, then its horizontal kernel has dimension 0, so the vertical kernel has dimension 0, so it has a left inverse (this is from a while back, so anyone with a more correct way of saying it is welcome.)

    Ah, now you're using that the dimension of the row space is equal to the dimension of the column space. If A has a right inverse, the column space has dimension n (A is nxn). And A has a left inverse iff dim(rowspace A)=n.

     

    That's a neater way to look at it. But the fact that dim(rowspace)=dim(columnspace) (=rank A) is proved in my book by counting pivots. I hope there's a nicer way to see it than counting pivots.

     

    2) Actually, there is no group-like structure with right inverse but no left inverse unless you remove associativity.

    =Uncool-

     

    I forgot what those things were called, but I meant RINGS. The set of nxn matrices forms a ring. I'm pretty sure you cannot prove that if an element A of your ring has a right inverse then it also has a left inverse. I don't think it's hard to conjure up a counterexample. The fact that it is true for matrices means you have to use some properties of matrices, and that's why I dug into vectors and pivots and whatnot.

  6. Uncool did just that

    But I want AB=I => BA=I. Uncool assumes the existence of a matrix C for which CA=I. The difficulty in the proof is in showing that AB=I implies that there exists a matrix C such that CA=I. Showing then that B=C is the trivial step Uncool and others showed.

     

    There is no need to use that vector stuff. How do you justify removing the vectors at the end?

     

    I don't think you can prove it from general group-like properties. I'm sure some knowledgeable mathematician here can show there are group-like structures where elements have a right inverse but no left inverse. You really have to use some special properties of matrices.

  7. Thanks for all the replies. But most posts just show that if A has an inverse, then it is unique. That's somewhat trivial.

    What I wanted to know is that AB=I implies BA=I.

     

    I got the answer now, but it's not a very beautiful proof. It's all right though:

     

    First we use that the system Ax=b has a solution for any vector b iff A is row reducible to the identity matrix.

    Proof:

    (<=) Just row reduce the augmented matrix [A|b] to [I|c]. Then c is the solution.

    (=>) Every elementary row operation can be done by multiplying A on the left by an elementary matrix. If the reduced echelon form of A is not the identity, then H=(Et...E2E1)A has all zeros in its bottom row. (The Ei's are the elementary matrices corresponding to the operations.) So let b=(Et...E2E1)^(-1) en, where en is the n'th standard basis vector: enT=(0 0 ... 0 1). Then reduction of [A|b] gives [H|en], which has no solution.

     

    This last part is the 'ugly' part of the proof.

     

    Now suppose AB=I. Then the equation Ax=b has a solution for any vector b: just pick x=Bb, then Ax=A(Bb)=(AB)b=b. So A is row reducible to I by the above result, and there exist elementary matrices such that (Et...E2E1)A=I. This left inverse must equal the right inverse B, since (Et...E2E1)=(Et...E2E1)(AB)=((Et...E2E1)A)B=B. So BA=I.

     

    I think the proof can be made more beautiful by considering A as a linear function from R^n to R^n. I'll see if that gives more insight.
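    As a numerical sanity check (my own sketch, not part of the proof above): for a square matrix, a right inverse found by solving AB=I automatically turns out to be a left inverse too.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))      # a random square matrix (almost surely invertible)

# Find a right inverse B by solving A B = I...
B = np.linalg.solve(A, np.eye(n))
assert np.allclose(A @ B, np.eye(n))

# ...and observe that it is automatically a left inverse as well.
print(np.allclose(B @ A, np.eye(n)))     # True
```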

    I don't understand the "for any vector b" part. Had you said "for any vector x", I would have understood it as: you can multiply any vector x by the matrix B. What I don't understand is why any vector can be constructed by multiplying some vector x by B. There is definitely an additional restriction on B (just let B=0 and try to construct a [math] b \neq \vec 0 [/math]) which you didn't mention.

     

    What I meant was that, assuming AB=I, for any vector b there exists an x such that Bx=b. The idea being that if you multiply both sides by A, you get x=Ab. But I guess that's not a proof at all :-( It sounds OK, but I've got a shaky feeling about it. It probably assumes what I'm trying to prove.

     

    I'll go and try to understand your proof.

  9. I've read somewhere that a unitary matrix U can be defined by the property:

    (1) U*=U^{-1} (* = hermitian conjugate)

    or by the fact that it preserves lengths of vectors:

    (2) <Ux,Ux>=<x,x>

    I have trouble seeing why they are equivalent.

     

    It's easy to see that (1) => (2):

    <Ux,Ux>=(Ux)*(Ux)=x*(U*U)x=x*x=<x,x>

     

    But not the other way around. I CAN prove it for real vector spaces, where U is an orthogonal matrix from the fact that <v,w>=<w,v>. Then I would do:

     

    <v+w,v+w>=<U(v+w),U(v+w)>=<Uv,Uv>+<Uw,Uw>+2<Uv,Uw>=<v,v>+<w,w>+2<Uv,Uw>, and expanding the left side gives <Uv,Uw>=<v,w>.

    and from this that the columns of U are orthonormal, since [math]<Ue_i,Ue_j>=<e_i,e_j>=\delta_{ij}[/math]

     

    But for a complex vector space where <v,w>=<w,v>* all the above gives is:

    Re(<Uv,Uw>)=Re(<v,w>).

     

    EDIT: made some mistakes :P
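    For what it's worth, one standard way to finish (my addition, not from the thread) is to repeat the trick with v+iw. With the convention <v,w>=v*w used above,

    [math]\langle v+iw,v+iw\rangle=\langle v,v\rangle+\langle w,w\rangle-2\,\mathrm{Im}\langle v,w\rangle[/math]

    so length preservation applied to v+iw gives Im<Uv,Uw>=Im<v,w>. Together with the real part from v+w this yields <Uv,Uw>=<v,w>, and then taking v=e_i, w=e_j shows the columns of U are orthonormal, i.e. U*U=I.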

  10. Thanks for the replies. Would the following proof be correct?

     

    Thm:

    Suppose A, B are square matrices and AB=I. Then BA=I also and B=A^-1.

     

    Proof:

    Suppose AB=I, then the system Bx=b has a solution for any column vector b, since x=(AB)x=A(Bx)=Ab.

    Now we have

    B(AB)x=Bx=b

    on the other hand:

    (BA)Bx=(BA)b. So (BA-I)b=0 for any column vector b, therefore BA=I.

     

    I think there are a few gaps in the logic. Can anyone help me prove this?

  11. For matrices, if AB=I, then does that mean BA=I also?

     

    If I have two matrices with AB=I, is that sufficient to conclude that B is the inverse of A? Or do I have to calculate BA explicitly too?

     

    I've tried finding a simple 2x2 counterexample, but I can't find any. All the examples with AB=I that I've conjured up also have BA=I.

  12. @Zareon: Apart from some nitpicking one could do (e.g. the motion of an object B is not independent of its mass if it gravitationally influences other bodies G around it which causes a change in the gravitational field caused by G), that all sounds fine.

    I see. The reason I asked is that, if it's true, then why would I be able to tell that I'm falling? You get that really strange feeling in your stomach (I've never skydived, but I can imagine). But I realize that must be just what it feels like when gravity 'falls away'. So the astronauts in a space station like the ISS must feel like they're constantly falling, even when they're sleeping. That must be so weird!

     

    Hmm, another thing that got me thinking. In Newtonian gravity, a (rigid) object doesn't exert a net force on itself, just like you can't pull yourself up by your own hair. But can an object influence its own path in general relativity? That is, can the space-time distortion caused by an object be such that the region of spacetime in which the object itself sits is curved due to its own mass?

  13. Hi, I have some questions. I hope you guys can help me out.

     

    Gravity affects all objects in the same way, right? The motion an object makes in free fall (no forces except gravity) is independent of its mass.

    General relativity says an object follows its natural path (straight line, or geodesic) through a curved spacetime if no forces are acting on it. That means nothing is really 'pulling' on an object, right?

     

    When I sit in a car making a sharp turn I can feel the acceleration. I get pushed against the side of the car; the car pushes on me and makes me go in the other direction. My internal organs have a tendency to stay behind too, so because of the force my body exerts on them I can feel I'm accelerating even with my eyes closed. Is that correct so far?

    Now with gravity. All parts of my body are affected by it and accelerate in the same way, so I shouldn't be able to feel any acceleration. Is that also what the equivalence principle says? That an observer in freefall wouldn't be able to know whether he's in freefall by performing local experiments? So it's just like he's in an inertial frame.

  14. Thanks for the replies.

     

    swansont, I believe I read (I think in Cohen-Tannoudji's Quantum Mechanics) that in this case the wavefunction is spread out in space in two parts. It's in a superposition of two wave packets, one going up and one going down according to spin, and it collapses when it hits the plate. It's fine with me either way. The relevant question is whether there is a measurable difference between the two.

    I guess two measurements are involved. One of the z-component of spin, the other of the position, but they commute so you measure both at once and it doesn't matter.

    But there's this new QM book "Quantum Physics" on the market by M. Le Bellac, which introduces something like an ideal measurement that does not disturb the state when you measure it (and says the measurement postulate is redundant). There have also been so-called 'quantum non-demolition' experiments which do this. It goes straight against what I learned: that you can't measure a quantum state without disturbing it.

     

    Any enlightenment on this is greatly appreciated.

  15. Hi, I have a question that's been bugging me for a while now. According to QM, a measurement collapses the wavefunction into an eigenstate corresponding to the measured eigenvalue (projection onto the measured eigenspace).

     

    If I take the Stern-Gerlach apparatus: WHEN does the wavefunction collapse? (When is the measurement made?)

    Is it when the electron leaves the magnetic field?

    Or when the electron hits the plate?

    Or when I look at the plate?

    Or otherwise?

     

    I thought (actually, just assumed) that it happens when the electron hits the plate, but I'm not sure anymore. Is there any detectable way in which we can distinguish these cases (in particular cases 1 and 2)?

  16. It's not necessary to try to 'see that the GCD can't get bigger'.

     

    If d divides x and y it divides ax+by for integers a and b. So in particular the divisors of n and m are the same as the divisors of n-m and m (you don't actually need the restriction on n=>m: divisors are well defined for negative numbers).

     

    So the set of divisors of n and m is the same as the set of divisors of n-m and m, hence the largest element in each of those two sets must be the same.

     

    Ah, I see. It's so freaking obvious now. Thanks!
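    The identity gcd(n,m)=gcd(n-m,m) from the quoted argument is exactly what drives the subtraction-only form of Euclid's algorithm; a small sketch (my own):

```python
# Subtractive Euclid, using gcd(n, m) == gcd(n - m, m) from the argument above.
def gcd(n, m):
    while m != 0:
        if n < m:
            n, m = m, n    # keep n >= m
        n = n - m          # divisors of (n, m) == divisors of (n - m, m)
    return n

print(gcd(252, 105))       # 21
```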

  17. In the free electron model (electrons in a box/conductor), why do we use periodic boundary conditions? What's the idea/justification for it?

     

    I understand that, to have a unique solution to the SE, we need a boundary condition. We can either choose the wavefunction to be zero at the edges or use periodic boundary conditions. The first gives rise to standing waves, the second to traveling waves.

     

    I've heard things like:

    - Periodic boundary conditions are better, since we get traveling waves, which makes the jump to the study of electron transport phenomena easier.

    That's pedagogically very cute, but doesn't give me any insight.

     

    - The idea is that the box is very big, so whatever happens a distance L away doesn't affect things here, and we can simply apply periodic boundary conditions without affecting the physics.

    Well, the traveling plane waves are infinite in extent and non-normalizable. Their position distribution is uniform throughout the conductor, so that kind of defeats the argument by itself.

     

    And yet, it seems that the application of periodic boundary conditions is very important and leads to certain results you would not otherwise get. Can anyone enlighten me about the wisdom behind this?
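    One concrete way to see why the choice shouldn't matter for bulk quantities (my own numerical sketch; the box length and cutoff are arbitrary numbers): both boundary conditions give the same number of allowed states below a cutoff wavevector, up to boundary terms that vanish for large L.

```python
import numpy as np

L_box = 1000.0   # box length (arbitrary units); large, so boundary details wash out
kF = 5.0         # cutoff wavevector

# Fixed (hard-wall) boundary conditions: standing waves with k_n = n*pi/L, n = 1, 2, ...
n_fixed = int(np.floor(kF * L_box / np.pi))

# Periodic boundary conditions: traveling waves with k_n = 2*pi*n/L, n in Z, |k_n| <= kF.
n_periodic = 2 * int(np.floor(kF * L_box / (2 * np.pi))) + 1

print(n_fixed, n_periodic)        # the two counts agree up to O(1) out of ~1600
```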

  18. E is not the energy. E is the electric field. Yeah, confusing... I'm gonna denote the stored energy by W.

     

    So the electric field is constant: [math]E=\sigma/\epsilon_0[/math].

     

    And the energy is: [math]W=1/2CV^2=\frac{\epsilon_0}{2}AdE^2[/math]

     

    What I meant is that the capacitance decreases inversely with the distance between the plates. That's what I meant by goes like 1/x, or rather 1/d. But the potential V increases linearly with the distance (goes like x). So the energy 1/2CV^2 increases linearly with the distance between the plates (like x). And I now realize this is not exactly the same as what I said in my previous post, where I said V goes like x^2... which is wrong.

  19. I have the following question. Some space object (galaxy or star far away) moves from point A to point B. The object is travelling with speed v at an angle theta to the line of sight.

    (picture sucks...)

    A

    | \

    | \

    | B

    | |

    |---ds---|

    | |

    | |

    to earth

     

    Suppose the light from B reaches earth a time [math]\Delta t[/math] after the light from A. I have to find the apparent velocity across the celestial sphere, that is [math]\Delta s/\Delta t[/math]

     

    This seemingly easy question took me some time to figure out. I named the time for the object to go from A to B t'. Then t' is the time [math]\Delta t[/math] plus the time for light to travel the vertical distance of AB:

    [math]t'=\Delta t+\frac{v\cos(\theta)t'}{c}[/math]

    also we have:

    [math]\Delta s = v\sin(\theta)t'[/math]

    Giving:

    [math]\frac{\Delta s}{\Delta t}=\frac{v\sin(\theta)}{1-\frac{v}{c}\cos(\theta)}[/math]

     

    Is this correct? I'm not sure, because although it gives plausible answers at first sight, it seems to give nonsense (faster-than-light) answers as v->c.
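    The formula is in fact the standard one for apparent transverse motion on the sky, and the v->c behaviour is not nonsense: the apparent speed really can exceed c (the well-known "superluminal motion" seen in quasar jets; no information travels faster than light). A small sketch with arbitrary numbers (my own):

```python
import numpy as np

def apparent_speed(v, theta, c=1.0):
    """Apparent transverse speed ds/dt for source speed v at angle theta to the line of sight."""
    return v * np.sin(theta) / (1 - (v / c) * np.cos(theta))

# For v close to c and a small angle, the apparent speed exceeds c.
print(apparent_speed(0.99, np.radians(10)))   # ~6.9 with c = 1: faster than light in appearance
```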

  20. Thanks for the replies.

     

    Externet:

    I know the capacitance decreases (like 1/x), but the potential difference increases (like x^2) so the total energy is increased and gives the same (correct) value for the energy.

     

    Meir Achuz:

    That took me a while to grasp, but I think I see what you are saying. The plate can't exert a force on itself (just like I can't lift myself up by my own hair). So I shouldn't include the E-field created by that plate when calculating the work done, only the one from the other plate, which is E/2. Is that right?

    Thanks a lot!

  21. Take a parallel plate capacitor, area A and distance d.

    I know that E=sigma/e0, V=Ed, C=Ae0/d.

    I have also shown that the energy stored is 1/2CV^2=(e0/2)(E^2)(Ad).

     

    Now, if I increase the distance between the plates by an amount x, then E doesn't change and the increase in energy is (e0/2)(E^2)(Ax).

     

    But the work done by increasing the distance is Fx=(QE)x=A(sigma)Ex=e0(E^2)(Ax). Not equal to (e0/2)(E^2)(Ax), but twice that!! What's going on here!?
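    A symbolic check of the resolution mentioned in the reply above (the force on a plate comes only from the other plate's field, E/2) shows the factor of 2 disappears; a sketch with sympy:

```python
import sympy as sp

eps0, sigma, A, d, x = sp.symbols('epsilon_0 sigma A d x', positive=True)

E = sigma / eps0                                       # field between the plates
Q = sigma * A                                          # charge on one plate
W = sp.Rational(1, 2) * (eps0 * A / d) * (E * d)**2    # stored energy W = (1/2) C V^2

dW = W.subs(d, d + x) - W      # energy gained by widening the gap by x
F_work = Q * (E / 2) * x       # work done against the OTHER plate's field, E/2

print(sp.simplify(dW - F_work))    # 0: energy balance works out exactly
```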

  22. I'm trying to prove the following:

     

    An algebraically closed field is of infinite extension over its prime subfield.

     

    I'm not sure I translated it correctly. What I mean is: Let K be an algebraically closed field and k its prime subfield. Then the degree of the extension K/k is infinite, i.e. [K:k] is not finite.

     

    I have in my mind an example like C/Q (the complex and rational number fields). I know it's true in this case, because C contains numbers that are transcendental over Q.

    So I tried to prove K must contain a transcendental element, since if the extension were finite, it would be algebraic. But I'm not sure this would work, because not every infinite extension is transcendental (there are infinite algebraic extensions). In any case, I haven't progressed much.

     

    Help is appreciated.
