
Question about spin


KipIngram


Ok, so I've been watching the lectures Leonard Susskind gave at Stanford that are available on the internet.  He described a superposition of states sort of along the lines below.  For purposes of this discussion we're assuming the position of the electron is "pinned down" somehow so that all we have to consider is spin.

1) Prepare the electron by applying a strong magnetic field.  This will align the electron's magnetic moment with that field.  I don't care whether a photon is emitted in this step or not - this is the preparation phase.

2) Now remove the preparation field, and apply a measurement field, at a different angle.  Classical electromagnetic theory makes a prediction about how much energy should be radiated in this situation, but we do not see that amount of energy.  Instead we either see a single photon that has a larger amount of energy, or we see no photon.  In either case we presume after the measurement that the electron is now aligned with the measurement field.  So if we saw a photon, we say that it was initially 180 degrees out of alignment with the measurement field, and if we don't see a photon we say that it was initially at 0 degrees.  So it was in a superposition of the 180 degree and 0 degree states.

So this bothers me.  I'd like to consider the small instant of time after we turn off the preparation field but before we turn on the measurement field.  It seems clear to me that the electron is not in a superposition of states at this time - the measurement field doesn't yet exist to define what those superposed states would be.  On the other hand, if I turn the preparation field back on again, I will never get a photon - so it seems clear to me that the electron is still "aligned with A."

Then, after measurement, it's aligned with B.  The initial and final states of the electron seem completely clear - no "superposition" is required.  The only thing that requires a probabilistic interpretation is whether or not a photon is emitted.

So, my question is this: Why is it not adequate, when explaining this situation, to simply attach probability (that depends on the direction of the preparation field and the direction and strength of the measurement field) to whether or not a photon is emitted, while declaring the initial and final states of the electron to be fully specified by the directions of the two fields?  Why does the framework "push the fuzziness" all the way to the physical state of the electron itself?  It seems like it's then "escalating" this fuzzy element to the macroscopic level with some mechanism that gets us "cats in superposed states of alive and dead," and so on.

The spot in this where I see a possible weakness is saying that the electron's magnetic moment is "aligned with the preparation field" to start with - that implicitly specifies all three components of that direction, which may already be a misstep.  But Susskind either said that or strongly created that image in my mind when he lectured through this stuff.

Ok, I'm going to reply to my own topic, based on a later lecture in the series.  Is the resolution of my question above somehow related to this line of reasoning:

In classical thinking, if a magnetic moment points in some direction (say positive x axis), then it has NO COMPONENT in the y or z directions.  But when the system is described using quantum states, the states that describe a y direction or a z direction are not orthogonal to the state that describes the positive x direction.  Only the state that describes the negative z direction has that character.  So if such an electron (prepared +x) is allowed to participate in some series of events, and the y-axis component of spin is important in that series, then to get the right answer we must presume that the events happened both ways, with +y and -y, and then take out the probabilities at the very end when we actually perform a measurement.  We can't get the right answer by saying "well, the electron was oriented +x, and therefore the y and z components were zero."
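This non-orthogonality can be checked numerically with the standard spin-1/2 spinors written in the z-basis (a sketch of my own, not from the lectures; the function name is just illustrative):

```python
import math

# Standard spin-1/2 states in the z-basis, as 2-component complex spinors.
s = 1 / math.sqrt(2)
plus_x  = (s, s)
minus_x = (s, -s)
plus_y  = (s, 1j * s)
minus_y = (s, -1j * s)

def overlap_prob(a, b):
    """Born-rule probability |<a|b>|^2 of finding state b to be in state a."""
    inner = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(inner) ** 2

# The y states are NOT orthogonal to |+x>; only |-x> is.
print(overlap_prob(plus_y, plus_x))   # ~0.5
print(overlap_prob(minus_y, plus_x))  # ~0.5
print(overlap_prob(minus_x, plus_x))  # ~0.0
```

So an electron prepared along +x has a 50/50 chance for either y outcome, and zero chance only for -x, exactly as described above.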

I guess this matters because it's possible for all of those cases to interfere with each other as the system evolves unmeasured?  And if we inserted an interim measurement to determine which, say, y case was in play, then we would no longer have any contribution of the other y case, but now we'd have to consider both possible x cases thereafter, whereas before that interim measurement we only had to consider the +x case.
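The interference point can be made concrete with a toy calculation (my own construction, not from the lectures): prepare |+x>, decompose over the two y states, then ask for |+x> again - summing amplitudes for the unmeasured case versus summing probabilities for the interim-measurement case.

```python
import math

# Spin-1/2 states in the z-basis as 2-component complex spinors.
s = 1 / math.sqrt(2)
plus_x  = (s, s)
plus_y  = (s, 1j * s)
minus_y = (s, -1j * s)

def inner(a, b):
    """Inner product <a|b> of two 2-component spinors."""
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

# Unmeasured: sum the amplitudes over both y paths, then square once.
amp_coherent = sum(inner(plus_x, y) * inner(y, plus_x)
                   for y in (plus_y, minus_y))
print(abs(amp_coherent) ** 2)  # ~1.0 -- the two y paths interfere

# Interim y measurement: square each path first, then add probabilities.
prob_incoherent = sum(abs(inner(plus_x, y)) ** 2 * abs(inner(y, plus_x)) ** 2
                      for y in (plus_y, minus_y))
print(prob_incoherent)  # ~0.5 -- the interference term is gone
```

Without the interim measurement the two y contributions add coherently and the electron is certain to still be +x; measuring y in between destroys that and leaves only 50%.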

Am I vaguely on the right track here?


4 hours ago, KipIngram said:

Ok, so I've been watching the lectures Leonard Susskind gave at Stanford that are available on the internet.  He described a superposition of states sort of along the lines below.  For purposes of this discussion we're assuming the position of the electron is "pinned down" somehow so that all we have to consider is spin.

 

This is rather vague.

The first thing to consider is

Is the electron bound to something - an atom, an ion, a crystal lattice...?

Or is it free, as in an electron beam or beta ray?

 

Then you can think about its spin.


4 hours ago, KipIngram said:

 The spot in this that I see a possible weakness is saying that the electron's magnetic moment is "aligned with the preparation field" to start with - that is implicitly specifying all three components of that direction - this may already be a misstep.  But Susskind either said that or strongly created that image in my mind when he lectured through this stuff.

You only know one component, typically labeled as the z axis. 

Did Susskind use 0 and 180 in his example? 


Hi guys.  First, studiot: he said to neglect everything else about the electron and consider only its spin - basically he said "imagine it's nailed down so it doesn't get away from us."

Next, swansont, he did say that if we'd prepared the electron in a certain direction, then there was 0% probability of finding it at 180 degrees to that preparation.  He said if we re-applied the same magnetic field we'd used to prepare it, we would never get a photon, and that if we applied a field at 180 degrees to the original, we would always get a photon.  Then if the measurement field was at any other angle (not equal to or exactly opposed to the preparation field), there would be a probability of getting a photon, which, from the later lectures, turned out to be [ 1 - cos(angle) ] / 2, where "angle" is the angle between the preparation and measurement fields.
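That probability law can be sanity-checked in a few lines (a small sketch of my own; the function name is just illustrative):

```python
import math

def photon_probability(angle_deg):
    """Probability of getting a photon when the measurement field sits
    angle_deg away from the preparation field: [1 - cos(angle)] / 2,
    which is the same thing as sin^2(angle / 2)."""
    return (1 - math.cos(math.radians(angle_deg))) / 2

print(photon_probability(0))    # 0.0 -- re-applied preparation field: never
print(photon_probability(180))  # 1.0 -- opposed field: always
print(photon_probability(30))   # ~0.067
```

The two endpoint cases reproduce the "never" and "always" statements exactly, with everything in between probabilistic.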

I think the later lectures helped me - after I heard the first one I was imagining a particular simple preparation / measurement flow, and I don't think the full scope of the formalism was required to get the right answer for that one.  I was able to avoid thinking of a superposition of actual electron states, and instead just think of a probabilistic emission of a photon.  But I think now that more complex situations might require the whole standard shebang in order to come out right every time.


Quote

Wikipedia

Fermi transitions

A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 0, leading to an angular momentum change ΔJ = 0 between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by

\mathcal{O}_F = G_V \sum_a \hat{\tau}_{a\pm}

with G_V the weak vector coupling constant, \tau_\pm the isospin raising and lowering operators, and a running over all protons and neutrons in the nucleus.

Gamow-Teller transitions

A Gamow-Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 1, leading to an angular momentum change ΔJ = 0, ±1 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by

\mathcal{O}_{GT} = G_A \sum_a \hat{\sigma}_a \hat{\tau}_{a\pm}

with G_A the weak axial-vector coupling constant, and σ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon.

https://en.wikipedia.org/wiki/Beta_decay


4 hours ago, KipIngram said:

Next, swansont, he did say that if we'd prepared the electron in a certain direction, then there was 0% probability of finding it at 180 degrees to that preparation.  He said if we re-applied the same magnetic field we'd used to prepare it, we would never get a photon, and that if we applied a field at 180 degrees to the original, we would always get a photon.  Then if the measurement field was at any other angle (not equal to or exactly opposed to the preparation field), there would be a probability of getting a photon, which, from the later lectures, turned out to be [ 1 - cos(angle) ] / 2, where "angle" is the angle between the preparation and measurement fields.

0 and 180 - There is no superposition involved - it would be an issue of going from spin up to spin down, so you get a photon.

At an arbitrary angle, you would have some component in the preparation axis, given by that equation.


Right - it was for the other angles (say, like 30 degrees off from the preparation angle) where he said there would be a superposition.

For example, say we prepare at angle 0, then measure at angle 30.  He said we'd then get a superposition of the 30 degree and 210 degree states (though he didn't say it like that).  Just that we might get "no photon," which would imply the measured system aligned with the measurement field, or "full photon," which would imply it was 180 degrees out from the measurement field.

I hope I'm describing this well enough - if anything sounds off the problem is me, not Susskind. :)


12 hours ago, KipIngram said:

that is implicitly specifying all three components of that direction - this may already be a misstep. 

Indeed it may be, if you wish to actually understand what may be happening.

12 hours ago, KipIngram said:

Is the resolution of my question above somehow related to this line of reasoning

Yes. Simply because we have chosen to describe a thing as having three components does not mean that it actually has three components. To be specific, describing the thing via three components may be sufficient to perfectly match the observations, but it may not be necessary.

For example, you could determine an object's speed by specifying the three components of its velocity vector and then using them to compute the speed. Or you could just measure the speed - a single scalar - and skip the entire three-component description. In effect, this is what the Born rule accomplishes in quantum theory - computing a single scalar (a probability) from the vector (or spinor) components, which were never actually necessary, but which are in fact sufficient.

The problem comes if you ever try to make a one-to-one correlation between the assumed components and the actual attributes of the entity being described by those components (as Bell's theorem attempts to do). Any entity that manifests less than three bits of observable information will never exhibit the three unique bits of information that would be required to form a unique one-to-one correlation with the three components in the description - resulting in rather weird correlations, if you make the unfortunate decision to interpret them as being obtained from measurements of an entity that actually does exhibit three measurable and independent components.
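The speed-versus-components analogy, and the parallel role of the Born rule, can be put in miniature like this (all numbers are made up for illustration):

```python
import math

# Speed computed from three velocity components versus measured directly:
# either route yields the same single scalar.
vx, vy, vz = 3.0, 4.0, 12.0
speed_from_components = math.sqrt(vx**2 + vy**2 + vz**2)
print(speed_from_components)  # 13.0

# The Born rule plays the same role for a spinor: one scalar probability is
# computed from complex components that are never observed individually.
# (These amplitudes are hypothetical, chosen only to sum to probability 1.)
a_up, a_down = (1 + 1j) / 2, (1 - 1j) / 2
print(abs(a_up) ** 2)    # ~0.5 -- probability of "up"
print(abs(a_down) ** 2)  # ~0.5 -- probability of "down"
```

In both cases the multi-component description is sufficient to produce the observed scalar, but the scalar alone is all that is ever measured.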


6 hours ago, Rob McEachern said:

Indeed it may be, if you wish to actually understand what may be happening.

Yes. Simply because we have chosen to describe a thing as having three components does not mean that it actually has three components. To be specific, describing the thing via three components may be sufficient to perfectly match the observations, but it may not be necessary.

For example, you could determine an object's speed by specifying the three components of its velocity vector and then using them to compute the speed. Or you could just measure the speed - a single scalar - and skip the entire three-component description. In effect, this is what the Born rule accomplishes in quantum theory - computing a single scalar (a probability) from the vector (or spinor) components, which were never actually necessary, but which are in fact sufficient.

The problem comes if you ever try to make a one-to-one correlation between the assumed components and the actual attributes of the entity being described by those components (as Bell's theorem attempts to do). Any entity that manifests less than three bits of observable information will never exhibit the three unique bits of information that would be required to form a unique one-to-one correlation with the three components in the description - resulting in rather weird correlations, if you make the unfortunate decision to interpret them as being obtained from measurements of an entity that actually does exhibit three measurable and independent components.

 

A very interesting point of view.

Thank you +1


12 hours ago, Rob McEachern said:

Indeed it may be, if you wish to actually understand what may be happening.

Yes. Simply because we have chosen to describe a thing as having three components does not mean that it actually has three components. To be specific, describing the thing via three components may be sufficient to perfectly match the observations, but it may not be necessary. For example, you could determine an object's speed, by specifying the three components of its velocity vector and then using them to compute the speed. Or, you could just measure the speed - a single scalar component - and skip the entire three component description. 

How do you describe the direction? Velocity is a vector.


4 minutes ago, swansont said:

How do you describe the direction?

With a single bit - up or down - per observation. Those bit values will exhibit strange correlations if you attempt to determine another value of that single bit using an apparatus oriented in anything other than the only direction that is actually guaranteed to yield the correct bit value in the presence of noise. This is what phase-encoded one-time pads are all about.

 

6 hours ago, studiot said:

A very interesting point of view.

It gets even more interesting when you realize that the mathematical description employs Fourier transforms to describe wave functions, and those exact same equations (when the Born rule is employed) are mathematically identical to the description of a histogramming process - which directly measures the probability, with no need for phase components, much like measuring speed versus measuring velocity components. In other words, the histograms simply integrate the arrival of quantized energy.

As long as each bin in the histogram only responds to the same energy per arrival (which may differ from bin to bin), the ratio of total received energy divided by energy per quantum enables you to infer the number of arrivals, and thus the relative probability, independently of whether the quanta arrive as waves, particles, or wave-particle dualities. That is why it only works with equi-quanta experiments - monochromatic light versus white light, in the classical case. In the white light case, the histograms correctly measure the total energy, but the inference of the number of arriving particles is incorrect, because there is not a single correct value for the energy arriving per quantum.
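The counting argument above can be sketched with made-up numbers: inferring arrival counts from integrated energy works only when every quantum landing in a bin carries the same energy.

```python
# Equi-quanta ("monochromatic") case: every arrival carries the same energy,
# so total energy divided by energy-per-quantum recovers the arrival count.
E_quantum = 2.0          # energy per quantum in this bin (assumed)
arrivals = [2.0] * 7     # seven identical quanta land in the bin
total = sum(arrivals)
print(total / E_quantum)  # 7.0 -- the inferred count is correct

# "White light" case: the bin still integrates total energy correctly, but
# no single energy-per-quantum value recovers the true count of 3 arrivals.
mixed = [1.0, 2.0, 4.0]
print(sum(mixed) / 2.0)   # 3.5 -- wrong count, whatever single divisor you pick
```

The histogram itself behaves identically in both cases; only the inference from energy to particle number breaks down when the quanta are not identical.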


1 hour ago, Rob McEachern said:

With a single bit - up or down - per observation. Those bit values will exhibit strange correlations if you attempt to determine another value of that single bit using an apparatus oriented in anything other than the only direction that is actually guaranteed to yield the correct bit value in the presence of noise. This is what phase-encoded one-time pads are all about.

So it is not generally true, but rather true in a specifically contrived experiment.


21 minutes ago, swansont said:

So it is not generally true, but rather true in a specifically contrived experiment.

Exactly. It is generally true in the macroscopic realm that supposedly independent components actually are independent, precisely because all the "naturally occurring" entities in that realm exhibit multiple bits of information. But "unnatural" macroscopic entities can be created with this single-bit-of-information property, analogous to the ability to create unnaturally occurring transuranic elements. If you do so and measure their properties, they exhibit "weird" behaviors, just like quantum entities - because that is what they are, even though they are macroscopic - they have a severely limited (AKA quantized) information content.

However, in the microscopic/quantum world, single-bit entities are common (that is why you only ever see spin-up or spin-down, etc.), but their behavior is unfamiliar. So most quantum experiments, unlike classical ones, end up being examples of a "specifically contrived experiment," as you have noted - experiments on objects with a severely limited information content. It is the small information content, not the small physical size, that drives the differences between classical and quantum behaviors.

Bell et al. had the great misfortune of stumbling upon a theorem that only applies to those specifically contrived experiments that they never actually perform - it only applies to classical objects, not quantum ones, because the quantum experiments, done on photons and electrons etc., are all performed on objects that behave as if they fail to satisfy Bell's most fundamental and usually unstated assumption: that the objects manifest enough bits of information to enable at least one unique, measured bit to be assigned to each member of each pair of entangled measurements. It is a logical impossibility to assign a unique bit to each member of a pair when you only ever have one bit to begin with. Bell's theorem assumes that you can. That is the problem.


3 hours ago, Rob McEachern said:

It gets even more interesting when you realize that the mathematical description employs Fourier transforms to describe wave functions, and those exact same equations (when the Born rule is employed) are mathematically identical to the description of a histogramming process - which directly measures the probability, with no need for phase components, much like measuring speed versus measuring velocity components. In other words, the histograms simply integrate the arrival of quantized energy. As long as each bin in the histogram only responds to the same energy per arrival (which may differ from bin to bin), the ratio of total received energy divided by energy per quantum enables you to infer the number of arrivals, and thus the relative probability, independently of whether the quanta arrive as waves, particles, or wave-particle dualities. That is why it only works with equi-quanta experiments - monochromatic light versus white light, in the classical case. In the white light case, the histograms correctly measure the total energy, but the inference of the number of arriving particles is incorrect, because there is not a single correct value for the energy arriving per quantum.

I do believe we are beginning to stray further and further from the topic, although, as I said, I found your view, centered on information theory and modern concrete mathematics, interesting.

I see that you have also replied in my uncertainty thread over the holidays.

 

Both subjects, and one other where I promised some views, pertain to the difference between maths and physics.

So I propose to collect them all together in a new thread about just that, and leave Kip to his question about spin, though I am not sure as to his exact question.

