Leaderboard

Popular Content

Showing content with the highest reputation on 01/03/23 in all areas

  1. I appreciated the nod to a realist interpretation. And the dig at string theory. I think my earlier comment on cowering from probabilistic theories was misread by some - @Mordred was one - as my not seeing the uses of probability in physics. Well of course I do. What I should have said was that I'm leery of acausal (aka nondeterministic) theories, which seem to skirt thorny ontological problems and just tell you, like a stern schoolmarm, that it's all stochastic. Here's a lump of twenty trillion thorium-234 atoms. Some of them will soon beta decay to protactinium-234. Some of them won't. Let's give each thorium atom in the lump an address. And a name. At 221-B, there is Sherlock. At 10, there is Boris. Either could, randomly, decay. As it happened, Boris decayed first, before Sherlock. At a macro scale, such an event seems to have a cause. We have an ontology of macro-scale Borises, and can understand why they decay so easily. But the thorium atoms all seem identical. All intuitions seem wrong. Ontology can help. Maybe.
    2 points
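The point being made here can be sketched as a toy Monte Carlo: give every atom the same per-step decay probability, and which one decays first is still irreducibly random. The names and addresses come from the post above; the ~24.1-day half-life of Th-234 is a standard tabulated value, and the one-day time step and seed are illustrative assumptions.

```python
import random

random.seed(42)

# Per-day decay probability from the half-life: p = 1 - 2**(-1/T_half).
HALF_LIFE_DAYS = 24.1
p = 1 - 2 ** (-1 / HALF_LIFE_DAYS)

# Two "identical" atoms, distinguished only by our labels.
atoms = {"221-B": "Sherlock", "10": "Boris"}

day = 0
survivors = set(atoms)
order = []  # (name, day of decay), in the order they decayed
while survivors:
    day += 1
    for addr in sorted(survivors):  # iterate over a copy so removal is safe
        if random.random() < p:
            survivors.remove(addr)
            order.append((atoms[addr], day))

print(order)  # order and timing vary with the seed - that's the point
```

Rerunning with a different seed changes who decays first, even though nothing in the setup distinguishes the atoms.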
  2. Democracy of the inclusive parliamentary kind, not the Greek model, requires that every eligible citizen have equal opportunity to participate in the selection of its governing bodies. The citizen can vote for a person to represent his or her interests, or a policy platform proposed by a party, or some combination of both; furthermore, every eligible citizen has the right to stand for election to office. No other kind of equality is entailed or implied. Which, of course, means that clean democratic processes produce governments that move gradually toward equality and equity for all citizens, simply because the majority, not the privileged elite, decides for the policies that favour its interests. If an elite wants to retain its privilege, it must corrupt the democratic process.
    1 point
  3. Genady has a point. Gravity causes planets, etc. to form because regular matter interacts electromagnetically. Collisions, friction, etc. are a result of this electromagnetic interaction. A secondary result of this interaction is the production of electromagnetic radiation, which comes at the expense of kinetic energy from the matter involved. Two particles collide, emit some EMR and separate, but at a slower speed than they met at. This happens enough and a clump of matter forms. DM does not interact electromagnetically: not only does that mean it doesn't "collide" like regular matter, but it doesn't have the same mechanism to shed KE. A DM particle can approach a planet, pass right through it, and fly off with the same speed it started with. There is nothing to hold it in the vicinity. Having said that, there are ways for DM to clump. Gravitational interactions can cause such distributions. But compared to electromagnetic interactions, they are very, very, very weak, and produce results much more slowly. The Universe just hasn't been around long enough for small compact collections of DM to form, just much, much larger and diffuse collections like galactic halos.
    1 point
  4. I would say equality is an outcome of the democratic process, rather than a process itself. Perhaps equity is the better word here. Equity is more of a process, and has elements of fairness and justice to it that equality may not focus on. It's a concept that understands that people and their circumstances are too different for equality to work across the board. Instead, equitable solutions are sought to minimize any contradictions in the system.
    1 point
  5. This is not exactly what I meant, although it does partially overlap with what I meant. I didn't mean the adjective "topological" as applied to the background manifold, but to certain classes of solutions of differential equations on those manifolds. Thus, you can have many field theories defined on a topological manifold. Among all these theories, only a very restricted class are topological; most others are not. Topological theories have a Lagrangian not involving the metric. One interesting feature of these theories is that they are useful tools to study topological invariants of the manifold, like Wilson loops, probably Betti numbers too, and the like. They are invariant under diffeomorphisms, and once you incorporate the constraints --see below-- with the method of Lagrange multipliers, the constrained Hamiltonian becomes identically zero on the constraint surface. I say all this just to guarantee that we're talking about the same thing. But from the strictly dynamical point of view, topological fields are very constrained in the way they can evolve. In fact, they are maximally constrained. They --the fields-- have no local degrees of freedom, which means they do not propagate. The number of constraints exactly equals the number of degrees of freedom. Let me be as clear as humanly possible. As a warm-up, taking a 1st-order-in-time point-particle theory as a particularly simple example, what I mean by having #(DoF) = #(constraints) is the following. The dynamical variables are \[ q_{1}\left(t\right),\cdots,q_{n}\left(t\right) \] So a set of initial conditions \( q_{1}\left(0\right),\cdots,q_{n}\left(0\right)=q_{10},\cdots,q_{n0} \) completely determines the trajectory in the configuration space. 
Equivalently, if we're extremely lucky --the system is integrable--, we may manage to find a set of n integrals of motion: \[ \mathcal{J}_{1}\left(q_{1}\left(t\right),\cdots q_{n}\left(t\right)\right)=0 \] \[ \vdots \] \[ \mathcal{J}_{n}\left(q_{1}\left(t\right),\cdots q_{n}\left(t\right)\right)=0 \] This is (modulo a condition of non-vanishing Jacobian) equivalent to fixing the previous n initial conditions. Integrals of motion for integrable systems depend one-to-one on the initial conditions. Now, what are constraints? Constraints are both mathematically and physically very similar, but with a very important nuance. Assume n constraints: \[ F_{1}\left(q_{1}\left(t\right),\cdots q_{n}\left(t\right)\right)=0 \] \[ \vdots \] \[ F_{n}\left(q_{1}\left(t\right),\cdots q_{n}\left(t\right)\right)=0 \] But these relations hold for every set of initial conditions. What does this imply mathematically? Nothing other than that the system cannot move at all. It cannot evolve in any meaningful way. It's frozen dynamically. How much? It's reduced to just a point in configuration space. Now, what happens to a field theory under analogous strictures? We must now promote the \( q_{i}\left(t\right) \) to some \( \varphi_{a}\left(\boldsymbol{x},t\right) \), where \( a \) goes from 1 to n. If we impose n initial conditions, that means specifying the values of the \( \varphi \)'s, \[ \left.\varphi_{1}\left(\boldsymbol{x},\tau\right)\right|_{\varSigma^{d-1}}=f_{1}\left(\boldsymbol{x}\right),\cdots,\left.\varphi_{n}\left(\boldsymbol{x},\tau\right)\right|_{\varSigma^{d-1}}=f_{n}\left(\boldsymbol{x}\right) \] where \( \tau \) is a curvilinear coordinate that singles out the sub-manifold \( \varSigma^{d-1} \) within the space-like foliation that defines the Cauchy problem. 
If, instead, we have a set of n constraints which, like any constraints worth their salt, do not depend on the initial conditions but are the same for all possible initial conditions, the system will be extremely limited in its evolution. But it is no longer true that the set of states compatible with this situation "shrinks" to a point in configuration space. It does freeze, but due to the presence of the space variables \( \boldsymbol{x} \) it does so --it must-- to a fixed function sitting on the topological space. Or perhaps to a finite or countable set of such "frozen" functions. It is in that sense that I was talking about a quasi-rigidity. Known examples of constraints in field theory (valid for all sets of initial conditions) are transversality conditions and Gauss-law or Lorenz gauge-fixing constraints. Photons are known to "inhabit" a space of configurations with 4 degrees of freedom when you give them mass by assumption. If you impose masslessness and gauge fixing, they become more and more restricted in their evolution. If you imposed further constraints, there would be no dynamical situation that implies propagation. They would be "frozen." But not to a point. Instead they would be frozen to a pretty restrictive class of what I've tried to refer to by the term "quasi-rigidity." The previous discussion generalises trivially to a 2nd-order-in-time system by substituting 1,...,n with 1,...,2n, and the \( \varphi \)'s with those plus their canonical momenta. Now, I know these musings are not totally out of whack, because I've posed the question (the one about quasi-rigidity) in appropriate places elsewhere, and I know for a fact that knowledgeable people consider it at least a totally-non-silly conjecture. Or, if you want, a question worth answering. But what would come next --the relation to the interpretation of QM-- does have speculative elements on my part, and would probably require a thread of its own.
    1 point
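The distinction drawn above between integrals of motion and constraints can be made concrete with a minimal worked instance (a toy example of my own, not from the post): one degree of freedom, first-order dynamics, one constraint, so #(DoF) = #(constraints).

```latex
% One DoF with a first-order flow and one constraint F that must hold
% for ALL initial conditions (q_star is an assumed constant):
\[
\dot{q} = v(q), \qquad F\left(q\left(t\right)\right) = q\left(t\right) - q_{\star} = 0 .
\]
% Differentiating the constraint along the flow,
\[
0 = \frac{d}{dt}F\left(q\left(t\right)\right) = \dot{q} = v\left(q_{\star}\right),
\]
% so the only allowed "trajectory" is the single point q = q_star:
% the system is frozen in configuration space.
% Field-theoretic analogue: the constraint
\[
F\left[\varphi\right] = \varphi\left(\boldsymbol{x},t\right) - f\left(\boldsymbol{x}\right) = 0
\]
% forces \partial_t \varphi = 0, freezing the system not to a point but to
% the fixed function f(x) sitting on the spatial manifold.
```

Contrast this with an integral of motion \( \mathcal{J}(q(t)) = 0 \), which holds only for the particular initial condition that fixes it; the constraint holds for every initial condition, and that is what empties the dynamics.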
  6. How do you explain Brownian motion? You need to have some idea of what water is made of. How do you predict the life cycles of stars? You need to have some idea of what's inside them. How do you develop ways to travel faster than the speed of light? You need to have some idea of what matter and the vacuum are made of. Ontology is necessary for fundamental research. Existing theory is fine for developing applications, but developing new fundamental theories requires some kind of intuition about what those theories will describe. How can you unify gravity and quantum theory without having some idea of what they describe? Without some intuition about the nature of the underlying reality, all you can do is flail around wildly with ten-dimensional fantasies that don't describe anything. EDIT: Another good example is troubleshooting. If you want to fix a clock or an engine, or figure out what's making a person sick, you have to have some idea of what's inside the clock or the engine or the person. You can't use phenomenological models of how the things normally behave when their behavior is abnormal. BTW, I don't get that Feynman reference.
    1 point
  7. I've studied a lot of the literature on Feynman integrals and have usually found it lacking, or simply not describing the steps to solving them in great detail. I recently came across a reference that I am thoroughly enjoying for the scope and detail with which it treats the integrals across a wide range of related theories (warning: extremely math-intense). If anyone wants a good solid reference, I highly recommend this article. Feynman Integrals by Stefan Weinzierl https://arxiv.org/pdf/2201.03593.pdf
    1 point
  8. I think this will just add confusion for our questioner, quite honestly. We have not been talking about the double slit experiment but about light reaching us from stars. All the stuff about the principle of least time etc. notwithstanding, nothing about the current QM model suggests that light quanta do not travel, for all practical purposes, in a specific direction, nor that their associated waves do not have a direction of propagation, even if it is only, strictly, a predominant one. The idea our questioner has, that a single photon has a wave that spreads out uniformly in all directions, is not correct and we need to make that clear, I think.
    1 point
  9. According to my (limited) understanding of Rovelli's relational interpretation, the wave function applicable from the cat's perspective is different from the wave function applicable from our perspective, so long as the box remains closed. There is not necessarily a single, absolute wave function describing a quantum system: it depends on the informational state to which it relates. The cat, being inside the box, is in a different informational state from those outside, and so a different wave function applies from its perspective.
    1 point
  10. I don't think it works quite like that. Firstly, for an ideal liquid mixture (a methanol-ethanol mix is close to ideal), the partial pressures of each component at any given temperature follow the simplest form of Raoult's law (at pressures below 10 x atmospheric, at least): partial pressure of i (Pi) = partial pressure of pure i (Pi*) x mole fraction of i (xi). So if xi is small, the partial pressure is small even at the boiling point of the pure substance (in the case of the most volatile component). If you boil a methanol-ethanol mix, yes, there will be a significantly higher mole fraction of methanol in the vapour phase and xi will fall a bit, but for a single-stage operation, the idea that what's left behind has a safe methanol level has made many, many people "mad, blind and dead - or was it blind, mad and dead" as my school chemistry teacher put it. When you add water to the mix, things get a little more complicated. For both the methanol-water and ethanol-water systems, the alcohols are more attracted to themselves than to the water molecules. So these are non-ideal mixes and follow a modified version of Raoult's law that includes an activity coefficient ai: Pi = xi ai Pi*. For these 'positive deviation' mixtures the activity coefficient is >1.0 and offsets the effect of low mole fraction by increasing the alcohol's volatility. ai for methanol in its aqueous mix is typically around 2 but rises steeply at low concentrations to about 3.5. So a 50-50 methanol-water mix (xi = 0.5) will actually boil close to the boiling point of pure methanol. ai for ethanol in its aqueous mix is actually a bit higher, ~4 at low concentrations, which pushes its volatility even closer to that of methanol, making the separation even more challenging. Now obviously I have no practical experience of distilling strong spirits for personal consumption and strongly advise against anyone even considering such a reckless practice. 
But if, say, I were in urgent need of quality surgical spirit for medical purposes, I think I'd consider starting off the separation with a very slow simmer, well below boiling point, and letting the vapour-rich air currents rise by thermal convection into my condenser.
    1 point
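The modified Raoult's law used above can be sketched numerically. A minimal Python sketch, assuming an illustrative pure-methanol vapour pressure (~1 atm, since methanol boils near 64.7 C) and the activity coefficient of ~2 quoted in the post; none of these numbers should be read as measured data.

```python
def partial_pressure(x, a, p_pure):
    """Modified Raoult's law: P_i = x_i * a_i * P_i* (same units as p_pure)."""
    return x * a * p_pure

# Assumed pure-component vapour pressure near 65 C, in kPa (illustrative):
P_METHANOL_PURE = 101.3  # methanol boils at ~64.7 C, so ~1 atm here

# Ideal case (a = 1): a small mole fraction gives a small partial pressure.
print(partial_pressure(0.1, 1.0, P_METHANOL_PURE))  # ~10 kPa

# 50-50 methanol-water with the post's activity coefficient of ~2:
# x * a = 1.0, so the partial pressure equals pure methanol's -
# which is why the mix boils close to pure methanol's boiling point.
print(partial_pressure(0.5, 2.0, P_METHANOL_PURE))  # 101.3
```

The second result is the post's point in one line: the activity coefficient cancels the dilution, so the water does far less to suppress methanol's volatility than naive Raoult's law would suggest.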
  11. This really is a difficult and complicated subject to untangle. Not least because Michael McMahon has made some quite perceptive comments as well as posting that flawed diagram. Nor do I see this as belonging in the speculations section. As a straightforward question about Newtonian physics, his question has a straightforward answer: yes indeed, but the fictitious force required is the radial centrifugal force, not the tangential Euler force. This accounts very well for the easily measurable fact that observed gravity is apparently weaker at the equator than it is at the pole. The maths of this used to be on the first-year physics course at London University; I can post it if you wish. However, you have entitled this thread Gravity Mysteries and even offered some tantalising comments which show deeper perception and understanding. When forces are first introduced in school physics, they are defined along the lines of 'a push or a pull', without being specific about where or how that push/pull is generated. This is the level your diagram is pitched at, but unfortunately it also erroneously shows the normal force displaced from the 'gravity force', forming a couple that should not be present. But the diagram does hide some deeper stuff, such as the question: how does adding the box onto the table develop into forces at a distance from the box, pressing on the floor under the table legs? Treating this question requires revisiting the basic force definition and significantly expanding it. You also mention contact forces, another part of the basic treatment; that description needs to be expanded to include the concept of 'body forces' for any sensible discussion.
    1 point
  12. So the position is untenable. Then there must be a Positron Field and an Electron Field. The Electron Field must be able to have a negative amplitude without being an excitation of a positron.
    -3 points