Everything posted by Widdekind

  1. Gravitational Potential Energy (GPE) [math]\sim GM^2/R[/math]; equivalent rest-mass energy [math]= mc^2[/math]. Setting [math]GM^2/R = mc^2[/math] gives [math]m/M \sim R_S/R[/math] (with the uniform-sphere factor of 3/5, actually [math]\tfrac{3}{10} \times R_S/R[/math]). So, order-of-magnitude, the rest-mass equivalent, of the gravitational binding energy of some self-gravitating body, is a similar fraction of the total actual mass, as that body's Schwarzschild radius is, to its actual physical radius.
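As a sanity check, the ratio can be evaluated numerically; a minimal sketch in Python, with solar values assumed, and GPE taken as simply [math]GM^2/R[/math] with no structure factor:

```python
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M = 1.989e30     # kg, the Sun
R = 6.957e8      # m, the Sun

m_over_M = G * M / (R * c**2)   # rest-mass fraction equivalent to GPE ~ GM^2/R
R_s = 2 * G * M / c**2          # Schwarzschild radius
ratio = R_s / R

print(m_over_M, ratio)          # m/M is exactly half of R_s/R in this approximation
```

Since [math]R_S = 2GM/c^2[/math], the prefactor is exactly 1/2 for the crude GPE, and 3/10 with the uniform-sphere factor of 3/5.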
  2. Employing Mathematica to visualize the cumulative magnetic field of multiple magnetic dipoles, the region of high field strength near the magnets apparently extends slightly farther away when the magnets are arranged with alternating polarities. Alternating polarities apparently "re-capture field lines", keeping more field lines near the array of magnets, so strengthening the near field slightly. Paramagnets respond to external magnetic fields, with parallel-pointing magnetization, linearly proportional to the applied external field. So, the strength of the force, acting on the paramagnet, is [math]\propto \left( B \circ \nabla \right) B[/math]. Employing vector-calculus identities, and the Maxwell equations (in a current-free region), that winds up being [math]\propto \nabla \left( B \circ B \right)[/math], i.e. paramagnets respond to the gradient of the applied magnetic-field energy density.
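The Mathematica experiment can be sketched in Python with point dipoles. The geometry below (unit spacing, unit moments along z, and a probe point just above the middle of a four-magnet row) is an illustrative assumption, not the original notebook:

```python
import math

MU0 = 4 * math.pi * 1e-7

def dipole_B(m, r):
    """Field of a point dipole (moment m along z) at planar offset r = (x, z)."""
    # B = (mu0 / 4 pi) * (3 (m.rhat) rhat - m) / |r|^3, restricted to the x-z plane
    x, z = r
    d = math.hypot(x, z)
    rhat = (x / d, z / d)
    mdotr = m * rhat[1]                  # moment points along z
    Bx = MU0 / (4 * math.pi) * (3 * mdotr * rhat[0]) / d**3
    Bz = MU0 / (4 * math.pi) * (3 * mdotr * rhat[1] - m) / d**3
    return Bx, Bz

def array_B(moments, probe):
    """Superpose fields of dipoles sitting at x = 0, 1, 2, ... (unit spacing)."""
    Bx = Bz = 0.0
    for i, m in enumerate(moments):
        bx, bz = dipole_B(m, (probe[0] - i, probe[1]))
        Bx += bx
        Bz += bz
    return math.hypot(Bx, Bz)

probe = (1.5, 0.5)                       # just above the middle of the row
B_aligned     = array_B([1, 1, 1, 1], probe)
B_alternating = array_B([1, -1, 1, -1], probe)
print(B_aligned, B_alternating)
```

At this near-field probe point the alternating arrangement gives the larger field magnitude, consistent with the post's observation; farther away, the alternating array's field falls off faster.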
  3. By dimensional analysis, the (total) energy, in the magnetic field, generated by a mass of plasma, of characteristic size-scale L, carrying a characteristic current I, is [math]B \approx \frac{\mu_0 I}{L}[/math] [math]E \approx \frac{B^2}{\mu_0} L^3 \approx \mu_0 I^2 L[/math] Now, when two opposing magnetic fields are juxtaposed, the opposing parts vanish (e.g. x-direction), and the parallel parts remain (e.g. y-direction). So, the total field energy, after reconnection, is reduced by about half [math]\left( \langle B \rangle^2 + \langle B \rangle^2 \; \rightarrow \; \langle B \rangle^2 \right)[/math]. So, in order-of-magnitude approximation, [math]B \approx \sqrt{\frac{\mu_0 E}{L^3}}[/math] [math]I \approx \sqrt{\frac{E}{\mu_0 L}}[/math] The brightest Solar Flares release [math]\approx 6 \times 10^{25} \; J[/math] of energy, and span [math]\approx 10^5 \; km[/math]. So, [math]B \approx 100 \; Gauss[/math] [math]I \approx 1 \; TAmp[/math] The ratio of magnetic-field energy density, to particle thermal energy density, is [math]\frac{\frac{B^2}{\mu_0}}{n K_B T} \approx 10^{-6}[/math] if the density (in the photospheric "feet" of the prominence) is nearly [math]10^{24} \; m^{-3}[/math], and the Flare temperature is millions of K. So, nearly none of the thermal energy is organized, into coherent currents. (Out in the corona, above the sun's surface, the density is a trillion times lower, so magnetic energy densities may dominate, perhaps explaining why most corona plasma remains confined near the sun, most of the time.) If open field lines allow plasma particles to stream away from the sun's surface, out to space; then why wouldn't open field lines allow charged particles, to well up from the sun's deep interior, to its surface? The "feet" of solar prominences are millions of K, comparable to the sun's deep interior.
Perhaps when field lines open out, through the sun's surface, super-hot plasma from the deep interior can stream along those lines, straight up to the surface, and then out along the field lines looping through the prominence? And then, if those field lines open out to space, then the plasma can stream away to space, too? Inexpertly, the "feet" of solar prominences look like plasma from the sun's deep interior, welling up to the sun's surface: Speculating, there seems to be a "catch-22" with trying to employ EM fields, to contain fusing plasma, for hypothetical controlled fusion reactors. For, stars require the immense gravity, of enormous amounts of matter, to gravitationally contain their runaway fusing plasma reactions. (Evidently, stars' magnetic fields also do contain plasma particles, except where the field lines occasionally reconnect, and open up "coronal holes" in the field, releasing puffs of plasma particles.) (Stars contain plasma, with other plasma, "plasma confined plasma".) Now, EM forces are "40 orders of magnitude stronger than gravity". But, to exploit EM fields, requires first establishing them. And, to establish EM fields, requires fighting against those very EM forces. So, you'd have to go against EM forces first, to then exploit those EM forces to contain fusing plasma. Unless you could establish an EM field once-and-for-all, and then gradually recoup start-up costs, then perhaps 'tis impossible to break even? Conversely, pushing space gas into a pile, and letting gravity do the rest, generates a star, which releases enormous amounts of energy, with nearly no start-up costs. So, perhaps 'tis practically impossible, to be more energy efficient, than a natural stellar fusion reactor (possibly employing some "stellar engineering" techniques, to steward the star; but probably not even). If so, then even hypothetical advanced Aliens would still exploit stars, building "lots & lots of solar panels". 
And, inter-stellar space-travel might be impossible, without fusion power plants -- except on "battery power", i.e. energy, harvested from stars, could be converted into anti-matter, and then used as fuel on ships. But, if it is impossible to contain plasma at fusion temperatures; then would it not be (even more) impossible to contain plasma heated by antimatter reactions? What other mechanism, besides plasma, could "catch" the photons, from pair-annihilation reactions? Practically, photons do not react with EM fields, only other charged particles (which one wouldn't want to be parts of their own space-craft). So, perhaps the only practical form of inter-stellar propulsion would be anti-matter bombs, in "anti-matter pulse propulsion" pusher-plate space-craft? If only natural stars can, practically & economically, generate fusion energy; and if massive stars cannot be incorporated into maneuverable space-craft; then only energy converted from stars, into transportable "batteries" or "bombs", could be employed in space-craft. In swift summary, perhaps improving upon natural stars, as energy sources, is impractical (if not impossible).
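A quick numerical evaluation of the order-of-magnitude formulas above (the "feet" conditions of [math]T = 2 \times 10^6 \; K[/math] and [math]n = 10^{24} \; m^{-3}[/math] are assumptions):

```python
import math

MU0 = 4 * math.pi * 1e-7   # T m / A
KB  = 1.381e-23            # J / K

E = 6e25                   # J, energy of the brightest flares
L = 1e8                    # m, ~10^5 km size scale

B = math.sqrt(MU0 * E / L**3)        # characteristic field, tesla
I = math.sqrt(E / (MU0 * L))         # characteristic current, ampere

n, T = 1e24, 2e6                     # assumed photospheric "feet" conditions
ratio = (B**2 / MU0) / (n * KB * T)  # magnetic vs. thermal energy density

print(B * 1e4, "gauss")              # ~1e2 gauss
print(I / 1e12, "tera-amperes")      # ~1 TA
print(ratio)                         # ~1e-6
```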
  4. Using a crude single-burn-of-impulse approximation, you can calculate how much fusionable fuel is required, to accelerate the payload (plus the fuel for deceleration) from departure, and then to decelerate the payload at the destination. With enough fuel mass (relative to payload mass), you can (theoretically) accelerate to any speed; and, the faster you go, the less fuel for deceleration (relative to total fuel) is required -- fuel usage is asymmetric, more is needed to accelerate (when you're pushing the fuel-for-deceleration too), than to decelerate (when you're simply stopping the payload part). For fusion with [math]\epsilon = 0.007[/math], you need total-fuel-to-payload ratios of hundreds to thousands, to reach semi-luminal speeds. You made the analogy, of rain-drops to a truck, i.e. you said, that we pretend protons are raindrops, and smoke-sized dust grains are trucks. That is qualitatively accurate; quantitatively, as observed above, if we analogize protons to raindrops, as you yourself suggested, then microgram dust grains would analogize to hundreds of billions of tons worth of water, i.e. hundreds of cubic km, i.e. "raindrops & asteroids" would be a more quantitatively correct comparison.
  5. Please ponder a fusion-powered rocket, of initial mass M, which converts 0.007M into (kinetic) energy. Energies are low w.r.t. rest-mass energy, and (so) velocities are low w.r.t. light-speed. So, classical approximations are accurate (if not precise): [math]M c^2 = M' c^2 + \frac{1}{2} M' v^2[/math] [math]\frac{\Delta M}{M'} = \frac{1}{2} \beta^2[/math] [math]\epsilon \approx \frac{1}{2} \beta^2[/math] [math]\beta \approx \sqrt{2 \epsilon} \approx 0.12 [/math] i.e. [math]\approx \frac{c}{8}[/math]. So, an ideal fusion rocket can only accelerate to an eighth of light-speed. And, if you wanted to be able to decelerate at destination, then you would have to save half the energy, and halving the energy of acceleration from departure would reduce speed by [math]\sqrt{2}[/math] to about 0.08c. So, self-nuclear-fusion-propelled space-craft can only accelerate to about 8% of light-speed; externally accelerated "space slingshotted" space-craft could coast at 12% of light-speed, and still carry enough on-board fusionable energy to decelerate at destination. Anything out there traveling faster than 12% of light-speed was externally accelerated, i.e. a "space bullet" fired from some "space gun". Hypothetically, an externally accelerated "space-bullet-craft" loaded with anti-matter could be accelerated to ... 
and still have enough on-board energy to decelerate at destination: [math]E_0 = \gamma \left( M_{ship} + m_{fuel} \right) c^2[/math] [math]c P_0 = \gamma \beta \left( M_{ship} + m_{fuel} \right) c^2[/math] [math]\Delta E = \gamma m_{fuel} c^2 = c \Delta P = c P_0[/math] [math]\gamma m_{fuel} c^2 = \gamma \beta \left( M_{ship} + m_{fuel} \right) c^2[/math] [math]m_{fuel} = \beta \left( M_{ship} + m_{fuel} \right)[/math] [math]\frac{m_{fuel}}{M_{ship} + m_{fuel}} = \beta \le 1[/math] So, a hypothetical externally accelerated space-craft, loaded with externally-supplied anti-matter (& matter) fuel, could cruise at near-light-speed, and still decelerate at destination, depending upon the ratio of fuel-mass to total-ship-mass. The ship would decelerate, by projecting a high-powered laser-like blast, in the forward direction ("focused pair-instability SNe GRB blast"), which could be aimed at the target world, to annihilate any indigenous lifeforms.
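The two fusion speed limits, and the antimatter fuel fraction, can be checked numerically; a minimal sketch (the `fuel_mass` helper is hypothetical, just re-arranging the relation [math]m_{fuel} = \beta \left( M_{ship} + m_{fuel} \right)[/math] derived above):

```python
import math

eps = 0.007   # fraction of rest-mass liberated by H -> He fusion

# Classical speed limits from the post above: v = sqrt(2*eps)*c one-way,
# and sqrt(eps)*c if half the fuel energy is saved for deceleration.
beta_oneway    = math.sqrt(2 * eps)
beta_roundtrip = math.sqrt(eps)
print(beta_oneway, beta_roundtrip)   # ~0.12 and ~0.08

def fuel_mass(M_ship, beta):
    """Antimatter (plus matter) fuel needed to photon-brake from cruise speed beta,
    re-arranging m_fuel = beta * (M_ship + m_fuel)."""
    return beta * M_ship / (1 - beta)

print(fuel_mass(1.0, 0.9))           # stopping from 0.9c: 9x the ship's dry mass
```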
  6. i employed Mathematica, to numerically solve the ODE. (For a given world, and its [math]\alpha[/math], there exists some central density, for which the curve of the density drops to one, at the surface (x=1, y=1).) The equation works remarkably well, for the Moon, Mars, Earth, with a plausible "bulk bulk-modulus" comparable to concrete. The simple model accurately, if not precisely, reproduces the average & central densities, of those bodies. But, worlds cannot get much bigger than Earth, with this model. Larger worlds have lower [math]\alpha[/math], and worlds larger than ~7000 km cannot satisfy (x=1, y=1); the equation demands too much slope (to generate the pressures offsetting gravity), so the densities drop well below uncompressed (y=1), well before reaching the surface (x=1). Interpreted, rocky worlds don't get (much) bigger than earth; more mass merely becomes more density; the solutions all have earth-sized worlds, with super-dense cores. So, whilst a simple bulk-modulus model can account for moon-to-earth-sized worlds, "super earths" are either earth-sized, but super-dense; or else something else happens, e.g. high-pressure high-density matter states have higher bulk moduli.
  7. anyway, a nuclear-powered pusher-plate craft would, by assumption, have a pusher-plate capable of absorbing high-energy particle radiation. So, a pusher-plate craft could accelerate, then coast in an "end over" (stern first) orientation, using its pusher plate to "snow plow" the ISM. However, even to drive an aircraft-carrier-sized ship (100 Ktons) to Mars and back ([math]\Delta v \approx 10 \; km/s[/math]) would require circa 100 Gtons of nuclear warheads for fuel. i think that's far more than the entire global stockpile of nukes. So, all earth could spend trillions of dollars, to load a huge stockpile of fuel pellets aboard a craft, for one trip. (Moreover, warheads totaling 100 Gtons of yield would probably mass 100 Ktons or more? So, at least on the outbound leg, there'd be no room for cargo.)
  8. a 1-ton spacecraft is ~e30 protons. if each proton is a 1 g raindrop, that's ~e30 g = e27 kg, i.e. the mass of a world. correct? and so your analogy points to planets as the appropriate picture, for probes
  9. huh ? you made the analogy, of rain-drops to a truck, i.e. you said, that we pretend protons are raindrops, and smoke-sized dust grains are trucks. fine. However, how big, in this analogy, is my space-craft ? If a proton is a rain-drop; and a dust-grain a truck; then, would not a macroscopic spacecraft be, in that analogy, the size of a world, or a star ?? And, how much would a world care, whether it ran into a scattered ten tons of rain water, or a single mac-truck ? By implication, in that analogy, you & i are people, mid-way in size, between proton rain, and dust-grain trucks. So, evidently, we are large atoms, presumably in the forward facing "space umbrella". So, yes, the individual atoms, in the space umbrella, would react differently, to collisions w/ protons vs. dust. That would affect the nanoscopic details of the "space weathering" of the shield. In the meantime, macroscopically, momentum transfer, for large space-craft, would be the same, for the same amount of mass swept up. If our space-craft is the size of a world, then "rain-drops" would turn the surface into a fine powder; whereas "trucks" would scar & pit the surface, with visible AZ Barringer-sized craters. But from a macroscopic world-sized perspective, the momentum transfer would be the same. correct ? our sun's solar wind provides an "active defense" against incoming cosmic rays -- most are deflected away, all are decelerated to some degree, heading "up wind" towards the inner solar system. So, in analogy, perhaps an "active defense" would work well, for protecting craft, from the ISM ? what if you had a "head-light", shining forward, tuned to the ionization frequencies, of H, He, dust ? then, you could ensure, that all material directly in front of the craft was ionized, and so affectable by the ship's EM field. For example, you could imagine some sort of "space ferry", whose bow & stern were identical, and which was hollow, although to accommodate the flow-thru of ISM, not cars. 
The cylindrically-shaped "space ferry" would have "head-lights" at the stern, for acceleration; and at the bow, for ionization. At destination, the bow head-light would increase in luminosity (and the stern would power down), to decelerate the ship. Ionized space material could be swept up, into a space ramjet. Note, traveling to another star system is not like flying a plane to an airport -- the "airport" is itself moving. 'Tis more like a football quarterback throwing a pass (the ship) by leading the receiver (destination star system). The pass is thrown, to rendezvous with the receiver, "then there". But, before the ball connects with the receiver "then there", the receiver "here now" is running after the throw. So, if our football (the ship) had a headlight on it, none of the laser light would be seen by the receiver (until the ball was in their arms). So a spacecraft having a headlight would not be visible, to the destination star-system, until very near the time of arrival there. (you would have to be positioned, out in deep space, at the point of future rendezvous, at which position, you would see the inbound ship from one direction, and the oncoming star-system & planet, from some other direction, converging towards you, like standing on a football field, where the pass is thrown to, and where the receiver is running towards -- you'd see the football sailing straight at you from one side, and the receiver running towards you from some other direction.)
  10. Dust grains are (essentially) "clumps of protons", one collision => many protons hit the craft all at once. But, the momentum imparted is still the same, (classically) [math]M \times v = N \times m_p \times v[/math]. The density of dust affects, and is already factored into, the overall density of the ISM. Yes, dust grains would create larger (micro) craters in a craft, like a single solid slug vs. buckshot. But the ram-pressure still obeys [math]P = \rho v^2[/math]. In a given interval of time, whether you sweep up N protons, all in one "solid slug" (dust grain), or as "buck shot" (ISM protons), the momentum transferred is the same. (Unless you're considering the minor loss in mass, from fusion, for multi-nucleon nuclei, <1%.) correct? in analogy, cruising through the ISM, is like flying through dusty air. Most of what encounters the craft is "air" (ISM). And also, the dust matters too. (i understand, that "smoke" is a more correct analogy than "dust", typical ISM "dust" grains are more similar in size to "smoke" particles. Cruising through the ISM is like flying through smokey air. Cruising at even vaguely relativistic speeds, is like flying at nearly infinite mach-number: c/30 => Mach # ~ 30K. So, you need an "air-craft" capable of flying through smokey air, at mach tens-of-thousands. In earth analogy, that'd be about 10K km per second, i.e. from one side to the other of earth, in one second. Or, all the way around earth, up in the dusty stratosphere, in a few seconds.) perhaps you could accelerate an asteroid, and cruise behind it, in its evacuated wake? It would sweep up the dust & gas, creating a conduit, through which to travel. And you would not need to decelerate its mass, at the other end.
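The momentum-equivalence argument, and the ram-pressure formula, in a few lines of Python (the microgram grain mass is illustrative; real ISM grains are far smaller):

```python
m_p = 1.67e-27          # kg, proton mass

# Same swept-up mass, same speed => same momentum, "solid slug" or "buck shot":
M_grain = 1e-9          # kg, illustrative microgram grain
N = M_grain / m_p       # equivalent number of protons
v = 3e7                 # m/s, ~0.1c

p_grain   = M_grain * v
p_protons = N * m_p * v
print(p_grain, p_protons)    # identical, by construction

# Ram pressure P = rho * v^2, for the mean ISM density:
rho_ism = 0.3e6 * m_p        # 0.3 protons per cm^3, in kg/m^3
P_ram = rho_ism * v**2
print(P_ram)                 # ~5e-7 Pa at 0.1c
```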
  11. First, oops on "alpha": Second, if you plot that function, you get (relative) density, as a function of the (corrected) alpha. That alpha is a function, of [math]R^2[/math], and 1/K. So, you can then estimate the planet's "bulk bulk-modulus", [math]K \sim R^2 / \alpha[/math]. E.g. for Earth, [math]\alpha \approx 0.25[/math], R ~ 6400 km, K ~ 450 GPa; for Mars, [math]\alpha \approx 0.18[/math] (= 3/4 earth value), and R ~ 3200 km (= 1/2 earth value), K ~ 1/3 earth value ~ 150 GPa. So, qualitatively, perhaps Mars has a lot less material, compacted into high-density, high-bulk-modulus phases? Is that not plausible / probable? Earth's estimated bulk bulk-modulus resembles diamond, which does in fact form at depth, down in the mantle, at high pressure, as a high-pressure phase (of carbon). Mars' estimated bulk bulk-modulus resembles regular rock, as known, from earth's crust, at low pressure.
  12. First, convection occurs in stars, and dredges dense material, from the deep interior, out to the rarified surface. That is the opposite of what Enthalpy stated, "convection is densest matter on top" (paraphrase). Second, a one-zone model: [math] \frac{dP}{dr} = - g(r) \rho[/math] [math] \frac{K}{R} \frac{\Delta \rho}{\rho_0} \approx - \frac{G M}{R^2} \rho[/math] [math]\frac{\rho}{\rho_0} \approx 1 + \left( \frac{4 \pi G}{3 K} \left( \rho_0 R \right)^2 \right) \left( \frac{\rho}{\rho_0} \right)^2[/math] has the solution, via the quadratic formula: [math] \frac{\rho}{\rho_0} \approx \frac{ 1 - \sqrt{1 - 4 \alpha} }{2 \alpha} [/math] [math] \boxed{ \alpha \equiv \frac{4 \pi G \left(\rho_0 \; R \right)^2}{3 K} }[/math] For earth, the relative density is nearly 2. And, if you plot that function, then you observe, that high relative densities only occur, near [math]1 - 4 \alpha \approx 0[/math]. So, if [math]\alpha_{\oplus} \approx 1/4[/math], then [math]K_{\oplus} \approx 500 \; GPa[/math], comparable to the bulk modulus of diamond. Ergo, perhaps the high-pressure phases, of material, in earth's mantle & core, have collapsed into super-dense configurations, with diamond-high bulk moduli.
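The one-zone solution can be tabulated directly; a small sketch, taking the smaller root of [math]y = 1 + \alpha y^2[/math] for the relative density [math]y = \rho/\rho_0[/math]:

```python
import math

def compression(alpha):
    """Smaller root of y = 1 + alpha*y^2, i.e. the one-zone relative density rho/rho0."""
    assert alpha <= 0.25, "1 - 4*alpha < 0: no real (finite) solution"
    return (1 - math.sqrt(1 - 4 * alpha)) / (2 * alpha)

for a in (0.05, 0.10, 0.20, 0.25):
    print(a, compression(a))
# compression stays modest until 1 - 4*alpha -> 0, where rho/rho0 reaches 2
```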
  13. As crude estimate, the ram-pressure against a light-speed space-craft would be of order [math]P \approx \rho c^2 \approx 10^{-4} \; Pa \approx 10^{-9} \; atm[/math], which would perhaps require millions to billions of years, to slow a massive craft. i guess people know the formula [math]c \approx g \times \left( 1 \; yr \right)[/math], i.e. accelerating at 1 G for 1 yr. results in light-speed (semi-classically). (Perhaps the more correct formula is [math]P = \gamma \rho c^2 \beta^2[/math]? Relativistically, the swept-up momentum is not [math]\rho v[/math], but [math]\gamma \rho v[/math].) Most of the ISM is ionized, so ring-shaped craft, with an axial, solenoidal magnetic field, might funnel most particles safely through the center of the ring -- which also could be spun, for artificial gravity, essentially an annular space-station habitat... accelerated to high speed. Or, the ISM particles could be consumed, for fusion fuel, hypothetically, as an interstellar ramjet. Still, cosmic rays, and "induced cosmic rays", would perhaps pose problems for people. Inter-stellar space probes could only go to one star system. Whereas, spending the same amount of money, for super-sized space telescopes, near earth, would enable detailed surveys, of every star system in sight -- without risking burning up somewhere out in the ISM in interstellar space.
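A quick check of the two rules of thumb above (1 g for 1 year reaches ~c, and [math]P \approx \rho c^2[/math] for the mean ISM density):

```python
g   = 9.81        # m/s^2
yr  = 3.156e7     # s
c   = 2.998e8     # m/s
m_p = 1.67e-27    # kg

one_g_year = g * yr / c
print(one_g_year)    # ~1: a year at 1 g reaches ~light-speed, semi-classically

rho = 0.3e6 * m_p    # mean ISM density, 0.3 protons per cm^3
P = rho * c**2       # ram pressure on a light-speed craft
print(P)             # a few times 1e-5 Pa, i.e. of order 1e-4 Pa
```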
  14. The average density of the ISM is approximately 0.3 particles per cubic cm. For a (hypothetical) starship traveling near [math]c = 3 \times 10^{10} \; cm/s[/math], every square cm of frontal surface would encounter approximately 10 billion particles per second. Each of those particles would basically become a Cosmic Ray, with an energy approximately equal to its rest-mass energy (assuming a gamma ~ 2), i.e. ~GeV. So, every front-facing square cm, would sweep up billions of GeV cosmic-ray-equivalents, per second. At earth's surface, about one GeV cosmic-ray collides per second (per square cm). So, relativistic starships would encounter billions of times more cosmic-ray-equivalents per second, than rocks space-weathering on the surface of earth or its moon. Whatever material formed the frontal shield would be pulverized, into lunar-regolith-like powder. After one year of space travel, the frontal shield would have suffered the equivalent of billions of world-years of space weathering, i.e. would become pulverized into powder, as seen on earth's moon (unless i'm mis-understanding why the moon is covered in powder). Individual collisions would not be the problem -- billions of collisions per second (per square cm) would accumulate damage, to the craft. http://en.wikipedia.org/wiki/Interstellar_medium http://en.wikipedia.org/wiki/Cosmic_ray Note, the ratio of speed-to-gamma-factor is maximum near 0.7c, where gamma ~ 1.4. So, you could reduce damage, by slowing speed, to (say) half to three-quarters c. At ~0.4c, gamma ~ 1.1, reducing (relative) particle energies to ~100 MeV. Perhaps half-light-speed is the maximum effective space speed limit.
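The flux, and the per-particle energies at the sample speeds above, can be verified in a few lines:

```python
import math

n_ism = 0.3            # particles per cm^3 (mean ISM)
c_cm  = 3e10           # cm/s

flux = n_ism * c_cm    # particles per cm^2 per second at ~light-speed
print(flux)            # ~1e10 hits per frontal cm^2 per second

m_p_GeV = 0.938        # proton rest energy
kin = []
for beta in (0.4, 0.7, 0.99):
    gamma = 1 / math.sqrt(1 - beta**2)
    kin.append((gamma - 1) * m_p_GeV)   # GeV per swept-up proton, in the craft frame
print(kin)             # ~0.1 GeV at 0.4c, rising steeply toward c
```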
  15. According to the definition of Bulk Modulus (K): [math] \frac{\rho - \rho_0}{\rho_0} = \frac{P}{K}[/math] For crude estimate, assuming a constant K, and static equilibrium: [math] \frac{dP}{dr} = - g(r) \rho[/math] [math] \frac{K}{\rho_0} \frac{d\rho}{dr} = - \frac{G M_{<r}(r)}{r^2} \rho(r)[/math] But [math] M_{<r}(r) \equiv \int_0^r 4 \pi r'^2 \rho(r') dr'[/math] So, re-arranging terms, and taking the derivative, of the integral, to get the integrand: [math] \frac{K}{\rho_0} \frac{d}{dr} \left( \frac{r^2}{\rho} \frac{d\rho}{dr} \right) = - 4 \pi G r^2 \; \rho[/math] [math] \left( \frac{K}{4 \pi G \left(\rho_0 \; R \right)^2} \right) \frac{d}{dx} \left( \frac{x^2}{y} \frac{dy}{dx} \right) = - x^2 \; y[/math] where we have normalized the radius coordinate value, by the radius of the world ( R ); and the density value, by the uncompressed "natural" density ( [math]\rho_0 \approx 3000 \; kg \; m^{-3}[/math] ). In words, that equation states: (scaling factor) x (increase in density) = RHS The equation is similar, for all worlds (of the same bulk composition, defining K, [math]\rho_0[/math]); but bigger worlds (larger R) have a much smaller scaling factor ( [math]\propto R^{-2}[/math] ), requiring much larger increases in density. Now, note: [math] \boxed{ \alpha \equiv \left( \frac{K}{4 \pi G \left(\rho_0 \; R \right)^2} \right) }[/math] [math]\alpha_{\oplus} \approx \frac{1}{3}[/math] [math]\alpha_{moon} \approx 5[/math] [math]\alpha_{mars} \approx 1[/math] So, the planet Mars seems to be "transitory", between the "low-compression" regime vs. "high-compression" regime, i.e. of "moons" vs. "worlds" (for want of worthier words). And, Mars also seems transitory, between the inert Moon, and geologically active Earth. Perhaps plate tectonics somehow results, from "high compression" of rocky material?? Note, that resembles supra-adiabatic compression, inside stars, which generates convection, i.e. 
material in the center is so compressed, that were it to expand and rise, it would still have more heat energy, i.e. temperature, than surrounding material, so being less dense, and so buoyantly rising. So, by analogy to stars, compression could conceivably cause convection, i.e. plate tectonics. On a P-T diagram, with adiabats over-drawn ( [math]P \propto T^{5/2}[/math] ); increasing the size ( R ) of the planetoid, and so decreasing its scale-factor ( [math]\alpha[/math] ), results in the world's "geotherms" ( P( r ) vs. T( r ) plotted parametrically, from the surface at R where P( R ) = T( R ) = 0 ) rising towards much higher densities (and, so, Pressures), until, with worlds bigger than Mars, their "geotherms" rise up above the adiabatic curve, so that central material tends to begin convecting. The above equation, when numerically solved & plotted (with the "Wolfram Alpha" website, using initial values y'(1) = 0, y(1) = 1, i.e. at the surface (x = 1), material has its natural density (y = 1), and is initially non-compressing (y' = 0) ), appears perfectly plausible
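A pure-Python sketch of the shooting approach described earlier (integrating outward from near the center with a trial central density; the values [math]\alpha = 0.3[/math] and [math]y_c = 2[/math] are illustrative assumptions, and a real calculation would use Mathematica or scipy):

```python
def integrate(alpha, yc, x0=1e-3, x1=1.0, n=2000):
    """RK4 for  alpha * d/dx[ (x^2/y) dy/dx ] = -x^2 * y,  written as the system
         dy/dx = u*y/x^2,    du/dx = -x^2*y/alpha,    where u = (x^2/y) dy/dx.
    Shooting from near the center, with a trial central density y(x0) = yc."""
    h = (x1 - x0) / n
    x, y, u = x0, yc, 0.0

    def f(x, y, u):
        return u * y / x**2, -x**2 * y / alpha

    for _ in range(n):
        k1y, k1u = f(x, y, u)
        k2y, k2u = f(x + h/2, y + h/2*k1y, u + h/2*k1u)
        k3y, k3u = f(x + h/2, y + h/2*k2y, u + h/2*k2u)
        k4y, k4u = f(x + h, y + h*k3y, u + h*k3u)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        x += h
    return y

y_surface = integrate(alpha=0.3, yc=2.0)
print(y_surface)   # density falls monotonically from the trial central value
```

Shooting then means bisecting on `yc` until `y_surface` comes out to 1.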
  16. There is an observed relation, between the brightness & power of relativistic jets, common to Quasars & GRBs: http://phys.org/news/2012-12-common-physics-black-holes.html Could the following calculations help explain the same? [math]P = \frac{e^2}{6 \pi \epsilon_0 c} \left( \gamma^3 \dot{\beta} \right)^2[/math] [math] = \frac{d}{dt} \left( \gamma m_e c^2 \right) = m_e c^2 \left( \gamma^3 \beta \dot{\beta} \right) [/math] So: [math]\gamma^3 \dot{\beta} = \frac{3 c}{2} \frac{m_e c^2 \; \beta}{\left( \frac{e^2}{4 \pi \epsilon_0} \right) } = \frac{3 c}{2 r_e} \beta [/math] [math]\frac{\gamma^3}{\beta} d\beta = \frac{3 c}{2 r_e} dt[/math] [math]\frac{\gamma^3}{\beta} \frac{d\gamma}{\frac{d\gamma}{d\beta}} = \frac{3 c}{2 r_e} dt[/math] [math]\frac{\gamma^3}{\beta} \frac{d\gamma}{\gamma^3 \beta} = \frac{3 c}{2 r_e} dt[/math] [math]\frac{\gamma^2}{\gamma^2-1} d\gamma = \frac{3 c}{2 r_e} dt[/math] By partial fractions: [math]\frac{\gamma^2}{\gamma^2-1} = 1 + \frac{1/2}{\gamma-1} - \frac{1/2}{\gamma+1}[/math] So the integration yields: [math]\Delta \left( \gamma + ln \left( \sqrt{\frac{\gamma - 1}{\gamma + 1}} \right) \right) = \frac{3 c}{2 r_e} \Delta t[/math] For relativistic jets [math]\gamma \gg 1[/math], and most of the power is radiated early on, at high [math]\gamma[/math]. So: [math]\Delta \gamma \approx \frac{3 c}{2 r_e} \Delta t[/math] [math]\boxed{ t_{cool} \approx \frac{2 r_e}{3 c} \gamma_0 }[/math] But the initial electron energy was also proportional to [math]\gamma_0[/math]. So, the average power of emissions is (calculated to be) constant at all energy scales: [math]\bar{P} \equiv \frac{E_0}{t_{cool}} \approx \frac{\gamma_0 m_e c^2}{ \frac{2 r_e}{3 c} \gamma_0 } = \frac{3 c}{2 r_e} m_e c^2 \approx 10^{10} W[/math] Perhaps some similar sort of scale invariance, whereby the power emitted by decelerating electrons is quasi-constant, could account, for the observed Jet brightness / power relation.
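Evaluating the boxed results numerically ([math]\gamma_0 = 10^4[/math] is an illustrative choice; the cooling time scales linearly with it, while the mean power does not depend on it at all):

```python
c      = 2.998e8     # m/s
r_e    = 2.818e-15   # m, classical electron radius
m_e_c2 = 8.187e-14   # J, electron rest energy

P_bar = (3 * c / (2 * r_e)) * m_e_c2   # mean radiated power, independent of gamma_0
print(P_bar)                           # ~1.3e10 W

gamma0 = 1e4                           # illustrative initial Lorentz factor
t_cool = (2 * r_e / (3 * c)) * gamma0  # cooling time, linear in gamma_0
print(t_cool)                          # ~6e-20 s
```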
  17. So as not to be misleading... as i understand GR, as applied to Friedmann cosmologies, space-time is a static entity. At all times, that space-time is filled with matter-energy; and at each time, the density of matter-energy determines the scale factor (and rate of change thereof). For the Friedmann equations, one assumes, first and foremost, the global topology of space-time, e.g. closed (k=+1). Thereafter, that global topology is immutable. So, if (say) you choose to model a closed cosmology, then, ipso facto, the density within that space-time fabric must always be greater than the corresponding critical density -- the equations force everything else to adjust, so as to maintain the chosen topology. So, assuming the accuracy of the Friedmann cosmologies; then if our cosmos is closed today, then it always has been, and always will be. That means that there was always supra-critical density in the past; and will always be in the future. Ipso facto, what we perceive to be future regions of space-time, are already filled, with supra-critical mass-energy density -- presumably from the (wave-functions of the) electrons, protons, photons, neutrinos, and other fundamental quanta already existing "now", which (whose wave-functions) presumably persist far into the future. Otherwise, if the far future were devoid of matter (say), then the far future would have to be an open (or flat, Einstein-de Sitter) topology, inconsistent with the present, closed topology. But Friedmann space-time fabrics don't flip back and forth, between various global topologies. To be Relativistically invariant, i would guess that Relativistic QM equations, e.g. the Klein-Gordon equation, "must" treat wave-functions as fully (3+1)D objects, which transform in Lorentz-invariant ways. the future holds all possibilities (more & more of which are "pruned" away) Wave-functions persist. Even when they "collapse", they only "shrink" into one of previously many possibilities. 
That one enduring possibility, after being actualized, then continues to evolve, according to the equations of QM, as it already would have, had there occurred no wave-function collapse. So, at present epoch, the wave-functions of fundamental quanta are pluri-potent, i.e. full of myriad possibilities, each evolving according to QM. Those possibilities spread out through space, and evolve over time. So, wave-functions can be crudely visualized as "bushy trees". As time advances, and interactions cause "collapses", some possibilities are actualized; and most vanish away. In the tree analogy, time advances up towards the top tips of the tree canopy; and "collapses" cause every branch at that altitude to be sawed off, except one (the one actualized, i.e. observed on measurement / interaction). Then, that possibility simply keeps on evolving, as if nothing had changed (for it, nothing did). In analogy, time keeps crawling up the lone-surviving, and now re-branching, branch. Until the next "collapse". Then, the process repeats -- every branch at that higher altitude is sawed off, but one. Time then keeps crawling up its re-branching "mini-tree" like structure. And so on. So, as time advances, the pluri-potent wave-functions of particles become more and more sparse, "thinned out", like a gardener pruning trees. Pluri-potent wave-functions, "ghosted out" across their many partially-present possibilities, tend to be less localized, and more dispersed, than actualized particles, immediately after a wave-function "collapse". Cp. the famous "Double Slit Experiment" -- wave functions are spread out across the whole macroscopic detector plane, before collapsing to a small microscopic region. Ipso facto, the future regions of the space-time fabric, which currently harbor vast pluri-potentiality, may be filled, more uniformly & isotropically, than at present, due to the phenomenon, of the QM spreading of wave-functions. 
If so, then the future regions of space-time may be "smoother" than at present. The evolution of wave-functions through time seems to be one of "choosing what to eat from a buffet"; and wave-functions are like the stack of all possible menus, for all possible meals, from now until the end of time (BC). Meal after meal, one dish is chosen from that menu, which then vanishes, along with all of the could-have-been menus, for all the desserts & next-meals, for all of those other non-chosen meals. Seemingly, "the future holds all possibilities"; interactions / measurements / observations keep selecting from remaining possibilities, ones to actualize. The future "possibility tree" of wave-functions is progressively "pruned", down to one "trunk", then one "branch" off of that trunk, then one "sprig" off of that branch, etc.
  18. On second thought, i doubt that anti-particles (perceived to be) propagating forwards in time are completely like particles propagating backwards in time. For, the EM fields (say) of an anti-electron would still propagate away from the positron, at light-speed; the virtual photons of EM interactions between the positron and other charges would occur in forward time. As such, a normal electron, propagating backwards in time, could "tell" that it was not simply a positron propagating forwards in time; because particles in its vicinity would (from its time-reversed perspective) respond to what it would perceive to be its future positions & speeds. Again, particles could tell, by probing their EM fields, whether they were propagating forwards in time (in the same time sense as their fields' virtual photons), or backwards in time (vice versa). So, perhaps the "Feynman perspective" was only a poignant comparison, not meant to be completely accurate.
  19. The (3+1)D fabric of space-time is static. But aren't wave-functions [math]\Psi(r)[/math] only 3D objects? They evolve through time, but they do not stretch back into the past, nor reach out into the future. If so, then wave-functions have lower dimensionality than the fabric of space-time in which they reside. If you consider a closed cosmology, where the fabric of space-time resembles the (1+1)D surface of a rugby football; then the collection of all wave-functions residing in our space-time fabric, "now", would resemble a 1D rubber band, stretched around the equator of the rugby football. As time passes, the rubber band is rolled from one tip ("Big Bang") towards the opposite tip ("Big Crunch"). (One could even speculate about "striping" the rugby football with several parallel rubber bands, representing collections of matter & energy, co-existing on the same space-time fabric, at different "nows".) However, against this, do not GR, and the Friedmann equations, treat mass and energy as filling the whole of space, at all times, i.e. filling the whole of space-time? At each "now", the shape of space-time reflects the mass & energy density at that "now"; but if the whole fabric of space-time is a static structure, whose shape at all times is already fixed; then mass & energy density also already exists, at all "nows". And, that density derives, ultimately, from the wave-functions of the quanta within the space-time fabric "now". So, seemingly, the wave-functions of all particles already fill the whole of space-time, from BB to BC. (Perhaps the future parts of wave-functions are as-yet undetermined, still latent with quantum pluri-potentiality ("what could be"); whereas the past parts of wave-functions have been determined, now fixed into past history ("what was"); time-evolution of wave-functions resembles traversing down through a tree-like possibility structure ("choose your own adventure"), whilst tearing out and tossing away all non-chosen choices. 
As time advances, the tree structure is whittled down (the "choose your own adventure" book is thinned away). But, intrinsically, all wave-functions have some existence & presence, at all times, from the "beginning of time" to the "end of time", from BB to BC. Mathematically, wave-functions even exist throughout all space, e.g. Hydrogen wave-functions have exponentially decaying "tails" that, theoretically, are still non-zero, at arbitrarily far-off locations. Ipso speculato, the fabric of space-time is a quantum object, qualitatively similar, to the individual particles residing within it, e.g. all are (3+1)D.)
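The claim that hydrogen wave-functions remain non-zero at arbitrarily far-off locations can be made quantitative. For the 1s ground state, the radial probability beyond radius R (in units of the Bohr radius) has the closed form [math]e^{-2R}(2R^2 + 2R + 1)[/math]; a quick Python check (a standard textbook result, sketched here for illustration):

```python
import math

def p_outside_1s(R):
    """Probability of finding a 1s hydrogen electron beyond radius R
    (R in units of the Bohr radius a0): closed form of the integral
    of the radial density 4 r^2 exp(-2r) from R to infinity."""
    return math.exp(-2.0 * R) * (2.0 * R * R + 2.0 * R + 1.0)

total = p_outside_1s(0.0)   # all of space: probability 1
tail = p_outside_1s(10.0)   # beyond 10 Bohr radii: tiny, but non-zero
```

The tail beyond 10 Bohr radii is of order 10^-7: exponentially small, yet strictly positive, exactly as the text asserts.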
  20. First, the Relativistic energy equation: [math]E^2 = (p c)^2 + (m c^2)^2[/math] treats rest-mass like a "hyper-momentum", in an extra spatial dimension: [math]E^2 = (p_{xyz} c)^2 + (p_w c)^2[/math] where [math]p_w \equiv m c[/math]. That extra "w" dimension can be construed, as the hyper-spatial "thickness" of the fabric of space-time. The fabric of space-time may have an "inside surface", and an "outside surface". And, the wave-functions of quanta may reside in between both said surfaces, like ice cream between the wafers of an ice cream sandwich. In reduced-dimensional visualization, in (1+1)D, the space-time fabric of our universe may resemble a "vase". In this hypothesis, that "vase" would have some "hyper-thickness", and would not be an infinitely thin membrane. Wave-functions of quanta hypothetically "slosh back and forth", reflecting from the "inside" & "outside" surfaces of the space-time fabric, which hypothetically acts as a wave-guide: Mass-less photons propagate, at the speed of light (c), entirely through time & space. Massive particles, at rest, propagate, at light-speed, entirely through time and the hyper-spatial "thickness" of the fabric of space-time. As massive particles are accelerated, their "absolute" hyper-spatial velocity rotates, from entirely "across" the fabric of space-time, bouncing back and forth in the "w" dimension; to entirely through the fabric of space-time, in the "xyz" dimensions. When accelerated to spatial velocity (v), the particle zig-zags across and through the space-time fabric, at some angle to the "w" axis: [math]v_{xyz} = c \; sin(\theta)[/math] [math]v_w = c \; cos(\theta)[/math] Because some of the particle's velocity is "used up" propagating through space, the particle propagates "across" space (in the "w" thickness dimension) more slowly than when at rest. The longer "bounce time" explains (Special) Relativistic time-dilation. 
If the "thickness" of space-time is [math]\delta w[/math]: [math]\delta t_{bounce} = \frac{\delta w}{ c \; cos(\theta) } = \frac{\delta t_0}{\sqrt{1 - \beta^2}} = \gamma \; \delta t_0[/math] But, why would slower "sloshing side-to-side across space-time" make clocks run slower, i.e. make quantum wave-functions evolve more slowly? This picture implies, that evolution of wave-functions somehow requires "bounces" off of the bounding surfaces of the space-time fabric. According to the SWE, the evolution of wave-functions is proportional to their energies: [math]\delta \Psi = \frac{\delta t}{\imath \hbar} \hat{H} \Psi[/math] And, in (General) Relativity, energies cause curvatures into the fabric of space-time. So, perhaps the energies of quanta induce "wrinkles" in the (inner & outer) "skins" of space-time; and when their wave-functions "slosh" up against said wrinkled surfaces, they reflect with distortions & diffractions, that cause wave-functions to evolve, spread out, etc. Zooming into the (1+1)D space-time fabric visualized above, looking at a short segment of space, at a single slice of time, whilst emphasizing said segment's hyperspatial thickness; the presence of a particle with energy may "wrinkle" the bounding "skin" surfaces of the space-time fabric: [math]= \; \longrightarrow \; \approx[/math] Then, as the particle's wave-function bounces back and forth, across the hyper-spatial thickness of the space-time fabric [math]\left( \uparrow \downarrow \right)[/math] the "wavy" space-time fabric induces distorting diffractions into the wave-function, after every bounce & reflection. Thus, the rate of reflections & bounces [math]\left( \delta t = \gamma \; \delta t_0 \right)[/math] determines the rate at which the wave-function evolves. 
Wave-functions evolve by one unit of change per bounce: [math]\delta \Psi = \frac{\delta t}{\imath \hbar} \hat{H} \Psi[/math] [math]\delta \Psi = \frac{\gamma \; \delta t_0}{\imath \hbar} \hat{H} \Psi[/math] Wave-functions evolve slower, at speed, because they bounce back-and-forth across the space-time fabric more slowly; and their infrequent sloshings side-to-side afford less rapid evolution. One must wait longer [math]\left( \delta t_0 \longrightarrow \gamma \; \delta t_0 \right)[/math] to observe the same change [math]\left( \delta \Psi \right)[/math]. This same picture can account, too, for gravitational time-dilation. For, around a massive object, space "sags" according to the Flamm paraboloid; the hyper-spatial height (w) of the space-time fabric is: [math]w(r) = 2 R_S \sqrt{\frac{r}{R_S} - 1}[/math] per the rubber sheet analogy. If you imagine, that wave-functions always bounce "up and down", whether that rubber sheet lies flat horizontally; or sags down somewhat vertically; then for a given transverse thickness [math]\delta w[/math] of the rubber sheet, the vertical distance the wave-function must actually propagate in between bounces is: [math]\delta l \; cos(\theta) = \delta w[/math] (angle between fixed "vertical" and normal to space-time fabric) [math]tan(\theta) = \frac{dw}{dr}[/math] (angle between "horizontal" and tangent to the space-time fabric, equal to the above by geometry) [math]\delta t = \frac{\delta l}{c} = \frac{\delta w}{c \; cos(\theta)} = \frac{\delta w}{c} sec(\theta)[/math] [math] = \delta t_0 \sqrt{1 + tan^2(\theta)}[/math] [math] = \delta t_0 \frac{1}{\sqrt{1 - \frac{R_S}{r} }}[/math] which is the correct formula, from the Schwarzschild metric. Thus, gravity can be construed as simulating speed, since curved space-time also creates an angle between the hyper-spatial / spatial propagation of wave-functions; and the wave-guide-like, skin-like surfaces, of the fabric of space-time. Speed rotates the former w.r.t. 
the latter; whereas gravity curves the latter w.r.t. the former. Either way, wave-functions bounce back-and-forth less frequently; and since said bounces are what induce distorting diffractions into the wave-functions, so slower bouncing implies slower wave-function evolution, i.e. the appearance of "time-dilation". This simple picture, of space-time being a fabric with non-zero hyper-spatial "thickness" [math]\left( \delta w \right)[/math], explains both the SWE; and Special & General Relativistic time dilation.
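Both dilation factors in this "bounce" picture can be verified numerically: the special-relativistic case from the angle decomposition [math]v_{xyz} = c \; sin(\theta)[/math], and the gravitational case from the slope of the Flamm paraboloid. A quick Python sketch (function names are mine; units chosen with [math]R_S = 1[/math]):

```python
import math

def gamma_from_bounce(beta):
    """SR case: v_xyz = c sin(theta), v_w = c cos(theta); the bounce time
    delta_w / (c cos theta) stretches by the factor 1 / cos(theta)."""
    theta = math.asin(beta)
    return 1.0 / math.cos(theta)

def gamma_from_flamm(r, dr=1e-8):
    """GR case: tilt of the Flamm paraboloid w(r) = 2 sqrt(r - 1)
    (units with R_S = 1), with tan(theta) = dw/dr and dilation
    factor sqrt(1 + tan(theta)^2), estimated by central difference."""
    w = lambda x: 2.0 * math.sqrt(x - 1.0)
    slope = (w(r + dr) - w(r - dr)) / (2.0 * dr)
    return math.sqrt(1.0 + slope * slope)

# SR check: the geometric factor reproduces 1/sqrt(1 - beta^2)
sr_ok = all(abs(gamma_from_bounce(b) - 1.0 / math.sqrt(1.0 - b * b)) < 1e-12
            for b in (0.1, 0.6, 0.9))

# GR check: the slope factor reproduces the Schwarzschild 1/sqrt(1 - R_S/r)
gr_ok = all(abs(gamma_from_flamm(r) - 1.0 / math.sqrt(1.0 - 1.0 / r)) < 1e-6
            for r in (2.0, 5.0, 50.0))
```

The two checks confirm the algebra of this item: the "bounce" geometry reproduces [math]\gamma[/math] exactly, and the paraboloid slope reproduces the Schwarzschild factor.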
  21. According to Classical QM, wave-functions spread out through (proper) time. Do the Relativistic wave-functions, of photons, also spread out through time? Given billions of years, the wave-function of a stationary electron (say) could possibly spread out to many times its original size; if photons did something similar, then that diffusion phenomenon would resemble cosmic redshift. Or, does the fact that photons experience zero proper time (between any two events on their world-lines) imply that photons are "frozen in time", "frozen koosh-balls of EM field" that simply propagate through space in a static, non-spreading, configuration? If you designed a particle-beam-based communication system; would those particles "red-shift" as they crossed the cosmos, arriving at distant galaxies spread out over many times their original wave-function size? And, would that be due to GR-based cosmic expansion redshift; or Classical QM-based spreading of wave-functions; or both?
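The "billions of years" estimate above can be checked against the standard non-relativistic result for a free Gaussian wave packet, [math]\sigma(t) = \sigma_0 \sqrt{1 + \left( \frac{\hbar t}{2 m \sigma_0^2} \right)^2}[/math]. A Python sketch (the 1 Angstrom initial width and 10^9 yr duration are illustrative choices, not from the text):

```python
import math

HBAR = 1.054571817e-34   # J s
M_E  = 9.1093837015e-31  # kg (electron mass)
YEAR = 3.156e7           # s

def sigma_free(sigma0, m, t):
    """Width of a free Gaussian wave packet after time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2)."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0 ** 2)) ** 2)

# An electron initially localized to ~1 Angstrom, left alone for a billion years:
s = sigma_free(1e-10, M_E, 1e9 * YEAR)   # ~1e22 m, of order a megaparsec
```

So a free electron's wave-function would indeed spread to vastly many times its original size, consistent with the text's supposition; whether photons undergo anything analogous, given their vanishing proper time, is the open question posed above.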
  22. From a "Feynman perspective", perceiving anti-particles as (normal) particles, propagating backwards in time (anti-time-wards), pairwise processes resemble Compton scattering, of electrons, off of hard (high energy) photons: Pair annihilation resembles an incident electron, scattering off of an intense radiation field, and being "deflected backwards in time" Pair creation (as perceived in forward-time frames) resembles an anti-time-wards propagating electron, scattering off of an intense radiation field, and being "re-deflected forwards in time" Pairwise processes seemingly resemble Compton scattering, wherein the interactions are intense enough, to "boot particles back the other way through time". Normal particles are propagating, through the fabric of space time, from the "beginning of time" (Big Bang) towards the "end of time" (Big Crunch); anti-particles are normal particles, propagating through the fabric of space time, "the other way", from "the end of time" (BC) towards "the beginning of time" (BB), from a "Feynman perspective", as herein understood & defined. Space-time "doesn't care" which way particles propagate (BB-->BC, BC-->BB); our "arrow of time" seemingly simply reflects the fact, that the preponderance of particles, in our space-time fabric, are all propagating through space-time, in the same timely direction (BB-->BC). Inexpertly, that asymmetry seems to deny the possibility, that all particles & antiparticles are the same exact particle, zig-zagging forwards and backwards through time. For, were that the case, there ought to be as many particles as anti-particles -- the one particle would have to be propagating forwards in time as often as backwards in time. Even if so, qualitatively, the "Feynman perspective" seemingly implies that pairwise processes can be analogized to Compton scatterings (?). Please ponder a black hole, surrounded by an intense >MeV radiation field. 
Electrons falling towards the BH from afar could be considered, as successively "deflecting backwards in time", and then "re-deflecting forwards in time", off of that radiation field. Viewed from a large-scale perspective, such electrons would fall towards the BH, forwards through time; and then simply plunge straight into the BH, along a space-like axis, as they (on closer inspection) "rattled back-and-forth through time" down towards the BH: ................| ...._______| ../.............| /...............| |................| |................| e-............BH
  23. i understand, that the fabric of space-time, as a (3+1)D membrane, is not expanding, in a "hyper-dimensional" sense, viewing the membrane from the higher-dimensional "bulk" in which it resides. For example, imagine a (1+1)D space-time (x,t), visualized as a single sheet of paper, trimmed into a triangle, with the tip "down" towards the viewer. Time runs vertically, space horizontally. As time increases, from bottom tip to top edge, the space extent of that fabric increases. So, 1D space seems to expand. But the whole sheet of (trimmed) paper simply sits there on the desk -- (1+1)D space-time is static, unchanging, non-expanding.
  24. For sake of simplicity, assume a flat cosmology. Then, from the Friedmann equations: [math]H^2 = \frac{8 \pi G \rho}{3}[/math] [math]\dot{\rho} = - 3 H \rho = - \sqrt{24 \pi G} \; \rho^{3/2}[/math] If, at one moment of time, two separate regions of the universe have slightly different densities; then they will have slightly different scale factors, and expansion rates, according to the above equations. Seemingly, the density difference would evolve as: [math]\frac{\partial}{\partial t} \left( \rho - \bar{\rho} \right) = - \sqrt{24 \pi G} \; \left( \rho^{3/2} - \bar{\rho}^{3/2}\right)[/math] [math]\approx - \sqrt{24 \pi G} \; \left( \bar{\rho}^{3/2} \left( 1 + \frac{3}{2} \frac{\delta \rho}{\bar{\rho}} \right) - \bar{\rho}^{3/2}\right)[/math] [math]= - \frac{9}{2} \sqrt{ \frac{8 \pi G \bar{\rho}}{3} } \; \delta \rho[/math] [math]= - \frac{9 H_0}{2} \sqrt{\Omega_0} \; \alpha^{-3/2} \delta \rho[/math] [math]= - \frac{3}{t_0 } \alpha^{-3/2} \delta \rho[/math] [math]\boxed{ \frac{\partial (\delta \rho) }{\partial \tau} \approx - 3 \frac{\delta \rho}{\tau} }[/math] wherein the scale factor and time have been normalized: [math]\alpha \equiv \frac{a(t)}{a_0}[/math] [math]\tau \equiv \frac{t}{t_0} = \frac{3 t}{2 H_0^{-1}} [/math] So, seemingly, density perturbations (in a flat cosmology) decay away as [math]\delta \rho \propto t^{-3}[/math]. According to the Friedmann equations, denser regions expand faster, et vice versa. So, expanding (flat) space-time tends to "smooth" and "unwrinkle" itself as it stretches. Regions initially denser expand faster, and "catch up"; regions initially more diffuse expand slower, and "drop back". So, why do textbooks state that density perturbations grow with time? Instead, i understand, from the above, that all density perturbations derive from the flow of matter through space-time, i.e. 
from "peculiar velocities" as seen in galaxies residing in cosmic large-scale Structures -- for matter fixed within the fabric of space-time, the expansion of the universe tends to smooth such density differences.
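The boxed scaling [math]\delta \rho \propto t^{-3}[/math] can be checked numerically against the exact solution of the density equation of this item. In normalized units with [math]24 \pi G = 1[/math], the equation [math]\dot{\rho} = -\rho^{3/2}[/math] integrates exactly to [math]\rho^{-1/2}(t) = \rho_0^{-1/2} + t/2[/math]; a Python sketch (the unit choice and initial densities are illustrative assumptions):

```python
def rho(t, rho0):
    """Exact solution of d(rho)/dt = -rho**1.5 (units with 24*pi*G = 1):
    rho**-0.5 grows linearly with time, so rho = (rho0**-0.5 + t/2)**-2."""
    return (rho0 ** -0.5 + t / 2.0) ** -2

# Two regions of a flat universe with slightly different initial densities:
RHO1, RHO2 = 1.0, 1.001

# If the boxed result (delta-rho decays as t**-3) holds, then the
# product (delta-rho * t**3) should approach a constant at late times:
products = [(rho(t, RHO2) - rho(t, RHO1)) * t ** 3 for t in (1e4, 1e5, 1e6)]
```

The products agree to better than a percent over two decades of time, confirming the derived [math]t^{-3}[/math] decay of density differences between comoving, non-flowing regions.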
  25. the posted equation for sound speed was wrong: [math]P_M = P_0 \left( \frac{\rho}{\rho_0} \right)^{5/3}[/math] [math]\frac{T}{T_0} = \left( \frac{\rho}{\rho_0} \right)^{2/3}[/math] [math]P_R = \frac{a T^4}{3}[/math] [math]\boxed{P = P_{R,0} \left( \frac{\rho}{\rho_0} \right)^{8/3} + P_{M,0} \left( \frac{\rho}{\rho_0} \right)^{5/3}}[/math] [math]U_M = \rho c^2 + \frac{3}{2} P_M[/math] [math]\boxed{ U = U_{R,0} \left( \frac{\rho}{\rho_0} \right)^{8/3} + U_{M,0} \left( \frac{\rho}{\rho_0} \right) + \frac{3}{2} P_{M,0} \left( \frac{\rho}{\rho_0} \right)^{5/3} }[/math] [math]C_S^2 \equiv c^2 \frac{\partial P}{\partial U} |_{U=U_0}[/math] [math] = c^2 \frac{\partial P}{\partial \rho} / \frac{\partial U}{\partial \rho} |_{\rho=\rho_0}[/math] [math]\boxed{ C_S^2 = \frac{c^2}{3} \left( \frac{1 + \frac{5}{8} \frac{P_M}{P_R} }{1 + \frac{3}{8} \frac{U_M}{U_R} + \frac{5}{16} \frac{P_M}{P_R} } \right) }[/math] In the limit of radiation dominance [math]\left( U_R \gg U_M \right)[/math], the sound speed approaches the canonical value [math]C_S^2 \longrightarrow \frac{c^2}{3}[/math]. In the Classical limit of matter dominance [math]\left( \rho c^2 \gg \frac{3}{2} P, U_R \rightarrow 0 \right)[/math], the sound speed approaches the corresponding canonical value [math]C_S^2 \longrightarrow \frac{5}{3} \frac{P}{\rho} [/math]. During the epoch of Recombination ("De-ionization"), radiation & matter shared a common temperature [math]T \propto (1+z)[/math]. Thus, the ratio of matter-to-radiation Pressure was independent of redshift: [math]\frac{P_M}{P_R} = \left( \frac{\rho_0 k_B T_0}{\bar{m}} \times (1+z)^4 \right) / \left( \frac{1}{3} a T_0^4 \times (1+z)^4 \right) \approx 3 \times 10^{-8}[/math] Given a baryon-to-photon ratio of 3e-8, the pressure per quanta (baryon, photon) is approximately equal, suggesting some sort of "equipartition of energy". 
During De-ionization, the ratio of matter-to-radiation Density increased with decreasing redshift: [math]\frac{U_M}{U_R} = \frac{ \rho_0 c^2 \times (1+z)^3 }{a T_0^4 \times (1+z)^4} \approx \frac{22,000}{1+z}[/math] where critical density has been assumed. During De-ionization ( z~(1000-100) ), the sound speed, in the coupled radiation-matter fluid, so calculated, would have varied as: [math]C_S^2 \approx \frac{c^2}{3} \times \frac{8}{3} \frac{1+z}{22,000}[/math] [math]\beta_S \approx \frac{\sqrt{1+z}}{150} \approx 0.2 - 0.07[/math] The Jeans' wavelength would have increased with decreasing redshift: [math]\lambda_J \approx \frac{C_S}{\sqrt{G \rho}} \approx \frac{c/150}{\sqrt{H_0^2 \Omega_0}} \times \frac{1}{1+z} \approx \frac{D_0}{150} \times \frac{1}{1+z} \approx \frac{100 Mpc}{1+z}[/math] which would have been of order ~(0.1-1) Mpc. Including more correct numerical factors, ~(0.2-2) Mpc. At present epoch, such size scales would have stretched to ~200 Mpc. Such size scales, as calculated, accord closely with canonical estimates, of the Jeans' wavelength, at the epoch of De-ionization, and last scattering of CMB photons, accounting for CMB one-degree anisotropies; and for observed "Baryon Acoustic Oscillations" of the large scale structuring of the spatial distribution of galaxies, on ~500 Mly scales. The above corrections seemingly show that this amended analysis is consistent with canonical calculations, in common cosmology textbooks. 
Indeed, perturbations the size of the Jeans' wavelength, as calculated, would be about one-degree on the sky, (largely) independent of redshift: [math]\theta \approx \frac{\lambda_J}{D_A} \approx \frac{200 Mpc}{1+z} \times \frac{1+z}{2 D_0 \left(1 - (1+z)^{-1/2} \right) } \approx \frac{0.2 Gpc}{2 \times 14 Gpc} \approx \frac{1}{2}^{\circ}[/math] i understand, that perhaps the term "relic Baryon-Photon Acoustic Oscillations" might be more accurate, in that those "large" scale perturbations, corresponding at present epoch to Voids & Super-Clusters, were imprinted before De-ionization, in the coupled matter-energy fluid.
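As a numeric sanity check on the boxed sound-speed formula, here is a Python sketch plugging in the estimates above (the ratios P_M/P_R ~ 3e-8 and U_M/U_R ~ 22,000/(1+z) are taken from this item; note the [math]\sqrt{1+z}/150[/math] form is an approximation that drops the leading "1+" in the denominator, so it slightly overstates the exact value):

```python
import math

def beta_s(z, pm_over_pr=3e-8, um_over_ur=None):
    """Sound speed in units of c, from the boxed formula:
    C_S^2 = (c^2/3) (1 + (5/8) P_M/P_R)
            / (1 + (3/8) U_M/U_R + (5/16) P_M/P_R)."""
    if um_over_ur is None:
        um_over_ur = 22000.0 / (1.0 + z)   # critical-density estimate above
    cs2_over_c2 = (1.0 / 3.0) * (1.0 + (5.0 / 8.0) * pm_over_pr) / (
        1.0 + (3.0 / 8.0) * um_over_ur + (5.0 / 16.0) * pm_over_pr)
    return math.sqrt(cs2_over_c2)

b_radiation = beta_s(0, um_over_ur=0.0)   # radiation-dominated limit: ~1/sqrt(3)
b_1000 = beta_s(1000)                     # ~0.19, near the quoted ~0.2
b_100 = beta_s(100)                       # ~0.06, near the quoted ~0.07
```

The exact formula reproduces the canonical radiation-era value [math]c/\sqrt{3}[/math], and gives [math]\beta_S[/math] of roughly 0.2 down to 0.07 across z ~ 1000 to 100, consistent with the estimates in the text.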