Leaderboard

Popular Content

Showing content with the highest reputation on 02/14/20 in all areas

  1. Guessing you mean the 3 in 3 × 10^5. I have not heard one either. Google gives this, which is slightly more restrictive than my (engineering) version. https://www.shmoop.com/study-guides/pre-algebra/basic-operations/scientific-notation
    1 point
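The convention described in the linked guide can be sketched in code: normalize a number so that the mantissa m satisfies 1 ≤ |m| < 10. A minimal Python sketch (the function name is mine, not from the link):

```python
import math

def to_scientific(x):
    """Split x into (mantissa, exponent) with 1 <= |mantissa| < 10."""
    if x == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(x)))
    return x / 10**exponent, exponent

# 300000 normalizes to 3 x 10^5, matching the example above.
mantissa, exponent = to_scientific(300000)
```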
  2. I think the problem is in your (implicit) assumption that there will still be a continuous "wall" (wavefront?) of light at a great distance. If you look at something sufficiently far away, then you will receive very few photons: some might go just above your head, some to your right and, every now and again, one might enter your eye (or telescope). Some images of very distant objects are taken over very long periods because it takes hours or days to catch enough photons to make an image. But your model does explain why the intensity falls off with the square of the distance: the photons are spread out over the surface of a sphere whose area is proportional to the radius squared.
    1 point
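The inverse-square reasoning in the reply above can be checked numerically. A minimal Python sketch (the photon count is illustrative, not from the post):

```python
import math

def photon_flux(n_photons, distance):
    """Photons per unit area: n photons spread over a sphere of area 4*pi*r^2."""
    return n_photons / (4 * math.pi * distance**2)

# Doubling the distance quarters the flux: the inverse-square law.
near = photon_flux(1e20, 1.0)
far = photon_flux(1e20, 2.0)
```

The ratio near/far is exactly 4, independent of the photon count.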
  3. The issue at hand is whether or not you can see what's going on. You can see an impact even if it takes milliseconds. You just need a sufficiently fast movie camera. But you aren't going to film a photon absorption. (Not claimed by me) Not everyone in the discussion has a PhD in atomic physics. It doesn't make them wrong. You also probably don't want the conversation to be phrased as it might if the only participants were PhD physicists. Would it be helpful to you for me to say it's like the intermediate state of a Raman transition (with the expectation that I wouldn't have to do any followup explanation)? Does it "hit its target" after scattering? Do you think that the OP, or others asking questions, know what Raman scattering is, that they would consider it as part of the problem?
    1 point
  4. As I say, you need to start from the beginning. Learn a few basics. Then build on that. Then, eventually, you might be able to understand the things you are asking about. But just asking random questions about complex subjects is not a good way to learn.
    1 point
  5. ! Moderator Note You're making this an incredible amount of trouble, because you can't seem to recognize when someone is answering your questions (SO many answers have been given, but you're still looking for someone's help?). How many posts/pages were wasted because you kept insisting on temperatures below ABSOLUTE zero? You keep bringing up unrelated information you picked up from sources you refuse to link to, yet you want accurate clarification in the replies you get. Everyone here would love to show you what mainstream science has observed about the topic, but you seem very resistant to learning. It's clear you have many misconceptions that are holding you back. Can you please think about how you might alter your learning style? It clearly is doing you no favors, and it seems to be actively blocking you from accepting help.
    1 point
  6. ! Moderator Note This is the second time in a short span that you have done this: insinuated a claim, but not provided sufficient evidence in the thread to support it. It is not enough to just provide a link. Rule 2.7 states that people have to be able to participate without clicking any links. That puts the onus on you to provide a quote from the article that provides a basis for the discussion. In this case, that means the passage claiming that this star is older than the universe. You imply that the article says this, in no uncertain terms, but it doesn't. That kind of argument style isn't acceptable. Do better.
    1 point
  7. So you are pointing out that you can't do something that careful explanations of physics don't do. Light has wave properties/behaviors, and particle properties/behaviors. You seem to be rebutting a different argument. It's not a strawman, per se, because there are some introductory explanations out there that use that phrasing. But it's strictly for the tourists: people reading a pop-sci article, taking a survey class in physics, or getting an oversimplified explanation from someone. If you've moved past the tourist level, you know there is no classical behavior here and that classical descriptions don't apply. MigL mentioned the phrase "quantum particle", which underscores that point. A quantum particle is not a little ball bearing, nor is it a water wave.
    1 point
  8. My assumption is only that interactions between particles are adequately described by the framework of QFT. This trivially preserves causality, but not necessarily locality and/or realism. It is the current scientific consensus. You can choose not to follow that consensus, of course (as you seem to be doing here), but that doesn't automatically make it "wrong". You are merely putting forward a different hypothesis. In what sense then are they preferred? What mathematical definition of "preferred" are you using? Can you formalise this for us?

I'll be blunt with you: your very rejection of what you consider "dogma" appears to have become dogma itself for you. The paragraph I quoted the above from really lets that shine through very strongly. At least that's the vibe I'm picking up. Physics should not become a partisan issue. It is not about metaphysical notions of "right" or "wrong", but about what model works best in describing aspects of the universe as we see them. Sometimes models are "right" in some circumstances, but "wrong" in others. It's an epistemological endeavour, not an ontological one. The map isn't the territory, but it does need to accurately represent the relevant aspects of the terrain, on the relevant scales. Quantum theory / QFT actually does this rather well.

As for the specific example of entanglement: since no exchange of information is necessary at the time of measurement, causality never even comes into it at all. By letting go of either realism or locality (or both), we eliminate the very need to exchange information, and hence no artificial notions of superluminality, preferred frames etc. are necessary in the first place. Causality is trivially preserved, since the measurement always happens after entanglement has been created, and the outcomes of measurements are compared following the usual rules of SR, which again trivially preserve causality. There really is no problem here that needs to be "solved" somehow.

The problems only emerge if one demands that notions which appear fundamental in the classical domain (such as locality and realism) must be scale-independent, i.e. necessarily apply on all scales. Why should that necessarily be true? Again, it isn't about what is right or what is wrong, but about what model best fits the universe we observe. To that end, there is no problem whatsoever in setting aside notions of realism and locality, of absolute time and space, if the resulting model is in good agreement with available data. Realism and locality are not sacrosanct notions somehow built into the foundations of the universe on small scales; rather, they originate in what we humans think the universe should be like. They are a reflection of our own experience, which is, after all, rooted in classicality and the low-energy regime. The unscientific act would be to unquestioningly assume that such notions apply across all scales. There is no apparent reason why they must, but plenty of reason to believe that they don't. I think even the notion of causality itself may not be scale-independent. This remains to be seen.

To make a long story short: not only is it no problem for me personally to set aside locality and/or realism, but I think it is a perfectly reasonable thing to do, if the resulting model describes very well what it is supposed to describe, while at the same time respecting other principles of physics, such as diffeomorphism invariance. To me, introducing preferred frames and space-like world lines creates many more problems than it solves. Don't get me wrong here: investigating the implications of such things as preferred frames and space-like separations is quite a valid endeavour, but it doesn't seem to add any value to physics as it stands. It just creates unnecessary problems and complications.
Now, if you could put forward a model that preserves locality and realism without the need to add superluminality and preferred frames...that would indeed be something!
    1 point
  9. It can't be. That should be obvious if you are even remotely aware of what the "universe" is.
    1 point
  10. This is what I read somewhere about the cosmological constant changing to below 1, a comment about the article I posted: "If Popławski is right then the equation of state of Dark Energy might fall out trivially. I'd expect it to be close to w = p/ρ = −1, like the usual cosmological constant, but the kinetic term would make it slightly larger than −1. As it turns out, the measured value of w from the Planck data is −0.98, but with an error bar of 0.05. So beating down on the error might allow some interesting conclusions to be drawn." I am trying my best to understand. I am sorry, but if you want to make fun of me it's fine; I will keep trying to learn better. Also, this is from wiki about the Higgs and its possible decay lifetime of 1.6×10^−22 s, which is not long:

"Quantum mechanics predicts that if it is possible for a particle to decay into a set of lighter particles, then it will eventually do so.[162] This is also true for the Higgs boson. The likelihood with which this happens depends on a variety of factors including: the difference in mass, the strength of the interactions, etc. Most of these factors are fixed by the Standard Model, except for the mass of the Higgs boson itself. For a Higgs boson with a mass of 125 GeV/c^2 the SM predicts a mean lifetime of about 1.6×10^−22 s. The Standard Model prediction for the branching ratios of the different decay modes of the Higgs particle depends on the value of its mass. Since it interacts with all the massive elementary particles of the SM, the Higgs boson has many different processes through which it can decay. Each of these possible processes has its own probability, expressed as the branching ratio: the fraction of the total number of decays that follows that process. The SM predicts these branching ratios as a function of the Higgs mass (see plot). One way that the Higgs can decay is by splitting into a fermion–antifermion pair.

As a general rule, the Higgs is more likely to decay into heavy fermions than light fermions, because the mass of a fermion is proportional to the strength of its interaction with the Higgs.[120] By this logic the most common decay should be into a top–antitop quark pair. However, such a decay would only be possible if the Higgs were heavier than ~346 GeV/c^2, twice the mass of the top quark. For a Higgs mass of 125 GeV/c^2 the SM predicts that the most common decay is into a bottom–antibottom quark pair, which happens 57.7% of the time.[3] The second most common fermion decay at that mass is a tau–antitau pair, which happens only about 6.3% of the time.[3] Another possibility is for the Higgs to split into a pair of massive gauge bosons. The most likely possibility is for the Higgs to decay into a pair of W bosons (the light blue line in the plot), which happens about 21.5% of the time for a Higgs boson with a mass of 125 GeV/c^2.[3] The W bosons can subsequently decay either into a quark and an antiquark or into a charged lepton and a neutrino. The decays of W bosons into quarks are difficult to distinguish from the background, and the decays into leptons cannot be fully reconstructed (because neutrinos are impossible to detect in particle collision experiments). A cleaner signal is given by decay into a pair of Z bosons (which happens about 2.6% of the time for a Higgs with a mass of 125 GeV/c^2),[3] if each of the bosons subsequently decays into a pair of easy-to-detect charged leptons (electrons or muons). Decay into massless gauge bosons (i.e., gluons or photons) is also possible, but requires an intermediate loop of virtual heavy quarks (top or bottom) or massive gauge bosons.[120] The most common such process is the decay into a pair of gluons through a loop of virtual heavy quarks.

This process, which is the reverse of the gluon fusion process mentioned above, happens approximately 8.6% of the time for a Higgs boson with a mass of 125 GeV/c^2.[3] Much rarer is the decay into a pair of photons mediated by a loop of W bosons or heavy quarks, which happens only twice for every thousand decays.[3] However, this process is very relevant for experimental searches for the Higgs boson, because the energy and momentum of the photons can be measured very precisely, giving an accurate reconstruction of the mass of the decaying particle.[120]"
    1 point
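The quoted lifetime of about 1.6×10^−22 s follows directly from the decay width via τ = ħ/Γ. A minimal Python sketch; the total width of ~4.07 MeV for a 125 GeV/c^2 Higgs is my assumption for illustration, not stated in the quote:

```python
HBAR_MEV_S = 6.582119569e-22  # reduced Planck constant in MeV*s (CODATA value)

def mean_lifetime(total_width_mev):
    """Mean lifetime tau = hbar / Gamma, given the total decay width Gamma in MeV."""
    return HBAR_MEV_S / total_width_mev

# An assumed SM total width of ~4.07 MeV gives a lifetime of ~1.6e-22 s,
# consistent with the figure quoted above.
tau = mean_lifetime(4.07)
```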
  11. ! Moderator Note Please quote that part then, like this: https://simple.wikipedia.org/wiki/Big_Bang ! Moderator Note It's not, though, and people have pointed that out. And you've ignored them. So they asked you for a link and a quote where the claim is made, and you ignored that too. Nobody is interested in "from what I've been told". Please ask questions about specific things you've read and can link to. This will not only make it easier to correct your misunderstandings, it will remove you as your own biggest obstacle to the learning process. You have a tendency to confuse yourself by not learning one thing correctly before proceeding to the next.
    1 point
  12. Let me put my understanding of the situation to you. Two things seem given: First, the deformation on the cylinder suggests contact between the side of the dome and an obstacle. Second, the almost circular shape of the crater requires a vertical impact or explosion. At 30-50 m/s the cylinder moves relatively fast. Anything that hinders its path in this movement but does not deflect it can only cause longitudinal scratches, but no lateral imprints. If the cylinder actually fell, it was stopped by the balcony, because it did not fall into the lower level. The only way to get from that vertical standstill to a lateral contact is to tip over onto the grid. In this case the force wouldn't be sufficient. Furthermore, the grid would have to be next to the crater, with the cylinder on top of it. A bounce or a lateral contact against the wall can be excluded, because such a contact would have left considerable, visible damage. Except for the fragment patterns from any explosion, however, no damage can be attributed to the cylinder. Furthermore, the bouncing would have to have taken place after the formation of the circular crater, because (as we know from the hole at location 4) this bounce can only happen to a very small extent, and the energy after bouncing could neither create such a crater nor damage a bed. Even if enough energy remained, the laterally bouncing cylinder would certainly not create a circular hole of double its diameter without more extensive damage to the surrounding walls first. Therefore, the assumption that these scratches were created as lateral bumps on the way between the roof corner and the crater (or even after the creation of the crater) is akin to squaring the circle.
    1 point
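For a sense of scale, the energy at the quoted speeds follows from KE = ½mv². A minimal Python sketch; the 100 kg mass is purely an assumption for illustration, since the post gives no mass:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s**2

# At the quoted 30-50 m/s, an assumed 100 kg cylinder carries 45 kJ to 125 kJ,
# which is why a lateral bounce would be expected to leave visible damage.
ke_low = kinetic_energy(100, 30)   # 45000.0 J
ke_high = kinetic_energy(100, 50)  # 125000.0 J
```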
  13. McConnell don't care #MerrickGarland
    1 point
  14. Okay, I believe you. What now? I tried to tell them something like this, and they down-voted me into oblivion when I first started on this forum. For some reason, I have lost my right to vote. I expect to get randomly banned for no real justifiable reason sometime soon over it. Do you know how they were able to distinguish between a photon and an electron in the experiment? That also seemed to be a fundamental issue which is in dispute on these forums, not to mention the fundamental theorem of calculus... It may just be because Strange is a Chinese-speaking person from Italy now. Do you believe that electrons can act more like photons or waves if they are not observed? They also tell me that all electronic signals are made of light, because electrons are too massive to behave like waves. What kind of light could this hypothesis even shed on this issue?
    -1 points
  15. You answered but have not said why I am wrong. Also, the light from the void has reached us; that means the void has also, or is about to. Also, this supervoid is showing there is something now, right? I am trying to understand this whole post but keep getting different answers.
    -1 points
  16. This is my understanding, and I am not being corrected: there are no other cold spots in the universe such as this one. A cold spot is an area in the universe different from the regular areas; for some reason that area is colder than anywhere else, which does not coincide with the standard model. Also, as I've been stating, there's a lot going on in the universe, but no one seems to want to state it.
    -1 points
  17. The first problem: Quantum theory is incompatible with fundamental relativistic symmetry. In quantum theory, you have a configuration q of the system you consider, and the main object is the wave function \(\psi(q)\). It changes in absolute time: \(\psi(q,t)\). The configuration is something global. For N particles, it consists of the 3N coordinates of these particles. So, this gives a wave function \(\psi(x_1,y_1,z_1,\dots,x_N,y_N,z_N,t)\). There is no natural way to define a Lorentz transformation. (For special-relativistic field theory, ways have been found to circumvent this, but an essential part of this is the decision simply not to talk about anything which does not follow relativistic symmetry. All the conceptual problems, especially those related to the violation of the Bell inequalities, remain problematic in this theory too, but one simply does not argue about it, creating the (surprisingly quite successful) impression that there are no problems with this.) In GR, this becomes much worse. One cannot really write down a reasonable quantum theory without specifying some time coordinate. Naively, one would hope to show that this can be done in a way that does not depend on the choice of the time coordinate. This fails, and the resulting problem is named the "problem of time" in QG. The second problem is that GR is not renormalizable. This was initially thought to be fatal, but today we know that it is not a big problem at all, if one accepts that it is an effective field theory, that is, a theory which is only a large-distance approximation of some yet unknown different theory. Unfortunately, for those who like the spacetime interpretation, this is hard to accept. To replace GR below some critical distance (say, the Planck length) by some different theory without any infinities (this is called regularization) leads to theories which violate relativistic symmetry.
Of course, it would be quite natural to assume that a more fundamental theory has a different symmetry than its large distance approximation. But for most proponents of fundamental relativity this is simply unacceptable.
    -1 points
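The "absolute time" point above can be made explicit with the many-particle Schrödinger equation, which carries a single global time parameter for all N particles:

\[ i\hbar\,\partial_t\,\psi(x_1,y_1,z_1,\dots,x_N,y_N,z_N,t) = \hat{H}\,\psi \]

A Lorentz boost with velocity v along x maps each particle's coordinates as \( (t, x_k) \mapsto \big(\gamma(t - v x_k/c^2),\; \gamma(x_k - v t)\big) \), so each particle would acquire its own transformed time, yet \(\psi\) contains only the one parameter t. This is why there is no natural action of a Lorentz transformation on such a wave function.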