Everything posted by Dubbelosix

  1. Dark matter is only one parameter physicists have used to change the standard model. If you have no complaints, will you tell me exactly what your earlier posts were about, before I continue? By the way, anything that predicts something not currently within the standard model is beyond it - let's be clear about that as well. When a theory has additional features it does not strictly require, it needs to somehow accommodate them and give a reason why they happen. That doesn't mean, though (and I do understand why you are struggling with me), that this always involves ''breaking'' any fundamental parameter. Sometimes we find very strange artefacts in our science that were never postulated to exist within the first principles of the standard model. These divergences from the usual model, though, are beyond-standard-model physics by strict definition. This is not surprising; from the few sources I have read, physicists have expected it quite a few times. It means additional parameters have to be included overall, though as Swansont noted, these are often called error bars. The errors can be overwhelming when you look at the entire model of physics - this is why physicists get frustrated and call the standard model ''the theory of nearly everything.''
  2. When I say discrepancies, you can take it as a statement against the claim that the standard model is all there is. No offence, but the nature of your posts defending against my point - that there have been many divergences since the original standard model - is testament to that fact. You seem to think there haven't been changes since it was first proposed... do you know how many years ago that was?
  3. Dark matter is added into the effective density parameter as a correction, but it is still not of the same magnitude as the actual density required for flat spacetime. That doesn't seem consistent with reality. Dark matter is an excessive component, especially in light of recent experimental results. Things are just not adding up; I am sorry if you don't like my opinion or don't rate it very much.
  4. You won't find anything inconsistent with the standard model - that isn't what this is about, and you won't find anything violating the laws of physics. The standard model isn't about that; it was a very early model, or sketch if you like, from which we have continued to find divergences.
  5. The issue is more complicated now. We have found at least half of the missing matter, and I see this as an additional problem for dark matter, even if one considers that the extra amount will not account for the acceleration curves. If you take dark matter as we are supposed to understand it and then add another half of its total factor, we have a silly picture of reality. I liken the idea of dark matter to the spawn of the discrepancies that seem inherent in the fundamental equations - the Friedmann equation is a good example, in which the measured density is many orders of magnitude away from satisfying a flat spacetime. So either:
1) the universe as we understand it is not truly flat,
2) our mathematical theory is wrong, or
3) all of the above.
Any other suggestions? Because even as a theory, dark matter doesn't seem consistent; it seems like a superfluous addition to an otherwise strange phenomenon whose sources can be sought in other, more local dynamics. I am possibly most inclined to believe option one. This is what Susskind seems to believe - he thinks that over time, we will gradually measure a curve. But if we take the Friedmann equation seriously, the curve should be more significant, so it is possibly a combination of our understanding of reality and the mathematical model we describe it with. (A rough numerical sketch of the density comparison follows below.)
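A minimal numerical sketch of the Friedmann density comparison referred to above - my own illustration, not from the original post; the Hubble constant and the baryon fraction are assumed fiducial values:

import math

G  = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1e3 / 3.086e22   # assumed Hubble constant: 70 km/s/Mpc, in 1/s

# Friedmann critical density for a flat universe: rho_c = 3 H^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")    # ~9e-27 kg/m^3

# assumed baryon fraction Omega_b ~ 0.05: ordinary matter alone
rho_baryon = 0.05 * rho_crit
print(f"baryonic density ~ {rho_baryon:.1e} kg/m^3")  # ~5e-28 kg/m^3

Under these assumptions, ordinary matter alone falls well short of the critical density for flatness; dark matter (and dark energy) are the parameters conventionally inserted to close that gap, which is the kind of patching the post objects to.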
  6. It isn't. At least, it was never considered to be; it was never needed. You see, the standard model is a sketch. If you think we get it right all the time from the word go, that would be remarkable, to me.
  7. It tells most of us interested in falsifiable theories quite a lot.
  8. Logical means applying thought and reasoning in a way consistent with how we understand nature. What is our best guess?
1) Inflation appears to be challenged by the best scientists in the field, only because it leads to unfalsifiable theories, like eternal inflation, which involves a concept of multiverses.
2) Nature appears to favour the ground state when it can; in relativity, this transposes to systems taking the most efficient paths through spacetime, which may be a curve in space. Ultimately, reality tends to follow the least action principle.
What does this say about logical consistency concerning the theory of parallel universes, and the possibilities-into-realities that Hoyle was so troubled about?
  9. Sure... I'll provide one, and we'll keep going... You will need to give me time - it's a long story, and the links I once had are no longer accessible. But I know of a few. One relatively recent example was the discovery of the pentaquark, which was a deviation from the standard model, but was predicted in beyond-standard-model theories.
  10. An argument from logical sentiment wins over most action theories. There is no reason in nature to think the universe obeys those chaotic creation statistics in an evolving, self-contained universe. It's nonsense.
  11. Yes, this is now among the many discrepancies of the standard model. There have been so many in the last seven years that I cannot recall them all. We have deviated some way from the standard model; this just hasn't been publicly realised yet, while scientists in the background are aware of it.
  12. My friend Matti wrote recently that the multiverse was one of those obviously wrong ideas concerning the nature of reality - I agreed. I knew from early on that the idea of parallel universes was actually crazy, but what surprises me is how many great scientists are still duped by this... theory which fell apart almost as soon as it was developed (according to Matti), but I agree with this statement, since Hoyle showed the nonsense it was comprised of from a simple analogy: flipping a coin 100 times, you would create [math]10^{30}[/math] alternative universes in the process - no wonder Adams said ''in the beginning, the big bang happened, and this made many people angry.'' He wasn't kidding... flip it a few hundred more times and you will create more universes than there are particles in the observable universe, roughly [math]3 \times 10^{80}[/math]. The factor of three arises from the three spatial dimensions (or degrees of freedom of the number). (A quick check of the branch-counting arithmetic is sketched below.) Well, at least you are partially correct. I am by no means Einstein, and even by those standards I am still a drop-out. But I get the impression there are smart people here who do know physics and are willing to learn and give information themselves, by which we learn in the process. Mordred is a good example; I have learned quite a bit from studying the papers that have been suggested. But being educated in physics is still not a reason to suffer ill-judged wisdom, and seeing scientists do it is a... metaphorical, ethical-killing crime.
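A quick check of the branch-counting arithmetic above - my own illustration; the [math]3 \times 10^{80}[/math] particle count is taken from the post:

import math

branches = 2 ** 100
print(f"{branches:.2e}")   # ~1.27e30 branches from 100 binary coin flips

# flips needed before the naive branch count exceeds ~3e80 particles:
flips = math.ceil(math.log2(3e80))
print(flips)               # 268 flips

So 100 flips gives roughly the [math]10^{30}[/math] quoted, and by about 268 flips the naive branch count already exceeds the particle count of the observable universe.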
  13. Maybe it comes down to my definition of strong evidence - after the study of three galaxies that have lost their supermassive black holes, the evidence, in my opinion, strongly suggests the energy of the supermassive black hole effectively plays the same role as the energy required to bind a typical spiral galaxy together. As for the rotary universe model, I have not stated anything about it in this post and haven't found it relevant to the discussion of black holes and their energy as associated with the binding energy of galaxies. Personally speaking, such effects should not be such a surprise, and without a proper formulation of black hole physics (evidence already suggests we don't have our theory right on them) we are literally talking about unknown objects - but the suggestion I have made is not without reason, and I have provided evidence, albeit we may disagree on the usage of the word ''strong.'' Now... this model hasn't been tried and tested, Mordred, let's be clear. The evidence that the dark matter disappeared early on is quite new. I found a link with supermassive black holes, studied the possibilities and came up with the model to explain the discrepancy. I have done a good study; I think I really was the first to suggest that the dark matter problem of where it went 10 billion years ago was actually related to black hole size. The evidence was strong enough that Matti, a physicist in his own right, suggested it strongly favours a reformulation of black hole dynamics within relativity. Also, I found an age correlation between two galaxies - the one that lost its supermassive black hole earlier was in fact a looser galaxy than the previous example. That, to me, couldn't be ignored. It is evidence too - and it consistently stands up (so far). I want other people to do their own investigations and not just take what I say as the truth. There may be cases in nature which will come to trouble this model - I welcome that, for it makes the model falsifiable. I also ask that anyone who has the means to test the theory does so. Can I be clear about one thing, though - I just read back: if the full amount of missing matter is found, then is there no need for my model? See how this turns around? Finding half the missing matter is interesting, because it starts to fit more into a reasonably expected density distribution over spacetime, but(!) if we find all the matter, will it explain the rotation curve phenomenon? I hope it will not. As you said, finding half the missing matter doesn't explain rotation curves - however, we may come to map the universe more accurately and find all sorts of parameters not fitting together. I know a good example I have written about more extensively elsewhere: the Friedmann equation. The density requirements of that equation do not fit flat spacetime. Finding half the missing matter but still expecting dark matter to hold also poses problems. So maybe this will convince some of you, hopefully, of the thoughts and reasoning that led me to the conclusions I have reached. Dark matter is simply a parameter that is not holding up.
  14. Simple: we start looking back 10 billion years or so, to when rotation curves vanish from the universe. This is an indication that black holes may not have been massive enough. This was one of the first things I pointed out, because if black holes and dark matter are tied together the way this model suggests, and if dark matter cannot be detected 10 billion years ago, then the model is testable by measuring only the earliest galaxies and checking whether the correlations hold up. So far, there appears to be evidence this could be the case. Some of these questions have to be approached carefully; you ask how big the black holes need to be, but surely you don't expect me to answer this? Science isn't always about exact answers - it is often about investigation, correlations and a matter of deduction. When relativity treats the black hole phenomenon correctly, then maybe someone can make those calculations; I just think it's a bit premature. Right now we should work with the evidence we have. Preliminary guesswork, however, has shown that most typical spiral galaxies harbour a black hole with an energy more or less equal to the binding energy of the galaxy (a rough order-of-magnitude check is sketched below). The central core will either flatten or expand due to centrifugal forces. Black holes hold the galaxy structure together; for this, I found strong evidence.
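A minimal order-of-magnitude sketch of the energy comparison claimed above - the black hole mass, galaxy mass and radius below are assumed fiducial values chosen for illustration, not figures from the post:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

# assumed fiducial values (illustrative only):
M_bh  = 4e6 * M_sun    # Sgr A*-like supermassive black hole
M_gal = 1e12 * M_sun   # spiral galaxy mass including halo
R_gal = 5e20           # characteristic radius, ~16 kpc, in metres

E_bh   = M_bh * c**2           # black hole rest energy
U_bind = G * M_gal**2 / R_gal  # crude gravitational binding energy

print(f"E_bh   ~ {E_bh:.1e} J")    # ~7e53 J
print(f"U_bind ~ {U_bind:.1e} J")  # ~5e53 J

With these particular numbers the two energies land within a factor of a few of each other, which is the kind of coincidence the post points to; different fiducial choices would shift the result considerably.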
  15. Related to these discussions: at least half the missing matter in the universe has been found. Again I say, no need for dark matter - rotation curves can be explained by supermassive black hole phenomena. https://www.newscientist.com/article/2149742-half-the-universes-missing-matter-has-just-been-finally-found/
  16. A summary elsewhere, and duly noted Mordred, certainly for future discussions. http://www.physicsgre.com/viewtopic.php?f=10&t=127412
  17. Ok, so hopefully I have collected a reasonable amount of information now. I am looking for a nice simple theory - we will look at the crucial expectation value of the theory and see how it implements into information theory.

Assuming (from previous work) that we are working in a phase space with some observable system, the probability density [math]\rho[/math] maximizes the relative entropy

[math]S = -\int \rho \ln \frac{\rho}{\rho_0}\ dx[/math]

subject to the normalization constraint

[math]\int \rho\ dx = 1[/math]

Entropy can be measured by

[math]S = - \int \psi \bar{\psi}\ \ln \psi \bar{\psi}\ dx[/math]

where

[math]P = \int_U \psi \bar{\psi}\ dx[/math]

is the Max Born probability of finding a particle in a domain [math]U[/math]. Obviously the probability density is [math]\rho = \psi \bar{\psi}[/math]; this is different but equivalent notation to the square value [math]\psi\psi^{\dagger}[/math].

Let's consider an argument for a single-particle wave function collapse, with [math]q_i = \frac{1}{n}[/math]:

[math]-\sum^{n}_{i=1} p_i\ \log q_i = \sum^{n}_{i = 1} p_i\ \log n = \log n[/math]

which is the entropy of the uniform distribution [math]q[/math]. The inequality [math]h(p) \leq h(q)[/math] holds, with equality iff [math]p[/math] is uniform. This is interesting, because you can argue that even in the absence of other particle dynamics, a single wave function could be capable of collapsing under its own gravitational weight, by assuming there is an analogue of the centre of mass for a wave function, which can be interpreted as fluctuating around the absolute square value of its wave function.

The relative entropy is a distance measure between probability distributions. The entropy equations can be a little confusing, so it is best to write them out in full. For two probability distributions, this time denoted [math]p[/math] and [math]q[/math], it is given as (see references)

[math]D(p|q) = \sum_l p_l\ \log_2(\frac{p_l}{q_l}) = \sum_l p_l(\log_2 p_l - \log_2 q_l)[/math]

which is basically the difference of the information gain between the two distributions (under observation) - this must translate exactly into probability distributions which satisfy [math]\rho = |\psi|^2[/math] (the Born rule). Just for future reference, the relative entropy is also known as the Kullback-Leibler divergence.

In this next case, we treat the geometry like an observable, and the expectation value can be related to the density matrix [math]\rho[/math]:

[math]Tr(R_{ij} \rho) = Tr(R_{ij} \sum_i p_i|\psi_i><\psi_i|) = \sum_i p_i Tr(R_{ij}|\psi_i><\psi_i|) = \sum_i p_i Tr(<\psi_i| R_{ij}| \psi_i>) = \sum_i p_i <\psi_i|R_{ij}|\psi_i>[/math]

which retrieves the expectation value of the geometry of the system for any ensemble of states [math]\rho[/math] - something we investigated before, but this time in the context of probability. This is the best and most simple way of investigating the expectation value in our theory (remember, we also briefly looked into how to vary the wave functions using a variational principle). This solution was specifically found within the last reference.

It cannot be stressed enough, though: if we want entanglement in this theory, we must note that unlike the classical conditional entropy [math]S(a|b) = S(a,b) - S(b)[/math], which always remains positive, the quantum mechanical equivalent [math]S(\rho_A|\rho_B) = S(\rho_{AB}) - S(\rho_B)[/math] need not be. The state is entangled if [math]S(\rho_A|\rho_B) < 0[/math].
Very simple and nice, where the subject of entanglement can often get quite complicated. I take it as a matter of principle to find theories that are, at their core, as simple as they can be. Some important things to note here: [math]S(\rho_A \otimes \rho_B) = S(\rho_A) + S(\rho_B)[/math], and states of the form [math]\rho_{AB} = \sum_i p_i\ \rho^i_{A} \otimes \rho^i_B[/math] are always separable. So if [math]S(\rho_A|\rho_B) = S(\rho_A)[/math], it is a separable state with no correlations. If instead we have [math]0 < S(\rho_A|\rho_B) < S(\rho_A)[/math], it is said to have ''classical correlations.'' And if [math]S(\rho_A|\rho_B) < 0[/math], then the correlation is quantum. Again, please check the references to see this in the literature. (A small numerical check of this criterion on a Bell state is sketched after this post.)

To finally sum up, it seems we have jigsaw pieces again to try and fit together. We identify the previous relationship of the probability to the geometry as

[math]Tr(R_{ij} \rho) = Tr(R_{ij} \sum_i p_i|\psi_i><\psi_i|) = \sum_i p_i Tr(R_{ij}|\psi_i><\psi_i|) = \sum_i p_i Tr(<\psi_i| R_{ij}| \psi_i>) = \sum_i p_i <\psi_i|R_{ij}|\psi_i>[/math]

We also have, from previous formulations, that twice the deviation of the mean of the curvature, as applied to a spacetime uncertainty, was related to the expectation value as

[math]<\psi|[\nabla_i, \nabla_j]|\psi>\ dx^3 = <\psi|R_{ij}|\psi>\ dx^3 \leq 2 \sqrt{|<\nabla^2_i> <\nabla^2_j>|}[/math]

Again, identifying the spacetime subscripts [math](i,j)[/math] as systems related to their own entropy, the previous equation will satisfy the classical entropy upper bound

[math]S(A:B) \leq 2 min[S(A),S(B)][/math]

The difference in information gain will also be related to the difference of geometries which we derived a while ago.

References:
http://www.math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf
http://www.tcm.phy.cam.ac.uk/~sea31/tiqit_complete_notes.pdf
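As a sanity check of the sign criterion above, here is a minimal numerical sketch - my own illustration, not part of the original post; it assumes numpy and base-2 logarithms - computing [math]S(\rho_A|\rho_B)[/math] for a maximally entangled Bell state:

import numpy as np

def entropy(rho):
    # von Neumann entropy S(rho) = -Tr(rho log2 rho), via eigenvalues
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]            # discard numerical zeros
    return -np.sum(ev * np.log2(ev))

# Bell state |psi> = (|00> + |11>)/sqrt(2), so rho_AB = |psi><psi|
psi = np.zeros(4); psi[0] = psi[3] = 1/np.sqrt(2)
rho_AB = np.outer(psi, psi)

# partial trace over subsystem A gives rho_B
rho_B = np.einsum('abac->bc', rho_AB.reshape(2, 2, 2, 2))

S_cond = entropy(rho_AB) - entropy(rho_B)   # S(rho_A|rho_B)
print(S_cond)   # -1.0 < 0, so the state is flagged as entangled

The pure Bell state has [math]S(\rho_{AB}) = 0[/math] while the reduced state is maximally mixed with [math]S(\rho_B) = 1[/math] bit, so the conditional entropy is negative, exactly as the criterion requires.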
  18. I think I know how to meld it all together now, but once again it is late, so I will put pen to paper tomorrow.
  19. I'm not normally interested in unfalsifiable ideas.
  20. Though the density matrix will play a role in a bipartite system as well, as [math]\rho_{AB} = |\psi_{AB}><\psi_{AB}|[/math]. Good night Mordred, speak tomorrow - I will be off to bed soon.
  21. Sorry, in what sense? It's late here; brain not working too quickly now.
  22. That's the density matrix, usually written as [math]\rho = \sum_i n_i |i><i|[/math]. I've not really encountered it much within the theory. Don't get me wrong, that density matrix is vital for the von Neumann or Shannon entropy.
  23. Oh, I know what it means, but people make big work out of the unitary groups and it's not something I follow deeply. I don't know where my gravity theory is heading, only that I want to see the final result, see where it all fits, and whether it is truly feasible.
  24. [math]\sqrt{|<\nabla^2_i><\nabla^2_j>|} \geq \frac{1}{2}<\psi|[\nabla_i, \nabla_j] |\psi>\ d^3x = \frac{1}{2} <\psi| R_{ij}| \psi>\ d^3x[/math]

Let's swap it around:

[math]<\psi|[\nabla_i, \nabla_j] |\psi>\ d^3x = <\psi| R_{ij}| \psi>\ d^3x \leq 2 \sqrt{|<\nabla^2_i><\nabla^2_j>|}[/math]

We may multiply through by that factor of 2, and what we end up doing is viewing the interpretation still as a mean deviation, but one that can reach twice the classical upper bound, once again. This obviously has a formal similarity to

[math]S(A:B) \leq 2 min[S(A),S(B)][/math]

We can understand from previous equations that [math]S(A:B)[/math] is also [math]S(A) - S(A|B)[/math]. This is identifiable on page 64 of the following work: https://arxiv.org/pdf/1106.1445.pdf (a small numerical check of the bound is sketched below). So tomorrow, and maybe even a few days after, I will be looking more into any possible relationship between the two inequalities, along with investigations into quantum bipartite Shannon entropy, just as the von Neumann conditional entropy allows. I might leave the special unitary grouping to those more familiar with it.
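Continuing the numerical theme from post 17, here is a minimal sketch - again my own illustration under the same assumptions (numpy, base-2 logarithms, a Bell state) - showing the quantum mutual information saturating the [math]S(A:B) \leq 2 min[S(A),S(B)][/math] bound:

import numpy as np

def entropy(rho):
    # von Neumann entropy via eigenvalues, in bits
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

psi = np.zeros(4); psi[0] = psi[3] = 1/np.sqrt(2)   # Bell state
rho_AB = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', rho_AB)   # trace out B
rho_B = np.einsum('abac->bc', rho_AB)   # trace out A

S_A, S_B = entropy(rho_A), entropy(rho_B)
S_AB = entropy(rho_AB.reshape(4, 4))
I_AB = S_A + S_B - S_AB                  # S(A:B), also S(A) - S(A|B)

print(I_AB)                  # 2.0 bits
print(2 * min(S_A, S_B))     # bound = 2.0, saturated by the Bell state

For the Bell state [math]S(A|B) = -1[/math], so [math]S(A) - S(A|B) = 1 - (-1) = 2[/math], agreeing with the direct computation of the mutual information.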