Everything posted by fredrik

  1. I don't understand what you mean here. How do you envision that entanglement reduces the uncertainty that's due to incompatible observables? I'd expect a prescription defined from the point of view of an incompletely informed observer. One difficulty in thinking about this is to not use information we don't really have in possession in order to explain something. Sometimes we have incomplete information about something, and this can be used for systematic reasoning, if we at the same time can measure the confidence in the premises. Then our conclusion can be given an estimated probability of being right. It's easy to lose track of what is going into the inductions, and one ends up with a possible conclusion without any accompanying measure of plausibility whatsoever. For example, in many decoherence treatments there are two views: the inside view and the global view. The global view can explain the constrained view, given that we assign a border and reduce information. But that is answering a question that wasn't the original one. The global view, having information about the entire "universe", really isn't a physical view, as it does not represent a true inside observer. Decoherence is interesting, but it does not provide the complete answer. It rather makes up another question, which it then answers. But that other question is not a physical one, in the inside-view sense. That's how I see that. /Fredrik

In one perspective, a probability distribution in the first place is an expression of uncertainty. I recall some philosophical paper where someone reflected over what a question is and made an interesting abstract association between "a question" and a "probability distribution" (PD). A PD can be taken as a well defined meaning of a question: the indexing defines the possibilities, and the weight at each index gives the odds. So the implicit question - what event will we observe next? - is complemented by an _expectation_ of the answer. The expectation contains the SET of possibilities, and a weight for each possibility. Now if you actually get an answer, in general this SET and this set of weights are updated. So the result of answering a question is a new question. This is how I see QM (a minimal sketch of this kind of updating follows after this post).

An objection to this is that the notion of "information" is unclear. We are talking about physical reality here, so it's a justified question to ask what the correspondence is between "information", "questions" and "answers" in terms of something physical. The way I see it, the physical microstructure of an observer itself encodes the prior information, which also contains all current "questions". So the PDs IMO have a physical correspondence in the observer's microstate. This also suggests that the abstract "game" of questions and answers really corresponds to the observer's microstructure physically interacting with the environment. So the collapse of the wavefunction IMO corresponds to a physical change of the observer's internal microstate. We call it names like perturbations, excitations etc. One can also classify these perturbations as rotations, translations, and other types of excitations. One interesting thing here is to distinguish between microstate and microstructure. The microstructure defines the degrees of freedom and their inter-relations. The microstate OTOH is simply a state of a given microstructure. But in the above reasoning it's clear that in principle BOTH the microstate and the microstructure may change!
Changing of microstates is IMO a simpler type of information exchange; evolving the microstructures is more sophisticated and IMO may not always be unitary in the traditional sense. Of course unitarity and conservation of information are an ideal of modern physics, but I think we should be open to question everything in a scientific spirit, because once we speak of relational information, it's not trivial to define WHICH information is conserved. Information always refers to a real inside observer, and the notion of information is DEFINED relative to the "inside measurements" this observer makes. But here my understanding of physics is incomplete. I think many feel the smell, but we're all trying to see where it comes from. /Fredrik
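To make the "answering a question yields a new question" picture above concrete, here is a minimal toy sketch of my own: a discrete distribution over possibilities is treated as the "question", and an incoming "answer" (an observation that only partially discriminates the alternatives) updates it by Bayes' rule into a new distribution, i.e. a new question. The possibility labels and the likelihood table are of course just invented example numbers, not anything derived from QM.

[code]
# Toy illustration: a "question" as a probability distribution over possibilities,
# updated into a new "question" when a (noisy, partial) answer arrives.
possibilities = ["A", "B", "C"]          # the SET of distinguishable alternatives
prior = {"A": 0.5, "B": 0.3, "C": 0.2}   # the weights: our current expectation

# Invented likelihoods: probability of seeing the observed answer given each alternative.
likelihood = {"A": 0.1, "B": 0.7, "C": 0.4}

# Bayes' rule: posterior ~ prior * likelihood, renormalised.
unnormalised = {k: prior[k] * likelihood[k] for k in possibilities}
norm = sum(unnormalised.values())
posterior = {k: v / norm for k, v in unnormalised.items()}

print(posterior)   # the new "question": same set of possibilities, updated weights
[/code]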
  2. If you are referring to the Heisenberg uncertainty principle, say between x and p, and wonder if this uncertainty could be due to missing initial information, and due to an unknown causal chaotic relation, then I personally think this path alone is unlikely to be fruitful. The reason for the uncertainty is not unknown interactions; it's that there exists a relation between x and p - they are not really independent (see the small numerical illustration after this post). In general, if you are given two variables, there is no a priori reason to assume anything about their relations or non-relations; all assumptions should IMO be traced to the path of observations. The notion of adding information about "x and p" does not follow the normal rules of statistics and classical probability. Exactly why this is so may be analysed in different ways. Normally it follows from the operator definitions in QM, or from postulates, depending on how you lay out the theory and the axioms. But of course this is unsatisfactory and clearly a cheap way out, because if it's just a matter of postulates and definitions, the question still remains why this fits so well with observations, and that makes the scientific process itself the interesting part! One can still question the process that leads to this CHOICE of definitions. I guess until it's all understood, there is no need to close any doors.

But I ask myself questions like: given that we measure only position, how can the measurement of momentum be understood to emerge as the result of a dynamical process? Ie. can new "operators" be understood to be a result of unpredictable interactions involving prior operators? Then all measurement processes are related. This is the relation system I wish to find. This is in line with the intrinsic thinking popular from GR - when new phenomena appear, I'd expect them to be constructible from information at hand, and most probably as deviations/exceptions from the current expectations, rather than pulled out of the sky. The question is whether an answer arises spontaneously to a question that you never asked, or whether it first goes via exceptions from expectations based on current questions, where the deviations then induce a new question. I imagine that this is how measurement operators may appear, and perhaps an analysis of this process explains their relations from more first principles, so that we can reduce the number of free constants, postulates and definitions. /Fredrik

When I think of this, it's of course possible to view this "relation" as a type of interaction as well. I guess it depends on how you see it. Words are easily wrapped around. Perhaps, to avoid bending words, it would be more interesting if you could expand on what strategy your idea would use. How would this extrapolation be implemented physically, and how are you to get information about these exponential repercussions? /Fredrik
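As a numerical illustration of the point that x and p are not independent, here is a standard textbook-style check (with hbar set to 1 and a Gaussian wave packet as an assumed example, the width being an arbitrary choice): computing the spread in x from |psi(x)|^2 and the spread in p from the Fourier transform of psi shows that their product sits at the Heisenberg bound of 1/2.

[code]
import numpy as np

# Gaussian wave packet on a grid (hbar = 1); the width 'a' is an arbitrary example choice.
N, L, a = 4096, 200.0, 3.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * a**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalise

# Position uncertainty from |psi(x)|^2
px = np.abs(psi)**2
mean_x = np.sum(x * px) * dx
dx_unc = np.sqrt(np.sum((x - mean_x)**2 * px) * dx)

# Momentum-space wavefunction via FFT, momentum uncertainty from |phi(p)|^2
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dp = p[1] - p[0]
pp = np.abs(phi)**2
pp /= np.sum(pp) * dp
mean_p = np.sum(p * pp) * dp
dp_unc = np.sqrt(np.sum((p - mean_p)**2 * pp) * dp)

print(dx_unc * dp_unc)   # ~0.5: the minimum allowed by the uncertainty relation
[/code]

No amount of extra knowledge about the initial conditions changes this number; it comes from the Fourier relation between the two descriptions, not from ignorance of some hidden interaction.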
  3. In my personal view at least, as per an epistemological view (which I consider to be the best setting to understand QM), the notion of classical causation can not be defined. What we observe are correlations. And while the plausibility of the existence of some speculated "causal relation" increases as we consistently keep observing that there are correlations, there is never a hard implication between correlation and causation. Even if we don't KNOW, it certainly seems like the best possible bet to place our bets as per the "causation" plausibly "suggested" by our observed and retained history. So IMO, "causation x->y" as a law is in the first place emergent from our observational history of correlations. So there is no a priori reason why correlations always suggest a consistent causation. And even when they do, there is a limited confidence in this causation, which we might try to measure by the amount of evidence we have suggesting its plausibility, as per bayesian reasoning (a small sketch of this kind of evidence bookkeeping follows after this post). My suggestion is to start from the logical end. Why do we expect causality in the first place? This is in fact asking for the causality between information and expectation. Which is exactly what's going on in QM. /Fredrik

In classical mechanics causation tells us what happens given the initial state.

Initial Condition -(Classical.LAW)-> Prediction of Observations

Classically the causation itself is assumed to be objective and certain, and the only uncertainty lies in the initial conditions. An improved analysis of that might question the notion of causation. It is obvious that even if we think the causation is certain, we still have to discover it, as we discover the laws of nature. And an analysis of this process of discovery suggests that there is always an uncertainty in the causation itself. The creation of causality expectations lies at the level:

Experience of observations -> Processing of records -> Expectation of probable correlations

IMO, this added level of analysis is a first conceptual step to appreciating QM. It also reveals that the expectations are dynamical things.

Information about possible initial conditions -(Quantum.LAW)-> Expectation of possible Observations

However, the remaining causal determinism in QM is the one between initial information and expectations. This means that QM says that given specific initial information, there is an "optimum betting". This is what QM models. The question is what optimum betting really means. I like to think of it as the "most probable" betting, taken over the set of all possible betting algorithms. But the natural expectation out of this is that the betting algorithms themselves evolve, and thus that the determinism left in QM is subject to criticism analogous to that directed at classical mechanics. I personally consider the resolution of this to be that the "betting computations" are manifested by the observer's microstructure. Evolving the betting algorithms then means evolution of the observer, and the criterion for evolving is clearly the observer's self-preservation. At this point one really also questions the observer! And one might ask things like: what is the simplest possible observer? Which amounts to asking for "elementary particles", because these guys are indeed the simplest possible, but nontrivial (distinguishable), observers! I think the conceptual view this presents is very attractive and plausible. But the exact meaning of this, and its realisation in the mathematical formalism of QM, is still not understood.
( About the complex amplitude and QM superposition, my current working hypothesis is that the key to understanding this better is in the step "processing of records". I've been thinking about this for a while but still await progress. Analysis of the "processing of records" under the constraint of limited information capacity, and also relatively speaking (in time) since time is just relational change, may suggest that there is a natural "optimisation problem" whose best (most probable) solution is the superposition. This process takes time, and would be related to decoherence and coherence (multidirectional). But I'm still thinking about that and I'm inconclusive. ) /Fredrik
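As a small illustration of "measuring the plausibility of a suspected causal law by the amount of evidence", here is a sketch using a Beta-Bernoulli update. This is a standard Bayesian textbook device that I am choosing as an example, not anything specific to QM: each observed trial where y follows x adds evidence, and the spread of the posterior shows that the confidence in the "law" is always finite.

[code]
# Toy Bayesian bookkeeping: confidence in "y follows x" from counted observations.
# Beta(a, b) posterior after some successes and failures, starting from Beta(1, 1).
def beta_posterior(successes, failures, a0=1.0, b0=1.0):
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)
    var = (a * b) / ((a + b)**2 * (a + b + 1))
    return mean, var**0.5

for n in (10, 100, 10000):
    m, s = beta_posterior(successes=n, failures=0)
    print(n, round(m, 4), round(s, 4))
# The estimated plausibility approaches 1, but the uncertainty never reaches zero:
# the inferred "causation" always carries only a finite confidence.
[/code]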
  4. Perhaps. I don't read that magazine, except possibly once or twice in the dentist's waiting room. My general experience is that the colourful and "designed" magazines aren't the best place to get good info, except if you want to see nice pictures or illustrations. But I shouldn't say too much about that specific magazine because I never read it. As for science stuff, I personally mainly buy standard textbooks. On rare occasions I've paid for online articles. I have no problem in principle with unknown authors or non-mainstream ideas, but it's still up to my own judgement to decide what information I want to process. There is a sea of information out there, and one can easily drown in it unless one can rate what to consume and what not to. I'd say, at best, that website does a poor job of selling me whatever answers he is providing. /Fredrik
  5. Perhaps a first question is what kind of magazines you are spending time and money on that give you ideas on what to spend further money on? /Fredrik
  6. Never heard of it. And checking that website definitely does not give me a sufficient reason to motivate a purchase. /Fredrik
  7. I don't know what he means in the context of GR. But to give another comment: I find a lot of the geometrical thinking unsatisfactory when you try to recast it into an information view and question each construct for its epistemological status relative to a real inside observer. In classical deterministic and objective-reality thinking there are no problems, but the question is: if we consider two - bounded memory - inside observers, how do they inform themselves about this manifold and the other things much of the classical formalism depends on? Is it obvious that they come to the same conclusions? This probably isn't what he meant though, but it's a general issue I see. I would like to reformulate differential geometry itself in terms of information. I think that's what we need. There are many things; classically one imagines that the observer can cover the manifold and do measurements with rods and clocks, but he may not have the memory to store all this information - and that's only one problem. I'm not aware of any mathematical formalisms that realise this desire? Are you? I think this is part of what we might need to understand QG, at least that's how I see it. /Fredrik

Somehow, the way I picture it, the observer's information "forms" something that might be a manifold, but a highly dynamical one, subject to change in response to interactions with the environment. The dynamics on this manifold represents the observer's subjective expectations of his environment. But in that picture the manifold is subjective, not objective. And then one can imagine that a particular observer A can behave differently than what B would suspect by projecting A onto his expectations, and this is because A's internal degrees of freedom are unknown. This unpredictability probably would not be favourable for anyone, so one might expect interactions to exert a selective pressure for finding a state of maximum agreement. This might explain why, today, evolution has come so far that these strange things are mostly negligible. Take Einstein's idea to be that the laws of physics must be found in a form that is the same regardless of the observer. But Einstein considered choosing an observer as choosing a reference frame. What about the observer's microstructure, and other constraining qualities? Ie. maybe the diffeomorphisms don't generate all observers. This seems to me like a very natural direction for a synthesis of GR and QM. But I think we need new math, or at least a newly applied formalism, that unifies information theory (both information storage/compression AND communication theory) with differential geometry. Maybe that's something for you to fix ajb, since I understand you are studying in this direction? /Fredrik

This may sound similar to string theory in the sense that one considers the simplest possible case of a p-manifold being an "inside observer" living in, say, an 11-manifold. But the question I see as physically relevant is how that 11-manifold "looks" from the point of view of a 1-manifold, and to what extent the 11-manifold is "speakable" from the POV of an inside observer. This also connects to the holographic idea that the information contained in a "volume" of space can be represented by information living on the boundary. I associate a generalisation of this boundary with simply the "communication channel" between an observer and its environment.
Clearly anything speakable is constrained by the "capacity" of the communication channel, and the conceptual step from "surface area" to "channel capacity" is clearly not large. Even though I think they are sniffing a lot of interesting stuff, the line of reasoning in string theory (judged by my incomplete understanding) never appealed to me and doesn't seem sound, unless reinterpreted and reformulated to the extent that it almost becomes something else and isn't string theory anymore. /Fredrik
  8. An interesting, and quite current, reflection around the scientific method à la the idea of progress by falsification is whether the scientific method itself is static. Or should the scientific method itself be considered acquired? If we consider the "scientific method" to be, in some sense, the "optimum" method of expanding our knowledge, then how is this optimum method found? One can see that this is exactly the same principal question that takes us from ontology to epistemology. I have a feeling that some scientists don't see the development (and thus questioning) of the scientific method itself as a question relevant to science? At least that's the impression you can get from commonly seen argumentation. Inductive reasoning needs premises and a rule of reasoning. But I think that even the rules of reasoning should be questioned. In its simplest form, evolution can be thought of as a random generation of ideas, and then a selection: toss the bad ones, keep testing the good ones. But at some point the random generation isn't good enough, and more clever rating systems for which theories to test will be favoured. This seems to suggest evolution at all stages... not just evolution of a state of knowledge as per the rules of learning - it should be expected that, in parallel with this, even the rules of learning are evolving! I think there are differing opinions on this among people, and it seems that some of the argumentation in, for example, approaches to quantum gravity is IMO related to this. The rules of reasoning are our backbone and fixed background. Without it we get uncomfortable. This is not too unlike the questions that seem to present themselves in QG. At least this is my personal opinion. /Fredrik
  9. There you go. Philosopher also means "lover of wisdom", so what's up with scientists, what do they love? /Fredrik
  10. I never had any conceptual problem with mixing philosophy and science, and never thought that what are technically "philosophical reflections" are irrelevant to "scientific processes" and the development of science. The really interesting stuff about science IMO is not what we know, it's how to push the limit of our knowledge further. And this process is possibly more complex than its result. One might even suspect that the question "what is science" is not a scientific question, but a philosophical one? So if you want to do "science-only", and not philosophy, perhaps you must not ask what science is? Science ~ knowledge (Latin), and once we start to think about what knowledge is, we are in epistemology, which is usually considered to be philosophy. /Fredrik
  11. It wasn't a stupid question at all IMO, probably one of the better ones. While it may not be so arousing to actually discuss "definitions of words", it sure is important to physically define the terms that are used in reasoning. I don't know what the official committee of definitions says, but these are the reflections I personally make on this: When I think of "law", I think of it as a "tool of reasoning". You have a set of initial conditions and a set of laws. Somehow these laws typically allow you to make predictions, where the "premise" is the initial conditions and your "rules of reasoning" are the laws, and at least classically the laws are fixed and eternal. But does that make sense? In the spirit of modern physics and Niels Bohr's epistemological perspective I would always want to ask: what do we KNOW about these laws? Ie. laws or not, we are incomplete creatures, and whatever the laws "are", the question we can not avoid is what the process of gaining information about these "laws", and thus making them part of our knowledge, looks like. This type of reasoning at least leads me personally to suggest that, from a deeper philosophical science perspective, there is no clear-cut distinction between initial condition and law! Because the law is probably more like a condensation of acquired history, from which we have extracted information about common, repeating and seemingly constant patterns in nature. This we use together with the latest information we have (initial conditions) to GUESS what will happen next. This guess can be seen as an induction, and its accuracy depends not only on the accuracy of the initial conditions, but also on the accuracy of the rules of induction themselves. I think ultimately even the perceived laws may evolve. Alternatively you could choose to say that, well, then the laws were "wrong". But the question is what is more constructive: to look for something that may not exist, or to face our limitations and try to optimise our chances in this game of life. So I like to loosely associate laws with rules of reasoning that are acquired. And these acquired laws are not eternal; they are most probably themselves dynamical and subject to ongoing distortion. I think the ideal that once we find the laws, they are eternal, sounds like an idealisation. And not a plausible one. The dynamical evolution of laws provides a much more consistent picture. /Fredrik
  12. I didn't read all of the original article, but the appearance of jeffotron's post induces some spontaneous and quick associations... which come more from an information-theoretic physics perspective than from bio... That sentence, to me, sounds like saying that acquired knowledge and skill (associating learning ~ adaptation ~ evolution in a larger setting, not just the DNA) may possess a kind of inertia. In general I think this is plausible thinking, if you make the association that the more trained you are, the higher confidence and skill you acquire in predicting your environment, which in turn effectively means a higher skill in handling environmental disturbances. But this tuning is specific to each environment, and at some point I think there is an information constraint that limits the inertia possible - even given infinite training - because once the learning capability starts to saturate, the residual game becomes one of re-learning. This means that I don't think the "evolutionary rate" goes to zero; rather, it maybe approaches a residual level, where there are random changes that could only be resolved by increasing the information capacity (a toy sketch of this saturation follows after this post). Maybe the DNA would need to grow into a black hole and consume the entire universe. I was lame enough not to read the article, but imagining a possible programmer's view, my hunch-induced guess and conjecture is that the entire "programming view" must LIVE within the game itself. Rather than god being the programmer, the world consists of interacting programmers. And each one has a limited, but not necessarily fixed, memory. This will put a bound on the inertia and prevent evolution from stalling. /Fredrik

Ie. suppose that we "collect data", and from that draw a conclusion about an observed pattern. The confidence in this pattern must somehow be limited, but the more data we use for training, the more confidence we should get. But what if our processing and memory put a constraint on the amount of data we can relate to? Then, at saturation, the process can proceed only by retransforming the current data and releasing the most significant data bit in order to make room for fresh data. I think of the microstructures of objects and observers as a manifestation of the transformation and rating rules that make these decisions. Those who are successful in this will survive, or remain "stable", if we are talking about physics and particles; others will destabilise and die. Ie. the response and rating rules of the organisms are somehow coded in their microstructure. The higher-level encoding in organisms, with DNA and the protein synthesis and replication machinery, is interesting, and although different, many conceptual parallels to the foundational physics abstractions can be seen. I've gained a lot of personal inspiration from spending a couple of years trying to understand yeast cells, and then trying to abstract that... you get, say, X. If you do the same with physics and abstract it... you seem to get something very close to X! It can't be a coincidence. /Fredrik
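To illustrate the saturation point above in the simplest way I can think of, here is a toy sketch of my own construction: a bounded-memory learner that can only retain an effective sample size of roughly M observations (implemented here as exponential forgetting) never reaches full confidence; its residual uncertainty, and hence its residual "evolutionary rate", levels out instead of going to zero.

[code]
# Bounded-memory frequency estimate with exponential forgetting.
# M plays the role of the information capacity: roughly the number of past
# observations that can effectively be retained.
import random

M = 50
forget = 1.0 - 1.0 / M
estimate, weight = 0.0, 0.0        # weighted event count and effective sample size

random.seed(0)
for step in range(1, 5001):
    observation = 1.0 if random.random() < 0.3 else 0.0
    estimate = forget * estimate + observation
    weight = forget * weight + 1.0         # saturates near M, never grows past it
    if step % 1000 == 0:
        print(step, round(estimate / weight, 3), round(weight, 1))
# The effective sample size 'weight' saturates near M, so the confidence in the
# learned pattern is bounded and the estimate keeps fluctuating at a residual level.
[/code]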
  13. It depends on how you define the probability measure. And how do we define energy, and mass, and space and time? The "energy constraints" should of course be accounted for, so I see no contradiction. The transition probability should certainly include this issue. And "least energetic" is also relative to the environment. If the system releases energy, then the environment is excited; there is clearly somewhere an equilibrium, a most likely balance. A "high energy" fluctuation is usually more unlikely than a low energy one, so "extreme" or "unlikely" options are expected to be self-suppressing. Anyway, maybe I was unclear. I am trying to unify the formalism and understand how certain concepts are actually expected to emerge. "Energy" is definitely on the list of things to be properly defined. Classically we know what it is, but in the big picture it's still a fuzzy concept. /Fredrik

I attempted to suggest a meaning for entropy and action above where both relate to a rating system. I did this without introducing the concept of energy. Of course, in classical mechanics the action is computed from the energy and the Lagrangian, say T-V, and so on. But this is all "classical thinking". What I was aiming at is to find a deeper, first-principles understanding of this, one that doesn't depend on classical ontologies. /Fredrik

Maybe I should have said that the purpose of the post is to stimulate reflections in this direction. I surely can't be alone in pursuing these lines... As I see it, the action is more advanced than entropy, so one can start by reviewing entropy. What I'm suggesting is that the principle of least action and the principle of maximum entropy are sort of different expressions of an underlying, more fundamental idea. Try to interpret

[math] 1/T = k \frac{\partial S}{\partial U} [/math]

in terms of a choice of information measure, which can be thought of as the entropy of a probability distribution over distinguishable states - the higher the entropy of the distribution, the more plausible this distribution is to be found, given ignorance of the underlying microstates. The question is: is there a more plausible measure of the a priori plausibility of a state than to actually try to find the "probability" of the state? IE. What is the mathematical relation between the entropy of a distribution and some probability of the distribution, defined in some space of all possible probabilities? Exactly how would one construct such a measure? And what would the correspondence to "energy" be in such a picture - any "natural candidates"? Somehow, classically, energy is often seen as a kind of potential impact, or significance in nature, so that there is a limit to the impact a small system with low energy can have on a high energy system - regardless of internal structure. This is intriguing. Maybe energy can relate to the sample size defining the structure? If we consider a relative frequency, this approaches a continuum as the sample size goes to zero, but then so does the information content. What is possibly the physical sense in such a continuum? Could it be that relating energy to the information capacity from the start is anywhere on track? If one tries a combinatorical trick to represent distributions with finite sample sizes, the first idea seems to be discrete relative frequencies. But clearly the _confidence_ in a relative frequency depends on the _absolute_ frequency,

[math]m \rho_i = f_i[/math]

where m can be taken to be the "mass", or sample size, of the distribution, which factors out.
However, if you perform the combinatorical calculations for the distribution, and try to compute the combinatorical "probability" that you will randomly find a particular absolute frequency, the distribution mass does not factor out, and thus one can not simply take the continuum limit. Then a reflection is to try to make sense out of this expression:

[math] 1/T_{info} = \frac{\partial \ln P[\rho,m]}{\partial m} [/math]

In this interpretation, the better choice instead of entropy is a "probability of a probability", which can be shown to be more closely related to the relative entropy http://en.wikipedia.org/wiki/Kullback-Leibler_divergence than to the Shannon-like entropy. This is how energy or mass might be seen as an overall rating of distributions. When two distributions clash, the impact of a low-measure distribution is bounded by its sample size. Then it might be possible to generalise this to change. So far I have only talked about "static distributions", but how does one distribution actually morph into another? Can this _change_ be rated with a probability? And would the relational changes suggested by that possibly have any relation to dynamics relevant to physics? Similarly, one can try to find the probability not for a static distribution, but for transitions! Or if you like, probabilities defined on the space of "differential distributions". This can also be interpreted as the probability that you were mistaken in the first place. And it's the fact that one never knows anything exactly that allows this dynamics to take place. And things that you are confident in will not change much relative to other things. This should suggest a relational change. Now if one could also explain structure formation and time from this, it would be a strong case. One should also be able to explain the notion of complex amplitudes rather than real probabilities. But I expect that to come out of this too. I'm an amateur with very little time to spend on this, so my progress is slow. But there have to be lots of papers on this that I haven't seen yet. Most standard texts on stat mech I've seen are so entangled with classical thinking that they're more or less useless for this. Even though the parallels are clearly there, it's just not clean enough to expose the beauty. This only serves to briefly express the ideas for discussion and to relate them to current parallel ideas. If you think bits are missing, that's right, and those are what I'm looking for. I'm basically trying to further develop the probability formalism to better fit reality and physics. A lot of questions at once, but that's life. /Fredrik

From http://en.wikipedia.org/wiki/Kullback-Leibler_divergence, the setting of the reflections can be understood: "The idea of Kullback–Leibler divergence as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution f0 as possible; so that the new data produces as small an information gain DKL( f || f0 ) as possible" and also "MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the KL divergence continues to be just as relevant."
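A minimal numerical check of the claim above, using a plain multinomial counting argument (which is how I read it; the prior q below is an arbitrary example choice): the log of the combinatorial probability of finding a particular absolute frequency vector f, drawn m times from a prior q, behaves like -m times the Kullback-Leibler divergence of the relative frequency rho from q. So the "probability of the distribution" does not factor out the sample size m.

[code]
import numpy as np
from math import lgamma

def log_multinomial_prob(f, q):
    """log P of observing absolute frequencies f when sampling from prior q."""
    m = sum(f)
    log_coeff = lgamma(m + 1) - sum(lgamma(fi + 1) for fi in f)
    return log_coeff + sum(fi * np.log(qi) for fi, qi in zip(f, q))

def kl(rho, q):
    return sum(r * np.log(r / qi) for r, qi in zip(rho, q) if r > 0)

rho = np.array([0.5, 0.3, 0.2])      # relative frequencies (the "distribution")
q   = np.array([1/3, 1/3, 1/3])      # assumed prior over the distinguishable states

for m in (10, 100, 1000, 10000):
    f = rho * m                      # absolute frequencies at sample size ("mass") m
    print(m, round(float(log_multinomial_prob(f, q) / m), 4), round(float(-kl(rho, q)), 4))
# As m grows, (1/m) log P -> -D_KL(rho || q): the plausibility of a distribution
# depends on its "mass" m, and is governed by the KL divergence, not by the
# Shannon entropy of rho alone.
[/code]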
The interesting thing happens when part of our information says something about the expected change of that same information! This means that information about change is already encoded in the initial conditions. Still, the Kullback–Leibler divergence is just one measure, and the question remains how this relates to something that we would call the transition probability, which I think is the relevant measure. Normally, from standard stat. mech, the type of "dynamics" we can see coming out naturally is basically a type of diffusion, where the states diffuse downhill along the probability gradient, and the ultimate equilibrium state is that of maximum entropy. How can we understand more complex dynamics coming out of this, rather than the somewhat "trivial" diffusion and asymptotic approach to heat death? One possibility is the fact that entropy and disorder are relative, and thus the a priori "maximum entropy" state may "a posteriori" prove NOT to be the most preferred one, since the conditions have changed. One certainly asks why they don't somehow "meet in the middle", but here is where inertia comes into play: the _confidence_ in the change is so high that it may continue past the presumed equilibrium point. This type of dynamics seems not too unlike that of general relativity! I find this extremely intriguing. So "momentum" can be associated with confidence, or "inertia of change". So the concepts are unified. If you ultimately relate everything to information, in the epistemological spirit, the unification should be unavoidable. It also seems to fit well with the GR nuts and bolts, although possibly far more generally. But this suggests that we need to reconstruct a lot of the formalisms in current models. In particular, any background references, or a priori assumed objective facts, are prime suspects IMO. One interesting key is to try to understand how relations evolve - for example, the relation between the information that defines the dual spaces, say x and p. To just postulate, or accept, that relation seems way too easy IMO. The key to understanding it in depth seems to require understanding its history, and whether the observable status of both these spaces can be understood via emergent relations. /Fredrik
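To make the "trivial diffusion towards maximum entropy" baseline explicit, here is a standard master-equation toy model (my own choice of example numbers, nothing specific to the speculations above): with symmetric transition rates between a handful of states, any sharply known initial distribution relaxes to the uniform, maximum-entropy one.

[code]
import numpy as np

# Master equation with symmetric hopping between 5 states: p' = p @ T, T doubly stochastic.
n = 5
T = np.full((n, n), 0.05)
np.fill_diagonal(T, 1.0 - 0.05 * (n - 1))

p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # sharply known initial state

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for step in range(201):
    if step % 50 == 0:
        print(step, np.round(p, 3), round(float(entropy(p)), 3))
    p = p @ T
# The entropy rises monotonically towards log(5): the "trivial" diffusion / heat-death
# behaviour that the post argues is only the simplest special case.
[/code]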
  14. Decide for yourself if it makes sense, but check http://www.superstringtheory.com/basics/index.html /Fredrik
  15. In the quest to reformulate current physics in terms of the information view of modern physics, I pose this reflection. The idea behind the principle of least action is that a system evolves as per the "trajectory" that minimises the action. In that, there is an implied known relation between the trajectories in general and the action. So how can one interpret the meaning of this action? One intuitive interpretation is that the action is simply a measure defined on the set of possible trajectories, so that it is basically a rating system for trajectories, which is ultimately related to the a priori probability for that trajectory to happen. In that case the least action principle can be reinterpreted as the idealisation where one assumes that the trajectory chosen is simply the most likely one, if you associate the action with a transition probability (a toy illustration follows after this post). In this view, the physics in this "least action principle" is mainly in defining the action as a function of the set of possibilities. Thus, if we forget about the notion of action itself (which we may suspect is just a _choice_ of measure for a kind of transition probability anyway - just like entropy is a measure related to a priori probabilities), we might as well directly ask: what is the transition probability for a certain transformation - where the transformation is identified as the one giving a certain change, or trajectory? So what we are looking for is a rating system for transformations. Ie. a sort of physical "subjective probability measure" on the set of transformations. It seems that this measure must clearly be related to our confidence in the initial state, so one can relate the initial state and final state by a "minimum information change"? Ie. the diffusion of the system is determined by the initial confidence in the system. A highly confident initial state will naturally have a lower measure of transformation, and thus effectively possess "inertia". So this seems to relate, and maybe unify, entropies with actions. Entropy is a measure on the set of states. Action is a measure on the set of transformations of states. Both are unified at the upper level, via confidence ratings. Which, when taken together, should imply a relational dynamics. So far that seems nice, but where does the set of states come from in the first place? It seems the SET of states should be part of the initial information, and itself be subject to change, if you also include transformations that change topologies. And these should also be rated. Ultimately, it seems it boils down to the rating systems themselves. These must ALSO be part of the initial conditions, and be subject to change? How can we put these pieces together? Any suggestions? This is what I'm currently trying to figure out. I am trying to start simple, and take a simple example. How can the transformation of, say, a "q-space" into a q-p space be understood as spontaneous? What exactly drives this expansion? And how can the defining relation between the dual variables be understood to be selected as the most fit one? Somehow I think there is an answer to this. And maybe the duality relation can be understood as the most efficient information-sharing split? Anyone aware of papers relating to this? /Fredrik
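Here is a toy numerical version of the reinterpretation above (my own sketch, with an arbitrary discretised "action" and a weight exp(-S), loosely in the spirit of a Euclidean path weight): enumerating all short paths on a small lattice and rating each one by exp(-action) makes the least-action path literally the most probable one, so the "principle of least action" becomes the statement that we bet on the top-rated trajectory, while nearby trajectories are merely less probable rather than forbidden.

[code]
import itertools
import numpy as np

# Paths of 6 steps, each step a height change in {-1, 0, +1}, starting and ending at 0.
# Toy "action": sum of squared step sizes plus a potential 0.5*h^2 at each point.
def action(steps):
    h, S = 0, 0.0
    for s in steps:
        h += s
        S += s**2 + 0.5 * h**2
    return S if h == 0 else np.inf       # only keep paths returning to 0

paths = [p for p in itertools.product((-1, 0, 1), repeat=6) if action(p) < np.inf]
weights = np.array([np.exp(-action(p)) for p in paths])
probs = weights / weights.sum()

best = int(np.argmax(probs))
print("most probable path:", paths[best], "with action", action(paths[best]))
print("its probability   :", round(float(probs[best]), 3))
# The path minimising the action is exactly the one with the highest rating.
[/code]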
  16. A lot of things are symbolic representations and idealisations. Even what most of us consider to be common sense, like the 3-dimensional intuition a lot of us have, which makes us project everything supposedly having to do with physics into literal 3-dimensional pictures - even THIS is an idealisation that I think we evolved simply because it has been very successful. Whether it's "fundamentally accurate" at all scales we simply don't know. All we can do is guess and keep questioning. And I like to think of "points" and "lines" as a representation of our knowledge of physical reality, rather than reality itself. A point, say a triple (x,y,z), somehow represents a piece of information, a "data point", to add to my collection of incomplete information. I think the persistent and natural desire to always try to make literal visual paintings of physics may be inhibiting, because some things probably get far harder to understand than would otherwise be necessary. I think there is no more thinking behind the idea that a string is one-dimensional than there is behind the idea that the previous elementary particles are 0-dimensional. The idea is, I guess, to try to add degrees of freedom and see if that solves anything. When the QFT machinery is applied to this stuff, consistency requirements in accordance with the made assumptions (right or wrong) force the introduction of more dimensions. There was no direct experimental support for these; they were suggested as a consistency requirement coming out of theoretical reasoning based on a particular set of assumptions. /Fredrik
  17. In the spirit of the goal of this thread - to develop more conceptual intuition about QM and GR and their similarities rather than differences, as a guide to learning more - here is another angle of reflection along the same lines. Anyone having more reflections on my reflections from a different viewpoint is appreciated.

In short, in general relativity the dynamics is described at two levels.

(I) The dynamics in - or relative to - spacetime. This is usually expressed so that a particle subject to no non-gravitational forces follows a geodesic in spacetime.

(II) The dynamics of spacetime itself. This is usually expressed by Einstein's field equations, which is a relation between the geometry of spacetime and the matter and energy distribution in the stress energy tensor.

If the mass of the test particle is small enough not to distort spacetime, the dynamics is effectively that of a particle moving in a fixed, but curved, background. This background is determined by the energy and mass distribution of the environment - for example when a stellar dust particle circles the earth in space. But the nontrivial things happen when the particle is massive enough to significantly distort its own environment. That means that for each infinitesimal change of position of this particle, there is an infinitesimal change of the entire geometry of spacetime! This means that, as the system evolves, the "geodesics" keep changing too - for example when several extremely massive neutron stars occur in many-body problems, and all of the participants make massive contributions to curving the spacetime.

If we let that be a simple idea of classical GR, how can we now rethink, or reinterpret, those principles in terms of something that is easier to merge with the information nature of QM? I propose the following conceptual analogy as an alternative to "visualisations". In short, dynamics is described at two levels.

(I') The dynamics in - or relative to - prior expectations. This can be trivially expressed so that a particle subject to no unexpected feedback evolves as expected. Put this way, (I') appears almost trivial.

(II') The dynamics of the expectations themselves. The dynamics of expectations can be decomposed into two parts: a) expected dynamics (as induced by constraints), and b) unexpected feedback.

Obviously the unexpected dynamics is inherently unpredictable. So our best bet is to base decisions on the expected dynamics, but leave the door open for unexpected events, because given insight into our incompleteness, the unexpected is still somehow expected. For me at least, this gives a fairly clear conceptual vision, suggesting some deeply interesting parallels between QM's information perspective and GR's relative views.

Some key conceptual issues I personally see are:

- The association of movement along a geodesic with the "expected change". And if we, from a pure informational view, can induce such an "expected change", then this defines a geometry of our information.

- The identification of dynamical geometry with evolving expectations, because as changes occur, our set of knowledge changes, and this updates our expectations.

Questions:

- In describing the theory of general relativity, there is a bird's view present, which can also IMO be seen as a background expectation that isn't induced from a real observer. It's somehow an external observer, or god. This is the sense in which GR is deterministic, and I also think that such a superobserver qualifies as a kind of background.
- In a sense one might be tempted to say that unexpected feedback is related to when we have "open systems". But IMO, closed vs open system is not a possibly valid initial condition, because how would you know in advance whether the system is closed or open? It must clearly be an idealisation. So is this a "problem"? It seems so, but I think that this problem may also be part of the solution to the problem of time. Because at first it seems we are just drifting and drifting, and it's not possible to come to a certain conclusion! Frustrating! But maybe this is nothing but the drive for time? I think this is extremely interesting.

- What can one say about structure formation in such an open, crazy world? Is it possible to make any generic predictions about probable structures, given some minimal assumptions? It may seem that anything is possible, but that isn't necessarily a problem at all as long as everything isn't equally "probable". To make gigantic jumps from this chaos to everyday life probably isn't possible, because it would be a logical jump. Perhaps the first and simplest thing to elaborate is the microstructure of reality. What are the simplest, nontrivial structures that we would expect in such a crazy world? We talk about information, but where is this information encoded? How does even the concept of probability make sense at this point? At the same time as it gets crazy and unpredictable, it seems to get _simpler_, because the complex things don't exist yet, so the "chaos" seems self-restraining. Ie. how does this very theory relate to a hypothetical, simplistic, prehistoric observer who might be nothing but a flip-flop device? If you are a flip-flop device, how can you improve and evolve?

Einstein fought with trying to establish the relation between spacetime geometry and the matter and energy density coupling. His answer was the Einstein field equations. Now we are in a similar situation: to establish the relation between the subjective expectations (defining expected "generalised geodesics") and the intrinsic information of an observer. IE. how does an observer with incomplete information compute the optimum strategy for his actions? This contains many subquestions indeed, and one question is to what extent Einstein's field equations can be exploited to directly solve this, or whether we simply have to invent a new fundamental relation that would be the generalisation of Einstein's field equations, where we take the step from the mechanistic spacetime view to a generic information space. /Fredrik
  18. I decided to analyse Rovelli's LQG ideas, so some weeks ago I got his book "Quantum Gravity", and last night in bed I skimmed parts of the quantum mechanics chapter to see his view of things. And I was delighted to see that in sections 5.6.3 (Information) and 5.6.4 (Spacetime relationism versus quantum relationism) he expresses a reflection that is strongly related to the topic of this thread. Rovelli writes on p.221 in QG:

"Thus, locality ties together very strictly the spacetime relationism of GR with the relationism underlying QM. It is tempting to try to develop a general conceptual scheme based on this observation. This could be a conceptual scheme in which contiguity is nothing else than manifestation, or can be identified with, the existence of a quantum interaction. The spatiotemporal structure of the world would then be directly determined by who is interacting with whom. This is, of course, very vague, and might lead nowhere, but I find the idea intriguing."

This is a different way of expressing something that I like. I definitely share his feeling that the idea is intriguing. I also share Rovelli's relational view of QM. But on one significant point, I get the feeling from my incomplete reading of this book that he (at least conceptually) simplifies a crucial point: in several places he talks about the information the observer "has" about the system, and says something to the effect that it doesn't need to be stored. I have to read on in his book, but if he is somehow not accounting for the information capacity of the observer, which constrains his possible observations and "possession of relations", then I disagree, because this constraint is in my eyes a key factor. This is exactly the factor that ordinary QM disregards, and IMO it is quite probably connected to many divergence issues. But perhaps Rovelli's view in this chapter is to explain his view of _standard QM_; THEN I almost completely agree. And it will be interesting to read the rest. /Fredrik
  19. I think there are different and other ways to understand intrinsic geometry besides visualisations. It's probably a matter of personal preference. It seems your desire to visualize everything literally is part of the confusion? One of my favourites is to consider the geometry of information. But I think this is more abstract than the geometry, and I doubt this will help, but here it goes briefly. This is an interesting analogy that connects statistics to geometry. Consider an abstract state space; basically think of it as an indexation of distinguishable states. Now consider that as you "live" and participate in interactions, or "sample this state space", you acquire experience and a buildup of historical frequencies (~probabilities) occurs. Now clearly it may be that some states prove - from history - to be more frequently populated; this means that to these states you kind of assign a higher a priori probability. Now in this statistical world, what is the closest thing we can think of as a "geodesic"? It's the most likely path a random walker would take, constrained to take himself between two points (or states) (a toy sketch of this follows after this post). Clearly the states which have an a priori higher probability will, so to speak, "attract" the random walker and kind of curve the state space. The "regions" of high a priori probability are also the places where any random walker is most likely to be found. So intrinsically speaking, one can consider a geodesic to be something like the path with the highest a priori probability of being taken by a random walker. This way, one can imagine various geometrical concepts without the typical geometric analogies, so to speak. This way, a geodesic is really nothing but the "most expected path" between two states, while fully allowing variation around this expectation. But as the interactions continue, expectations are updated, and so is the "geometry". /Fredrik

Also note how I, in my personal "imaging", consider geometry to be accumulated somehow, as a condensation of history. Clearly this means I consider geometry to be a kind of information, and thus it can be rated. This suggests that there is to be expected a sort of natural inertia of geometry itself, from the information point of view. And this may hint at how the relation between the "dynamics in spacetime" (ie. random walking) and the "dynamics OF spacetime" is connected, because this connection, conceptually as I think of it, is the updating of expectations in response to revised history. And of course the key here is that ALL history can not be retained - only a condensation of the supposedly most significant history. The amount of retained history in my thinking relates to the information capacity of the random walker. Because strange as it seems, the random walker is the only valid observer. In GR, the energy and mass distribution controls how spacetime itself evolves, via Einstein's field equations. Similarly of course, the dynamics in spacetime is given as usual. This non-linear feedback is IMO quite analogous to the above ramblings. /Fredrik
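A small sketch of the "geodesic as the most probable path of a random walker" picture, using a toy weighted graph of my own choosing: if each step has a probability, the most probable route between two states is the shortest path when every edge is given the length -log(probability), so "distance" and "likelihood of being taken" are literally the same bookkeeping.

[code]
import heapq
import math

# Toy state space: transition probabilities between 4 states (an invented example).
trans_prob = {
    "a": {"b": 0.7, "c": 0.3},
    "b": {"d": 0.4, "c": 0.6},
    "c": {"d": 0.9},
    "d": {},
}

def most_probable_path(src, dst):
    """Dijkstra on edge lengths -log(p): shortest length = highest path probability."""
    dist, prev = {src: 0.0}, {}
    queue = [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == dst:
            break
        for v, p in trans_prob[u].items():
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), math.exp(-dist[dst])

print(most_probable_path("a", "d"))   # the "geodesic" in this statistical geometry
[/code]

If the accumulated frequencies (and hence the probabilities) are updated as the walker moves, the "geometry" itself changes, which is the feedback the post is alluding to.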
  20. Bji, I think you'll get some ideas if you look up a differential geometry book. I bet there are online pdf's too. It's clear that it can, in a local sense at least, make sense to "imagine" the space embedded in a higher dimensional space, but the point is that this embedding is ambiguous and not unique. That is, given the constraint of your "lower dimensional perception", as you put it, there are several ways to imagine this being embedded in higher spaces, and this embedding simply can't be determined from within your constrained perception; that is the main point. What can be determined (in principle, in the classical domain) is the intrinsic part of the geometry. What is beyond that is not uniquely determinable by something "living under the constraint" that the surface implies. It's not that it doesn't make sense to imagine a higher dimensional embedding, it's just that it's not unique, and thus should be "shaved off" the description, to get rid of arbitrary embeddings and keep only the things that are non-ambiguous. This is all in line with trying to purify our theories to contain relations between real observables only, and to do away with constructs that, while useful, are ambiguous. So that we only have the physical degrees of freedom left, so to speak. Still, I'm not sure if you take issue with this from a physical point of view or a mathematical one. After all, differential geometry is mathematics, and there are a bunch of axioms that build geometry, like the concepts of straight and parallel lines etc. If you wonder what the mapping of these things to reality is, then there are more understandable objections, but then that's what I called the second issue in the other thread. And I don't think that is your question? /Fredrik
  21. You got some comments in the other thread, but here are some more. To me this issue can be decomposed into several subissues.

- The first issue is what intrinsic vs extrinsic curvature means in classical (no quantum stuff) geometry.

- Another issue is how to properly define the degrees of freedom that define spacetime and its curvature so that they are realistic observables, and to try to figure out the relation between geometry and information: what information content there is in the geometry itself, and to what _this_ information relates.

My impression is that your questions regard the first issue, so I think my post was not making sense, because it aims to deal mainly with the second issue, and my line of reasoning was to appeal to the intuition that if you understand the first issue, the second issue isn't far away by an abstraction. The 2D surface of a 3D space is just a simple example, picked because it can be visualized to illustrate a deeper concept that doesn't necessarily depend on anything "visual". For a sphere in 3D space there is both the intrinsic curvature in the 2D surface, and the extrinsic curvature of how the 2D surface curves into the third dimension, so to speak. This curvature into the third dimension does not yield an intrinsic curvature. To appreciate the difference, compare a sphere with a cylinder. Both are obviously curved relative to the embedding, right? But the cylinder has curvature only into the third dimension, so an ant walking on a part of a cylinder would find it indistinguishable from a flat plane, because a cylinder is nothing but a flat plane "wrapped up", and the wrapping is done without _deforming the plane_ - this is the key. As you know from playing with paper, you can fold a paper into a cylinder without stretching the sheet. But you can not make a paper sphere out of a flat piece of paper without stretching the paper fibres or making wrinkles. The difference between a cylinder and a sphere should give a simple visualisation of the difference between two things that are both obviously curved in 3D space, but where only one has an intrinsic curvature (stretching). Intrinsic curvature happens when the paper or surface is deformed differently at different locations (the Gaussian curvature relation after this post makes this precise). Does this make sense, or am I missing your point? /Fredrik
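The cylinder/sphere comparison can be stated compactly with the Gaussian curvature, which is the intrinsic quantity an ant confined to the surface can in principle measure. It is the product of the two principal curvatures, so the cylinder is intrinsically flat even though it is extrinsically bent:

[math] K = \kappa_1 \kappa_2, \qquad K_{\mathrm{cylinder}} = \frac{1}{R} \cdot 0 = 0, \qquad K_{\mathrm{sphere}} = \frac{1}{R} \cdot \frac{1}{R} = \frac{1}{R^2} [/math]

That K can be determined from measurements made entirely within the surface (circumferences of small circles, angle sums of triangles) is Gauss's theorema egregium, which is what makes the intrinsic/extrinsic distinction well defined in the first place.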
  22. Depending on exactly what your goal is, it sounds like a good plan is to acquire broad knowledge in molecular biology, quantum chemistry and also physics. The fields called quantum chemistry and quantum biology try to study more complex chemical and biological systems in terms of QM. I suspect, based on my own development, that as you learn more, you will also find the right ways. Before I studied any QM at all, a long time ago, I had a very naive view of reality relative to now. I would not have been able to choose the right way back then, so I like to think that the focus is always on the next step. "Always in motion is the future", like Master Yoda said. /Fredrik

To get back to the idea in the OP: in the general quest for unification, and trying to find the common denominator between QM and GR that naturally allows a new formalism, one that respects the best of both worlds so to speak, I like drawing parallels to Martin's explanations of the GR stuff in the other thread. I get the impression that Martin seems to prefer the GR side of things, and tries to add QM on top of that, rather than the other way around. I'm trying to reflect and perhaps find new angles, and I particularly find this statement interesting. In a certain sense the notion of a distance measure between two points can be considered as the distance between two states (as in possible locations), and the shortest distance between any two given states is thus sort of like a measure of a potential a priori transition between the two states. This is in line with Ariel Caticha's ideas to associate "distance in space" with a kind of probability for these two "states", "locations" or simply "pieces of information" to be mixed up! So distance is a kind of parametrisation of locality itself, and space is arranged as per this locality "ordering", so that states that are less likely to be mixed up are "farther apart" (a small sketch of one such distinguishability measure follows at the end of this post).

How can one interpret "relativity teaches us not to expect distances to be constant" so as to get closer to the measurement and information ideals (that I personally think are, or rather should be, the core of QM, in line with Bohr's ideal)? If we consider a distance to be a sort of expected probability measure that two distinguishable states are mixed up, or alternatively as a measure of their distinguishability, then the bold statement suggests that we should not expect expected measures to be constant, or in effect that we should not expect our expectations to be certain. This is, in my thinking, right in line with the analogies of intrinsic vs extrinsic geometry, and information. That is, from the incomplete, inside frog-view, it is not possible to rate our own ultimate expectations. This directly connects to non-unitarity. Unitarity means that we, in our theories, take our expectations to be laws! This makes no sense, in the suggested philosophical analysis I'm trying to do in what I consider to be the natural continuation of Bohr's philosophy. The missing part in QM, to which a clue can be found in the GR analogies IMO, is that QM considers "measurements" and relations between measurements. But of course, there is no such thing as a measurement without an observer. And also in line with the GR concept of "parallel transport" of, say, a tangent-space vector, information must be transported between observers, and due to the fact that not all observers have the same capacity to store information (including expectations, and rating systems), it seems very strange to a priori expect these transformations to be unitary.
If you think about this, one can even, to a certain extent, draw loose parallels to the popular concepts of regularization and renormalization, if you consider scaling over the complexity of the observer. This seems to suggest a natural regularization scheme that is easily pictured as having a physical meaning. The cutoff is at the complexity scale, which - loosely, considering the holographic ideas - one would associate with the observer's "entropy/energy/mass". If we picture that the action calculations are actually implemented in the observer's "microstructure", then it seems natural that the cutoff is set by the complexity of the observer - there is a physical limit to what the observer can fit within its smallness. And this regularization is physical: letting the observational resolution go to infinity corresponds physically to letting the observer grow in information capacity and thus, we would expect, in mass!

/Fredrik
  23. Thanks for your comments Ajb. As far as current physics goes, there are a lot of non-observables. But I was trying to ask what to expect from the next revolution that unifies the GR relational ideas with the observer-relational ideas of QM. I think that somehow we haven't yet incorporated the beauty of both QM and GR into our conceptual thinking. I am trying to probe that. It seems you very much prefer the established roads; I feel like those are taking me to the wrong place, so that's why I try to find my way through the bush.

In the gauge thinking, I can't help making a loose, reflective association between the gauge degrees of freedom and the degrees of freedom represented by the identity of the observer. In the global geometrical thinking, one seems to describe the world the observers live in, but not from the observer's view - rather from the bird's view - and this is a bit unsatisfactory. In the bird's view the local observers seem arbitrary and lacking physical reality, while the connections between them are objective: gauge choices are arbitrary but gauge connections are not. I.e. the bird's view cannot explain the subjective choice, but it can certainly explain the connections between the choices. I guess this is as close to your thinking as I can stretch; I'm not sure what you think of this.

Also, the use of symmetries is too wild IMO. To me it's more like a kind of tool in reasoning: by _enforcing_ various symmetries locally and globally we can make conclusions and find new phenomenology. But from the observational point of view - which IMO is sort of the only physical view, albeit arbitrary from the point of view of "god" or the bird's view - I want to understand the dynamics of how these symmetries arise as a result of local information processing and retention. I rather suspect that we may need a new formalism that respects all the current ideals from the first construction.

/Fredrik

If we are talking about normal standard QM, I don't think it has much to add to those questions. The associations I make are more in the sense of a possible development of quantum mechanics and its interpretations. But I suspect that to make sense of these, at this point, fuzzy things, a normal standard appreciation of the basics is needed. My strange reflections are not standard QM.

For the basics, I'd say linear algebra and ordinary differential equations. Put simply, the "states" in quantum mechanics form a vector space. Measurements correspond to linear operators, and the operators have eigenvalues. The eigenvectors (or eigenfunctions) are found by solving eigenvalue problems. If you already have a basis, this is an algebraic problem; if not, one usually solves an eigenvalue problem in the form of a differential equation to find the eigenfunctions. For example, the orbitals in the hydrogen atom correspond to eigenfunctions. In QM the concept of inner product is also important, as it is used to compute expectation values and probabilities. All these terms are from linear algebra.

Technically, though, QM usually deals with infinite-dimensional vector spaces, or spaces of functions, studied in more detail in functional analysis. But for a first QM course, most people have not studied advanced functional analysis. A lot of it is a generalisation of ordinary linear algebra, and conceptually it's similar. One can also study the operators in QM from a group-algebraic perspective. But this is all mathematical classification.
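To make the eigenvalue picture above concrete, here is a minimal numerical sketch of my own (using NumPy and a finite-difference "particle in a box" Hamiltonian - both choices are just for illustration, not something from the discussion): states are vectors, the energy measurement is a Hermitian matrix, its eigenvalues are the possible outcomes, and expectation values are inner products.

import numpy as np

# Minimal sketch: particle in a box, discretized on N interior grid points
# (hbar = m = 1, box length L = 1). The Hamiltonian H = -1/2 d^2/dx^2
# becomes a Hermitian matrix via a second-order finite difference.
N = 200
L = 1.0
dx = L / (N + 1)

# Kinetic-energy operator as a tridiagonal matrix.
main = np.full(N, 1.0 / dx**2)       # diagonal entries: 2/(2 dx^2)
off = np.full(N - 1, -0.5 / dx**2)   # off-diagonal entries: -1/(2 dx^2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# "Measurement" = Hermitian operator; its eigenvalues are the possible
# outcomes, its eigenvectors the corresponding states (orbital analogue).
energies, states = np.linalg.eigh(H)  # columns of `states` are eigenvectors

# Expectation value of H in some normalized state psi: <psi|H|psi>.
psi = states[:, 0] + states[:, 1]     # superposition of two eigenstates
psi /= np.linalg.norm(psi)            # normalize using the inner product
E_expect = psi @ H @ psi

print("lowest eigenvalues:", energies[:3])
print("exact (n*pi/L)^2/2:", [0.5 * (n * np.pi / L) ** 2 for n in (1, 2, 3)])
print("<H> in superposition:", E_expect, "vs", 0.5 * (energies[0] + energies[1]))

Running it, the lowest numerical eigenvalues land close to the exact values (n*pi/L)^2/2, and the expectation value in the equal superposition comes out as the average of the two lowest energies - which is really all that "linear algebra plus eigenvalue problems" amounts to at this introductory level.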
I'd say that for a basic appreciation of the very basics, linear algebra, basic differential equations and eigenvalue problems are about what you need. But that's not where it ends - probably where it starts. The axioms of QM are what attach the formalism to reality; analyse them carefully. Some basic familiarity with probability theory and basic statistics might help a little too, when the frequentist interpretation is discussed.

/Fredrik

I'm glad you could read out what I tried to say. This, in my thinking, also strongly relates to the issue of regularization in an abstract sense - maybe we can call it self-regularization - which is an important part of action formulations. I'm not sure if that makes sense, or adds any clarity to my previous reflections on "geometry counting" - like, from what point of view is the counting constructed? (I'm still waiting for Rovelli's book, btw.)

/Fredrik
  24. As things currently stand, and from a mathematical point of view, I agree completely. But one might ask - and I do - do non-observables have a place in a physical theory? To me it's a sign of mathematical redundancy, by the same reasoning that the non-physical embeddings are. If these non-observable things cannot be removed, or chosen arbitrarily, without destroying the physics, then it suggests that we are making use of an ambiguous embedding or background, which is disturbing, isn't it?

The quantization procedure is itself a bit of a vague notion: one usually starts with a classical theory and then takes it into the quantum domain. I'm not sure that's the way to go. I'm not sure, but I try to start with the notion of distinguishability. One can either consider this an element of discreteness, or consider it a cutoff in continuum models, where you cut off anything with sufficiently low measure, so that either the measure is "0" or it isn't. This is probably a weak point in the reasoning though. This notion of distinguishability persists at every level of complexity.

I expect that once we see the right way of reasoning, quantization and gravity phenomena are not in contradiction, and one shouldn't have to "quantize" gravity - I'd expect them to come hand in hand.

/Fredrik
  25. In a certain sense I agree completely. This is, I think, another important angle of the problem. While the problem of drawing a line between two parts is real, the fact that certain self-organisation takes place, which is still more or less identifiable as a stable sub-system, makes the question relevant.

In my thinking, I imagine that for consistency ANY subsystem could be interpreted as an observer. However, not all observers are "stable" so to speak. So I think the particular subsystems that become "observers" distinguish themselves this way. For example, put a biological organism into the "wrong" environment. You can do this without problem, and it's still fairly clear what the distinction is between the organism and its environment; however, the organism will probably die. But in some cases the organism may be so strongly dependent on its environment that it's hard even to define transporting it to another environment, since that would probably imply immediate destruction of the system.

So in the same way that a particular environment in biology favours the appearance of certain organisms, in physics a particular environment might favour certain particles. And in the abstraction I was trying to make, a particle qualifies as an observer as much as anything else.

Also, I imagine that an observer can increase its information capacity by learning how to predict and make use of the environment. Humans have a lot of tools, like books and computers. But it still takes a certain level of sophistication to do that. To describe how an observer can spontaneously increase its mass, and therefore its inertial stability, is something that begs for a better understanding. And I think the environment plays a key role here, since spontaneous or not must depend on the environment. This is why one would expect a different "zoo" for each environment, but in each case there should be an emergent logic.

/Fredrik