Everything posted by fredrik

  1. I haven't seen it, but if I'm not mistaken he is working within the M-theory framework of string theory. I don't know much about that, and while there seem to be interesting fragmentary concepts, I personally find the entire string approach awkward: IMO it focuses on the wrong questions and starts with far too much background structure. I simply find it hard to appreciate the method of "string philosophy". But I think that if something interesting happens, it will be in the supposedly more fundamental M-theory, and I hope they will find new "first principles" and explain how "strings" are emergent from something else. I also expect their background structure to be explained. /Fredrik
  2. The reason I used time here is because you used time originally. But I do not think time is "given". That was my description of the QM way, which I'm trying to improve. My envisioned construct starts from a concept of relative distinguishability and a set of relations/data, where change is a relatively distinguishable change among those relations. Time is possibly a choice of parametrisation of the most probable path of changes consistent with the relations. So far, as in GR, I have an idea of how to define differential time this way, but global time probably can't be defined in the ordinary sense at least, and I'm not sure that's necessary either. I envision all "dimensions" being defined in a similar way, not only time but also space. But I am still struggling with this. /Fredrik

The main reason time seems to distinguish itself is that I consider it to define the direction of change, but along this "growth" new dimensions can grow out as well. I have ideas on how this should be formalised, but it's a tricky one, and it will take me some time I am sure. /Fredrik
  3. I personally see no clean way of separating ontology and epistemology in this context; this is part of my point. Epistemology explains the induction of ontological constructs. To consider a non-trivial ontology without epistemological support is, to me, not consistent with a "strong background independence". The trick seems to be to start with a minimal starting point and then see how "ontologies as well as higher epistemology" form. This is my idea. Therefore I see it as unclear to try to separate matter from time, as they are clearly related to the extent that one without the other makes little sense to me. But in this context epistemology to me is not only concerned with "human knowledge"; I am picturing it as how everything is connected and how different things gain information about each other. However, I understand your point that matter doesn't "require" time, but OTOH if it wasn't for time, I don't see why there would be material objects in the first place. /Fredrik
  4. If we forget about time for a second and consider the general case: what would a distinction between real and not real be that satisfies you? I take it "pokable with a stick" is not it, right? Are your thoughts real? Usually in QM, time is part of the given background and is usually treated as a parameter to which we relate our information (the state). Same with space. If space and time were not given, but rather emergent (as some think, including myself), then the normal procedure for deriving QM falls apart. So time and space are not questioned. They are given, they are the starting point, and they are used to formulate the probability space. From this, using symmetry arguments, most conservation laws are derived too. To question time and space, normal QM is inadequate and we need a more fundamental formulation. IMO this is possibly the best we have been able to do so far, but it's by no means the final answer. /Fredrik
  5. Almost appropriately, it seems there is a difficulty in finding an exact verbal definition of what "philosophy" is, its methods and goals. The meaning of the word "philosopher", as per the ancient Greeks, is "friend of wisdom". And as we know, philosophy is usually also divided into branches and topics (ontology, epistemology, ethics, logic and maybe more). Philosophy is not random baloney; it is supposedly a sort of deeper "rational analysis" of various things. You can go on to question every single word, what "rational" means and so on, and this is in part a philosophical analysis. Sometimes philosophy is characterised as "thinking about thinking", or "analysing the analysis" as I like to think of it, which is IMO certainly an important part of self-interaction and highly relevant to science as I think of it.

For example, science: our scientific method certainly IMO falls under the domain of philosophy, I suppose much influenced by Popper, who was a philosopher. I sometimes see a tendency among scientists never to question or improve these foundations, but instead just buy into a particular philosophy, so to speak. But they are nevertheless resting on a philosophical foundation, and this can certainly be discussed. I think "science" can also be given different meanings. The usual falsifiability criterion and so on is actually a philosophy of science as per Popper. It may seem plausible enough to be obvious, but that's taking things too lightly IMO. Part of effective human science clearly borders on sociology/politics as well, because scientists judge each other. But does this mean a man stuck on a desert island can't do science on his own? Further, would it be possible to analyse the thought process in the spirit of a scientific method? I think it is possible, but the problem is of course that two different scientists may come to different conclusions. Does this make it unscientific? Exactly HOW MANY scientists must come to the same conclusion before it's science? Obviously there is a grey area here. Discussing this grey area means discussing the philosophy of science, and at least as far as my limited knowledge of history goes, science is most certainly historically a branch of philosophy that tries to gain knowledge about nature using some idea of a "good scientific method".

Also, something that from what I can see is often trivialized, overlooked or missing altogether in the Popperian philosophy of science is an ANALYSIS of the process of updating a theory. IMO, the process of falsification is the easy part. The feedback of the falsification, letting it induce an optimal correction, is the part we need to analyse further. From the physical point of view, I think of "philosophy" as part of a "self-interaction", which is also why I think there is no clear objective reference (agreement among the set of all observers); it lies within the fuzzy problem IMO.

> I've met scientists who scoff at philosophers (because all they do is think)

I guess there are many types of scientists, ranging from those working with applied science to those trying to extend and improve the foundations of science, and maybe even change them. To question yourself is difficult. /Fredrik

One can also note the wordings and early elaborations of many famous people who have made groundbreaking discoveries in physics, and it's obvious that they made plenty of philosophical considerations that may not always be obvious in their results. The results are usually clear, consistent and successful, but the way there is probably not as clear. Once a formalism is developed it is a lot easier. But how can we expect to break new ground by just "following the rails"? I by no means think all big minds in the history of science have "scoffed at" philosophical reflections. /Fredrik

In short, I agree with you here. /Fredrik
  6. Yes, I think it has a connection to that, or is a generalisation thereof (or rather, the holographic principle might be a special case). If one considers the holographic principle to suggest that the information contained in a volume of space can be represented by a theory that lives on its boundary, then what I personally envision is that the information about the environment can be represented by a "model" that lives on the observer, or rather HAS TO BE represented by a model that is defined in the observer, because what other option is there? This is exactly why I think the constraints imposed by the nature of the observer itself (of course "observer" here doesn't generally mean humans; it could be a measurement device or an atom) must be implemented along with the first principles. The generalisation I imagine is to consider a sort of probability formalism that is defined in terms of the observer's interactions BUT is also constrained by its capacity to actually retain information. The task is to find the ultimate fundamental framework that implements this. From what I understand, the original ideas of the holographic principle come from a special setting, rather than from considering potentially novel first principles. So I've got a feeling a proper first-principles understanding of this is still to be found. /Fredrik
  7. Fred, I realise you are thinking out loud and I like some of your reflections, but sometimes I find your posts a bit cryptic, and sometimes they give the impression of being composed of fragments of different questions with unclear relations. I don't think I'm the right person to say this, but maybe try to treat one key point at a time, to the extent possible, so as to make it easier for someone else to see your perspective and what the question or suggestion to comment on is. Before it's all formalised that's not easy. There are some people who don't understand, or don't want to understand, until the task of formalisation is already done, but I have no problem at least trying to understand philosophical reflections. In this thread I wasn't sure what your key point was, and thus I wasn't sure what to comment on. Sometimes I think the philosophical threads can be enlightening if there is a good focus, but sometimes, due to misunderstandings, they have developed into big messes that only increase the confusion rather than reduce it. It's usually easier to convey these fuzzy ideas to someone who is already on the same track than to someone else.

About time: I think your connection to disorder is interesting. I make a similar connection. In a background-independent view there are of course no external clocks. The clocks chosen have no choice but to be part of the system, and they are thus subject to the same uncertainties as everything else. And the first task, before introducing time, is how to "select/define a clock" in the general case. I definitely think the expected arrow of time can be deduced from the observer's information. On the quantum-information level, time is bound to fluctuate, and this might explain why the arrow of time and gravitational phenomena are hard to see on this level: the perspective is too narrow. To "see time" I think we need a larger part of history, and also sufficiently complex observers. I think the final task is to translate the fuzzy thinking into a formalism that will unify this. The reason I do the ramblings is that at the moment I have no better language, because I'm looking for it. The foundations of information theory and probability theory are IMO where I look. The main missing part is how the observer's microstructure complicates the picture! We talk about measurements as projections, but usually don't raise the issue of the nature of the "screen" where the projections are made. What if the screen is too small to retain the entire projection? These things are what I want to implement in the formalism from scratch, and it does suggest to me a revision of the probability formalism and the information concept used in QM. /Fredrik

To be a little more specific, and I have probably said this too many times already: what does background independence mean for probability theory? Well, I figure that there is no given probability space, and no given priors! The probability space and the priors are themselves evolving, only relative to themselves. So where do we start? And where do the non-trivial structures come from? Stated this way, it suggests that the problem is strongly attached to the probability formalism. Not only do we need measurements to find the state of the microstructure, we need measurements to FORM the microstructure as well! Exactly how must this relation look? /Fredrik

My current attempt is, rather than starting from the axioms of probability, to start with the basic axiom that an observer can at minimum distinguish between two states, and from there I try to build relations in an uncertain picture, where deviations from expectations are used as a guide to create structures on demand. In that picture "effective" probability spaces are generated, but they do not satisfy the axioms of probability, because there is no way to guarantee unitarity; but non-unitary observations are exactly what drives the expansion of the space. So combinatorial considerations of a "microstructure" that is _built_ from elementary relations based on the distinguishability concept will generate an effective probability theory with the desired properties. That's the idea, but there are many details I haven't figured out yet. The arrow of time will then be unified with a generalisation of the second law, where the information about the dynamics is incorporated into the "state", thus also giving an effective "entropy" to the dynamics. So I have a semi-clear vision of what I want to do... but I go in steps and let it mature to make sure it's right, then I try to develop a matching formalism as I go along. If this works, gravity will be a dynamic effect in this picture that is crucial for stability and structure formation. /Fredrik

Note: unitarity will still be attained in an effective sense when the learning curve flattens; however, there seems to be no way to maintain unitarity during steep learning. At least not in a way that is consistent with my vision. /Fredrik
  8. I take it that regardless of any final resolution we can agree that this is an interesting topic. If I read Vincent properly, it seems you are attracted by the concept of an "era of physics based on information". I am a little curious to hear what your view, as a string thinker, is on the unification of that with the string framework. Do you consider information as something encoded in the state of the microstructure suggested by the string framework? Do you also ask what the support is for that particular microstructure, in terms of the information supporting it? /Fredrik
  9. This is why I think that, until then, the superposition is as close to real as we get, and I imagine this superposition as a representation in the observer's internal state; in this sense, the superposition is "real". But the reality is not the unknown that's "out there"; the reality is the projection of that, which lives in the observer. Which ultimately means that reality is subjective, which btw does not bother me one bit; on the contrary, it makes perfect sense. This may seem strange if you are coming from the deterministic philosophy of classical mechanics, but once you get over that, this is IMO very natural and intuitive, and it does make sense. But the current standard models are not designed along these principles. My hope is that the next generation of models will be.

The picking I'm doing is not picking on QM in favour of determinism; it's rather in the opposite direction: I pick on the determinism of "probability mechanics" and the idea that there has to be an objective reality. If we really take the concepts of QM to heart, the consistency of reasoning is broken if we stop at the second quantization, or at any fixed level. First quantisation has issues handling many indistinguishable particles, so it leads us to second quantization. There are still problems... so some people come up with strings, but what about a third quantization? I always had the feeling that string theory is a special case of, or an alternative route to, a generic third quantization. But the problems persist... at what level of quantization do things stop? Maybe there is a principle that answers this and prevents the degrees of freedom from getting uncontrollable? Complex models may have the potential to be more accurate, but they require a higher representational capacity in the observer, and they will "possibly" be slower in their adaptive responses unless they evolve a more clever selection mechanism. So there is probably a balance between potential and response times. This may suggest an answer to why "100th quantization" is not realized. /Fredrik
  10. I'm not much into quotes in general, but here is one I like that applies. "There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." -- Niels Bohr. I think this quote captures the essence that science isn't about finding everlasting truths; it's about learning. And if you develop Bohr's statement with the interpretation that the implicit observer, the "we", is arbitrary and that every observer does not necessarily agree, then I think the statement is very modern and still applies. Everything we "know" about nature is acquired during our experience with it. So it seems very natural to think that the "process of inducing and selecting expectations" from real interactions is closer to the fundamental point. So if someone says that "we know that the electron has a certain mass", I think the more fundamental part is to expose the exact process by which, given our history, we are led to this "knowledge". By analysing the logic of learning, we are at a more fundamental level IMO. Analysing that should also yield an estimate of the uncertainty in this knowledge. This is how I like to interpret Bohr's quote. /Fredrik
  11. It's true that information needs a formal definition, and a measure. Shannon's or the von Neumann entropy has limitations IMO. This is part of the problem IMO. The problem I see is how to find an observable measure of what information is missing. My starting point is the observer's set of relations, and I try to consider the transition probability for the set of relations; this should be induced from the current state of relations, which is why the information capacity of the observer itself limits its predictive power. A given configuration can be given a measure of its uncertainty (how likely it is to be seen); this is related to the normal Shannon and cross entropies of this state. One can define another measure, say P(f|p), which is basically the probability of observing a given relative frequency, given a prior distribution. Take the logarithm of this and one will find an interesting relation between what can be interpreted as a transition probability and the information divergence (a small numerical sketch of this relation follows below). But the problem is that the prior itself is nothing but estimated, which yields an expansion. And then the one-to-one principle suggests that the expansion itself also takes up some memory. If one does the combinatorics on this, one will find that the expansion is driven by the data flow, but constrained by the observer's relational capacity. I am trying to elaborate this very simple idea to allow dynamic probability spaces. The spaces themselves are selected as the observer communicates with the environment. The selection mechanism is based on the fitness of the "selected" view/theory/structure. I am not posting any math yet because I have no formalism yet that reflects my thinking. Once the formalism is done, the rest is pure maths: exploring the formalism and training it against real data. Once I have done more I intend to provide the math, but this first part is, I think, harder, also to communicate to others, since we are elaborating the common formalism that is normally a prerequisite for communication. In this way, the concept of entropy cannot be entirely separated from the dynamics in any fundamental way, except as an approximation. /Fredrik

Good question. If I have to answer quickly, I'd say yes. I'm on my way to a b-day party so I'll get back more later! /Fredrik
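To make the relation a bit more concrete, here is a minimal numerical sketch (my own toy example in Python, not the finished formalism; the prior and the counts are simply made up) of how -ln P(f|p), computed from the multinomial distribution, compares to N times the information divergence of the observed frequencies from the prior:

import math

def multinomial_log_prob(counts, prior):
    """ln of the multinomial probability of observing these counts under the prior."""
    n = sum(counts)
    log_p = math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in counts)
    log_p += sum(k * math.log(q) for k, q in zip(counts, prior))
    return log_p

def kl_divergence(f, p):
    """Information divergence D_KL(f||p) in nats; terms with f_i = 0 contribute zero."""
    return sum(fi * math.log(fi / pi) for fi, pi in zip(f, p) if fi > 0)

prior = [0.5, 0.3, 0.2]      # hypothetical prior over three distinguishable outcomes
counts = [30, 50, 20]        # hypothetical observed counts, N = 100
N = sum(counts)
freqs = [k / N for k in counts]

print(-multinomial_log_prob(counts, prior))   # "surprise" of the observed frequencies
print(N * kl_divergence(freqs, prior))        # leading term: N * D_KL(f||p)

The two numbers agree up to sub-leading combinatorial corrections, which is the sense in which the logarithm of P(f|p) is dominated by the information divergence. /Fredrik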
  12. ...doesn't have energy? Our brains certainly have energy, right? Or do you suggest that your thoughts have a realisation independent of your physical brain? /Fredrik
  13. It sure isn't easy to find answers; I think these are extremely interesting and relevant questions. Some personal reflections. I am one of those who strongly believe in looking for a deep unification of information frameworks with physical theories, and in particular in associating generic learning frameworks with the scientific method itself, since I consider these general scientific considerations to be a necessary foundation for any fundamental science, and physics in particular. I believe that any structures, and reality itself, are in a sense emergent from the observer's interactions with the environment, and even that the observer's assembly is part of this dynamics. An observer is bound to develop successful relations with the environment in order to stay in business. This implies a sort of selection mechanism. I ultimately think that one can understand the laws of physics by analysing the relational information in this picture. When this is done, there seem to arise natural concepts of inertia (as in resistance to change; not movement in space, but rather "change" in general, or alternatively reduction of uncertainty). One of the things I'm currently trying to find out is what the proper formalism must look like that realises these things. Once this is found, the formalism can be explored, and then I expect to see probably several generations of structures as well as dynamics as the complexity increases. By complexity I mean the information content of the system. One would obviously expect that complex structures in general have more complex interaction properties than less complex structures. I basically picture physical structures and their microstates as encoding the possible interactions, and that each concept and structure in the model has, more or less at least, a one-to-one "representation" with supposed "physical reality". We humans represent with mathematics what nature represents in physical structures. But they are supposedly equivalent descriptions of the same "thing".

The observation I've made is that normal probability theory is inadequate for this, because the probability spaces themselves, as we normally think of them, aren't observables. The frequentist interpretation makes no sense. It's a very good approximation, but it does not hold water for a fundamental reconstruction. Still, probability theory is crucial, and part of the game. So I think we need a dynamic probability theory where the probabilities themselves are dynamical and can be given an indirectly observable status. To embed the probability space in a larger (background) probability space of probability spaces does not solve the fundamental problem (IMO at least). It just iterates the flawed method! It does help in the sense that it makes the flaw smaller, but it also has the disadvantage that it probably makes the background space unnecessarily large, and there comes the next point: a low-complexity observer can not (in my view) *relate* to arbitrarily complex structures (recall the ideal that there is a one-to-one relation between model and "reality"). I take the association of information and mass/energy very seriously, and I think we don't understand its full meaning yet. I'm not sure I would personally be prepared to discuss black hole entropy in detail before the fundamental conditional structure is discussed in detail in the appropriate formalism. I guess this is what string theorists try to do in their formalism and LQG people in theirs, but for anyone who doesn't buy into the formalism in the first place, the results are very uncertain. To me, the point that the perception of reality may _possibly_ be constrained by the capacity of the observer to _relate_ to the environment is often overlooked in the papers I've skimmed. I think a measure of relational capacity is needed, and I tend to associate this with inertia.

I have not come very far myself with the formalism, but I suspect gravity and time can be understood as emergent inertial phenomena in a system of quasi-stochastic relations. I write quasi just because of the problem of defining the probability space properly (I think non-unitary features are required for consistency). I've started to elaborate this, and so far it's a basic combinatorial approach based on relations (where I consider the relations themselves to be subjective), where the basic task is to calculate transition probabilities to all possibilities allowed by the current uncertainty, together with a mechanism that updates the calculational engine when unexpected information arrives. I figure that in reality the fact that fundamentally unexpected things can happen can't be banned. Another thing I've found is that the normal entropy definition is not very useful either. It basically suffers from the same flaws as the foundations of probability theory. Instead I consider a recursive entropy, but there is no need to call it entropy, because IMO the mainly interesting use of entropy anyway is its relation to transition probabilities. I have not come to that point yet, but most probably ordinary QM will be something like a low-order approximation, where the approximation lies in the assumption that you can know your probability spaces (there will be many, having relations with each other, so that priors are induced between them) with certainty. So I have no formal answers yet, but I think there is a lot to happen in these areas in the future, and if anyone else has ideas I'm interested. Since I suspect no one has the answer, I consider this to be all about exchanging ideas... /Fredrik
  14. I have to admit that while the topic is potentially interesting, I have a little trouble understanding Fred56's message. The quotes out of an uncertain context confuse me, at least. You write at one place "he's trying"; whom does "he" refer to? I don't follow the sequence of reasoning, and what does "our model" refer to? Fred, I've seen some of your other posts and I like your questioning of things, but to speak for myself, your posts are not always easy to read (that probably applies to me as well, but anyway). /Fredrik
  15. > So you feel the concept of information having mass/energy doesn't pose a problem for the determinist/empiricist view?

The "determinist view" is not a view I have, and perhaps that explains why I didn't get the supposedly obvious point. IMO, "information having energy" is closely related to the fact that one always has various degrees of confidence in information. That is, we get a message, but the question is: what is the confidence in the message? This certainly constrains the impact the information in the message can have on the receiver's/observer's state. Note the striking association to the concept of "inertia": the inertia of an opinion, resisting revision by an incoming, possibly compromising message. I rather think the association is excellent and beautiful, not a "problem" at all. It may rather allow for a more distinct definition of energy in terms of information; at least that's what's in my plans. Currently the notion of energy is not very well defined in the first place, and THIS worries me. /Fredrik
  16. Interesting topic, but I didn't get the point? There are a lot of quotes and mystical comments. /Fredrik
  17. I'm not sure I follow your way of putting it. The word "mental" sounds strange to me in this context, as if time were something specific to humans. In that case I'm not sure I agree. I think it's at least "as real as anything gets". But of course, one can still argue how real anything can get and how stable this reality really is. For me, what things "really are" is not interesting unless there are discriminating factors available to me. Things remain to me what they seem to be, as induced from all available information, but things change, and I learn and re-learn. But in a sense I think I agree with you that the old "absolute realism" views of these things are doubtful. I think the fact that some things are relative, subjective and fuzzy doesn't make them "less real". I think reality is fuzzy. /Fredrik
  18. As usual I don't know what the original motive is, but if Fred56 suggests that measurements and the definitions of many things ultimately are a bit fuzzy (but still clear for most practical purposes), I personally agree. I think the question then is to ask: how come this apparently fuzzy stuff is still so successful? How come it shows remarkable stability and success? And perhaps more important, and the key that will distinguish the discussion from fruitless philosophy, is whether this thinking or these "insights" can be exploited to improve our own understanding of the apparent and effective reality, so that we can be even more fit? I think so. My personal thinking on time is that it is a (our) choice of parametrisation of observed relative changes. A valid question is how to quantify _change_ before we have the notion of time, without being circular? It seems we need a fundamental "measure of change" that does not utilize macroscopic references such as clocks. I think such a measure can be constructed from "probabilistic" models, where one can induce effective probabilities for changes by also combining information about patterns of observed changes. So I personally still await a fundamental unification of dimensions from first principles. I think it can and will be done, sooner or later. In this way, I personally don't consider these questions irrelevant to fundamental physics. /Fredrik
  19. I'm not sure I exactly follow why you want negative entropy? In the Shannon definition he considers probabilities of distinguishable events, and asks: if we know these probabilities (this is our information), how uncertain are we about the true microstate? To answer this question one can define the Shannon entropy, which is devised to be a measure of our uncertainty about the microstructure. [math]S = - \sum_i p_i \ln p_i[/math] Shannon argues how to get this expression, up to a constant, in his original paper: http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

The link to statistical physics is that the microstate is the motion of the billions of billions of individual molecules spinning, moving etc, and the macrostate (pressure, temperature etc) yields a probability distribution over the microstates. This means that if the entropy of our information (the probability distribution) is high, our _uncertainty_ about the true microstate is high. So high entropy is a measure of missing information, or uncertainty about the underlying state of the microstructure. But Shannon entropy does not consider uncertainty in the microstructure itself, just its state. I guess my original posts went too far in one go.

A lot of missing information about the microstate ~= a lot of uncertainty in the information ~= high entropy. Low entropy means that the information gives very high certainty about the microstate.

> how information can have entropy and reduce entropy too

If we take information to mean the probability distribution (information about the microstructure, or the distinguishable events, as Shannon called them), then this information has an entropy as per [math]S = - \sum_i p_i \ln p_i[/math]. To avoid confusion, in this case calling the entropy the uncertainty of the information, rather than the information, is actually better. Information has entropy, and if the uncertainty in the information is lower, the entropy is lower. This is Shannon's question, and his answer to what this measure is, is the Shannon entropy. By information uncertainty he means: HOW much "information" or "knowledge" about the microstate is MISSING, if we only know the probabilities of the distinguishable microstates? This is the same as asking what the uncertainty in this information is. If the entropy is high, many, many possible microstates are compatible with the SAME probability distribution, which is why we are then "less certain" about the true microstate. I'm not sure if that made it clearer? /Fredrik

Mmm, I think I see your confusion... a suggestion. Perhaps a better wording is to call the probability distribution information about the microstructure. If we are given a probability distribution over the distinguishable microstates, that clearly provides us with (using English) "information" about the microstructure, right? We can also consider this probability distribution to be a message sent to us. Now we wonder: what is the quality of this information? I.e. how much uncertainty is in this information? I.e. how much information is still MISSING to have complete knowledge of all the microstates? A measure of that is the entropy. So I usually think of entropy as a measure of missing information/knowledge about the microstructure. Most of my early comments attempted to be elaborations and extensions of this, and I suspect I instead only messed it up because I didn't quite get your question. /Fredrik
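To make the "measure of uncertainty" point concrete, here is a tiny Python sketch (my own illustration, with made-up distributions) showing that the Shannon entropy is large when the distribution leaves the true microstate very uncertain, and zero when one state is certain:

import math

def shannon_entropy(p):
    """Shannon entropy S = -sum_i p_i ln p_i, in nats; zero-probability terms are skipped."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # maximal uncertainty over 4 states: ln 4 ~ 1.386
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # sharply peaked distribution: ~ 0.17
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # one state certain: 0.0, no missing information

/Fredrik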
  20. I don't know the background of your questions, I'm only guessing... Here are some quite PERSONAL comments, strongly coloured by my own perspective, which may not be relevant to your questions. You decide for yourself.

In the usual notation, entropy and information are related in the sense that the entropy of something is a measure of the observer's/receiver's prior missing information about it. But one can also say that the entropy is a measure of the quality of information relative to your prior knowledge. IMO, the philosophy of information has strong similarities to the philosophy of probability theory. Certain branches, to which I belong, argue that all probabilities are relative; consistently with this, I also think that all information is relative. When one tries to define an absolute (non-relative) measure of information (like the Shannon entropy), that in fact contains a hidden implicit condition: the selection of the microstructure and the ergodic hypothesis that there exists a unique absolute prior or symmetry. The Shannon entropy is in fact conditional on these assumptions, so the absolute appearance is IMO deceptive. IMO, it makes little sense to have a fundamental discussion or reflection on what information is and what entropy is without considering these things, because this is where the concepts are ultimately rooted.

So one can take entropy as a measure of information (or missing information, depending on how you see it). But still, what information is is not unambiguous, and neither is entropy. There are different definitions of entropy, depending on what properties you want it to have: what are you going to do with the property? Instead of the Shannon entropy there is the relative entropy (also called the KL divergence), which is a measure of information relative to a prior. There are three related entropies:

[math] S_{cross} = S_{KL} + S_{shannon} [/math]

[math] S_{KL} \geq 0 [/math]

(A small numerical check of this identity follows at the end of this post.) By some simple toying with basic combinatorics I found that the KL divergence can be related to the expected probability P of making a certain relative-frequency observation... it's something like this, from the top of my head, if I remember correctly:

[math] \langle P \rangle = w e^{-S_{KL}} [/math]

As the number of degrees of freedom of the microstructure goes to infinity, w -> 1. Entropy is a measure of missing information about a message relative to the receiver/observer (= a measure of the NEW information IN the message). Thus, the entropy of the message is clearly generally reduced if the receiver's or observer's prior information is larger. I'm not sure if I got your point though? Like I said originally, I think one can make many different reflections on this, from different angles, depending on your purpose.

In the elaboration [math] \langle P \rangle = w e^{-S_{KL}} [/math], <P> is the expected probability of observing a given message. The problem is that the probability is not exact; it's only an expected probability. But this can be expanded into an algorithm. This is how the information divergence is intuitively related to the "probability" of seeing a particular message: unexpected messages are less likely to be seen, and thus have larger relative entropy. To elaborate on that, consider the following situation: the best estimate you have for the probability, given no other info, is the relative frequency as registered from experience. Consider this to be your prior. Then ask: what is the estimated probability that you will observe a given fixed-size message, assuming your prior is constant? One can apply the multinomial distribution to find this estimated probability, which gives the formula above; basically an estimate of the probability of observing a certain "probability distribution" with a certain confidence level. To observe an unlikely "message" with low confidence is quite likely, but to observe an unlikely message with high confidence is simply unlikely. It sounds like a play on words, but I found that toying with the combinatorics of relative frequencies gives some intuitive understanding of some of these concepts.

But in reality the prior might be dynamically updated if the receiver/observer responds (changes) in response to the new info, so the prior is, I think, dynamical. That's at least how I see it. But this is an area where there is seemingly debate, and probably no universal answer that everyone currently agrees upon, and it gets complicated when you consider this, because the notion of change makes things complex, and then comes time. I think most of the information concepts are abstractions one way or the other, and the question is which abstraction is most efficient for the quest in question. I'm sorry if it's unstructured. I'm working on some papers but nothing is done yet. It's just meant as inspiration and hints. /Fredrik
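Here is the promised small numerical check (my own sketch, with an arbitrary made-up prior and observed frequencies) of the identity that the cross entropy splits into the information divergence plus the Shannon entropy:

import math

def shannon(f):
    """Shannon entropy of the observed frequencies f, in nats."""
    return -sum(x * math.log(x) for x in f if x > 0)

def cross_entropy(f, p):
    """Cross entropy S_cross(f, p) = -sum_i f_i ln p_i."""
    return -sum(x * math.log(q) for x, q in zip(f, p) if x > 0)

def kl(f, p):
    """Information divergence S_KL(f||p) = sum_i f_i ln(f_i / p_i), always >= 0."""
    return sum(x * math.log(x / q) for x, q in zip(f, p) if x > 0)

f = [0.3, 0.5, 0.2]   # observed relative frequencies (hypothetical)
p = [0.5, 0.3, 0.2]   # prior (hypothetical)

print(cross_entropy(f, p))        # ~ 1.13
print(kl(f, p) + shannon(f))      # the same number: S_cross = S_KL + S_shannon

Since S_KL >= 0, the cross entropy can never be smaller than the Shannon entropy of the observed frequencies; the excess is exactly the divergence of the observation from the prior. /Fredrik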
  21. Great reflections. I think there is probably no end to how far this can be discussed. I guess many others will respond, but here are some first comments. A simple answer might be to say that thermodynamic entropy and Shannon entropy are different things, which they are... but that is a too simple and unsatisfactory answer, because they have obvious relations which are interesting and which the curious mind can't dismiss as coincidence. One can also discuss the concepts of information and entropy all on their own. Shannon entropy is in fact far from the only measure of "information". There are different versions; sometimes one can assign desirable properties axiomatic status and prove that one particular entropy definition is singled out. But then again, those axioms can be chosen differently. Usually information is defined in terms of probability, or in terms of combinatorics based on microstructures, which are loosely related too. So the concept of information itself is, at least in physics, not as unambiguous as one might think.

Except for the factor of Boltzmann's constant, thermodynamic entropy is IMO pretty much "in the spirit of Shannon entropy": if one defines a probability space from the microstructure and takes N -> infinity, there is a relation between Shannon's formula and the number of distinguishable microstates of the chosen microstructure (see the sketch at the end of this post). There are several philosophical problems with this that may suggest that Shannon's measure isn't universally the best one, for example when you try to do relativistic thermodynamics. The large N is one issue, but a more serious problem is how to select a microstructure where each state is a priori "equally likely". This is not trivial, especially if you try to understand it in a context where you want all constructs to be induced from real observations. Taking arbitrary prior distributions into account, one is led to various relative entropies (the K-L divergence, or information divergence), which is sort of an updated version based on conditional probabilities. Getting rid of the "large N" thinking will, I think, have even more profound implications, connecting to change, time, and possibly also energy and mass. I think there is a lot around this that, to my knowledge, no one has satisfactory answers to, and people have different "effective methods" of handling the issue. I personally think the revolution in physics will be related to information and information processing, and that sooner or later we will reveal the fundamental connection between the laws of physics and the laws of communicating, information-processing observers. /Fredrik

I think the next intriguing thing is that information probably has mass/energy too, but that only makes sense in a context where there are changes. I do not have the answers, but I think this is a very good focus. An extremely simple first-hand association is inertial mass ~ confidence in your prior probability distribution. When new information arrives, you need to make a decision whether, and how, to update your prior. Clearly one fundamental element is to assign relative confidences to your prior and to the new, possibly conflicting, information. This allows for a very simple yet profound possible link to physical inertia. This is something I started thinking about again this spring, and it's quite exciting. It's way too fascinating and associative to be an unlikely coincidence. Unfortunately, and also strangely, it is hard to find many papers that treat this from a fundamental viewpoint. There are many other approaches that touch on this, but from a completely different angle, coming from within other major approaches. I think there is a need for a clean revision of this from more first-hand principles. I think ideas from different fields are healthy. /Fredrik

One of the implications of my previous "issues" is that consistency requires that sometimes not only the prior is updated; the probability space itself that the prior "lives in" can also change! All this can be viewed from pure information-processing reasoning. This makes it very complicated and all the more fascinating. /Fredrik

Another way of seeing the latter is that the microstructure itself, which is used to define information in the first place, is itself uncertain! In the Shannon case we really make an assumption: we select a microstructure (formally at will, although of course there are good grounds for it, but they are not perfect) and fix it. This does not make sense in the general case IMO. /Fredrik
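As a concrete illustration of the "spirit of Shannon entropy" remark above: with a uniform distribution over W equally likely microstates, Shannon's formula reduces to ln W, i.e. Boltzmann's S = k_B ln W up to the factor k_B. A minimal Python sketch of this (my own example, with an arbitrary W):

import math

def shannon(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

W = 1000                      # number of equally likely distinguishable microstates
uniform = [1.0 / W] * W
print(shannon(uniform))       # ln(1000) ~ 6.9078
print(math.log(W))            # the same value: Boltzmann's ln W, up to k_B

/Fredrik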
  22. The question is remotely related to many things, but the main question was to hear what people think of applying normal probability theory to describing reality, with a focus on natural and physical phenomena. One can probably make many reflections on this in different ways. One way of seeing it is that if we want to make quantitative predictions in a structured way, so that we can keep some book-keeping on our progress, one seems naturally led to develop symbolics and the concepts of numbers and mathematics, which tend to provide us with a consistent language; and sometimes considering the possible mathematical formulations can provide a hint to how reality might behave that is consistent with the chosen framework. I see mathematics as something that has been developed by humans, and humans are part of nature. It seems to guide many physicists. For example, extending the intuitive ideas of geometry to higher dimensions and non-Euclidean geometries has attracted many people. Assuming you think that reality must be described by such a framework, you can study the formalisms themselves and find that there are only certain ways that are internally consistent, which means consistent with the framework itself. The frameworks themselves are constructed from axioms, which themselves can not be proven or disproven; they must only be consistent with the rest of the axioms. One can construct many different frameworks from different axioms, and the question is how to choose the more efficient one. Also, there is nothing that stops us from constructing new axioms. Another framework is the probabilistic one. My question was whether the probabilistic framework, originating from the axioms of probability, really provides a sound basis for a fundamental theory as is. This is almost a philosophical question, but then even science has its roots in philosophy. My focus is not just to find the best theory, but also to describe that very process better, and to consider theories as dynamical objects too. /Fredrik
  23. Thanks for the comments! In a sense I'm all with you; I think what you say is pretty much the standard methodology: unitarity is sort of a consistency check, and if new evidence arrives, the "scientist" needs to replace or rework his theory to recover unitarity... I will not argue with this. ...but here we are stepping over the non-trivial step: the response that recovers unitarity. This response itself does not seem to respect unitarity in the general case. My point was that in the quest for a fundamental model, one might expect some consistency of reasoning beyond what is demanded of an effective theory. Shouldn't one such requirement be a uniform treatment of interactions? Why should the scientist's response to observations be fundamentally different from a particle's response to a field, or to a collision with another particle? Indeed, a solution is to inflate our models with more dimensions and fields, basically increasing the information capacity of our model, but if those additional degrees of freedom are introduced on a generic basis and taken to be objective, then I see another issue with that. The model is far more complex than called for, in most cases. The model is designed to give unitarity even in the worst case. We are considering hidden / not yet seen structures to explain what we might possibly see. Isn't this severely disturbing?? Sure, unitarity makes things easier in a certain sense; it provides us with a solid playground. In non-unitary models, the playground itself is not solid. But that's exactly what reality is, isn't it? Is this *really* what we see, or is it what we think we see? Anyway, this is not what I see. I see a bunch of scientists who do unitary theories, and every time they are wrong, they respond by reworking the theory to again make it unitary. And they pretend this "reworking" isn't part of reality. Why would the life of a scientist be _fundamentally_ different from that of any physical system? The one obvious difference is an enormous difference in scale of complexity, but if we are looking for a fundamental understanding of reality, I can not accept this special treatment. /Fredrik

What I'm after is a fundamental understanding of physics in terms of a dynamic inflation that is observer invariant. The, let's say, "differential" deviation from unitarity is what inflates and tweaks the observer's state. That we can not just keep inflating things ad absurdum seems clear, because a limited observer can hardly (information-wise) relate to this. And then the question is... what happened to the idea that science is about what the observer can measure? I see a consistency issue in the methodology here. Comments? /Fredrik

Yes, since non-unitary modelling is a fundamental relaxation, it gets more sensitive. But in my thinking there may be fundamental principles that yield stability: self-organisation; unstable structures simply don't survive, they exist transiently but they don't persist. In my thinking, any time an observer is close to equilibrium (in a wide sense) with his environment he will see unitarity, all consistent with experience. But one can imagine cases where we are so far from equilibrium that unitarity is a poor approximation. This is where I associate equilibration with learning. This provides (in my thinking) the analogy between the scientist and any other system. The knowledge the particle has attained needs to be stored in correlation with the environment; this equilibration is, I think, related to the arrow of time, and it explains why the arrow of time is hard to see on the micro level, but as we beef up the complexity it becomes evident. To me it's an indication of something missing. /Fredrik
  24. I am curious whether there is anyone on here who could, under some circumstances, imagine a sensible fundamental non-unitary model of physical reality? To argue about conservation of probability may seem foolish, because it follows from the axiomatics of probability theory. That is not the level at which the question is intended. The key is: in reality, as opposed to pure mathematics, can we really properly attach the axioms of probability theory to a fundamental theory? That we can do it with great success in many effective theories we all already know, so that isn't the question either. We can calculate the probability, and then collect all the data from the lifetime of the universe and get the relative frequency, which we take to represent the probability? Is this completely satisfactory, or just almost satisfactory? And does the distinction make a difference? I don't mean to make this a lengthy discussion of the topic, but I am mainly curious to hear comments, reflections and opinions from everyone who has reflected on this once or twice. /Fredrik
  25. My personal view of the key points is, in a certain sense, that

1) QM has a more modern and proper view of science, namely that scientific models should concern things we can measure, directly or indirectly, and one cannot in the general case assume that one can make measurements arbitrarily without actually making a difference, or that considering "gedanken measurements" alone is satisfactory.

2) GR OTOH realises that certain things are relative to the observer's state, and thus one cannot generally make comparisons between observations made by different observers. The notion of comparing observations that relate to different things needs to be defined. All observations can be nothing but relative. And to compare two observers, one may wonder how to make a parallel transport of one observation to another observer with "minimum" distortion. If that is not possible, then the notion of comparison is undefined.

3) Still, relativity only attempts to do this by accounting for a certain class of observers: those observers who are related by the spacetime transformations of GR. It should be fairly clear that observers can differ in more ways than their relative spacetime motion and position!?

4) The QM foundations are still IMO not satisfactory in their application of probability theory. One reason is that it's not trivial to measure a probability distribution or a probability space, because these are also relative to the nature of the observers. I think some people swallow this because nature has so far not allowed a clear discrimination to take it beyond "philosophy", while OTOH some people apparently were never bothered by this in the first place. I think we need a new mathematical framework. Unitarity is like trying to know your own ignorance with certainty. I can't imagine how that can make sense except on paper, where you can do anything you want. /Fredrik