Everything posted by fredrik

  1. I suspect that if the strings are decomposed into smaller units, then strings aren't fundamental anymore - at least relative to that decomposition. However, as in any axiomatic approach, one can easily imagine that what is an axiom in one system is a theorem in another, as long as they are consistent. Perhaps in the elementary-string picture the "bits" aren't given any ontological status as "physical bits", but are merely bits that arise from the choice of description. This relates to opinions expressed here or elsewhere about the different views people take on models. If you consider the mathematical model as just a human model, one can mathematically imagine continuous things, but then the continuum has no physical basis; it's just one out of probably many possible mathematical models. But I have problems with such thinking - to me it amounts to not taking the model seriously. I think the modelling should be kept in contact with reality and map onto physical reality to the maximum possible extent. This is also related to the use of symmetries in physics. On one hand unbroken symmetries are trivial, and thus represent a complete redundancy; broken symmetries are OTOH the interesting ones, and the breaking of symmetries seems to be a key focus. This is particularly relevant when you consider that ideally the laws of physics must exist also for observers other than humans. Therefore I think that ideally the models should, one way or another, be representable as physical structures. I share part of this with Tegmark's thinking. IMO the point is not to say that physics is mathematics, but rather to take the mathematical models seriously and find the minimal model that exactly maps onto reality. In this way, consider a particle: the concept of efficiency of representation then becomes important. What are the limits on what a small system can physically relate to? How do two small systems actually communicate? They can hardly afford the luxurious, redundant setting of human science. Yet don't we expect that, in principle, the same logic applies to a small subatomic system as to a human, when searching for a unified theory scalable over complexity? Regardless of what one thinks of string theory, I still think this is an interesting link, because it can connect different approaches in a sensible way and perhaps reveal common denominators. Another way I see discreteness as the most natural starting point goes back to the philosophy of science and logic. In logic the simplest possible thing is a statement that is true or false, a boolean statement. Using this, together with axioms, we have been able to construct impressive theories and frameworks for continuous mathematics, but they can all be thought of as boiling down to, or being built on, discrete logic from which relations and systems are built. /Fredrik I hope this isn't considered off topic, but I see it as closely related. A very similar question is, while respecting measurement ideals: how can we RELATE to an infinitely resolved (continuum) probability? How? The analysis with frequentist interpretations shows several issues, one of them being the memory requirement. The memory directly limits the possible resolution of the probability/microstructure. If we distinguish between the continuum of the formalism and the physical probabilities, then the formalism contains nonphysical redundancies.
If you don't care about this, fine, but if we look for the minimum representation, the redundant degrees of freedom are wasting our resources. Even these first-principle considerations suggest that our probability spaces are discrete, and thus better thought of as microstructures forming a physical basis for encoding - but then they make up a discrete probability space. If someone OTOH really believes in infinite information inside bounded structures, then I have serious difficulties understanding how that can be turned into something constructive and connected to logic. It doesn't seem to make sense, not to mention how you would pull time out of the processing of infinite sets. /Fredrik
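To make the memory argument concrete, here is a toy sketch (my own illustration, not any established formalism; the function name is just for this example): an observer whose record is a k-bit register can only distinguish relative frequencies in steps of roughly 1/2^k, so the resolution of any probability it can physically encode is bounded by its memory.

    # Toy sketch: how a finite k-bit register limits the resolution of an
    # estimated relative frequency. Illustration only.
    import random

    def k_bit_frequency_estimate(samples, k):
        """Count events in a k-bit register; the estimate is quantized by the register size."""
        capacity = 2**k - 1          # largest count a k-bit register can hold
        count, seen = 0, 0
        for s in samples:
            if seen == capacity:     # register full: this observer cannot refine further
                break
            seen += 1
            count += int(s)
        return count / max(seen, 1), 1.0 / max(seen, 1)   # (estimate, resolution)

    random.seed(0)
    flips = [random.random() < 0.3 for _ in range(10_000)]
    for k in (3, 8, 16):
        est, res = k_bit_frequency_estimate(flips, k)
        print(f"k={k:2d} bits: estimate ~ {est:.4f}, resolution ~ {res:.6f}")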
  2. Another comment on this. I'm not a string advocate at all and won't be one, but it has occurred to me as well that the simplest possible extension of a boolean record, as the memory size increases, is a structure defined by the history of the flipping states, which in the continuum limit defines the probability. Now consider that this probability is changing, and is thus unstable. One can easily imagine the probability range 0-1 defining the coordinate along a string of unit length, and the uncertainty of the value at each position defining a new dimension - so as to see this as a one-dimensional string, swinging into further dimensions, where the dimensionality of the external space is related to the "complexity of the uncertainty". This way the origin of a string is understandable, and IMO the string dynamics can be deduced from more basic first principles. I.e. there is no need to postulate the existence of elementary strings. I have not read Smolin's original suggestions of what he means by string bits, but if this has any relation to it, then the string bits are really more to be seen as information quanta. And the string is more like the simplest possible imaginable "pattern". And these patterns can be further excited and so on. This has the potential to make perfect sense. But one would only wonder what took them so long. I guess it's because this is supposedly a philosophical question; like everything else there are no clean and obvious routes to find the answers. /Fredrik
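As a toy illustration of what I have in mind (purely my own sketch, nothing taken from Smolin's papers): a boolean history gives a running relative frequency, which plays the role of a coordinate on the unit interval, while the fluctuation of that frequency plays the role of a crude transverse direction.

    # Toy sketch: a boolean history, its running relative frequency (a "coordinate"
    # on [0,1]), and the fluctuation of that frequency (a crude "transverse"
    # degree of freedom). Purely illustrative.
    import random

    random.seed(1)
    history = [random.random() < 0.5 for _ in range(5000)]

    running_freq = []
    ones = 0
    for n, bit in enumerate(history, start=1):
        ones += int(bit)
        running_freq.append(ones / n)

    tail = running_freq[len(running_freq) // 2:]            # late part of the history
    mean = sum(tail) / len(tail)                            # the "coordinate"
    fluct = (sum((f - mean) ** 2 for f in tail) / len(tail)) ** 0.5
    print(f"coordinate ~ {mean:.3f}, transverse fluctuation ~ {fluct:.4f}")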
  3. I guess this thread has turned blog-like. I think the association to computing, and in particular _computing time_, is a good one. One can consider transformations as more or less efficient, and I think it might be possible to describe spontaneous structure formation in terms of a diffusion of information between structures, where the feedback implies a selection of efficient structures. So what starts as the brute force of computing, like a random testing of all describable options, will randomly restructure, and clearly the more efficient structures are more likely to survive. I'm trying to see how such a description might possibly explain the emergence of the complex amplitude formalism. I've got a strong feeling that the mathematical relation between the dual spaces is selected for its special properties in this way - if this works it implies an interesting connection between these information ideas and mathematics itself. I've got some ideas that are closing in on being possible to formulate in terms of a testable formalism. In a few months I hope to have some more insight into this. /Fredrik
  4. A personal comment... but I'm not sure if I got the question right. It's a proposed solution to the problem that quantum mechanics has classical "background" structures, which seem to come out of nowhere and be classically objective. Quantum darwinism is the idea that these background structures are selected by allowing the observer - being a subsystem that cannot possibly encode the information of the entire universe - to interact with its environment, and this interaction might statistically favour the formation/selection of a "reference" background that is stable with respect to the environment. The basic idea is excellent, but the problem is that the explanation is only an explanation relative to a larger subsystem. The explanation itself, as posed in this approach, is IMO not contained in the original system. Thus the proposed answer is, IMO, the answer to an alternative question rather than the original one, but I think the basic idea of environmentally selected references is dead on. There might be versions of it, or other similar ways to implement the basic idea of "environmental selection". I dig this idea, but I think we need more ingredients to get a satisfactory solution. I'd like to express the selection in terms of "original learning" rather than "environmental training", although it's sort of the same thing. A difference is probably in the representation: the original-learning strategy will be "less classical" but more "compact", and it can hopefully "live" in the original system only (respecting its limited information capacity). /Fredrik
  5. > "how do we know uncertainty and time do not have any kind of a relationship" I personally think it has a deep relationship. For Zurek's ideas see for example http://en.wikipedia.org/wiki/Quantum_darwinism At the end, there are some arxiv papers by Zurek. Yes, Quantum darwinism is a decoherence related idea. I personally think some of this thinking is interesting and possibly part of the solution, but I don't think it's the final solution alone because there are many problems Zurek's papers leaves untouched. /Fredrik
  6. This is the flavour of reasoning I like. I have tried, but I fail to imagine how a general observer can relate to a continuum. I've also tried to connect logical reasoning to a continuum, and it seems the only logical and rational way to introduce the continuum is as a limiting case of discrete models. In effect this is what we do when we introduce real numbers in mathematics. When we are dealing with continuum models I think we are dealing with limiting cases. And can there be interesting physics taking place where this limiting case is invalid? I think so. /Fredrik
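To make the analogy with real numbers explicit (this is just the standard construction, not a physical claim): a real number is introduced as a limit of discrete, rational approximations, e.g. as an equivalence class of Cauchy sequences,

    x = \lim_{n\to\infty} q_n, \qquad q_n \in \mathbb{Q}, \qquad |q_n - q_m| < \varepsilon \ \text{ for all } n, m > N(\varepsilon),

so the continuum enters only as an idealised closure of discrete data, which is the sense in which I mean "limiting case" above.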
  7. I think a related question, when asking whether space is discrete or not, is whether the answer is conditional on the observer or not, because a question is formulated in the context of something. I.e. maybe it's related to "observational resolution". This is my personal take on the problem: the key is not only in space itself, but just as much in the observer relating to it - is the observer discrete? /Fredrik
  8. Fwiw, honey certainly contains both fructose and glucose. A typical analysis of honey by weight could be ~38% fructose and ~31% glucose; the rest is water, more complex sugars, minerals, nitrogen sources etc. /Fredrik
  9. Random??? I see this as a matter of definition, but for me the word random is an idealisation used to treat cases where we lack discriminating information to prefer one option over another. If we can't predict something, we call it random and then assume it has a fixed probability distribution. But how do we know the probability distribution? Record an infinite amount of data and use the time series to calculate the relative frequency? What prevents the actual recorded sequence from being taken as the exact pattern? Of course the point is how to predict the data before it happens, right? But then, before we have seen the data, how do we identify the probability distribution? How do we know that the relative frequency will be the same for each large subsequence? The concept of randomness is IMO a kind of idealisation, and therefore to ask if something is "truly random" or just "apparently random" is unclear to me unless someone can identify a strategy for measuring randomness. But given that, if this measurement takes infinite time, then it means we will never find out in finite time? And even if we did, we would probably run out of memory before that. I think there is a domain where truly random and merely unpredictable are fundamentally indistinguishable; the difference sits in an idealisation that doesn't make full contact with measurements. /Fredrik
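To quantify what a finite record can actually say (this is just the standard frequentist estimate, nothing deeper): with N recorded outcomes of which k are "successes", the relative-frequency estimate and its uncertainty are

    \hat{p} = \frac{k}{N}, \qquad \sigma_{\hat{p}} \approx \sqrt{\frac{\hat{p}(1-\hat{p})}{N}},

which only vanishes as N goes to infinity - exactly the infinite-record idealisation I'm questioning above.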
  10. Hello Fred, I was away for a few days. Are you not using the term energy where perhaps "free energy" - that is, energy that can be spontaneously converted to work (classically speaking) - might be more appropriate? Which in essence is really more related to entropy IMO, since it measures the amount of energy you can extract as work while still having a spontaneous reaction, which is determined by the total increase of entropy (i.e. being overall probable). Phrased differently, it can loosely be seen as a measure of how unlikely a process you can drive locally while still keeping the global process likely. Likely and unlikely are meant to associate with probable vs non-probable and spontaneous vs non-spontaneous. If this is what you mean I think we are describing it similarly. Sure, I think in the abstract sense any interaction is communication - eating included, if we are talking about humans. I am not sure what you mean by expenditure, but perhaps we can see expenditure as a "risk"? To make progress you take risks? Ultimately I think even "in-animate" objects evolved the same way, except their complexity level is far lower. In a certain sense perhaps "reproduction" can occur indirectly by propagating your opinion into the environment, because this possibly "selects" the environment to be more favourable to your alikes? That's of course a gross simplification, but the idea is an interesting possibility. This would mean that this abstract reproduction would be a sort of self-stabilisation of cooperating observers. /Fredrik
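Just to pin down the classical terms I'm using (standard thermodynamics at constant temperature and pressure): the free-energy criterion and the total-entropy criterion are the same statement,

    \Delta G = \Delta H - T\,\Delta S_{\mathrm{sys}}, \qquad \Delta S_{\mathrm{tot}} = \Delta S_{\mathrm{sys}} - \frac{\Delta H}{T} = -\frac{\Delta G}{T} \;\geq\; 0 \ \text{ for a spontaneous process,}

so the "free" part of the energy is exactly what you can divert into work while the overall process remains probable.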
  11. I just got home from a work trip. About Christian's paper: it's not available, and I never liked the idea that you have to buy articles. The preview doesn't motivate me to purchase it. Garrett's paper seems very interesting from my quick read. However, he doesn't seem to address the foundational problems from a first-principle view; he is rather guided by mathematical beauty. For this reason I see his paper as very interesting, and I hope that people will follow it up, but it does not in that paper attempt the fundamental reconstruction that I'm personally attracted to, and therefore I have no specific comments without analysing the paper in depth. My immediate comment concerns, from my point of view, his choice of problem and method. But I don't see in what sense Garrett's paper has to do with classical explanation, or what that means? /Fredrik
  12. I think you wonder how an observer can increase its information and certainty spontaneously? I picture this, as usual, as being driven by the total "entropy" (entropy should be used with care in this context, so I put it in quotes, but the real point is that the expected change is in this direction), so the environment simply favours the evolution of knowledgeable inhabitants, until there is an information balance where the observer loses as much as it gains. Analogous to heat exchange, except generalised. I personally don't see this as a big conceptual mystery at the moment. /Fredrik There are many ideas along these lines, various decoherence-inspired papers where the environment is important. That is quite interesting and probably provides part of the answer, but not all of it as far as I can see. It does not always, in my opinion, make sense to consider the environment as an infinite sink in the same way that we sometimes do in thermodynamics. This is again an idealisation, but of the wrong kind for what we need IMO. It seems to be made with the reductionist philosophy in mind, in the sense that everything is understood as a simplification of something more complex. But I think that misses the whole point: that there is a limit to the relatable complexity for any observer. I think we need to analyse the situation from the right perspective - from the admittedly "incomplete" perspective - rather than trying to understand the incomplete perspective as a reduction of the complete perspective. I.e. analyse the logic of induction, based on incomplete information. /Fredrik The part that is cheating is when the "mass" of the environment is orders of magnitude larger than that of the observer. But what if the mass of the observer is comparable to the remainder of the universe (whatever that means)? Then we need another strategy of analysis, because ultimately there is a symmetry between the observer and the environment: you might as well choose to say that the environment is the observer, and the observer (the remainder of the universe) is the environment of that observer, although twisted. /Fredrik
  13. I'm not going anywhere, and I'm open to having my opinion challenged by the unexpected. But it would have to fight the inertia of my current opinion and turn the unexpected into the expected, and until the unexpected occurs my personal journey continues as per expectations. I need a reason to change direction. /Fredrik
  14. I personally think this is a sound objection. I think when we can answer this in full, we may also understand how the probability spaces as well as the classical structures are formed, and how this relates to fundamental concepts like mass. In my preferred view, the observer is the realisation of the record, and subjective probabilities are induced from the observer himself. Any objective, as opposed to subjective, probabilities would only make sense to the extent that there is agreement among a large set of observers. The emergent physical nature of the observer is the microstructure that is used to encode and store information. In this view it's clear that complex microstructures will have a larger "inertia" against change than simpler structures. There seems to be some close connection between inertia (as in resistance against perturbation) and "record capacity". What remains is to identify a successful formalism to implement this, and to pull out of it also time and space - to understand exactly how all structures emerge from the basic principles. And in this context most probably background structures like space and superimposed structures like particles must be unified. I expect that these almost philosophical questions will get an answer eventually. /Fredrik
  15. I intermittently lose focus in the discussions as they seem to take place on different planes at the same time. I thought the original topic, "mass of information", aimed to understand mass in terms of information, or alternatively as a property of information. This is bound to be muddy but interesting. Perhaps the first step is to dissolve the pre-existing ideas and then try to regroup. IMO at the information/mass level we are probably below spacetime; then suddenly the concept of a "photon" enters the discussion, which is defined in a different set of ideas and formalisms and IMO is a higher-level construct, but in another construction. So what are we talking about? If we regroup and want to reconstruct the universe again, I think it's very confusing to mix in objects defined elsewhere, as the connection is unclear, to me at least. I think if we are to start discussing inertia and information in a reconstruction, the photon - as well as all other existing concepts built on the standard formalisms and models - needs to be reconnected to the reconstruction? At least that's how I picture it. What is the meaning of a photon if we have dissolved spacetime? /Fredrik I belong to those who wish to reconstruct the most fundamental concepts from minimal first principles, and then see why new concepts/structures, as complexity allows, are unavoidable inductions. It seems we may disagree on what should be fundamental? If we start out by accepting the notion of 3D space, then I have personally lost track from the outset. I think we need to see how the space we apparently see can be induced. And what about the certainty of that induction? It seems that at the logical level one would want to assign measures similar to inertia and momentum, and thus to find a proper information-theoretic connection to these concepts, just like we try to find information-theoretic explanations of entropy, which has also proven to be non-trivial. /Fredrik
  16. I take it your task is to design a school experiment that you should carry out and explain? The time required for biological processes depends on many things. Usually beer and wine makers call the primary fermentation the part where the yeast converts most of the sugars into ethanol. The time depends on many things, in particular how much yeast you start with. But if you pitch at industrial rates, primary is usually 2-7 days. Peak fermentation, where there is a maximum active yeast population and CO2 production, usually occurs 1-2 days after the start. Secondary is the finishing phase of fermentation, where the last fraction of slow sugars like maltotriose are partially converted (there are always residuals) and some maturation is going on, the yeast is flocculating and so on. It's a clearing, finishing and early conditioning phase. Secondary can be, say, another week. But sometimes it's much longer; it depends on the conditions. Some powdery strains may take up to a month or more to flocculate after finished fermentation. How long it takes to "spoil" beer or wine depends on how stable and how contaminated it is to start with. And at least for an amateur with little sophisticated equipment, your nose is by far your most sensitive gauge. So the question is how you intend to gauge it: tasting, smelling or more quantitative analytical methods? I often leave slants of beer and wort around in my apartment just to see what happens, and you can get very funny things growing if you wait. But if you want something significant to happen in 4 days you need to pitch a culture of appropriate bacteria into it. Leave an aerated ethanol solution around to capture acetobacter. I never made vinegar, but I've captured acetobacter several times by mistake. If I were to make vinegar properly I'd definitely want to culture up a pure population of acetobacter to pitch with. That will make it more reproducible and much faster. Maybe the first step of your project could be to catch wild acetobacter and keep them alive, then use them to perform your ethanol experiment? /Fredrik
  17. Some _personal_ opinions again. I think these are all healthy reflections! The thread is converging back to the point after some diversions; persistence is the key to progress. There are many elaborations and questions one could make out of this topic indeed; it's hard to know where to start. But from some past discussions I think a decent starting point is to appreciate roughly the success of classical statistical mechanics - how partitioning and probability work there. Without this background I suspect the objections that I think some of us are trying to put forward are hard to appreciate. I see several related issues in bringing this forward. In classical stat mech there is no ambiguity in selecting a partitioning and thus effectively defining your microstructure and probability space (these terms are all related and are sort of different views of the same thing). So everything is A LOT easier due to the idealisations. In a revision of this, the partitioning is ambiguous for several reasons, thus making any constructions conditional on this choice. The other issue, which possibly touches on gravity and quantum gravity, is that the microstructures must be defined in terms of relations to the observer, not only making them observer-relative but also dependent on the observer's relational capacity; this connects information capacity and mass, but in a way that is not yet understood. As Zurek said in the context of "quantum darwinism": "What the observer knows is inseparable from what the observer is" -- Rev. Mod. Phys. 75, 715 (2003). So not only is the partitioning ambiguous, the nature of the observer IMO most probably puts constraints on the complexity of the partitionings that are possible. Also, since the new view is by construction made in a dynamical context, the concepts of state and dynamics are blurred, and so are ontology and epistemology. About entropy, one can ask: what do we want it to be? What do we want it to be a measure of? This question also determines the measure itself. The von Neumann entropy is IMO certainly no divine measure given to us. It's simple and formally close to the classical counterpart, but if you analyse it with the mentioned issues in the back of your head, it's easy to motivate yourself to find something more satisfactory (this is regardless of the practical value ANY idealisation has). /Fredrik
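For reference, the two measures I'm contrasting above are (standard definitions) the classical Gibbs/Shannon entropy of a probability distribution over microstates and the von Neumann entropy of a density matrix:

    S_{\mathrm{classical}} = -\sum_i p_i \ln p_i, \qquad S_{\mathrm{vN}}(\rho) = -\mathrm{Tr}\,(\rho \ln \rho).

The formal similarity is obvious, but which partitioning/microstructure the p_i refer to is exactly the observer-dependent choice I'm questioning.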
  18. I see. Then I'd look closer at the particular case to see what factors and conditions you have. Is this primary or secondary fermentation, or is it bottled wine? And are you considering an atypical case or a normal case? Another complication is the esterification processes. But relatively speaking, those make a marginal modulation of the ethanol concentration, with larger relative impacts on esters and acids - and still, 4 days is a short time frame. There are also other types of alcohols besides ethanol that, relatively speaking, may change more during maturation. I've mostly been into beer, but I'll check tomorrow if I've got some references. I almost have a bookshelf with only beer and yeast articles. /Fredrik
  19. I agree that if you get a wine turned into plain vinegar, it's probably microorganisms and oxygen that are the problem. But oxidation reactions in alcoholic beverages are complex, in particular if you are talking about oxidation in the context of _flavour changes_, because subtle changes may have significant flavour impacts; those pathways are not necessarily practical to exploit as an industrial method to oxidise alcohols. So I wonder in what context you ask this? Are you trying to find a method to exploit industrially, or are you actually trying to understand flavour stability of wine? Bacteria are an option; acetobacter also needs oxygen. Normal yeast can also, to a certain extent, oxidise ethanol, given oxygen, although these pathways are repressed during normal fermentation conditions. There are also different pathways that dominate depending on conditions. During normal fermentation, ethanal (acetaldehyde) is an intermediate that is reduced to ethanol. But part of it is oxidised further to acetic acid, which is then turned into acetyl-CoA during typical fermentation conditions. This is a different pathway to produce acetyl-CoA than during respiration. But there are other ways oxidation of alcohols takes place. There are typically a lot of interacting redox pairs in beer and wine, including metals, sulphur compounds etc. Some metals, in particular iron, act as catalysts and can speed up the negative aging and increase the production of various aldehydes. Some of the oxidations and reductions are part of the maturation and aging process as well, and residual biological activity is only part of the story; plain chemical reactions with complex chains of redox reactions are also at play. For example, too much iron in your water is not nice if you want to make beer or wine; the flavour threshold of acetaldehyde is very low, much lower than the threshold for acetic acid. Sometimes emerging infections are first detected as an early green acetaldehyde aroma, before the final vinegar stage comes. Acetobacter can produce a lot of acetaldehyde, and so can other bacteria that occur in beer or wine. /Fredrik
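To summarise the pathway I'm describing (standard chemistry of acetobacter / oxidative spoilage): ethanol is oxidised stepwise via acetaldehyde to acetic acid, with the overall aerobic reaction being

    \mathrm{CH_3CH_2OH} \;\xrightarrow{\text{oxidation}}\; \mathrm{CH_3CHO} \;\xrightarrow{\text{oxidation}}\; \mathrm{CH_3COOH}, \qquad \mathrm{CH_3CH_2OH + O_2 \;\rightarrow\; CH_3COOH + H_2O}.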
  20. This may seem like a lame answer, but I personally haven't spent that much time picking Bell's theorem to pieces, mainly because from my point of view it's not a high-relevance problem. It seems the original motivation behind it was the desire to maintain the old ideals. I have personally released those ideals on other grounds, and thus trying to find flaws in alternative grounds for releasing them is not something I can motivate. But if there were motivation, I think a lot could be said about it and the implicit ideas used in it. For example, locality itself refers to the concept of space and distance. But what is space in the first place? If I'm not mistaken, Ariel Caticha (whose proclaimed vision is to derive general relativity (classical) from principles of inductive inference and maximum entropy) has elaborated that space is rather defined in terms of relative influence - that distance is rather defined as a measure of influence or distinguishability. But how does one effectively separate 3D space from other generic configuration spaces? In particular, if you spell out the locality assumption properly in terms of conditional probabilities and use Bayes' theorem, I can't help wondering why locality should be obvious. There are many other things I prefer to get a headache over than trying to restore local realism. Thereof my slightly misdirected attention. /Fredrik
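For concreteness, the locality assumption spelled out in terms of conditional probabilities (this is the standard factorisation used in Bell-type derivations): for outcomes A, B, settings a, b and a hidden variable lambda,

    P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda),

and it is exactly this factorisation - which presupposes that "distance" already singles out what may influence what - that I find less than obvious once space itself is up for explanation.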
  21. I guess this comment was general, and perhaps not relating to my post, but I'll just note the following to avoid being misunderstood. I personally don't have anything against uncertainty concepts, indeterminism and leaving the classical realm - this is good stuff. What I do have serious objections to, on the contrary, is the determinism of probability in QM. The original idea of QM is that while we cannot in general determine the outcome of a particular experiment, we CAN determine the probability exactly - this is where it smells. This usually resorts to ridiculous reasoning about infinite experiment series and infinite data storage. The argumentation holds perfectly in the practical sense, but it is not rigorous reasoning to me, though that's possibly relative. The foundations of QM are IMO idealisations. Their support is that their implications are proven effective in experiments, which means they are GOOD idealisations. But if we now want to extend our understanding to the next step, the question is whether these idealisations should be updated or whether we can patch onto the existing ones. I see these as philosophical questions of science and its method. I can't comment on the days when QM wasn't known to mankind, as I wasn't around, but I figure that popping the realism of classical mechanics was an eye opener (it was for me). In the light of today, of course, the foundations of classical mechanics are even worse than those of QM. But that doesn't stop it from being excellent in most cases. I am not familiar with his thinking, but I think the impact of the self on its conception of the world view is necessarily massive, because I think my conception of the world lives within me (encoded in the microstructure that makes me up) - where else would it reside? But what is me? Are my notes and books and computer part of me? How about society? I think there are different ways to define a self, but I cannot think of the self as being defined unambiguously without reference to its environment, and an organism is consistent only in its right environment. /Fredrik
  22. Just to add a personal opinion as requested in the OP. I think quantum theory needs to be revised. But OTOH, what is the difference between revised and extended? It's a matter of point of view I guess. In a certain sense QM is very beautiful, but there are some major problems I have with it, and they root in the probability formalism used. When QM is so anal about everything being measurable (which is GOOD), the foundations are seemingly an exception to this principle. The concept of objective probabilities is very foggy business IMO. Also the concept of objective probability spaces is similarly vague IMHO. There is probably still a very good explanation why the current models are successful, and an extension will not change this, just like we can see why classical mechanics is still right for all practical purposes in most everyday scenarios. I think QM is emergent, and the same goes for many of the symmetries. One of the annoying things is that sometimes the theories are formulated as if they were independent of an observer, while they can never be fundamentally so as far as I understand, only in an effective or emergent sense. For practical purposes there is not always a difference, but the difference will show in the effectiveness of extending our knowledge. None of these ideas contradict current effective models though. /Fredrik
  23. I did a quick search but found mostly various speculations that the brain "seems to do a quick FFT", and various _models of the brain_ using Fourier transforms in pattern recognition. I see the clear association there, but such an observation is on the same level as nature itself - it doesn't yet explain anything. I am trying to understand this more deeply and how it connects to a logical description that can be formalised. The Fourier transform has interesting properties; for example the gaussian distribution (being central in probability theory) is self-dual and transforms into itself. The gaussian is not the only self-dual function under Fourier transforms, but this is a hint, and an interesting note. I am sitting with my memory full of a set of distinguishable events and note a frequency distribution that is fluctuating. How can I make progress and learn? Can I somehow use part of my memory to investigate a transformation of the fluctuations, and so to speak extract more information? I seek the transformation to use that gives me maximum yield, given the constraints and NO PRIOR preferred symmetry of the deviations. Something like that... but I am still looking for how to formalise the question. I have a feeling that a selection will take place and the transformations which are "most fit" will ultimately come to dominate, given enough processing. Somehow this is a key question; I don't find it attractive to introduce a transformation without induction. The transformation must be self-induced at some level. /Fredrik
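To state the self-duality precisely (a standard result, in the unitary convention \hat{f}(k) = (2\pi)^{-1/2} \int f(x)\, e^{-ikx}\, dx):

    \mathcal{F}\!\left[e^{-x^2/2}\right](k) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-x^2/2}\, e^{-ikx}\, dx \;=\; e^{-k^2/2},

i.e. the unit-width gaussian is a fixed point of the transform, which is part of why it keeps showing up in these considerations.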
  24. Are you referring to some particular study? If so, I have missed it and would be interested to see it. /Fredrik
  25. I think making sense out of the seemingly unavoidable "self-reference" is the difficult trick we must pull off. What else do we reference? You are referencing yourself all the time whether you want it or not. _You_ are asking questions, and you are seeking answers, and this results in a modified question and a modified You, because obviously the old question lacks the same relevance once answered, and because asking questions takes resources. You are evolving. During each cycle there is progress. If one first considers a memory record of distinguishable events, then we induce from that a relative frequency, and the uncertainty of the supposed true probability is at best directly related to the memory size. Then how can this situation evolve further? Can we use part of the memory to store deviations from a stable probability? What is the most efficient transformation to use? Is the mathematical relation between position and momentum - a Fourier transform of the ½ power (the square root) of the distribution - somehow selected by nature? If so, by what logic? I think there is an answer to this... In the world of all possible relations, what is so special about the Fourier one? Or maybe it's not special at all? Is it just human arbitration at play? In what sense is the Fourier relation so "fit", and why? This is one question that I need to solve to go further in my attempts. I got an idea last night and I'll try to let it mature a bit. I think all structures are emergent from evolving self-references. What we need to understand is the consistent logic that allows that in an understandable way. /Fredrik
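For reference, the relation I keep calling "the Fourier one" (standard QM convention; the amplitude is the square root of the distribution up to a phase):

    \psi(x) = \sqrt{p(x)}\, e^{i\phi(x)}, \qquad \tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx, \qquad |\tilde\psi(p)|^2 = \text{momentum distribution}.

The open question, to me, is why this particular pairing of a distribution with the transform of its square root should be selected over all other possible relations.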