Everything posted by fredrik

  1. Hello Captain, The few things I'm aware of are some products used by beer breweries to prevent excessive foaming during fermentation. These are used by homebrewers and also by some commercial breweries. The product name is Fermcap, manufactured by http://www.kerrygroup.com/ I've never used it myself, but I know many homebrewers who have. The delicate balance for a brewer is that you want moderation: you want to minimize foaming during fermentation as well as during the wort boil, but not at the expense of final foam stability. Head retention and foam texture are key quality parameters for beer, and that balance is the target for this antifoam agent. But I'm sure there are others. I don't have the price in my head, but given that homebrewers buy it, it's not that bad. Check out a good homebrewing supplier and they might have it.

The active agent is indeed a silicone oil; at least this specific antifoam is supposedly an emulsion of dimethylpolysiloxane (E900) in a food-grade emulsifier.

As for foam quality: excessive foam blowoff during fermentation can actually be detrimental to final foam quality, since polyphenols and foam-active proteins are enriched in the foam, and these are drained from the bulk if large blowoff volumes are lost. Homebrewers otherwise keep enough headspace to minimize blowoff. But the other downside of that is the oxygen included with the headspace, until it's flushed. /Fredrik
  2. I guess the resident expert may have better advice, but what comes to my mind is to look at the fermentative pathways in E. coli and try to make sure none of the fermentation products are produced, whether it's lactic acid, ethanol or anything else (I've never studied E. coli metabolism, just S. cerevisiae). If it does pure respiratory growth, no fermentative products should appear, right? Other than that, some papers on yeast I've read do a modern gene-expression analysis: you pull cells out of the culture and check which genes are expressed. But that seems more high-tech to me. I'd look for production of compounds that are only produced by the fermentative pathways, if you want to exclude that there's both fermentation and respiration going on at the same time. /Fredrik
  3. I'm no expert either, but E. coli are usually, like brewer's yeast, facultative anaerobes, meaning they can grow both by respiration and by fermentation. Brewer's yeast is a good example: the same strain that the homebrewer propagates mainly fermentatively (the small amount of O2 supplied goes to building UFAs and ergosterol for healthy cell membranes - it's a common homebrewers' myth that yeast respires during wort aeration; it doesn't, because the wort glucose level inhibits the respiratory pathways) is propagated aerobically at the commercial yeast companies.

Like CharonY already asked, I think the question is what exactly is to be proven. I might suspect that the question is to prove that the strain you have is not a respiratory-deficient mutant? And whether you have a pure strain at all? In normal brewer's yeast, for example, there's often a small percentage of the cell population that is respiratory-deficient. Some yeast propagation methods unintentionally select for these, which usually gives undesirable flavours.

I'm not very fluent in the standard chemical tests used in modern labs, since my experience is limited to what I can access at home, but usually biomass calculations give info on the energy production and thus the respiration/fermentation ratio. Without respiratory pathways, there is a limit on the possible biomass yield. With yeasts, the respiratory-deficient cells usually also look different under the microscope. There are tests where you can track the consumption of O2 in the culture, but from what I know of brewer's yeast, the fact that O2 is consumed does not mean it is used for respiratory energy production. There are many places for O2 to go, so I don't think this alone would prove anything either. In beer fermentation the wort is saturated with O2, and in a very short time it's all gone; but many tests, including gene-expression tests, show that while most O2 is consumed by the yeast rather than by wort oxidation, none is used by the respiratory pathways. UFA and sterol synthesis are major sinks, but pathways that dissipate excess O2 to reduce oxidative stress are also in play. Yet all cellular processes are powered by the fermentative pathways. /Fredrik
  4. Here are some not so well edited associative comments, so it gets philosophical, and I'm not sure whether this thread is in the right section, but anyway. Given your past posts I see a good chance of conveying the message, which motivates this scrambled post. I am betting on your error correction on this one.

Can you elaborate on this in the context of molecular biology? I didn't suggest that current methods are perfect, I only said they are much better than they used to be. But few things are so good that they can't get better.

This makes sense to me and I see your point. However, I think this very phenomenon is also what is responsible for stability. We need stability too. The current paradigm (whatever it is) is here for a reason: it has evolved because it was successful. If this were not so, it would beg the question how such an improbable paradigm evolved. This means that any confidence in contradictory suggestions must be leveled against our confidence in the current paradigm. This phenomenon of conservatism is, to me, analogous to the concept of inertia. But this argument for the soundness of conservatism must not be confused with the universal correctness of the current paradigm. Anyone subject to that confusion is subject to the problem you describe. This is the same as mistaking the theories that have been corroborated for the last 100 years for obvious representations of the truth.

However, the next complication IMHO is that sometimes such a confusion is actually rational! You can never maintain scepticism on all points, because this consumes resources. One effect of the intrinsic view I argue for is exactly a kind of truncation, where for each observer there is a probability cutoff: something that is sufficiently improbable is assumed to never happen! And this is rational given the constraints. The point here is not to confuse rational decisions with "correct decisions", as measured relative to future outcomes. The point of the observation is just that: the system ACTS/BEHAVES AS IF certain risks are ZERO. THIS is the point, and this happens regardless of whether it's true or not. This is related to game theory, if thinking of it that way makes it easier. Again this is a simplification; I'm sure someone will point out that there is no universal measure of rational action either, and this is right. I acknowledge this problem too, but I ignore it here.

From my point of analysis, the point is not which paradigm is "right" or represents absolute truth, because I have no reason to think that such universal measures of truth will be found. Instead my focus is the dynamics of opinion, independently of the notion of truth. Then it seems to me a basic observation that the dynamics of opinion always relates to a prior structure or paradigm. This is so because I think it's the most rational way it can be. But again, what's rational isn't always true. These are two different questions IMO. What's true in a universal sense is not something that interests me. I think that a system's reaction to its environment depends on what information is fed to the system, and probably not at all on whether whatever is fed into it is "right" or "true" from the point of view of another system. I think this complexity is part of what is responsible for interactions in general. This is seen also in human interactions.
A lot of human conflicts are simply a result of different people having different opinions on what is right and wrong. The logic we de facto see is that each human fights for what she thinks is right. All of them act conservatively to preserve "their truth". Usually the asymptotic result is a compromise, like all equilibration processes. A new truth is formed, to which all participants can agree. They both realise that this is the most constructive outcome for both of them, since it is not possible to maintain a stable inconsistency. Inconsistency, as in differing opinion, always leads to interaction. I use human interactions here to create a mental image, but what I have in mind is physical systems. The abstraction translates.

The supposed idea here is that this very abstraction results in a MUTUAL selection, which results in the formation of stable systems of probable preference. Some wild hopes (in the future extension of this reasoning) would be to explain the emergence of the structure of the standard model of particle physics. Things like this, plus other things, are my motivation. But there are also implications beyond physics: world economy, war and peace, and other human-level phenomena most probably have large common denominators with this abstraction. This means that research in this direction is not limited to what goes on inside particle accelerators. One needs to see the analogies, and then it has many more hands-on applications.

I'll respond more later, but this is very non-mainstream. It's my personal quest, and it's in progress. But I can dig up a set of papers written by others whose common denominators give a good first glimpse of my choice of reasoning. More later. Some of the ideas are implied above though. /Fredrik

Some ideas of others that IMHO are worth mentioning. None of these are exactly what I am looking for, but all of them have something in common that I consider important. I have pointed out the main point I like in their reasoning, to illustrate it, and a short comment on what I don't like.

(1) Olaf Dreyer has his own idea he calls "internal relativity". It may sound like a lame name, but some of the core keys are very close to some of my starting points as well. Here is some of his work: "Why things fall" -- http://arxiv.org/abs/0710.4350 The main key point of his reasoning that I like is this: "We have termed our program Internal Relativity to stress the importance of looking at the system from the point of view of an internal observer". It might not be clear from such a simple statement what it means, but it has consequences... Some quotes from his paper: "We claim that the internal point of view has not been taken far enough. If one strictly adheres to it, one finds not only special relativity but also general relativity. This is the central novelty of Internal Relativity" and "In our view, matter and geometry have a more dual role. One can not have one without the other. Both emerge from the fundamental theory simultaneously". My first impression is mainly that his reasoning is not taken far enough. He is too focused on spacetime.

(2) Ariel Caticha follows a tradition similar to E.T. Jaynes (the author of Probability Theory: The Logic of Science; http://omega.math.albany.edu:8008/JaynesBook.html). Ariel's method is somewhat along MaxEnt lines, and his idea is that GR (Einstein's equation) should be an implication of the rules of inductive inference, as a kind of information geometry (http://en.wikipedia.org/wiki/Information_geometry).
The main key point of his reasoning that I like, taken directly from his webpage: "My recent work explores whether the laws of physics might be derivable from principles of inductive reasoning. These principles - consistency, objectivity, universality and honesty - are sufficiently constraining that they lead to a unique set of rules for processing information: these are the rules of probability theory and the method of maximum relative entropy." -- http://www.albany.edu/physics/ariel_caticha.htm As I see it, Ariel misses some of the implications of the "intrinsic point", which kills universality. But his basic direction is to my liking.

(3) Carlo Rovelli, who works mainly on LQG, doesn't really represent my opinions, but his paper on Relational Quantum Mechanics contains excellent reasoning in its early part; unfortunately he then develops in a direction that loses me. Still, it's worth reading. "Relational Quantum Mechanics" -- http://arxiv.org/abs/quant-ph/9609002 The main key points of his reasoning that I like are these: "The notion rejected here is the notion of absolute, or observer-independent state of the system; equivalently, the notion of observer-independent values of physical quantities". "First of all, one may ask what is the "actual", "absolute" relation between the description of the world relative to O and the one relative to P. This is a question debated in the context of "perspectival" interpretations of quantum mechanics. I think that the question is ill-posed. The absolute state of affairs of the world is a meaningless notion; asking about the absolute relation between two descriptions is precisely asking about such an absolute state of affairs of the world. Therefore there is no meaning in the "absolute" relation between the views of different observers. In particular, there is no way of deducing the view of one from the view of the other." "Does this mean that there is no relation whatsoever between views of different observers? Certainly not..." "There is an important physical reason behind this fact: It is possible to compare different views, but the process of comparison is always a physical interaction". "Suppose a physical quantity q has value with respect to you, as well as with respect to me. Can we compare these values? Yes we can, by communicating among us." The point where he loses me is his treatment of probability theory. He explicitly avoids discussing what he calls "the meaning of probability". This is a mistake IMO. But the early part of his reasoning is outstanding IMHO.

I have a few design principles behind my personal thinking:
- to emphasise the importance of the intrinsic vs extrinsic views.
- the rationality is different in the two views; there is an intrinsic rational action.
- different views can be compared in only one way - physical interaction; there exist no other means for universal measures.
- the intrinsic view is connected to an observer. And I take Zurek's (http://en.wikipedia.org/wiki/Wojciech_H._Zurek) notion to heart - "what the observer knows, is indistinguishable from what the observer is". I think that is a most excellent sentence!
- and, most important, ALL of the above must fit in one particular intrinsic view, namely that of the theorist, who is a part of the universe and has finite resources.

One objection I have is the use of continuum probability as an abstraction for degree of belief, which E.T. Jaynes introduces in his book. This is not an innocent assumption, and IMO it completely violates the "intrinsic point".
The intrinsic point of view must be constrained, IMO. You cannot even make a basic real-life computation on infinite information in finite time. Therefore I don't think such infinities belong in a basis for reasoning either. From my point of view at least, a view on the scientific method also follows from this, because consistency of reasoning here requires that my quest for understanding the universe must fit in the same abstraction as an excited atom's quest for equilibrium in an unknown environment. The basic assumption is that there is an analogy, and this is a rich source of intuition for me. /Fredrik

More in this direction from Lee Smolin can be found in one of his books, written for a wide audience without math; the focus is thus on conceptual understanding and general direction rather than technical detail. "The Life of the Cosmos" -- http://www.amazon.com/Life-Cosmos-Lee-Smolin/dp/0195126645 Smolin's main argument for a new paradigm of evolving law is closely related to what I consider to be the key of the intrinsic point of view. Because an intrinsic point of view IS (think Zurek) an observer, which is a part of the universe. And if we then consider that this observer's knowledge about "law" is indistinguishable from his own existence, the evolution of law and the evolution of species really are two sides of the same coin. /Fredrik
  5. From my point of interest here, the quest for knowledge certainly goes hand in hand with a quest for the method of acquiring knowledge. Neither the knowledge nor the method is perfect, nor do I expect a day when they will be. But through history, human knowledge as well as the human method for acquiring knowledge has developed. Long ago there were no developed scientific methods at all, and not much of a rational system to establish what is truth. It evolved from faith and appeal to authority (the "probable opinion" historically referred to was the opinion of authority) to more objective ideals of establishing truth. However, there's more to the method than securing objectivity; efficiency is also an issue. I think we work as fast as we can, given the constrained resources, but if you ask whether there is a way to speed this up, I don't think so. If you ask whether you or I have to wait for the mainstream ideas to be revolutionized before thinking outside the box, I think the obvious answer is no.

These are similar to questions I ask myself too. Since we are dealing with somewhat creative processes here, I think this question is related to artificial intelligence and self-evolving learning models. This is why I think evolutionary models are needed in physics, applied also to the notion of law. Relating to, for example, the discussion http://www.scienceforums.net/forum/showthread.php?p=455842#post455842

My association to this entire discussion is not so much historical as it is the view of physics and physical law as the result of an evolution. To try to describe this evolution, to the extent possible (to do so perfectly is, I think, impossible), is strongly related to the question of a self-organising observer existing in an a priori unknown environment. My personal starting point is closely related to statistical inference, but from a point of view that is more intrinsic, that respects the constraints, in particular the information-capacity constraints, of an observer. There is only a limited set of inferences possible from the inside. I think this explains certain properties, and possibly makes predictions about self-organisation. /Fredrik
  6. Here is an easy-to-read essay by Frank Wilczek (who got the Nobel Prize in 2004 for his work on asymptotic freedom in the strong interaction) related to this. He notes that, to an accuracy of 95%, all mass is simply a form of "confined energy". The rest masses of the quarks and electrons in ordinary matter are a small part. The masses of the neutron and proton in the atomic nucleus reduce basically to the energy of confined massless gluons, the quark rest masses being a minor part. Certainly the question is open regarding the remaining 5%, but it's a good essay. "The Origin of Mass" -- http://web.mit.edu/physics/facultyandstaff/faculty_documents/wilczek_p@m03_FINAL.pdf Wilczek also has a new (non-mathematical) book, "The Lightness of Being: Mass, Ether, and the Unification of Forces" -- http://www.amazon.com/Lightness-Being-Ether-Unification-Forces/dp/0465003214 that has been discussed here: http://www.scienceforums.net/forum/showthread.php?t=36035 /Fredrik
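A rough arithmetic check of that figure (my own back-of-envelope numbers, using approximate light-quark masses of a few MeV, not figures taken from the essay): for the proton,

[math] \frac{2m_u + m_d}{m_p} \approx \frac{2(2.2) + 4.7\ \mathrm{MeV}}{938\ \mathrm{MeV}} \approx 1\% [/math]

so the quark rest masses account for only about a percent of the nucleon's mass; essentially all the rest is confined QCD field energy, mass via E = mc². /Fredrik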
  7. I agree. I by no means suggest the above as a way of surrendering. On the contrary, I am suggesting that the very insight of this will make the search more efficient. It's like they say: the journey is the goal. It's a matter of self-preservation IMO. I picture that this is conceptually the reason why things acquire stability. They are constantly seeking improvement. If you live in an environment like this, you realise that the only way to maintain "status quo" is to keep striving. What we see is, I think, more like meta-stable steady states. /Fredrik
  8. I agree completely. I listed that as a point of issue, not as a statement of mine. To give my escape from this: there is a level in the hierarchy of construction where the evolution has no distinguishable pattern - i.e., it's "random" from the point of view of the observer - which also means that the observer doesn't see this as a problem. Because I think that the evolution of law goes hand in hand with the evolution of observers (and matter), at some point the observer is himself a fluctuation. So the question becomes: what possible laws can an observer "see" who is himself a small random fluctuation? All higher constructs that do provide a law for the evolving law are, I picture, emergent from the self-organised chaos.

I think the predictions that SHOULD come out of this idea when matured are expectations on the observers/particles and systems that populate the universe, and this is in itself a manifestation of what laws populate the universe, since if we maintain the observability ideal, only "observable laws" are under discussion. Therefore, the population of the universe constitutes a population of "opinion" of law, which by self-reinforcement becomes the law. So to me, the quest here is to better describe exactly this self-organisation, and to see if we can produce predictions. The ultimate payoff would, I think, be if arguments can be given that the laws we see, and the matter we see, are in line with the expectations of such self-organisation. I think hoping that some theory will predict everything perfectly isn't going to happen, but perhaps we can find the best achievable way to ask new questions, so that even when we are wrong we make, to the best of our logic, "probable progress". I don't think anything more than that is possible. /Fredrik
  9. Fortunately it's not as bad as it looks. Sometimes questioning yourself is the way to make progress. In the past posts I was trying to be diplomatic. Granpa associated the original topic to "error correction", and while I think that classical Shannon theory doesn't come near solving the problems, I see a way to make reasonable associations in a wider perspective. Everyone is coming from a different direction here, and appropriately for the discussion, I can only guess what any poster is trying to communicate. And to apply statistical inference, I first need to guess the channel, which isn't known either.

The original question was how scientific theories are constructed. How is that interpreted? Is it a history question? Or is it an attempt to create an abstraction of some kind of "logic of creativity"? I think the obvious answer, that no one has a deterministic "method of creativity", was given early in the thread, yet the discussion lived on.

My personal abstraction of predictions and how they relate to communication channels is vaguely this. Consider an abstract observer (it could be a physical system, not necessarily a biological one). What can this observer infer about the reality in its environment? One can picture that there is a "communication channel" to the environment through which information is fed. And the problem is: given the observations, what can it infer about the outside? Then, based on its "expectations" of reality and of the "future", a rational observer can determine what action to take. But this abstraction raises many questions that invalidate the abstractions in the original Shannon theory. In Shannon theory, inference and error correction are possible statistically if the channel (transition probabilities) and the marginal probability are given. In my example above, these are not given; they also need to be guessed.

In my personal abstraction, this relates to learning. Learning, as related to science (learning about nature by processing experimental input and experience), means developing one's opinion of the "communication channel" through which one interacts with reality, in a constructive way. This further limits the possible error correction, because there is uncertainty also in the channel (the transition probabilities). Another problem is when the receiver is saturated. A finite physical system can (most would agree, at least) hold only a finite amount of information. Therefore, the abstraction of asymptotic steady-state streams is even more inappropriate. This finite information capacity in the nodes implies a kind of "cutoff", which further introduces choices: what information to discard, can we compress the data, etc.? This is how I see a possible abstraction of learning.

So to relate this to Shannon's error correction: a learning model is not just error correction over a known channel; it contains feedback whereby the channel evolves. The evolution of the channel is analogous, IMHO, to the evolution of the questions asked, and the evolution of experimental design. In a certain sense, an experimental setup does specify a kind of "communication channel" (although subject to various uncertainties) through which we make inferences about nature. But these communication channels are not given! WE build them! /Fredrik

x -> [ noisy channel ] -> y, characterized by p(y|x) - i.e., the probability of y at the receiver when x is sent.
Bayes' theorem: [math]P(x|y) = P(y|x)\frac{P(x)}{P(y)}[/math] and [math]P(x,y) = P(x|y)P(y)[/math]

Shannon's reasoning was that, given a channel characterized by P(y|x), there is a marginal distribution P(x) that maximizes the information divergence between the joint distribution P(x,y) and the independent case P(x)P(y).

Shannon limit: [math] C = \max_{P(x)} S_{KL}\big(P(x,y)\,\|\,P(x)P(y)\big) [/math]

So the Shannon capacity is 0 iff x and y are independent. [math]S_{KL}[/math] is the Kullback–Leibler divergence, http://en.wikipedia.org/wiki/Kullback-Leibler_divergence

Anyway, the key is that Shannon assumes the channel is known, perfectly known. Which is not always a realistic scenario. /Fredrik
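To make the formula concrete, here is a minimal numerical sketch (my own illustration in Python; the binary symmetric channel example and the function names are mine, not anything from Shannon's paper):

[code]
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) = S_KL( P(x,y) || P(x)P(y) ), in bits."""
    p_xy = p_y_given_x * p_x[:, None]   # joint P(x,y); rows x, columns y
    p_y = p_xy.sum(axis=0)              # marginal P(y)
    indep = p_x[:, None] * p_y          # independent case P(x)P(y)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / indep[mask]))

# Binary symmetric channel: each bit is flipped with probability eps.
eps = 0.1
p_y_given_x = np.array([[1 - eps, eps],
                        [eps, 1 - eps]])

# Capacity = maximum over the input distribution P(x).
capacity = max(mutual_information(np.array([p, 1 - p]), p_y_given_x)
               for p in np.linspace(0.01, 0.99, 99))
print(capacity)   # ~0.531 bits per use, i.e. 1 - H(0.1), as expected
[/code]

Note that the whole computation presumes P(y|x) is handed to us exactly, which is precisely the assumption I'm questioning. /Fredrik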
  10. One opinion: Smolin raises many great points in that powerpoint. If it doesn't make sense, read it until it does. I've expressed my personal opinion on this before, but I'll add it to this thread again. I have come to a similar evolutionary escape as Smolin, so my opinion is that the notion of eternal law makes little sense. There are, I think, many angles from which to argue this. To mention some of the keys Smolin raises, without diverging into personal opinions too much:

- If the laws are evolving, isn't there a deeper law relative to which they evolve?
- Paradoxes of unification and symmetry. What exactly is the relational meaning of "distinction without difference"?
- Questions of consistency, completeness and uniqueness.

I think one thing that prevents progress is the obsession with perfection, or perfect consistency. IMO, this problem is exactly what motivates the evolutionary escape. This lack of perfection is the drive of the evolution. It's one of my personal motivations: any attempt of mine at analysis has resulted in one question being replaced by another, and you basically end up in a processing loop. Any attempt to establish a finite, durable conclusion has failed. Then I realised that this is not a problem; rather, it's exactly the point. Because the world isn't static in the first place, why expect our understanding of it to be static? I made this association before, but recall the story of "Einstein's blunder": he was looking for a static universe. The story is vaguely analogous here, applied to the logic of reasoning and theory building. Is the (at least so far) failure to nail down universal, eternal, complete, consistent laws telling us something? IMHO it is. /Fredrik
  11. I've never been interested in so-called pure mathematics; my motivation for mathematics has always been the same as what motivates me to understand the world I live in (my focus has been natural science: physics, but also biology): an efficient language to describe what I see in a rational manner. So I see the development of mathematics as going hand in hand with the development of science. For me, one without the other is unthinkable. What can you rationally express without an efficient language? But also, what do you do with a language if you have nothing to say? As has been the case in the history of physics, seeing the simplicity of nature has been closely related to "inventing the language in which it gets simple". That's what "economy of language" means to me. Simplicity is relative. /Fredrik
  12. It doesn't look better. I think they call it self-interaction. OK, I'll stop now; I hereby declare I won't type more until after Xmas. /Fredrik
  13. I agree, that's the better question, because it puts the finger on the real problem: how to ACT upon incomplete information. This is exactly where this makes a difference. The most accurate answer is, I think: I don't know. Now that we have settled that, we still do not escape the choice. Either you can throw in the towel and act randomly, or you can, given that no definite opinion can be formed, try to somehow count your evidence supporting the outcome that the sun will rise tomorrow as the "least speculative" possibility, given the fact that you do not know for sure. Then your actions are chosen so as to maximize your utility, based upon what you think will happen. This can yield somewhat rational behaviour, and the chances are that those systems that act rationally will be better off in the long run. This is still fuzzy, but my take on physics is inspired by the idea that this is how nature is constructed and has evolved. Yes, it is just a guess, but those who will not play will not win. And the point is even that we are living an involuntary game of life: to not place your bets and keep your resources is also a bet, and you are easily stripped by neighbouring systems. Play and you have a chance to survive; not playing is not a safe strategy. /Fredrik

I think that even given this, you unavoidably EVOLVE and develop a non-random action strategy, by selection from the environment. By "random" here I simply mean relative to a given observer. There is no universality in randomness, any more than there is in symmetry. IMHO at least. Maybe I will be required to revise my strategy, but I am confident in it, and it is the basis for my actions. What other choice does a man with a tiny brain have? /Fredrik

It's hard to stop. I think the point here is that, as far as you can distinguish possibilities, a rational action acts upon ALL of them, not necessarily one of them randomly. I think this is subtle, but it's ultimately one way of determining, by interaction, the action of a system. Think of quantum superposition, where it seems to be the case that the system somehow acts upon ALL possibilities, not just one of them randomly. But this gets us too far for the thread. I am working on this as part of my personal projects, but it's still in progress. I think this analysis, when taken further (though it won't happen in this thread), might suggest a deeper understanding of quantum logic and the appearance of non-commutative opinions/information in the action context. It's in the dynamical context of producing an action based upon this incompleteness that this gets really interesting, and where the evolution idea gets moving. It also unites, as I think I wrote earlier, the concept of entropy and the concept of action. The ambiguity of entropy measures is similar to the ambiguity of rational action, but evolution is the possible way out I see. /Fredrik
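To write down the fuzzy part a bit more explicitly (this is just the standard expected-utility rule, my choice of formalization, not a derivation of anything): given beliefs P(x) over the distinguishable outcomes x, and a utility U(a,x) for action a under outcome x, the rational action is

[math] a^{*} = \arg\max_{a} \sum_{x} P(x)\, U(a,x) [/math]

Note that the sum runs over ALL distinguishable possibilities, so each of them contributes to the chosen action, weighted by the degree of belief. That is the sense in which I mean a rational system "acts upon all of them", not upon one of them randomly. /Fredrik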
  14. I see your point. I don't want to start any long discussions today since I won't be able to follow up, but the question is IMHO: what exactly does it mean to say "a swan is white"? Can we make any certain observations at all? I am suggesting that there is a point where the reasoner cannot distinguish this formal uncertainty, and at that point you simply end up with an opinion whose questioning doesn't pay off, or isn't even possible. And at that point you say: my expectation is that all swans are white, and I act upon that expectation. Then if I am wrong, I simply revise my opinion of the underlying microstructure of possibilities. I'm not picking on induction, just saying that I think it's not certain. But at the same time I hold the opinion that the uncertainty itself guides us in the evolutionary process, like science has evolved. You don't need _universal qualifiers_. But I think that often, universal qualifiers are indistinguishable from the best possible guess.

I think data compression, error correction etc. are indeed part of this. But from the point of view of computer science, these things often take place in fixed contexts. A similar view can also be taken of physics. Frank Wilczek made the analogy to data compression in his latest popularized book (http://www.amazon.com/Lightness-Being-Ether-Unification-Forces/dp/0465003214), which I read some months ago. He makes the analogy in the context of reflecting upon the notion of symmetry, symmetries being strong guides in the development of the standard model of particle physics. Just as there is no universal compression algorithm that is equally fit for all cases, one might ask whether symmetry is contextual and thus sort of relative. What does this mean for the quest for the deepest symmetry of nature? This, IMO, conceptually relates to these limiting procedures. /Fredrik
  15. Here is a quick reply... I will probably be around less during Xmas, lots of stuff to do, but here is a quick one before the holidays; maybe I'll check in later before New Year. So I won't start any lengthy argument; this is just a short explanation of what I meant in the last post. You're right that it came out unclear.

I definitely agree with your point that the problem of error correction is relevant to the discussion of induction. I just didn't choose to comment on that; there was enough other stuff to comment on in this interesting discussion. However, error correction is IMHO an induction, not a deduction. Only under certain ambiguous idealisations or truncations can this induction be turned into "probabilistic deduction"; i.e., the induction is turned into DEDUCTION of probabilities. The idealisations IMO lie in the probabilistic formalism, and this is why I think it really is a form of induction: the notion of probability refers to idealised limits, limits that are not realised in actual situations. Not logically valid, but again, that's not the point; it's still apparently efficient. The noisy-channel problem is indeed an application of inductive reasoning: by means of Bayes' theorem, a probability of the input given the output is inferred from the transition probabilities of the channel and the prior distribution of the input. So your point that observation of only white swans is, in some sense, a rational basis for EXPECTING only white swans is IMO sound, although the analysis could decompose some things further. Still, it's clear that this "induction" is not of a deductive nature; but then again, the validity of induction does not, IMHO, rely on deduction. IMO the key is evolution. Here I'm with your reasoning.

Yes, I was probably unclear, sorry. I just meant to give my view, not elaborate in detail, but I see that what I wrote might sound strange or wrong. What I meant was that for the code that makes the actual probability of error arbitrarily small (i.e., go to zero), the code length goes to infinity. Shannon's theorem relates the maximum capacity to the given signal-to-noise ratio; the noisy-channel coding theorem says: For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε. And: N -> inf as ε -> 0. (A numerical illustration follows at the end of this post.) How long a time do you need to transmit an infinite code? How long a time do you need to make an infinite experiment? Is this condition, while mathematically unobjectionable, a good description of reality? Here we enter the issues of probability itself, and the sense in using continuum probabilities. This is exactly also my objection to why what some call deduction is really just a very, very confident induction. But it's never perfect. It can't be. But again, that's not the point. It doesn't bother me, but apparently it did bother Popper. Popper tried to avoid induction, but failed.

When you make a finite experiment and calculate the confidence level for some confidence interval, there is formally still an uncertainty even in the probability of the confidence of a given confidence interval. In a discussion about the scientific method, the fundamentals and notion of physical law, and how it's induced from experimental experience, I don't think it's acceptable to overlook these points regarding the statistical reasoning. This doesn't make error correction useless, of course; I just meant to suggest that you can't argue that error correction is deductive in nature, i.e.
being 100% certain. Popper seems to hold the idea that induction is unacceptable, and he was looking for a deductive escape. I agree that induction isn't foolproof, but I disagree that this invalidates its utility. Maybe we can agree that the progress of science is not described by deductive logic? But I think some of us still expect that there is SOME logic to it, which is, I think, the quest. Inductive logic is somewhat ambiguous, as Popper noted, but that doesn't necessarily invalidate it, because the quest is how best to make progress, not how to make deductive progress, when it seems that isn't possible. Merry Xmas everyone, unless we speak before then!! /Fredrik
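The promised illustration of the N -> inf point (a sketch of my own, using the crudest code imaginable - an n-fold repetition code with majority-vote decoding - not the near-capacity codes the theorem actually promises):

[code]
from math import comb

def repetition_error(eps, n):
    """Probability that majority-vote decoding fails for an n-fold
    repetition code over a binary symmetric channel with flip
    probability eps (n odd): at least (n+1)/2 of the n copies flip."""
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k)
               for k in range((n + 1) // 2, n + 1))

for n in (1, 3, 11, 101):
    print(n, repetition_error(0.1, n))
# n=1: 0.1   n=3: 0.028   n=11: ~3e-4   n=101: ~1e-24
# The error vanishes only as n grows without bound, and for this
# naive code the rate 1/n goes to zero as well.
[/code]

Shannon's theorem improves on this enormously (the rate can stay near C), but the block length N must still grow without bound as ε -> 0, which was my point about infinite codes and infinite experiments. /Fredrik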
  16. Thanks. OK, then I understand your posts better on that point. But I still wonder if we are talking about different things. With induction, I include also various forms of probabilistic induction, complemented by a subjective interpretation of probability. To state that all swans are white because it's all we have seen is really the simplest of the simple. I admit it was a little while since I read Popper's book, but I think even Popper was a little more sophisticated than that. However, I am not sure I would call what you describe deduction. Sure, you can deduce things from axioms, but the process of selecting axioms is hardly deductive. What you describe as the difference between an "ad-hoc hypothesis" and "a hypothesis with a solid logical underpinning" is exactly what I would call inductive reasoning, i.e., the process by which you come up with hypotheses. I suspect we mean the same thing, but I don't understand why you call it deduction. As I see it, Popper totally misses the importance of the reasoning behind hypothesis generation; i.e., unlike submitting random hypotheses for falsification trials, science uses anything but random hypotheses, right? I think this is exactly your point. If so, we fully agree. My confusion is why you call this deduction; I call it induction. But in a certain sense, probabilistic induction is a form of deduction too, just as an indeterministic theory like QM really is deterministic. Maybe this is the source of confusion. If so, I would say that we fully agree except on one point: I do not accept what you suggest as deduction. But neither do I think that our future understanding of physics will keep fully global unitary evolution. But I think that's another discussion. /Fredrik

When I think of it, the normal term is "probabilistic deduction" rather than probabilistic induction; sorry for the added confusion. I.e., "induction as probabilistic deduction", where each possible deduction is assigned a probability. Anyway, I suspect we mean the same thing, even though the terminology got mixed up. The notion of probabilistic deduction itself doesn't solve anything though, since more problems appear when you try to define the physical basis of these probabilities. This again, IMHO, suggests that the probabilistic deduction really is an induction. /Fredrik
  17. Not really. The Shannon theorem of information theory (http://en.wikipedia.org/wiki/Hartley's_law#Hartley.27s_law and http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem) is a relation giving the maximum information transfer rate possible for a given fixed communication channel with given bandwidth and noise. It says that to maintain a low probability of error (a high confidence level), you have to reduce the effective information transfer rate. But to reach perfection, 100% error-free, your capacity drops to zero for any channel that's not noise-free to start with. So in a finite time frame, you're trading away an amount of information communicated for an increase in confidence. This makes it interesting, though, since it kind of relates data capacity, information and time. But I'd say there are plenty more complications than classical information theory suggests. /Fredrik
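For the analog-channel version (the Shannon–Hartley form of the law referred to above), a minimal sketch; the telephone-line numbers are my own choice of example, the classic textbook figures:

[code]
from math import log2

def shannon_hartley(bandwidth_hz, snr_linear):
    """Maximum reliable information rate in bits/s for a channel
    of the given bandwidth and (linear) signal-to-noise ratio."""
    return bandwidth_hz * log2(1 + snr_linear)

# A ~3 kHz telephone-grade line at 30 dB SNR (linear SNR = 1000):
print(shannon_hartley(3000, 1000))   # ~29900 bits/s
[/code]

Note that the capacity is finite and fixed for a fixed channel; zero-error perfection over a noisy channel in finite time is not on offer. /Fredrik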
  18. About the swans and induction: I think the interesting part is when inductive reasoning is put in context. The fact that seeing many white swans never logically allows you to deduce anything about the next observation is clear, and not particularly interesting IMO. Usually the inductive reasoning determines your expectations, and thus, if we assume rational actions, your behaviour. So different choices of reasoning may be of different degrees of utility, and thus a selection in favour of efficient reasoning is expected. So what is the expected reasoning?

The question is IMHO this: you are forced to bet the only money you have. You have seen only white swans, and in the bet you have two choices: that you will see a white swan next, or that you won't. If you are wrong you die; if you are right you live. You can say that either choice is equally possible, yet I think most would tend to bet on white. And IMO the point isn't whether it's "logically valid" - it isn't, but that's not the point - the conjecture that this is how nature works (action based upon incomplete information) may give insight into predictive modelling. This is so regardless of the "validity of the induction". The idea, also from physics, is that a system responds to the local information. Is this valid?? I think that's not the question at all. The question is: what more plausible expectations do we have than to expect intrinsically rational behaviour? The motivation is not that it's unique or valid; it's that, given we are to reason upon admittedly incomplete information, it seems to me the most plausible thing to expect. Why? Because a system acting differently would fight its environment, and thus probably not persist.

The one reason I think about this is that I believe the scientific utility here will be that when we understand this better, we will better understand the nature of physical law. You can similarly object that physical law is ambiguous! How can you, from any amount of observation, deduce the correct physical law? In line with the swan stuff, you can't. But again, that's not the point. The point is: what then to do? And more importantly, what does nature do? How come we have this apparent stability in spite of a total lack of a priori hard logical references? As DH said before, scientific progress is "creative" and not described by simple deductive logic. I agree with that. Then my quest is: what abstraction or formalism best describes it? Again, I think the point here is not to find it; it's to look for it. My highly personal motivation for this isn't philosophy in itself; it's a conjecture of mine that there exists a deep analogy between 1) a rational logic of reasoning upon incomplete information and 2) the action of a physical system in reaction to its information about its environment. By analogy, and by analysis of the logic of reasoning, MAYBE we can further deepen the understanding and structure of physics. If my analysis of this fails to yield such an improved understanding, one that helps solve some of the open problems in fundamental physics, I will consider my conjecture falsified. In physics there are already many things that have remote similarities to reasoning: inertia and non-commutativity. These things can also appear in reasoning. In physics we make experiments and get observational data. In reasoning, you formulate a question according to your expectations, and fire it. /Fredrik

To respond to this slightly out of context:
Another problem with this: 1) How do you possibly know when you have observed every swan on the planet? I say you don't. 2) There will also be scenarios where your brain isn't large enough to store the raw data of your observational history. Then decisions need to be made: to compress some data, to discard some data. The very choice of compression algorithm and discarding algorithm will make a difference. This information-limiting effect alone will produce interesting behaviour that is a result of the systems' own incompleteness. Some actions can possibly be traced to this. In human history, mistakes are sometimes repeated, possibly because history is forgotten. The utility of history is not curiosity; it's usually to guide us in the future. This is also what brain research suggests: the brain stores past data, but optimized to be of maximum utility for the expected future. Some have suggested that this might partly explain why memories of past events are often distorted by the brain in the storing/compression process. If our brains were optimized to actually remember data as it was, we would probably have the capacity to do so very well. In some conditions, like savant syndrome, I think this may explain why the recall of details is so amazing. But then, they have other problems. /Fredrik
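One classic way to formalize the forced bet above is Laplace's rule of succession (my addition - the posts above don't invoke it, and it assumes a uniform prior over the unknown frequency of white swans): after observing n swans, all of them white,

[math] P(\text{next swan is white}) = \frac{n+1}{n+2} [/math]

So after 100 white swans you would rationally bet white at roughly 101:1 odds, yet the probability never reaches 1. That is exactly the structure I'm arguing for: a strong, action-guiding expectation without logical certainty. /Fredrik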
  19. Is this a typo for induction? I think you meant to say the importance of induction in science? If not, I'm slightly confused about that word in that place. /Fredrik
  20. Now I think I maybe see what you mean and why we keep arguing. It seems your main effort is to argue toward this point: Q. Is the theory-generation process mechanizable? A. No. (This is essentially a critique of the validity of induction.) Simply put, I agree with that. But to me, that is not where the quest ends. In spite of this, I find it undeniable that a lot of the time there does indeed exist at least quasi-rational reasoning that is impressively efficient. My choice of focus is then how to merge the two "observations": Q1. Is the theory-generation process mechanizable? A1. No. And: the very often unreasonable power of various inductive reasoning.

Here I think that rationality is emergent. Perfect rationality is unattainable, but the lack of perfectly consistent rationality doesn't imply total irrationality either. I similarly believe in emergent observable symmetries. Perfect symmetry is also a free lunch IMHO. So I think that while there is no perfectly mechanical theory generation, there exists some evolutionary, self-modifying theory generation. And this makes the apparent rationality in nature plausible, albeit imperfect, in spite of the lack of any a priori reason to expect it.

For me, the difference this makes is in how you view the quest for TOEs. You may easily say that there will never be a TOE. I totally agree. But that doesn't answer our quest of increasing our understanding of the world. So how do we view the extensive use of symmetry arguments in theory building in physics? Is it because we believe there is a perfect symmetry, or is it really just a sort of least irrational reasoning? I.e., given that we accept that there is no perfectly rational reasoning, do we look for, as far as we can see, the least irrational reasoning? That's more like how I think of it. So I think of reasoning as rational, but with the constraint that there is an irreducible uncertainty in the measure of rationality. And rationality is also not universal: two observers may disagree on what's rational. So I think there is more to this than "Is the theory-generation process mechanizable? No.", although I agree on that point. /Fredrik
  21. DH, I think we agree to a large part. The interesting part, which I consider was part of the "point" in the other thread, and which was mainly the goal of my argumentation, is that questions in the philosophy of science often lead to interesting and constructive progress for science too. My motivation for bothering is that the opposite view is often put forward, so I think it's my responsibility to at least add my opinion to maintain balance. Perhaps this happens through development of method, but sometimes method and subject are similar; in particular this is so for various forms of artificial intelligence and machine learning. And indeed, if we're talking about machine learning, the problem of induction and rational reasoning is close. My opinion is that there exists no universal fixed computer on which such an algorithm can live. I consider these things to be philosophy of science, but it overlaps with the science of machine learning, because there are similarities. This relation between software and hardware is quite similar, IMHO at least, to the relation between physical law and physical observers. This relates to the problem of what is to be considered "observable". Clearly, without observers, the entire notion of observable is ridiculous. YET, we expect the laws of physics to be observer-invariant. /Fredrik

The description of this dual view as a bit paradoxical also reveals what I personally think is the implication of this reasoning for the view of symmetry principles in physics. IMHO, they are best seen as rules of construction and reasoning, rather than fundamental structural features of nature. /Fredrik
  22. To add another opinion (I apologize if I repeat any points already made): the quest for such a method, and also the difficulty of finding one, is pretty much at the heart of the philosophy of science, in particular the "problem of induction", which questions whether there is a valid method of induction. This largely overlaps with this recent discussion on the philosophy of science: http://www.scienceforums.net/forum/showthread.php?t=36917

About "empiricism vs reason": I expressed my personal opinion on that, although dressed in different words, in the above thread. A common definition of an empiric, from http://www.thefreedictionary.com/empirics, is "One who is guided by practical experience rather than precepts or theory". IMO, as is implicit in my take on this, there really is no contradiction between the two views. A simple illustration: surely a rational reasoning must take into account all available information, including all empirical information! But then the empirical data always changes, each time we consume new observations. The natural synthesis here is IMO to conclude that there seems to exist no "fixed background reason". Or, you can also say, there is "no mechanical method".

But then, to respond: it seems that the self-reference here suggests that any rational method interacts with itself, and thus "evolves". At best one can imagine that there is a rational method for dynamically updating the method, so as to stay consistent with the latest empirical data. But one soon starts to suspect that this is subject to the same criticism. So we are led to repeat the argument, and are led to an expansion: method of method of method. But at some point, as I argued in the above thread, the reasoner, having limited resources and representation capacity, cannot support such infinite constructs. Therefore, at some point, it's simply an irreducible opinion: take it or leave it. At this level, we have an irreducible irrationality. But fortunately this irrationality is tamed by the evolved method. So in conclusion, I think there is a sort of method, but it's a self-modifying method, which at some level contains an at least "momentarily" irreducible element of irrationality. /Fredrik
  23. Yes, exactly. To put this example back in the context of my ramblings, I would phrase it so that the logic of correction here assigns infinite confidence to the prior _structure_. Now, that begs the question of what the physical basis for such infinite confidence is. It really doesn't make sense. Instead, this "infinite confidence" is rather a truncated guess, where you simply can't further rate your own uncertainty, because constructing that measure would require more information capacity.

In the light of present evidence, one can often see that perfectly rational decisions in the past were not optimal. But that doesn't mean there was something wrong with the decision. This is also part of the game: sometimes you are right and sometimes you are wrong. For example, the fact that people occasionally do in fact become millionaires by playing the lottery does not make putting your money into lotteries a rational decision. But the opposite also happens: apparently irrational decisions turn out to be keys to success. When you try to analyse this, it seems impossible to find crystal-clear and universal measures of rationality. IMHO, the measure of rationality itself is evolving. This should result in a self-organising evolution. I think if we can find a proper new mathematical and conceptual abstraction of this, the formalism can apply to a vast range of phenomena: from the "evolution of physical law" that Lee Smolin and others are sniffing at, to self-organisation in biology, not to mention the problem of understanding the intelligence of the brain.

This sort of information-theoretic approach to the problem of induction, which takes into account the nature and representation of information from the inside view to constrain what is possible (a clear example of what makes no sense is the very notion of an infinitely confident reference - this has IMHO no physical justification whatsoever), is the spirit in which I approach the philosophy of science. /Fredrik
  24. It just occurred to me that another simple comment might further illustrate this issue (that is, my objection to Jaynes' use of real numbers as a fundamental starting point for reasoning upon incomplete information), and how it at least conceptually, even though not directly, relates to the infinity problems I referred to. As I see it, there are several key points in reasoning upon incomplete information that constitute a rational logic of reasoning:

1. The logic of expectation, or various forms of "probabilistic inference"; i.e., given this incomplete information, I am led to make this particular guess, or this distribution of possible guesses (this contains complications in itself, indeed).

2. The logic of correction! That is: how do you respond when new evidence is thrown right in your face that is in total contradiction with your previous information? Somehow we need to RATE and LEVEL the new, contradicting information against the old prior, so as to determine how the new information deforms the prior into a new opinion. Of course, this is what Bayesian updating supposedly does. But you can certainly ask: is Bayes' rule the only way to do this? My opinion is that it's not. This probably relates in part to the critique of induction. But that doesn't mean it can't be made to make sense.

Bayesian reasoning uses a simple way of merging information. In particular, it does not handle truly contradictory information. As long as there is a fixed microstructure providing the space of probability distributions, Bayesian reasoning is good; but sometimes the inconsistency cannot be resolved by just updating the microstate - it may require deformation or change of the microstructure itself. To handle this, I have been considering picturing the information content as encoded not only in the microstates but also in the microstructure, which gives a hierarchy of measures on measures. In this idea, which would replace or "extend" plain Bayesian reasoning, I need to be able to assign a measure of "inertia" between two opinions to determine what happens when they "collide". Here the continuum comes into play: if you try to "count evidence" and you find that two "opinions" have infinite measures, how can you compare them? Therein lies the difficulty as I see it. Sure, one solution is to define measures of information, like entropy measures, like Shannon's etc. But those measures are in fact similarly ambiguous. There are several "entropy measures" around, which leaves me with a choice. These things have accumulated in my brain for a while, and I finally realized that there is no escape but to at least try to solve this. /Fredrik

"The logic of correction" is the difficult part. In Popperian reasoning this is the step where one falsified hypothesis is replaced by a new hypothesis. This step is completely ignored by Popper, so I think he tactically ignored the most difficult part. The problem is not whether to dismiss a theory when it is falsified. That is like committing suicide each time you are wrong: of course, then you are wrong only once and never again - it's not constructive. The problem is how to use the evidence that suggests falsification to provide an expectation for a new hypothesis, in a spirit of "minimum speculation". /Fredrik
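A minimal sketch of the plain Bayesian update I mean, and of where it jams (my own toy example; the hypothesis names and numbers are made up):

[code]
def bayes_update(prior, likelihood):
    """Posterior P(h|d) = P(d|h) P(h) / sum over h' of P(d|h') P(h')."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# A fixed 'microstructure' of hypotheses, with a dogmatic zero prior:
prior = {"all_white": 0.7, "mixed": 0.3, "all_black": 0.0}
# Likelihood of the datum 'a black swan is observed' under each:
likelihood = {"all_white": 0.0, "mixed": 0.5, "all_black": 1.0}

print(bayes_update(prior, likelihood))
# {'all_white': 0.0, 'mixed': 1.0, 'all_black': 0.0}
# 'all_black' stays at zero forever, no matter what the evidence says:
# the update can only redistribute weight WITHIN the fixed structure,
# it can never deform or extend the structure itself.
[/code]

This is the sense in which I say Bayes' rule handles updating the microstate, but not truly contradictory information that calls for a new microstructure. /Fredrik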
  25. Just a note: I am not denying the power of calculus. I am, however, questioning its universality, uniqueness and fitness. I was very sloppy, but with "take it to infinity and many complications take place" I meant to say that you cannot take limits arbitrarily, because the order in which you take limits makes a difference. This was what I referred to. For example, summing integrals vs. integrating the sum: this makes no difference for finite scenarios, but if you're dealing with infinite series the order matters; i.e., the limiting procedures of the elements don't commute. It's the same twisted logic used in renormalisations. It's not that it doesn't work; it's that it's ambiguous from the point of view of reasoning. And that is not just a technical objection; to me it's worse than that. Physicists have gone wild with continuum models and mix them freely, and there is something that just doesn't make sense. It's when you bring this philosophical analysis of the continuum into the context of information processing and reasoning upon incomplete information that the continuum comes out as containing a redundancy, but one where you have lost track of the symmetry that would remove it.

Maybe my own context confuses this. Some of us talk about Newton's mechanics, some about QM and gravity. My context is the foundations of the laws of physics, in particular the problems appearing in quantum models and in trying to understand what QG is. If we are sticking to Newton's mechanics, I think most of what I said seems unmotivated. In Newton's mechanics, calculus is fine. But unfortunately Newton's mechanics doesn't seem to explain our world. /Fredrik
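A textbook example of the non-commuting limits I have in mind (standard analysis, nothing specific to renormalisation):

[math] \lim_{m\to\infty}\ \lim_{n\to\infty}\ \frac{m}{m+n} = 0 \qquad \text{but} \qquad \lim_{n\to\infty}\ \lim_{m\to\infty}\ \frac{m}{m+n} = 1 [/math]

The finite expression is completely symmetric in m and n, yet the answer depends on the order in which the two limits are taken. This is the kind of ambiguity I mean when continuum limits are mixed freely. /Fredrik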