Everything posted by fredrik

  1. If you mean what I think you do, I would say you can't know, implying that there is no difference: it's the same thing, and you are just illustrating the same thing in different ways. The "rates" of these "event counters" would not themselves be independently observable in an objective manner, because you always need a reference. It's only the relative drift of different "counters" that is observable, and it's this drift that defines the evolution. The ticks of each counter alone are just changes; no "rate" is implied. The concept of rate is born when you start to compare *different* changes. Thus, to tweak the local perception of time, you need something that tweaks the relative drifts of different events. /Fredrik
  2. Yes, but when I picture the clock vector above as my "stick", it is in the differential sense. It defines dt, not a finite time t2-t1, which I think requires integrating along the evolution, which is not elementary. The existence and uniqueness of such a finite time parametrization is not obvious, because of the topology of the system. But I do not see that as a major problem, except from a technical point of view of solving equations. The minimum differential needed to establish an event may be discussed and is one of the issues I work on; I count on it being solvable. One may also wonder if events are really discrete or continuous, but I think there is a way to unite the two. A stick or clock may seem trivial compared to a human, but from a first-principles view, a stick also makes its "observations" on you. The stick interacts with the environment just like humans do, although in a much more trivial, less intelligent way. Technically the situation is symmetric; it's just that you are more intelligent and have more information storage. I expect a good theory to explain this symmetry. So a stick needs to interact to exist. If the stick doesn't interact with anything, you would not be able to see it, nor use it. This may in fact happen if the conditions are strongly chaotic: you can easily lose track of your stick in the noise. But then you can also argue that a stick you are not able to track in the ambient noise is a useless reference anyway. That's how you can "lose dimensions". /Fredrik
  3. An event can mean simply "change": a change of state, or an additional bit of information reaching your brain. That's an event. As soon as you register any change whatsoever, there is an event. So sticks or not, you still need your brain to register whatever you measure. So it can still be thought of as events? Interesting that you see this as a problem. I see it as a feature. Isn't an explanation what we are all after: reducing a complicated thing to the point where we can all say that it is, in fact, just x, and we all know x; we only need to reveal some entanglement to see it? That kind of thing? To explain something complex in terms of something even more complex than the original raw data is not progress. The simpler the better. /Fredrik
  4. This has turned out somewhat elaborate and philosophical already, so here are some more thoughts along those lines. In your terminology I'd say it's a relative count, and specifically *your* counting. Somebody else's counts give their time, not yours. I think I know what you are after, and if you think what I think you do, "slowing all events uniformly" does not affect the perception of local time. The pace of a clock has its meaning only in the context where the clock "lives", a *relative pace in its environment*, since a clock isn't some unreal, god-like device; a clock is just a part of reality that we use for internal reference. And a clock device is bound to obey the same rules and limitations as everything else.

Here are some related speculative ideas taken out of my current thinking that I'm working on. These relative or internal paces can be given probabilistic interpretations, so that time can be thought of as a parametrisation of change. If you (not literally) consider the future as a vector, you do not know in which direction the future points, but you may see that some changes are far more likely than others, based on the present. So you can picture the differential future vector mapping out a surface of possible futures, and you can normalize this surface so that each point on it represents, in an infinitesimal sense, the nearest future, and each point on the surface is equally probable. One point on this abstract surface represents the scenario where the ONLY change you see is a clock tick. The vector from the present configuration to this point can be called the "clock vector". For probabilistic reasons, "the most probable future" (which would be THE future in the classical case) is the shortest path into the future, shortest here meaning that only a small disturbance is required for the transition. Now measure this disturbance relative to the disturbance that the clock device would require for the same probability. This ratio is always smaller than 1 by construction, irrespective of the choice of clock! (Note that this can be taken as a seed to Lorentz invariance, though much more fundamental.) That is, you are technically free to choose *any device* as "your clock". They all measure the same flow; of course the units are different and the reference may be more or less weird, but that's your choice.

This is fuzzy, and I'm struggling with giving it a definite and proper mathematical description. The "measure" I refer to will be related to information content and entropy, and the nice part is that it will not depend on absolute entropy, only relative entropy, and if I am correct any strictly increasing bijective function of entropy will do. I think this will answer the critics who point out that there are different kinds of entropy. So only the relative entropy is interesting, and I think the variations will leave the system invariant. I also have great hope that these ideas will, from first principles, generate symmetries similar to SR and probably also GR. The SR and GR transformations will hopefully be replaced with uniform reference transformations that transform information between observers. My expectation is also that things like Lorentz invariance will be broken in chaotic domains (where the "future surface" is more symmetric), and that these symmetries are recovered as the chaos is reduced.

This is not intended to remain just talk, though; all the details should eventually get a proper mathematical dress, and I'm still working on it, plus trying to scan what the existing work in this area has accomplished so far. There are several people working along these lines, but so far the progress seems modest, and from my impression the number of people working along these lines is very small compared to other things like strings. /Fredrik
  5. Another personal reflection in line with my thinking:

> Is Time Necessary?

If changes were completely chaotic, time would not be necessary; or rather, time itself (as read from a clock device) would also be chaotic, and we would not be able to distinguish it from the noise. At this point, space probably wouldn't be distinguishable either. I somehow picture time being the last thing to go away. The fact that we can successfully define a clock device and get a useful parametrization of history that allows us to learn something proves that changes are not completely chaotic. So in that sense, to ask whether time is necessary is to ask: is it necessary to learn, or to make progress? /Fredrik
  6. What's the difference? Suppose "it's invented"; does that mean it's not "real"? If that makes you more satisfied, why not? The question is still: what difference do you think it makes? The way I think about it, time is a parametrisation of progress into the future. Whether we "travel into an unknown but pre-existing future" or the future is created/revealed on the fly as progress is made - what is the difference? To me it's still the same thing, unless someone can think of an experiment to distinguish the two options. Considering that physics tries to "explain" the universe from scratch, I'd say we have to invent all of it: time, space and matter. We just need to find that minimalistic starting point that most people accept, at least for the next few hundred years, as needing no further explanation. Despite my comments, I think the original question was a good one. /Fredrik
  7. The superposition can also be interpreted more generically as saying that our information is a weighted sum over our options, in which case it seems almost obvious, if you take the view that the wavefunction represents the observer's information about the system described. But in the case of transformed or entangled information, the options in the sum need not be individually "physical" or "classical" (whatever that means; I don't like that word :)) - just as a reminder that in QM, classically forbidden paths are also summed over. Often "the superposition principle" also refers to the fact that a linear combination of solutions to the QM equations is also a solution, simply because the equation itself is linear. /Fredrik
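To make that last point concrete, here is a minimal illustration of the linearity: if [math]\psi_1[/math] and [math]\psi_2[/math] both solve the Schrödinger equation [math]i\hbar\,\partial_t \psi = \hat{H}\psi[/math], then so does any linear combination [math]a\psi_1 + b\psi_2[/math]:

[math] i\hbar\,\partial_t (a\psi_1 + b\psi_2) = a\, i\hbar\,\partial_t \psi_1 + b\, i\hbar\,\partial_t \psi_2 = a\hat{H}\psi_1 + b\hat{H}\psi_2 = \hat{H}(a\psi_1 + b\psi_2) [/math]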
  8. Ok, I put some more steps down (I might have made more mistakes though).

1) Glycolysis
[ce] C6H12O6 + 2NAD+ + 2H3PO4 + 2ADP -> [/ce]
[ce] -> 2 pyruvic acid + 2ATP + 2H2O + 2NADH + 2H+ [/ce]

2) Pyruvate dehydrogenase step
[ce] pyruvic acid + CoA-SH + NAD+ -> [/ce]
[ce] -> acetyl-CoA + NADH + H+ + CO2 [/ce]

3) Krebs cycle (per acetyl-CoA)
[ce] acetyl-CoA + 3 NAD+ + FAD + GDP + P + 2H2O -> [/ce]
[ce] -> CoA-SH + 3 NADH + 3H+ + FADH2 + GTP + 2CO2 [/ce]

Add these up (steps 2 and 3 run twice per glucose), simplify, and you get

[ce] (1) C6H12O6 + 10 NAD+ + 2FAD + 4P + [/ce]
[ce] 2ADP + 2GDP + 2 H2O -> [/ce]
[ce] -> 6 CO2 + 10 NADH + 10H+ + 2FADH2 + 2 GTP + 2ATP [/ce]

4) Now compare this with the oxidative phosphorylation:

[ce] 10 NADH + 2 FADH2 + 6 O2 + 10H+ -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 12 H2O [/ce]

ATP production in the mitochondria is 2.5 ATP/NADH, except for the two NADH (per glucose) that come from glycolysis and need to be exchanged for a FADH2 inside the mitochondria; FADH2 gives 1.5 ATP/FADH2. Check a book for the details of these steps; it's unreadable to post them here. This means we need to balance the phosphorylation (of 10 NADH and 2 FADH2) with ATP production (8*2.5 + 2*1.5 + 2*1.5 = 26 ATP):

[ce] 26 ADP + 26 H3PO4 -> 26 ATP + 26 H2O [/ce]

[ce] 10 NADH + 2 FADH2 + 6 O2 + 10H+ [/ce]
[ce] + 26 ADP + 26 H3PO4 -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 38 H2O + 26 ATP [/ce]

Add this to (1):

[ce] C6H12O6 + 6 O2 + 30 H3PO4 + 28ADP + 2 GDP -> [/ce]
[ce] -> 6 CO2 + 2 GTP + 28ATP + 36 H2O [/ce]

Since GTP and ATP can be converted into each other, we can effectively write

[ce] C6H12O6 + 6 O2 + 30 H3PO4 + 30ADP -> [/ce]
[ce] -> 6 CO2 + 30ATP + 36 H2O [/ce]

[ce] C6H12O6 + 6 O2 + (30 H3PO4 + 30ADP) -> [/ce]
[ce] -> 6 CO2 + 6 H2O + (30ATP + 30 H2O) [/ce]

/Fredrik
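Since the ATP count above is easy to fumble, here is a small bookkeeping sketch (Python, purely illustrative; the 2.5 and 1.5 P/O ratios are the values assumed in the post, and textbooks vary on them):

[code]
# ATP yield per glucose, using the P/O ratios assumed in the post
nadh_mito = 8        # mitochondrial NADH, oxidized at ~2.5 ATP each
nadh_glycolytic = 2  # enter as FADH2-equivalents via a shuttle, ~1.5 ATP each
fadh2 = 2            # from the Krebs cycle, ~1.5 ATP each

atp_oxphos = nadh_mito * 2.5 + nadh_glycolytic * 1.5 + fadh2 * 1.5
atp_substrate_level = 2 + 2  # 2 ATP (glycolysis) + 2 GTP (Krebs cycle)

print(atp_oxphos)                        # 26.0
print(atp_oxphos + atp_substrate_level)  # 30.0 ATP(+GTP) per glucose
[/code]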
  9. You're correct. I noticed I screwed up the formula when I wrote it, because I rearranged it on the fly; I'm sorry. I found it messy to type in on a board like this... but here is another one, a bit more developed. I noticed there is some annoying line limit when using the ce tags. Unless I made more mistakes it should be

[ce] 10 NADH + 2 FADH2 + 6 O2 + 24H+ + 24e- -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 14H+ + 24 e- + 12 H2O [/ce]

Simplifying to

[ce] 10 NADH + 2 FADH2 + 6 O2 + 10H+ -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 12 H2O [/ce]

But the overall combustion is

[ce] C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O [/ce]

so the apparent extra 6 waters have to be consumed elsewhere. I think one needs to look at all the details to see the mechanisms: water is consumed in ATP and GTP hydrolysis as well as in the Krebs cycle (2 or 3, or something like that), but it is also created during glycolysis. I don't have it all on top of my head, so I'd have to check to respond. For example

[ce] ATP + H2O <-> ADP + H2PO4^- + H+ [/ce]
[ce] GTP + H2O <-> GDP + H2PO4^- + H+ [/ce]

But sometimes I find bio books confusing because shorthand is often used. Maybe someone else has it on top of their head, or I'll try to respond later. /Fredrik
  10. > So does the 10 NADH per glucose hold a total of 10 H, and the 2 FADH2 hold a total of 4 H?

Yes, you can say that, but it's hydrogen ions (H+), not atoms (H), so balancing takes more than just bundling the H's together with O's to get water. In biochemistry the explicit reaction formulas are often complex, and various shorthand notations are used. And to see the detailed reaction mechanism you need to go beyond the summation formulas and analyse each catalytic step involved, as well as the transport of species between cytosol and mitochondria. The two redox pairs we deal with are

[ce] (1) NAD+ + H+ + 2e- <-> NADH [/ce]
[ce] (2) FAD + 2H+ + 2e- <-> FADH2 [/ce]

With oxygen we have

[ce] (3) O2 + 4H+ + 4e- -> 2H2O [/ce]

During glycolysis and the Krebs cycle, (1) and (2) move in the forward direction, and you end up with 10 NADH and 2 FADH2 that need to be put back in their oxidized states by running (3) to the right.

[ce] 10 NADH + 2 FADH2 + 6 O2 + 24H+ + 24e- -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 14H+ + 24 e- + 6 H2O [/ce]

Cleaning up

[ce] 10 NADH + 2 FADH2 + 6 O2 + 10H+ -> [/ce]
[ce] -> 10 NAD+ + 2 FAD + 6 H2O [/ce]

If you wonder what's up with the 10 H+, they are also generated: 6 per glucose during the Krebs cycle and 4 per glucose during glycolysis plus the PDH step, basically one per NADH. /Fredrik
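As a sanity check on the oxygen demand in the half-reaction bookkeeping above: 10 NADH deliver 10 × 2 = 20 electrons and 2 FADH2 deliver 2 × 2 = 4, for a total of

[math] 10 \times 2e^- + 2 \times 2e^- = 24\, e^- = 6 \times 4e^- [/math]

and since each O2 in (3) accepts 4 electrons, exactly 6 O2 are required per glucose.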
  11. (*) In short:

"FADH2 -> FAD" requires ½ O2
"NADH -> NAD+" requires ½ O2

For details see http://en.wikipedia.org/wiki/Oxidative_phosphorylation

Glycolysis (glucose -> 2 pyruvate) gives you 2 NADH per glucose. The PDH conversion (pyruvate -> acetyl-CoA) gives 2 more NADH per glucose. The Krebs cycle gives you 3 NADH and 1 FADH2 per acetyl-CoA. In total you get a "redox shift" of 10 NADH + 2 FADH2 per glucose. To restore this via the oxidative phosphorylation (*) you need 6 O2 per glucose. /Fredrik
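Tallying that carrier count per glucose explicitly:

[math] \underbrace{2}_{glycolysis} + \underbrace{2}_{PDH} + \underbrace{2 \times 3}_{Krebs} = 10 \;\mathrm{NADH}, \qquad 2 \times 1 = 2 \;\mathrm{FADH_2} [/math]

and since each carrier needs ½ O2, the total is (10 + 2) × ½ = 6 O2 per glucose.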
  12. I don't see that as something that invalidates the good sides, though. There are "idiots" everywhere, free or not. It might even be a good lesson for us not to "believe" everything we read, even if it's expensive or the author claims to be some authority. Anyone who wants real understanding should take it as a suggestion only and try to verify it, and the idiots will often sort themselves out quickly. Anyone who considers themselves incapable of making some sort of basic evaluation of the information is probably a danger to themselves to start with and will have to learn the hard way. I've just added details on some topics that were very thin or nonexistent, but from what I remember mostly on the Swedish Wikipedia. /Fredrik
  13. In a certain sense I agree with the attitude of "what is the difference?". But to give an opinion on the question of whether the sensation of free will we all have contradicts what I know about QM and reality, my personal answer is clearly no; I see no contradiction either. IMO one key is that some information is by its nature fundamentally entangled with other things and can't be shared - it's consumed and becomes part of a self, or an observer's information set. Thus some pieces of information are available to no one but me. They are part of what defines me and distinguishes me from everyone else. To me at least, that all fits well within a modern interpretation of quantum worlds. To me there is no difference between what I can't know and what I don't know, so to speak. I make my own decisions based on the information at hand, which IMO is as scientific as it gets.

I think an interesting thing would be to find the framework that unifies decision making as per a human brain with basic particle scattering phenomena. In my view they have a lot in common. A scattering experiment, a particle collision, is in a certain way a simple example of resolving conflicting information. You have information about an incoming packet about to hit a target. We wonder: what will happen? That *something* will happen we are sure of, because nature always finds a way. So what is the logic of resolving the conflict? I think this can be generalized into something that will also cope with generalized decision making and information processing, such as in the human brain. But that seems to lie in the future. So I am one of those who think that not classical QM, but rather its extension and future physics, will also give insight into brain function. At least if the world takes the turn I think it will, that's what we will see coming. This is why I personally advocate abstracted information-theoretic models. I think physicists and AI people should keep in contact. /Fredrik
  14. I'm not sure I understood your suggestion. But if you are referring to the so-called free energies (Gibbs, Helmholtz) in thermodynamics, you are correct that they are strongly related to entropy. If something is in a highly ordered (= unlikely) state, it's a fair guess that such a system is likely to degrade in time, or at least that it will have a tendency to change into a more likely configuration; the rate at which it does so is another matter. That tendency can be tamed into a force that can be used to do work, like organisms digesting food and converting the free energy of the oxidation reactions into work (biosynthesis). http://en.wikipedia.org/wiki/Gibbs_free_energy The free energy concept is what is most commonly used in chemistry and biology, and it's heavily entangled with entropy as well. In the extended treatment of QM that I look for, the conservation laws tend to apply to things like information. This can also explain the logic in other violations. Still, I'm not sure if that's what you meant, but it seems like something thereabouts. /Fredrik
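For reference, the standard definition of the Gibbs free energy change makes the entropy connection explicit:

[math] \Delta G = \Delta H - T\Delta S [/math]

A process can proceed spontaneously (and in principle be harnessed for work) when [math]\Delta G < 0[/math], so a large enough entropy increase [math]\Delta S[/math] can drive a reaction even against an unfavourable enthalpy change.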
  15. If you continue to pursue the abstract interpretations, there are different, more abstract interpretations of entropy. There is something called information entropy, which can be said to be a measure of our ignorance of the system (which could be any variable, or a data stream). It's used in various fields outside physics, for example in data processing, where the entropy can also be thought of as information content. This way, entropy is given a cleanly abstract information-theoretic meaning, without physical references. It's all about information, and ultimately the dynamics is about updating your information and making guesses based on incomplete information. I think many of those who work along these lines today hold an opinion somewhere along the lines that reality is about formulating prognoses based on admittedly incomplete information. You know certain things, and you need to make an educated guess based on what you've got. And this exact procedure can be given a scientific formulation. The basic idea of such an approach is to start formulating physics from scratch, from much more information-theoretic first principles. Many laws of physics *might* then be deducible from such a dynamical model, in terms of an information mechanics. Elementary ingredients are, for example, logical entanglements (that are there, but seemingly hidden), and the merging of conflicting information, which happens all the time. What is the scientific way of judging conflicting information? The reference, or "right order", you ask for is the case of perfection, or complete information, where there is zero uncertainty. /Fredrik
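The standard quantitative form of this is Shannon's information entropy over a probability distribution [math]p_i[/math]:

[math] S = -\sum_i p_i \ln p_i [/math]

It is zero exactly when one outcome has probability 1 (complete information) and is maximal for the uniform distribution (complete ignorance), which is what licenses reading entropy as a measure of missing information.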
  16. There is a more exact meaning of disorder in the statistical definition of entropy. You can look up the canonical ensembles: http://en.wikipedia.org/wiki/Canonical_ensemble There you see the clear connection between "disorder", microscopic degrees of freedom, and uncertainty. In this context the system typically has two levels of description, a macrolevel and a microlevel. Certain variables are taken to define the macrolevel, for example energy and temperature. If the entropy is zero, it means the macrostate defines the microstate. The idea is that the microlevel is generally subject to disturbances, and from plain probabilistic reasoning, if there is no prior asymmetry between the microstates (i.e. all the microstates are a priori equally likely), an arbitrary unknown disturbance is more likely to increase the entropy than to decrease it. That's the second law. Low entropy = low uncertainty. However, by the same token, low uncertainty is also less likely. If you throw dice to come up with the microstates, there is a clear relation between the probability of an outcome and its entropy. This is the whole beauty of the concept.

While this is good stuff, it's still important to understand that in a certain sense the uncertainty in QM is of another kind. The x-p uncertainty can be thought of as being the result of a logical entanglement. What can be done with the entropy machinery in the quantum domain is to consider the future as a series of equiprobable surfaces. These surfaces can be deduced from entropy principles, since a high-entropy change is more likely than a low-entropy change. And unlike in thermodynamics, you can include the kinetics in this reasoning. This is the basic idea behind how an unknown or random microphysics can give rise to nontrivial macrophysics. And this idea can be extended and generalized. I think there are many interesting things going on in these fields even today. Some people are working on deriving general relativity from such generalized entropy principles. This is close to my interests too. The concept of relativity can be reduced to such principles plus the concept of a prior probability distribution; clearly, different observers have different priors. The beauty of this approach, if successful, is that it will probably give a completely new insight into GR and into what the most natural extension to the quantum domain is. This is where I personally expect a lot of future physics. I'm currently working on these things, but I don't want to blur things up by posting incomplete math. I was going to post later for comments, but at this point I think it would come out as baloney, so I will keep it for maturation. I've got a few key points to solve first. /Fredrik
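For completeness, the canonical ensemble referred to above assigns each microstate [math]i[/math] with energy [math]E_i[/math] the probability

[math] p_i = \frac{e^{-E_i/kT}}{Z}, \qquad Z = \sum_i e^{-E_i/kT} [/math]

which is exactly the distribution that maximizes the entropy [math]S = -k\sum_i p_i \ln p_i[/math] at a fixed average energy, tying the "disorder" picture to the probabilistic one.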
  17. Yes that's correct. /Fredrik
  18. What you are looking for is the "molar mass" - the mass of one mole of the compound. The molar mass of sucrose is 342.3 g/mol. http://en.wikipedia.org/wiki/Sucrose With some sugars you have to compensate for the moisture when you weigh them out (glucose, for example), but I think sucrose is normally pretty dry. So for a 1 M solution: weigh out 342.3 g (dry) sucrose and dilute to 1 liter final volume with water. /Fredrik
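More generally, for a target concentration c and final volume V, the mass to weigh out is

[math] m = c \cdot V \cdot M [/math]

so, for example, 0.5 L of a 2 mol/L sucrose solution needs 2 mol/L × 0.5 L × 342.3 g/mol = 342.3 g (the concentration and volume here are just illustrative numbers).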
  19. I guess the label "positive thinking" isn't all that descriptive; it's more of an everyday term. I think what we are discussing, or should be discussing, here is what is applied in what's called cognitive behavioural therapy (CBT). As YT suggested, it goes beyond superficial "thinking of bunnies", which alone hardly works; this is what I called "baloney". I would say that the idea of the CBT principles is that you need to not only accept your thoughts but reveal them. Catch your brain with its pants down, thinking bad thoughts! And this experience is itself self-rewarding, not only because you break a destructive habit, but because of the increased control and awareness! This can also be a positive feedback, and provide a constructive, continuous way out of the irrational thoughts. When you start to be able to analytically disassemble your own thinking, you may reach a state where your analytic side laughs at your irrational thoughts as they come... you have sort of developed a mental antibody to your irrational thoughts that disarms them as they show up. And by the end of the journey you have actually learned something.

I'm not a psychiatrist, but I'm a person who always thinks a lot, so these are technically my personal opinions. I have applied similar techniques to other fields; I have often tried to apply cognitive techniques when I solve other problems. And to study your brain in action, so to speak, is pretty interesting, because you'll soon see that the brain, while excellent at making associations, seems (to me at least) to make repeated use of certain recurring patterns of attacking things. I have also come to realise that I use emotions too when I solve physics problems; one may like to think it's just hard logic, but it's not. The logic still rests on a fuzzy ground, and new logic emerges from the fuzz too. I've even tried to draw parallels between the construction of the universe and my own brain, and I am convinced there are similarities between creation and learning. This is what I'm currently working on. /Fredrik
  20. I'm totally with you here Gibs, it works the same for me. There is no way you can fool yourself that easily; that would be too easy. I often manage to find just that rational answer you refer to, though my way of thinking is probably individual. I have found a way to be intrigued by my own reactions at times, and to be able to sort of step outside and analyze myself, observing my own mode of thinking in a neutral manner. That way I manage to get myself out of the moody irrational emotions into a more analytic mode, and even learn something new - not about something specific, but about the brain and reality. I don't know of any studies, but I think you are not alone. To think it would be as simple as "think about naked women or candy" and life gets good is just baloney IMO. We are too smart for such tricks to work. /Fredrik
  21. When I think of "positive thinking", to me it reflects the fact that you should focus on the constructive, not the destructive. I think everyone has to find a way of thinking that works for their personality. I like challenges, and when things look bad I can take it as a personal challenge to do what seems impossible at the time. At that point the darkness turns into an opportunity I just can't miss. It is by no means so simple that straight success is the best lesson; sometimes bad things can be valuable too. I think that makes sense. However, to just tell someone else to "think positive" doesn't work. The meaning of "positive thinking" is something each one of us has to find out for ourselves, because the words themselves say almost nothing. There are some things we have to do on our own; other people can be inspiration and help, but they cannot do it for us. But not all people are like me, so what works for me doesn't work for everyone. /Fredrik
  22. I'm awaiting edit #2, where you exclaim that we don't have to answer that.

> d = distance between the objects

Look closer at this. /Fredrik
  23. Interesting. I think the productlog isn't counted as an elementary function, though symbolic notation is always readable. I did a quick check, and that solution seems consistent with my suggested inverse. The productlog is defined as the inverse of [math]xe^x[/math], and if you use that relation it can quickly be seen to be consistent with the expression I found, which is the inverse of ajb's expression.

[math] y(t) = -C2 -C2 \; ProductLog \left( \frac{e^{-1 - \frac{t-C1}{C2}}}{C2}\right) [/math]

is, by definition of the productlog, the solution to

[math] \frac{e^{-1 - \frac{t-C1}{C2}}}{C2} = - \frac{y(t) + C2}{C2} e^{-\frac{y(t) + C2}{C2}} [/math]

Taking the logarithm of both sides,

[math] -1 - \frac{t-C1}{C2}= - ln \left\{ y(t) + C2 \right\} -\frac{y(t) + C2}{C2} [/math]

and after redefining constants this is consistent with the inverse form you can easily derive, in case the exercise requires proof of the productlog version. /Fredrik
  24. Probably homework? But anyway... here are some hints: instead of looking for y = f(t), one can look for t = f(y).

[math] yy'' +(y')^2 = y' [/math]

Substitute w := yy' - y and the equation transforms into w' = 0, thus w(t) = a:

[math] yy' -y = a [/math]

Then again substitute z := y + a and you'll get

[math] \left\{1-\frac{a}{z}\right\}dz = dt [/math]

And you should find a solution

[math] t = y+b-a\, ln(y+a) [/math]

where a and b are constants. You might need to double-check the steps I omitted, because I could be wrong. /Fredrik
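Since the intermediate steps are omitted above, here is a quick symbolic check (Python/SymPy, illustrative only) that the implicit solution t = y + b - a*ln(y+a) really satisfies yy'' + (y')^2 = y'. From t(y), dt/dy = y/(y+a), so y' = (y+a)/y, and y'' follows by the chain rule:

[code]
import sympy as sp

y, a = sp.symbols('y a', positive=True)

yp = (y + a) / y            # y' = dy/dt, expressed as a function of y
ypp = sp.diff(yp, y) * yp   # chain rule: y'' = d(y')/dy * dy/dt

residual = sp.simplify(y*ypp + yp**2 - yp)
print(residual)             # prints 0, so the ODE is satisfied
[/code]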
  25. The problem, I think, is in part to define the optimum. Optimize what? Length of your life? Intelligence? Muscle strength? Resistance against infections? Speed? /Fredrik