
The Philosophy of Science


Resha Caner


Philosophy Vs Science?

 

It is philosophy that has to change its ideas when faced with scientific advances. Just think of general relativity and quantum mechanics: they have hugely changed our understanding of nature, and thus natural philosophy.

 

It makes me wonder just how much philosophy the working scientist (I'll include mathematicians here) uses day to day. It would be a great shame if philosophical ideas hindered scientific advancement, in essence hanging on to old ideas rather than embracing the new.

 

And so I wonder: does science really need philosophy, or is it philosophy that really needs science?


Perhaps scientists, being only human, react as anyone else would when their sacred cow is threatened.

 

No doubt. I missed your earlier sarcasm, so I can back up and agree with you.

 

Mokele & ajb, I think you miss what I mean by the "philosophy of science". The word "philosopher" often conjures an image of Saruman sitting in Orthanc contemplating "cogito ergo sum", i.e., the possibilities of existence. That's not what I'm talking about.

 

And, I take some pride in thinking that engineers probably understand the small details better than scientists. I can recall standing in a swamp in Georgia watching a machine drag logs to a mill as it sank up to the cab in mud. Then I go back to the office and listen to someone rant about contamination ruining a sensitive component because they caught an assembler smoking in the plant. Oh my! That tiny bit of ash from the cigarette is going to destroy the machine, but slogging through a swamp won't. Yeah, right.

 

So, what I'm talking about are the attempts by science to claim objectivity and "it just is". For example, gravity is real, well, just because it is. Go drop a ball and observe it for yourself. That's not good enough, because Newton's experiments still work today - even with our improved measurement accuracy. It wasn't that we've invalidated Newton's experiments. In fact, people were well aware of the precession of Mercury long before Einstein came along. So why did they continue to amend and support Newton? Why didn't someone find relativity earlier? If all we're going to do is amend existing science, we could go back to Ptolemy and amend what he did.

 

It was not objectivity that led to the new discovery, but creativity - a blatant subjectivity. That opposes what many claim science is.


Mokele & ajb, I think you miss what I mean by the "philosophy of science". The word "philosopher" often conjures an image of Saruman sitting in Orthanc contemplating "cogito ergo sum", i.e., the possibilities of existence. That's not what I'm talking about.

...

So, what I'm talking about are the attempts by science to claim objectivity and "it just is". For example, gravity is real, well, just because it is. Go drop a ball and observe it for yourself. That's not good enough, because Newton's experiments still work today - even with our improved measurement accuracy. It wasn't that we've invalidated Newton's experiments. In fact, people were well aware of the precession of Mercury long before Einstein came along. So why did they continue to amend and support Newton? Why didn't someone find relativity earlier? If all we're going to do is amend existing science, we could go back to Ptolemy and amend what he did.

 

It was not objectivity that led to the new discovery, but creativity - a blatant subjectivity. That opposes what many claim science is.

 

I definitely see your point. Some of the later posts in this thread seem to contain the idea that science and philosophy are somehow in contradiction, which naturally feeds tension, as in philosophy vs science, as if we were talking about science vs magic. But such a contradictory view wasn't at all what I had in mind. I don't see the contradiction at all. And if I understand Resha, it wasn't what he was talking about either.

 

I assumed we were talking not about philosophy in general, but about the philosophy of science in this thread. I haven't ever seen any sensible person suggest that the philosophy of science and science itself are in contradiction. On the contrary, they historically have common roots and foundations: a quest for knowledge, the acquisition of knowledge, and what knowledge IS. A particular CHOICE of philosophy of science, say Popper's, is a particular choice of rational method for acquiring true knowledge. Popper's pain was the notion of induction, which he couldn't stand; he thought he could turn the inductive process into a deductive one. IMO he didn't succeed very well, but that's a different discussion.

 

Another common misconception is that philosophers are invariably conservative realists who fail to make sense of quantum weirdness etc. I often see those who refuse to face philosophical questions as far more conservative, avoiding difficult questions. Labelling such questions "philosophy" is an escape.

 

IMHO at least, a true modern philosopher of science isn't suggesting that science is crap; the questioning of science and the scientific process is done in a constructive way, to improve the efficiency of progress by insight into things like hypothesis generation (which Popper did not address). No sensible philosopher would suggest trashing science and going back to armchair contemplation. The idea, as I see it, is to _evolve_ science and its method. In this respect, those who refuse to ponder this possibility are the true conservatives.

 

/Fredrik


Maybe my reply was too confrontational. I didn't mean it that way (though my enthusiasm is often interpreted as such). fredrik probably gave a much more neutral rendering of what I meant.

 

And I agree with you, fredrik, that what Popper did was incomplete. I think his accomplishments were spectacular (his attacks on induction are most useful), but I would never call myself "Popperian". I don't yet subscribe to any one particular line of thought, though I'm liking what I read about the Bayesians so far.


And I agree with you, fredrik, that what Popper did was incomplete. I think his accomplishments were spectacular (his attacks on induction are most useful), but I would never call myself "Popperian". I don't yet subscribe to any one particular line of thought, though I'm liking what I read about the Bayesians so far.

 

Maybe you would also like E. T. Jaynes' book "Probability Theory: The Logic of Science".

 

Instead of the ordinary, flat axiomatic approach, he tries to argue from the concept of degree of plausibility and arrives at essentially the ordinary rules of probability. He then elaborates on applications of Bayesian probability and uses it as a rule of reasoning in physics. Ariel Caticha is someone working in this tradition. He even has the idea that GR might be derivable as a consequence of rational reasoning upon incomplete information. In this view the "beauty" is a simple principle of rational action upon incomplete information: given a set of players implementing that strategy, we get an expectation on the total dynamics, if we expect each part to behave rationally in a subjective Bayesian sense.

 

This is partly interesting, but one mistake I think Jaynes makes is that early in his book, he makes the abstraction that a degree of plausibility is represented by a real number between 0 and 1. I think the real numbers are non-trivial, and really themselves refer to limits. That's the point where he accepts continuum reasoning. I prefer to ask what spectrum of degrees of plausibility is distinguishable. And then I lean towards an abstraction in which the entire resolution of plausibility depends on a parameter, which you can associate with the observer's information capacity, because these degrees can only live in the internal state. This is exactly where the philosophy of mathematics enters the picture as well. How do you construct the continuum in a rational, accessible way by finite procedures? Maybe the continuum contains redundancies that make it less ideal as a starting point? After all, in mathematics too, as far as I have always seen it, you first define the natural and rational numbers before you define the real ones. There is a logic in its construction that may or may not have analogies to physical evolution. I think mathematics partly relates to reality, but that doesn't mean I think of it in realist terms, because I don't think of reality in traditional realist terms.

 

/Fredrik

 

It was not objectivity that led to the new discovery, but creativity - a blatant subjectivity. That opposes what many claim science is.

 

This somehow relates to the idea of reasoning upon incomplete information. Since there is no deductive path towards progress, no matter how much Popper would have wanted there to be one, some element of chaotic, irrational diversity/variation is unavoidable and probably key to progress. This then needs to be tamed by selective mechanisms. The way I think of it, variation naturally appears as a result of the failure to establish a rational reasoning with certainty. It's the uncertainty in the rational reasoning itself that provides variation. But the overall rationality (albeit fuzzy) provides stability and self-organisation.

 

If you see it this way, there really doesn't have to be a major conflict between objectivity and subjectivity. Instead, objectivity can be thought of as emergent in a community of players ruled by subjective rationality. Their interaction alone provides selection for objectivity.

 

This is the philosophical basis for how I reason about symmetry in physics as well. Symmetry is a kind of objectivity, and hence I think symmetries are emergent. I have serious doubts about the belief in fundamental symmetries. If you object to this, and claim that nature is full of symmetries, then add to this the problem of extracting the symmetry, by means of real experiment, from experimental data. There is always measurement and statistical uncertainty. I think dismissing these as technical problems is a mistake.

 

/Fredrik

 

He even has the idea that GR might be derivable as a consequence of rational reasoning upon incomplete information. In this view the "beauty" is a simple principle of rational action upon incomplete information: given a set of players implementing that strategy, we get an expectation on the total dynamics, if we expect each part to behave rationally in a subjective Bayesian sense.

 

To understand how this can possibly relate to anything in GR, the idea is that one can consider the "inertia of information", in the sense that, statistically or probabilistically, contradicting evidence is weighed against the prior information, and thus any contradicting information only slowly deforms the initial information. If all parts of the system implement this, then one should get a complex dynamics in which there are relative limits on how fast certain things can change, simply because of the accumulation of new evidence required to change a previously established opinion. Also, given that all information is of finite accuracy, ALL prior structures are in principle subject to evolution; there is no fixed background information. It's not that there is no background (the background is essential); it's just that the background is dynamical and relative, as opposed to fixed and universal.
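The "inertia of information" idea can be illustrated with a tiny Bayesian toy model (my own sketch, not anything from Caticha or Dreyer; the function name is mine): treat prior information as accumulated pseudo-observations, so the same amount of contradicting evidence deforms a heavily supported opinion far less than a weakly supported one.

```python
# Toy Beta-Bernoulli model of "inertia of information" (hypothetical
# illustration): prior counts act as accumulated evidence that new,
# contradicting observations must be weighed against.
def posterior_mean(prior_heads, prior_tails, new_heads, new_tails):
    """Posterior mean belief that a coin lands heads; prior counts are inertia."""
    a = prior_heads + new_heads
    b = prior_tails + new_tails
    return a / (a + b)

# Weak prior (2 pseudo-observations): 10 contradicting tails move the
# belief a lot, from 0.5 down to about 0.083.
weak = posterior_mean(1, 1, 0, 10)
# Strong prior (2000 pseudo-observations): the same 10 tails barely move
# it, from 0.5 down to about 0.4975.
strong = posterior_mean(1000, 1000, 0, 10)
```

The relative limit on how fast the belief can change is set entirely by how much prior evidence it rests on, which is the qualitative point of the paragraph above.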

 

This is the idea that has attracted me: to generalize the principle of relativity not just to spacetime events, but to information-events in general. And the spacetime manifold in that context must be defined by the current state of the observer's "information microstructure".

 

I recently found that a similar, though more constrained, version of this idea is also pursued by Olaf Dreyer, who labels his idea "internal relativity", and his core ideas are close enough to the above for a starter. The idea is that this reasoning should bring insight into the problem of what is observable, and how that relates to an inside observer, hence the name internal relativity.

 

Some quotes from one of Dreyer's papers, as a teaser:

 

"We claim that the internal point of view has not been taken far enough. If one strictly adheres to it, one finds not only special relativity but also general relativity. This is the central novelty of Internal Relativity."

 

"In our view, matter and geometry have a more dual role. One can not have one without the other. Both emerge from the fundamental theory simultaneously."

 

"Our objection to this setup is that one does not have direct access to the geometry; we use matter to infer lengths and times. We believe it is desirable to have a theory where there is no geometry without matter, instead geometry and matter arise simultaneously."

 

"This problem is called the problem of time. The approach presented here shows this to be an unnecessary complication that arises because of an unphysical idealization that does not take into account that geometry and matter arise together. By neglecting one part, matter, and just focusing on the other part, geometry, one introduces the problem of time. The problem of time is the price one pays for not realizing that pure gravity is an unphysical idealization."

-- http://arxiv.org/abs/0710.4350

 

All these questions have strong philosophical dimensions.

 

He is also working on a project called "quantum space II", for which he received funding from FQXi:

 

Technical Abstract:

The research in this proposal is concerned with the foundations of quantum mechanics and with quantum gravity. On the first subject, we argue that three misguided steps in the standard understanding of quantum mechanics prevent us from solving the measurement problem. The first step is the notion of classical objects. We argue that the classical world can be understood as consisting of special quantum mechanical states. The second step is the tension between the deterministic nature of the Schroedinger equation and the observed probabilistic nature of quantum mechanics. We show that with our definition of classicality probability is a necessary consequence. The last step is that we assign properties to microscopic objects that they cannot have. We show that these three steps are the problems that make quantum mechanics so puzzling. Taken together our solutions to the three problems constitute a solution to the measurement problem. In quantum gravity we continue the program of internal relativity. We propose to derive geometry from the low-lying excitations of a solidstate system. We show how Newtonian gravity naturally arises in such a system. We furthermore propose to apply the theory to the early universe and show that we can reproduce the observed spectrum of the cosmic microwave background radiation. "

-- http://www.fqxi.org/large-grants/awa...ls/2008/dreyer

 

This is yet another guy working in what I think is a modern spirit, one that also happens to be very philosophical. If you read them, the arguments are very much philosophical, but it's hard to deny that they are good.

 

/Fredrik


Long, interesting thread. Where then to start? I'll just dive right in with the last post:

 

This is partly interesting, but one mistake I think Jaynes makes is that early in his book, he makes the abstraction that a degree of plausibility is represented by a real number between 0 and 1. I think the real numbers are non-trivial, and really themselves refer to limits.

That's one way to look at the reals: as the set of all Cauchy sequences of rationals. The axiomatic approach simply posits that there exists a complete Archimedean ordered field, shows that any two complete Archimedean ordered fields are isomorphic to one another, and shows that the sets defined by Cauchy sequences or Dedekind cuts form a complete Archimedean ordered field. Up to isomorphism, there is exactly one complete Archimedean ordered field.
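The Cauchy-sequence picture can be made concrete with a short sketch (my own illustration, not from the post): Newton's iteration generates exact rationals forever, and the sequence is Cauchy; the real number sqrt(2) is, on this view, the limit that the sequence defines, even though no term of the sequence is irrational.

```python
from fractions import Fraction

# A Cauchy sequence of rationals "pointing at" sqrt(2): Newton's iteration
# x_{n+1} = (x_n + 2/x_n) / 2, started at 1, stays rational at every step.
def sqrt2_approx(n):
    x = Fraction(1)
    for _ in range(n):
        x = (x + 2 / x) / 2
    return x

for n in range(1, 6):
    x = sqrt2_approx(n)
    # x*x - 2 shrinks toward 0 (quadratically), so the sequence is Cauchy.
    print(n, x, float(x * x - 2))
```

The first few terms are 3/2, 17/12, 577/408, ..., each an exact rational; only the limit object is the irrational "real".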

How do you construct the continuum in a rational, accessible way by finite procedures?

Another way to define the reals is as the set of all decimal expansions. That this simple approach is valid is quite amazing. Even engineers can understand it! (Denigrating a group is in general unacceptable, politically incorrect speech. It is acceptable so long as it is self-denigration; I have been working as an engineer for a long time.)

 

The reals between zero and one (inclusive) are even easier:

 

[math][0,1] = \lbrace x : \exists \;\lbrace d_0,d_1,\cdots,d_n,\cdots\rbrace ,\; d_r\in\lbrace 0,1,\ldots,9\rbrace : x=\sum_{r=0}^{\infty}d_r\,10^{-(r+1)}\rbrace[/math]

 

In short, the collection of all numbers of the form 0.d0d1...dn..., where each dr is an integer between 0 and 9 (inclusive).

 

This is not a finite procedure, and it ignores problems such as uncountability. But it works and is easy to grasp. (The engineer's ultimate tests.)
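The decimal-expansion definition can also be seen at work in a few lines of code (my own hypothetical sketch): truncating the series after n digits pins the number down to within 10^-n, which is the finite-procedure approximation to the infinite expansion.

```python
# Partial sums of x = sum_{r>=0} d_r * 10^{-(r+1)}, i.e. x = 0.d0 d1 d2 ...
# Truncating after n digits determines x to within 10^{-n}.
def partial_sum(digits):
    return sum(d * 10 ** -(r + 1) for r, d in enumerate(digits))

digits = [3, 3, 3, 3, 3, 3]        # 0.333333, heading toward 1/3
approx = partial_sum(digits)
# The truncation error is bounded by 10^{-6} for six digits.
assert abs(approx - 1 / 3) < 10 ** -len(digits)
```

Only the full infinite expansion picks out the real number exactly; every finite prefix names an interval, not a point, which is precisely the tension fredrik raises below.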

 

Since there is no deductive path towards progress, no matter how much Popper would have wanted there to be one, some element of chaotic, irrational diversity/variation is unavoidable and probably key to progress.

Back to philosophy of science. Popper was on the right track regarding falsifiability as a key distinguishing factor between science and non-science. Where he went wrong was in viewing falsification as a cornerstone of science.

 

In a very real sense, all science is inherently false in the sense that the models built by science are only approximately and provisionally correct. The rub: Just because a scientific model has been falsified in some regime (e.g., Newtonian mechanics) does not mean it is "always false". There is, in my mind, a big distinction between always false models (e.g., the caloric theory of heat, Aristotelian physics) and provisionally correct models (e.g., Newtonian mechanics, quantum mechanics, relativity). Newtonian mechanics has been falsified in the realms of the very small, very fast, and very large. Some future Einstein will find flaws in quantum mechanics and relativity.

 

Just because Newtonian mechanics has been falsified does not mean it is not valid. We still teach it and still use it; many branches of engineering are applied Newtonian mechanics.


Nice to have more join the thread!

 

Long, interesting thread. Where then to start? I'll just dive right in with the last post:

 

I too agree that this discussion contains many parts. My addition to the thread was mainly to support the OP in his thinking that the philosophy of science is important, and that many important fundamental questions could easily be classified as such, or tangent to it.

 

A comment on this

 

...

This is not a finite procedure, and ignores the problems of uncountable numbers. It works and is easy to grasp. (The engineer's ultimate tests.)

 

I am totally with you that there is no real problem in defining the real numbers in mathematics. Maybe I was unclear: I don't see this as a purely mathematical problem, if by mathematics you mean consistency of language. It's consistent enough for me on that level. My objection is about the uniqueness and fitness of the language in the current context: physics. If we are looking for some kind of isomorphism between physics and mathematics, then one wonders what the physical basis is for representing an infinite sequence in the first place. It takes an infinite amount of information to do so.

 

Even from the pragmatic engineering point of view, you always work with finite precision. A real number in a computer is represented by, say, a 32- or 64-bit floating-point number.
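The finiteness is easy to demonstrate. In Python (whose floats are IEEE-754 doubles, like most languages) only finitely many "reals" are representable at all, and familiar decimals like 0.1 are not among them:

```python
import math
import sys

# A Python float is an IEEE-754 double: 64 bits, 53 of them mantissa.
print(sys.float_info.mant_dig)   # 53

# 0.1, 0.2 and 0.3 are not exactly representable, so this is False.
print(0.1 + 0.2 == 0.3)

# The gap between 1.0 and the next representable float is 2**-52:
# between any two machine "reals" there is an uncrossable gap.
print(math.ulp(1.0))
```

So the engineering practice fredrik appeals to really does work with a finite, discrete subset of the continuum.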

 

If we use these things to work out constraints on physics, by the kind of reasoning often used in theoretical physics (consistency requires this, symmetry requires that, etc.), then the above is far more than a technicality to me.

 

I think what is sometimes said in the quest for simplicity in physics is true: given the right language or representation, things can be beautiful. So maybe the quest for beautiful laws of nature is the quest for the choice of language that allows an "economic" way of expression; i.e., simplicity is relative to language.

 

Also, about "finite information": this becomes even more acute when discussing quantum phenomena and the quest for quantum gravity. Here I think the fitness of some of the language used is in question, the continuum included, IMO at least.

 

/Fredrik

 

what the physical basis is for representing an infinite sequence in the first place. It takes an infinite amount of information to do so.

 

I can't speak for Dreyer's true view, but speaking for myself, this objection relates to this statement of his:

 

"The last step is that we assign properties to microscopic objects that they cannot have."

 

In particular, this applies to the true intrinsic information system A has about system B. Maybe the continuum is simply too big. I personally think it is, but since I don't have any answers, all I insist on is raising the question.

 

The usual measures of intrinsic information, such as various correlation measures and various entanglement measures, miss one very important point: those MEASURES sit in an external context. They are not true intrinsic measures. Compare with extrinsic vs intrinsic curvature in geometry. We still do not have a proper intrinsic formulation of information theory and information evolution. This is a problem, IMO.

 

/Fredrik


Sigh. My reading list just gets longer and longer.

 

This is partly interesting, but one mistake I think Jaynes makes is that early in his book, he makes the abstraction that a degree of plausibility is represented by a real number between 0 and 1. I think the real numbers are non-trivial, and really themselves refer to limits ...

 

You lost me. You guys are going over my head now. Can you try again to explain your objection, or is it too deeply embedded in the math?

 

This somehow relates to the idea of reasoning upon incomplete information ... This then needs to be tamed by selective mechanisms.

 

This part I got. I've heard this idea before, and it's a nice idea, but unprovable. Why? Because we don't know what we don't know. We can find examples where selection has tamed the subjective, but we don't know what other subjective aspects will be forever untamed because ... well, because we don't know.

 

In a very real sense, all science is inherently false in the sense that the models built by science are only approximately and provisionally correct. The rub: Just because a scientific model has been falsified in some regime (e.g., Newtonian mechanics) does not mean it is "always false". There is, in my mind, a big distinction between always false models (e.g., the caloric theory of heat, Aristotelian physics) and provisionally correct models (e.g., Newtonian mechanics, quantum mechanics, relativity). Newtonian mechanics has been falsified in the realms of the very small, very fast, and very large. Some future Einstein will find flaws in quantum mechanics and relativity.

 

Just because Newtonian mechanics has been falsified does not mean it is not valid. We still teach it and still use it; many branches of engineering are applied Newtonian mechanics.

 

Amen. I like your summary here.


It was not objectivity that led to the new discovery, but creativity - a blatant subjectivity. That opposes what many claim science is.

Who claims science is strictly objective? The lay press (and scientists, after the fact) like to present science and mathematics as a completely logical, nicely wrapped-up-with-a-pretty-bow process. I disagree. The scientific process requires creativity to advance. A frequently asked category of question on this forum is "How did <some brilliant scientist or mathematician> come up with the idea for <something Kuhn would call a paradigm shift>?" The answer is: <some brilliant scientist or mathematician> was insanely creative. Connecting the dots is an art. Science is the pretty picture that results, coupled with observations indicating that the pretty picture is real.

 

A much bigger problem in my mind is the problem of theory-laden observation. Is this a bunch of pelicans

[image: Hans2.JPG]

or a bunch of antelopes?

[image: Hanson3.JPG]

(from http://www.loyno.edu/~folse/Hanson.html)

 

You might want to add "theory-laden observation" to your search list and your reading list.

 

Sigh. My reading list just gets longer and longer.

Just to make it a bit longer: I suggest you read up on the no free lunch theorems. (The author was probably a Heinlein fan.) There's a good bibliography here: http://www.no-free-lunch.org

 

Strictly speaking, the no free lunch theorems pertain only to machine learning techniques. However, a lot of what is said could be said of the scientific method in general. Is the scientific method a free lunch, or is there no free lunch in science?

 

This part I got. I've heard this idea before, and it's a nice idea, but unprovable. Why? Because we don't know what we don't know.

Not necessarily. There are some things that we know we don't know. There are ways to deal with these known unknowns. A significant sub-branch of multi-criteria decision making deals with decision making under uncertainty. What these techniques can't deal with are the things that we don't know that we don't know.

 

Donald Rumsfeld alluded to exactly this distinction between the known and unknown unknowns: "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know." This was deep, and a sign he read something of the philosophy of science (and recent literature in the field of decision analysis). The press was shallow to dismiss what he said as stupid.


Who says you cannot be creative and objective at the same time? Those qualities are not mutually exclusive. And the whole of science depends on it.

 

I posted these quotes elsewhere in this forum. They are worth repeating (besides, I like them a lot).

 

 

Harold Edgerton: "That's the nature of research--you don't know what in hell you're doing."

 

Albert Einstein: "If we knew what we were doing, it wouldn't be called research, would it?"

 

Isaac Asimov: "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!', but 'That's funny...'"

 

Albert Einstein: "I never came upon any of my discoveries through the process of rational thinking."

 

 

The creative process is neither rational nor objective. It is something else altogether. What distinguishes a scientist from a loon is that a scientist is able to (in fact, must) look at their non-rational inspirations with an objective and critical view.


Resha, just woke up. I can at least try to explain later. I would not say it's a technical objection deeply embedded in the math; I'd rather say it's a deeply conceptual and philosophical problem, one that also depends on a personal conjecture of mine. It's related to the problem of induction, but my escape is different from Popper's.

 

Everyone watches nature, sees structural symmetries and patterns, and tries to guess and generalize from them. Rather than looking at structural patterns, I look at behavioural patterns. This connects to the behaviour of rational reasoning upon incomplete information.

 

In short, the continuum constitutes a microstructure, and this influences reasoning upon incompleteness in an ambiguous way. I see the continuum as a limit of large information. When this doesn't apply, strange things happen. I have an idea of how quantum logic is emergent from classical logic, and a key to understanding it, instead of postulating it, is the constraint that "finite information" puts on rational reasoning. Rational reasoning isn't the same as deterministic reasoning; it's more about "rational guessing". The problem of how to make sense of this without resorting to hypothetical infinite experiment series is what I think of as constructive selection and evolution.

 

I think I can make sense of that if information is discrete. The continuum doesn't fit in my picture. I'll try to see if there is a clearer way of explaining later.

 

/Fredrik


Thanks for doing this. It may be that my understanding of the math you referenced is insufficient. If so, I have some responsibility to slog through it and learn more. I don't expect you to give me an online course.

 

Of course any help is greatly appreciated. You are kind to help me out.

 

My purpose with this thread was to gauge experience with the philosophy of science. I'm glad that several here seem knowledgeable and/or interested. I don't think that's the case in the general scientific (or engineering) community. Maybe I'll pose other questions in the future.


You lost me. You guys are going over my head now. Can you try again to explain your objection, or is it too deeply embedded in the math?

 

I tried to come up with something that might be clear, and started to type, but I ended up with even more stuff that I realised would probably just raise more confusion, so I'll skip that. Instead I'll just say that my opinion is that this is deeply complex, and that analysing it unavoidably leads straight into OTHER, related problems. This means that a proper explanation unavoidably gets lengthy, and in part raises more new questions than answers. Also, I am still working on the big picture myself. Related things are the foundations of probability theory as a basis for induction. This is certainly a start, but there is more to it than just Bayesian probability. The other thing is that the entire reasoning taking place also needs representation.

 

This in particular leads to very complicated and apparently unavoidable self-reference problems, which is the biggest problem of all. Now, what do you make of an unavoidable self-reference? I think the result is an evolution. The evolution itself is the solution to the problem of self-reference I am working on. This relates also to the physical basis of time. As you see, from my point of view at least, this is complicated. There is no simple one-liner for this.

 

Many would spot inconsistencies, but the inconsistencies of our own understanding (not of isolated things like the Kolmogorov axioms, but of the reality we apparently see) are forced upon us. Therefore to ignore them and call it philosophy is not an answer; it's just a declaration that you don't see the questions. This in itself is also somehow part of the game.

 

Now, with that disclaimer, here is what I think may be a simple way of illustrating the reasoning behind my objection to continuum probability.

 

I, like Jaynes, think of probability theory as, in a sense, the logic of science, or of reason. Jaynes' point is that if you start out asking philosophical questions about reasoning on incomplete information, you arrive, in a totally different way, at the same mathematics as ordinary probability theory. This is itself interesting, and I think it's good stuff, but the picture can be improved.

 

I am looking for a new, discrete version of probability theory, where the probability value, normally being the continuum between true and false, is instead generated combinatorially by the complexions of the information structure. This will recover the continuum in the n -> infinity limit, but if you take that limit early on, as a starting point, you lose track of the exact limiting procedures. As everyone knows from math, if you mix expressions and calculations (integrations etc.) containing series, and then take them to infinity, many complications take place. You must keep track of what you are doing here. One result of taking the limit early is that you end up with the problem of choosing a prior distribution from an uncountable set. This problem will not appear in what I advocate.
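A toy sketch of what such a combinatorial, discrete probability might look like (my own construction, purely illustrative, not fredrik's actual formalism): with an information capacity of n units, the only distinguishable plausibility values are the fractions k/n, and the continuum [0, 1] is recovered only in the n -> infinity limit.

```python
from fractions import Fraction

# Hypothetical "discrete probability" spectrum: with capacity n, the only
# distinguishable plausibility values are k/n for k = 0..n. The prior must
# be chosen from this FINITE set, not from an uncountable continuum.
def plausibility_spectrum(n):
    return [Fraction(k, n) for k in range(n + 1)]

print(plausibility_spectrum(4))   # [0, 1/4, 1/2, 3/4, 1]
# The gap between neighbouring values is 1/n, which vanishes only in the
# n -> infinity limit, where the spectrum fills out the continuum [0, 1].
```

The point of the sketch is only the counting argument: the set of candidate priors grows with n but stays finite, so the "uncountable prior" problem cannot arise before the limit is taken.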

 

I think that the abstraction of limits of infinite sequences corresponds, at best, to asymptotic phenomena in nature. Finite systems observed in finite time do not easily match it - at least not in the way I think of it.

 

The potential here is plentiful. One of the most absurd things in modern theoretical physics is all this toying around with infinities and the invention of principally arbitrary procedures to remove them. The idea is that the motivation for the arbitrary tricks is that the result is finite. That is baloney to me, and it is exactly what I hope to avoid with these questionings.

 

Sorry to be so unclear, but this is the closest thing to a simple answer. I just wanted to spin further on the questions you started in the thread.

 

/Fredrik

 

My purpose with this thread was to gauge experience with the philosophy of science. I'm glad that several here seem knowledgeable and/or interested. I don't think that's the case in the general scientific (or engineering) community. Maybe I'll pose other questions in the future.

 

Yes, this is also why I participated in the thread. We cannot solve, or perform a full analysis of, the philosophy of science in one thread. I also don't think anyone here would be so foolish as to claim they have all the answers. I think that is the point: to acknowledge that it's sound to raise these questions. It is by no means, IMHO at least, in contradiction or conflict with scientific progress; on the contrary, it is necessary, as several others in the thread have also said.

 

/Fredrik


Very interesting. I understood this last post - at least enough to grasp a few ideas if not the details. And, as you predicted, it raised some questions for me. But maybe I'll save those for another day.

 

I've been working on some stuff of my own, though it is a little more "down and dirty" than yours - and that seems to be my problem.

 

My job is related to rotating machinery, and I have an interest in the nonlinear dynamic behavior of those machines. Within the heavy equipment industry where I operate, the typical procedure is to linearize everything.
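As an aside, a minimal sketch of why linearization can mislead (a textbook pendulum, not resha's actual machinery): the equation theta'' = -(g/L) sin(theta) is commonly linearized to theta'' = -(g/L) theta, which is only valid for small angles.

```python
import math

# Compare the nonlinear pendulum with its small-angle linearization,
# integrated with a simple explicit Euler scheme.
def step(theta, omega, dt, linear, g_over_l=9.81):
    accel = -g_over_l * (theta if linear else math.sin(theta))
    return theta + omega * dt, omega + accel * dt

def simulate(theta0, linear, dt=1e-4, t_end=2.0):
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        theta, omega = step(theta, omega, dt, linear)
    return theta

# At a small initial angle the two models agree closely ...
small = abs(simulate(0.05, False) - simulate(0.05, True))
# ... but at a large angle the linear model diverges badly.
large = abs(simulate(2.5, False) - simulate(2.5, True))
print(small, large)
```

The linearized model is far easier to analyze, which is exactly why the industry reaches for it; the sketch only shows the price paid outside the small-signal regime.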

 

After beating my head against a wall for many years, trying to convince engineers to use nonlinear techniques, I decided to try a different approach. I realized that engineering is not only tied to Newtonian mechanics, but it is also tied to Newtonian calculus. By switching to a different calculus, I can produce some very interesting results that look much like the linear techniques that engineers love to use.

 

If you think hard enough, you realize that the switch I propose changes common definitions such as "stiffness", "damping", and possibly even "time". People don't seem to like that. I've been trying to convince them that it's really just some fancy curve fitting, but have not gotten far.

 

In trying to present this to professional journals, so far I have lacked the mathematical background to rigorously develop the idea, so I'm stuck.

 

I appealed for help to some acquaintances with world-class skills and reputations, but they have declined to participate. The recommendation is that I do this as a dissertation for a doctoral program. I can't afford to do that right now, so I make what progress I can and bide my time.


Here is where I have to depart from what fredrik and resha have written -- somewhat.

As everyone knows from math, if you mix expressions and calculations - integrations, series, and so on - and then take them to infinity, many complications take place.

To the contrary. Have you ever tried to compute the inverse of an nth difference equation? Calculus is, IMHO, much more powerful and much simpler than are finitary methods.

 

One of the most absurd things in modern theoretical physics is all this toying around with infinities and the invention of principally arbitrary procedures to remove them. The idea is that the motivation for the arbitrary tricks is that the result is finite. That is baloney to me, and it is exactly what I hope to avoid with these questionings.

A lot of mathematicians still cringe at the regularization procedures developed by physicists. <begin{rant}> These ...

[math]
\begin{aligned}
\sum_{k=1}^{\infty} 1 &= -\,\frac{1}{2} \\
\sum_{k=1}^{\infty} k &= -\,\frac{1}{12}
\end{aligned}
[/math]

... are stinking piles of something. Just because a series has an analytic continuation beyond its interval of convergence does not mean the series converges beyond its interval of convergence. <end{rant}>
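For the record (my gloss, not part of the rant): those finite values are read off from the analytic continuation of the Riemann zeta function, not from any convergence of the series,

[math]
\zeta(s) = \sum_{k=1}^{\infty} k^{-s}, \qquad \operatorname{Re}(s) > 1
[/math]

where the series converges only for Re(s) > 1, and the values ζ(0) = -1/2 and ζ(-1) = -1/12 come from the continued function. Which is exactly the objection: the continuation is not the sum.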

 

My job is related to rotating machinery, and I have an interest in the nonlinear dynamic behavior of those machines. Within the heavy equipment industry where I operate, the typical procedure is to linearize everything.

You have to grant that linear techniques are much more amenable to analysis than are nonlinear techniques. There are all those -ilities (controllability, stability, robustness, safety, cost) that engineers have to worry about that can be very hard to assess with non-linear techniques. There are also all those not-invented-here engineers you have to deal with. The latter problem, well, those old farts will retire someday (or maybe not; their 401Ks have shrunk by quite a bit in the last few months).

 

The former problem is partly due to the purveyors of the non-linear techniques themselves. Some people stay in academia so they can play in sandboxes. I can understand this; I love debugging the blank piece of paper. Developing something new is much more fun and satisfying than is trying to apply those stupid -ility concepts to someone else's technique (or even to one's own technique). That said, there has been a lot of work done with non-linear controls regarding the -ilities as of late.

 

After beating my head against a wall for many years, trying to convince engineers to use nonlinear techniques, I decided to try a different approach. I realized that engineering is not only tied to Newtonian mechanics, but it is also tied to Newtonian calculus. By switching to a different calculus, I can produce some very interesting results that look much like the linear techniques that engineers love to use.

The problem with throwing out calculus is that you throw out a lot of other things. Even when working in the realm of digital control, which inherently has some aspects of discrete maths (BTW, have you dinosaurs in the rotating machinery industry moved to digital control yet?), calculus can still play a major role.

 

If you really want to wow your coworkers, look into the recent developments in symplectic control.


There are all those -ilities (controllability, stability, robustness, safety, cost) that engineers have to worry about

 

There ain't no free lunch. The question is whether the advantages outweigh the problems.

 

The problem with throwing out calculus is that you throw out a lot of other things.

 

I'm not throwing out calculus. I'm replacing one calculus with another.

 

have you dinosaurs in the rotating machinery industry moved to digital control yet?

 

It depends. Some machines advance, some don't. Some of the old stuff is very rugged, which relates to one of my earlier comments about watching machines working belly deep in mud.

 

Some paradigms are hard to break. Using more advanced technology (for us) often means more sensitive equipment. Then we have to protect that equipment, and that means $$$. If you want to pay more for all the products my machines dig, cut, push, and lift, then I can do that.

 

There are many changes I wish I could make, but "it's cool" doesn't justify much in the business world.


To the contrary. Have you ever tried to compute the inverse of an nth difference equation? Calculus is, IMHO, much more powerful and much simpler than are finitary methods.

 

Just a note: I am not denying the power of calculus. I am, however, questioning its universality, uniqueness, and fitness.

 

I was very sloppy, but with "take them to infinity, many complications take place" I meant to say that you cannot arbitrarily take limits, because the order in which you take them makes a difference. For example: summing integrals vs. integrating the sum. This makes no difference in finite scenarios, but if you're dealing with infinite series the order matters; i.e., the limiting procedures don't commute.
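A minimal illustration (my example, not tied to any physical model) of non-commuting limiting procedures, using a(m, n) = m/(m + n):

```python
# For a(m, n) = m / (m + n) the two iterated limits disagree:
#   lim_{n->inf} lim_{m->inf} a(m, n) = 1
#   lim_{m->inf} lim_{n->inf} a(m, n) = 0
def a(m, n):
    return m / (m + n)

big = 10 ** 8
print(a(big, 100))  # inner limit in m first: approaches 1
print(a(100, big))  # inner limit in n first: approaches 0
```

The same phenomenon is behind swapping a sum with an integral: harmless for finite sums, but for infinite series the interchange needs extra justification (e.g. dominated convergence).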

 

It's the same twisted logic used in renormalisation. It's not that it doesn't work; it's that it's ambiguous from the point of view of reasoning. And that is not just a technical objection; to me it's worse than that.

 

Physicists have gone wild with continuum models and mix them freely, and there is something here that just doesn't make sense. When you bring this philosophical analysis of the continuum into the context of information processing and reasoning upon incomplete information, the continuum comes out as containing a redundancy, but one where you have lost track of the symmetry that would remove it.

 

Maybe my own context confuses this. Some of us are talking about Newton's mechanics, some about QM and gravity. My context is the foundations of the laws of physics, in particular the problems appearing in quantum models and in trying to understand what QG is.

 

If we are sticking to Newton's mechanics, I think most of what I said seems unmotivated. In Newton's mechanics, calculus is fine. But unfortunately Newton's mechanics doesn't seem to explain our world.

 

/Fredrik


A new idea which simply made predictions we already knew were correct would not be of much use. The test comes when the new idea makes new predictions that we do NOT already know to be correct. Established ideas are different.

It can happen, though, that a new idea yields an old prediction.


I was very sloppy, but with "take them to infinity, many complications take place" I meant to say that you cannot arbitrarily take limits, because the order in which you take them makes a difference. For example: summing integrals vs. integrating the sum. This makes no difference in finite scenarios, but if you're dealing with infinite series the order matters; i.e., the limiting procedures don't commute.

 

It's the same twisted logic used in renormalisation. It's not that it doesn't work; it's that it's ambiguous from the point of view of reasoning. And that is not just a technical objection; to me it's worse than that.

 

It just occurred to me that another simple comment might further illustrate this issue (that is, my objection to Jaynes' use of real numbers as a fundamental starting point for reasoning upon incomplete information), and how it relates, at least conceptually even if not directly, to the infinity problems I referred to:

 

As I see it, there are several key points in reasoning upon incomplete information that constitute a rational logic of reasoning:

 

1. The logic of expectation, or various forms of "probabilistic inference": i.e., given this incomplete information, I am led to make this particular guess, or this distribution of possible guesses (this contains complications in itself, indeed).

 

2. The logic of correction! That is: how do you respond when new evidence is thrown right in your face, in total contradiction with your previous information? Somehow we need to RATE and LEVEL the new contradicting information against the old prior, so as to determine how the new information deforms the prior into a new opinion. Of course, this is what Bayesian updating supposedly does. But you can certainly ask: is Bayes' rule the only way to do this? My opinion is that it's not, and this probably relates in part to the critique of induction. But that doesn't mean it can't be made to make sense. Bayesian reasoning uses a simple way of merging information; in particular, it does not handle truly contradictory information. As long as there is a fixed microstructure providing the space of probability distributions, Bayesian reasoning is good, but sometimes the inconsistency cannot be resolved by just updating the microstate; it may require deformation or change of the microstructure itself.
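A small sketch of that limitation (my own toy, with hypothetical hypothesis names): Bayes' rule merges compatible evidence smoothly, but evidence assigned zero likelihood by every hypothesis in the fixed microstructure cannot be absorbed at all.

```python
# Discrete Bayes update over a fixed hypothesis space ("microstructure").
def bayes_update(prior, likelihood):
    # prior: {hypothesis: probability}; likelihood: {hypothesis: P(data|h)}
    unnorm = {h: p * likelihood[h] for h, p in prior.items()}
    z = sum(unnorm.values())
    if z == 0:
        raise ValueError("evidence contradicts every hypothesis: "
                         "the hypothesis space itself must change")
    return {h: u / z for h, u in unnorm.items()}

prior = {"fair": 0.5, "biased": 0.5}
# Compatible evidence: more likely under "biased", so belief shifts.
post = bayes_update(prior, {"fair": 0.5, "biased": 0.9})
print(post)
# Truly contradictory evidence: impossible under both -> no update exists.
try:
    bayes_update(prior, {"fair": 0.0, "biased": 0.0})
except ValueError as e:
    print(e)
```

The failure branch is the point: within a fixed microstructure, Bayes' rule has nothing to say, which is where deforming the microstructure itself would have to come in.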

 

To do this, I have been considering picturing the information content encoded not only in the microstates but also in the microstructure, which gives a hierarchy of measures on measures.

 

In this idea, which would replace or "extend" plain Bayesian reasoning, I need to be able to assign a measure of "inertia" between two opinions, to determine what happens when they "collide".

 

Here the continuum comes into play. If you try to "count evidence" and find that two "opinions" have infinite measures, how can you compare them? Therein lies the difficulty as I see it. Sure, one solution is to define measures of information, such as entropy measures like Shannon's. But those measures are similarly ambiguous: there are several "entropy measures" around, which leaves me with a choice.
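To make that ambiguity concrete (distributions chosen by me purely for illustration): two standard entropy measures can even rank the same pair of distributions in opposite orders.

```python
from math import log2

# Two standard "entropy measures": Shannon entropy and min-entropy.
def shannon(p):
    return -sum(x * log2(x) for x in p if x > 0)

def min_entropy(p):
    return -log2(max(p))

A = [0.7, 0.1, 0.1, 0.1]
B = [0.5, 0.5]
print(shannon(A), shannon(B))          # A is "more uncertain" by Shannon
print(min_entropy(A), min_entropy(B))  # but "less uncertain" by min-entropy
```

So "amount of information" depends on which measure you commit to, which is exactly the choice being objected to.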

 

These things have accumulated in my brain for a while, and I finally realized that there is no escape but to at least try to solve this.

 

/Fredrik

 

"The logic of correction" is the difficult part. In popperian reason this is the step where one falsified hypothesis is replace by a new hypothesis. This step is completely ignored by popper. So I think he tactically ignored the most difficult part.

 

The problem is not whether to dismiss a theory when it is falsified. That is like committing suicide each time you are wrong: of course you are then wrong only once and never again, but it's not constructive.

 

The problem is how to use the evidence that suggests falsification to provide an expectation for a new hypothesis, in a spirit of "minimum speculation".

 

/Fredrik


Bayesian reasoning uses a simple way of merging information; in particular, it does not handle truly contradictory information.

I agree with you there. If you feed a single twenty+ sigma outlier into a Bayesian update, you will get a state estimate that is completely out of whack, because the update algorithm implicitly assumes there is no such thing as a twenty+ sigma outlier. To be a bit too anthropomorphic, the Bayesian update bends over backwards to accommodate an apparently nonsensical value. The simple fix is a rule of thumb that rejects such outliers: you never use them to update the estimate.
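That accommodation is visible in a toy scalar measurement update (my sketch, not any particular production filter):

```python
# Scalar Kalman-style measurement update. The Gaussian model has no
# notion of a "nonsense reading", so a huge outlier drags the estimate.
def update(mean, var, z, meas_var):
    k = var / (var + meas_var)        # gain: how much to trust reading z
    return mean + k * (z - mean), (1 - k) * var

mean, var = 0.0, 1.0                      # current estimate: 0 +/- 1
mean, var = update(mean, var, 25.0, 1.0)  # a wildly outlying reading
print(mean)  # 12.5: the estimate jumps halfway toward the outlier
```

With equal prior and measurement variances the gain is 0.5, so a single absurd reading moves the estimate halfway to it; nothing in the update questions the reading itself.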

 

This is fine sometimes. Sensors (or people) can and do give completely false readings. A high-order zero can be misread as a one in transmission, in which case throwing out the twenty+ sigma outlier is exactly the right thing to do. Or the sensor just had a glitch, which it will correct (possibly with a twenty+ sigma outlier in the other direction); once again, throwing out the outliers is exactly the right thing to do. Or the sensor failed, in which case throwing out every subsequent reading from that sensor is exactly the right thing to do (hopefully there are some redundant sensors).

 

Or the system just underwent an un-modeled state change and the twenty+ sigma outlier is in fact very close to representing the true state, in which case throwing out the twenty+ sigma outlier is exactly the wrong thing to do.

