How are scientific theories produced?


Effie


explain

I am not about to teach you Logic 101. If you don't understand its structure or purpose you should not be making sweeping statements about it.

How so?

You identify an attribute of A and then point out that B does not have that attribute, without showing any necessity for B to have that attribute.

 

Ergo, a false comparison.

 

In any case, your statement was utterly meaningless since most forms of logic are formalisations of reason.


Originally Posted by north

explain

 

 

I am not about to teach you Logic 101. If you don't understand its structure or purpose you should not be making sweeping statements about it.

 

No need to teach; I took Logic 101 years ago.

 

Just explain where you're coming from.

 

I am not about to teach you Logic 101. If you don't understand its structure or purpose you should not be making sweeping statements about it.

 

You identify an attribute of A and then point out that B does not have that attribute, without showing any necessity for B to have that attribute.

 

Ergo, a false comparison.

 

In any case, your statement was utterly meaningless since most forms of logic are formalisations of reason.

 

Well, what does A represent and what does B represent to you?

This is important.


I don't think it can be explained in simpler terms.

 

Regardless, the fault does not lie with me but with you, so it is not my explanation to give. You are the one who needs to explain why your fallacious statement has any bearing on this discussion.

 

If you can't then I suggest that you don't try, because you can't afford any more infractions. Refer to the site rules if you are in any doubt as to why infractions are given out.


Ah, I see.

And good night.


No. Induction underpins all of science. Scientific theories cannot be proven true. The mass of evidence in support of some theory is not proof of correctness in a logical sense; to think that it is proof is to commit the logical fallacy of affirming the consequent: "If P then Q; Q; therefore P". What experimental evidence can do is annihilate a scientific theory: "If P then Q; ¬Q; therefore ¬P" is the logically valid form of denying the consequent, or modus tollens. One lousy piece of contradictory evidence can throw a hypothesis (or even a long-standing scientific theory) into the dustbin of falsified conjectures. Observing thousands of white swans, but never seeing a black swan, does not logically prove the hypothesis "all swans are white".
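To see the difference concretely, here is a minimal Python sketch (my illustration, not part of the original post) that checks both argument forms by brute force over all truth assignments; the names `implies` and `valid` are invented for the example:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid iff the conclusion is true in every
    # truth assignment that makes all the premises true.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Affirming the consequent: if P then Q; Q; therefore P.
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False -> a fallacy
# Denying the consequent (modus tollens): if P then Q; not Q; therefore not P.
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True -> valid
```

Affirming the consequent fails on the assignment P = false, Q = true, which is exactly the situation of a false theory whose predictions happen to come out right.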

 

Popper, IMHO, is simply wrong in his objection to induction. He ignores the importance of deduction in the scientific process. From the outset, an ad-hoc hypothesis has significantly less merit than a hypothesis with a solid logical underpinning. Scientific theories are "proven" in a manner similar to convictions in law. A legal conviction requires a logical rationale in the form of motive and opportunity, a lack of exculpatory evidence, and sufficient confirming evidence. Similarly, a scientific theory requires a solid logical underpinning, no disconfirming evidence, and sufficient confirming evidence. Popper is right about the concept of falsification. Because of the problem of affirming the consequent, legal convictions can be overturned and scientific theories can be discarded in light of new exculpatory or disconfirming evidence.

 

Who's Popper, and what objection to induction?

 

Observing all white swans doesn't prove that all swans are white, but it does establish a rule that swans tend to be white. Further observations lead to other rules. Each rule supports some other rules and argues against some other rules. These rules form a vast interconnected web. The process of error correction then takes place: a weight/confidence value is placed on each rule depending on how many other rules support it. After this is done the values are interpreted (I won't go into this step, but it's important), then the process is repeated with new values that depend not only on how many other rules support a rule but also on the confidence value of each of those rules.

 

This is error correction. Can it ever establish that 'ALL swans are white'? I don't know. But it can probably establish that 'whiteness' is characteristic of 'swanness'.
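A minimal sketch of the kind of iterative re-weighting described above (the toy rule graph, the clamping, and the update formula are all my illustrative assumptions, not an algorithm spelled out in this thread):

```python
# Each rule's confidence is repeatedly updated from the confidences of the
# rules that support (+1) or argue against (-1) it.
supports = {
    "swans tend to be white": [("white plumage is common in waterfowl", +1)],
    "white plumage is common in waterfowl": [("swans tend to be white", +1)],
    "all birds are black": [("swans tend to be white", -1)],
}

confidence = {rule: 0.5 for rule in supports}  # start every rule undecided

for _ in range(20):  # iterate until the values settle
    new = {}
    for rule, links in supports.items():
        # Weighted vote of the linked rules, clamped and mapped into [0, 1].
        vote = sum(sign * confidence[other] for other, sign in links)
        new[rule] = 0.5 + 0.5 * max(-1.0, min(1.0, vote))
    confidence = new

print(confidence)
```

The mutually supporting rules drift toward confidence 1 while the contradicted rule decays toward 0, but nothing ever reaches exactly 100%, which is the question raised later in the thread.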


http://en.wikipedia.org/wiki/Karl_Popper#Problem_of_Induction

 

Problem of Induction

 

Among his contributions to philosophy is his attempt to answer the philosophical problem of induction. The problem, in basic terms, can be understood by example: given that the sun has risen every day for as long as anyone can remember, what is the rational proof that it will rise tomorrow? How can one rationally prove that past events will continue to repeat in the future, just because they have repeated in the past? Popper's reply is characteristic, and ties in with his criterion of falsifiability. He states that while there is no way to prove that the sun will rise, we can formulate a theory that every day the sun will rise—if it does not rise on some particular day, our theory will be disproved, but at present it is confirmed. Since it is a very well-tested theory, we have every right to believe that it accurately represents reality, so far as we know.

 

 

 

He doesn't take error correction into account.


Observing all white swans doesn't prove that all swans are white, but it does establish a rule that swans tend to be white.

No, it doesn't. All it means is that you have observed white swans. You, yourself, appeared quite aware of this type of interpretation when you posted this:

 

[Granpa's comment about sheep, as indicated by a common math joke he shared in the spacetime foam/fabric thread]


Popper, IMHO, is simply wrong in his objection to induction. He ignores the importance of deduction in the scientific process. From the outset, an ad-hoc hypothesis has significantly less merit than a hypothesis with a solid logical underpinning. Scientific theories are "proven" in a manner similar to convictions in law.

 

Is this a typo for induction? Did you mean to say the importance of induction in science? If not, I'm slightly confused by that word in that place.

 

/Fredrik


Uh, hello? It was a joke. I was being facetious.

 

That's not the point. The point is, it demonstrates your existing knowledge that your own assertion above is mistaken. Seeing white swans means just that. That you've seen white swans. You cannot, based on that, "establish a rule that swans tend to be white," which is why I corrected you and why I reminded you that you already seemed to know this.


About the swans and induction: I think the interesting part is when inductive reasoning is put in context. The fact that seeing many white swans never logically allows you to deduce anything about the next observation is clear, and not particularly interesting IMO.

 

Usually inductive reasoning determines your expectations and thus, if we assume rational action, your behaviour. So different choices of reasoning may have different degrees of utility, and a selection in favour of efficient reasoning is to be expected.

 

So what is the expected reasoning?

 

The question, IMHO, is this: suppose you are forced to bet the only money you have. You have seen only white swans, and the bet has two choices: that you will see a white swan next, or that you won't. If you are wrong you die; if you are right you live.

 

You can say that either choice is equally possible, yet I think most would tend to bet on white. And IMO the point isn't whether it's "logically valid" (it isn't, but that's not the point); the conjecture that this is how nature works (action based upon incomplete information) may give insight into predictive modelling. This is so regardless of the "validity of the induction".
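As a toy version of this bet (the numbers and the deliberately naive estimator are my assumptions, not fredrik's): Laplace's rule of succession gives an expectation after n all-white observations, and the survival-maximizing choice follows from it.

```python
# After seeing n white swans and nothing else, Laplace's rule of succession
# estimates P(next swan is white) = (n + 1) / (n + 2).
n_white_seen = 1000
p_white = (n_white_seen + 1) / (n_white_seen + 2)

# Survival probability of each available bet.
survival = {"bet on white": p_white, "bet against white": 1 - p_white}
best = max(survival, key=survival.get)
print(best, survival[best])  # bet on white 0.998...
```

The choice is not logically forced, which is exactly the point; it is just the action that maximizes the expected outcome under the induced expectation.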

 

The idea, also from physics, is that a system responds to local information. Is this valid? I think that's not the question at all. The question is: what more plausible expectation do we have than to expect intrinsically rational behaviour? The motivation is not that it's unique or valid; it's that, given that we are to reason upon admittedly incomplete information, it seems to me the most plausible thing to expect. Why? Because a system acting differently would fight its environment, and thus probably not persist.

 

The one reason I think about this is that I believe the scientific utility here will be that, when we understand this better, we will better understand the nature of physical law.

 

You can similarly object that physical law is ambiguous! How can you, from any amount of observation, deduce the correct physical law? In line with the swan stuff, you can't. But again, that's not the point. The point is: what then to do? And more importantly, what does nature do? How come, then, we have this apparent stability despite a total lack of a priori hard logical references?

 

As DH said before, scientific progress is "creative" and not described by simple deductive logic. I agree on that. Then my quest is: what abstraction or formalism best describes it? Again, I think here the point is not to find it, it's to look for it.

 

My highly personal motivation for this isn't philosophy in itself; it's a conjecture of mine that there exists a deep analogy between

 

1) a rational logic of reasoning upon incomplete information

 

2) the action of a physical system in reaction to its information about its environment.

 

By analogy, and by analysis of the logic of reasoning, MAYBE we can further deepen the understanding and structure of physics. If my analysis fails to yield an improved understanding that helps solve some of the open problems in fundamental physics, I will consider my conjecture falsified. In physics there are already many things that have remote similarities to reasoning: inertia and non-commutativity. These things can also appear in reasoning. In physics we make experiments and get observational data. In reasoning, you formulate a question according to your expectations, and fire it.

 

/Fredrik

 

what if you observe every swan on the planet?

 

To respond to this slightly out of context, another problem with this is:

 

1) How do you possibly know when you have observed every swan on the planet?

I say you don't.

 

2) Also, there will be scenarios where your brain isn't large enough to store the raw data of your observational history. Then decisions need to be made: to compress some data, to discard some data. The very choice of compression algorithm and discarding algorithm will make a difference. This information-limiting effect alone will produce interesting behaviour in systems, as a result of their own incompleteness. Some actions can possibly be traced to this.

 

In human history sometimes mistakes are repeated, possibly because history is forgotten.

 

The utility of history is not curiosity; it's usually to guide us in the future. This is also what brain research suggests: the brain stores past data, but optimized to be of maximum utility for the expected future. Some have suggested that this might partly explain why memories of past events are often distorted by the brain in the storing/compression process. If our brain were optimized for actually remembering data as it was, we would probably have the capacity to do so very well. In some conditions, like savant syndrome, I think this may explain why the recall of details is so amazing. But then, they have other problems.

 

/Fredrik


I have no idea what you are talking about. What if you observe every swan on the planet?

 

While it would improve the confidence in your probability, you would still only legitimately be able to say that every swan you'd personally observed was white. Any suggestion that all swans are white would certainly rest on solid evidence, but such an absolute declaration would still (at its heart) be unproven. The point is, you cannot know for sure. Further, as fredrik rightly mentioned, how would you know that you'd observed all swans on the planet? You wouldn't, and that's another hole in your position.


I didn't say that you would know that all swans were white. Didn't you read what I wrote?

 

I said that you would know that there is a tendency for swans to be white. To be totally accurate, what you know is your expectation (as someone else correctly pointed out). I use the regular nonmathematical definition of the word. Can your expectation, by means of error correction, ever become 100%? That is the question. Can you transmit a message over a noisy channel without any loss whatsoever?


Can your expectation, by means of error correction, ever become 100%? That is the question. Can you transmit a message over a noisy channel without any loss whatsoever?

 

Not really.

 

The Shannon theorem of information theory

(http://en.wikipedia.org/wiki/Hartley's_law#Hartley.27s_law and http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem )

gives the maximum information transfer rate possible for a fixed communication channel with a given bandwidth and noise. It says that to maintain a low probability of error (a high confidence level), you have to reduce the effective information transfer rate. But to reach perfection, 100% error-free, your capacity drops to zero for any channel that isn't noise-free to start with.

 

So in a finite time frame, you're trading away the amount of information communicated for an increase in confidence. This makes it interesting, though, since it relates data capacity, information, and time.
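To make the trade-off concrete, here is a small sketch for the standard binary symmetric channel (my example, not from the post): its capacity is C = 1 − H(p), where H is the binary entropy of the flip probability p.

```python
import math

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric channel
    that flips each transmitted bit with probability p."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

for p in (0.0, 0.01, 0.1, 0.5):
    print(p, bsc_capacity(p))
```

At p = 0.5 the capacity is zero: no coding scheme pushes any information through, no matter how low a rate you are willing to accept.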

 

But I'd say there are plenty more complications than classical information theory suggests.

 

/Fredrik


With the marvel of modern statistics you can get away with saying swans are white, since it allows for a fudge factor. You just need to word it correctly, and the theory would be considered valid even if it is irrational. There is a high probability that all swans will be white. When you see a gray swan and think the theory is not rationally correct, we are told they are still white, but within a margin of error. The corollary of modern theory is: don't trust your rational common sense, not because the theory is not rational, but because if you use rational common sense, you will not see that it is off by the fudge factor we have given ourselves. If X increases the risk of Y, we are all white swans (all at risk). Don't try to reason.

 

The third corollary is a push toward a random universe. We have built the requirement of being off into science theories. If theories are too accurate or rational, they will appear to be wrong, since they don't have enough error to be right. We are all at risk, because we are all white swans, within a margin of error. The god of chaos can get you at any time, since in a random universe we can't take any chances. The gray swan can turn white just like that, so don't trust common sense. He is really a 90% white swan even if he looks gray.


Is this [deduction] a typo for induction? Did you mean to say the importance of induction in science? If not, I'm slightly confused by that word in that place.

I meant deduction, the importance of which Popper ignored. Popper instead focused on empirical theories ("all swans are white") -- naive induction. In a sense, his example "all swans are white" parodies the scientific method. It is a straw man.

 

Consider special relativity, for example. Einstein started from the basis of two simple axioms -- two universal quantifiers. The Lorentz contraction is one of several consequences or deductions from these axioms. Lorentz on the other hand postulated the Lorentz contraction in what is now called Lorentz ether theory. How to distinguish the two (special relativity and Lorentz ether theory)? Observationally, you cannot. The distinction is in the nature of the axioms. Popper's analyses would have been closer to the mark if scientific theories were purely empirical. Our best scientific theories are far from empirical. Scientists have discarded Lorentz ether theory because it is too empirical.
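For reference, the contraction DH mentions is, with L₀ the proper length and v the relative velocity:

L = L₀ √(1 − v²/c²)

In special relativity this formula is deduced from the two postulates; in Lorentz ether theory the contraction is itself the postulate. The formula is identical in both, which is why observation alone cannot distinguish them.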

 

 

(statements on error correction)

Scientific theories (at least the most powerful ones) involve universal quantifiers. "The speed of light is the same for all observers." There is no gray there. The power of the theory springs in part from the universality of the quantifier.

 

Using a probabilistic approach is going to limit you to empirical theories, theories with little consequence. "All swans are white" has no consequences of note. It is not a good example of a scientific theory. Another big problem with a probabilistic approach is Hempel's paradox. "All swans are white" is logically equivalent to "all non-white things are non-swans". In a Bayesian sense, observing a green apple or a brown cow is confirming evidence of the proposition that all swans are white.
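Hempel's point can be made concrete with a toy Bayesian sketch (all counts are invented for illustration). Take a world of ten objects and two hypotheses: H, "all 3 swans are white" (so all 4 non-white objects are non-swans), and H′, "one swan is non-white" (5 non-white objects, 4 of them non-swans). Sample a random non-white object and observe that it is not a swan:

```python
# Toy finite world for Hempel's paradox; the counts are made up.
prior = {"H": 0.5, "H'": 0.5}

# Likelihood of drawing a non-swan when sampling among the non-white objects.
likelihood = {"H": 4 / 4, "H'": 4 / 5}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # {'H': 0.555..., "H'": 0.444...}
```

The brown cow really does confirm "all swans are white" here; in any realistically large world the shift is just uselessly small, which is the paradox.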

 

This discussion of error correction is a bit off-topic. For one thing, the problem of induction was known long before Popper and long before Shannon. Even more importantly, how does looking at the scientific method from the perspective of communication channels add to the discussion?

 

About the swans and induction: I think the interesting part is when inductive reasoning is put in context. The fact that seeing many white swans never logically allows you to deduce anything about the next observation is clear, and not particularly interesting IMO.

That is part of what I am saying. Popper focused on a fictitious empirical law. He created a straw man.


That is part of what I am saying. Popper focused on a fictitious empirical law. He created a straw man.

 

Thanks. OK, then I understand your posts better on that point.

 

But I still wonder if we are talking about different things. Under induction I also include various forms of probabilistic induction, complemented by a subjective interpretation of probability.

 

To state that all swans are white because it's all we have seen is really the simplest of the simple. I admit it has been a little while since I read Popper's book, but I think even Popper was a little more sophisticated than that.

 

However, I am not sure I would call what you describe deduction. Sure, you can deduce things from axioms, but the process of selecting axioms is hardly deductive.

 

What you describe as the difference between an "ad-hoc hypothesis" and "a hypothesis with a solid logical underpinning" is exactly what I would call inductive reasoning, i.e. the process by which you come up with hypotheses.

 

I suspect we mean the same thing, but I don't understand why you call it deduction. As I see it, Popper totally misses the importance of the reasoning behind hypothesis generation; i.e., rather than submitting random hypotheses for falsification trials, science uses anything but random hypotheses, right? I think this is exactly your point. If so, we fully agree. My confusion is why you call this deduction; I call it induction. :)

 

But in a certain sense probabilistic induction is a form of deduction too, just as an indeterministic theory like QM really is deterministic. Maybe this is the source of confusion.

 

If so, I would say that we fully agree except on one point: I do not accept what you suggest as deduction. :) But neither do I think that our future understanding of physics will keep fully global unitary evolution. But I think that's another discussion.

 

/Fredrik

 

When I think about it, the normal term is "probabilistic deduction" rather than probabilistic induction; sorry for the added confusion. I.e., induction as probabilistic deduction, where each possible deduction is assigned a probability.

 

Anyway, I suspect we mean the same thing, even though the terminology got mixed up.

 

The notion of probabilistic deduction itself doesn't solve anything, though, since more problems appear when you try to define the physical basis of these probabilities. This again, IMHO, suggests that the probabilistic deduction really is an induction.

 

/Fredrik


Error correction isn't off topic at all. It's exactly what the OP asked about.

 

The OP asked how we can say what we know about a system when we start off knowing absolutely nothing about it. Are we doomed to forever just saying that we don't know for certain? That is exactly what error correction is about.

 

If you knew more about how multidimensional parity works, maybe you would understand what I am saying. Did you read post #55?
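For readers who don't know the mechanism granpa is referring to, here is a minimal two-dimensional parity sketch (my illustration of the general technique, not the contents of post #55):

```python
# 2-D parity: arrange data bits in a grid and store a parity bit per row and
# per column. A single flipped bit is located by the intersection of the
# failing row parity and the failing column parity, and can be corrected.
bits = [
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]
row_parity = [sum(r) % 2 for r in bits]
col_parity = [sum(c) % 2 for c in zip(*bits)]

bits[1][2] ^= 1  # simulate a single-bit error in transmission

bad_row = next(i for i, r in enumerate(bits) if sum(r) % 2 != row_parity[i])
bad_col = next(j for j, c in enumerate(zip(*bits)) if sum(c) % 2 != col_parity[j])
bits[bad_row][bad_col] ^= 1  # error located and corrected
print(bad_row, bad_col)      # 1 2
```

The scheme corrects any single error but is fooled by certain multi-bit patterns, so certainty is again a matter of degree, not an absolute.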

 

Not really.

 

The Shannon theorem of information theory

(http://en.wikipedia.org/wiki/Hartley's_law#Hartley.27s_law and http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem )

gives the maximum information transfer rate possible for a fixed communication channel with a given bandwidth and noise. It says that to maintain a low probability of error (a high confidence level), you have to reduce the effective information transfer rate. But to reach perfection, 100% error-free, your capacity drops to zero for any channel that isn't noise-free to start with.

 

Not true. That's the opposite of what it says:

 

 

The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small.


Here is a quick reply. I will probably be away more during Christmas, lots of stuff to do, but here is a quick one before the holidays; maybe I'll check in later before New Year. So I won't start any lengthy argument; this is just a short explanation of what I meant in the last post. You're right that it came out unclear.

 

Error correction isn't off topic at all. It's exactly what the OP asked about.

 

I definitely agree with your point that the problem of error correction is relevant to the discussion of induction. I just didn't choose to comment on it; there was enough other stuff to comment on in this interesting discussion. However, error correction is IMHO an induction, not a deduction. Only under certain ambiguous idealisations or truncations can this induction be turned into "probabilistic deduction", i.e. the induction is turned into DEDUCTION of probabilities. The idealisation IMO lies in the probabilistic formalism, and this is why I think it really is a form of induction: the notion of probability refers to idealised limits, limits that are not realised in actual situations. Not logically valid, but again that's not the point; it's still apparently efficient.

 

The noisy channel problem is indeed an application of inductive reasoning: by means of Bayes' theorem, the probability of the input given the output is inferred from the transition probabilities of the channel and the prior distribution of the input.
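A minimal sketch of that inference for a binary symmetric channel (the flip probability and the prior are illustrative assumptions):

```python
p = 0.1                    # channel flip (transition) probability
prior = {0: 0.5, 1: 0.5}   # prior over the transmitted bit

received = 1
# Likelihood of the received symbol for each possible input bit.
likelihood = {x: (1 - p) if x == received else p for x in (0, 1)}

evidence = sum(prior[x] * likelihood[x] for x in (0, 1))
posterior = {x: prior[x] * likelihood[x] / evidence for x in (0, 1)}
print(posterior)  # {0: 0.1, 1: 0.9}
```

The input is never recovered with certainty; it is inferred with a confidence, which is why this is induction rather than deduction.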

 

So your point that observation of only white swans is, in some sense, a rational basis for EXPECTING only white swans is IMO sound, although the analysis could further decompose some things. Still, it's clear that this "induction" is not of a deductive nature; but then again, the validity of induction does IMHO rely on deduction. IMO the key is evolution. Here I'm with your reasoning.

 

Not true. That's the opposite of what it says:

 

The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small.

 

Yes, I was probably unclear, sorry. I just meant to give my view, not elaborate in detail, but I see that what I wrote might sound strange or wrong.

 

What I meant was that for a code that makes the actual probability of error arbitrarily small (i.e. go to zero), the code length goes to infinity.

Shannon's theorem relates the maximum capacity to the given signal-to-noise ratio; the noisy-channel coding theorem says:

For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε.

And N → ∞ as ε → 0.
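The blow-up of N can be seen even with the crudest code. This sketch (my illustration; a repetition code has rate 1/N, so unlike the good codes the theorem promises, it also throws away capacity) repeats each bit n times over a binary symmetric channel and decodes by majority vote:

```python
from math import comb

def majority_error(p, n):
    # Probability that more than half of the n transmitted copies are flipped,
    # i.e. that the majority-vote decoder outputs the wrong bit (n odd).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
for n in (1, 5, 25, 125):
    print(n, majority_error(p, n))
```

The block error probability falls toward zero only as n grows without bound: zero error is a limit, never a state reached in finite time.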

 

How long do you need to transmit an infinite code?

How long do you need to run an infinite experiment?

Is this condition, while mathematically unobjectionable, a good description of reality?

 

Here we enter the issues of probability itself, and the sense of using continuum probabilities. This is also exactly my objection, and why what some call deduction is really just a very, very confident induction. But it's never perfect. It can't be. But again, that's not the point. It doesn't bother me, but apparently it did bother Popper. Popper tried to avoid induction, but failed.

 

When you make a finite experiment and calculate the confidence level for some confidence interval, there is formally still an uncertainty even in the probability of the confidence of a given confidence interval.

In a discussion about the scientific method, the fundamentals and notion of physical law, and how it is induced from experimental experience, I don't think it's acceptable to overlook these points about statistical reasoning.

 

This doesn't make error correction useless, of course. I just meant to suggest that you can't argue that the error correction is deductive in nature, i.e. 100% certain.

 

Popper seems to hold the idea that induction is unacceptable, and he was looking for a deductive escape. I agree that induction isn't foolproof, but I disagree that this invalidates its utility.

 

Maybe we can agree that the progress of science is not described by deductive logic? But I think some of us still expect that there is SOME logic to it, which I think is the quest. Inductive logic is somewhat ambiguous, as Popper noted, but that doesn't necessarily invalidate it, because the quest is how best to make progress, not how to make deductive progress, when it seems that isn't possible.

 

Merry Christmas everyone, unless we hear from each other before then!! :)

 

/Fredrik


If a swan can only be white if it is 100% white, then I would suggest that no swans are white.

 

I see your point. :) I don't want to start any long discussions today since I won't be able to follow up, but the question is, IMHO: what exactly does it mean to say "a swan is white"? Can we make any certain observations at all? I am suggesting that there is a point where the reasoner cannot resolve this formal uncertainty, and at that point you simply end up with an opinion whose questioning doesn't pay off, or isn't even possible. At that point you say: my expectation is that all swans are white, and I act upon that expectation. Then, if I am wrong, I simply revise my opinion of the underlying microstructure of possibilities.

 

I'm not picking on induction, just saying that I think it's not certain. But at the same time I hold the opinion that the uncertainty itself guides us in the evolutionary process, like science has evolved. You don't need _universal quantifiers_. But I think that often, universal quantifiers are indistinguishable from the best possible guess.

 

I think data compression, error correction etc. are indeed part of this. But from the point of view of computer science, these things often take place in fixed contexts. A similar view can also be taken of physics. Frank Wilczek made the analogy to data compression in his latest popular book (http://www.amazon.com/Lightness-Being-Ether-Unification-Forces/dp/0465003214), which I read some months ago.

 

He makes the analogy in the context of reflecting upon the notion of symmetry, which is a strong guide in the development of the standard model of particle physics. Just as there is no universal compression algorithm that is equally fit for all cases, one might ask whether symmetry is contextual and thus sort of relative. What does this mean for the quest for the deepest symmetry of nature?

 

This, IMO, conceptually relates to these limiting procedures.

 

/Fredrik


Well, then you can get into the whole question of categories and whether inclusion in a category is all or nothing (it isn't), but I think that might be getting off topic.

 

I think a better question than 'are all swans white' is 'will the sun rise tomorrow'.


I think a better question than 'are all swans white' is 'will the sun rise tomorrow'.

 

I agree, that's the better question, because it puts the finger on the real problem: how to ACT upon incomplete information. This is exactly where this makes a difference.

 

The most accurate answer is, I think: I don't know. Now, once we have settled that, we still do not escape the choice.

 

Either you can throw in the towel and act randomly, or you can, given that no definite opinion can be formed, try to somehow count your evidence supporting the conclusion that the sun rising tomorrow is the "least speculative" possibility, given the fact that you do not know for sure. Then your actions are chosen so as to maximize your utility, based upon what you think will happen.

 

This can yield a somewhat rational behaviour, and the chances are that those systems that act rationally will be better off in the long run.

 

This is still fuzzy, but my take on physics is inspired by the idea that this is how nature is constructed and has evolved. Yes, it is just a guess, but those who will not play will not win. And the point is that we are living in an involuntary game of life: to not place your bets and keep your resources is also a bet, and you are easily stripped by neighbouring systems. Play and you have a chance to survive; not playing is not a safe strategy.

 

/Fredrik

 

Either you can throw in the towel and act randomly

 

I think that even given this, you unavoidably EVOLVE and develop a non-random action strategy. This is by selection from the environment.

 

With "random" here I simply mean relative to a given observer. There is no more universality in randomness than there is in symmetry, IMHO at least.

 

Maybe I will be required to revise my strategy, but I am confident in it, and it is the basis for my actions. What other choice does a man with a tiny brain have? :)

 

/Fredrik

 

It's hard to stop.

 

try to somehow count your evidence supporting the conclusion that the sun rising tomorrow is the "least speculative" possibility, given the fact that you do not know for sure. Then your actions are chosen so as to maximize your utility, based upon what you think will happen.

 

I think the point here is that, as far as you can distinguish possibilities, a rational action acts upon ALL of them, not necessarily one of them randomly. I think this is subtle, but it's ultimately one way of determining, by interaction, the action of a system.

 

Think quantum superposition, where it seems to be the case that the system somehow acts upon ALL possibilities, not just one of them randomly.

 

But this gets us too far for the thread. I am working on this as part of my personal projects, but it's still in progress. I think this analysis, when taken further (though it won't happen in this thread), might suggest a deeper understanding of quantum logic and the appearance of non-commutative opinions/information in the action context.

 

It's in the dynamical context of producing an action based upon this incompleteness that this gets really interesting, and where the evolution idea gets moving. It also unites, as I think I wrote earlier, the concept of entropy and the concept of action. The ambiguity of entropy measures is similar to the ambiguity of rational action, but evolution is the possible way out I see.

 

/Fredrik

