
Statistical Argument Against Evolution



So you have a bag with 100 swans. You pull out 99, and they are all white.

Two hypotheses fit the fact that 99 swans are known to be white, viz:

 

H1: 99 swans are white and 1 swan is non-white

H2: 100 swans are white.

 

The probability that the first 99 swans out of the bag are all white is P(1st swan is white) * P(2nd swan is white) * P(3rd swan is white) * ... * P(99th swan is white), each term conditional on the draws before it.

At each draw, the probability that a white swan is chosen = the number of white swans remaining / the total number of swans remaining.

If H1 is true, then P(first 99 swans are all white) = (99/100)*(98/99)*(97/98)*...*(1/2), the 1/2 being the 99th draw, where according to the hypothesis there would be two swans left, one white and one non-white. The product telescopes to 1/100 (which makes sense, considering that the probability the non-white swan is picked last is the same as the probability it is picked first, i.e. 1/100).

So there would be a 1/100 chance of getting the observed result if H1 is the case.

 

If H2 is the case, then there would be a 100/100 chance of getting the observed result (if all 100 swans are white, the chance that the first 99 drawn are all white is 1).
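For anyone who wants to check the telescoping product, here is a minimal sketch (Python; the function name and the use of exact fractions are my own choices) that computes P(first 99 draws are all white) under each hypothesis:

from fractions import Fraction

def p_first_99_white(white, total=100):
    # Probability that 99 draws without replacement are all white.
    p = Fraction(1)
    for i in range(99):
        p *= Fraction(white - i, total - i)  # white swans left / swans left
    return p

print(p_first_99_white(99))   # H1: 99 white, 1 non-white -> 1/100
print(p_first_99_white(100))  # H2: all 100 white -> 1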

 

Not entirely sure where to go from here, but I think that the following is logically/mathematically true...

Probability of obtaining the observed result if the given hypothesis is true:

 

H1: 1/100

H2: 100/100

 

Therefore, H2 is 100 times more likely to yield the observed result than H1, so, treating the two hypotheses as equally plausible to begin with, P(H1 is correct) = 1/101 and P(H2 is correct) = 100/101.
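That renormalisation step is just Bayes' theorem with equal prior weight on the two hypotheses. A minimal sketch (Python; the equal priors are an assumption, as noted above):

from fractions import Fraction

likelihood = {"H1": Fraction(1, 100), "H2": Fraction(1)}  # P(99 white draws | H)
prior = {"H1": Fraction(1, 2), "H2": Fraction(1, 2)}      # assumed equal priors
evidence = sum(prior[h] * likelihood[h] for h in likelihood)
posterior = {h: prior[h] * likelihood[h] / evidence for h in likelihood}

print(posterior["H1"])  # 1/101
print(posterior["H2"])  # 100/101, about 99.0099%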

 

Or, in other words, H2 has a 99.00990099...% (recurring) chance of being correct.

Accept H2 (at roughly 99% confidence).

So I'm not certain that the last swan would be white, but I would quite strongly expect it to be. I have also quantified my uncertainty with that confidence figure, which is essentially P(my assumption is correct).

Would I be surprised if it were black? Yes and no. Yes, because there is only a 1/101 chance that it would be non-white. No, because a 1/101 chance is not the same as zero chance. I'm sure that things with only a 1/101 chance of happening fail to happen just over 99% of the time, but occasionally they do happen.

 

Excellent statistical analysis. So you perform the experiment. 99 swans were white. There is now only one swan left in the bag.

 

My question: Would you be surprised if it was black?

Your answer: Yes and no.


My question: Would you be surprised if it was black?

Your answer: Yes and no.

 

Alright, badly worded.

Yes, I would be surprised, because I was 99.00990099...% sure that the next swan would be white.

But I wouldn't be absolutely flabbergasted, because a (roughly) 1% chance is still a chance. I've seen enough things that have only a 1% chance of happening fail to happen that it's only to be expected that a few events defy the odds and actually happen, despite the low probability that they would.

Which is the whole point of assigning percentages to certainty: it's not binary.

i.e. no, you cannot be 100% certain of the colour of the last swan, but you can be pretty damn sure :P


There is one swan in a bag, there are no other swans in existence. What is the probability it is white?

 

You've changed the conditions of my example. Your position, as I understand it, is that (un)certainty is never quantifiable, since it's binary. All I have to do is come up with one counterexample to refute that.

 

If I know that 99% of swans are white, and there is a randomly chosen one in the bag, the probability that it is white is 0.99
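A minimal sketch (Python; the finite population size is an arbitrary choice of mine) that checks this claim by simulation, drawing one swan at random from a population that is 99% white:

import random

population = ["white"] * 99 + ["non-white"] * 1  # 99% white, 1% non-white
trials = 100_000
white_draws = sum(random.choice(population) == "white" for _ in range(trials))
print(white_draws / trials)  # close to 0.99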


You've changed the conditions of my example. Your position, as I understand it, is that (un)certainty is never quantifiable, since it's binary. All I have to do is come up with one counterexample to refute that.

 

What you just said about finding a counterexample, is absolutely correct. I am making a statement, about uncertainty. If the statement is true, then no counterexample can be found. If a counterexample is found, that suffices to prove the statement false. Right. Then you say this:

 

If I know that 99% of swans are white, and there is a randomly chosen one in the bag, the probability that it is white is 0.99.

 

I didn't stipulate that the one in the bag was randomly chosen; I stipulated that it is the only swan in existence. In other words, I changed N from 100 to N = 1. In the original problem, you eventually reached a point where 99 swans were outside of the bag, and the observer had seen that each of these 99 swans was white. But the observer still had not seen the last swan. Whence I claim the observer is totally uncertain as to the color of the last swan. I claim that the observer cannot quantify his uncertainty in any non-trivial way. The observer is uncertain about the color of the swan in the bag. <--- that is all we can say. This is my position.

 

Suppose you do indeed know that exactly ninety-nine of the swans are white. Then of course there is more than one white swan. But then you also know with certainty that there are non-white swans, since you KNOW that exactly 99% of swans are white. In other words, if you know that 99% of swans are white, then you already know, or can infer, that 1% of swans are non-white.

 

I guess I didn't understand your example, but mine is very clear to me at least. And of course I am still of the position that human uncertainty cannot be quantified in any non-trivial way. In other words I don't see that you have found a counterexample to my example.


I guess I didn't understand your example, but mine is very clear to me at least. And of course I am still of the position that human uncertainty cannot be quantified in any non-trivial way. In other words I don't see that you have found a counterexample to my example.

 

I don't think it's that difficult. All you've done is to define a specific problem and then extrapolate that to include all cases, which is invalid. Your example is very specific and is a tiny fraction of all possible examples.

 

Do you ever carry an umbrella with you, even if it isn't raining at the moment you leave wherever the umbrella happens to be? Does anyone? Why would they do this at some times, but not others?


I don't think it's that difficult. All you've done is to define a specific problem and then extrapolate that to include all cases, which is invalid. Your example is very specific and is a tiny fraction of all possible examples.

 

Oh, I see. You are saying that I've chosen a specific case, and then generalized. That's not how I see it, but I understand you now. Actually, I seem to have inferred it from the personal experience of being uncertain, and not this specific problem.

 

Hmm. Are you saying that I have to provide an analytical proof that certainty cannot be quantified in any non-trivial way?

 

Are you suggesting that I actually have to analytically prove that it is impossible to be .7 certain? ...or rather .3 uncertain?


Oh, I see. You are saying that I've chosen a specific case, and then generalized. That's not how I see it, but I understand you now. Actually, I seem to have inferred it from the personal experience of being uncertain, and not this specific problem.

Hmm. Are you saying that I have to provide an analytical proof that certainty cannot be quantified in any non-trivial way?

Are you suggesting that I actually have to analytically prove that it is impossible to be .7 certain? ...or rather .3 uncertain?

 

Yes, if you want to demonstrate your thesis. But as I have already given a couple of counterexamples, I am of the opinion that you needn't bother.


Swansont~

What you are saying confuses me. If I step outside and I am uncertain as to whether it will rain or not, I still can't quantify that uncertainty. If I say to myself, "I am 25% certain it will rain soon," that is completely arbitrary. This is why Johnny says you can't analytically prove it is (im)possible to be .7 certain.


Swansont~

What you are saying confuses me. If I step outside and I am uncertain as to whether it will rain or not, I still can't quantify that uncertainty. If I say to myself, "I am 25% certain it will rain soon," that is completely arbitrary. This is why Johnny says you can't analytically prove it is (im)possible to be .7 certain.

 

Joshua, I think he wants me to try to prove what I am saying analytically somehow.

 

The first thing I would say is that on any given statement, either an individual is certain of the truth value of the statement, or not.

 

Then I would say that the discussion should be restricted to statements whose meaning is actually understood by the individual in question, because certainly if you don't know what the meaning of a statement is, you cannot judge whether it is true, or whether it is false.

 

So let individual R be uncertain as to the truth value of statement S, but stipulate that individual R understands the meaning of S.

 

The question is: can we quantify R's uncertainty, or not?

 

Well, let S be a statement which is verifiable by R. Therefore, there is some possibility that R will figure out the truth value of S sometime in the future.

 

The previous statement indicates that the uncertainty of R, as to the truth value of S is a temporal variable.

 

Now, there are only so many ways that we can quantify a variable.

 

We could use the real number system or the natural number system, or we could restrict quantification of human uncertainty to positive fractions. Certainly, human uncertainty cannot be modelled as a complex variable, so that helps a bit. Negative uncertainty makes no sense, so that's out.

 

We can certainly use {0, 1} to quantify human uncertainty, but I claim that this is all we can do, and I call this the trivial way. If R is uncertain as to the truth value of S, then R's uncertainty about |S| = 1; if R is certain as to the truth value of S, then R's uncertainty about |S| = 0.
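A minimal sketch (Python; the None-means-unknown encoding is my own choice, purely for illustration) of that trivial quantification:

from typing import Optional

def trivial_uncertainty(known_truth_value: Optional[bool]) -> int:
    # The trivial measure: 1 if R does not know the truth value of S, else 0.
    return 1 if known_truth_value is None else 0

print(trivial_uncertainty(True))   # R knows S is true  -> 0
print(trivial_uncertainty(False))  # R knows S is false -> 0
print(trivial_uncertainty(None))   # R does not know    -> 1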

 

 

The question, then, is whether human uncertainty can be quantified in any non-trivial way.

 

I said no, and I think he wants an analytic proof that the answer is no. I will have to think about what to do next.

 

This is especially interesting in the case of human memory. What I mean is this. Suppose that you used to know how to drive to someplace, but that you have forgotten now. So you are about to take a trip there, but you don't remember how to go. Yet, if you pause to think long enough, you may remember. So here is a simple case where your uncertainty varied in time. You were initially uncertain as to the driving directions, and then later you remembered, and were no longer uncertain. You can see the binary pattern. Human uncertainty is a two valued variable. The question is what constitutes a proof of this, beyond direct experience.

 

That I don't know how to answer, or even whether it should be answered.


Swansont~

What you are saying confuses me. If I step outside and I am uncertain as to whether it will rain or not, I still can't quantify that uncertainty. If I say to myself, "I am 25% certain it will rain soon," that is completely arbitrary. This is why Johnny says you can't analytically prove it is (im)possible to be .7 certain.

 

Then how do you decide whether or not to take your umbrella?

 

How does anyone do any kind of risk/reward analysis?

 

Johnny's stated position is that it is binary: 100% certainty or uncertainty. So if you can't state that an outcome is 100% certain, you have absolutely no clue about the outcome. If that's truly the case, let's play poker sometime. Bring a wad of cash. You'll be compelled by your position to bet the same amount on almost every hand, since you can't be certain that you don't have a winning hand (unless you have 2,3,4,5,7 off-suit) and have sworn you have no way to quantify your (un)certainty. You'd also be compelled to draw cards at random, but to avoid any problem we'll just play stud.
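A minimal sketch (Python; the numbers are made up purely for illustration) of the kind of risk/reward calculation the poker example appeals to: any call-or-fold decision implicitly compares a quantified win probability against the pot odds.

def expected_value(win_prob, pot, cost_to_call):
    # EV of calling: win the pot with probability win_prob, lose the call otherwise.
    return win_prob * pot - (1 - win_prob) * cost_to_call

# If you judge your hand wins 30% of the time, calling 10 into a pot of 50 has
# EV = 0.3*50 - 0.7*10 = 8 > 0, so the call is worth making. A strictly binary
# certain/uncertain bettor has no basis for making this comparison at all.
print(expected_value(0.30, 50, 10))  # roughly 8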

 

Anyone willing to put their money where their mouth is, as it were?


Anyone willing to put their money where their mouth is, as it were?

 

I want to hear what you have to say about this. So what are you going to explain?

 

Before you answer...

 

Suppose that I give you a bag with 52 cards in it, a normal deck. And you take out 51 cards. I then ask you: what is the probability that the card still inside the bag is the ace of spades? You can look at all 51 cards at your leisure. If you are truly certain that all 52 cards of a normal deck went into the bag, because you put them in yourself, then certainly you can figure out whether or not the ace of spades is still in the bag. In fact, you could tell me exactly what card is in the bag, with no uncertainty. So I still see human certainty as obeying binary logic.
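A minimal sketch (Python; the card encoding is mine, purely for illustration) of that deduction: with a fully known deck, the unseen card is determined by set difference, so there is nothing left to be uncertain about.

import random

ranks = "A23456789TJQK"
suits = "SHDC"
full_deck = {r + s for r in ranks for s in suits}  # the 52 cards you put in yourself

def card_left_in_bag(seen_51):
    # The one card not among the 51 seen follows with certainty.
    remaining = full_deck - set(seen_51)
    assert len(remaining) == 1
    return remaining.pop()

deck = list(full_deck)
random.shuffle(deck)
seen, in_bag = deck[:51], deck[51]
print(card_left_in_bag(seen) == in_bag)  # True: zero uncertainty about the last card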


Can we really be sure that the world will still exist tomorrow? No. There is a chance (albeit an extremely slim one) that a meteor will strike the earth, or that the sun will explode and burn the earth to ash. Regardless of how unlikely these events are, the possibility still exists, so we cannot be absolutely 100% certain that the earth will still exist tomorrow, i.e.

certainty that the earth will still exist tomorrow < 1

Your binary model of certainty only allows us to have a certainty level of 1 or 0. So what do we say about our certainty that the world will exist tomorrow? Do we round it up to 1, and ignore the slim but very real chance that the earth will get hit by a meteor before tomorrow? Or, with 1 and 0 as the only options, and with us not being certain to a level of 1, do we simply say that our certainty level is 0?

Your binary model also yields a paradox: if we are not 100% certain that the world will still exist tomorrow, and we are not 100% certain that the world will not exist tomorrow, then what is our certainty that the world will still exist tomorrow?

Not 1, as we are not 100% certain that the world will still exist tomorrow.

Not 0, as this is equivalent to saying that we are 100% certain that the world will not exist tomorrow (which is not true).

And not a certainty of 0 < x < 1, as your model does not allow this.

So, when asked "will the world still exist tomorrow?", the only answer that I see your model allowing is "I cannot comment".

Or "we cannot be certain", but in this case at least, we can be pretty certain that the world will, in actual fact, exist tomorrow.

And we can quantify our level of (un)certainty by saying that we are "almost 100% certain", which is incompatible with your model.

Just rationally think about 'how certain are you that the world will still exist tomorrow' compared to 'how certain are you that a tossed coin will land heads'. Neither of them has a certainty of either 1 or 0, yet you are clearly certain of the validity of the two statements to different levels, i.e. almost 100% certain of the validity of the statement 'the world will still exist tomorrow', but only roughly 50% certain of the validity of the statement 'a tossed coin will land heads'. Therefore you can quantify (un)certainty in terms other than 1 and 0.

Unless we're talking semantics here, i.e. that the word 'certain' is an absolute?


Can we really be sure that the world will still exist tomorrow? No. There is a chance (albeit an extremely slim one) that a meteor will strike the earth, or that the sun will explode and burn the earth to ash. Regardless of how unlikely these events are, the possibility still exists, so we cannot be absolutely 100% certain that the earth will still exist tomorrow, i.e.

certainty that the earth will still exist tomorrow < 1

Your binary model of certainty only allows us to have a certainty level of 1 or 0. So what do we say about our certainty that the world will exist tomorrow? Do we round it up to 1, and ignore the slim but very real chance that the earth will get hit by a meteor before tomorrow? Or, with 1 and 0 as the only options, and with us not being certain to a level of 1, do we simply say that our certainty level is 0?

Your binary model also yields a paradox: if we are not 100% certain that the world will still exist tomorrow, and we are not 100% certain that the world will not exist tomorrow, then what is our certainty that the world will still exist tomorrow?

Not 1, as we are not 100% certain that the world will still exist tomorrow.

Not 0, as this is equivalent to saying that we are 100% certain that the world will not exist tomorrow (which is not true).

And not a certainty of 0 < x < 1, as your model does not allow this.

So, when asked "will the world still exist tomorrow?", the only answer that I see your model allowing is "I cannot comment".

Or "we cannot be certain", but in this case at least, we can be pretty certain that the world will, in actual fact, exist tomorrow.

And we can quantify our level of (un)certainty by saying that we are "almost 100% certain", which is incompatible with your model.

Just rationally think about 'how certain are you that the world will still exist tomorrow' compared to 'how certain are you that a tossed coin will land heads'. Neither of them has a certainty of either 1 or 0, yet you are clearly certain of the validity of the two statements to different levels, i.e. almost 100% certain of the validity of the statement 'the world will still exist tomorrow', but only roughly 50% certain of the validity of the statement 'a tossed coin will land heads'. Therefore you can quantify (un)certainty in terms other than 1 and 0.

Unless we're talking semantics here, i.e. that the word 'certain' is an absolute?

 

I am certain of this much though:

 

Either the world will still exist tomorrow or not.

 

You can be uncertain about both things...

 

You can be uncertain about the truth of the statement, "The world will still be here tomorrow"

 

You can also be uncertain about the truth value of the statement, "the world will not still be here tomorrow"

 

There is nothing strange about this, because these statements are about the future.

 

But you cannot add up your uncertainties about each of the parts and expect the sum to equal your uncertainty about the whole.

 

Maybe I can say that better later, when I think about it some more.


Well then, your model would have a certainty of 0 for both statements?

Tell me, if we were to go back through these posts and replace the word 'certain' with 'confident', would that be acceptable to you? E.g., "after 99 white swans, we could be roughly 99% confident that the 100th swan would be white".


Well then, your model would have a certainty of 0 for both statements?

 

It would go like this.

 

Let S denote an arbitrary statement.

Let |S| denote the truth value of S.

 

If I am uncertain as to the truth value of S, then U(|S|) = 1, and conversely.

If I am not uncertain as to the truth value of S, then U(|S|) = 0.

 

Ok so...

 

Let U(X or not X) denote my uncertainty about the statement (X or not X).

 

I know binary logic, so I know that (X or not X) is the law of the excluded middle, and hence true, for any statement X, so

 

U(X or not X) = 0

 

So that's going to hold for me, no matter what statement is used as X.

 

 

But let X denote a statement about the future, like this one:

 

X = Tomorrow I win the lottery.

 

Well, I didn't buy a ticket for it, and I don't plan to, so I don't expect to win the lottery; but I may go out and buy one tomorrow, and really win in the evening.

 

So

 

I don't currently know the truth value of the statement, "tomorrow I win the lottery."

 

I also don't currently know the truth value of the statement, "tomorrow I don't win the lottery"

 

So...

 

U(X) = 1

 

U(not X) = 1

 

So it is not a theorem of the logic I am using that:

 

 

2 = U(X) + U(not X) = U(X or not X) = 0

 

It can't be a theorem of it.
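To make the arithmetic point concrete, here is a minimal sketch (Python; the encoding of "knows the truth value" as a boolean is mine) of why that chain cannot hold for the binary U: the two component uncertainties are each 1, while the uncertainty about the disjunction is 0.

def U(agent_knows_truth_value: bool) -> int:
    # Binary uncertainty: 0 if the truth value is known, 1 otherwise.
    return 0 if agent_knows_truth_value else 1

# X = "tomorrow I win the lottery"
u_X = U(False)      # truth value of X unknown today            -> 1
u_notX = U(False)   # truth value of (not X) unknown today      -> 1
u_either = U(True)  # (X or not X) known true, excluded middle  -> 0
print(u_X + u_notX, u_either)  # 2 versus 0, so U is not additive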

 

Regards


So it is not a theorem of the logic I am using that:

 

 

2 = U(X) + U(not X) = U(X or not X) = 0

 

It can't be a theorem of it.

 

Agree, as 2 cannot equal 0.

Hmmm, I am increasingly beginning to think that we are arguing linguistics.

I accept that certainty and uncertainty are absolutes, i.e. you are either certain or you are not.

I agree that you cannot be 0.7 certain, any more than you can be 0.7 of any absolute (0.7 infinite, 0.7 biggest, etc.).

However, there is a very real and very useful way of measuring your level of certainty in a non-binary way.

 

For example, imagine that the tallest person in the world is 10 m tall.

A person who is 9 m tall is not 0.9 of the tallest person in the world, as 'tallest' is an absolute. You cannot be 0.9 of the tallest person in the world. You either are the tallest person in the world, or you are not.

But you can be 0.9 of the height of the tallest person in the world.

With (un)certainty, if you take it as an absolute, then OK, you cannot be 0.7 certain.

But you can be 0.7 times as certain as you would be if you were certain to a level of 1.

 

Umm, not sure that last statement was worded well, so to further explain:

When I say I am 0.9 certain that the statement 'X' is true, what I mean could be viewed as:

STATEMENT 1: I am certain, to a certainty level of 1, that statement 'X' is true

STATEMENT 2: there is a 90% probability that statement 1 will be correct

So, OK, you can say that you can only be certain to a level of 0 or 1. But you have to accept that your assessment of your certainty may be incorrect, and the probability that you are incorrect is something that can be measured, i.e.

 

STATEMENT X: the world will still exist tomorrow

STATEMENT 1: I am certain to a level of 1 that statement X is correct

STATEMENT 2: there is a virtually 100% chance that statement 1 will be proven correct.

 

So overall, taking into account our certainty and the probability that we are correct, we can say that we are almost 100% certain/confident that the world will still exist tomorrow.

Or, in the case of the swans:

 

So you have a bag with 100 swans. You pull out 99, and they are all white.

Two hypotheses fit the fact that 99 swans are known to be white, viz:

 

H1: 99 swans are white and 1 swan is non-white

H2: 100 swans are white.

 

The probability that the first 99 swans out of the bag are all white is P(1st swan is white) * P(2nd swan is white) * P(3rd swan is white) * ... * P(99th swan is white), each term conditional on the draws before it.

At each draw, the probability that a white swan is chosen = the number of white swans remaining / the total number of swans remaining.

If H1 is true, then P(first 99 swans are all white) = (99/100)*(98/99)*(97/98)*...*(1/2), the 1/2 being the 99th draw, where according to the hypothesis there would be two swans left, one white and one non-white. The product telescopes to 1/100 (which makes sense, considering that the probability the non-white swan is picked last is the same as the probability it is picked first, i.e. 1/100).

So there would be a 1/100 chance of getting the observed result if H1 is the case.

If H2 is the case, then there would be a 100/100 chance of getting the observed result (if all 100 swans are white, the chance that the first 99 drawn are all white is 1).

 

Not entirely sure where to go from here, but I think that the following is logically/mathematically true...

Probability of obtaining the observed result if the given hypothesis is true:

 

H1: 1/100

H2: 100/100

 

Therefore, H2 is 100 times more likely to yield the observed result than H1, so, treating the two hypotheses as equally plausible to begin with, P(H1 is correct) = 1/101 and P(H2 is correct) = 100/101.

 

STATEMENT X: all the swans are white (H2 in above example)

STATEMENT 1: I am certain to a level of 1 that statement X is correct

STATEMENT 2: there is a 99.00990099% chance that statement 1 will be proven correct

 

So, overall, I am 99.00990099% confident/certain that all 100 swans are white.
