
Approaching 1/2 Probability


Conjurer


14 minutes ago, uncool said:

I get that you're trying to talk about something close to the law of large numbers (which is why I referred to it), but I still don't understand what the "problem" is supposed to be, no.

How does the average of the possible outcomes approach the expected value when the probability of getting exactly the same number of heads and tails decreases as you increase the number of coin flips?

Edited by Conjurer

10 minutes ago, Conjurer said:

How does the average of the possible outcomes approach the expected value when the probability of getting exactly the same number of heads and tails decreases as you increase the number of coin flips?

Because for any range around the expected value of 1/2, say, 0.498 to 0.502, the number of outcomes (i.e. number of "allowed heads") within that range increases as N gets larger. In other words, the sequences that "help" the theorem (speaking very, very non-precisely) do not have to be only those that get exactly the same number of heads and tails.
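
A rough numerical sketch of this (a Python illustration, not from the original posts; the 0.498 to 0.502 window is the example above):

[code]
# Fraction of all 2^n coin-flip sequences whose proportion of heads
# falls inside the window (0.498, 0.502): it grows toward 1 as n grows.
from math import comb

for n in (1000, 10000, 100000):
    mass = sum(comb(n, k) for k in range(n + 1) if 0.498 < k / n < 0.502) / 2**n
    print(n, mass)  # roughly 0.08, then 0.30, then 0.79
[/code]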

Edited by uncool

5 minutes ago, uncool said:

Because for any range around the expected value of 1/2, say, 0.498 to 0.502, the number of outcomes (i.e. number of "allowed heads") within that range increases as N gets larger. 

The number of possible outcomes with exactly the expected value actually decreases, relative to the total number of possible outcomes, as the number of flips gets larger.

Edited by Conjurer

3 minutes ago, Conjurer said:

The number of possible outcomes with exactly the expected value actually decreases, relative to the total number of possible outcomes, as the number of flips gets larger.

You are missing my point. Again: it isn't only about "outcomes with the expected value". It's about outcomes close to the expected value. 

For a quick (and imprecise) example:

Consider the sequence H, H, T, H, T, H, T, ...

At no point is the number of heads equal to the number of tails. But the proportion of heads still approaches 1/2.
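
A tiny Python sketch (an added illustration) tracking that running proportion:

[code]
# Running tally for the deterministic sequence H, H, T, H, T, H, T, ...
# Heads stays ahead by 1 or 2 forever, yet the proportion of heads -> 1/2.
heads = tails = 0
for flip in range(1, 1_000_001):
    if flip == 1 or flip % 2 == 0:   # flips 1, 2, 4, 6, ... come up heads
        heads += 1
    else:                            # flips 3, 5, 7, ... come up tails
        tails += 1
print(heads - tails, heads / (heads + tails))  # prints: 2 0.500001
[/code]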

Edited by uncool

14 minutes ago, uncool said:

You are missing my point. Again: it isn't only about "outcomes with the expected value". It's about outcomes close to the expected value. 

For a quick (and imprecise) example:

Consider the sequence H, H, T, H, T, H, T, ...

At no point is the number of heads equal to the number of tails. But the proportion of heads still approaches 1/2.

How would you propose choosing the range of outcomes to be selected in order to obtain an average equal to the expected value? I don't know of any established process for doing such a thing in mathematics.

Edited by Conjurer

By allowing any (reasonable) choice of what "close to" means, using quantifiers.

The theorem says that for any reasonable choice of "close to" (read: for any open interval containing 1/2), the proportion of outcomes covered by that "close to" will get arbitrarily close to 1 (as the number of flips increases). 

In short, by using nearly the same setup as standard limits.

Edited by uncool

9 minutes ago, uncool said:

By allowing any (reasonable) choice of what "close to" means, using quantifiers.

The theorem says that for any reasonable choice of "close to" (read: for any open interval containing 1/2), the proportion of outcomes covered by that "close to" will get arbitrarily close to 1 (as the number of flips increases). 

In short, by using nearly the same setup as standard limits.

I added up all the probabilities of getting the same number of heads and tails as the number of flips approaches infinity, then divided by the number of flips, and I got 1/r!

[math]\lim_{n \to \infty} \frac{1}{n} \int \frac{n!}{n^r \, r! \, (n-r)!} \, dn = \frac{1}{r!}[/math]

 

 


1) The limit you have set up doesn't make sense, because you quantify n twice.

2) It looks to me like you are quasi-randomly putting formulae together without understanding what they are for, rather than figuring out precisely what you want, then determining the formula for that. What, exactly, is that formula supposed to calculate?

3) Yet again, "getting the same number of heads and tails" is not the right way to look at it.

4) Is your post supposed to be a response to mine? It doesn't refer to anything I actually said...

 

The precise statement of the theorem for this case is:

For any a < 1/2 < b, for any epsilon > 0, there exists some N such that for any n > N, the number of sequences of length n with number of heads between a*n and b*n is greater than (1 - epsilon)*2^n. 

Edited by uncool

21 minutes ago, uncool said:

1) The limit you have set up doesn't make sense, because you quantify n twice.

2) It looks to me like you are quasi-randomly putting formulae together without understanding what they are for, rather than figuring out precisely what you want, then determining the formula for that. What, exactly, is that formula supposed to calculate?

3) Yet again, "getting the same number of heads and tails" is not the right way to look at it.

 

The precise statement of the theorem for this case is:

For any a < 1/2 < b, for any epsilon > 0, there exists some N such that for any n > N, the number of sequences of length n with number of heads between a*n and b*n is greater than (1 - epsilon)*2^n. 

I took the equation for the number of combinations nCr = (n!)/(r! (n - r)!), and I divided that by the number of permutations with replacement nPr = n^r in order to get the probability of a desirable outcome out of the total number of outcomes.  Then I took the integral of that to add up all of the probabilities of those outcomes.  Then you say the law of large numbers requires them all to be divided by the number of flips, so I divided that by n.

I didn't get one as an answer doing that. It should be 1/r! in any case where this happens. If there were 4 flips, it would be 1/2! = 1/2. Then if there were 6 flips, it would be 1/3! = 1/6, because r = n/2 in this example.

Edited by Conjurer

39 minutes ago, Conjurer said:

I took the equation for the number of combinations nCr = (n!)/(r! (n - r)!), and I divided that by the number of permutations with replacement nPr = n^r in order to get the probability of a desirable outcome out of the total number of outcomes.

It should be nCr/2^n, not nCr/n^r, if by "desirable outcome" you mean "exactly r heads". 
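
A quick sanity check of the two denominators (a Python sketch, not from the original posts): with 2^n the values form a genuine probability distribution over r, i.e. they sum to 1; with n^r they do not.

[code]
from math import comb

n = 6
print(sum(comb(n, r) for r in range(n + 1)) / 2**n)  # 1.0: nCr/2^n sums to 1 over r
print(sum(comb(n, r) / n**r for r in range(n + 1)))  # ~2.52: nCr/n^r does not
[/code]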

39 minutes ago, Conjurer said:

Then I took the integral of that to add up all of the probabilities of those outcomes. 

Integral with respect to what, and why is the integral the appropriate choice here - especially when everything is discrete, not continuous?

39 minutes ago, Conjurer said:

If there were 4 flips it would be 1/2! = 1

Err, 1/2! = 1/2, not 1. 

 

The mathematical statement of the theorem as applied to this case would be:

lim_{n -> infinity} (sum_{an < i < bn} (nCi)/2^n) = 1

Summation, not integration, over possible numbers of heads, not over n. 
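
A numerical sketch of that sum (a Python illustration; a = 0.49 and b = 0.51 are an arbitrary choice satisfying a < 1/2 < b), shown next to the probability of exactly n/2 heads to make the contrast explicit:

[code]
from math import comb

a, b = 0.49, 0.51
for n in (100, 1000, 10000):
    # Total probability of all outcomes whose head count lies strictly
    # between a*n and b*n, versus the single outcome with exactly n/2 heads.
    window = sum(comb(n, i) for i in range(n + 1) if a * n < i < b * n) / 2**n
    exact = comb(n, n // 2) / 2**n
    print(n, round(window, 3), round(exact, 5))
# The window mass climbs toward 1 (about 0.08, 0.45, 0.95) while the
# probability of exactly n/2 heads shrinks toward 0 (about 0.08, 0.025, 0.008).
[/code]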

Edited by uncool

26 minutes ago, uncool said:

It should be nCr/2^n, not nCr/n^r, if by "desirable outcome" you mean "exactly r heads". 

I thought it could be done both ways. 2^4 = 16; that is 4 flips of either heads or tails. Then if it were n^r, it would be 4^2 = 16.

2^6 = 64; that is 6 flips of either heads or tails. Then if it were n^r, it would be 6^3 = 216.

Maybe you are on to something there. I had only checked the first case before. I don't see why n^r wouldn't work in this situation. That is the equation that was given in the middle of the page here:

https://www.calculator.net/permutation-and-combination-calculator.html

33 minutes ago, uncool said:

Integral with respect to what, and why is the integral the appropriate choice here - especially when everything is discrete, not continuous?

I thought it would just put it into a more generalized form that would work for all cases, in order to be considered a proof. A mathematical proof doesn't seem as good if it only works for one special case. I was just using this example to try to get to it.

34 minutes ago, uncool said:

Err, 1/2! = 1/2, not 1. 

I am not sure how that typo occurred, but I corrected it. I thought it was interesting that the law of large numbers would say the average probability of getting exactly the same number of heads and tails was 1/2 for only 4 flips, when the probability of getting that is 3/8 of all the possible outcomes.
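
(For the record, the 3/8 figure checks out: [math]\binom{4}{2}/2^4 = 6/16 = 3/8[/math].)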

39 minutes ago, uncool said:

The mathematical statement of the theorem would be:

lim_{n -> infinity} (sum_{an < i < bn} (nCi)/2^n) = 1

Summation, not integration, over possible numbers of heads, not over n. 

[math]\lim_{n \to \infty} \frac{1}{n} \int \frac{n!}{2^n \, r! \, (n-r)!} \, dn = 0[/math]


16 minutes ago, Conjurer said:

I thought it could be done both ways. 2^4 = 16; that is 4 flips of either heads or tails. Then if it were n^r, it would be 4^2 = 16.

That is a coincidence. If you had 6 flips, there would be 64 sequences, not 216.
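
Direct enumeration confirms the count (a Python sketch):

[code]
from itertools import product

# One tuple per distinct ordering of heads and tails across 6 flips.
print(len(list(product("HT", repeat=6))))  # 64 = 2^6, not 216 = 6^3
[/code]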

16 minutes ago, Conjurer said:

I thought it would just put it into a more generalized form that would work for all cases, in order to be considered a proof. A mathematical proof doesn't seem as good if it only works for one special case. I was just using this example to try to get to it.

The above provides no justification for integrating anything, and reads like what I said above: "quasi-randomly putting formulae together without understanding what they are for, rather than figuring out precisely what you want, then determining the formula for that."

16 minutes ago, Conjurer said:

I thought it was interesting that the law of large numbers would say the average probability of getting exactly the same number of heads and tails was 1/2 for only 4 flips, when the probability of getting that is 3/8 of all the possible outcomes.

I have no idea what you are saying here; the law of large numbers says nothing about what happens for a specific number of flips - it's about large numbers.

16 minutes ago, Conjurer said:

[math]\lim_{n \to \infty} \frac{1}{n} \int \frac{n!}{2^n \, r! \, (n-r)!} \, dn = 0[/math]

Again: why integrate? What is this formula supposed to calculate? And the 1/n restricts "allowed outcomes"; it doesn't multiply any probabilities.

Edited by uncool

6 minutes ago, uncool said:

The above provides no justification for integrating anything, and reads like what I said above: "quasi-randomly putting formulae together without understanding what they are for, rather than figuring out precisely what you want, then determining the formula for that."

Because the law of calculus says that is the same as adding them all together.

7 minutes ago, uncool said:

I have no idea what you are saying here; the law of large numbers says nothing about what happens for a specific number of flips - it's about large numbers.

I used the weak law to calculate it

[math]\overline{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)[/math]

8 minutes ago, uncool said:

Again: why integrate? And the 1/n restricts "allowed outcomes"; it doesn't multiply any probabilities.

I want the generalized form for any type of event


11 minutes ago, Conjurer said:

Because the law of calculus says that is the same as adding them all together.

Which "law of calculus"? Please explain specifically.

14 minutes ago, Conjurer said:

I used the weak law to calculate it

The weak law is still about large numbers.
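
For reference, the full statement of the weak law, with [math]\overline{X}_n[/math] the sample mean defined above and [math]\mu[/math] the expected value of each (independent, identically distributed) [math]X_i[/math], is itself a limit:

[math]\lim_{n \to \infty} P\left( \left| \overline{X}_n - \mu \right| > \varepsilon \right) = 0 \quad \text{for every } \varepsilon > 0[/math]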

14 minutes ago, Conjurer said:

I want the generalized form for any type of event

That still doesn't explain the formula you are using.


15 minutes ago, uncool said:

Which "law of calculus"? Please explain specifically.

The fundamental theorem of calculus

16 minutes ago, uncool said:

The weak law is still about large numbers.

I don't think it actually matters how large the numbers used with it are. Any law or theorem should work the same no matter what size variables are used.

18 minutes ago, uncool said:

That still doesn't explain the formula you are using.

I just put what you gave me into Wolfram to solve it. That was the combinations divided by the total number of outcomes as the number of flips approaches infinity. It gave me the answer that there is 0 chance that I could flip a coin an infinite number of times and get the same amount of heads and tails.


1 minute ago, Conjurer said:

The fundamental theorem of calculus

Please explain precisely how you are using the fundamental theorem of calculus here.

2 minutes ago, Conjurer said:

I don't think it actually matters how large the numbers used with it are. Any law or theorem should work the same no matter what size variables are used.

The law of large numbers is about a limit, so no, what happens for small numbers does not affect it.

3 minutes ago, Conjurer said:

I just put what you gave me into Wolfram to solve it.

The formula you have written is not what I gave you. I told you the exact formula:

lim_{n -> infinity} (sum_{an < i < bn} (nCi)/2^n) = 1


5 minutes ago, uncool said:

Please explain precisely how you are using the fundamental theorem of calculus here.

The integral is the same as an undefined interval of sums.  The combination formula is not limited to any specific values.

11 minutes ago, uncool said:

The law of large numbers is about a limit, so no, what happens for small numbers does not affect it.

In the example the wiki gives about a die roll, it just adds 1-6 and divides by 6 to get 3.5. Then it explains that a large number of rolls should average 3.5. I think you are mixing up what the word large implies here. It is just saying that a large number of rolls should average to be the expected value.
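
That example is easy to simulate (a Python sketch using the standard random module):

[code]
import random

# Average of many fair die rolls; the expected value is (1+2+3+4+5+6)/6 = 3.5.
rolls = [random.randint(1, 6) for _ in range(1_000_000)]
print(sum(rolls) / len(rolls))  # close to 3.5
[/code]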

13 minutes ago, uncool said:

lim_{n -> infinity} (sum_{an < i < bn} (nCi)/2^n) = 1

You didn't define the limitations of a and b so then it is the same as the integral. Instead of nCi, I put in the whole combination formula with it being over 2^n.


35 minutes ago, Conjurer said:

The integral is the same as an undefined interval of sums.

This is not precise. I am asking you to be precise. The integral of what function will be equal to what sum? I think you are likely misremembering how Riemann sums are used.

35 minutes ago, Conjurer said:

In the example the wiki gives about a die roll, it just adds 1-6 and divides by 6 to get 3.5. 

That is the method to get the expected value, yes.

35 minutes ago, Conjurer said:

Then it explains that a large number of rolls should average 3.5.

Yes, which doesn't say anything about, for example, what happens when you roll 4 times. 

35 minutes ago, Conjurer said:

You didn't define the limitations of a and b so then it is the same as the integral.

First, I did define a and b (in an earlier post): "For any a < 1/2 < b". Pick any a and b satisfying those inequalities, and the limit I wrote will be correct. Second, I guarantee that it is not, in part because the summation is over values of i, not over values of n, in part because the "division by n" is about the bounds (the sum could equally be written as "sum_{a < i/n < b}"), not the value, and in part because there is no integration happening.
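
In display form, the two equivalent ways of writing that sum are:

[math]\lim_{n \to \infty} \sum_{an < i < bn} \frac{\binom{n}{i}}{2^n} = \lim_{n \to \infty} \sum_{a < i/n < b} \frac{\binom{n}{i}}{2^n} = 1[/math]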

Edited by uncool

7 hours ago, Conjurer said:

The probability of getting heads or tails is 1/2, so you should get either heads or tails, half of the time...

You are claiming that as a result of flipping a coin over and over, you should end up only getting either heads or tails most of the time, after flipping it a lot.  What you are saying just doesn't work out with observational reality!  

I am the one asking for the proof to this problem!  I am the one that doesn't understand how probability theory has become accepted without such a proof.  The point of this post is me asking you for a proof of why the difference in the total amount of heads and tails doesn't approach infinity.  In reality, you will never encounter a situation where you just flip so many heads, and then a coin just starts flipping tails a predominantly larger amount forever.

This is still vague. Can you use mathematical symbols to show what you mean? Do you think the formulas I posted have not been proved* mathematically? 

Can you post a reference to observations that do not match what I said?

I'll try to get some time to run a simulation (more fun than posting results from others). 

6 hours ago, Conjurer said:

Given more chances, yes, you could tend to a larger count of something in a row that is larger than another something, but something would have to halt that process for it to work.  

Where in the formulas I posted is it stated that something must occur in a row or in a specific order? Note that earlier throws of a fair coin of course do not affect the outcome of future throws.

 

*) Examples: 
http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-GRP.pdf
https://en.wikipedia.org/wiki/Law_of_large_numbers#Proof_of_the_weak_law


Here are some results from a simple simulation. I’ve tried to incorporate some of the things mentioned in this thread. 

I think @uncool has covered many of the more theoretical aspects, hence I focus on a more experimental approach in this post.

The simulation throws a coin n times, keeping track of heads and tails, and finally calculates [math]|h-t|[/math] (h = heads, t = tails) and [math]\frac{h}{t}[/math]. Initially n = 10. To reduce the impact of any statistical outliers, the above is repeated 20 times and the averages of [math]|h-t|[/math] and [math]\frac{h}{t}[/math] are kept. Then the number of throws n is increased by a factor of 10, so n = 100, and the process above is run again. This is repeated for n = 1000, n = 10000, … , n = 1000000000. The final result is a list of tuples containing n, the average of [math]|h-t|[/math], and the average of [math]\frac{h}{t}[/math]:

no of throws per run    average(|heads-tails|) over 20 runs    average(heads/tails) over 20 runs
10                      2                                      1.61845238095238
100                     6                                      1.00452536506648
1000                    29                                     0.992180661027686
10000                   80                                     1.00101746079017
100000                  263                                    1.00211222673523
1000000                 716                                    1.00060018766435
10000000                2553                                   1.00001882460519
100000000               7893                                   1.00001224082561
1000000000              27652                                  1.00002288341028

The results support my earlier statements as far as I can tell. If you disagree, please comment in detail on what additional simulations or observations you require; the descriptions provided so far are not very clear. If there is still doubt regarding the validity of the formulas, post a detailed description of what mathematical proof or simulations you have performed or want help to perform.

Unless specifically requested, I see no reason to post long lists of source code.
Note: I have limited time to investigate the specific random number implementation used, but I used a random number generator intended for cryptography instead of a standard one. It should be good enough not to have an impact on the results.
I would have preferred to run larger tests, but I do not have time to improve the program.
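
For completeness, the procedure described can be reconstructed as a minimal Python sketch (this is not the original program, which was not posted; random.SystemRandom stands in for the cryptographic generator mentioned in the note above):

[code]
import random

rng = random.SystemRandom()  # OS-level entropy, standing in for the crypto RNG noted above

RUNS = 20
n = 10
while n <= 10**9:  # the described run goes to 1e9; lower this bound for a quick test
    diff_total = ratio_total = 0.0
    for _ in range(RUNS):
        heads = sum(rng.getrandbits(1) for _ in range(n))  # 1 = heads, 0 = tails
        tails = n - heads
        diff_total += abs(heads - tails)
        ratio_total += heads / max(tails, 1)  # guard against a (very unlikely) all-heads run
    print(n, diff_total / RUNS, ratio_total / RUNS)
    n *= 10
[/code]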

 

Edited by Ghideon

16 hours ago, Ghideon said:

This is still vague. Can you use mathematical symbols to show what you mean? Do you think the formulas I posted have not been proved* mathematically? 

Can you post a reference to observations that do not match what I said?

I'll try to get some time to run a simulation (more fun than posting results from others). 

Where in the formulas I posted is it stated that something must occur in a row or in a specific order? Note that earlier throws of a fair coin of course do not affect the outcome of future throws.

 

*) Examples: 
http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-GRP.pdf
https://en.wikipedia.org/wiki/Law_of_large_numbers#Proof_of_the_weak_law

By definition, it is the law of large numbers, but I don't think that is going to help you grasp the concept any more than the statements I have already provided here for you.

The problem now is that

2 minutes ago, Ghideon said:

Here are some results from a simple simulation. I’ve tried to incorporate some of the things mentioned in this thread.

I think @uncool has covered many of the more theoretical aspects, hence I focus on a more experimental approach in this post.

The simulation throws a coin n times, keeping track of heads and tails, and finally calculates [math]|h-t|[/math] (h = heads, t = tails) and [math]\frac{h}{t}[/math]. Initially n = 10. To reduce the impact of any statistical outliers, the above is repeated 20 times and the averages of [math]|h-t|[/math] and [math]\frac{h}{t}[/math] are kept. Then the number of throws n is increased by a factor of 10, so n = 100, and the process above is run again. This is repeated for n = 1000, n = 10000, … , n = 1000000000. The final result is a list of tuples containing n, the average of [math]|h-t|[/math], and the average of [math]\frac{h}{t}[/math]:


no of throws per run    average(|heads-tails|) over 20 runs    average(heads/tails) over 20 runs
10                      2                                      1.61845238095238
100                     6                                      1.00452536506648
1000                    29                                     0.992180661027686
10000                   80                                     1.00101746079017
100000                  263                                    1.00211222673523
1000000                 716                                    1.00060018766435
10000000                2553                                   1.00001882460519
100000000               7893                                   1.00001224082561
1000000000              27652                                  1.00002288341028

The results support my earlier statements. They do not support your claim cited above. If you disagree, please comment in detail on what additional simulations or observations you require; the description so far is not very clear. If there is still doubt regarding the validity of the formulas, post a detailed description of what mathematical proof or simulations you have performed or want help to perform.

Unless specifically requested, I see no reason to post long lists of source code.
Note: I have limited time to investigate the specific random number implementation used, but I used a random number generator intended for cryptography instead of a standard one. It should be good enough not to have an impact on the results.
I would have preferred to run larger tests, but I do not have time to improve the program.

 

That does not agree with the law of large numbers...

The problem now is that, the law of large numbers doesn't consider the probabilities of the possible outcomes.


1 minute ago, Ghideon said:

How? Details please.

https://en.wikipedia.org/wiki/Law_of_large_numbers

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

