Chance vs. probability

Bignose and Cap'n Refsmmat, with the greatest respect both for yourselves and for your positions, I fear you have completely misunderstood my point or points.

 

 

I continue to hold that the word ‘chance’ embodies different concepts from the word ‘probability’ for very good reasons, not least because ‘chance’ is a much more general concept.

I agree, however, that some use the word, particularly in the plural ('chances'), as synonymous with (statistical) probability.

It is for this reason that statisticians adopted the word 'probability' for rigorous codification, so that (as several posters have already mentioned) you hardly ever find the word 'chance' used in statistical works.

Chance is, however, often mentioned in other sciences where there may be several causative agents at work, some deterministic, some not.

 

Turning now to the adjective 'random': it is true that a random variate or variable has a well-defined probability distribution function, although, to quote Professor Kreyszig:

 

“Caution! The terminology is not uniform”

 

In elementary theory this function is founded upon, and only deducible from, the principle of mutual exclusion or statistical independence.

This principle is another way of stating my principal point.

 

 

For Bignose's example this is embodied in his stipulation that the dice are fair.

 

This means that the first die has an equal likelihood of 1/6 of turning up 1, 2, 3, 4, 5 or 6, as does the second.

So the likelihood of turning up, say, a total of 12 (a six on each die) is given by the product of these equal probabilities, i.e. 1/6 x 1/6 = 1/36.
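To check the product rule numerically, here is a minimal Python sketch (my own illustration, not from the thread) that simply enumerates all 36 equally likely outcomes of two fair dice:

from fractions import Fraction
from itertools import product

# All equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

# Probability that the total is 12, i.e. both dice show a six.
p_twelve = Fraction(sum(1 for a, b in outcomes if a + b == 12), len(outcomes))
print(p_twelve)  # 1/36, matching 1/6 x 1/6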

 

Similarly, for a fair draw from a pack of cards, the probability of drawing the king of diamonds is 1/52, and the probability of drawing any king is the sum of the individual probabilities,

i.e. 1/52 + 1/52 + 1/52 + 1/52 = 4/52.
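The sum rule can be checked the same way; the deck representation below is just my own illustrative choice:

from fractions import Fraction

# A 52-card deck as (rank, suit) pairs, all assumed equally likely on a fair draw.
ranks = ['A'] + [str(n) for n in range(2, 11)] + ['J', 'Q', 'K']
suits = ['clubs', 'diamonds', 'hearts', 'spades']
deck = [(r, s) for r in ranks for s in suits]

p_king_of_diamonds = Fraction(1, len(deck))                            # 1/52
p_any_king = Fraction(sum(1 for r, s in deck if r == 'K'), len(deck))  # 4/52 = 1/13
print(p_king_of_diamonds, p_any_king)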

 

More complicated combinations are also available to form full distributions.

 

All of this is entirely consistent with what I have said before and also with the new material introduced by yourselves.

Edited by studiot

I continue to hold that the word ‘chance’ embodies different concepts from the word ‘probability’ for very good reasons, not least because ‘chance’ is a much more general concept.

But again, you are the one who in particular invoked the phrase 'in the statistical sense' -- again on a science forum, in the math subforum, implying use of the definitions as they are commonly used by working mathematicians and statisticians.

 

And in this case, chance and probability are used interchangeably. I guess I remain unconvinced that the two words 'embody different concepts'. All the way back to your post #4 talking about the causes... the mathematical descriptions of the randomness don't care about the causes. If there is something more than randomness, or something that skews the distribution, then that can be accounted for, but I have never seen the words 'chance' versus 'probability' used to denote that.

 

Your other example of handedness vs. ambidextrousness doesn't really work either, in my opinion, because that is well covered by the mathematics of hypothesis testing. And the words chance and probability can be interchanged there, too. E.g. what is the probability that this batch of pills has the correct dose in them? What is the chance the transmission will fail on the MY15 truck line after 30 thousand miles? These two questions mean the same thing, at least to me.

 

Any chance you can provide an example from the literature where this specific word choice is made? A very quick, cursory glance at the texts I have suggests they don't make any distinction at all.

Edited by Bignose

So let us continue to examine the word 'random'.

 

A random sequence is a sequence that cannot be expressed more compactly than a complete list by any algorithm.

 

(after Chaitin and Solomonoff; what Wikipedia calls the Kolmogorov definition, if you care to look it up)
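As an aside, Kolmogorov complexity itself is not computable, but a compressor's output length gives a rough stand-in for "cannot be expressed more compactly". A minimal Python sketch of the idea (zlib used purely as a crude proxy; my own illustration, not part of the definition):

import random
import zlib

n = 10_000
constant = b'1' * n                                         # like always computing A = 4/4
coin_flips = bytes(random.choice(b'01') for _ in range(n))  # like flipping a coin each time

# The constant sequence has a far shorter description than the full list;
# the coin-flip sequence compresses much less well.
print(len(zlib.compress(constant)))
print(len(zlib.compress(coin_flips)))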

 

Let my sequence, drawn from a binary system, i.e. 1 or 0, for simplicity, be {A}, where A is either 0 or 1.

 

Now the question arises: is this sequence random?

 

Well, mathematically it conforms to the above definition so it is random.

 

But a physicist might well wish to distinguish between circumstances as to how I arrive at this sequence.

 

For instance, if I always calculate A = 4/4 I will always arrive at the sequence {1}, and if I always calculate A = (4 - 4) I will always arrive at the sequence {0}.

 

So my result is predeterminate.

 

But if I flip a coin and choose A = 1 for heads and A=0 for tails then which sequence I arrive at is indeterminate or at the behest of chance.
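A minimal Python sketch of the contrast (the coin flip is simulated with the random module, which is just an illustrative stand-in for a physical coin):

import random

# Deterministic route: the calculation fixes the outcome in advance.
A_deterministic = 4 // 4        # always the sequence {1}
A_also_deterministic = 4 - 4    # always the sequence {0}

# Chance route: a simulated fair coin flip decides which sequence we get.
A_chance = 1 if random.random() < 0.5 else 0  # {1} for heads, {0} for tails

print([A_deterministic], [A_also_deterministic], [A_chance])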


A random sequence is a sequence that cannot be expressed more compactly than a complete list by any algorithm.

...

For instance if I always calculate A = 4/4 I will always arrive at the sequence {1} and if I always calculate A = (4-4) I will always arrive at the sequence {0}.

 

But if you always calculate A to be a certain value it can be expressed more compactly than a complete list, i.e. it's always one or it's always zero. Therefore it's not a random sequence, so it will be predeterminate.


Surely the simple statement "1" is the shortest possible, and considerably shorter than the shortest mathematical algorithm or calculation I can think of to generate a 1.

 

Whilst some would debate whether the null sequence can be a sequence, or whether it is random, the sequence with just one term is perfectly admissible.


Surely the simple statement "1" is the shortest possible, and considerably shorter than the shortest mathematical algorithm or calculation I can think of to generate a 1.

Whilst some would debate whether the null sequence can be a sequence, or whether it is random, the sequence with just one term is perfectly admissible.

 

OK, but still if you always calculate A to be a certain value it can be expressed more compactly than a complete list and so it's still not a random sequence. I don't understand the relevance of the other statement, please explain.


 

I don't understand the relevance of the other statement, please explain.

 

 

Which other statement? Please be specific; then I can try to answer your question and explain further.

 

 

OK, but still if you always calculate A to be a certain value it can be expressed more compactly than a complete list and so it's still not a random sequence.

 

 

How can this be? The process of calculation is longer than the list.

Edited by studiot

 

Ah, wait. Are we only considering a sequence with one term here?

 

 

Yes, I'm trying to make the example as simple as possible.

 

With binary variables and a single term there are two possible sequences that meet this (or three, depending on whether you include the null sequence, with no terms).

 

Both of these sequences are defined as random, as noted.

 

Either can be arrived at in a deterministic way or in a way that depends upon chance.

 

There is little more to be said about the deterministic route since it determines the outcome.

But there is a twist to the chance route worth discussing if you are interested.

Edited by studiot

[snip]

What happens is that because there are an infinite number of numbers on any interval on the real line, the chances of getting any individual number do indeed go to zero. That is, 1.99999999999 [math]\ne[/math] 2.0 [math]\ne[/math] 2.000000000001. And so on.

 

[/snip]

 

How indiscrete!

 

More like

1.99999999999 [math]\approx[/math] 2.0 [math]\approx[/math] 2.000000000001

 

When you're reading measurements, you assume accuracy down to the last decimal place provided and no further, so it's always discrete.


When you're reading measurements, you assume accuracy down to the last decimal place provided and no further, so it's always discrete.

In the physical world, yes, every measuring device has an accuracy. But mathematically that isn't so. Every range contains an infinite count of numbers. And for a continuous probability distribution, the probability of getting exactly one of those values is zero, per the above.

 

This is solved by talking about probability of a value being in a range, which in many instances could indeed be the accuracy range of a measuring device.
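A minimal Python sketch of the point, assuming SciPy is available and using a standard normal distribution purely as an arbitrary example:

from scipy.stats import norm

a, b = 1.9, 2.1   # an arbitrary 'measurement tolerance' band around 2.0

# Probability of exactly 2.0: the integral from 2.0 to 2.0 is zero.
p_exact = norm.cdf(2.0) - norm.cdf(2.0)

# Probability of landing somewhere in [a, b]: the integral of the pdf from a to b.
p_band = norm.cdf(b) - norm.cdf(a)

print(p_exact)  # 0.0
print(p_band)   # small but strictly positive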

 

I thought I made this clear in the rest of my post (that you '[snip]ped' away). Did you read it all?

Edited by Bignose

I did, but I didn't understand it all.

 

I now agree, however. For calculations from measurements, the output can only have as many significant figures as the datum with the fewest significant figures. This isn't done in typical calculations, implying that exactness is assumed.
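For a concrete illustration of the significant-figures rule, here is a small Python sketch; round_sig is a hypothetical helper written for this example, not a standard library function:

import math

def round_sig(x, sig):
    # Hypothetical helper: round x to `sig` significant figures.
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# 3.1 has two significant figures, 2.457 has four; the product should be
# reported to only two.
print(3.1 * 2.457)                # 7.6167 -- implies more precision than the data support
print(round_sig(3.1 * 2.457, 2))  # 7.6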

Edited by MonDie

 

I did, but I didn't understand it all.

 

 

 

You need to be clear in your mind which is the dependent and which is the independent variable.

 

The probability is the dependent variable.

x is the independent variable in Bignose's case.

Function's case is more complex since there are two independent variables (x and y) to get the area.
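A minimal sketch of the one-variable versus two-variable cases, assuming SciPy is available and using uniform densities purely as stand-ins (the actual distributions discussed in the thread are not reproduced here):

from scipy import integrate

# One independent variable: P(0.2 <= X <= 0.5) for a uniform density on [0, 1].
p_1d, _ = integrate.quad(lambda x: 1.0, 0.2, 0.5)

# Two independent variables: P((X, Y) in a rectangle) for a uniform joint density
# on the unit square -- the probability is found by a double integral over x and y.
p_2d, _ = integrate.dblquad(lambda y, x: 1.0, 0.2, 0.5, lambda x: 0.1, lambda x: 0.4)

print(p_1d)  # 0.3
print(p_2d)  # 0.3 * 0.3 = 0.09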

 

Does this help?

Edited by studiot

It helps a little bit. :) Sorry about the ninja edit.

 


 

[snip]

 

What happens is that because there are an infinite number of numbers on any interval on the real line, the chances of getting any individual number do indeed go to zero. That is, 1.99999999999 [math]\ne[/math] 2.0 [math]\ne[/math] 2.000000000001. And so on.

 

So, given that the above is the answer to the question as asked, does asking that question make sense?

 

What I am driving at is that basically, usually one doesn't care that the exact value of the variables is 1.98435638726323636752387532276354... One normally only cares if it is in some range. "Is the value within 5% of the mean?", "Is the value more than 3 standard deviations from the mean?". Or, similarly, every measuring device we have has some margin of error. If I put a ruler down, the best I can say with confidence is that the length is between a pair of tick marks.

 

So, the question that needs to be asked is: how likely is the variable to be in some range? Or how likely is it that the fly lands in a certain area?

That changes your integral not from a to a, but from a to some b not equal to a.

 

[/snip]

 

You seem to talk about two values: the actual value, and its approximated measurement. As acknowledged, measurements are necessarily discrete, i.e. the spectrum of all measurable values for any instrument is discrete. If the spectrum of all actual values is discrete too, then the probability is greater than zero regardless of which value you mean.

 

edit, edit, edit, oh I can't help myself!

Edited by MonDie

We acknowledge two types of probability distributions:

Discrete and Continuous.

 

For a discrete variable the summation sign (using capital sigma) is appropriate.

For a continuous variable (which may run from minus infinity to plus infinity, but need not; it may run only between certain numbers, say x = a and x = b) the integral is appropriate.
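A minimal sketch of the two cases, assuming SciPy is available and using a fair die and a uniform density purely as examples:

from fractions import Fraction
from scipy import integrate

# Discrete: a fair die.  Probabilities are summed (the capital-sigma case).
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
print(sum(pmf.values()))   # 1

# Continuous: a uniform density on [a, b].  Probabilities come from integrals.
a, b = 2.0, 5.0
pdf = lambda x: 1.0 / (b - a)
total, _ = integrate.quad(pdf, a, b)
print(total)               # 1.0 (up to floating-point error)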

 

Do you understand what the terms outcome and event mean in statistics?

Edited by studiot