
Probability is not impervious to paradoxes


  • Author
9 minutes ago, OldTony said:

This is just an observation, but it seems relevant to the discussion. It is apparently very difficult for the human mind to produce a long list of truly random numbers. There are recognised tests for randomness and, rather strangely I think, even a list of numbers produced by a method such as rolling a die may well fail a test for randomness. For that reason you can purchase a book of random numbers that agree with the rules of randomness.

Well, the word 'random' is a tricky one. As I said, there is no unique way to define 'random'. People tend to think of 'random' as synonymous with 'unbiased'. Bertrand's paradox --which I also mentioned before-- shows that the premise of randomness as total unbiasedness (equal probabilities for all the values of a variable within a range) gives different probabilities for different variables that equally well describe the same problem. Namely:

Consider an equilateral triangle that is inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle?

Then Bertrand proceeds to calculate this probability by different methods, all equally unbiased, but with respect to different variables (two points chosen at random, a point and an angle, etc). The answer is different depending on which variables you choose.
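Bertrand's point is easy to reproduce numerically. Here is a minimal Monte Carlo sketch (my own illustration, not from Bertrand; it implements two of his constructions, 'two random endpoints' and 'a random distance from the centre'):

```python
import math
import random

def bertrand(trials=100_000):
    """Estimate P(chord longer than the side of the inscribed triangle)."""
    side = math.sqrt(3)  # side of an equilateral triangle in a unit circle

    # Method 1: chord from two endpoints chosen uniformly on the circle.
    # The chord length between angles a and b is 2*sin(|a - b| / 2).
    hits = 0
    for _ in range(trials):
        a = random.uniform(0.0, 2.0 * math.pi)
        b = random.uniform(0.0, 2.0 * math.pi)
        if 2.0 * math.sin(abs(a - b) / 2.0) > side:
            hits += 1
    p_endpoints = hits / trials   # converges to 1/3

    # Method 2: chord whose distance from the centre is chosen uniformly
    # along a radius; its length is then 2*sqrt(1 - d**2).
    hits = 0
    for _ in range(trials):
        d = random.uniform(0.0, 1.0)
        if 2.0 * math.sqrt(1.0 - d * d) > side:
            hits += 1
    p_radius = hits / trials      # converges to 1/2

    return p_endpoints, p_radius
```

Both methods are 'uniform' in some variable, yet the first converges to 1/3 and the second to 1/2, which is exactly Bertrand's point: 'unbiased' is not well defined until you say unbiased *in which variable*.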

The illusion that 'random' means something precise comes from: 1) choosing a discrete set of outcomes, and 2) assuming there is a probability distribution that's 'written in stone', like, e.g., Laplace's rule of equal probabilities, or some symmetry principle implying it (example: the fair coin), so that assigning probabilities is reduced to a counting problem.

Otherwise, we need some kind of law or fundamental principle that tells us what the distribution is, like we have in statistical physics, for example.

On 4/25/2026 at 3:40 PM, joigus said:

Otherwise, we need some kind of law or fundamental principle that tells us what the distribution is, like we have in statistical physics, for example.

Isn't Murphy's law sufficient?

  • Author
2 minutes ago, dimreepr said:

Isn't Murphy's law sufficient?

Unfortunately, no. Murphy's law is not an actual law of probability, but a humorous observation on the nature of our expectations.

1 minute ago, joigus said:

Unfortunately, no. Murphy's law is not an actual law of probability, but a humorous observation on the nature of our expectations.

But an actual representation of randomness is rejected because of our expectations.

  • Author
7 minutes ago, dimreepr said:

But an actual representation of randomness is rejected because of our expectations.

Maybe so, but not by a humorous observation on the nature of our expectations. 'Everything that can go wrong will go wrong' is no probabilistic law. Starting with: it's manifestly false.

The nature of our expectations is quite irrelevant to the laws of probability anyway...

4 minutes ago, joigus said:

Maybe so, but not by a humorous observation on the nature of our expectations. 'Everything that can go wrong will go wrong' is no probabilistic law. Starting with: it's manifestly false.

I'm here to learn: how does this differ from the infinite monkeys hypothesis?

  • Author
5 minutes ago, dimreepr said:

I'm here to learn: how does this differ from the infinite monkeys hypothesis?

The infinite-monkeys scenario is a metaphor to illustrate the arguably paradoxical nature of probability. In that case, you take an extremely unlikely event and flood your laboratory with attempts to obtain a successful outcome. What paradox does it try to illustrate? That, in the limit, even an event with probability 0 is possible if there is a continuum of accessible outcomes. The metaphor makes the point clear even though no number of monkeys, however big, will ever produce a continuum of outcomes (books written at random).
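The 'probability zero yet possible' idea can be suggested in code, with the caveat that a computer's floats are only a discrete stand-in for a continuum (the target value and trial count below are my own choices for illustration):

```python
import random

random.seed(42)

# Each call to random.random() returns one specific value out of roughly
# 2**53 representable floats. The a-priori chance of any *prespecified*
# value is essentially zero, yet every single draw does produce some value.
target = 0.5
draws = 1_000_000
hits = sum(1 for _ in range(draws) if random.random() == target)
print(hits)  # almost surely 0, even though 0.5 is a possible outcome
```

Every draw realises an outcome whose individual probability is (to within the float discretisation) zero; that is the sense in which 'probability zero' does not mean 'impossible'.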

But that has little to do with what I was trying to argue. Namely: That the word 'random' doesn't necessarily mean something precise in a number of cases.

1 hour ago, joigus said:

That the word 'random' doesn't necessarily mean something precise in a number of cases.

In most cases 'random' simply means lack of information about the process that leads to the 'random' outcome.

The example I often use is the set of numbers 1,5,9,2,6,5,3,5,8,9,7,9,3,2,3 which seem perfectly random numbers between 0 and 9, and when presented with those numbers, one assumes they cannot be generated from each other, or a mathematical process.
Yet if you calculate Pi, and subtract 3.14, you are left with those 'random' numbers as the remaining 15 digits, so not random at all when new information is obtained.

That is not to say that there are no truly random events, to which, of course, you can assign a numerical value.
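MigL's digits check out. A quick verification (the digit string below is hardcoded from the well-known decimal expansion of pi rather than computed at runtime):

```python
# First digits of pi, a known constant (18 significant digits).
PI_DIGITS = "3.14159265358979323"

# Drop the leading '3.14' and keep the next 15 decimal digits.
digits = [int(c) for c in PI_DIGITS.replace(".", "")[3:18]]
print(digits)  # [1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3]
```

The list matches the 'random-looking' sequence above exactly, which is the point: the sequence is fully deterministic once you know where it comes from.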

1 hour ago, MigL said:

In most cases 'random' simply means lack of information about the process that leads to the 'random' outcome.

The example I often use is the set of numbers 1,5,9,2,6,5,3,5,8,9,7,9,3,2,3 which seem perfectly random numbers between 0 and 9, and when presented with those numbers, one assumes they cannot be generated from each other, or a mathematical process.
Yet if you calculate Pi, and subtract 3.14, you are left with those 'random' numbers as the remaining 15 digits, so not random at all when new information is obtained.

That is not to say that there are no truly random events, to which, of course, you can assign a numerical value.

If a monkey types Hamlet verbatim without knowledge of English, it is a random event, isn't it? Just because we can get it to fit ex post facto something we know exists, doesn't make it a thoughtful effort; deterministic.

Edited by StringJunky

  • Author
45 minutes ago, MigL said:

In most cases 'random' simply means lack of information about the process that leads to the 'random' outcome.

The example I often use is the set of numbers 1,5,9,2,6,5,3,5,8,9,7,9,3,2,3 which seem perfectly random numbers between 0 and 9, and when presented with those numbers, one assumes they cannot be generated from each other, or a mathematical process.
Yet if you calculate Pi, and subtract 3.14, you are left with those 'random' numbers as the remaining 15 digits, so not random at all when new information is obtained.

That is not to say that there are no truly random events, to which, of course, you can assign a numerical value.

Yes, but simply declaring 'lack of information' doesn't determine the probability distribution, does it? Did you happen to take a look at Bertrand's circle paradox?

In the case you provide, a relatively simple change of variables to a certain s=f(pi) renders the probability distribution deterministic in s. If you're not given this information, every digit has equal probability of occurring (as far as we know, because nobody can decode pi in terms of its digits), so it's a good generator for the prescription 'equal probability for every digit from 0 to 9'. I draw your attention to the fact that 'equal probability for every digit from 0 to 9' is just one way of defining 'random' in this context.

34 minutes ago, StringJunky said:

If a monkey types Hamlet verbatim without knowledge of English, it is a random event, isn't it?

Yes, but (I insist) 'random' by itself doesn't mean much. Here's the distribution of probabilities for the speed of a Xenon molecule at temperatures, T = 298 K and T = 3000 K. Both are random, and yet, at T = 3000 K the Xenon-molecule speed is much less random (much more predictable) than at T = 298 K:

[Figure: Maxwell-Boltzmann speed distributions for Xe at T = 298 K and T = 3000 K]
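For reference, the two curves come from the Maxwell-Boltzmann speed distribution. A minimal sketch (I'm treating the 'Xenon molecule' as a single Xe atom of molar mass 131.293 g/mol):

```python
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
N_A = 6.02214076e23           # Avogadro constant, 1/mol
M_XE = 131.293e-3 / N_A       # mass of one Xe atom, kg

def mb_speed_pdf(v, T):
    """Maxwell-Boltzmann probability density for speed v at temperature T."""
    a = M_XE / (2.0 * K_B * T)
    return 4.0 * math.pi * (a / math.pi) ** 1.5 * v ** 2 * math.exp(-a * v ** 2)

def most_probable_speed(T):
    """Speed at which the distribution peaks: sqrt(2 k_B T / m)."""
    return math.sqrt(2.0 * K_B * T / M_XE)

for T in (298, 3000):
    vp = most_probable_speed(T)
    print(f"T = {T} K: peak at ~{vp:.0f} m/s, "
          f"peak height {mb_speed_pdf(vp, T):.2e} s/m")
```

The spread of the speed distribution grows as the square root of T, so the 3000 K curve is roughly 3.2 times broader and lower than the 298 K one.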

50 minutes ago, joigus said:

speed is much less random

I should have said 'much more random'. Sorry.

Edited by joigus
minor correction

  • Author
1 hour ago, joigus said:

at T = 3000 K the Xenon-molecule speed is much less random (much more predictable) than at T = 298 K:

Therefore I should have said 'at T = 3000 K the Xenon molecule is much more random (much less predictable) than at T = 298 K'.

I hope I didn't make my argument completely unintelligible.

Sorry. I sometimes think I may well have been misdiagnosed as 'cognitively normal' when I may well belong in the 'cognitively-exceptional' spectrum.

For some uncanny reason, I tend to express things the opposite way I mean to.

Edited by joigus
correction

1 hour ago, joigus said:

Yes, but (I insist) 'random' by itself doesn't mean much.

I have already agreed that the term random can be problematic even to the point of producing paradoxes.

One consideration is this.

What do you mean by the frequentist definition:- the probability of an event E, p(E) = F/ N , where N is the total number of trials and F is the number where the outcome is E.

Has any outcome ever occurred for the scenario you originally described or is F = 0 ?

  • Author
3 minutes ago, studiot said:

What do you mean by the frequentist definition:- the probability of an event E, p(E) = F/ N , where N is the total number of trials and F is the number where the outcome is E.

Has any outcome ever occurred for the scenario you originally described or is F = 0 ?

In response to your first question: Yes, that's exactly what I mean by a frequentist definition.

In response to your second question: I assume by 'my scenario' you mean the molecule-speed vs probability scenario...? In that case, F as absolute frequency (the number of times a certain value is produced) would be, say, 1 or 2, while the number of trials would be (ideally) infinitely many. The relative frequency would therefore be f = F/N, which goes to zero as N goes to infinity. So zero probability does not imply zero occurrences when infinitely many tries are involved.

I hope we're talking about the same thing. If not, it's probably my fault, and I apologise.
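The frequentist recipe p(E) = F/N can be watched converging directly. A small sketch (the two events and the trial count are my own illustration):

```python
import random

random.seed(0)

def relative_frequency(event, trials):
    """Frequentist estimate: p(E) is approximated by F/N for large N."""
    return sum(1 for _ in range(trials) if event()) / trials

# A common event: a fair die showing 6 (true p = 1/6).
p_six = relative_frequency(lambda: random.randint(1, 6) == 6, 100_000)

# A rare event: ten fair coins all landing heads (true p = 2**-10 ~ 0.001).
p_ten_heads = relative_frequency(
    lambda: all(random.random() < 0.5 for _ in range(10)), 100_000)

print(p_six, p_ten_heads)
```

For a fixed (finite) number of successes F, F/N shrinks toward zero as N grows without bound, which is the sense in which an event can have zero probability and still occur.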

14 minutes ago, studiot said:

I have already agreed that the term random can be problematic even to the point of producing paradoxes.

Please, point out to me, if you can, where you made this qualification, as it escaped me. It's a very interesting point to me, as I think many misunderstandings when talking odds come from this, as 'random' could mean Laplace (finite sample space), binomial, Poisson, Gaussian, or who knows what...

2 hours ago, joigus said:

Please, point out to me, if you can, where you made this qualification, as it escaped me. It's a very interesting point to me, as I think many misunderstandings when talking odds come from this, as 'random' could mean Laplace (finite sample space), binomial, Poisson, Gaussian, or who knows what...

On 4/23/2026 at 11:54 PM, joigus said:

I think the essence of both these comments is pretty similar. Yes, knowledge, or even ballparking, intention, etc, of the person guessing essentially changes the probability distribution.

I have developed the habit to actually ask, 'what do you mean "random"? According to what probability distribution?' Most people get confused, but I think I know what I'm asking.

The moment you know something, or venture to guess something, or think you know something, the probability distribution of your answers already changes.

On 4/23/2026 at 11:59 PM, studiot said:

Yes agreed.

2 hours ago, joigus said:

In response to your second question: I assume by 'my scenario' you mean the molecule-speed vs probability scenario...? In that case, F as absolute frequency (the number of times a certain value is produced) would be, say, 1 or 2, while the number of trials would be (ideally) infinitely many. The relative frequency would therefore be f = F/N, which goes to zero as N goes to infinity. So zero probability does not imply zero occurrences when infinitely many tries are involved.

I hope we're talking about the same thing. If not, it's probably my fault, and I apologise.

No I was referring to your original post#1, and asking if (any) of the events had ever occurred, hence the reference to F = 0 as I don't see how any of them can have occurred, given their imprecise specification.

No apology needed, I need to try and make myself clearer. Try this.

If an event has never occurred how does a frequentist define its probability and what does he mean ?

6 hours ago, studiot said:

If an event has never occurred how does a frequentist define its probability and what does he mean ?

Surely the frequentist ( never heard the term, but I suppose I am one ) would define the probability of an event E, p(E) = F/ N , where N is the total number of possible outcomes, and F is the one desired outcome, E.
But I suppose this doesn't give a frequency distribution ...

23 hours ago, joigus said:

The infinite-monkeys scenario is a metaphor to illustrate the arguably paradoxical nature of probability. In that case, you take an extremely unlikely event and flood your laboratory with attempts to obtain a successful outcome. What paradox does it try to illustrate? That, in the limit, even an event with probability 0 is possible if there is a continuum of accessible outcomes. The metaphor makes the point clear even though no number of monkeys, however big, will ever produce a continuum of outcomes (books written at random).

But that has little to do with what I was trying to argue. Namely: That the word 'random' doesn't necessarily mean something precise in a number of cases.

Aren't we just circling back to the nature of a paradox, namely its existence?

  • Author
9 hours ago, MigL said:

Surely the frequentist ( never heard the term, but I suppose I am one ) would define the probability of an event E, p(E) = F/ N , where N is the total number of possible outcomes, and F is the one desired outcome, E.

Yes.

https://en.wikipedia.org/wiki/Frequentist_probability

32 minutes ago, dimreepr said:

Aren't we just circling back to the nature of a paradox, namely its existence?

No. Please, explain.

14 hours ago, MigL said:

Surely the frequentist ( never heard the term, but I suppose I am one ) would define the probability of an event E, p(E) = F/ N , where N is the total number of possible outcomes, and F is the one desired outcome, E.
But I suppose this doesn't give a frequency distribution ...

Sorry to inform you that this is a wholly inadequate definition.

It is correct for a fair die, but not if the die is loaded.

It is not even correct for the sum of the dots on two fair dice rolled together.

Further, consider a horse race of six novice horses ('novice' indicates that the horse has never won).

So how do you assign probabilities before the race ?

Worse, the total number of outcomes must now be greater than six, to account for events such as not finishing, disqualification, etc.

Paradoxically, only Bayesian methods offer the one in six equal probabilities as a starting point.

Edited by studiot

Still not convinced ...

There is no history of the 6 horses, so you have no reason to assign a probability other than 1/6 to each horse, since the expected ( guaranteed ) outcome is that one of the 6 must win.
To do otherwise, you'd be making up information.

If I am misunderstanding you, please dumb it down, as I can be rather 'thick' sometimes.

Edited by MigL

59 minutes ago, MigL said:

Still not convinced ...

There is no history of the 6 horses, so you have no reason to assign a probability other than 1/6 to each horse, since the expected ( guaranteed ) outcome is that one of the 6 must win.
To do otherwise, you'd be making up information.

If I am misunderstanding you, please dumb it down, as I can be rather 'thick' sometimes.

Is it not possible for all six horses to be disqualified for some reason?

Then there would be no winner.

Or what if there was a dead heat ?

This is of course different from the die which must come down on one of its six faces.

But what about my two dice version ?

Are the probabilities of getting a spot sum of 10, 11, 12 equal or equal to 1 in 6 ?

There are six possibilities here namely 7, 8, 9, 10, 11, 12.

I think you will find that is why @joigus introduced the idea of a frequency distribution.

Edited by studiot

On 5/4/2026 at 2:05 PM, joigus said:

No. Please, explain.

I'm thinking the infinite monkey hypothesis is wrong because infinity has limits; there can't be an infinite number of monkeys, as we know them.

  • Author
On 5/4/2026 at 11:10 PM, studiot said:

Is it not possible for all six horses to be disqualified for some reason?

Then there would be no winner.

Or what if there was a dead heat ?

This is of course different from the die which must come down on one of its six faces.

But what about my two dice version ?

Are the probabilities of getting a spot sum of 10, 11, 12 equal or equal to 1 in 6 ?

There are six possibilities here namely 7, 8, 9, 10, 11, 12.

I think you will find that is why @joigus introduced the idea of a frequency distribution.

On 5/4/2026 at 7:44 PM, studiot said:

Paradoxically, only Bayesian methods offer the one in six equal probabilities as a starting point.

Ok. I must say I'm rather less sophisticated when it comes to probability than you probably picture me to be, and than you yourself are.

The cases you mention of disqualified horses, dead heats, etc., would IMO be completely smoothed out to zero by the 'frequentist approach': they almost never happen.

The way I always understood probabilities is: you first make a hypothesis based on symmetries, known features, engineering specs, direct exploration, and so on.

Then you do thousands upon thousands of experiments and apply the 'frequency test'. That way, you see whether your statistical hypothesis was correct. In some cases, like physics, you have physical principles that allow you to not guess in the total dark.

In your horse-race example, your hypothesis would be based on a priori conditions on the horses: Their physique, breed, biometrics, and so on.

Then you would have them race with different riders, atmospheric conditions, etc.

Something like that.

I think it's fair to say Bayesian methods give you equal probabilities at first, but that's precisely because the first-order approach is to assume no bias, and then correct your hypothesis as you learn more about the different odds (the heart and soul of Bayesian methods or, as I like to say, probe more and more deeply into the sample space). So the first assignment of probabilities doesn't give you any better insight than the other ones.
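That 'start unbiased, then correct as data come in' procedure can be sketched minimally with Dirichlet-multinomial updating (the race record below is hypothetical, invented purely for illustration):

```python
def posterior_means(prior_counts, win_counts):
    """Posterior mean win probabilities under a Dirichlet prior.

    Each horse's posterior mean is (prior pseudo-count + observed wins)
    divided by the grand total of pseudo-counts and wins.
    """
    total = sum(prior_counts) + sum(win_counts)
    return [(a + w) / total for a, w in zip(prior_counts, win_counts)]

prior = [1] * 6                       # uniform prior over six horses

# Before any race: equal 1/6 odds, exactly the unbiased starting point.
start = posterior_means(prior, [0] * 6)

# After 12 hypothetical races in which horse 0 won half of them,
# the posterior shifts away from uniformity.
updated = posterior_means(prior, [6, 2, 1, 1, 1, 1])
print(start[0], updated)
```

The first assignment is 1/6 per horse not because it is insightful, but because it is the least committal choice; the insight accumulates only as the win counts do.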

On 5/7/2026 at 1:46 PM, dimreepr said:

I'm thinking the infinite monkey hypothesis is wrong because infinity has limits; there can't be an infinite number of monkeys, as we know them.

I think you're right in your conclusion. But I don't think it's because infinity has limits. I think it's because infinity is not a number, it's more of a topological nature (the boundary of all numbers), so you cannot reach it numerically, which is quite the opposite of what you said in words, even if your intuition might have been right.

Edited by joigus
minor correction

The question I was attempting to clarify was

On 5/3/2026 at 4:27 PM, studiot said:

If an event has never occurred how does a frequentist define its probability and what does he mean ?

If you have no prior information guiding you to probable outcomes, then you cannot assign probability to any event.
The best you can do is note that some event has to occur, and divide by the possible number of ways that event can occur.
In effect, the probability of any one of the horses winning the race is 1/6 .

What a priori information would you use to assign probabilities to the first event, never mind frequency distributions, which can only be gleaned after many such events ?

6 hours ago, joigus said:

I think it's fair to say Bayesian methods give you equal probabilities at first, but that's precisely because the first-order approach is to assume no bias, and then correct your hypothesis as you learn more about the different odds (the heart and soul of Bayesian methods or, as I like to say, probe more and more deeply into the sample space). So the first assignment of probabilities doesn't give you any better insight than the other ones.

Agreed.

On 5/4/2026 at 6:44 PM, studiot said:

Paradoxically, only Bayesian methods offer the one in six equal probabilities as a starting point.

1 hour ago, MigL said:

The best you can do is note that some event has to occur, and divide by the possible number of ways that event can occur.
In effect, the probability of any one of the horses winning the race is 1/6 .

You need to be careful in your specification of 'an event'

On 5/4/2026 at 10:10 PM, studiot said:

Is it not possible for all six horses to be disqualified for some reason?

Then there would be no winner.

Or what if there was a dead heat ?

On 5/4/2026 at 6:44 PM, studiot said:

Worse the total outcomes must now be greater than six to account for events such as not finishing, disqualification etc.

I think my two dice example shows this better than the horses.

Ways of getting each sum:

7 | 6 + 1 ; 5 + 2 ; 4 + 3

8 | 6 + 2 ; 5 + 3 ; 4 + 4

9 | 6 + 3 ; 5 + 4

10 | 6 + 4 ; 5 + 5

11 | 6 + 5

12 | 6 + 6


Sorry for the presentation

Why does neither Tab nor repeated spaces work in this blighted editor ?

So the probabilities are definitely not equal for these outcomes.

Edited by studiot
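studiot's table can be verified by brute force over the 36 equally likely ordered outcomes of two fair dice:

```python
from collections import Counter
from fractions import Fraction

# Count how many of the 36 equally likely ordered outcomes give each sum.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in range(2, 13):
    print(f"sum {total:2d}: {ways[total]} ways, "
          f"p = {Fraction(ways[total], 36)}")
```

P(7) = 6/36 = 1/6 while P(12) = 1/36, so the six sums 7 through 12 are certainly not equiprobable: naively dividing by the number of listed outcomes gives the wrong answer.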

21 hours ago, joigus said:

I think you're right in your conclusion. But I don't think it's because infinity has limits. I think it's because infinity is not a number, it's more of a topological nature (the boundary of all numbers), so you cannot reach it numerically, which is quite the opposite of what you said in words, even if your intuition might have been right.

I was thinking in terms of the universe and the space it takes up, I think; as in, an infinite boundary does not equal infinite monkeys.

My apologies for not understanding the math to explain myself more correctly or fully understand your point, but thanks for your patience.
