mad_scientist

what is the likelihood that this universe is a simulation?
what is the likelihood that everything we see, taste, touch, feel and smell is merely the product of a highly sophisticated simulation?

 

 

if everything is a simulation, what would happen if we die?

 

 

if we develop technology to live forever, does that mean we would be stuck in this simulation forever, with no way out?


AAArgh! I wanted to post this Topic! Nevertheless, IMO, there is no way to tell if the Universe is a simulation or not. Is the double slit experiment telling us something? I don't know but I suspect that coincidence may be an illusion as well.


what is the likelihood that everything we see, taste, touch, feel and smell is merely the product of a highly sophisticated simulation?

 

 

Somewhere between 0 and 100%.

AAArgh! I wanted to post this Topic!

 

Oh well. Never mind. It comes up very regularly. Wait a week or two and start a new thread on it. :)


what is the likelihood that everything we see, taste, touch, feel and smell is merely the product of a highly sophisticated simulation?

 

 

if everything is a simulation, what would happen if we die?

 

 

if we develop technology to live forever, does this mean that we are stuck and those will be stuck in this simulation and can't get out?

 

 

If it's sophisticated enough we'll never find out, so what's the difference?

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

 

Are You Living In a Computer Simulation? Nick Bostrom. Philosophical Quarterly, 2003, Vol. 53, No. 211, pp. 243-255
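The trilemma in that abstract can be put in simple arithmetic. As a rough sketch (the inputs `f_p` and `n_sims` are hypothetical assumptions, not known quantities), the observer-counting behind the argument looks like this:

```python
# Rough sketch of the observer-counting behind the simulation argument.
# Hypothetical inputs: f_p is the fraction of civilizations that reach a
# posthuman stage AND choose to run ancestor-simulations; n_sims is the
# average number of such simulations each one runs, with each simulation
# hosting about as many observers as one real history.

def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of all human-like observers who live in a simulation."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even pessimistic-looking inputs swamp the one "real" history:
print(simulated_fraction(0.001, 1_000_000))  # ~0.999
```

Under assumed numbers like these, almost all observers are simulated, which is why the paper forces a choice among the three propositions rather than asserting proposition (3) outright: to avoid the conclusion, either `f_p` or `n_sims` has to be driven toward zero.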


I think that we are, but the simulation isn't something we are playing in. We are it. For instance, the double slit experiment gave us one conclusion that the mind is the key (sorry for my lack of intelligence). Our minds are what is controlling everything, and when we die, we can choose whether to stay dead, come back to life, or do something else.

Humans, in my opinion, are just a physical form of our greater selves, and I think the physical form is only the smallest part of a greater power, like being in a form you can barely use your powers in. BUT, there is a possibility that we just haven't unlocked our true potential yet. We could just now be able to explore this subject because we are "awakening," and our official "awakening" is when something very bad or very good will happen. Who knows; we barely have any truth to this, nor do we have any evidence of it. But from where we are now, studying non-physical subjects, we will eventually get there.


For instance, the double slit experiment gave us one conclusion that the mind is the key

 

 

No it doesn't.


our current lives could be a simulation to increase gratefulness and be a big learning opportunity.

how would you react if this was the case and you woke up suddenly from this simulation into a new more sophisticated world that was not made up of protons/neutrons/atoms/quarks etc.?

atoms are merely the building blocks of the simulation and not the real world. how seriously do scientists consider this possibility?


 

 

Somewhere between 0 and 100%.

 

I would argue 0 or 100%, but nothing in between.


I personally hope we don't live in a simulated world, because if we find out we do, the consequences would be horrible. For one, it would mean we are victims of an amoral system that doesn't care about personal suffering. Also it would rule out any chance of us ever finding the true nature of reality. But I can't dismiss the possibility either. I know this may all seem like a bunch of hippy-dippy stoner stuff, but I genuinely think it is worth discussing.


What makes you think our universe is not an amoral system that doesn't care about personal suffering (or about anything else)?


Or, we could be in a moral simulation that does care about suffering, etc.

 

 

 

Also it would rule out any chance of us ever finding the true nature of reality.

 

I don't think there is any possibility of that, either way.


oh goodie, another topic that wasn't allowed on another science forum i used to visit. i'm liking this place more every day.

 

so, simulation.

10 years ago i could find heaps of stuff on the net suggesting why we are not in a simulation. nowadays i can only find things online suggesting we are in one.

 

because it was 10 years ago when i saw the 'not' suggestions, i can't remember what any of them were.

but there must still be arguments/ideas as to why we are not in a simulation?

anyone?


As it is, like solipsism, inherently unfalsifiable, I would say that Occam's razor is the most obvious reason to ignore it. Apart from the fact that (like solipsism) it makes no practical difference and so can just be ignored.


Would a simulation with suffering be immoral or even irrational? There is a thin philosophical divide between rational egoism, the idea that selfishness is rational, and (hedonistic) utilitarianism, the idea of promoting happiness for all rather than for oneself. This is a good discussion of it:

 

Stanford Encyclopedia of Philosophy: Egoism #Rational Egoism

https://plato.stanford.edu/entries/egoism/#3

 

If it is irrational to cause suffering, then we are left with another question. What are the odds that our creator would (A) fail to acknowledge that we suffer, (B) be irrational enough to allow our suffering, (C) have created such a simulation out of necessity for the greater good, or (D), unless our simulation has too much suffering to justify its own existence, simply not be skilled enough to design a better simulation?



 

Personally, I do not acknowledge any other good than happiness, and I doubt that I ever will. Knowledge and thought are good too, but thought leads me to this conclusion. What it means for something to be "better" or "worse" comes from subjective experience in much the same way that "red" does. If moral behavior is a rational guide to what is better or worse, then where else can we find meaning for those words?


Would a simulation with suffering be immoral or even irrational? [...] If moral behavior is a rational guide to what is better or worse, then where else can we find meaning for those words?

None of these questions are answerable, and even if they were, the answers would still give no indication about whether we live in a simulation or not.

 

Whoever hypothetically made the simulation might simply want to study our suffering, or with equally indeterminable likelihood, not care about it, because we are only simulated entities.


None of these questions are answerable, and even if they were, the answers would still give no indication about whether we live in a simulation or not.

 

Whoever hypothetically made the simulation might simply want to study our suffering, or with equally indeterminable likelihood, not care about it, because we are only simulated entities.

 

What is not answerable? Answerable or not, our creators might be conscious, might realize that we are conscious, and might have a problem with our suffering. That at least makes the scenario less likely. Furthermore, even if the philosophical questions are unanswerable, our creators might have the same questions, which would result in fewer simulations with suffering and raise the likelihood that ours would be a happy one.


They might also enjoy our suffering, like we enjoy playing simulation games where planets are torn apart and millions die. Some of us might be played by them while they compete against each other for the most power, money, territory or headcount. They might be running billions of simulations with varying degrees of suffering.

 

I could make exactly your argument by stating that in a simulation, I would expect nearly invincible cars running like crazy through the streets, trying to hit as many old people as they can for bonus points.

 

It changes nothing about any likelihood. There is no way to even start putting a likelihood on any of it.


What might bring them satisfaction is a different question from what they would find rational. The latter might be knowable. As for the former, it is equally likely that they might take satisfaction in our happiness.

Edited by MonDie


There is nothing inherently rational or irrational about our suffering. Look at nature, which is entirely indifferent about it.


^

 

That is where I think you are dead wrong. I do not want to chop up Stanford's Encyclopedia too much, but the SFN etiquette guide says that everyone should be able to participate without following links.

 

Stanford Encyclopedia of Philosophy - Egoism

https://plato.stanford.edu/entries/egoism/#3

 


 

"Psychological egoism claims that each person has but one ultimate aim: her own welfare. This allows for action that fails to maximize perceived self-interest, but rules out the sort of behavior psychological egoists like to target — such as altruistic behavior or motivation by thoughts of duty alone. It allows for weakness of will, since in weakness of will cases I am still aiming at my own welfare; I am weak in that I do not act as I aim. And it allows for aiming at things other than one's welfare, such as helping others, where these things are a means to one's welfare."

 

"[...] Say a soldier throws himself on a grenade to prevent others from being killed. It does not seem that the soldier is pursuing his perceived self-interest. It is plausible that, if asked, the soldier would have said that he threw himself on the grenade because he wanted to save the lives of others or because it was his duty. He would deny as ridiculous the claim that he acted in his self-interest.

 

The psychological egoist might reply that the soldier is lying or self-deceived. Perhaps he threw himself on the grenade because he could not bear to live with himself afterwards if he did not do so. He has a better life, in terms of welfare, by avoiding years of guilt. The main problem here is that while this is a possible account of some cases, there is no reason to think it covers all cases. Another problem is that guilt may presuppose that the soldier has a non-self-regarding desire for doing what he takes to be right."

"[...] Empathy might cause an unpleasant experience that subjects believe they can stop by helping; or subjects might think failing to help in cases of high empathy is more likely to lead to punishment by others, or that helping here is more likely to be rewarded by others; or subjects might think this about self-administered punishment or reward. In an ingenious series of experiments, Batson compared the egoistic hypotheses, one by one, against the altruistic hypothesis. He found that the altruistic hypothesis always made superior predictions. [...]"


"Rational egoism claims that it is necessary and sufficient for an action to be rational that it maximize one's self-interest. [...]

"In a much-quoted passage, Sidgwick claimed that rational egoism is not arbitrary: 'It would be contrary to Common Sense to deny that the distinction between any one individual and any other is real and fundamental, and that consequently "I" am concerned with the quality of my existence as an individual in a sense, fundamentally important, in which I am not concerned with the quality of the existence of other individuals: and this being so, I do not see how it can be proved that this distinction is not to be taken as fundamental in determining the ultimate end of rational action for an individual' (Sidgwick 1907, 498). This can be interpreted in various ways (Shaver 1999, 82–98)."

"Finally, Sidgwick might be claiming that my point of view, like an impartial point of view, is non-arbitrary. But there are other points of view, such as that of my species, family or country. [...] And if my being an individual is important, this cuts against the importance of taking up an impartial point of view just as it cuts against the importance of taking up the point of view of various groups. Similarly, if the impartial point of view is defended as non-arbitrary because it makes no distinctions, both the point of view of various groups and my individual point of view are suspect."

"[...] Suppose also that, looking back from the end of my life, I will have maximized my welfare by contributing now to the pension. Rational egoism requires that I contribute now. The present-aim theory does not. It claims that my reasons are relative not only to who has a desire — me rather than someone else — but also to when the desire is held — now rather than in the past or future. The obvious justification an egoist could offer for not caring about time — that one should care only about the amount of good produced — is suicidal, since that should lead one not to care about who receives the good. [...]"

"Second, rational egoism might be challenged by some views of personal identity. Say half of my brain will be transplanted to another body A. My old body will be destroyed. A will have my memories, traits, and goals. It seems reasonable for me to care specially about A, and indeed to say that A is identical to me. Now say half of my brain will go in B and half in C. Again B and C will have my memories, traits, and goals. It seems reasonable for me to care specially about B and C. But B and C cannot be identical to me, since they are not identical to one another (they go on to live different lives). So the ground of my care is not identity, but rather the psychological connections through memories, etc. Even in the case of A, what grounds my care are these connections, not identity: my relation to A is the same as my relation to B (or C), so what grounds my care about A grounds my care about B (or C) — and that cannot be identity. (To make the point in a different way — I would not take steps to ensure that only one of B and C come about.) If so, I need not care specially about some of my future selves, since they will not have these connections to me. And I do have reason to care specially about other people who bear these connections to me now."


I fail to see your point. You seem to be applying human philosophy/psychology to whatever unknowable creature is running the simulation. Even then, I don't see why that creature would care about the suffering of AI puppets running around in a simulation they run. I definitely don't care about all the AI puppets running around in violent games I play.

 

If you want an argument against the simulation, all you need is Occam's razor.

