
Bear with me here; my concept will become clear soon. We first need to set up a few things, which makes it easier to get the full idea.

As an IT engineer, I make daily use of virtualisation technology. I bet many people already know this technology, but for those who don't: what we actually do is simulate computer hardware to run virtual computers, which we call virtual machines (VMs).

Imagine a 3D shooting game: you run around in a city, you see cars, buildings and the road. You run into one of the houses and see a laptop on a table. You start it, and it does exactly what a normal computer does: it starts Windows (or another operating system), you can read your mail, browse the internet, etc.

Now... drop everything in this example except that Windows (or other operating system) instance. That is what virtualisation does. With it, we can simulate multiple computers side by side on only one physical machine (a computer or server, which we call a Host).

Why do we do this? Well, apart from running more VMs on fewer Hosts, another reason is that we have isolated the running operating system from the real hardware, so we can do some tricks with it. One of those tricks is to move a running VM over the network to another Host without skipping a beat: the VM just keeps running. This is great when resources are limited on one Host; we simply move the virtual machine to another Host with free resources. Also, when a Host needs repairs, we can move all running VMs from that Host to another and do our work without any downtime.
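
To make that concrete, here is a toy sketch in Python. It is purely illustrative (real hypervisors copy memory pages iteratively while the VM keeps running); the point is only that the VM's state is just data, so handing it to another Host changes nothing from the guest's point of view.

```python
# Toy model of live migration: the VM is a bundle of state that can be
# handed from one Host to another while the guest's own view of its
# execution simply continues. Class and field names are invented for
# illustration.

class VM:
    def __init__(self, name):
        self.name = name
        self.instructions_executed = 0   # the guest's own progress

    def run(self, steps):
        self.instructions_executed += steps

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []

def migrate(vm, src, dst):
    src.vms.remove(vm)
    dst.vms.append(vm)   # the VM object (its state) is untouched

host_a, host_b = Host("A"), Host("B")
vm = VM("guest1")
host_a.vms.append(vm)

vm.run(1000)
migrate(vm, host_a, host_b)
vm.run(1000)

# From the guest's perspective nothing happened: it just kept counting.
print(vm.instructions_executed)  # 2000
print(vm in host_b.vms)          # True
```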

However, virtualisation does create some issues. One of them is keeping time across all virtual machines. A real computer has a hardware clock and a battery to keep time when the computer is shut down. Virtual machines do not have that: there is no way to simulate a battery, and yes, we can simulate a clock, but that can be flaky when the physical Host has a very dynamic load. In those cases the clocks on the VMs will drift a lot.

For this we run services on the virtual machines that sync the time from the physical Host they run on, or even from somewhere else, like the internet or other time services. If we do not do this, all the virtual machines will drift in time, and the drift can get huge. The VM itself doesn't care, but it can have huge consequences when it tries to communicate with the outside world.
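
A minimal model of why we need this, with made-up drift numbers (this is not how a real NTP daemon works; a real one slews the clock gradually instead of snapping it):

```python
# Toy model of guest clock drift and periodic time sync. The guest
# clock runs at a slightly wrong rate; every sync interval it is
# snapped back to reference time, the way a host-sync/NTP service
# keeps a VM's clock honest. All numbers are illustrative.

def simulate(seconds, drift_rate, sync_interval=None):
    guest = 0.0
    for t in range(1, seconds + 1):
        guest += 1.0 + drift_rate          # each real second the guest gains extra
        if sync_interval and t % sync_interval == 0:
            guest = float(t)               # correction: snap back to reference time
    return guest - seconds                 # final error vs. real time

# Without sync, a 0.1% drift accumulates; with sync it stays bounded.
print(round(simulate(3600, 0.001), 2))                      # 3.6 seconds off after an hour
print(round(simulate(3600, 0.001, sync_interval=1000), 2))  # 0.6
```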

Another problem with keeping time is that we use concepts like pausing and snapshotting VMs. We pause a VM when we don't need it for a while.

Snapshotting means capturing the entire machine at some point in time and storing it as it was. We do this when we want to make complex changes to the VM that could break functionality. If the change does break functionality, we can go back to that snapshot: back in time, to the moment when we created it. The VM does this without any knowledge of what happened; it will simply be back in time. We then need to correct the time of the VM. Pausing a VM of course creates the same issue.

It is important to understand that, from the reference point of the VM, it just continues operation where it left off at the moment it was paused or snapshotted. The VM is completely oblivious to what happened.
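
The snapshot-and-rollback semantics fit in a few lines of Python (`ToyVM` and its fields are invented for illustration; a real snapshot also captures disk and device state):

```python
import copy

# Toy model of snapshotting: capture the complete machine state, let it
# run on, then roll back. After the rollback the "VM" is at exactly the
# pre-snapshot state, including its own clock, with no record that
# anything happened.

class ToyVM:
    def __init__(self):
        self.clock = 0          # the VM's own notion of time
        self.output = []

    def tick(self, msg):
        self.clock += 1
        self.output.append(msg)

vm = ToyVM()
vm.tick("sentence A")
snapshot = copy.deepcopy(vm)    # capture everything, "somewhere in time"

vm.tick("sentence B")
vm.tick("sentence C")

vm = snapshot                   # the change broke something: roll back
print(vm.clock)                 # 1  -- back at the moment of the snapshot
print(vm.output)                # ['sentence A']
vm.tick("sentence B")           # ...and it simply continues from there
```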

#################### Thought experiment ########################

It is fun to say the virtual machine is oblivious to what happened, so let us push this concept.

Let us run two complex AI programs on this virtual computer, and let them talk to each other.

  • The moment we pause the VM, the AI stops in the middle of a sentence. When we resume the VM after a year, the AI will just finish its sentence. Both AIs will simply continue to function; they will still operate with a time of a year ago.

  • The moment we snapshot the VM in the middle of sentence B, the AI will just continue talking: sentence C, D, E. The moment we go back to the snapshot, the AI will repeat the part of sentence B from just after we took the snapshot (and, depending on how deterministic the AI is, maybe also C, D and E). Again, both AIs will just continue to function. So we can stop it, and we can go back in time.

  • What would happen if we slow time? What if we make the Host slower (by slowing the virtual clock or limiting the computational resources)? The AI will continue to function; we know this because we have already seen clock drift on VMs, and it is not an issue for the operation of the VM itself. For us outsiders, the conversation between both AIs will run slower, but it will continue. The AIs won't have any idea that they are running slower; they would be oblivious to the difference in the speed of time. They don't have a reference point in time.

  • What if we slow it down to a crawl, so that a minute for the AI takes a year in our time? We, as outsiders, will get very bored of the very slow conversation, but the AIs will work just fine.

  • What if we replaced the AIs with AIs on a human level? Yes, I know, not possible yet, but we can imagine it will be possible some day. As long as these AIs do not have any contact with the outside world, they will have no clue that their time is running slow (or stops, or repeats, or dynamically goes fast and slow).

  • What if we simulate an entire universe on this VM? It will be very slow for us with our level of computational power. I can imagine that 1 second in this virtual universe takes hundreds of billions of years to calculate. Maybe more, who knows. But would the simulation itself care? Would anything in the simulation even know? How would they know? We already know about the concept of time being relative: all reference frames in the universe have their own speed of time. So it isn't too hard to imagine that there can also be a difference in the speed of time between the inside and the "outside" universe.

  • What if we make the computer simpler and slower? Let's say something ridiculously simple: a mechanism that can do logical operations by pushing sand with a needle into neat little bytes. We just need a lot of time. Boatloads. And I am not even sure how much sand. It is not hard to imagine that this could be a relatively simple "computer", and that in the ideal environment something like this could evolve by itself. I say evolve, because that takes away the more complex idea where we need a builder. Brains can evolve in our universe, so why can't a simple computer system evolve in another ideal environment? The simulation would not care, even if the mechanism pauses (or goes back in time).

With this idea, a universe can be simulated by a simple evolved "machine"; computational limitations are gone.
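
The whole thought experiment compresses into one property: the simulated state depends only on how many steps were executed, never on how much outside time those steps took. A small Python sketch, where the `step` function is an arbitrary deterministic stand-in for "the laws of physics":

```python
# The simulated world's state is a pure function of how many steps have
# been executed, not of the outside wall-clock time those steps took.
# Run it fast, slow, or paused and resumed: the inside history is
# identical.

def step(state):
    # stand-in for "the laws of physics": any deterministic update rule
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run(initial, schedule):
    # schedule: step-counts per outside "session"; arbitrary pauses may
    # sit between sessions, but they appear nowhere in the result
    state, internal_time = initial, 0
    for burst in schedule:
        for _ in range(burst):
            state = step(state)
            internal_time += 1
    return state, internal_time

fast = run(42, [1000])        # all at once
slow = run(42, [1] * 1000)    # one step at a time, however long that takes
print(fast == slow)           # True: the inside can't tell the difference
```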

#################### Some resources about Simulation Theory ########################

Reasons against Simulation Theory: https://en.wikipedia.org/wiki/Simulation_hypothesis#:~:text=Physicist%20Frank%20Wilczek%20raises%20an,and%20extraneous%20in%20a%20simulation.

  • Some argue that a simulated universe must, by the laws of computation, be less complex than the universe simulating it, leading to the logical problem of a vastly more complex "base reality" than our own.

  • Physicist Frank Wilczek points out that the laws of our universe, with their hidden complexities, seem unnecessary and extraneous if we are in a simulation, suggesting that the simulation hypothesis doesn't offer a simpler explanation.

  • The hypothesis suggests that our universe is a simulation, but it doesn't explain the origin of the simulation itself or the "hardware" running it.

  • This leads to an infinite regress, where each simulated universe requires another, higher-level simulation, and so on, without resolving the fundamental question of reality.

  • Simulating a universe with the complexity and scale of our own would require immense computational power, potentially far exceeding the capabilities of any conceivable computer.

  • This raises the question of whether our universe, with its quantum mechanics and chaotic systems, could even be accurately simulated on a digital computer.


____________________

Here's a more detailed look at Wilczek's views:

  • Complexity as an argument against simulation:

Wilczek argues that the universe exhibits a level of complexity ("wasted complexity" according to Scientific American) and intricacy that would be illogical for a simulation to possess. Building such complexity requires significant resources and time, which wouldn't be necessary for a simulation.

-->My remark--> In my concept, the simulation is not built; it came about by nature. My concept concludes we do not need a powerful computer, so it should be easier to imagine it could evolve in some way in that reality. It could even be less complex than how our brain evolved in this reality. Complexity by itself is not a good argument: our brains evolved by nature, and those are complex.

  • Laws of physics and their limitations:

Wilczek highlights that the laws of physics are constrained by time and location. These limitations, he argues, are unnecessary and extraneous in a simulation.

-->My remark--> Again, we already know time is relative. And in my concept, where a simulation can run asynchronously with the real reality, this argument does not stand.

  • In essence, Wilczek's perspective leans towards the universe being a product of fundamental, rather than simulated, processes, given the apparent wastefulness of resources in creating our complex reality.

-->My remark--> Nature is wasteful. Evolution was wasteful; for instance, the laryngeal nerve of the giraffe. (Warning! Bloody autopsy of a giraffe.) https://www.youtube.com/watch?v=cO1a1Ek-HD0

Actually, a simulation that evolved by itself should show strange complexities and wastefulness; there was no design.

_______________________________________________________________________________________________

J. Richard Gott, a professor of astrophysical sciences at Princeton University, raised a strong objection to the simulation hypothesis. The objection claims that the common trait that all hypothetical high-fidelity simulated universes possess is the ability to produce high-fidelity simulated universes. And since our current world does not possess this ability, it would mean that either humans are in the real universe, and therefore simulated universes have not yet been created, or that humans are the last in a very long chain of simulated universes, an observation that makes the simulation hypothesis seem less probable.

-->My remark--> I am not sure what is being said here, and also not sure whether this is a problem; I need to dig into this argument a bit more.

_______________________________________________________________________________________________

Someone tested the simulation theory physically:

A method to test one type of simulation hypothesis was proposed in 2012 in a joint paper by physicists Silas R. Beane from the University of Bonn (now at the University of Washington, Seattle), and Zohreh Davoudi and Martin J. Savage from the University of Washington, Seattle.[45] Under the assumption of finite computational resources, the simulation of the universe would be performed by dividing the space-time continuum into a discrete set of points, which may result in observable effects. In analogy with the mini-simulations that lattice-gauge theorists run today to build up nuclei from the underlying theory of strong interactions (known as quantum chromodynamics), several observational consequences of a grid-like space-time have been studied in their work. Among proposed signatures is an anisotropy in the distribution of ultra-high-energy cosmic rays that, if observed, would be consistent with the simulation hypothesis according to these physicists.[46] In 2017, Campbell et al. proposed several experiments aimed at testing the simulation hypothesis in their paper "On Testing the Simulation Theory".[47]

"Under the assumption of finite computational resources, the simulation of the universe would be performed by dividing the space-time continuum into a discrete set of points, which may result in observable effects."

-->My remark--> With my concept, the computational resources can be infinite.

Coincidentally, one of the YouTubers I watch did a nice explanation of virtualisation and simulation a few days ago. https://www.youtube.com/watch?v=QFdpvH5K5RI

But he also says there needs to be faster hardware for simulating the universe.

-->My remark--> No, we don't. It just needs enough time.


  • Darksand changed the title to Too much time on my hand.
Just now, Darksand said:

-->My remark--> No, we don't. It just needs enough time.

Have you been reading ?

Chapter 2

There is enough time.

From 'Fundamentals' by Frank Wilczek

  • Author
1 hour ago, studiot said:

Have you been reading ?

Chapter 2

There is enough time.

From 'Fundamentals' by Frank Wilczek

Nope, just his stance from a Google search.

I did now. Interesting read, but most of it I had already read somewhere else. Did I miss a specific part that talks about my post? The only thing I found was:

Computers are essentially ageless, and they can revisit previous states precisely, and they can pursue several programs in parallel. An artificial intelligence rooted in those platforms will be able to engineer its psychological time with great precision and flexibility. Notably, it could set up states that lead to pleasure, and relive them repeatedly, while experiencing each as fresh.


3 hours ago, Darksand said:

With this idea, a universe can be simulated by a simple evolved "machine"; computational limitations are gone.

What makes you think it would be useful?

What computational limitation?

Define "machine" please.

These questions need to be answered with concise brevity, before I can be bothered to read the rest.

  • Author
1 hour ago, dimreepr said:

What makes you think it would be useful?

What computational limitation?

Define "machine" please.

These questions need to be answered with concise brevity, before I can be bothered to read the rest.

You obviously also didn't bother to read everything above that sentence. Why should it be useful? And why do I need to define "machine" with concise brevity? Because otherwise you won't read any further? I cannot be bothered about that.


Moderator Note

We need to lose the snark immediately. Those who can't be bothered to observe a minimum of courtesy don't need to participate. Asking for definitions doesn't need to set an ugly tone.

22 hours ago, Darksand said:

You obviously also didn't bother to read everything above that sentence. Why should it be useful? And why do I need to define "machine" with concise brevity? Because otherwise you won't read any further? I cannot be bothered about that.

My apologies for the 'snark', but there's literally a wall of text before this conclusion "With this idea, a universe can be simulated by a simple evolved "machine", Computational limitations are gone." and another wall of text following.

I'm dyslexic, so I see patterns of expression much more easily than a bloviated sentence, consequently when replying, my natural brevity can seem more confrontational than I intend.

So let's at least meet half way, and you give me a reasonable synopsis of your idea, please...

  • Author

9 hours ago, dimreepr said:

So let's at least meet half way, and you give me a reasonable synopsis of your idea, please...

Sure, does this help?

  • Simulating an entire universe with everything in it, takes a lot of computational resources.

  • Time in the simulation does not have to run at the same pace as outside the simulation (where the "computer" is).

    • The simulated will be oblivious to the difference in the speed of time.

  • If you have enough time, you don't need a lot of computational resources (the computer does not have to be fast).

  • If you have an infinite amount of time, you have an infinite amount of computational resources (the computer can be as slow as you want; even 1 operation per minute would be fine).

  • Ergo, with enough time, it is possible that a universe can be simulated on a (very) simple computer.

  • A simple "machine/computer/brain" could evolve in an ideal environment, just like our brain did.

I see this OP has been moved to Speculations. Nowhere am I saying we are living in a simulated universe; I am just saying it is possible, and with this post, there is no really good argument against the possibility. But let us not forget: unfalsifiability does not mean a claim is false, just that it cannot be tested.

This concept in its ultimate form, our entire universe simulated on an ultimately simple computer, would follow Occam's razor ultimately.


13 hours ago, Darksand said:

Sure, does this help?

  • Simulating an entire universe with everything in it, takes a lot of computational resources.

  • Time in the simulation does not have to run at the same pace as outside the simulation (where the "computer" is).

    • The simulated will be oblivious to the difference in the speed of time.

  • If you have enough time, you don't need a lot of computational resources (the computer does not have to be fast).

  • If you have an infinite amount of time, you have an infinite amount of computational resources (the computer can be as slow as you want; even 1 operation per minute would be fine).

  • Ergo, with enough time, it is possible that a universe can be simulated on a (very) simple computer.

  • A simple "machine/computer/brain" could evolve in an ideal environment, just like our brain did.

I see this OP has been moved to Speculations. Nowhere am I saying we are living in a simulated universe; I am just saying it is possible, and with this post, there is no really good argument against the possibility. But let us not forget: unfalsifiability does not mean a claim is false, just that it cannot be tested.

This concept in its ultimate form, our entire universe simulated on an ultimately simple computer, would follow Occam's razor ultimately.

Nope, you haven't answered any of my questions, but you have convinced me not to bother reading more.

22 hours ago, studiot said:

So far as I can tell, Darksand's proposal is a variation of

the Simulation hypothesis

https://en.wikipedia.org/wiki/Simulation_hypothesis

and perhaps

the Holographic Universe

https://www.vox.com/2015/6/29/8847863/holographic-principle-universe-theory-physics

Thanks for trying +1, but I don't think his idea has the logical consistency to be considered an hypothesis.

  • 2 weeks later...
  • Author

So you just put this in Speculations and say no, but nobody can give a logical argument against it?

On 6/21/2025 at 11:43 PM, Darksand said:

If you have enough time, you don't need a lot of computational resources (the computer does not have to be fast).

It gets even worse - when you have a lot of time, you can simulate an analog signal with just 0 and 1 pulses of different widths.

https://en.wikipedia.org/wiki/Pulse-width_modulation
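
That point can be sketched in a few lines (a toy PWM model, not hardware-accurate): averaging a pure 0/1 pulse train over enough time recovers an analog value.

```python
# Toy PWM illustration: a stream of pure 0/1 pulses reproduces an
# "analog" level, because the average of the bitstream converges to
# the duty cycle. All parameters are made up for the example.

def pwm_wave(duty, period, cycles):
    """Generate 0/1 samples where a 'duty' fraction of each period is high."""
    high = round(duty * period)
    return ([1] * high + [0] * (period - high)) * cycles

wave = pwm_wave(duty=0.3, period=100, cycles=50)
print(sum(wave) / len(wave))   # 0.3 -- the analog value, from bits alone
```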

On 6/22/2025 at 9:43 AM, Darksand said:

...

  • If you have enough time, you don't need a lot of computational resources (the computer does not have to be fast).

  • If you have an infinite amount of time, you have an infinite amount of computational resources (the computer can be as slow as you want; even 1 operation per minute would be fine).

...

You'll need more than time; you'll need enough memory and storage.

https://xkcd.com/505/

  • Author
4 minutes ago, pzkpfw said:

You'll need more than time, you'll need enough memory and storage.

https://xkcd.com/505/

How sure are you about that?

  • No Man's Sky features a vast universe with an estimated 18 quintillion (18,000,000,000,000,000,000) possible planet "seeds". While the game doesn't use all of these, it still contains a staggering number of star systems, estimated to be in the trillions. Specifically, there are about 256 galaxies, and within each galaxy there are trillions of star systems.

  • No Man's Sky requires a minimum of 8 GB of RAM. While the game's installation size is relatively small, around 15 GB, the RAM requirement is important for smooth gameplay, especially when exploring planets and landscapes, according to Steam and Steam Community discussions.

The size of the simulated world does not always correspond to the size of the memory needed to simulate it.
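
The No Man's Sky point in miniature: the quintillions of possible planets need not be stored anywhere, because each planet is regenerated on demand from its seed. The generator below is hypothetical and purely illustrative (it is not the game's actual algorithm); it only shows the principle of deriving a world from a seed instead of storing it.

```python
import hashlib

# Procedural generation sketch: a planet is a deterministic function of
# its seed, so memory holds only the generator and the one planet you
# are currently looking at. Field names and formulas are invented.

def planet(seed: int) -> dict:
    h = hashlib.sha256(seed.to_bytes(8, "big")).digest()
    return {
        "radius_km": 2000 + h[0] * 40,    # derived on demand, never stored
        "has_water": h[1] % 2 == 0,
        "n_moons": h[2] % 5,
    }

# Any of the 2**64 seeds yields the same planet every time, from ~0 storage.
print(planet(123456) == planet(123456))   # True
```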


Just now, Darksand said:

The size of the simulated world does not always correspond to the size of the memory needed to simulate it.

Sure, that's true; it just depends upon your simulation algorithm.

  • Author
16 minutes ago, studiot said:

Sure, that's true; it just depends upon your simulation algorithm.

Correct, and I am not saying the universe is simulated, let alone how; I am just saying it is a possibility.

The biggest argument was always computer resources; with my OP, I think I took that argument away.


Seems like the "bottom turtle," which is the RW programmer, will desire a clock speed that provides whatever it is hoping to get from its simulation in what it construes as a reasonable amount of time. From this RW perspective, using lumps of beach sand and pebbles and waiting a billion years would likely not be the option it chooses. Indeed, a sophisticated hybrid (digital/analog) might be devised, especially where the simulation was populated by conscious and intelligent beings.

Eventually they would arrive at 42 as the answer. Clearly, Douglas Adams was the ultimate output of our matrix.

4 hours ago, Darksand said:

No Man's Sky features a vast universe with an estimated 18 quintillion (18,000,000,000,000,000,000) possible planet "seeds"

1.8 x 10^19

Less than a part in 10,000 of Avogadro’s number.

For each particle in your simulation, how many bits do you need to encode the information about it? You have identity (some kind of label), mass, charge, position, velocity, angular momentum. To the extent you can, at least, owing to QM limitations.

18 quintillion might get you memory for the data for the particles in a small puff of hydrogen.
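
A back-of-envelope version of that point, with an assumed budget of 512 bits per particle for the listed fields (the 512 is my own assumption, not swansont's figure):

```python
# How much matter could 18 quintillion bits of memory describe?
# The bits-per-particle budget (identity, mass, charge, position,
# velocity, angular momentum) is assumed for illustration.

AVOGADRO = 6.022e23
bits_available = 1.8e19          # "18 quintillion"
bits_per_particle = 512          # assumed budget for the listed fields

particles = bits_available / bits_per_particle
moles = particles / AVOGADRO
grams_of_hydrogen = moles * 1.0  # ~1 g/mol for atomic hydrogen

print(f"{particles:.1e} particles")      # ~3.5e16
print(f"{grams_of_hydrogen:.1e} grams")  # ~5.8e-8: a very small puff indeed
```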

47 minutes ago, Darksand said:

Correct, and I am not saying the universe is simulated, let alone how. I am just saying it is a possibility.

The biggest argument was always computer resources, with my OP, I think I took that argument away.

I think "Alice in wonderland" is a possibility, but the book is a finite resource, ie. it's book sized; I think I took that argument away from you, logically... Sorry about that.

  • Author
1 hour ago, TheVat said:

Seems like the "bottom turtle," which is the RW programmer, will desire a clock speed that provides whatever it is hoping to get from its simulation in what it construes as a reasonable amount of time. From this RW perspective, using lumps of beach sand and pebbles and waiting a billion years would likely not be the option it chooses. Indeed, a sophisticated hybrid (digital/analog) might be devised, especially where the simulation was populated by conscious and intelligent beings.

Eventually they would arrive at 42 as the answer. Clearly, Douglas Adams was the ultimate output of our matrix.

  • I am not talking about a programmer. It could evolve naturally.

  • Clock speed? Not important. It can go fast one second, slow the next, stop for a million years, go fast again, go backwards, or just be slow all the time. The simulation would have no idea.

23 minutes ago, dimreepr said:

I think "Alice in wonderland" is a possibility, but the book is a finite resource, ie. it's book sized; I think I took that argument away from you, logically... Sorry about that.

Nice strawman; come up with a better argument.

58 minutes ago, swansont said:

1.8 x 10^19

Less than a part in 10,000 of Avogadro’s number.

For each particle in your simulation, how many bits do you need to encode the information about it? You have identity (some kind of label), mass, charge, position, velocity, angular momentum. To the extent you can, at least, owing to QM limitations.

18 quintillion might get you memory for the data for the particles in a small puff of hydrogen.

Why are we arguing this? We do know we can create huge simulations based on just a few lines of code, and a good programmer can do this in very little memory.

Our brains evolved and they have memory; why is it so hard to imagine something else that can do that?


7 minutes ago, Darksand said:

Why are we arguing this? We do know we can create huge simulations based on just a few lines of code, and a good programmer can do this in very little memory.

Code means nothing without the data. How does code do anything with no arguments in equations or matrices that are filled with zeroes?

We’re arguing, it seems, because you don’t recognize this rather obvious point.

  • Author
1 hour ago, swansont said:

Code means nothing without the data. How does code do anything with no arguments in equations or matrices that are filled with zeroes?

We’re arguing, it seems, because you don’t recognize this rather obvious point.

The rather obvious point is that our brain works like a computer, with extreme complexity. There was no programmer (or code, or matrices) there; it just evolved in its "perfect" environment. I can imagine the simulator (its "hard- and software") could evolve somewhere.


19 minutes ago, Darksand said:

The rather obvious point is that our brain works like a computer, with extreme complexity. There was no programmer (or code, or matrices) there; it just evolved in its "perfect" environment. I can imagine the simulator (its "hard- and software") could evolve somewhere.

Our brains process information. Why would a simulator evolve with nothing to process?

  • Author
2 hours ago, swansont said:

Our brains process information. Why would a simulator evolve with nothing to process?

Who is saying the simulator is evolving with nothing to process?

Also, some things evolved with completely different functions in the past; I don't see why this is relevant.

Let me ask a simple question, would you agree that in my concept, a human brain would have enough computing power to simulate a universe?

32 minutes ago, Darksand said:

Let me ask a simple question, would you agree that in my concept, a human brain would have enough computing power to simulate a universe?

A human brain/CPU consumes energy to process data.

This whole world is about the transfer of energy from particles X+Y to A+B. The (kinetic etc.) energies of A and B are more evenly distributed than those of X and Y. For this reason, we do not (often) see reverse chemical and physical reactions. Some call it entropy. Particles with high (kinetic) energies that can cause something unusual are rare.

For an artificial brain to be able to process data indefinitely it would have to not waste energy, which means it would have to be some kind of closed system.

This topic is now closed to further replies.
