Sorcerer

If the universe was a computer program.


Could the speed of light be said to be equivalent to the processor speed?


I can't see why. We currently use processors to simulate things that are faster than the processor.


Oh how does that work?

To clarify: I mean "if the universe is a simulation", not "if we were to simulate a universe".

 

I actually didn't use the word simulation on purpose because it conveys a sense of incompleteness.

 

Perhaps to rephrase: if we had a processor which ran at the speed of light, could we simulate a part of the universe in real time? E.g., predict the weather with 100% accuracy, or simulate everything in our galaxy?


Maybe it's rendering speed rather than processing speed... ;)


If we're in a simulation then our laws of physics are simulated by the computer. Why would the computer necessarily be subject to the laws of physics it is simulating? It may be subject to an entirely different reality, with different laws of physics. The only attribute we could give such a computer is that it is sufficiently sophisticated to simulate our universe; we can ascribe nothing more.


Perhaps to rephrase: if we had a processor which ran at the speed of light, could we simulate a part of the universe in real time? E.g., predict the weather with 100% accuracy, or simulate everything in our galaxy?

What does it mean for a processor to "run at the speed of light"?


If they wrote the program so that each individual quantum could only influence adjacent quanta, then the fastest that information could travel would be the size of the smallest quantum divided by the smallest time needed to calculate the influence. (Think of something like Conway's Game of Life.)

 

In our universe the fastest that information travels is obviously the speed of light, so a quantum (we'll say that in our universe that's the Planck length) influences another quantum every 10^-43 seconds (the Planck time).

 

From this we can deduce that, if we used this method where information is communicated between adjacent sections, the processor could calculate everything about a single interaction at a frequency of 10^31 terahertz. And that's not a single calculation; that's every single calculation about a single quantum.
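As a back-of-the-envelope check, here's the arithmetic behind those numbers (the specific Planck-length and Planck-time values are standard reference values, not something given above):

```python
# If space is a grid of Planck-length cells updated once per Planck time,
# the fastest possible signal moves one cell per tick.
PLANCK_LENGTH = 1.616e-35   # metres
PLANCK_TIME = 5.391e-44     # seconds

max_signal_speed = PLANCK_LENGTH / PLANCK_TIME   # metres per second
update_frequency = 1 / PLANCK_TIME               # updates per second

print(f"max signal speed: {max_signal_speed:.3e} m/s")   # ~3.0e8 m/s, i.e. c
print(f"update frequency: {update_frequency:.3e} Hz")    # ~1.9e43 Hz ≈ 10^31 THz
```

The ratio of the two constants really does come out to the speed of light, which is the whole appeal of the cellular-automaton picture.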


If they wrote the program so that each individual quantum could only influence adjacent quanta, then the fastest that information could travel would be the size of the smallest quantum divided by the smallest time needed to calculate the influence. (Think of something like Conway's Game of Life.)

 

In our universe the fastest that information travels is obviously the speed of light, so a quantum (we'll say that in our universe that's the Planck length) influences another quantum every 10^-43 seconds (the Planck time).

 

From this we can deduce that, if we used this method where information is communicated between adjacent sections, the processor could calculate everything about a single interaction at a frequency of 10^31 terahertz. And that's not a single calculation; that's every single calculation about a single quantum.

I don't see how Planck scales apply here. Quanta are not separated by the Planck length.


I don't see how Planck scales apply here. Quanta are not separated by the Planck length.

 

I'm not really saying that quanta are separated by the Planck length, but rather that if we quantized space itself, it would make sense to do so as cubes with sides equal to the Planck length. I'm not so much talking about the real world as about how I imagine a computer simulation might mirror reality as closely as possible, without hardware limitations etc.

 

I'm mostly going off the idea in my head that you'd need to express the smallest possible "bit" of information in a computer simulation as a space of volume (Planck length)^3, with a variable and an associated magnitude for every single field, in order to simulate reality as accurately as possible.

 

Obviously this is the speculation subforum :) It's just fun to think about how you might try to simulate a universe, given limitless technology and knowledge.

Edited by _Rick_


Any quantization means a numerical solution rather than an analytical one. You will get errors that compound, and eventually the simulation fails.
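A minimal sketch of what that compounding looks like, using forward-Euler integration of a harmonic oscillator (a toy system chosen for illustration, not anything from the thread). The exact solution conserves energy; the discretized one gains a little energy every step, so the error grows without bound:

```python
# Forward Euler for the oscillator x'' = -x, starting at x=1, v=0.
# Each step multiplies the energy by exactly (1 + dt^2), so the
# discretization error compounds geometrically over time.
x, v, dt = 1.0, 0.0, 0.01
steps = 10_000
for _ in range(steps):
    x, v = x + v * dt, v - x * dt   # simultaneous update from old (x, v)

energy = 0.5 * (x * x + v * v)      # the exact solution keeps this at 0.5
print(f"energy after {steps} steps: {energy:.4f}")   # ~1.36, not 0.5
```

After only 10,000 steps the "conserved" energy has nearly tripled, which is the sense in which a naive numerical simulation eventually fails.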


Any quantization means a numerical solution rather than an analytical one. You will get errors that compound, and eventually the simulation fails.

 

Are you talking about the degree of accuracy in the information stored for the magnitude of fields in each quanta? Are you saying that any slight error in accuracy would add up and ruin the sim?

 

Is there any way we can carry out an experiment to prove that our universe is analytical?

Edited by _Rick_


Are you talking about the degree of accuracy in the information stored for the magnitude of fields in each quanta? Are you saying that any slight error in accuracy would add up and ruin the sim?

 

Is there any way we can carry out an experiment to prove that our universe is analytical?

Error accumulation, and you'd have to show it's a simulation before you worry about whether it's analytical vs numerical.


Error accumulation, and you'd have to show it's a simulation before you worry about whether it's analytical vs numerical.

 

Wouldn't any simulation have to be at its roots numerical, as we can't store or manipulate infinite data? Every single process of modern computers is reducible to some finite string; are you saying that to have a working simulation we'd have to discover some new tech? Modern-day simulations are numerical, and they function within their bounds perfectly fine.


Oh how does that work?

 

 

Can you be more specific? How does what work?

 

 

 

To clarify: I mean "if the universe is a simulation", not "if we were to simulate a universe".

 

What is the difference between those two sentences? They mean the same thing to me.

 

 

 

Perhaps to rephrase, if we had a processor which ran at the speed of light

 

What does that mean?

If they wrote the program so that each individual quantum could only influence adjacent quanta, then the fastest that information could travel would be the size of the smallest quantum divided by the smallest time needed to calculate the influence. (Think of something like Conway's Game of Life.)

 

In our universe the fastest that information travels is obviously the speed of light, so a quantum (we'll say that in our universe that's the Planck length) influences another quantum every 10^-43 seconds (the Planck time).

 

From this we can deduce that, if we used this method where information is communicated between adjacent sections, the processor could calculate everything about a single interaction at a frequency of 10^31 terahertz. And that's not a single calculation; that's every single calculation about a single quantum.

 

 

But then you wouldn't be able to model quantum effects that depend on non-locality.


 

 

Can you be more specific? How does what work?

 

 

what is the difference between those two sentences. They mean the same thing to me.

 

 

What does that mean?

 

 

But then you wouldn't be able to model quantum effects that depend on non-locality.

 

That's a good point; maybe we could flag some variables as associated with one another in a way that bypasses the locality rule we've created.


Processor speed is actually given as a frequency.

 

 

Indeed. Or a rate of instruction execution. Or various other benchmarks.

 

But not km/s.


There is no reason we would be able to detect the update speed of the simulation. Unless, I suppose, the "universe" is being simulated across a distributed network of processors running asynchronously, so that updates in one area have to propagate out through the network. In which case the speed of light would be less likely to be the actual "front" of an update, and more of a fudge to make sure there were no dependencies between processors at opposite ends of the network, and to prevent inconsistencies arising from latency in the network.

 

Quantum entanglement makes this seem somewhat unlikely, though, since it's exactly the sort of phenomenon that such a set-up would presumably be seeking to avoid, and would probably best be covered by a global variable defining the state of all entangled pairs.

 

In any case, if you're not running it as multiple interconnected simulations in parallel, but updating every part of the simulation before moving on to the next "tick" then the time it takes for the simulation to progress from tick to tick would not be detectable inside of the simulation in any way.

 

Time proceeds at a rate of one second per second, no matter how long it takes each second to render.
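The tick-by-tick idea above can be sketched as a toy model (an illustration of the locality argument, not a claim about real physics): a 1D grid where each cell's next state depends only on itself and its immediate neighbours, so a disturbance can spread at most one cell per tick, giving the grid a built-in "speed of light" regardless of how long each tick takes to compute.

```python
def tick(grid):
    """One update: each cell becomes the max of itself and its neighbours,
    so influence propagates exactly one cell per tick."""
    n = len(grid)
    return [max(grid[max(i - 1, 0):i + 2]) for i in range(n)]

grid = [0] * 21
grid[10] = 1                      # a single disturbance in the middle
for t in range(1, 6):
    grid = tick(grid)
    front = max(i for i, v in enumerate(grid) if v) - 10
    print(f"tick {t}: front at distance {front}")   # distance == t, never more
```

From inside the grid there is no way to tell whether each `tick` took a nanosecond or a century of outside time, which is the point about one second per second.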


But then you wouldn't be able to model quantum effects that depend on non-locality.

QM and relativity could only be true in the simulation, but would not be the physics of the universe where the simulation was running.


QM and relativity could only be true in the simulation, but would not be the physics of the universe where the simulation was running.

I suppose you could have the "only affected by adjacent cells" rule be a design decision rather than an intrinsic part of the simulation, and then allow an additional layer of cells to hold information on the quantum state of entangled particles rather than storing that information locally. The cell at whatever location each particle is measured would be updated by the master cell, with the Uncertainty Principle acting as a fudge to mask the processing time required to update each cell from the master cell.
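A hypothetical sketch of that master-cell design (all names here, `master`, `measure`, `pair_id`, are invented for illustration): entangled pairs share one record stored outside the local grid, so measuring either particle consults, and collapses, the same global state instead of relying on cell-to-cell propagation.

```python
import random

# Global "master cell" store: pair_id -> shared outcome,
# created lazily on the first measurement of either particle.
master = {}

def measure(pair_id, which):
    """Measure particle `which` (0 or 1) of an entangled pair.
    The pair is anti-correlated: the two results are always opposite."""
    if pair_id not in master:
        master[pair_id] = random.choice(("up", "down"))
    outcome = master[pair_id]
    if which == 0:
        return outcome
    return "down" if outcome == "up" else "up"

a = measure("pair42", 0)
b = measure("pair42", 1)
print(a, b)   # always opposite, regardless of measurement order or "distance"
```

The correlation appears instantaneous inside the simulation because it never travels through the grid at all, which is roughly the fudge being proposed.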


It seems that computers, both inert (the universe) and living (sentient), might arise from a finite component of a non-living mathematical bulk interacting within itself that allows our occurrence within this region of relative stability.

Edited by hoola


It seems that computers, both inert (the universe) and living (sentient), might arise from a finite component of a non-living mathematical bulk interacting within itself that allows our occurrence within this region of relative stability.

 

 

"It seems"? What does that mean?

 

There is scientific evidence for this? Or you had a dream? Or ... ?


I think I can answer this: if we knew everything, we could predict everything... But then what would we do?


If the universe is a computer simulation, it makes me wonder if it might be possible to "hack" the simulation...


I think I can answer this: if we knew everything, we could predict everything...

 

 

Except we now know that isn't true.

