
We are all computer simulations - Not


Edtharan


Again, this is not supposed to be a serious discussion that comes to a definitive answer (nor is it just about randomly spouting beliefs). It is supposed to be a discussion about something that is of interest to people (not all). Coming to a definitive conclusion would be good, but it is not part of my expectations.

Hmmm, now you've got me stunned!

 

Granted, I don't really know what "tongue in cheek" means, foreigner and all. ;)

 

But I didn't actually think that you seriously proposed this as proof, so... :confused:

 

Have I offended you in any way??? :embarass: If that's the case then I am truly sorry! :)

 

Or don't you want the discussion to be serious??? :eek:


Granted, I don't really know what "tongue in cheek" means, foreigner and all.

Ok, the term "Tongue in Cheek" is similar to the " :P " smiley, but without the rude overtones.

 

As such idioms are easy to look up on the net (see: http://en.wikipedia.org/wiki/Tongue_in_cheek ), I didn't fully explain what I meant by it in my initial post. Sorry for the misunderstanding.

 

But I didn't actually think that you seriously proposed this as proof, so...

What I meant for this thread was not for us to be silly about the posts, but for it to be an opportunity to escape our own beliefs about the subject and consider for a second the "possibility" of something else.

 

The way I wanted people to respond in this thread was as if it were a hypothetical discussion, without the serious overtones that term can carry.

 

Have I offended you in any way??? If that's the case then I am truly sorry!

I am difficult to offend ;), and anyway, you didn't do anything offensive (you just seem to have misinterpreted the purpose of the discussion, and I was attempting to clear up the misunderstanding).

 

It is this whole "text" issue of discussion boards. What is written (or typed) down does not necessarily convey the emotional or other non-verbal cues of conversation. It is estimated that around 80% of human communication is non-verbal, so these kinds of misunderstandings will occur; when they do, I try not to get too worked up about them...

 

Or don't you want the discussion to be serious???

Semi-serious is closer to the mark. I wanted a less-than-serious discussion, so that people would be free to "speculate" (as this is in the Speculations subforum :D ) even about concepts that they don't agree with.

 

Rather than marking out positions (as is common with debating and discussion), I wanted to have a discussion where people are not constrained by this way of thinking.

 

If I had just posted "We are in a game simulation" as the topic, then people would only respond with why we are not in one.

 

What I wanted was to explore the options IF we were in a simulation: Could we tell if we were in a simulation? Could we tell what kind of simulation it is? Could we communicate with the creators? How could we go about doing such things? What would it mean if we were in a simulation (how would it affect our views of each other and the world around us)? And so on.

 

I think these kinds of discussions are important, as they not only help us to understand other points of view, they can also help us to understand ourselves better as well.

 

On that note:

 

Let's explore those questions I just posed in the paragraphs above...


Rather than marking out positions (as is common with debating and discussion), I wanted to have a discussion where people are not constrained by this way of thinking.

OK, then I "rest my case" and we leave our disagreements behind, just noting different opinions... :)

 

Not much more for me to add, but I will give you one last thought to chew on. ;)

 

How big would a quantum computer with a memory bank consisting of 400 entangled electrons be?

 

If you take a system of only 400 particles, and it could be, say, 400 electrons, you can put those into a quantum state called an entangled state, which can be described by a certain string of numbers, and it turns out that you need so many numbers to describe that state that it would exhaust the capacity of the entire universe to store it. In other words, there's more information in that state than can be contained in the entire universe.

 

Entanglement and the universe as a computer

http://www.scienceforums.net/forum/showthread.php?t=23941


How big would a quantum computer with a memory bank consisting of 400 entangled electrons be?

Much bigger than just 400 electrons. :D;)

 

If you take a system of only 400 particles, and it could be, say, 400 electrons, you can put those into a quantum state called an entangled state, which can be described by a certain string of numbers, and it turns out that you need so many numbers to describe that state that it would exhaust the capacity of the entire universe to store it. In other words, there's more information in that state than can be contained in the entire universe.

And what if you were trying to simulate those 400 electrons? What if you were trying to simulate the Quantum computer? How much computing power would you need?
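
To put rough numbers on the quoted claim, here is a back-of-the-envelope sketch; treating each electron as a single qubit is my simplifying assumption, not something the quote spells out:

```python
import math

# Rough check of the "400 entangled electrons" claim. Assumption (mine): each
# electron is treated as one qubit, and a completely general entangled state
# of n qubits needs 2**n complex amplitudes to write down.

n_qubits = 400
log10_amplitudes = n_qubits * math.log10(2)   # ~120.4, i.e. roughly 10^120 amplitudes
log10_atoms = 80                              # commonly quoted rough count of atoms in the observable universe

print(f"General 400-qubit state: ~10^{log10_amplitudes:.0f} amplitudes")
print(f"That is ~10^{log10_amplitudes - log10_atoms:.0f} times the number of atoms available to store them")
```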

 

This is what I meant about needing a "computer" bigger than the system you are simulating.

 

Actually, if the universe turned out not to be the most efficient "simulator" of itself (that is, if you could simulate the behaviour of 400 electrons in less time than it takes a quantum computer that uses 400 electrons for its calculations), then that would be good evidence that we are not in a simulation.

 

Under that assumption, if the universe were a simulation, then the simulation could work faster than the computer simulating it; a simulated computer could perform calculations faster than the computer that is simulating it. As such a thing would be nonsense, it would act as evidence (although not conclusive evidence, as it doesn't rule out incompetent programmers of our simulation) against us being in a simulation.


We are already hard-pressed by atoms and molecules in miniaturization, as we are by the speed of light. Hard drives and optical systems already hit limits at the molecular level, CPUs hit the speed of electricity, and so on. Downsizing by an order of magnitude or two ain't gonna be enough to run anything. After all, if you REALLY need an order of magnitude, you go for RAIC/multiprocessing.

 

What we need is a breakthrough, and I don't mean by a bit. We need to rediscover data storage and processing.

 

Yes, an electron is small, but the gear needed to actually move the charge in a controlled, reliable manner is the size of a building. What is the mass of the magnetized molecules on the surface of a HDD? What is the mass of the whole drive? You get the point.

 

It matters not how tightly you package the data; it's how you read/write it. Here: 42. I'll remember this number so you don't have to. The info itself takes no space as far as you are concerned. It's just that you need the two-meter, 80-something-kilo lump of me. And it's unreliable. I need food, drink, entertainment, not to mention I get *really* upset if you store me in a basement for 5 years and simply refuse to tell you. "Aw, the number? That'll be $34, please."

 

The mainframe you speak of that runs such a sim has NOTHING in common whatsoever with today's computers. It is probably so different that it isn't even binary logic. And I sincerely doubt it is an independent processing unit. Most likely a humanity computer: a unified architecture in which we all have terminals and a single monstrosity sits at the very center. Take a good look at how bandwidth has evolved alongside computing power. One order of magnitude in computing power brought about three in bandwidth within a single technology, and we aren't even trying - there's nothing to send.

 

OTOH, who said it's realtime? We may run at a frame every millennium. If time is not of the essence, but the result is, who's to say? We may have achieved eternal life via technology, harnessed the universe and tamed it - speed is meaningless to eternal beings.


OTOH, who said it's realtime? We may run a frame every millennium.

This only considers the case where we are the top-level simulation.

 

Instead of absolutes, we need to look at probabilities. If we are in a simulation that is being run at 1 frame every millennium, then there are not likely to be many recursive simulations (i.e. simulations inside the simulation inside the simulation inside the simulation...). This then limits the probability that we are in such a branch.

 

However, if the simulation we are in is being run much faster, then more recursive simulations are possible, which increases the chance that we are in such a branch.

 

So, looking at it statistically, we are not very likely to be in such a slow simulation. We could be, but it is unlikely.
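
A toy way to see the counting behind this (every number below is a made-up illustration, not an estimate): if fast simulations can nest further and wider, then almost all simulated observers end up inside fast branches.

```python
# Toy counting model for the slow-vs-fast simulation argument.
# The depths and branching factors are arbitrary assumptions for illustration.

def total_sims(depth: int, children_per_sim: int) -> int:
    """Total number of simulations in a tree where every sim spawns the same
    number of children, down to the given depth."""
    total, layer = 0, 1
    for _ in range(depth):
        layer *= children_per_sim
        total += layer
    return total

slow_branch = total_sims(depth=1, children_per_sim=1)     # a 1-frame-per-millennium sim barely nests
fast_branch = total_sims(depth=5, children_per_sim=100)   # a fast sim nests deeply and widely

print(slow_branch, fast_branch)   # 1 vs 10,101,010,100
# Picking a simulated observer at random, the odds favour the fast branch
# roughly in proportion to these counts.
```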


The Planck length is the size of the pixels in our simulation. That's why the sub-atomic world is so weird. The pixels don't become something until required to do so! It could also explain some of the really strange things about the nature of consciousness, and could also explain quantum entanglement, Psi and the paranormal.

 

I'm not at all opposed to the idea that we are Sims. It would make more sense if we were!

 

The only thing that makes me doubt it is that I can't believe humans will survive this current mass extinction event, so we'll never be able to build these supercomputers...

 

I don't think an answer either way would change anything though - what is reality anyway?


Instead of absolutes, we need to look at probabilities. If we are in a simulation that is being run at 1 frame every millennium, then there are not likely to be many recursive simulation

 

You lost me. How did you jump to that conclusion? I fail to see any link between recursion and speed of execution. We could be anywhere.

 

It could also explain [...] paranormal.

 

I wasn't aware that there was something to explain. No such thing was ever documented, so how do you explain something that doesn't exist?


And what if you were trying to simulate those 400 electrons? What if you were trying to simulate the Quantum computer? How much computing power would you need?

So, if we are SIMs and the system has limits, what will happen when we build a few of these...

 

System failure, shutdown, strange physical anomalies, alerting the creators?

(EDIT: Maybe termination by some virus protection software?)

 

Or will our grandchildren be playing GODs to the SIMs in their "Game Boys"?


You lost me. How did you jump to that conclusion? I fail to see any link between recursion and speed of execution. We could be anywhere.

OK. Say a simulation takes 1000 years of host time to complete 1 second of simulated time. If we are in a sub-simulation being run at the same ratio, then the slowdowns multiply, and it will take the top level vastly longer than 1000 years to produce 1 second for us.

 

If, on the other hand, they use fast approximations to increase the sim speed, so that 1 second of their time produces 1000 simulated years, then in 1000 years they will be able to run far more simulations.

 

So on one hand we have a slow simulation, and they can only run the one (and only 1 second passes in the sim for every 1000 years that the sim is running). Because of this, these people will not produce many simulations over any given period of time.

 

On the other hand, we have a fast simulation: in the 1000 years it takes to run 1 second of sim in the other scenario, this group could run billions upon billions of simulations, and have them run for a much longer (simulated) time in that period.

 

This gives us N to 1 odds that we are in the fast-sim group (by "N" I mean that I couldn't be bothered to calculate the actual number, as it would be really, really big).

 

Also, if for every 1 second of host time the sim experienced 1000 years, then this would give the inhabitants time to make their own simulations and run them (the recursion).

 

If all we had was 1 second, we could not simulate much at all. A recursive simulation cannot run faster than the parent simulation. It would be like using your PC to emulate a Macintosh which is emulating a PC: that final PC emulation, even though the original PC was fast, would not be anywhere near as fast as the original.
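
A minimal sketch of how the slowdowns compound (the factor of 1000 per level is just the ratio used in the example above, expressed as a dimensionless slowdown):

```python
# Slowdown factors multiply across nested simulations.
# The per-level factor of 1000 is the illustrative ratio from the post above.

def slowdown_vs_top(levels: int, factor_per_level: float = 1000.0) -> float:
    """How many times slower a simulation n levels down runs, relative to the top level."""
    return factor_per_level ** levels

for n in range(4):
    print(f"{n} level(s) down: {slowdown_vs_top(n):,.0f}x slower than the top level")
# 0 -> 1x, 1 -> 1,000x, 2 -> 1,000,000x, 3 -> 1,000,000,000x
```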

 

So, if we are SIMs and the system has limits, what will happen when we build a few of these...

Well, nothing in a simulation can run faster than the original computer. So even if we did use quantum computers (and we are in a simulation), the host computer would still have to be faster, as it is simulating the quantum effects of the entire universe.

 

System failure, shutdown, strange physical anomalies, alerting the creators?

(EDIT: Maybe termination by some virus protection software?)

No, no strange effects or anomalies. However, if we are in a simulation, the hosts can turn it off on a whim, so they might shut it down then, or not. They might shut it down because of anything.

 

As for virus protection, well, if that behaviour is built into the simulation (that is, the quantum mechanics are part of the simulation's rules), then we are not operating outside the program and we would not be performing any malicious operations. But of course, any behaviour by anyone or anything could be the trigger that the hosts use to decide to turn us off.


So, if we are SIMs and the system has limits, what will happen when we build a few of these...

 

System failure, shutdown, strange physical anomalies, alerting the creators?

(EDIT: Maybe termination by some virus protection software?)

Well, nothing in a simulation can run faster than the original computer. So even if we did use quantum computers (and we are in a simulation), the host computer would still have to be faster, as it is simulating the quantum effects of the entire universe.

 

No, no strange effects or anomalies. However, if we are in a simulation, the hosts can turn it off on a whim, so they might shut it down then, or not. They might shut it down because of anything.

 

As for virus protection, well, if that behaviour is built into the simulation (that is, the quantum mechanics are part of the simulation's rules), then we are not operating outside the program and we would not be performing any malicious operations. But of course, any behaviour by anyone or anything could be the trigger that the hosts use to decide to turn us off.

You misunderstood what I meant by "limits"...

 

My point was about computer memory: the system would need to store the state of every particle in the simulation somewhere. So if there are limits on memory storage, and we start to create states which need much more memory than the total universe, then we force the system to use more and more memory, and eventually we might exceed the maximum limit.

 

What happens when you run an application which consumes more and more memory?

 

Slower computer, strange errors, memory loss, blue screen of death?

(Virus-like behaviour?)


If all we had was 1 second, we could not simulate much at all. A recursive simulation cannot run faster than the parent simulation.

 

Awww, a child simulation. A typo threw me off to recursive algorithms.

 

Well, it's all relative. You see, a simulation can be child-based (a simulation running a universe that itself has a civilization running a simulation). That does not mean that it runs THE SAME simulation. It can't.

 

It's like a camera that films its own output. It sends its picture to the monitor, which gets filmed, which again gets sent, in a fractal manner, down to a single pixel and no more. The point being, you can't do this forever. At some point the simulation breaks, because nothing is infinite, especially in a computer.

 

What happens is, you write Universe.exe and run it.

 

Universe.exe runs for 1000 years and hits the development point where the child universe spawns its own Universe.exe. At that point, data is added to the system, and since those cycles can't be skipped, they add to the load - be it linearly, exponentially, or at some other rate.

 

Universe.exe runs another 1010 (2000? a million?) years until the child universe runs another Universe.exe. More data is added.

 

The simulation grinds to a halt. Each cycle gets longer, to the point where it doesn't matter any more.

 

Let me simplify.

 

---

If the simulation runs very fast - say, 1 ms until it evolves and spawns a child - then the next ms it has 2 sims to run. The third ms has that too, and so on, so after one second you have already added 1000 simulations. The next second you do the same. The problem is that the program never completes its cycle.

 

Fast forward to real time and you see it freeze. Infinite recursion and locked loops are what we call "frozen" applications.

 

Of course, in real life what you get is a stack overflow/memory error reeeal quick.
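
A toy version of that runaway spawning (the tick length and spawning rules are illustrative assumptions): whether the growth is linear or exponential, the host's workload only ever goes up, and the top-level cycle never completes.

```python
# Toy load model for the "every sim eventually spawns a child" scenario.
# The spawning rules below are assumptions for illustration only.

def load_if_only_top_spawns(ticks: int) -> int:
    """One new child per tick, spawned by the top level only: linear growth."""
    return 1 + ticks

def load_if_every_sim_spawns(ticks: int) -> int:
    """Every live simulation spawns one child per tick: exponential growth."""
    return 2 ** ticks

print(load_if_only_top_spawns(1000))   # 1001 sims after one second of 1 ms ticks
print(load_if_every_sim_spawns(40))    # ~1.1e12 sims: effectively frozen long before this
```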

 

It would be like using your PC to emulate a Macintosh which is emulating a PC: that final PC emulation, even though the original PC was fast, would not be anywhere near as fast as the original.

 

But that PC runs a Mac that runs a PC that runs a Mac. When you hit the key it "freezes", because it's locked into a pointless and never-ending loop of creating children. It doesn't run slow, it freezes. Remember, this isn't continuous; you need to quantize to compute.

 

Well, nothing in a simulation can run faster than the original computer.

 

Oh yes it can: if SIMULATED, you can simulate only what's in use, simulate a different thing, etc. Or the EMULATED universe may be less complex than the original. Their universe may have billions of levels of subparticles / billions of known macro-universes. We could be stuck in a simulation where, when i = 1000, then i := -1000, and we stand there wondering how traveling in a straight line gets us back to where we started.

 

So even if we did use Quantum computers (and we are in a simulation), the host computer would still have to be faster as they are simulating the quantum effects of the entire universe.

 

This is an endless argument. It only needs to simulate A universe, not THE universe. What we see as uber-complex quantum physics might be child's play in another universe - or a beta test in yet another. It might also be very complex because we are trying to explain something that was created using Random(). We live in an overcomplex universe. Maybe real matter is formed from basketballs. Who knows? Chocolate?

 

Also, we only access one billionth (ththth) of the universe at any one time. We can't check to see if atom no. 2005 matches atom no. 1*10^42342452454. We can't. Also, we draw laws from what we see. If some idiot declared speed as a 16-bit unsigned integer, we can't travel faster than the speed of light. We don't know it's a bug; we find convoluted explanations and create laws that help us understand how it works. And we succeed, because 2 points determine a line, 3 points determine a circle. What we do is draw the circle and look for an explanation as to why the points are there, and WHY A CIRCLE? We invent the circle, write an equation. Use 4 points? There's an equation for that, too. 4 billion points? That too. All you have to do is plot random numbers, seed an AI and watch it make sense of them. Maybe we're doing someone's homework.

 

There is no relation between the runners of the simulation and the universe inside. The watering-down of the simulation also explains the infinite recursion: each recursion is simpler, to the point where a certain level fails to run a successful simulation - most likely while chatting on a forum, saying it's impossible to do :)

 

they might shut it down because of anything.

 

We wouldn't know we've been swapped to disk.

 

As for virus protection

 

We would have giant bugs eating solar systems whole, we would die of spontaneous combustion, morph into sperm whales, then be restored from a backup and not remember anything - not even realize it was interrupted, let alone that we came really close.


You misunderstood what I meant by "limits"...

No, and I'll explain why:

 

My point was about computer memory, the system would need to store every state of every particle in the simulation somewhere.

If the simulation stored the states of every particle in the simulation, and a computer within the simulation is just a matter of reordering those particles, then we cannot create an "out of memory" error, as we are not adding to the memory.

 

The data in our simulation would have to be stored as states of particles. But these states are already stored in the parent simulation. We are not changing the amount of data in the simulation, no matter what we do, no matter what data we create. It all has to be stored in the states of particles in our universe, which are already stored in the parent simulation. We won't get the BSOD, or virus-like activity.

 

It's like a camera that films its own output. It sends its picture to the monitor, which gets filmed, which again gets sent, in a fractal manner, down to a single pixel and no more. The point being, you can't do this forever. At some point the simulation breaks, because nothing is infinite, especially in a computer.

How does this really affect the discussion?

 

Because there would not have been an infinite amount of time in the top-level universe, there could never be an infinite depth of recursion anyway. Since even at the top level neither the data nor the time is infinite, this really has no big impact on the arguments.

 

Sure, there is a limit, and this would put a limit on the probabilities, but there was a limit before this too (due to time).

 

The mistake that you are making is thinking that by creating a simulation within a simulation, more data is added to the parent simulation. This is not the case. The computer, and the data in it, in the child universe were always in the simulation. You have not added to the data of the parent simulation, you have just arranged it differently.

 

If the simulation runs very fast - say, 1 ms until it evolves and spawns a child - then the next ms it has 2 sims to run. The third ms has that too, and so on, so after one second you have already added 1000 simulations. The next second you do the same. The problem is that the program never completes its cycle.

 

Fast forward to real time and you see it freeze. Infinite recursion and locked loops are what we call "frozen" applications.

No, again. The matter that makes up the simulated computer and its data is already being simulated by the parent simulation. So the parent simulation does not see its own simulation slow down, but there is a speed limit on the child's simulation imposed by the parent's simulation. The thing is, the entities in the child simulation (or even the child's child simulation) will not perceive their universe as running slowly, as their perception of time is linked directly to the execution speed of the simulated universe that they are in.

 

To resolve all this, let us think about "Conway's Game of Life". This is a cellular automaton. See here for more details: Conway's Game of Life

 

What they have discovered is that it is possible to simulate a Universal Turing Machine (a computer) within this simulation. They could then use that simulated computer to run another copy of Conway's Game of Life. The thing is, even if they did this, the top level simulation would not grind to a halt. It is running the simulation for all the "cells" in the top level that are being used to make the child computer and its data storage, and this would be happening even if there were no simulated computer. It would take the same amount of storage space and the same amount of processing speed, regardless of the number of child simulations or child's child simulations that exist.
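
A minimal Life step makes the point concrete (a sketch on a fixed, non-wrapping grid): the work per generation depends only on the grid size, not on whether the pattern inside happens to implement a computer.

```python
# Minimal Conway's Game of Life step on a fixed grid (a sketch).
# The cost of one generation is O(rows * cols) no matter what pattern the
# grid holds - a random soup or a pattern that implements a Turing machine.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live_neighbours = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            new[r][c] = 1 if live_neighbours == 3 or (grid[r][c] and live_neighbours == 2) else 0
    return new

# Example: a blinker oscillates, but the step itself does the same amount of
# work as it would on an empty grid of the same size.
grid = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(step(grid))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```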

 

What you are getting confused about is what happens when a computer runs a recursive algorithm. For this to occur, each instance of that algorithm needs to be created on the parent computer. This type of recursive processing will need more space on the parent computer, but a simulation within a simulation will not. The simulation within a simulation is not a recursive algorithm (you could do it with a recursive algorithm, but it is not necessary to do so).

 

But that PC runs a Mac that runs a PC that runs a Mac. When you hit the key it "freezes", because it's locked into a pointless and never-ending loop of creating children.

Yes, because the simulated computer is created as an instance on the parent computer. In the simulated universe, the child universes are not created as an instance on the parent, but as a rearrangement of the components of that simulation. It would take up no more room and no more processing power. This is the major difference between the recursive simulation and the recursive algorithm.

 

Oh yes it can, if SIMULATED, you can simulate only what's in use, simulate a different thing, etc. Or if the EMULATED universe is less complex than the original.

What I was talking about is this: if you had a computer in the parent universe and it was simulating a universe that is functionally identical (although it might be smaller), and in that simulation you created the exact same computer design, then that simulated computer could not run at a faster speed than the parent computer.

 

In fact, this should apply to any system in the parent. It might be possible to do fast approximations in the child universe that rival the speed at which things occur in the parent, but these are fast approximations, not a true simulation.

 

Their universe may have billions of levels of subparticles / billions of known macrouniverses. We are stuck in a simulation and when i = 1000 then i := -1000. Then we stand there, wondering how traveling in a straight line gets us to where we started.

If such things existed, then they would be the kind of "anomalies" that would allow us to detect that we are in a simulation. So a good systems designer would have accounted for these artefacts and implemented techniques to eliminate them (like making sure that you can never reach i = 1000).

 

This is an endless argument. It only needs to simulate A universe, not THE universe. What we see as uber-complex quantum physics might be child's play in another universe - or a beta test in yet another. It might also be very complex because we are trying to explain something that was created using Random(). We live in an overcomplex universe. Maybe real matter is formed from basketballs. Who knows? Chocolate?

But what we are talking about is computation, not what the computers are made of. A Universal Turing machine is capable of simulating any possible computer. It doesn't say "any possible computer that is not made of chocolate".

 

You can simulate a quantum computer on your desktop computer (it wouldn't run nearly as well as a real quantum computer, though), even though the physics of their operation is completely different.

 

If the universe is computable, then we can simulate it. If the universe is not computable, then we cannot be in a computer simulation of one.

 

Also, we only access one billionth(ththth) of the universe at one time. We can't check to see if atom nr 2005 matches atom 1*10^42342452454. We can't. Also, we draw laws from what we see. If some idiot declared speed as 16 bit unsigned we can't travel more than the speed of light. We don't know it's a bug, we find convoluted explanations and create laws that help us understand how it works.

From what we do know about atomic particles, all particles of the same type (electrons, etc.) are identical. You cannot tell one particle from another, except by the values on them (momentum, spin, etc.), and as these can be changed, for all we know there might be just 1 particle in the universe, just seen time and time again (how much computer memory would that take up?).

 

There is no relation between the runners of the simulation and the universe inside. The watering-down of the simulation also explains the infinite recursion: each recursion is simpler, to the point where a certain level fails to run a successful simulation - most likely while chatting on a forum, saying it's impossible to do :)

It is true that a simulation does not have to conform to the physics of the parent universe. It does not have to be computational. However, if it is not computable, then it can't be a computer simulation, and therefore it is the top level (it has no universe above it simulating it).

 

If, however, this universe can create a computing machine, then any universes it simulates must be computable. If it can't create simulations, then it just cuts off the chain (and as we have the ability to create computers and run simulations on them, that universe can't be ours).

 

So, either it is the top level universe, or it is not our universe. Either way, if we consider this as a potential variation on the universe, it just strengthens the chance that we are in a simulation.

 

Also, there is no infinite recursion unless the top-level simulation has had an infinite time to run the simulations. I am not talking about infinite recursions (I have never even claimed infinite recursion), just that you can get recursive simulations in a simulated universe. There would be a finite number of recursions.

 

We wouldn't know we've been swapped to disk.

No, we would never be able to tell that this had occurred.

 

We would have giant bugs eating solar systems whole, we would die of spontaneous combustion, morph into sperm whales, then be restored from a backup and not remember anything - not even realize it was interrupted, let alone that we came really close.

Yes, we could never tell if the creators had pressed the "Undo" button. What we would not see a virus as is "Giant Bugs" eating the solar system. What we would see is nonsensical results in the physics: parts of the universe behaving randomly, matter and energy being scrambled. It would be the equivalent of "static" on a TV screen (and seeing this we could - if we had the time, and the virus was not affecting our part of the universe - work out some of the underlying behaviours of the parent system before we got erased or reverted to a previous backup).

 

I doubt that anything as coherent as us morphing into "Sperm Whales" (or a bowl of petunias, for that matter - Hitch-hiker's Guide to the Galaxy reference :D:cool: ) would happen. Any effect of a virus would, from our perspective, seem random. The virus would not operate on our "physics" but on the underlying logic of the host computer.


The mistake that you are making is thinking that by creating a simulation within a simulation that more data is added to the parent simulation.

 

This might sound personal, but what do you do for a living? I'm not trying to sound superior, but this is theoretical physics applied to an informatics problem.

 

You are thinking of a perfect emulation of every particle, which is not really going to happen. If you think that the host computer emulates all known physics, then this reply is void - it's a different discussion, and you can skip ahead.

 

Nothing is free. While in theory whatever resources were allocated get used for a simulation, in reality cycles need to be allocated to run the simulation. This runs in parallel with other stuff, but in order to work it has to run, no corners cut. It adds workload to the system, as new data to be processed. Limiting the virtual machine simply pushes the load further away, out of sight at first glance. It still translates into load.

 

Let me do a simplified example:

 

We create a VM for a human. We allocate all cells, all atoms, then we give him a workspace that runs various threads - breathing, pumping of blood, etc. We run all that in the brain space.

 

Now our human gets the number of molecules X to build a PC. We allocate those, and design physics. White, hard, etc. When that computer starts playing Q3, our host computer has to render those operations, because the computer is simulated. It's not real. It's a simulated screen that feeds a Q3 image to our subject. The host runs physics, the human, the computer and Q3.

 

Now emulation, that's different. True, in a fully emulated environment whatever happens in that universe is (almost) free.

 

The thing is, even if they did this, the top level simulation would not grind to a halt.

 

That's a technicality: they simply pre-allocated the resources. They reserved the space for that program whether they used it or not.

 

Nothing is free, again I say. You get 3 billion operations a second and not a tad more. If you emulate a computer, you use up part of that budget. Whatever that computer runs, those instructions have to run SOMEWHERE. SOMETHING must tick or flip to turn a 1 into a 0.

 

I can write a loop that makes a line hop up and down, one pixel a second. Then I make that line hop up and down twice as fast at no additional cost. How does that work? Well, either:

 

a) I was wasting processing power, waiting for each image to draw (so that the line doesn't bounce 1,000,000 times a second), or

b) I'm skipping steps. The second line hops 2 pixels each time, but it's a simulation and I'm cutting corners based on human perception.

 

If one could run something inside something else with no penalty, then even the first emulation would be free? How can that be? How can something do something at zero cost?

 

It is running the simulation for all the "Cells" in the top level that are being used to make the child computer and the data storage too. This would be happening even if there was no simulated computer. It would take the same amount of storage space and the same amount of processing speed, regardless of the number of child simulations or child's child simulation that exist.

 

Wrong. You almost get the point, but steer away at the last minute.

 

Host 1 = 1 Kb.

Life = 1Kb

 

One host that can run Life, whether Life is running or not, = 2K (itself plus the Life game, regardless of whether it is running).

 

Now add another host.

 

Host 2 = 1K

Host 1 = 1K

Life = 1K

 

A host that could run a host that could run Life is 3K, regardless of whether Host 1 and Life are running. It's a cost that has just moved out of view - from the cost of processing power to the cost of resources needed and the cost of programming. Remember, in order to reserve the space and instructions for Life, Host 2 needs to be ready to run either Host 1, or Life, or the combo. The absolute cost is still there.

 

If Host 2 doesn't already allocate the simulations, then it is just a host ready to run whatever. It's 1K and runs fast. When someone starts Host 1, it expands to 2K and slows down to accommodate the load of Host 1.
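
A tiny sketch of the two accounting schemes being described (the 1 KB per level is just the figure from the example above, an illustrative assumption, not a real VM cost model):

```python
# Toy accounting for nested hosts, using ~1 KB per level as in the example above.

PER_LEVEL_KB = 1

def preallocated_kb(levels_reserved: int) -> int:
    """Host reserves space for every level up front, running or not."""
    return levels_reserved * PER_LEVEL_KB

def lazy_kb(levels_actually_started: int) -> int:
    """Host only grows when a child level is actually launched."""
    return levels_actually_started * PER_LEVEL_KB

print(preallocated_kb(3))                  # Host 2 + Host 1 + Life reserved up front: 3
print(lazy_kb(1), lazy_kb(2), lazy_kb(3))  # 1, then 2 when Host 1 starts, then 3 when Life starts
```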

 

What you are getting confused about is what happens when a computer runs a recursive algorithm. For this to occur, each instance of that algorithm needs to be created on the parent computer. This type of recursive processing will need more space on the parent computer, but a simulation within a simulation will not. The simulation within a simulation is not a recursive algorithm (you could do it with a recursive algorithm, but it is not necessary to do so).

 

With all due respect, I'm not confused at all; you are confusing costs. You are effectively claiming, with a straight face, to have created a perpetuum mobile, and you don't even think there's anything wrong with that.

 

All you do is shift the load. A recursive algorithm that runs on the host creates load and memory usage on the host. An algorithm that runs on the guest creates load on the guest, which in turn loads the host. If this weren't true, then I could use a 300 MHz computer to run a guest at 30 GHz. What's bothering me is that you see nothing wrong with that. If the 300 MHz computer can emulate the guest at 30 GHz, then it can effectively run at 30 GHz. The point is, it can't. The 30 GHz is not really 30 GHz. If the guest only asks for the equivalent of under 250 MHz, the load isn't really felt. After that, it reaches the host's maximum (minus the emulation cost) and the guest starts grinding down, because it's asking the host questions the host needs time to answer.

 

It's really odd, at some point I couldn't follow you any more. You say we can build machines and run simulations at no cost, then you say each one is slower or needs even larger approximations. Yet you say the host doesn't grind down.

 

If you create a subset, and the computer we run in has a fixed speed and emulates us down to the particle, then we can build very fast computers, but not faster than the host, because nothing in this universe goes faster than the host. We grind down. Our own simulations would grind down even more, because their ceiling is even lower.

 

It's the same as the host grinding down when running simulations, except you fix the other end by moving your frame of reference to this universe. If you are in simulation no. 7, then your children slow down, but the host doesn't. If you move to universe 5, we are the ones who have slowed down. The ripple hasn't changed; you just jumped frames. When you say it doesn't grind down, your frame of reference is host 0's CPU. That never grinds; it ticks at whatever rate it ticks. Your frame of reference should be the universe it's in, that is, the Elder Programmer's. From his point of view, each emulation grinds down further.

 

IF such things existed, then they would be these "anomalies" that would allow us to detect that we are in a simulation. So a good systems designer would have accounted for these artefacts and implemented techniques to eliminate them (like making sure that you can't have i=1000).

 

Even if not, and we run on a buggy system, we wouldn't know, because the actual software is our reality. If 1+1=3 and space-time folds, then we get a black hole that's actually a bad memory slot you can't go to without getting lost. Your information is simply lost, and instead of shaking our fists and yelling "fix it, you lazy bastard", we look there in awe and devise laws to cover it.

 

Our "correct" is based on observation so if we had legs on out torso and hands for walking we'd sit there wondering if we are in a simulation. We wouldn't know, regardless of "quirks".

 

A Universal Turing machine is capable of simulating any possible computer. This doesn't say "any possible computer that is not made of chocolate".

 

Hehe. You need to stop thinking physics and start thinking computers. Nothing is free. A Turing machine can compute anything another machine can compute, as long as it has unlimited resources. When resources are limited, and they always are, it starts to fail as a model.

 

I can compute by hand anything you can compute, given infinite resources. If I have an infinity, I can get a degree in whatever you have and do whatever you do. Things don't look so bright in the real world.

 

I am not trying to make this personal - sorry to stress this - it's just that I'm used to using "you" in examples. The point is, just because something CAN be done does not mean the two are equivalent. Time is also a cost, and that's precisely what I was going on about.

 

You can simulate a quantum computer on your desktop computer (it wouldn't run nearly as well as a real quantum computer, though), even though the physics of their operation is completely different.

 

Precisely. The quantum computer would run at one billionth of its capacity, and the next emulation would be even more crippled. Move your frame of reference again, to quantum computer no. 2, and you run at full speed - but the host (PC) has ground to a halt.

 

If the universe is computable, then we can simulate it. If the universe is not computable, then we can not be in a computer simulation of one.

 

This is (excuse me) a narrow-minded approach. There are many shades of gray: you can run a universe on a beefy PC if you sacrifice something that's not needed. Like the size of the universe. Or speed. Or whatever else.

 

From what we do know about atomic particles, all particles of the same type (electrons, etc) are identical.

 

We will most likely evolve beyond that. In theory, we could be fed random data at every measurement to keep us from evolving, thus limiting our simulation to something the host can run at a target speed.

 

A valid point (assuming we get stuck at this level). Theories we have; proof we don't. One could say that gravity was meant to be instantaneous, yet we observe a finite "CPU speed". We could also say that because gravity is at a more basic level, it's faster. Light gets simulated next, then the rest at sub-light speed, as we run in a dynamic, law-based loop, as opposed to light/gravity, which are embedded in the kernel so they always have the same speed.

 

If it can't create simulations, then it just cuts off the chain

 

If it can't emulate a universe of the same complexity as ours, the chain will be cut sooner or later - which was my point. Force your frame of reference to be the host machine and you run at full speed; it's just that the last universe grinds to a halt. Move to the last universe and the host machine spins like crazy, but then time in the first universe also spins like crazy, so that universe probably ends really, really fast.

 

So, either it is the top level universe, or it is not our universe. Either way, if we consider this as a potential variation on the universe, it just strengthens the chance that we are in a simulation.

 

How do you glue these things together? The only thing all of the above says is "if a universe chain exists as above then we are probably not at the top". That's all. The chance of this chain existing is non-determinable thus the chance of us being in universe 1, 4 or 3 billion is non-determinable. No higher, no lower.

 

This is not the same as saying "we are either at the top or in a simulation". We could be in the only real universe, as well. We could be spinning inside a giant's blood cell. We could be someone's thought. We could be someone's dream.

 

Also, there is no infinite recursion unless the top-level simulation has had an infinite time to run the simulations. I am not talking about infinite recursions (I have never even claimed infinite recursion), just that you can get recursive simulations in a simulated universe. There would be a finite number of recursions.

 

It can't be infinite (in spite of the definition of recursive simulation) with the given data. It could be infinite if the host is running everything.

 

E.g. when we press play on our simulation, our universe gets paused and saved (or destroyed) and that simulation now runs on the host. This could happen infinitely.

 

What we would not see a virus as is "Giant Bugs" eating the solar system. What we would see is nonsensical results in the physics: parts of the universe behaving randomly, matter and energy being scrambled. It would be the equivalent of "static" on a TV screen

 

That is not necessarily true. You need to take vectorial and objectual(?) concepts into account. When a virus attacks a game, you don't see static. What you get is an error, because the system is protecting itself by watching over the game's shoulder, and when it sees that the game has run amok, it kills it.

 

On a dedicated mainframe there are no such safety systems, just as there are none in dedicated hardware. That's why, in a game with a graphics driver bug (the GPU on the card has no system watching it), you can see the floor disappearing from under your feet while you don't fall. You could also see people running around with flower pots stuck to them, continuously exploding, or bodies floating around, looking dead but still shooting.

 

Giant bugs and sperm whales, maybe not - I was cracking a joke - but floating people? Sure. Non-accessible areas with no laws, like black holes? Yes. Ghosts? Sure.

 

Actually, if it's a simulation, it's likely we get temporary errors, like looking out of the corner of your eye and seeing a man with no head, yet when you look again it gets re-rendered by another part of the mainframe (it gets relocated due to the error) and you get the correct image. It is also possible that memory corruption could cause people to malfunction: developing obsessive behavior (stuck processes), forgetting obvious stuff, being unable to find objects in plain view (they aren't there), and so on.

 

Scary, though: these things *do* happen. And if the simulation has good error recovery and sanity checks, they self-correct. Also, our brain has been pre-programmed to EXPECT things and FILTER things. There have been experiments that have proven this to be true. When you concentrate on some things you are oblivious to everything else (including a woman in a gorilla suit). You'd think you'd notice a woman in a gorilla suit passing by? Think again. We all think that if we saw a sperm whale in our bathroom one night we'd remember it. I'm starting to doubt that.


If the simulation stored the states of every particle in the simulation, and a computer within the simulation is just a matter of reordering those particles, then we cannot create an "out of memory" error, as we are not adding to the memory.

 

The data in our simulation would have to be stored as states of particles. But these states are already stored in the parent simulation. We are not changing the amount of data in the simulation, no matter what we do, no matter what data we create. It all has to be stored in the states of particles in our universe, which are already stored in the parent simulation. We won't get the BSOD, or virus-like activity.

I don't agree: if it's possible to entangle 400 particles so that their new state contains more information than the 400 particles held before - in fact, more information than all the particles in the entire universe contain - then that state would also need that much more memory to simulate.

 

If you were to build a computer for simulating the entire universe, where would you draw the limit?

(X groups of X entangled particles, or 1 group of all particles entangled?)

 

What is the maximum number of entangled particles in the universe right now? (12, on Earth.)

Achieving an entangled state of 400 particles is not easy, but it's not obviously hopeless and the quantum computing industry has set its target at several thousand. They're up to about 12 at the moment but they're very optimistic that within our lifetime they will achieve at least this 400.

(from the link previously posted)

 

How much information would be handled if all the particles in the universe were entangled?

 

Of course, if the computer running the simulation is built to handle all that information even when every particle is entangled, there would not be any problems whatever we do.

 

But then the computer would be so powerful that there would not be any need for fast approximations.

(And the entire human civilization would only be a very tiny speck inside it, less than 400 particles.)


This might sound personal, but what do you do for a living? I'm not trying to sound superior, but this is theoretical physics applied to an informatics problem.

I don't mind.

 

Well, I have been programming computers for the last 22 years or so (I started when I was around 7). Due to an accident, I have not been able to do much programming for the last 6 years, but I design computer games (as a hobby at the moment, though I am working towards having it as a job). So I am quite familiar with information technology.

 

We create a VM for a human. We allocate all cells, all atoms, then we give him a workspace that runs various threads - breathing, pumping of blood, etc. We run all that in the brain space.

 

Now our human gets the number of molecules X to build a PC. We allocate those, and design physics. White, hard, etc. When that computer starts playing Q3, our host computer has to render those operations, because the computer is simulated. It's not real. It's a simulated screen that feeds a Q3 image to our subject. The host runs physics, the human, the computer and Q3.

See, what has happened here is that you created an initial simulation of the person. Then you created an additional simulation of a computer. So of course that computer will require more memory and processor resources - just as if you had created an additional human simulation.

 

However, if you were simulating all the atoms that made up that person, plus a table and chair that they could interact with, and then moved the atoms around in the table and chair to make that computer, then no additional memory or processing would be required: the atoms that make up the table/chair and the computer were already in memory and you were already simulating them (although they were not doing anything really interesting).

 

By your reasoning, a computer that is switched off and a computer that is operating would place different loads on a server simulating the interactions of the atoms in them. This is what I don't understand about your arguments. If the atoms are in memory and are being simulated, then why does putting them into a different configuration change the processing needs?

 

It would be like using the exact same number of units doing exactly the same thing in an RTS game, but just moving them to different places on the map, and then getting an "out of memory" error.

 

Nothing is free, again I say. You get 3 billion operations a second and not a tad more. If you emulate a computer, you use up part of that budget. Whatever that computer runs, those instructions have to run SOMEWHERE. SOMETHING must tick or flip to turn a 1 into a 0.

Yes, the result is that the recursive simulations run at slower speeds as compared with the level above them.

 

Think about it this way. If I were to run a simulation of a computer, but with all the parts disassembled and without power, and then, using the same simulation program, assembled the components and gave them "power", would this take more or less processing than simulating them disassembled and without power?

 

There are fast approximations that can be used (that is, not running the scripts that describe what a component does if it doesn't have power), but these fast approximations require knowledge of what "power" is in the simulation. Is it mechanical, electrical, chemical, light, and so on? Then the simulation would have to identify the component as something that could do processing, and how it did it, what it is made from, etc., etc.

 

There would be less overhead in just simulating the thing completely, instead of relying on these fast approximations.

 

This is what they talk about with computer game "physics". They don't just mean gravity (easy to simulate) and motion (move the character X pixels each time step); they mean creating a set of "laws" that the game can follow which allow the various components of the game to interact.

 

This could be the way that two cubes can "click" together to make a Lego-type object (e.g. side 1 can join with side 6).

 

If one could run something inside something else with no penalty then the first emulation is free? How can that be? How can something do something with zero cost?

Let us use Lego. Instead of talking about atoms, let us use Lego bricks.

 

Lego bricks have various properties: they can interact (connect together) and so forth. Some Lego bricks can conduct electricity. To these, let us add some Lego bricks that don't exist: variations of the conducting bricks that are actually able to perform logic operations (they take two signal inputs and a clock input, along with a power line).

 

So we have a program that simulates 10,000 of these bricks, of the various types. Does the way I put these together affect the processing needs of the computer that is hosting them? I don't "make" any more bricks - just the 10,000 of them - and they are all constantly simulated whether they are connected to another brick or not.

 

If I were to build a computer out of these, would that affect the processing needs of the host computer? No: the bricks are already being simulated, and their configuration does not create any more processing needs in the host than if they were all just heaped in a pile.

 

This is what I am talking about.
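
A sketch of the same idea in code (the brick count, the update rule and the wiring are arbitrary assumptions for illustration): the per-step cost depends on how many bricks exist, not on how they are wired together.

```python
import random

# Sketch: simulating a fixed set of "logic bricks". The per-step cost is the
# same whether the bricks are heaped in a pile or wired up as a computer,
# because every brick is visited every step either way.

class Brick:
    def __init__(self):
        self.state = 0
        self.inputs = []          # other bricks feeding this one (may be empty)

    def update(self):
        # An unconnected brick just idles; a wired brick computes a NAND.
        if self.inputs:
            self.state = 0 if all(b.state for b in self.inputs) else 1

bricks = [Brick() for _ in range(10_000)]

# "Building a computer" is just rearranging which bricks feed which - it adds
# no new bricks for the host to simulate.
for b in bricks[:5_000]:
    b.inputs = random.sample(bricks, 2)

def run_step(all_bricks):
    for b in all_bricks:
        b.update()                # one update per brick, regardless of wiring

run_step(bricks)
```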

 

When you propose a new level of simulated universe, you are instancing it on the host computer, not within the simulated universe, so yes, your method would require more processing. But if the simulated universe reorganises matter that is already being simulated (it already exists in the host computer's memory and the processor already calculates it), that does not increase the workload on the host computer, because all of this already exists as a workload on the host computer.

 

A host that could run a host that could run Life is 3K, regardless of whether Host 1 and Life are running.

Yes, that is exactly what I said. I don't see what you are trying to say. Are you claiming my point is wrong by claiming it is right?

 

With all due respect, I'm not confused at all; you are confusing costs. You are effectively claiming, with a straight face, to have created a perpetuum mobile, and you don't even think there's anything wrong with that.

No, you are claiming that I am claiming that.

 

I even said that you could not have an infinite recursion because the host computer is limited in both processing speed and time to process.

 

All you do is shift the load. A recursive algorithm that runs on the host creates load and memory usage on the host. An algorithm that runs on the guest creates load on the guest, which in turn loads the host. If this weren't true, then I could use a 300 MHz computer to run a guest at 30 GHz. What's bothering me is that you see nothing wrong with that. If the 300 MHz computer can emulate the guest at 30 GHz, then it can effectively run at 30 GHz. The point is, it can't. The 30 GHz is not really 30 GHz. If the guest only asks for the equivalent of under 250 MHz, the load isn't really felt. After that, it reaches the host's maximum (minus the emulation cost) and the guest starts grinding down, because it's asking the host questions the host needs time to answer.

It would be possible to simulate a 30 GHz computer on a 300 MHz computer; however, the 30 GHz computer would not run at 30 GHz as viewed from the 300 MHz computer. The only point of view from which the 30 GHz computer runs at 30 GHz is that of the simulated computer itself.

 

I think this is the source of the confusion between what we are trying to say. You are viewing it all from a single point of view: the top-level computer.

 

I understand that any viewpoint in this is relative. We see time operating at a certain speed. Whether or not this is real time according to a computer above us in the chain is irrelevant. 30 GHz to us is 30 GHz to us, regardless of what any other simulation (above or below in a chain) might see it as.

 

So when I talk about processing speeds, it is relative to what others would see (above or below).

 

30 GHz to us might be 0.0001 Hz to our host. But they are simulating a computer that, to the inhabitants of that simulation, appears to run at 30 GHz.
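
A one-line sketch of that frame-of-reference point (the slowdown factor and the host-side rate are made-up assumptions for illustration):

```python
# Clock rates are relative to the observer's own simulated time (a sketch).

def rate_inside_sim(rate_seen_by_host_hz: float, slowdown_factor: float) -> float:
    """A clock the host sees ticking slowly still looks fast from inside,
    because the inhabitants' sense of time is slowed by the same factor."""
    return rate_seen_by_host_hz * slowdown_factor

host_view_hz = 1e-4       # assumed: the host sees our chip tick at 0.0001 Hz
slowdown = 3e14           # assumed: one of our seconds takes 3e14 host seconds
print(rate_inside_sim(host_view_hz, slowdown))  # 3e10 Hz, i.e. 30 GHz from inside
```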

 

Trying to pin down the actual speed of a hypothetical host whose specs we have no way of knowing is pointless. Thinking about what it would be like if that hypothetical host existed is more constructive, and that is the aim of this thread.

 

Even if not, and we run on a buggy system, we wouldn't know, because the actual software is our reality. If 1+1=3 and space-time folds, then we get a black hole that's actually a bad memory slot you can't go to without getting lost. Your information is simply lost, and instead of shaking our fists and yelling "fix it, you lazy bastard", we look there in awe and devise laws to cover it.

This is why speculation is necessary. We can speculate about how computing would be possible in a universe where 1+1=3. Could that kind of universe support complex life, could that life become intelligent, could that intelligence create a computer, and could that computer simulate a mathematics vastly different from the mathematics that governs its own operation?

 

If we just say, "Oh, I can't see how it could, so we may as well not try to understand or ask questions", we won't get anywhere. We will never discover anything.

 

Even if the conclusion is "We can never tell if we are in a simulation or not", that is an important discovery. It places limits on various aspects of the universe; for instance, any anomalies are real physical effects, not programming bugs.

 

Hehe. You need to stop thinking physics and start thinking computers. Nothing is free. A Turing machine can compute anything another machine can compute, as long as it has unlimited resources. When resources are limited, and they always are, it starts to fail as a model.

Yes, there are limits, I have acknowledged them in past posts.

 

This is (excuse me) a narrow minded approach. There are many shades of gray, you can run a universe on a beefy PC if you sacrifice something that's not needed. Like, the size of the Universe. Or speed. Or whatever else.

I didn't mean that we can simulate this entire universe on a PC that fits inside this universe. But if you can accurately simulate the laws of this universe on a computer, then it is computable.

 

If it is impossible to use a computer to simulate any part of this universe completely then it is not computable.

 

We have approximations, but these are approximations, not a true simulation.

 

How do you glue these things together? The only thing all of the above says is "if a universe chain exists as above then we are probably not at the top". That's all. The chance of this chain existing is non-determinable thus the chance of us being in universe 1, 4 or 3 billion is non-determinable. No higher, no lower.

I am not trying to pinpoint our location in a hypothetical chain, just that if one does exist then we can't be at the top.

 

As we haven't simulated any universe, we must be on the bottom rung. So if any chain exists, it exists above us, and we therefore cannot be at the top.

 

If we are in the only universe, then no such chain exists and therefore all this discussion comes to the conclusion that we are not in a simulation.

 

If we are just in "Giant's blood", this is the equivalent of a simulation, and therefore we are not at the top of the chain, and the conclusion is that we are in a simulation (although not an intentional one).

 

The only way we can determine this is if we ask these kinds of questions I am asking in this thread... ;)

 

It can't be infinite (in spite of the definition of recursive simulation) with the given data. It could be infinite if the host is running everything.

Not even then. The host would need an infinite amount of computing power.

 

E.g. when we press play on our simulation, our universe gets paused and saved (or destroyed) and that simulation now runs on the host. This could happen infinitely.

Yes, this could be occurring, but even so, the host would still need an infinite amount of time to run all those simulations. The chain cannot be infinite.
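As a toy illustration of why the chain has to terminate: if each layer slows its child down by some constant factor (the factor below is an arbitrary assumption, not a real measurement), the host time needed for a single simulated second grows geometrically with depth:

```python
# Illustrative sketch: host time per simulated second vs. nesting depth.
# The per-layer overhead factor is an arbitrary assumption.

OVERHEAD_PER_LAYER = 100.0   # hypothetical: each layer runs 100x slower than its parent

def host_seconds_for_one_second_at(depth: int) -> float:
    """Host wall-clock seconds needed to advance a depth-`depth` universe by one second."""
    return OVERHEAD_PER_LAYER ** depth

for depth in (1, 2, 3, 5, 10):
    print(f"depth {depth:2d}: {host_seconds_for_one_second_at(depth):.1e} host-seconds")

# Even at depth 10 the top host needs 1e20 of its own seconds per simulated second;
# an unbounded chain would therefore need unbounded time on the top-level host.
```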

 

That is not necessarily true. You need to take vectorial and objectual(?) concepts into account. When a virus attacks a game, you don't see static. What you get is an error, because the system is protecting itself by watching over the game's shoulder, and when it sees that the game has run amok, it kills it.

The reason we see the kinds of anomalies that you describe is that the objects in modern computer games can be thought of as a kind of atom. They are indivisible components in memory, and errors cause these components to be joined in odd ways.

 

So the resolution of today's games is not at what we consider the particle level, but at a much larger scale. If this kind of error were to crop up in our universe, we would see a mess of quantum particles springing in and out of existence. It wouldn't be just on the quantum scale; it would occur at the macro scale too. We wouldn't see discrete objects appearing and disappearing, but we would see the equivalent mass/energy of whole objects doing that. They also would not behave according to any known physics; we would see violations of mass/energy conservation, etc.

 

Giant bugs and sperm whales, maybe no; I was cracking a joke. But floating people? Sure. Non-accessible areas with no laws, like black holes? Yes. Ghosts? Sure.

Looking at the resolution of this universe, all these objects are specific amalgamations of groups of particles obeying the laws of the universe. An error would be a violation of these laws, so the resulting artefacts would not be as coherent.

 

A burst of unexplained photons, maybe, an image of a person (ghost) not a chance.

Link to comment
Share on other sites

I think this is the source of the confusion between what we are trying to say. You are viewing it all from a single point of view: The Top Level computer.

 

Well, indeed I do. While I switch viewpoints to get a true understanding, we must come back to the mainframe. The point here being that the mainframe doing the simulations needs an infinite amount of time to simulate such a chain, so it can't be done.

 

Theorizing about how it *could* work and what corners we can cut is fine, but emulation of a universe in such a chain: no.

 

It is, however, entirely possible (ignoring probability for a moment) that we could be in a simulation, if a smart enough computer is used that only feeds us the data we use (I'd do it this way; at the scale we're talking about, emulating particles would be suicide).

 

So my point is: if such a chain can't really be created, how can we theorize that it exists? It needs to be finite by definition, so something MUST break the chain; each simulation can't run like the one above it, and after a few universes *something* has to break down.

 

Trying to pin down the actual speed of a hypothetical host that we have no way of knowing the specs of is pointless. Thinking about what it would be like if that hypothetical host existed is more constructive, and is the aim of this thread.

 

Indeed it is. However, we are trying to determine whether we are in such a chain. The point here being: if it runs a second of our time in *way too much* time, then such a mainframe can't really exist, now can it? Specifications are pointless if it can't be done.

 

This is why speculation is necessary. We can speculate about how computing would be possible in a universe where 1+1=3. Could that kind of universe support complex life, could that life become intelligent, could that intelligence create a computer, and could that computer simulate a mathematics vastly different from the mathematics that governs its operation?

 

It is. But this is not the point. Their physics would need to be quite different to even simulate what we have here, let alone emulate each particle. It IS possible, perhaps even somewhat probable, that we are being simulated/emulated. However, being in a chain is (arguably) impossible.

 

If we just say: "Oh, I can't see how it could, so we may as well not try to understand or ask questions", we wont get anywhere. We will never discover anything.

 

I'm not saying I can't see how they could simulate us, even if it's beyond our laws of physics. I'm saying that, regardless of physics, they can't simulate an infinite chain, or even a virtually infinite chain, in a feasible way.

 

So are we discussing whether this is a simulation (assuming a single simulation: their physics is enough to simulate us) or a very limited chain (we could sim a small patch of the planet ourselves, and then our simulations would probably stop at SIMS2)? If so, this is possible and open for discussion. However, if we are discussing a parallel-running cascade of universes in which we are one of infinitely many, then I'd have to say no, even with different physics.

 

Even if the conclusion is "We can never tell if we are in a simulation or not", this is an important discovery. It places limits on various aspects of the universe; for instance, any anomalies are real physical effects, not programming bugs.

 

I agree that we could never prove we are a simulation. Ever. Simply because whatever we get, we assume to be real. Every definition of "real" we have would come from the simulation, so any anomaly in the software we would see as an anomaly (or expected behavior) in real life.

 

If one was born in a box and never stepped out, then he would never have any clue that there's anything outside the box (provided full isolation, like a simulation). Unless someone tells him, or he breaks out somehow, everything in there is the universe, and concepts like infinity were never implanted. Generation after generation lived on this planet thinking it was flat and limited, and that you could fall off the edge. That's as far as they saw. Then they had a telescope and discovered it was round. Then they were enlightened. Now we laugh at them, because we have microscopes and telescopes and fuzzyscopes and say we *finally* have the answer. Statistically speaking, we're wrong.

 

Nobody in the year 1 AD could possibly conceive that gravity is faster than light. Everything they had around them could not point them to the truth. Just as they couldn't, we, drawing only from what we see, can't know what reality holds.

 

I am not trying to pinpoint our location in a hypothetical chain, just that if one does exist then we can't be at the top.

 

Obviously, if we were at the top we'd be doing the simulation. If such a chain exists then we must be (temporarily) at the bottom.

 

If a cascading simulation runs, in which each universe has a life cycle (it ends), then several could run in parallel as computing power becomes available. The model could be 6 billion years to evolution, 1 billion to a perfect simulation, then another 13 to the destruction of the universe. In which case, after 6 billion years another is launched, then after another 6 a third. In 2 years, the top simulation is dead and its child is then moved up the chain. In such a cascading model, we could be anywhere (we don't have enough info to determine a probability).

 

As we haven't simulated any universe, we must be on the bottom rung. So if any chain exists, it exists above us, and we therefore cannot be at the top.

 

I really should read ahead. Agreed, minus the cascading model. So basically we have

 

* A simulation can be made in which we are the simulated ones

* If the simulation runs in a chain model, the chain must be finite so that the host can be feasible.

* If such a chain exists, we can't be at the top (because if we were at the top, the chain below us wouldn't exist yet). We could be anywhere else, including the bottom. The probability of being at the bottom varies with the simulation model (we assume *only* and *last level* are not the same).

 

-- Anomalies

 

The reason we see the kinds of anomalies that you describe is that the objects in modern computer games can be thought of as a kind of atom. They are indivisible components in memory, and errors cause these components to be joined in odd ways.

 

I am aware of that; still, it is unlikely that for 100 atoms the simulation would run 100 threads, each simulating one atom's behavior. Rather, once joined, they will behave as an object, having the object's properties.

 

Just like your LEGO example, except that when you join 3 blocks to form a pyramid and they stick together, you have an object that has its own properties. It's easier to optimize it like that, sharing color and weight; I'm assuming that once joined into something, those atoms will have shared traits.

 

For example, if you have 3 joined LEGOs of the same color, you can probably store the color as "blue" once for the whole object. While it is imperative to do so if you run a simulation, it's not mandatory in an emulation (each atom on its own), but I still find it likely that someone optimized the software. Assuming it never evolves.
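A minimal Python sketch of that shared-trait optimization; the class names and fields below are invented purely for illustration, not taken from any real engine:

```python
# Illustrative sketch of storing shared traits once per joined object
# instead of once per atom. Everything here is invented for the example.

class Atom:
    def __init__(self, position):
        self.position = position              # genuinely per-atom data

class CompositeObject:
    """Three LEGO-like blocks joined into one object with shared traits."""
    def __init__(self, atoms, color, weight_per_atom):
        self.atoms = atoms
        self.color = color                    # stored once for the whole object
        self.weight = weight_per_atom * len(atoms)

pyramid = CompositeObject([Atom((0, 0)), Atom((1, 0)), Atom((0, 1))],
                          color="blue", weight_per_atom=2.0)

# One corrupted field now changes every block in the object at once:
pyramid.color = "???"
```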

 

So when traits are shared, it's easy for an error to apply to a whole block of atoms. It's also likely that the atoms/objects we are made of are stored close to each other in memory, unlike other stuff. So it is likely that if an error affected a range of memory, it would affect consecutive atoms/objects in the same body. Thus, a person could become a ghost (even if concurrent with other errors). One should not assume that by "ghost" I mean an ethereal being that defies the laws of gravity. By "ghost" I mean it could affect our brain so that we see shapes we associate with or interpret as ghosts, or it could change the refraction of air for a moment to make objects shift (like mirages).
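A small sketch of that locality argument; the memory layout and the "fault" below are hypothetical, but they show why a range error tends to produce an object-shaped anomaly rather than scattered noise:

```python
# Illustrative sketch: if the atoms of one body occupy a contiguous block of
# memory, a single range error corrupts neighbouring atoms of that same body.
# The layout and the "corruption" are hypothetical.

memory = [f"person_atom_{i}" for i in range(100)] + \
         [f"background_atom_{i}" for i in range(100)]

def corrupt_range(mem, start, length):
    """Simulate a fault that garbles a contiguous range of memory."""
    for i in range(start, start + length):
        mem[i] = "GARBLED"

corrupt_range(memory, start=40, length=10)

# Every corrupted slot belongs to the same body, because its atoms were
# stored next to each other -- the error looks localised and object-shaped,
# not scattered randomly across the whole scene.
print(memory[38:52])
```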

 

Depending on the architecture, errors could also cause cancer, by overwriting the DNA code in a cell.

 

Looking at the resolution of this universe, all these objects are specific amalgamations of groups of particles obeying the laws of the universe. An error would be a violation of these laws, so the resulting artefacts would not be as coherent.

 

A burst of unexplained photons, maybe, an image of a person (ghost) not a chance.

 

As we perceive what we see as reality, it might as well be something we interpret, not something that's real. Zombies attacking someone doesn't mean the software created a zombie; it might be that a commonly seen error in the procedure that runs our brains has happened. It would explain why some things stick around as reports (like ghosts) while others have been lost in time and are considered legends (werewolves? I can't think of a better example).

 

I don't see larger artifacts as impossible, though it's entirely dependent on platform and algorithm. Your view seems to favor atom/particle simulation, while I view it as wasteful, since it requires far more processing power than the most complex of simulations: it requires that everything be processed at a lower level than needed. It is possible (I never did an estimate) that emulating all atoms is as demanding as the sum of all possible (optimized) simulations.

 

I vote for simulation. It allows you to model specific rules and correct errors. It also avoids the issue in which your planet never evolves, and other random events you can't program into atom-simulation software.

Link to comment
Share on other sites

I agree that we could never prove we are a simulation. Ever. Simply because whatever we get, we assume to be real. Every definition of "real" we have would come from the simulation, so any anomaly in the software we would see as an anomaly (or expected behavior) in real life.

This is only true if we assume that any glitch must be real. If we change our assumptions to allow that it could be a glitch in a simulation, then we can use that to investigate.

 

If one was born in a box and never stepped out, then he would never have any clue that there's anything outside the box (provided full isolation, like a simulation). Unless someone tells him, or he breaks out somehow, everything in there is the universe, and concepts like infinity were never implanted.

But suppose this person thought: "What if there is something outside of my box?" What if he then spent time thinking about the box, what it is made from, whether there are any holes he could look out through, etc.?

 

Could he not have a chance to find out if there was anything outside the box? Could he in fact "think outside the box"? ;) :rolleyes: (pun intended)

 

Generation after generation lived on this planet thinking it was flat and limited, and that you could fall off the edge. That's as far as they saw. Then they had a telescope and discovered it was round. Then they were enlightened.

No, first came the idea that the Earth might not be flat; then they looked for proof with their telescopes. The idea might have come to them because they observed "anomalies". However, they first had to think of them as "anomalies". People were looking at these "anomalies" for a long time and still no one made that connection. The moon is round. The Earth casts a round shadow on it. But they didn't think of these as anomalies.

 

It took a person to say "What if these are anomalies, not reality?", and that is what I am suggesting is the starting point of this kind of investigation.

 

You are saying "if we assume...", I am saying "don't assume...".

 

Nobody in the year 1 AD could possibly conceive that gravity is faster than light.

And they would have been right. Gravity is not faster than light; observations of pulsars and supernovae indicate that it travels at the speed of light.

 

Everything they had around them could not point them to the truth. Just as they couldn't, we, drawing only from what we see, can't know what reality holds.

But this is assuming that we know everything. We don't. In 2000 years' time, people will look back at us and say, "They used to believe XXXX; how could they, when we can obviously see, just by looking around, that it is not the case?"

 

If we are going to progress at all, we have to start by questioning our assumptions, even if it ultimately leads to a dead end of questioning.

 

If a cascading simulation runs, in which each universe has a life cycle (it ends), then several could run in parallel as computing power becomes available. The model could be 6 billion years to evolution, 1 billion to a perfect simulation, then another 13 to the destruction of the universe. In which case, after 6 billion years another is launched, then after another 6 a third. In 2 years, the top simulation is dead and its child is then moved up the chain. In such a cascading model, we could be anywhere (we don't have enough info to determine a probability).

Yes, in a cascade chain there could be other simulations that come after us. But they would come after us. We would be being run "NOW"; they would not yet exist. Just as, if we were able to create a simulated universe, those simulated universes would not exist "yet".

 

So, even with cascading simulations, we are at the bottom of the rungs. At the moment.

 

For example, if you have 3 joined LEGOs of the same color, you can probably store the color as "blue" once for the whole object. While it is imperative to do so if you run a simulation, it's not mandatory in an emulation (each atom on its own), but I still find it likely that someone optimized the software. Assuming it never evolves.

But as our universe doesn't seem to run like that, we can eliminate that as a possibility.

 

So we are back to my point: the resolution of our universe is individual building blocks, all being simulated at the same time.

 

So, if we do create a simulated universe, it will be by rearranging these building blocks into a form that can run a simulation of the universe. So it won't take any more memory or processor time on our host.
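A toy sketch of that idea: a "computer" built inside the simulation is just a new arrangement of particles the host is already paying for, so the host allocates nothing extra. The classes and counts below are invented for illustration:

```python
# Illustrative sketch: a "computer" built inside the simulation is just a new
# arrangement of particles the host is already simulating, so the host
# allocates nothing extra. Classes and counts are invented for the example.

class Particle:
    def __init__(self, state=0):
        self.state = state

# The host already pays for every particle in our universe:
our_universe = [Particle() for _ in range(1_000_000)]

# Building a computer (and a simulation on it) only reassigns roles to
# particles that already exist; no new particles are created on the host.
computer = our_universe[:10_000]
for bit, particle in enumerate(computer):
    particle.state = bit % 2        # the "computer" is just a pattern in existing matter

assert len(our_universe) == 1_000_000    # host-side particle count is unchanged
```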

 

So to sum up the points covered:

1: If we are in a simulation, it has to be the bottom rung of the chain, because we are either being executed now and all other sims don't yet exist, or we haven't created any sims yet ourselves.

 

2: There cannot be an infinite number of sims in the chain, as processor speed and memory cannot be infinite on the top-level host computer.

 

3: A simulated universe (under certain architectures) can rearrange its components to create its own simulations. As these architectures are more flexible, these sims will produce more sims of their own. Therefore we are more likely to be in one of these.

 

4: Our universe operates on a building-block "architecture", so if we are in a sim at all, it is a building-block-architecture sim.

 

5: If we drop the assumption that the phenomena we see cannot be caused by glitches in the simulation system, then this gives us a handle to investigate the potential of such phenomena to be glitches in a host system.

 

6: There are more computers and more processing time/memory dedicated to running entertainment simulations (games) than are devoted to "universe" simulations.

 

7: Entertainment software requires that "fast approximations" be made.

 

8: Fast approximations will create "glitches", and, using point 5, we will be able to use these to answer the question of whether or not we are in a simulation (see the sketch after this list).

 

9: There will be more simulations that use fast approximations than not, as any chain that uses them will produce more simulations, since there will be more processor time and memory to devote to them.

 

10: A regular universe (like ours seems to be) requires a host that is also regular.
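As a concrete (and purely illustrative) example of points 7-9, here is a classic "fast approximation", explicit Euler integration with a coarse time step, drifting away from the exact physics; all parameters are arbitrary:

```python
# Illustrative sketch: a cheap "fast approximation" (explicit Euler with a
# coarse time step) drifting away from the exact answer. In a game engine this
# drift is tolerated; inside a simulated universe it would surface as a
# physics "glitch". Parameters are arbitrary.

import math

dt = 0.1                      # coarse step: cheap but inaccurate
x, v = 1.0, 0.0               # simple harmonic oscillator, omega = 1

for step in range(200):
    x, v = x + v * dt, v - x * dt    # explicit Euler update: the fast approximation

t = 200 * dt
exact_x = math.cos(t)
print(f"approximate x = {x:.3f}, exact x = {exact_x:.3f}")
print(f"energy now    = {0.5 * (x * x + v * v):.3f}  (started at 0.500)")
# Explicit Euler steadily pumps energy into the system -- an artefact of the
# approximation, not of the "physics" being approximated.
```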

 

So, in conclusion: we are most likely to be in an Entertainment Simulation that uses Fast Approximations, and thus generates Glitches.

 

Using this, point 5 becomes more relevant, and we can answer the question of whether or not we live in a simulated universe by assuming that the universe has glitches in it. IF the host is regular (computable), then it will follow a logic. The glitches we identify will be able to give us information about the host computer system. The operation of the host computer system can, in turn, give us some information about the physics of the host universe.

 

Given time, we might even be able to exploit these glitches and communicate with the host and then find out if they themselves are a computer simulation.

Link to comment
Share on other sites
