
Planck Scale Defines the laws of Physics?


Iwonderaboutthings



Here is the link; the passage where this is stated is a bit above the 10th page, close to the end of the article:


Spacetime at the Planck Scale: The Quantum Computer View





Planck scale defines the laws of physics...



Here is the copied area of interest:


As we have already said, we believe that the recursive functions computed by quantum space-time at the Planck scale are the laws of Physics in their discrete, abstract, and fundamental form.



Does this mean that the laws of nature are "pre-defined" by a deductive logical "computer" system defined as quantum mechanics, or did I misinterpret something?


Now, I am not endorsing any particular beliefs, fashions, or styles of theories, maths, etc., and I have quite an open mind about many things...


But am I reading that article correctly?



I take them to be saying that "the physical laws," such as those from Albert Einstein, Max Planck, Newton, and many others, are predefined and regulated by this super quantum computer in the universe...



When you think about the extremes of the double-slit experiment, though, it kind of makes you wonder...


It's based on quantum information theory, though poorly done, I might add. There is some validity to quantum information theory; however, this article isn't showing the complete picture. I'm still reading about it to understand it better myself. I figure the best place to start is either a textbook or a dissertation; I chose the latter.

 

http://cds.cern.ch/record/476522/files/0011036.pdf

 

There is one line in this dissertation that struck a chord:

 

The theory of quantum computation is an attempt to capture the essential elements of a theory of quantum information processing in a single unified theory. I say "attempt" because it is not yet clear that the theory of quantum computation provides a complete account of the information processing capabilities afforded by quantum mechanics.
Right now I would just consider it a QM alternative in development. The other thing that article mentions is its dependence on loop quantum gravity/cosmology. LQC is a strong model, but no more or less so than LCDM. LQC avoids the information loss of a singularity by having a bounce: in other words, once the would-be singularity reaches a Planck length, it bounces and starts expanding.
http://arxiv.org/abs/1108.0893
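
For context, the bounce in LQC usually shows up as a simple correction factor in the effective Friedmann equation. This is the standard form quoted in most LQC reviews (added here for reference; it is not taken from that particular paper):

```latex
% Effective LQC Friedmann equation (standard form from the LQC literature).
% Expansion halts and reverses (the "bounce") when the density reaches \rho_c.
H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),
\qquad \rho_c \approx 0.41\,\rho_{\mathrm{Planck}}
```

When the density is far below the critical density, the correction is negligible and you recover ordinary GR, which is why LQC and LCDM agree everywhere except near the would-be singularity.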

 

Planck stars

http://arxiv.org/abs/1401.6562

 

http://arxiv.org/abs/1201.4598 "Introduction to Loop Quantum Cosmology" by Abhay Ashtekar

 

The model is gaining weight, so make your own judgement.

 

 

Edited by Mordred

Hey, thanks for the response and the links; it is interesting stuff. One thing I do find interesting is how physical formulas are able to predict "outcomes," so to speak, meaning that the laws of nature can be predicted by a formula, and yet the prediction appears to follow the evolution of time as a frequency rather than as energy.

 

The whole thing is cloudy to me as to which is really which...


Any good model needs to be able to make predictions; if it can't, then it's useless as a model. One thing to keep in mind when studying different models: oftentimes a different model is merely a different mathematical way of describing the same process. Both models can be correct and make the same predictions; they simply use different metrics to describe the same thing. Though sometimes conflicts do occur.

 

For example, you can describe the universe according to the FLRW metric, the Einstein field equations, or LQC. For the majority of cosmology, those three work equally well in all situations. The FLRW metric and the Einstein field equations are in 100% agreement with each other; however, LQC handles the singularity problem differently (the bounce). Otherwise they essentially describe the remainder of the universe in the same manner.

 

However, you also have models that try to define an influence differently. A good example would be replacing dark matter by modifying Newton's gravity (MOND), or replacing dark energy with spin and torsion (Poplawski's universe inside a black hole). These types of models inherently run into conflicts with LCDM. However, other than replacing one or two influences, they use the same metrics, i.e. FLRW and the Einstein field equations.

 

Now take this a step further: say you wish to represent the FLRW metric specifically on a computer and use it to simulate the universe. If you tried to feed the formulas directly to a computer, you would quickly run into problems on a particle-to-particle basis. The calculations would bog down the processing power, and the simulations would take forever to run.

To deal with that, mathematicians and programmers developed what are called N-body simulations. N-body simulations take a metric and find another mathematical formulation that is easier on the processing done by a computer.

I'll use a simple example. A computer is primarily a binary machine, so if I wish to do the calculation 4*2, I have three options: I can multiply directly, I can add 4 twice, or I can do a bit shift left in binary; the bit shift is the fastest of the three (see the sketch below). Now, N-body is more complex than this, but the codes find ways to describe, say, gravity interactions not with the regular GR formulas, but with mathematical formulas that have the same relations, written with processing power in mind.
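
A quick toy sketch of those three options in Python (my own illustration, just to make the point concrete):

```python
# Three ways to compute 4 * 2; all give the same answer,
# but the bit shift maps to a single cheap machine instruction.
a = 4

by_multiplication = a * 2   # direct multiplication
by_addition       = a + a   # repeated addition (add 4 twice)
by_shift          = a << 1  # shift left one bit: 0b100 -> 0b1000

assert by_multiplication == by_addition == by_shift == 8
```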

 

Here is a PDF showing some of the N-body code for gravity. Note also the use of matrices and tree codes; this is essentially a visualization of memory-stack operations.

 

http://www.cs.hut.fi/~ctl/NBody.pdf
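
To make the idea concrete, here is a minimal direct-summation gravity step in Python (my own sketch, not code taken from that PDF). Every particle feels every other particle, which is the O(N²) cost that the tree codes and other tricks in the PDF are designed to beat:

```python
import numpy as np

G = 6.674e-11  # Newton's gravitational constant, SI units

def accelerations(pos, mass, softening=1e-3):
    """Direct-summation gravitational accelerations: O(N^2) in particle count."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i == j:
                continue
            r = pos[j] - pos[i]
            d = np.sqrt(np.dot(r, r) + softening**2)  # softened distance avoids blow-ups
            acc[i] += G * mass[j] * r / d**3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step, the usual integrator in N-body codes."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel
```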

 

Now, how does this apply to quantum information theory? Simply put, if you can relate the metrics used in cosmology, or more specifically in quantum information theory and QM, directly to Boolean algebra, you've found another way to define the universe and quantum processes directly in terms of binary. So in many ways it's similar to N-body simulations, except for QM applications.
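
As a rough illustration of what "quantum processes in terms of binary" can look like (my own toy NumPy example, not anything from the article): the computational basis states of qubits are labelled by bit strings, and quantum gates are just matrices acting on them.

```python
import numpy as np

# Single-qubit computational basis states |0> and |1>, labelled by bits.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Standard gates as matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)               # controlled-NOT

state = np.kron(ket0, ket0)             # two qubits in |00>
state = np.kron(H, np.eye(2)) @ state   # superpose the first qubit
state = CNOT @ state                    # entangle: (|00> + |11>)/sqrt(2)

# Amplitudes are indexed by the binary labels 00, 01, 10, 11.
for idx, amp in enumerate(state):
    print(f"|{idx:02b}>  amplitude {amp.real:+.3f}")
```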

 

Hope this helps. Oh, and time in a metric can often be defined differently; the universe doesn't care how we define it. If a definition allows the mathematics to work and still fit observational data, then it's simply a mathematical methodology, unless it offers a different understanding of the observational data.

Edited by Mordred

Hey question here:

 

About computer calculations:

 

Could these calculations be done with a set of constants that handle "exponentiation," and thereby cut out all the redundant processing time ;) ?

 

I hear a lot about the processing-power "issues" of the CPU. I have also read that a computer does not understand ratios, which is why we adopted rounding off to the nearest ten, coupled with stack overflows and other precision-based issues in the logic and registers of the CPU?? Was that right??

 

They say quantum computers are not possible??? Not sure if that is the case. I think a simple algorithm with constants that do the exponentiation, i.e. the distances from manifold to manifold, could handle the redundant calculations across multiple instances of code, instructions, etc.

 

Even beyond that, it could make it possible to calculate billions of polygons, triangulation, and tessellation, and maybe (just a thought here) allow video games to be created with high "poly counts," versus low-res models with tiny texture maps and normal maps used to "fake" high definition in the 3D game pipeline...

 

 

Or am I thinking way ahead of myself??

 

 

Thanks for the PDF...

Edited by Iwonderaboutthings

Actually, variables and constants use up more memory; in compilers, you're better off using a pointer to a stack table for memory savings. Also, common calculations can be done faster with a stack table: you can use your exponentiation value as the pointer into the stack or lookup table. There are numerous tricks. I own an N-body textbook just for gravity interactions; it's over 1200 pages long. I'm still lost on the first chapter, lol, but it's a recent purchase. Quantum computers aren't around yet, though we're getting closer.
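
Here is a minimal sketch of that lookup-table trick in Python (my own illustration of the general idea, not how any particular N-body code does it): build the expensive values once, then use the exponent as an index instead of recomputing.

```python
# Precompute powers of two once; afterwards "exponentiation" is a single lookup.
MAX_EXP = 64
POW2 = [1 << n for n in range(MAX_EXP)]  # table built with cheap bit shifts

def pow2(n):
    """Return 2**n by table lookup rather than repeated multiplication."""
    return POW2[n]

assert pow2(10) == 1024
```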

http://www.amazon.com/Gravitational-N-Body-Simulations-Algorithms-Mathematical/dp/0521121531

 

By the way, the most realistic virtual universe to date is this simulation. Look at the requirements; it should give you some idea of the complexity.

http://www.cfa.harva...du/news/2014-10

http://www.illustris-project.org/

 

Paper on it:

http://arxiv.org/ftp...5/1405.1418.pdf

 

The simulation took:

16 million CPU hours were needed to evolve the simulation from the starting redshift z = 127 to z = 0, using 8,192 cores and an equal number of MPI-ranks. An additional 3 million CPU hours were spent on carrying out the on-the-fly galaxy identification with the SUBFIND algorithm.

16,028,568,000 hydrodynamic cells

 

Illustris employs a sophisticated computer program to recreate the evolution of the universe in high fidelity. It includes both normal matter and dark matter using 12 billion 3-D "pixels," or resolution elements.

 

The actual calculations took 3 months of "run time," using a total of 8,000 CPUs running in parallel. If they had used an average desktop computer, the calculations would have taken more than 2,000 years to complete.
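
Those numbers hang together, by the way; here is a quick back-of-the-envelope check using only the figures quoted above:

```python
# Sanity check on the quoted Illustris numbers.
cpu_hours = 16_000_000 + 3_000_000   # main run plus on-the-fly galaxy finding
cores = 8_192

wall_clock_days = cpu_hours / cores / 24
print(f"wall clock on the cluster: ~{wall_clock_days:.0f} days")          # about 3 months

single_core_years = cpu_hours / 24 / 365
print(f"one desktop core, run serially: ~{single_core_years:.0f} years")  # over 2,000 years
```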

 

Now here's the kicker: it only simulated a box roughly 100 Mpc across, and it tested the WMAP and Planck parameter sets in terms of the LCDM model...

Edited by Mordred

Wow, that is a lot of memory chips! 2,000 years on a regular desktop, that is outrageous, lol...

Well, I guess we will need to wait until a quantum computer comes along. I hope to do incredible things with science in my years to come, and I hope all my efforts pay off...
