
The physical basis of computer simulation



I have been thinking about this topic for several days. When we do computer simulations at the molecular scale, such as molecular dynamics or Monte Carlo, how can we make sure that what we get from the simulation is what we need? Or that it is right? I think you can see my question, right?

I have learned that statistical mechanics is at work here, and that there is the ergodic hypothesis, but I cannot comprehend them well.

 

 


It is in fact absolutely not clear, and only when the simulated data agree with experimentally measured results can you extrapolate a bit and claim that other features you see in the simulation (which might not be measurable in experiment) might also be a proper representation of reality. There are a large number of potential pitfalls in computer simulations, and many of the claims made from them rest on intuition or hand-waving.

Your question about the ergodic hypothesis (some people insist on calling it "quasi-ergodic hypothesis", btw) is very broad. It's not quite clear what you are asking. Note that the ergodic hypothesis has its roots in statistical physics, and the problems associated with it ("how long is an infinite amount of time?", for example) are not special properties of computer simulations but general problems.
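
For what it's worth, the practical stand-in for the ergodic hypothesis in a simulation is usually much more modest than the hypothesis itself: you check that the running time average of an observable has stopped drifting, and that independent runs (different starting configurations or random seeds) agree within the statistical error. A minimal sketch in Python, with made-up numbers standing in for recorded simulation data:

[code]
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: two independent "runs" recording the same observable.
samples_run1 = rng.normal(loc=1.0, scale=0.5, size=10_000)
samples_run2 = rng.normal(loc=1.0, scale=0.5, size=10_000)

def running_average(x):
    """Cumulative time average A(t) = (1/t) * sum of the first t samples."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

avg1 = running_average(samples_run1)
avg2 = running_average(samples_run2)
print("final time averages:", avg1[-1], avg2[-1])
print("drift over second half of run 1:", abs(avg1[-1] - avg1[len(avg1) // 2]))
[/code]

This of course only shows that the averages have converged to something, not that they have converged to the true ensemble average; that caveat is exactly the general problem mentioned above.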


Even the rigorous ab initio methods are not without problems. Electrons coupling vibronically to nuclei is still a problem, and aqueous proton transport is a huge problem on the molecular dynamics side of things (protons jump between water molecules faster than predicted, due to tunneling). However, the existence of the Eigen cation H9O4+ was predicted by molecular modeling before it was observed in solution, so evidently something is working correctly.


When working with a computer model it is good (and I'd argue necessary) practice to run some validation experiments where you recreate a historical real experiment and retrodict the results. If the model consistently matches reality under a variety of scenarios, then it is probably a good model and can be used to perform virtual experiments in cases where performing them for real would be prohibitively costly, difficult, etc.

 

It would also be good practice to actually perform a real experiment to double-check the model, although the testing regime may be reduced to a confirmational run rather than a full-blown experimental run.


Thank you for your reply, timo. It is useful to me. And now there is maybe another question.

The physical quantities that can be measured in experiments, such as the dielectric constant, polarization, and heat capacity, are macroscopic, while the information we can get from a molecular simulation of a finite system is microscopic. What is the bridge between them, from microscopic quantities in a finite system to macroscopic quantities in experiments?

 



Hi Ricky,

Both the applications and the potential problems of computer simulations are numerous, so it's a bit hard to give you a definite statement about what you're asking (plus, I don't know your scientific background). Finding definitions for the macroscopic quantities that also apply to the finite systems simulated on the computer is, in many cases, not a big deal:

  • Some cases are obvious, like defining the density as the number of particles divided by the volume.
  • Some may require more sophisticated methods, like entropy (I don't recall the method off the top of my head) or the chemical potential (sometimes evaluated via the Widom insertion method).
  • The method I use is essentially to measure average values of something and construct the macroscopic properties from them. For instance, statistical physics tells you that the heat capacity of a system is something like <E²>-<E>² divided by k_B T² (modulo a factor of volume or particle number if you want a specific heat), where <...> denotes the average over a long time (more precisely: over the statistical ensemble). So if I measure the average squared energy and the average energy over some time, and hope that the time was long enough not to run into the huge number of potential pitfalls, then I have extracted a thermodynamic property from the system; a small sketch of this route follows below the list.
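
A minimal sketch of that fluctuation route in Python, assuming "energies" is a hypothetical time series of total energies recorded at temperature T (units chosen so that k_B = 1; the numbers below are made up):

[code]
import numpy as np

def heat_capacity(energies, temperature, k_B=1.0):
    """Fluctuation formula C_V = (<E^2> - <E>^2) / (k_B T^2), canonical ensemble."""
    energies = np.asarray(energies, dtype=float)
    mean_E = energies.mean()
    mean_E2 = (energies ** 2).mean()
    return (mean_E2 - mean_E ** 2) / (k_B * temperature ** 2)

# Usage with made-up numbers, just to show the call:
rng = np.random.default_rng(1)
energies = rng.normal(loc=-500.0, scale=3.0, size=50_000)
print(heat_capacity(energies, temperature=1.5))
[/code]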

 

However, all these methods assume that your small system behaves like a real system to at least some controllable extent. This implies the following:

  • The interactions in the simulation (called the "force field" in MD simulations) and the bead unit, e.g. whether H2O is represented by three atoms, a single molecule, or completely integrated out ("implicit solvent" in MD language), must be chosen appropriately. I don't think there are systematic rules telling you which level of detail is needed. Some people (e.g. our group) make claims from very abstract models (which we justify by believing in universality), but most experimentalists don't seem to like (or even understand) that. Still, a lot of effort is put into developing more abstract ("coarse-grained") systems, either from the bottom up (by taking a more detailed system and integrating out some degrees of freedom) or from the top down (by defining a more abstract system ad hoc and modifying it until it shows the desired behavior).
    From what I have heard from colleagues, some people, especially in the more biological fields, do not like coarse-grained simulations at all, since it is not guaranteed that an effect they show is really one that would exist in nature. In that context, the term "atomistic simulations", where each atom is assigned one object in the simulation, is sometimes mentioned. I am not a big fan of this attitude because I feel "atomistic simulations" are just a different, equally arbitrary, layer of abstraction; I could just as well claim that they still completely ignore everything we know about quantum mechanics. But since that is a discussion among people more competent than me, I don't think I'm in a position to make claims about this issue.
     
  • The system you simulate needs to be large enough to contain what you want to see. Imagine you were simulating a vibrating string by taking a piece of string, laying it across the simulation box, and using periodic boundary conditions. You might think that, because of the periodic boundary conditions, you had successfully mimicked an infinitely long string. That is, however, only partly true. If you describe the deviations of the string from its mean position via a Fourier transformation, you will notice that only a certain quantized set of wavelengths can appear. Most importantly, wavelengths larger than the simulation box cannot be present. But those long-wavelength modes may be exactly the most important ones in a real vibrating string (since they have the lowest energy per amplitude). So what you have done is to systematically exclude the most important signal you might have been looking for in your simulation.
    I highlighted the word "systematically" in the previous sentence because it gives rise to a very powerful method called finite-size scaling. If you can find a scaling law that tells you how big an error you make as a function of the system size (such predictions come as proportionality relations; if you had an exact equation you would already be done, of course), then you can simply run your simulation for several system sizes and use this relation to extrapolate your results to the sought-for infinite-sized system; a small sketch of such an extrapolation follows below this list.
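
As an illustration of that last point, a minimal finite-size extrapolation sketch in Python, assuming (purely for the example) that the leading correction scales as 1/L; the box sizes and measured values below are made up:

[code]
import numpy as np

L = np.array([8.0, 16.0, 32.0, 64.0])    # hypothetical box sizes
A = np.array([0.92, 0.96, 0.98, 0.99])   # hypothetical measured values A(L)

# Fit A(L) = A_inf + c/L, i.e. a straight line in the variable x = 1/L,
# and read off the intercept as the infinite-system estimate.
c, A_inf = np.polyfit(1.0 / L, A, deg=1)
print("extrapolated infinite-size value:", A_inf)
[/code]

The true scaling law depends on the problem at hand; the 1/L form here is only a placeholder for whatever relation you can actually justify.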

 

A more philosophical aspect (but one which feels very real if you work in the field) is the amount of data you can handle while still making a proper analysis, and the insight you can actually gain from it. Suppose I were able to run a full QM-based (even Standard Model-based, if you want) simulation of red blood cells migrating through a vein. What would I learn from that? Sure, I could measure the drift velocity. But I could have measured that in an experiment as well, where I know that my system is realistic. The example is very exaggerated, of course. But as soon as the data and conclusions you extract from your simulations are limited to "we see the same thing as in experiment", you might wonder why you did the computer simulation in the first place.

 

 

Your questions are very broad and I don't know where you are coming from (in the academic sense, not where you live), but I hope you find the points above interesting.


I've used the Monte Carlo method, and it begins with knowing that the model realistically simulates the real world in terms of its characteristics, quantification, and so on. No approximation is perfect, so it's always a matter of degree, and it often relies on intimate knowledge of the real world.
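
To make that concrete with a toy example (not anyone's actual application): a minimal Metropolis Monte Carlo sketch in Python for a two-level system, checked against the exact canonical average energy <E> = eps / (exp(eps/T) + 1) with k_B = 1. Comparing against a known result like this is the kind of check that builds the confidence described above.

[code]
import math
import random

def metropolis_two_level(eps=1.0, T=1.0, n_steps=200_000, seed=42):
    """Average energy of a two-level system (levels 0 and eps) via Metropolis sampling."""
    rng = random.Random(seed)
    state = 0                 # 0 = ground state (E = 0), 1 = excited state (E = eps)
    total_E = 0.0
    for _ in range(n_steps):
        new_state = 1 - state                          # propose flipping the state
        dE = (new_state - state) * eps
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            state = new_state                          # Metropolis acceptance rule
        total_E += state * eps
    return total_E / n_steps

T, eps = 1.0, 1.0
print("Monte Carlo estimate:", metropolis_two_level(eps, T))
print("exact result        :", eps / (math.exp(eps / T) + 1.0))
[/code]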

