Everything posted by timo

  1. My first thought was tiny wounds and the extent to which a drop of blood can seal them. Well, actually that was my 2nd thought. The first one was that surface tension is not a property of a substance but a property of an interface between two phases. But after reading this thread (and the Wikipedia article) I get the impression that in most fields it is commonly agreed that the surface tension of a substance refers to the surface tension of said substance with air. Anyhow, add "+1" to the list of "I am curious to know what this is important for".
  2. Usually, the idea of a poster session is to talk to people. That implies that people are not going to actually read through the poster. I am assuming that is the case for your conference. The design strategy following from this assumption is then to a) have clearly visible parts that attract the right people, and b) have enough detail on it to aid in explaining what you did. Unlike with papers, completeness is not important, since you can always tell people what is missing from the poster. Also note that talking to people is not limited to selling your work. Talking to and getting to know people who do similar work, getting advice from them or learning about what they do is potentially even more valuable (no one runs around at a poster session with the intent to distribute grant money or permanent positions, anyhow).
  3. According to the three next-best Google hits I found, the reasons were massive delays in the project and the inability to generate the required/planned additional funding from industry (both probably at least partially caused by "a lot of opposition from some environmentalists to any solution that does not exclude coal altogether", as indicated in some of the texts). Link 1: http://www.scientificamerican.com/article/clean-coal-power-plant-killed-again/ Link 2: http://www.washingtonpost.com/news/energy-environment/wp/2015/02/04/the-obama-administration-is-cutting-funds-for-a-major-clean-coal-project/ Link 3: http://www.nationaljournal.com/energy/rip-futuregen-energy-department-kills-troubled-bush-era-coal-electricity-project-20150203
  4. Assume the state of the universe as a function of time is a natural number in [1; 10], with the rule that a number that would drop below 1 remains 1 and one that would rise above 10 remains 10. Now assume a dynamics that says "there is a 50% chance that after a time T the number has been reduced by one". Clearly, the state 9 will not re-occur once the universe is in a lower state. That is despite a very limited number of possible states and infinite time. If the rules instead were "there is a 50% chance the number decreases and a 50% chance the number increases", then the state 9 will indeed re-appear after some time. That may seem like a very silly example - it is. The point here is that a finite number of states and an infinite amount of time are not sufficient to draw the conclusion that any state will eventually re-appear. EDIT: To make that clear: I am not saying the universe will never re-reach a state that would be considered the same as now. Our current cosmological models point against it. But whatever: extrapolating them to infinite time may be a bit over-ambitious anyway. My point is that the deceptively simple argument that with an infinite amount of time everything is possible (or possible to re-occur, in this case) is wrong, or at least not complete.
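     A minimal sketch of this toy model in code (my own illustration; the rule labels, the seed and the 10^6 steps are arbitrary choices): under the decrease-only rule the state 9 is left for good after a few steps, while under the up-or-down rule it keeps re-appearing.
[code]
// Toy model from above: the state is an integer in [1, 10], clamped at the borders.
// Rule A: each step, the state decreases by one with 50% probability (never increases).
// Rule B: each step, the state decreases or increases by one with 50% probability each.
#include <cstdio>
#include <random>

int step(int state, bool allowIncrease, std::mt19937 &rng) {
    std::bernoulli_distribution coin(0.5);
    if (coin(rng)) state -= 1;            // the "reduced by one" branch
    else if (allowIncrease) state += 1;   // only rule B can go up again
    if (state < 1) state = 1;             // clamp at the lower border
    if (state > 10) state = 10;           // clamp at the upper border
    return state;
}

int main() {
    std::mt19937 rng(42);
    for (int ruleB = 0; ruleB <= 1; ++ruleB) {
        int state = 9;                    // start in state 9
        long visits = 0;
        for (long t = 0; t < 1000000; ++t) {
            state = step(state, ruleB != 0, rng);
            if (state == 9) ++visits;
        }
        std::printf("rule %c: state 9 visited %ld times in 10^6 steps\n",
                    ruleB ? 'B' : 'A', visits);
    }
    return 0;
}
[/code]
     Rule A gives a handful of visits right at the start and then none, rule B gives visits throughout the run.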
  5. Since you already got two comments on your first two sentences, let me add a comment on the third sentence. That is, in a sense, absolutely correct. And not really a problem. Consider classical electromagnetism: The motion of electric charges is influenced by an electromagnetic field. What creates the field? Electric charges. So it actually takes electromagnetism to make electromagnetism - at least in the sense you are referring to. The same is supposed to hold true for gravity, where the role of the electromagnetic field is taken by the geometry of space(-time). The side that defines how charges (-> objects) move in the field is called the equation of motion (-> geodesic equation). The side that defines what the field (-> geometry of space) looks like as a function of the charge distribution (-> distribution of objects creating gravity) is called the field equation, which in classical electromagnetism is given by the Maxwell equations (-> Einstein equations in GR).
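     To put that correspondence into symbols (standard textbook forms in my own notation, not something quoted from this thread): the first line shows the two equations of motion (Lorentz force law -> geodesic equation), the second line the two field equations (Maxwell equations -> Einstein equations).
     [math] m \frac{d^2 x^\mu}{d\tau^2} = q F^{\mu}{}_{\nu} \frac{dx^\nu}{d\tau} \quad \longleftrightarrow \quad \frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0 [/math]
     [math] \partial_\mu F^{\mu\nu} = \mu_0 J^\nu \quad \longleftrightarrow \quad G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} [/math]
     In both columns the charges/objects source the field on one line and move according to the field on the other, which is exactly the "it takes X to make X" loop described above.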
  6. That would be the trivial issue. I was more concerned about the conclusion that an infinite amount of time implied that anything that is physically possible will eventually happen. Depending on the exact meaning of "an infinite amount of time" and "physically possible" there are lots of pretty trivial counter-examples. The one possibly closest to your case would be a damped pendulum started with zero velocity and some non-zero displacement at t=0. Even if time is a real-valued positive variable, which is about as infinite as it gets, you will never re-reach the original displacement for any t>0, despite it being "physically possible" (demonstrated by that state being taken at t=0). Note that this is just one example. I originally posted another one, and there are much more complicated scenarios I have in mind, too. This is why I came to the conclusion that it may be easier to skip the scientific issues altogether and stick to pure speculation.
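     To make the pendulum example explicit (my own notation for a damped harmonic oscillator, i.e. the pendulum in the small-angle approximation): the energy argument shows the initial displacement is never reached again. With
     [math] E(t) = \tfrac 12 m \dot x^2 + \tfrac 12 k x^2, \qquad \frac{dE}{dt} = -b\, \dot x^2 \le 0, [/math]
     and the pendulum starting to move immediately after t=0, the energy is strictly below its initial value for every later time, so
     [math] \tfrac 12 k\, x(t)^2 \le E(t) < E(0) = \tfrac 12 k\, x_0^2 \quad \Rightarrow \quad |x(t)| < x_0 \ \text{ for all } t>0. [/math]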
  7. EDIT: IGNORE THIS POST: I originally had some doubts about the original premise here. On second thought, I see a large bunch of issues. So perhaps one can indeed just accept the premise for this thread.
  8. I cannot really say I found the feedback given in this thread so far particularly constructive. At least I miss the part where improvement proposals suitable for Theoretical were made. "Do an experiment" is not such a proposal. "Dismissive" or "arrogant" seem to describe the comments better. I am not excluding myself from this criticism. To be helpful I'd first have to re-watch the video very carefully and try to decipher its meaning. And this leads to a common problem in science forums: if a topic requires serious work, then unless you happen to have a personal interest in it or lots of free time, as a serious researcher you are better off doing your own research or putting the time into guiding your own students instead. Sorry for being off-topic, but the tone towards Theoretical by at least a few posters has annoyed me for a few pages now. In case someone was offended by Theoretical coming in with the stereotypical "I revolutionized physics and it has something to do with Einstein": don't be so over-reactive to simple forum red flags, and maybe also consider the quote in my signature.
  9. The math is supposed to be an identical copy of what your program does (obtained by reverse-engineering, not by understanding the intent). The summation sign represents the outer loop and the summation in the loop. The integral is the inner loop (I promoted the sum to an integral because of the many tiny summation steps you take). The two addends under the integral sign are your two "probability_same += ..." lines. The steps taken afterwards are then just another method to get the result (called "solving analytically", in contrast to "solving numerically"). The 1/2 that results is of course supposed to equal the 0.5 you get. It's just a different way to do the same calculation which, in my experience, is often more insightful than playing around with simulations. Simulations are usually used in cases too complicated for an analytical solution. Part of the reason for putting your code into math was also to demonstrate that what you are talking about is a relatively simple case. Nothing actually wrong with doing calculations on the computer if that suits you better, though.
  10. I do not completely get your point. And I think you are a bit over-excited about your computer program. Since you did not generate any replies so far, let me start with a few random comments:
     1) The calculation your computer program does can be performed analytically (with the help of Wolfram Alpha for the integrals):
     [math]\frac 13 \sum_{P = 0, 2\pi/3, 4\pi/3} \frac{1}{2\pi} \int_{0}^{2\pi} \cos^2 \alpha \, \cos^2 ( \alpha - P) + \left( 1- \cos^2 \alpha \right) \left( 1-\cos^2 ( \alpha - P) \right) \, d\alpha[/math]
     [math]= \frac 13 \sum_{P = 0, 2\pi/3, 4\pi/3} \frac{1}{2\pi} \int_{0}^{2\pi} 1 + 2 \cos^2 \alpha \, \cos^2 ( \alpha - P) - \cos^2 \alpha -\cos^2 ( \alpha - P) \, d\alpha[/math]
     [math]= \frac 13 \sum_{P = 0, 2\pi/3, 4\pi/3} \left[ 1 + \frac{1}{2\pi} 2 \left( \frac{\pi}{4}(\cos(2P) +2 ) \right) - \frac 12 -\frac 12 \right][/math]
     [math]= \frac 13 \sum_{P = 0, 2\pi/3, 4\pi/3} \left[ \left( \frac{1}{4}\cos(2P) \right) +\frac 12 \right] = \frac 12[/math]
     2) I was expecting entanglement to play a role, but I do not see how this is reflected in your code.
     3) Similarly, I do not see the connection to hidden variables. Are there any in your code?
     4) The movie is very nicely made. But my first impression is that it should be considered a nicely made movie about a topic, and that you should not expect a 7-minute movie to give a complete or accurate picture. I guess the point I am trying to make is: the movie is meant to get you to start exploring the topic, not to cover it completely. Some arguments made there may be incomplete as presented.
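     For comparison, here is a minimal numerical cross-check of that integral (my own reconstruction of the loop structure described in item 9 above, not the original poster's program; the step count is an arbitrary choice). It should print a value close to 0.5:
[code]
// Average cos^2(a)*cos^2(a-P) + (1-cos^2(a))*(1-cos^2(a-P)) over a in [0, 2*pi)
// and over the three settings P = 0, 2*pi/3, 4*pi/3. Analytically this equals 1/2.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double settings[3] = {0.0, 2.0 * pi / 3.0, 4.0 * pi / 3.0};
    const int steps = 1000000;                        // resolution of the inner loop
    double probability_same = 0.0;

    for (double P : settings) {                       // outer loop = the sum over P
        for (int i = 0; i < steps; ++i) {             // inner loop = the integral over alpha
            double alpha = 2.0 * pi * i / steps;
            double ca = std::cos(alpha) * std::cos(alpha);
            double cb = std::cos(alpha - P) * std::cos(alpha - P);
            probability_same += ca * cb + (1.0 - ca) * (1.0 - cb);
        }
    }
    probability_same /= 3.0 * steps;                  // normalize: 3 settings, 'steps' samples each
    std::printf("probability_same = %.6f\n", probability_same);  // prints ~0.500000
    return 0;
}
[/code]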
  11. Maybe it would help if you could specify what you are doing and what the "freak results" you expect are supposed to be. I did a check with the C++ random number generator to test whether a linear congruential rnd-gen can possibly create a series of 20 subsequent "tails" in a coin toss (head and tail having been defined as the random number being in the upper or lower half of [0; RAND_MAX], respectively). I had my doubts. The little experiment indicates that the frequencies with which n subsequent "tails" appear are what you'd expect from basic probability theory (P(n+1)/P(n) = 1/2). So in this respect the linear congruential random number generator you are likely using seems sufficient. Btw.: The probability not to see a number lower than 1101 in n random numbers uniformly drawn from [1100;1110] is roughly 0.9^n. The approximate number of runs you have to perform to see such a freak result is therefore 1/(0.9^n). In other words, if your sequence consists of n=100 numbers you'd have to look at ~40000 runs to see the freak result "no number lower than 1101" - in this case it would be expected not to see such a result in only 100 runs. In case I did not mention it: I strongly suggest being much more detailed/precise about what you are looking at. My feeling is that your questions/issues are very basic and that a lot of people in this forum could provide helpful comments if they knew what you are doing. EDIT: As a remark: It is not necessarily the random function that is predictable. Random numbers are surprisingly predictable in large amounts, at least for some observables. This is why casinos and (some) lotteries work. More scientifically, a large fraction of our scientific theories is based on the predictability of randomness (statistical physics/thermodynamics and many data analysis techniques for scientific experiments).
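     A sketch of the kind of check I mean (my own quick version; the seed, the 10^8 flips and the use of the plain C rand(), which is typically a linear congruential generator, are arbitrary choices):
[code]
// Count how often runs of exactly n consecutive "tails" occur, where
// "tail" = random number in the lower half of [0; RAND_MAX]. The counts should
// roughly halve from n to n+1, and runs of 20 tails should still show up.
#include <cstdio>
#include <cstdlib>

int main() {
    const int maxRun = 25;
    long runCount[maxRun + 1] = {0};   // runCount[n] = number of runs of exactly n tails

    std::srand(12345);
    int currentRun = 0;
    for (long i = 0; i < 100000000; ++i) {
        bool tail = std::rand() < RAND_MAX / 2;
        if (tail) {
            ++currentRun;
        } else {
            if (currentRun > 0 && currentRun <= maxRun) ++runCount[currentRun];
            currentRun = 0;
        }
    }
    for (int n = 1; n <= 22; ++n) {
        std::printf("runs of exactly %2d tails: %ld\n", n, runCount[n]);
    }
    return 0;
}
[/code]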
  12. According to a PDF I googled (http://turing.une.edu.au/~amth142/Lectures/Lecture_13.pdf) scilab's rand() method, which I assume you may be using, is a linear congruential random number generator. This type of random number generator is usually considered unsuitable for scientific use. I would not be too surprised if such a random number generator was technically incapable of throwing "heads" twenty times in a row. If you feel you need a better random number generator, use a better random number generator.
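     This is not Scilab code, but as a generic illustration of what "use a better random number generator" can look like (C++ standard library Mersenne Twister; the seeding and the 10^7 flips are arbitrary choices of mine):
[code]
// Coin flips from the Mersenne Twister instead of a linear congruential generator.
// With 10^7 flips, the longest run of heads typically comes out in the low twenties,
// i.e. long runs do appear with a decent generator.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(std::random_device{}());   // Mersenne Twister, randomly seeded
    std::bernoulli_distribution coin(0.5);      // fair coin: true = heads

    int longestHeadRun = 0, currentRun = 0;
    for (long i = 0; i < 10000000; ++i) {
        if (coin(rng)) {
            ++currentRun;
            if (currentRun > longestHeadRun) longestHeadRun = currentRun;
        } else {
            currentRun = 0;
        }
    }
    std::printf("longest run of heads in 10^7 flips: %d\n", longestHeadRun);
    return 0;
}
[/code]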
  13. The interaction with the Higgs field is one way to generate mass. It is not the only mechanism. The mass generated from the binding of quarks and gluons is responsible for most of the proton and neutron mass, which in turn is considered the main source of gravity. There is an expected correlation between the Higgs field and gravity in the literal sense: since some excitations of the Higgs field are massive, they might act as a source of gravity, so the two observables "Higgs field" and "gravity" are expected to be correlated. That's probably not what you were asking, though. It is generally not assumed that the Higgs field has much to do with explaining quantum gravity. The similarities stop beyond "both somehow deal with mass". Already the expected mathematical structures, the behavior with respect to rotations and the expected relations to other elementary particles are different.
  14. I wonder if the course of this thread comes from the reputation system being a simple topic that everyone can say something about (other than the actual topic, which is Ophiolite saying goodbye), or from virtual reputation in a room full of strangers having taken up a key role in this forum. From my personal forum experience, I can strongly recommend putting people on the ignore list before feeling compelled to insult them (incidentally, Ophiolite was one of those people who ended up on my ignore list as the result of such a process). It may not be very action-hero-like not to battle everyone you consider an idiot or simply wrong to the bitter end. But it's very time-effective and good for the nerves. Putting people on ignore may seem to defeat the purpose of a discussion forum. But in reality, once you are at the level where you consider putting people on ignore, the trade-off is not between ignorance and healthy discussion, but between ignorance and name-calling or smart-assing.
  15. I do not completely get your drawing. I feel you are still talking about an example where knowing the field in the volume outside a sphere does not tell you the charge distribution inside the sphere. This is absolutely correct: "inside" and "outside" are different volumes, even if one is topologically enclosed in the other. In any other case, two possible attempts at a proof would be: Proof 1: Apply Gauss' law to arbitrarily small volumes around every point of the region; the flux of the field through each small surface gives the enclosed charge, so running over all points constructs the charge distribution. Proof 2: The charge distribution is given by [math] \rho = \epsilon_0 \nabla \cdot \vec E[/math]. The derivative is unique. Therefore, so is the charge distribution. It's of course the same as in proof 1, but some people are more impressed by mathematical symbols than by construction algorithms. I have the feeling I am repeating myself a bit too often to still consider this a discussion. It looks more like a battle of egos to me. So I am out of this thread. Feel free to learn something from what I have said here or ignore it.
  16. The charge distribution in the cube is [math] \epsilon_0 \nabla \cdot \vec E = \epsilon_0 \nabla \cdot \text{const} = 0[/math]. The charge distribution outside the cube that created the field is not the question here, as I mentioned in pretty much every post in this thread.
  17. Given that I sketched an explicit algorithm of how to deduce the charge density from the field in my first post, I don't think that "the answer is still no" really adds value to the discussion. A counter-example or an explanation why Gauss' law does not apply would be great. I can also rephrase my first post if you have trouble understanding it (adding the paragraph apparently did not help). Consider a differential equation dE/dx = ρ(x). Knowing ρ in a certain interval does not define E, since there is an integration constant that can be chosen arbitrarily. This integration constant can be fixed by a boundary condition, as the OP already mentioned. Knowing E, on the other hand, uniquely defines ρ. It can be calculated by taking the derivative of E with respect to x (because of ρ = dE/dx and the derivative being unique). Now, imagine E to be the electric field and ρ to be the charge density.
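     A small numerical sketch of that 1D statement (my own example; the field E(x) = x^3/3 + 5 on [0,1] is an arbitrary choice, and the constant in front of the charge density is set to 1): the charge density inside the interval follows from the derivative alone, and the constant offset of E - the piece that would have to be fixed by boundary conditions when going the other way - drops out.
[code]
// 1D illustration: knowing E(x) inside an interval determines rho(x) = dE/dx there.
// Here E(x) = x^3/3 + 5, so rho(x) = x^2; the offset +5 never enters the derivative,
// which is exactly the integration constant lost when reconstructing E from rho.
#include <cstdio>

int main() {
    const int n = 11;
    const double dx = 0.1;
    double E[n];
    for (int i = 0; i < n; ++i) {
        double x = i * dx;
        E[i] = x * x * x / 3.0 + 5.0;   // field known only inside [0, 1]
    }
    // central finite differences recover rho(x) = dE/dx = x^2 in the interior
    for (int i = 1; i < n - 1; ++i) {
        double x = i * dx;
        double rho = (E[i + 1] - E[i - 1]) / (2.0 * dx);
        std::printf("x = %.1f   rho = %.4f   exact x^2 = %.4f\n", x, rho, x * x);
    }
    return 0;
}
[/code]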
  18. Not sure I am getting this one. The scenario I had in mind was having the field in a volume V and deducing the charge distribution in V. From what I understand, your example is still about not being able to take the field in V and deduce the charges outside V (let's assume the volume to be an open volume, meaning it behaves like an open interval, for simplicity). My line of thinking with the 1D example (which works analogously in higher dimensions) goes in the direction that you can only find deviations from the vacuum state, and that in the example the respective vacuum state is the uniform charge density. But it leaves the taste of having flung around a buzzword (vacuum state) to create the illusion of an explanation without being any step closer to understanding what goes wrong there.
  19. The OP seems very clear to me with respect to identical regions for field and charges. But you two guys seem to have a different opinion than me, so I accept this as experimental proof. I guess the best thing is to wait and see what the OP makes out of the replies generated so far. In the meantime, there is another issue I just happened to think about (warning to OP: this probably goes beyond what you were actually asking): At least in some rather pathological cases you can get into a situation where you can not determine the charges. Take a 1D world with periodic boundary conditions and a uniform charge density, for example. For symmetry reasons there should not be any non-zero vector-valued field and the potential must be constant. I think you cannot tell a zero charge density from a non-zero one in this case. Not really sure what that means. But what is actually bugging me: If there is one instance where knowing the field in region A does not tell you the charge in region A (despite the charge simply being the derivative), what other instances are there?
  20. I was assuming the case that the volumes in which you know the field and the volume whose charge you ask about are identical (which could be inferred from the OP asking about "in that region of space"). You are indeed correct that knowing the field in region A does not automatically tell you the charge distribution in a different region B.
  21. I meant my post as a rather obvious "yes, the field defines the charge distribution". I'll add a paragraph to make it clearer.
  22. Gauss' law tells you the field defines the total charge in the enclosed volume. This is true for any sub-volume in your field. Hence, for any region the field defines the charge, including arbitrarily small regions. Hence, knowing the field defines the charges. The only exception is if you count a negative and a positive charge at the same position as different from having no charge at that location (or 2x plus and 1x minus as different from a single plus charge). This cannot be resolved - neither by the argument with Gauss' law nor by the field at all, because only the net charge density goes into the field in the first place.
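     In symbols (my notation, SI units): for every sub-volume V inside the region where the field is known,
     [math] \oint_{\partial V} \vec E \cdot d\vec A = \frac{Q_{\text{enc}}(V)}{\epsilon_0}, [/math]
     and letting V shrink down to a point turns this into the local statement
     [math] \rho = \epsilon_0 \, \nabla \cdot \vec E, [/math]
     so the net charge density follows from the field everywhere inside the region.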
  23. Maybe some people find finding a solution to a possible scenario interesting? In that case, finding a solution seems extremely easy to me. But knowing the original problem, and the constraint under which a solution exists, may have helped with that.
  24. It is not too common to include lecture material in an application. So its quality may actually not be that relevant for determining the strength of an application. The reputation of the grade-granting institution plays a role at some level, of course. But the question appears to be more about school grades than about how an okay grade from a well-known institution compares to a "summa cum laude" from a diploma mill.
  25. Marshalscienceguy was probably asking about the real world, where grade inflation not only exists but is extremely dominant. E.g. university grades: In Germany, the possible passing grades are "very good" (A), "good" (B), "satisfactory" (C) and "sufficient" (D) - the real grades (in physics, chemistry and biology) are "very good" meaning average and better, "good" meaning okay to so-so, and anything below meaning "we really did not want to see this student turn up in an exam again, so we gave him/her a passing grade". The statement of average meaning average by definition is certainly correct, but may be irrelevant in this case. The relevant question would be how the grades are actually distributed among a relevant peer group (which is not the same as how many points you need to get the grade).