
Posts posted by DrRocket

  1. But, when I try to solve one of the questions...


    Find the value of [math] \lim_{n \to -1} \frac{n^2 - 1}{n + 1} [/math]


    [math] y = (n^2 - 1)(n + 1)^{-1} [/math]


    [math] y + \delta y = [(n+\delta n)^2 - 1][n + \delta n + 1]^{-1} [/math]


    [math] y + \delta y = \frac{n^2 + 2n\delta n + \delta n^2 - 1}{n + \delta n + 1} [/math]


    [math] \delta y = \frac{n^2 + 2n\delta n + \delta n^2 - 1}{n + \delta n + 1} - \frac{n^2 - 1}{n + 1} [/math]


    Well, I think I'm just not good with fractions, or rather I don't understand why I'm at this step at all (I'm probably just following the given example).


    hope anyone can give a hand, thank you.


    EDIT: btw, the answer is = -2


    This has nothing to do with taking a derivative, by any method.


    [math] \lim_{n \to -1} \frac{n^2 - 1}{n + 1} = \lim_{n \to -1} \frac{(n - 1)(n + 1)}{n + 1} = \lim_{n \to -1} (n - 1) = -2 [/math]


    I think you need to go back and understand limits a bit better. Then and only then are you ready to understand derivatives.


    You are concentrating on calculating and "finding the answer" when you need to concentrate on what the concepts mean.
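    For anyone who wants to see the cancellation at work, here is a quick numerical sanity check; it only illustrates the limit, it is not a substitute for understanding it:

```python
# The quotient (n^2 - 1)/(n + 1) equals n - 1 for every n != -1,
# since n^2 - 1 = (n - 1)(n + 1).  Evaluating near n = -1 shows the
# value approaching -1 - 1 = -2.

def f(n):
    return (n**2 - 1) / (n + 1)

for h in (1e-2, 1e-4, 1e-6):
    print(f(-1 + h), f(-1 - h))   # both values approach -2
```

    The floating-point evaluation is well behaved this close to the singular point only because the cancellation is exact in the algebra; the point is the concept, not the computation.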

  2. The exact text, ch. 10, page 348, says: "The initial singularity in the universe. The expansion of the universe is in many ways similar to the collapse of a star, except that the sense of time is reversed. We shall show in this chapter that the conditions of theorems 2 and 3 seem to be satisfied, indicating that there was a singularity at the beginning of the present expansion phase of the universe."




    Without space-time or matter or anything but the singularity one would think it would have no choice but to be stable.



    I'll wait for your rebuttal.





    One more time.


    Within the context of general relativity, there are NO singular points in spacetime (aka the universe). Therefore the statement that "the universe began as a singularity" is meaningless.


    The singularity theorems of Penrose and Hawking show that it is impossible to indefinitely continue timelike geodesics into the past. That is the sense in which there is no "before" the big bang.


    Quite a few people who write popularizations, including physicists who one would think should know better, do not understand this point. Perhaps they too ought to read the original papers by Hawking and Penrose, or the book by Hawking and Ellis, The Large Scale Structure of Space-Time, and learn what the singularity theorems actually say.


    I have no interest in continuing to rebut each and every over-simplification or popularization that you care to drag out. As I have told you, there are a lot of misconceptions, over-simplifications, and downright erroneous statements that have been published. The nature of the singularity theorems is precisely as I have stated. The usual interpretation is that general relativity is inadequate to describe the earliest moments of the universe, not that the "universe began as a singularity". Given the apparent breakdown of our best available theories, no one has a clue what happened at t=0. "No one" includes the people who wrote the books that you are reading. Unfortunately the accurate answer, "I don't know", doesn't sell books.


    Any suggestions? Please, this is driving me insane and I can't get past it to chapter two and I need to also get back to making progress in Schaum's Outline of Tensor Calculus, Outline of Linear Algebra, and Outline of Group Theory too.



    Trying to do differential geometry, group theory, linear algebra, and tensor calculus simultaneously is neither realistic nor logical.


    You need to understand linear algebra thoroughly before you undertake differential geometry or tensor calculus. In fact you need a good deal more to study those subjects, including basic real analysis and topology.


    Also, Schaum's outlines are not the best way to study advanced subjects. They are intended as supplements to other material, a set of lectures or a text, and tend to emphasize symbol pushing over fundamental understanding. This is apparent in the nature of your questions. You might do better to read some real books. A very good book on linear algebra, suitable for the study of analysis and geometry, is Finite-Dimensional Vector Spaces by Paul Halmos.

  4. You say I can't think of energy as a substance. I do not agree with you 100%. I can think of the vacuum of space as negative energy, a very dilute expanse of expanding (negative) anti-matter. Any fluctuations that create matter/anti-matter virtual particles, being affected by the negative vacuum: matter (positive) to the centre, anti-matter (negative) to the circumference.

    Atoms (of matter) form with positive energy at the centre and negative energy at the circumference (because the vacuum is negative).


    This is nonsense, word salad. You need to learn some physics.

  5. Let me preface this by saying that I've just recently begun studying physics. Okay, so I'm wondering about electromagnetism. Since the photon is both a quantum of light and the carrier of the electromagnetic force, then how is it that the electromagnetic force is able to function in an environment where there is no light? If there was an absence of the electromagnetic force, then molecules and atoms would no longer hold together, which suggests that once the light was switched off in a room, matter could not exist in that room. The only explanation I can come up with is that there is another type of photon that is not emitted from a light source.


    The electromagnetic force does not "function in an environment where there is no light".


    But you must realize that the term "light" in the context of physics includes ALL frequencies, not just visible light. Your eyes are sensitive to only a very small part of the entire electromagnetic spectrum. There are electromagnetic waves just about everywhere, else your cell phone would not work and it certainly does work in a "dark" room.


    Moreover, at the quantum level the carrier of the electromagnetic force is actually a virtual photon rather than a real photon. The electrostatic force, for instance, is quite real even in the absence of propagating electromagnetic waves (i.e. real photons).

  6. Connexions allows anyone to publish, on the web, anything about any subject. There are no referees to shoot down your paper. Your paper does not need to be new information. If you feel that you are knowledgeable about some well-researched subject, then write a summary paper. Connexions will publish it for you on the web. Connexions makes it easy for anyone to become a published author. Do you have a paper in you?


    If you call that "published" then you are in need of much higher standards. If an organ publishes anything, then nothing that it publishes can be assumed to be of value without a good deal more research into what the paper says.


    There is a reason that high-value scientific papers are published in peer-reviewed journals with high standards and a high rejection rate.

  7. In the mean time, yesterday I did further reading on this topic.


    Is this something to do with the Holographic principle?




    In the above article P. C. W. Davies talks about the status of the laws of physics and "the unreasonable effectiveness of mathematics in the physical sciences", as Wigner put it.


    The traditional approach to view the laws of physics and matter is kind of like this.


    A. Laws of Physics --> matter --> Information.


    Here mathematics is viewed as "platonic forms existing in their own realm" and space-time is ontological.


    According to the Holographic principle, information is ontological and it is the basis of the universe.


    C. Information --> Laws of Physics --> Matter.


    So he goes on to say that with this approach the universe is self-sufficient and self-consistent because "the universe computes within the universe". What we call mathematics is nothing but information processing within the cosmological system, and hence the notion that mathematics exists in its own platonic realm is not required.


    This leads to an ontological problem about information, as to "what those bits are", and if space-time originates as complex computational states, I don't know how this fits with the geometry of space-time.


    In the experiment they say that they are going to measure two signals from two interferometers; if all noise is eliminated and they find that both signals appear encoded, or the same, then that will be the indication that 'space-time is digital'. Again, I don't see how, just by observing some correlations between signals, you can conclude that "the universe computes within the universe".


    Is the universe computable or non-computable? If it is computable, then how can we see the truth value of Gödel's statements while Turing machines fail to do so?


    Thank You.


    I think you can safely ignore most things from Davies.

  8. I read this interesting article on Scientific American from the RSS feed of my blog.





    Can any physicist give me a picture of how the results of that experiment are going to change the way we think about what fundamental reality is, and what the consequences will be, the new physics and possibilities that will emerge, if space is indeed found to be digital?


    I read the article and the comments and I am not really getting any picture at all and it has confused me even more.




    There have been a number of attempts to model space as discrete. They have not panned out.


    There is still ongoing research involving other approaches to model space as discrete. They may eventually bear fruit, but they have not done so yet.


    Since there is no viable theory that is based on discrete space, there is no sensible way to foresee what might result from a future theory that is based on such a picture of space. I doubt that this lack of clarity will be much of a barrier to those who purvey speculation as science in the popular literature. Ask Michio Kaku if rank speculation is what you seek.

  9. How can a cosmological constant be constant? Any energy density or force should still follow conservation principles, shouldn't it? So how can a constant remain stable or even accelerate without some sort of input?



    Also, shouldn't this conservation law be applied to gravity? Does gravity weaken over time by energy conservation?



    It seems to me that force should also be accounted for when figuring the overall density of the universe, and not just baryonic matter. It seems that energy or force can be detrimental to a system's density or pressure. Why is it not included in such a measurement?




    Just some short questions for now.



    Energy conservation in general relativity is a bit of a problem. Energy can be shown to be conserved locally -- at any single point. But energy is not necessarily conserved over a non-zero volume. So there is no global conservation of energy law in general relativity. This is not as serious an issue as you might think, since energy conservation is normally thought of as applying between two instants of time, and there is no such thing as global time in general relativity either.


    To compound that problem, gravitational potential energy is not clearly defined in general relativity.


    The cosmological constant does not "accelerate" but rather is a factor in the field equations that describe the spacetime metric and it is metric expansion of space that is accelerating.


    Pressure is included in the stress-energy tensor that determines spacetime curvature. A positive cosmological constant is equivalent to a negative pressure term, and that is the possible connection between the quantum mechanical notion of the zero point energy of the vacuum and the cosmological constant. Unfortunately the best estimate of the cosmological constant in terms of that vacuum energy overpredicts the observed cosmological constant by 120 orders of magnitude.
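    The "120 orders of magnitude" figure can be reproduced, at order-of-magnitude level, with a back-of-the-envelope computation. This is a sketch using standard constants and round observed values; the Planck-scale cutoff and the ~0.7 dark-energy fraction are the usual textbook assumptions, not values from this thread:

```python
import math

# Naive QFT vacuum energy density with a Planck-scale cutoff,
# compared with the observed dark-energy density.

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s

# Planck energy density ~ c^7 / (hbar G^2), in J/m^3
rho_planck = c**7 / (hbar * G**2)

# Observed: ~70% of the critical density (~9.5e-27 kg/m^3), times c^2
rho_obs = 0.7 * 9.5e-27 * c**2    # J/m^3

orders = math.log10(rho_planck / rho_obs)
print(orders)   # roughly 120; the exact figure depends on the cutoff
```

    The exact number of orders depends on where the cutoff is placed, which is why quoted values range from about 60 to about 120.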

  10. I think people do that because they have at least a satisfactory answer to define it, but time doesn't have such a definition.


    The definition of distance is not one bit "better" or more fundamental than the definition of time.

  11. Thank you for the book recommendations; they're certain to be my first stop after I've mastered complex analysis. Speaking of which, is there any book you might recommend on that topic?


    There are lots of books on complex analysis.


    At an introductory to intermediate level there is a new one that is very good -- Complex Variables by Joseph L. Taylor. It is published by the American Mathematical Society and is therefore relatively inexpensive (in the expensive realm of math and science books). There is a discount for members.


    At a somewhat higher level there is Real and Complex Analysis by Walter Rudin, which contains a lot more than just complex analysis.


    Then there are a number of older classic books which are still extremely good: Theory of Functions of a Complex Variable by Carathéodory; Analytic Function Theory by Einar Hille; Complex Analysis by Lars Ahlfors.


    At the most elementary level there is also Complex Variables and Applications by Churchill et al.








    Great. Now we're on the move. ...It may be true that what I'm doing is not mathematics but I don't actually care. If it works then I don't see that it matters. ...

    Let's start with an empty number line and create the numbers by a dynamic process of multiplication and division. The numbers 0 and 1 create nothing. When we add 2 we get the powers of 2, and when we add 3 we get a combination wave of products that ensures that four out of six numbers cannot be prime.


    So now we have a twin prime at every 6n number.


    ...Okay. This is not number theory as we know it.... ...


    I won't bore you with more. The point is just that as we add each consecutive prime we can calculate its maximum impact on the density of twin primes further up the line. So for the products of 7 we know that the most they can reduce the as yet unsieved twins is 2 in 42, or 2 in 7 of the possible locations. Some of these will already be products of 5, but this doesn't matter. If we assume that 2/7 locations are crossed off then we have a worst case scenario.



    DrR - Would you have time to go back to that sentence I wrote (the nonsense one) and tell me exactly at what point it goes wrong? I thought at least it started out okay.


    This is complete nonsense. It has NOTHING to do with the twin prime conjecture and in fact it has NOTHING to do with much of anything.


    We already know the asymptotic distribution of the primes. That is the content of the Prime Number Theorem discussed earlier. So we know that the primes are distributed rather sparsely (asymptotically the number of primes less than x behaves like x/ln(x) ). We have no idea how many of those infinitely many prime numbers are twins.
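    The x/ln(x) behavior mentioned above is easy to see empirically. A small sketch; the sieve and the cutoff of 100,000 are illustrative choices:

```python
import math

# Sieve of Eratosthenes; compare pi(x) with the PNT estimate x/ln(x).

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

x = 100_000
pi_x = len(primes_up_to(x))
print(pi_x, x / math.log(x))   # 9592 vs ~8686; their ratio tends to 1
```

    One could also count twin primes in the same sieve (consecutive primes differing by 2), but no amount of counting settles whether there are infinitely many of them.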


    The twin prime conjecture is that there are infinitely many twin primes. We already know that they are sparsely distributed, obviously at least as sparsely distributed as the set of all primes. We don't know how sparse. We don't know how sparse to the extent that we don't know if there are only a finite number of them. Your reasoning has NOTHING to do with settling that issue.


    Your fundamental problem is that you don't understand that you don't understand.

  12. I don't know that there is an exact answer to that. What we do know is that they evolved that way and they seem to work just fine.





    Now, if the process were that of an "intelligent designer" rather than that messy evolution thing, then perhaps the color would be different.


    Or maybe it is just a matter of being green with envy.

  13. If one is suicidal couldn't this be considered to be contradictory by definition?




    Not at all.


    It could well be the signature of a rational mind grappling with unavoidable impending doom.


    Think of someone with Alzheimer's disease who is in the last stages of being able to think rationally. Such a person might very well wish to end his life. At this point in time no one, not even family, can legally assist that person in carrying out his very rational wish. The result is immense expense and turmoil for the family, while the patient slips from the grip of reality into an existence that cannot be called living.


    Been there.



  14. Now, because of the antisymmetry property of the linear ordering, the following holds:


    if aRb and bRa then a = b ---> Does this mean that the least element in the subset, if it exists, is unique?



    Yes, it really is just that simple. If a least element exists, it is unique simply because any two least elements are equal.
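    The argument can be made concrete on a small example. A sketch, with divisibility on a toy set standing in for the ordering (my choice of example, not from the thread):

```python
# Uniqueness of the least element: if a and b are both least, then
# aRb and bRa, and antisymmetry gives a == b.  Checked here on the
# divisibility order restricted to {2, 4, 8, 16}.

S = {2, 4, 8, 16}
R = lambda a, b: b % a == 0    # aRb iff a divides b

least = [a for a in S if all(R(a, b) for b in S)]
assert len(least) <= 1         # antisymmetry forces at most one
print(least)                   # [2]
```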

  15. Now was that really so hard?



    Yes. This is elementary stuff and I do not intend to waste any more time on you. It merely codifies the obvious.


    You should have done this for yourself or else stopped offering nonsensical arguments.


    When you don't know what in the hell you are talking about, it is sometimes advisable to shut up, listen, and do your own damn homework.

  16. I think some of that comes down to the fact that undergrad labs are little more than mere recipe following; it's a necessary format because students really have no clue what they're doing when they first enter a lab, but it comes at a cost when you consider that your typical student will put minimal thought into why they are being told to do what they're being told to do.



    As I recall, freshman chemistry lab consisted of following by rote a procedure that came from on high, performing a series of steps using equipment that probably originally belonged to Robert Boyle and had not been maintained since.


    The primary objective seemed to be to complete the assignment and clean up the mess within the prescribed 2-hour laboratory period and then cobble together a "report" on the outcome of following the recipe. Hopefully one accomplished this with minimal threat to life and limb. It was very artificial, very contrived, and basically worthless.


    I hope things have improved a bit since then.


    The point is that "practical" laboratory experience, while very beneficial to those who understand the fundamental principles involved, is indistinguishable from cooking following a cookbook or witchcraft to those who have yet to grasp the basic theory.


    The same comments apply to freshman physics labs. I recall one class where we used iron filings to map magnetic field lines. We cut the paper diagonally to split the map between students. Most of the grade was based on how one folded that triangle to get it into the prescribed laboratory report book.


    "Hands on" approaches only work when the student has sufficient knowledge to engage the brain at the same time. That does not necessarily occur concurrently with the first glimmer of understanding of the basic theory. I found good classroom demonstrations infinitely more enlightening than struggling with poor laboratory equipment.


    Now, once one gets to a somewhat more advanced setting all of these problems seem to disappear. Students have greater understanding of the fundamentals. The equipment in research laboratories and upper class labs is much better, and may actually work.

  17. Which is what I said. There exist black holes with extreme tidal forces outside the event horizon. Now that we've got that established, let's get to the question.


    Is it possible for a black hole to have tidal forces strong enough to rip apart atoms (or even their constituent nucleons)?


    We have established nothing except that you are not listening.


    It is fairly easy to calculate the gradient of the gravitational acceleration, as a function of radial distance from the center, for both neutron stars at their surface and black holes at the event horizon.


    For a neutron star that gradient is about [math] -3.4 \times 10^{8} \frac {1}{s^2}[/math] and for a black hole of only 2 solar masses it is about [math] -5.2 \times 10^{9}\frac {1}{s^2}[/math]. That qualifies as severe, as quoted by MigL, but hardly extreme in the context of inter-atomic or nuclear forces. For instance, two one-kilogram masses separated by one millimeter would experience a differential force (i.e. tidal force) of about 76,210 lbf in the first case and 115,000 lbf in the second case. These forces are well within what can be routinely applied in Earth-bound tensile testing machines, and in fact are well within the structural capability of common steel bars of one-inch cross section.
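    Those gradients come from the Newtonian tidal formula dg/dr = -2GM/r^3. A sketch reproducing the order of magnitude; the specific neutron-star mass and radius below are my illustrative choices, so the values differ somewhat from those quoted:

```python
# Tidal gradient |dg/dr| = 2GM/r^3 at radius r from a point mass M.

G     = 6.674e-11    # m^3 kg^-1 s^-2
c     = 2.998e8      # m/s
M_sun = 1.989e30     # kg

def tidal_gradient(M, r):
    """Magnitude of dg/dr, in 1/s^2."""
    return 2 * G * M / r**3

# Neutron star: ~1.4 solar masses, ~12 km radius
ns = tidal_gradient(1.4 * M_sun, 12e3)

# 2-solar-mass black hole, evaluated at its Schwarzschild radius
M_bh = 2 * M_sun
r_s  = 2 * G * M_bh / c**2
bh   = tidal_gradient(M_bh, r_s)

print(ns, bh)   # ~1e8 and ~1e9 1/s^2: severe, but hardly "extreme"
```

    Multiplying either gradient by a separation and a mass gives the differential force directly, which is how the lbf figures above are obtained.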


    Atomic and nuclear masses are quite a bit less than 1 Kg and separation distances are quite a bit more than 1 mm. But the same forces that operate at those distances and masses are what hold ordinary material together. Mushroom clouds have not been reported over the sites of mechanical pull testing machines.


    The tidal forces at the event horizon of even a smallish black hole are not at all "extreme" in the context usually associated with the "spaghettification" discussed in popularizations for children. They are not even extreme in the context of typical material properties for structural materials used in everyday construction.

  18. As DrR has stated several times, a large, galactic centre sized (>1000000 solar masses ) black hole would have very weak tidal ( differential gravity ) forces until well inside the event horizon. You wouldn't notice any difference upon crossing the event horizon until much later. A very small black hole or, for that matter even a neutron star, would have severe tidal forces ( even outside the event horizon of the small black hole ).


    Completely, totally, utterly correct.


    You got me; tidal forces don't exist.


    You are getting more ignorant rather than more informed. Neat.

  19. Hello everybody! :)


    I'm looking for publications (dissertations, articles in journals,... whatever is scientifically based) concerning the temperature between sliding surfaces. I need this information for my own bachelor's thesis and I need to calculate the temperature expected on the contact surface between two bodies. These two bodies will be rubbing against each other at high speed. Body 1 will be some defined steel, Body 2 will consist of the same steel coated with a thin film of some friction-reducing material.


    So what I am looking for is in the ideal case some formula, where I can put in my materials' parameters and get the contact temperature. But any publication that's close to this is also welcome.


    I have been searching the web of course, but I thought perhaps there might be someone here who has some concrete idea where I could find an answer to my problem.:huh:





    Thank you for any helpful response!


    You would need to know the geometry of the steel plate (or make the approximation of an infinite plate, in which case thickness is sufficient), the normal force per unit area, the coefficient of friction, the thermal diffusivity of the steel, the density of the steel, the initial temperature, the temperature of the surroundings of the steel, and the emissivity of the steel. If the coating you intend to use has a significant effect other than on the coefficient of friction, you will need those parameters as well.


    It is fairly easy to calculate the energy per unit area generated as heat by sliding the plate. It is a lot more difficult to calculate the temperature profile through the steel as a function of time and it involves many more variables.


    If this is for a bachelor's thesis, should you not have some idea of the basic physics involved, and should you not be looking for something of greater depth than a cookbook formula (which cannot exist, given the dependence of the temperature sought on many parameters)?

  20. Good thing I wasn't talking about what goes on inside the event horizon, but rather talking about what happens before it even gets there.


    In which case it is abundantly clear that nothing at all happens. Things are rather ordinary outside and even immediately inside the event horizon of a large black hole.
