
Zorgoth


Posts posted by Zorgoth

  1. The previous answer provides a good basic explanation of how probability works over infinite sets. Indeed, the purpose of measure theory is primarily to properly understand integrals and probability.

     

    In basic measure theory, arithmetic involving infinity is defined explicitly; in particular, infinity*0 = 0. So an integral over a set of measure zero is always zero, even if the integrand is infinite on that set, and the integral of the zero function is always zero, even over a set of infinite measure. (A small worked instance is sketched at the end of this post.)

     

    This definition ONLY applies to measure theory (and specifically to the definition of the integral)! You cannot use this identity in any other context (for example, you cannot say that lim_{n->infinity}(n/n) = 0 just because lim_{n->infinity}(n) = infinity and lim_{n->infinity}(1/n) = 0).

     

    Note that measure theory essentially eliminates ratios from probability wherever they don't make sense; everything is expressed in terms of products, sums, and limits, so you don't really have to worry about dividing by infinity. In general, though, treating 1/infinity as zero is a safe bet. infinity/infinity or 0/0 is another matter: 0/0 can appear when conditioning on an event of probability zero, and in the continuous setting this is handled with conditional probability density functions.
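
    For concreteness, here is a small worked instance of the infinity*0 = 0 convention (my own illustration, using Lebesgue measure, not something from the quoted answer). The rationals Q have Lebesgue measure zero, so even an everywhere-infinite integrand contributes nothing over Q, and the zero function integrates to zero over all of R even though R has infinite measure:

    \[
    \int_{\mathbb{Q}} \infty \, d\lambda = \infty \cdot \lambda(\mathbb{Q}) = \infty \cdot 0 = 0,
    \qquad
    \int_{\mathbb{R}} 0 \, d\lambda = 0 \cdot \lambda(\mathbb{R}) = 0 \cdot \infty = 0.
    \]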

  2. In another thread (now locked) "Dr.Rocket" and others offer a useful perspective on the use of differentials.

     

    For example, it is concluded that dx/dy (in usual mathematics) is not a ratio. I have seen differentials defined to be real numbers - really just finite increments Δx, Δy - in which case they can form a ratio, but this is not what I want to ask about.

     

    I see many cases in the physical sciences, including the earth sciences and especially in thermodynamics, where differentials are used in what I'll call "casual" (or short cut) derivations. I'll include an example below.

     

    My particular interest is not to rain on anyone's parade; the contexts in which I find these derivations persuade me that the authors are not schlocks. I just seek some guidance/assistance in making such "derivations" a little more explicit. In the previous related thread, Dr.Rocket pointed out that many of these shortcuts can be made explicit by referring to the chain rule. I don't see how that might apply here, but maybe there are other implicit justifications that are being invoked.

     

    In the following excerpt, we start with two simple equations involving differentials (isolated differentials - I don't understand the meaning we should give that, either!) and proceed to combine them into another equation and then integrate. Can we make this derivation explicit?

    [edit: the editor is apparently not WYSIWYG; I apologize for the awkward math typography.]

    -------------------------------------------------------------------------------------------------------------------------

    dA = k_A A, dB = k_B B, where the k's are rate constants for the forward reaction. Assume no back reaction at all (e.g., dry wind blowing across a lake, so there's essentially no possibility that an evaporated H2O will get back into the lake).

    but k_B/k_A = a,

    and dB/dA = a (B/A), (1/B) dB = a (1/A) dA ...integrate

    getting ln(B/Bo) = a ln(A/Ao)

    ...

    I do note that I can choose to interpret dA to mean dA/dt, etc., in which case the conclusion follows.

    This is an example of the method of separation of variables for ODEs, which is perfectly standard mathematics and is taught in any college-level differential equations course.

     

    Like the chain rule, it intuitively splits up the dx, dy, or what have you in a derivative as if it were a ratio. As for what dx, dy, etc. actually are, the precise definition comes from measure theory and/or differential geometry. For the non-mathematician, take things like separation of variables on faith and think of dx, dy, etc. as infinitesimals over which we sum when we integrate, like the Δx in the Riemann integral as the step size tends to zero. A sketch of how the chain rule makes the quoted derivation explicit follows below.
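
    Here is that sketch (my notation, not the original author's): once dA and dB are read as dA/dt and dB/dt, as suggested above, the quoted equations become

    \[
    \frac{dA}{dt} = k_A A, \qquad \frac{dB}{dt} = k_B B, \qquad a := \frac{k_B}{k_A}.
    \]

    Dividing the second equation by the first and using the chain rule to regard B as a function of A gives

    \[
    \frac{dB}{dA} = \frac{dB/dt}{dA/dt} = a \, \frac{B}{A},
    \qquad \text{equivalently} \qquad
    \frac{1}{B}\frac{dB}{dt} = a \, \frac{1}{A}\frac{dA}{dt}.
    \]

    Integrating both sides in t from 0 to T (this is exactly what "separate and integrate" abbreviates) yields

    \[
    \ln\frac{B(T)}{B_0} = a \ln\frac{A(T)}{A_0},
    \]

    which is the stated conclusion ln(B/Bo) = a ln(A/Ao).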

  3. I am a PhD student and am wondering if there are good publications, not related to my direct field of study, that are targeted at scientists who aren't experts in a specific field. That is to say, I am looking for leisure-time science reading material targeted at people who know lots of math and understand science rather than at the general public. I'm not really too fussy about what sort of science or math I'm reading about.

     

    Any suggestions?

  4. Don't waste your time on integral(e^(-x^2)) :)

     

    Its antiderivative is essentially the error function erf (up to a normalizing constant), and it has no closed form (i.e. it cannot be expressed in terms of elementary functions). To prove the integral converges, just use a comparison test rather than evaluating it.

     

    You *can* integrate it from -infinity to infinity using a trick from multivariable calculus, and the answer is sqrt(pi); the trick is sketched below. Don't say that in your homework though, because you presumably can't prove it using calculus 2 methods.
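
    For anyone curious, here is the standard polar-coordinates argument (included for reference, not as homework material): write I for the integral over the whole real line, square it, pass to a double integral, and switch to polar coordinates, so that

    \[
    I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)} \, dx \, dy
        = \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2} \, r \, dr \, d\theta
        = 2\pi \cdot \tfrac{1}{2} = \pi,
    \]

    and therefore I = sqrt(pi).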
