
KJW

Senior Members
  • Posts

    321
  • Joined

  • Last visited

  • Days Won

    1

KJW last won the day on January 15

KJW had the most liked content!


KJW's Achievements

Atom (5/13)

40

Reputation

  1. One thing about the integral of [math]x^n[/math] that I find interesting is the case of [math]n = -1[/math]: [math]\displaystyle \int x^n\, dx = \begin{cases}\ \dfrac{x^{n+1}}{n+1} + C & \text{if } n \neq -1 \\ \\ \ \log(x) + C &\text{if } n = -1 \end{cases}[/math] Note that: [math]\displaystyle \int x^{-1 + \varepsilon} \ dx = \dfrac{x^{\varepsilon}}{\varepsilon} + C[/math] for all [math]\varepsilon \neq 0[/math] regardless of how small [math]\varepsilon[/math] is. Furthermore, note that [math]x^{-1 - \varepsilon}[/math] can be deformed to [math]x^{-1 + \varepsilon}[/math] without discontinuity at [math]x^{-1}[/math]. Therefore, one would expect that: [math]\displaystyle \int x^{-1 - \varepsilon} \, dx[/math] can be deformed to: [math]\displaystyle \int x^{-1 + \varepsilon} \ dx[/math] without discontinuity at: [math]\displaystyle \int x^{-1} \ dx[/math] even though the above formula seems to indicate that this is not the case. But let's consider the definite integral: [math]\displaystyle \lim_{\varepsilon \to 0} \displaystyle \int_{1}^{x} u^{-1 + \varepsilon} \ du[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon} - 1}{\varepsilon}[/math] [math]= \log(x)[/math] Thus, it can be seen that the definite integral of [math]x^{-1 + \varepsilon}[/math] is continuous with respect to [math]\varepsilon[/math] at [math]x^{-1}[/math]. 
Interestingly, this notion can be extended to the definite integral of [math]\log(x)[/math] as follows: [math]\displaystyle \int_{1}^{x} \log(v) \ dv[/math] [math]= x \log(x) - x + 1[/math] And: [math]\displaystyle \lim_{\varepsilon \to 0} \displaystyle \int_{1}^{x} \displaystyle \int_{1}^{v} u^{-1 + \varepsilon} \ du \ dv[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \displaystyle \int_{1}^{x} \dfrac{v^{\varepsilon} - 1}{\varepsilon} \ dv[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{1}{\varepsilon}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x (\varepsilon + 1)}{\varepsilon (\varepsilon + 1)} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{(\varepsilon + 1)}{\varepsilon (\varepsilon + 1)}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x \varepsilon}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon (\varepsilon + 1)} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{\varepsilon}{\varepsilon (\varepsilon + 1)} + \dfrac{1}{\varepsilon (\varepsilon + 1)}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x \varepsilon}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon (\varepsilon + 1)} + \dfrac{\varepsilon}{\varepsilon (\varepsilon + 1)}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon} - \dfrac{x \varepsilon}{\varepsilon} - \dfrac{x}{\varepsilon} + \dfrac{\varepsilon}{\varepsilon}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon} - x - \dfrac{x}{\varepsilon} + 1[/math] [math]= x \Big(\displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon} - 1}{\varepsilon}\Big) - x + 1[/math] [math]= x 
\log(x) - x + 1[/math] However, if one starts with [math]x^{\varepsilon}[/math] and forms the derivative: [math]\displaystyle \lim_{\varepsilon \to 0} \dfrac{dx^{\varepsilon}}{dx}[/math] [math]= \displaystyle \lim_{\varepsilon \to 0} \varepsilon x^{\varepsilon - 1}[/math] [math]= 0[/math] If we consider [math]\varepsilon[/math] to be small but not infinitesimal, then for the integral, we start with [math]x^{\varepsilon - 1}[/math] and end with [math]\dfrac{x^{\varepsilon}}{\varepsilon}[/math], whereas for the derivative, we start with [math]x^{\varepsilon}[/math] and end with [math]\varepsilon x^{\varepsilon - 1}[/math]. That is, the derivative is smaller than the integral by a factor of [math]\varepsilon[/math], becoming zero in the limit. Thus, although repeated integration starting from [math]x^{\varepsilon - 1}[/math] can use the power-function integration formula, the resulting sequence of functions is distinct from the power functions obtained by starting from, for example, [math]x^0[/math].
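The limits above are easy to check numerically. Below is a minimal Python sketch (the value of [math]x[/math] and the tolerances are illustrative) confirming that [math](x^{\varepsilon} - 1)/\varepsilon \to \log(x)[/math] and that the iterated integral approaches [math]x \log(x) - x + 1[/math] as [math]\varepsilon \to 0[/math]:

```python
import math

# Check that the definite integral of u^(-1+eps) from 1 to x,
# i.e. (x**eps - 1)/eps, approaches log(x) as eps -> 0.
x = 5.0
for eps in (1e-2, 1e-4, 1e-6):
    approx = (x**eps - 1) / eps
    print(eps, approx, math.log(x))

# Likewise the iterated integral evaluated above,
# x^(eps+1)/(eps(eps+1)) - x/eps - 1/(eps(eps+1)) + 1/eps,
# approaches x*log(x) - x + 1.
iterated = x**(eps + 1) / (eps * (eps + 1)) - x / eps - 1 / (eps * (eps + 1)) + 1 / eps
print(iterated, x * math.log(x) - x + 1)
```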
  2. I'm inclined to think that a "dangerous chemical" is one that is dangerous to those who work with it as a chemical, as well as to anyone in the vicinity of an accident involving it. Dangerous chemicals require more stringent safety protocols, which reduce the likelihood of deaths but not the danger. The danger from sugar does not come from it being a chemical, but from it being a food. Similarly, the danger of drowning in water does not come from water being a chemical. On the other hand, safety protocols demand that no one travel in an elevator with liquid nitrogen. That is, liquid nitrogen might not be especially dangerous, but it does have hazards that can lead to death. Ethers are not especially dangerous... unless they're old, in which case distilling them can lead to an explosion. Also, a dangerous chemical need not only cause death; serious injury counts too. For example, osmium tetroxide is dangerous because it can cause blindness if any gets on the eyeball.
  3. According to Wikipedia, nitrogen triiodide is more sensitive, being the only known chemical explosive that detonates when exposed to alpha particles and nuclear fission products. I doubt that. It is my understanding that the most toxic known substance is botulinum toxin, with an estimated human median lethal dose of 1.3–2.1 ng/kg. Interestingly, carborane acid, which might actually be the strongest known acid and is the only known acid to protonate carbon dioxide, is considered to be "gentle". I often walk past 1 kg bags of sugar while shopping in a supermarket. I do so without any fear that my life is in danger. I can't exactly say the same about lithium-ion batteries in the home. And if I saw "chlorine trifluoride" written on a railway tanker somewhere, I think I would very much like to be somewhere else.
  4. Sugar a most dangerous chemical??? You people have a weird notion of what a dangerous chemical is. I'm going with the stuff that burns through concrete.
  5. @Max70, you appear to hold the view that the expansion of the universe can be explained by the tidal effect external to a gravitational source. No: such a tidal effect has the property of being "volume preserving". In other words, a free-falling sphere distorts into the shape of a prolate spheroid of the same volume. On Earth, this ideally gives rise to two antipodal high tides separated by a ring of low tide. By contrast, the universe is expanding in all directions. A free-falling sphere becomes a larger sphere... not volume preserving. It's worth noting that the flat-space FLRW spacetime that ideally describes our universe is entirely devoid of the type of curvature associated with a black hole.
  6. You have the correct products, but you've drawn the loops incorrectly. The oxygen atom on each carboxylate anion comes from the hydroxide ion, not the glycerol molecule. The nucleophilic attack by hydroxide ion is on the carbonyl carbon atom, not the glyceryl carbon atoms. And we know this from isotopic labelling experiments.
  7. Consider the differential equation: [math]\dfrac{dy}{dx} = \dfrac{1}{\sqrt{1 - 2/x}} = \dfrac{\sqrt{x}}{\sqrt{x - 2}}[/math] Let: [math]x = u + 1[/math] ; [math]dx = du[/math] [math]\dfrac{dy}{du} = \dfrac{\sqrt{u + 1}}{\sqrt{u - 1}} = \dfrac{\sqrt{u + 1}}{\sqrt{u - 1}} \dfrac{\sqrt{u + 1}}{\sqrt{u + 1}}[/math] [math]= \dfrac{u + 1}{\sqrt{u^2 - 1}}[/math] [math]y - C = \sqrt{u^2 - 1} + \textrm{arccosh}(u)[/math] Let: [math]u = \cosh(v)[/math] [math]y - C = \sinh(v) + v[/math] [math]v = \textrm{Lsinh}_2(y - C)[/math] [math]u = \cosh(\textrm{Lsinh}_2(y - C))[/math] Therefore: [math]x = \cosh(\textrm{Lsinh}_2(y - C)) + 1[/math]
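The solution can be sanity-checked parametrically: with [math]x = \cosh(v) + 1[/math] and [math]y = \sinh(v) + v + C[/math], the slope [math]dy/dx = (dy/dv)/(dx/dv)[/math] should equal [math]\sqrt{x}/\sqrt{x - 2}[/math]. A minimal Python sketch (the sample values of [math]v[/math] are arbitrary):

```python
import math

# Parametric check of the solution x = cosh(v) + 1, y = sinh(v) + v + C,
# i.e. v = Lsinh_2(y - C): along the curve, dy/dx should equal sqrt(x)/sqrt(x - 2).
for v in (0.5, 1.0, 2.0):
    x = math.cosh(v) + 1
    dy_dv = math.cosh(v) + 1   # d(sinh v + v)/dv
    dx_dv = math.sinh(v)       # d(cosh v + 1)/dv
    lhs = dy_dv / dx_dv
    rhs = math.sqrt(x) / math.sqrt(x - 2)
    print(v, lhs, rhs)
```

The two columns agree because [math]\sinh(v) = \sqrt{(\cosh(v) - 1)(\cosh(v) + 1)}[/math].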
  8. While attempting to solve the differential equation: [math]\dfrac{dr'}{dr} = \dfrac{1}{\sqrt{1 - \dfrac{2GM}{c^2 r}}}[/math] expressing [math]r[/math] in terms of [math]r'[/math], I encountered a novel family of transcendental functions called "Leal-functions". These functions are similar to the Lambert W function (the function [math]W(x)[/math] that solves [math]W(x)e^{W(x)} = x[/math]), but (apparently) can't be derived from it. The link to the full article about these functions: https://www.sciencedirect.com/science/article/pii/S2405844020322611 The link to the section that defines these functions: https://www.sciencedirect.com/science/article/pii/S2405844020322611#se0040 Below is a list of Leal functions and their definitions: [math]y(x) = \textrm{Lsinh}(x)[/math] [math]\iff[/math] [math]y(x) \sinh(y(x)) = x[/math] [math]y(x) = \textrm{Lcosh}(x)[/math] [math]\iff[/math] [math]y(x) \cosh(y(x)) = x[/math] [math]y(x) = \textrm{Ltanh}(x)[/math] [math]\iff[/math] [math]y(x) \tanh(y(x)) = x[/math] [math]y(x) = \textrm{Lcsch}(x)[/math] [math]\iff[/math] [math]y(x) \textrm{ csch}(y(x)) = x[/math] [math]y(x) = \textrm{Lsech}(x)[/math] [math]\iff[/math] [math]y(x) \textrm{ sech}(y(x)) = x[/math] [math]y(x) = \textrm{Lcoth}(x)[/math] [math]\iff[/math] [math]y(x) \coth(y(x)) = x[/math] [math]y(x) = \textrm{Lln}(x)[/math] [math]\iff[/math] [math]y(x) \ln(y(x) + 1) = x[/math] [math]y(x) = \textrm{Ltan}(x)[/math] [math]\iff[/math] [math]y(x) \tan(y(x)) = x[/math] [math]y(x) = \textrm{Lsinh}_2(x)[/math] [math]\iff[/math] [math]y(x) + \sinh(y(x)) = x[/math] [math]y(x) = \textrm{Lcosh}_2(x)[/math] [math]\iff[/math] [math]y(x) + \cosh(y(x)) = x[/math] The authors say that the Leal family of functions can be extended to solve other transcendental equations, and provide examples of other similar functions. They even say that users can propose their own functions, applying the methodology used in the article. 
It turns out that the solution to the above differential equation for the coordinate transformation of the [math]g_{rr}[/math] component of the Schwarzschild metric to [math]g_{r'r'} = -1[/math] involves the [math]\textrm{Lsinh}_2(x)[/math] Leal-function defined above.
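Although the article develops series approximations for these functions, a value of a Leal function can also be obtained by a simple root-finding iteration. Here is an illustrative Newton iteration for [math]\textrm{Lsinh}_2[/math] (my own sketch, not the method used in the article; the starting guess is a rough heuristic):

```python
import math

def lsinh2(x, tol=1e-12):
    """Evaluate the Leal function Lsinh_2(x): the y solving y + sinh(y) = x.
    Simple Newton iteration; f(y) = y + sinh(y) - x, f'(y) = 1 + cosh(y)."""
    y = math.asinh(x / 2)  # rough starting guess
    for _ in range(100):
        f = y + math.sinh(y) - x
        y -= f / (1 + math.cosh(y))
        if abs(f) < tol:
            break
    return y

y = lsinh2(3.0)
print(y, y + math.sinh(y))  # second value should be ~3.0
```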
  9. It's funny that you say this because I have also had the idea that the arrow of time is connected to spinors. If you disagree with the ontology, then in what way are you agreeing with special and general relativity? It seems to me that you think time dilation is a physical effect acting on clocks. This conflicts with the principle of relativity which says that the laws of physics are the same in all frames of reference. This means that an ideal clock ticks at the same intrinsic rate in all frames of reference, and therefore time dilation is the result of something other than a physical effect acting on the clock. You say you agree with the equations, but you seem to disagree with the principles upon which the equations are based. It's as if you think Einstein got lucky with a wrong theory that happens to make correct predictions.
  10. I found this article titled "Calculus Before Newton and Leibniz - An in-depth article on the history of calculus". Here is the introductory section of the article: The Development of Calculus History has a way of focusing credit for any invention or discovery on one or two individuals in one time and place. The truth is not as neat. When we give the impression that Newton and Leibniz created calculus out of whole cloth, we do our students a disservice. Newton and Leibniz were brilliant, but even they weren’t capable of inventing or discovering calculus. The body of mathematics we know as calculus developed over many centuries in many different parts of the world, not just western Europe but also ancient Greece, the Middle East, India, China, and Japan. Newton and Leibniz drew on a vast body of knowledge about topics in both differential and integral calculus. The subject would continue to evolve and develop long after their deaths. What marks Newton and Leibniz is that they were the first to state, understand, and effectively use the Fundamental Theorem of Calculus. No two people have moved our understanding of calculus as far or as fast. But the problems that we study in calculus—areas and volumes, related rates, position/velocity/acceleration, infinite series, differential equations—had been solved before Newton or Leibniz was born. It took some 1,250 years to move from the integral of a quadratic to that of a fourth-degree polynomial. But awareness of this struggle can be a useful reminder for us. The grand sweeping results that solve so many problems so easily (integration of a polynomial being a prime example) hide a long conceptual struggle. When we jump too fast to the magical algorithm and fail to acknowledge the effort that went into its creation, we risk dragging our students past that conceptual understanding. 
This article explores the history of calculus before Newton and Leibniz: the people, problems, and places that are part of the rich story of calculus.
  11. I don't know precisely how Newton or Leibniz obtained the product rule of differential calculus, but it seems rather easy to me to obtain: [math]\text{By definition:}[/math] [math]\dfrac{df(x)}{dx} \buildrel \rm def \over = \displaystyle \lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}[/math] [math]\text{Therefore:}[/math] [math]\dfrac{df(x)g(x)}{dx} = \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x)g(x)}{h}[/math] [math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x) g(x + h) + f(x) g(x + h) - f(x)g(x)}{h}[/math] [math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x) g(x + h)}{h} + \displaystyle \lim_{h \to 0} \dfrac{f(x) g(x + h) - f(x)g(x)}{h}[/math] [math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x) - f(x) g(x)}{h} + \displaystyle \lim_{h \to 0} \dfrac{f(x) g(x + h) - f(x)g(x)}{h}[/math] (replacing [math]g(x + h)[/math] by [math]g(x)[/math] in the first limit, which is justified by the continuity of [math]g[/math], itself a consequence of its differentiability) [math]= \left(\displaystyle \lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}\right) g(x) + f(x) \left(\displaystyle \lim_{h \to 0} \dfrac{g(x + h) - g(x)}{h}\right)[/math] [math]= \dfrac{df(x)}{dx} g(x) + f(x) \dfrac{dg(x)}{dx}[/math]
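The product rule is also easy to confirm numerically with a central-difference approximation (an illustrative sketch; the choice of [math]f[/math], [math]g[/math], evaluation point, and step size is arbitrary):

```python
import math

# Finite-difference check of the product rule for f(x) = sin(x), g(x) = exp(x):
# d/dx [f g] should equal f' g + f g'.
def num_deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference, O(h^2) error

x = 0.7
lhs = num_deriv(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
print(lhs, rhs)  # should agree to roughly 1e-9
```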