Everything posted by Xerxes

  1. In a math forum, I find this a curious comment. But leave that; here's the general case that HalfWit asked for.

     Consider the sum of products [math]ab+a(-b)+(-a)(-b)[/math]. By the associative law of addition write this as [math]ab+[a(-b)+(-a)(-b)][/math], and by the right distributive law write this as [math]ab+[a+(-a)](-b) = ab+0(-b) = ab[/math], from the multiplicative property of zero.

     On the other hand, using the exact same rules, [math]ab+a(-b)+(-a)(-b) = [ab+a(-b)]+(-a)(-b)[/math] [math]=a[b+(-b)]+(-a)(-b) = a0+(-a)(-b)=(-a)(-b)[/math] (although you use the left distributive law here). Which implies that [math]ab = (-a)(-b)[/math], as desired.

     As a further generalization, assume the above applies to the ring [math]\mathbb{Z}[/math] of integers. Let us be a bit naughty and call [math]\{0\}[/math] a proper sub-ring. Then from ring theory we have that an ideal [math]I[/math] of any ring [math]R[/math] is a subring such that for all [math]x \in I[/math] and all [math]a \in R[/math], both [math]xa \in I[/math] and [math]ax\in I[/math]. It is common to abuse notation somewhat and write this as [math]Ia = aI[/math] (actually, equality here implies that we have a two-sided ideal). Let's call [math]\{0\}[/math] a proper two-sided ideal of the integers (proper because the ring itself is also an ideal). Then, with very little modification, the above proof goes through for any ring if you replace [math]0[/math] by [math]I[/math].
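     For concreteness, a quick numerical check of the first identity (my own illustration, with the arbitrary values [math]a=2,\,b=3[/math]): grouping one way gives [math]2 \cdot 3+[2(-3)+(-2)(-3)] = 6+[2+(-2)](-3) = 6+0 = 6[/math], while grouping the other way gives [math][2 \cdot 3+2(-3)]+(-2)(-3) = 2[3+(-3)]+(-2)(-3) = 0+(-2)(-3)[/math]. Since both groupings evaluate the same sum, [math](-2)(-3) = 6 = 2 \cdot 3[/math], exactly as the general argument predicts.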
  2. Ha! You guys are too too tactful - I was being stupid. I had mis-read my text. The correct statement is NOT that the basis for an infinite-dimensional space contains only finitely many non-zero elements; rather it is: each element in the basis has only finitely many non-zero entries.

     Here's my simple example. Consider the set [math]P(x)[/math] of polynomials of arbitrary degree. This is of course a vector space by the usual axioms. Now from the fact that the identity [math]p(x)=0[/math] holds only when all coefficients vanish, we may infer that [math]x^0, x^1, x^2, \ldots[/math] are linearly independent and may form a basis for [math]P(x)[/math]. But elements of this basis must themselves be polynomials, so let's write this basis as, say,

     [math]x^0+0+0+\cdots[/math]
     [math]0+x^1+0+0+\cdots[/math]
     [math]0+0+x^2+0+\cdots[/math]

     and so on. It is easy to see that, by taking sums of finitely many of these, with scaling (if required), we recover any polynomial whatever.
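     To spell out the finiteness point with a concrete polynomial (my own example): [math]5 - x + 2x^3 = 5 \cdot x^0 + (-1) \cdot x^1 + 0 \cdot x^2 + 2 \cdot x^3[/math], a linear combination of just four basis elements. Every polynomial, having finite degree, is likewise a finite linear combination, even though the basis [math]\{x^0, x^1, x^2, \ldots\}[/math] itself is infinite.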
  3. I am ashamed to ask this, but here goes..... We know that if a vector space is finite-dimensional, then there exist subsets of finite cardinality that will serve as a basis for this space iff all elements of these sets are linearly independent. We also know that this implies that any subset containing the zero vector cannot be a set of linearly independent vectors, and is therefore not a suitable basis for our space.

     I now find that, in the case my vector space is not finite-dimensional, the basis set may contain "only finitely many non-zero elements". On the assumption that a non-finite vector space has a basis set of non-finite cardinality, it seems either (or both) of the following must be true:

     1. In the case of a non-finite vector space, we don't worry about linear independence of the basis vectors, or
     2. In the case of a non-finite vector space, linear independence has no real meaning.

     Or am I gibbering?
  4. Points taken, thanks for your efforts to help me anyway
  5. Otherwise known as the Laplacian. So I should apologize for the length of this post and for taking no prisoners here, but it would take me WEEKS to flesh out the background, so I dive in......

     Given a finite [math]n[/math]-dimensional vector space [math]V_n[/math], define a space of [math]p[/math]-vectors by [math]\Lambda^p(V_n)[/math]. Now define the exterior derivative operator by [math]d:\Lambda^p(V_n) \to \Lambda^{p+1}(V_n)[/math]. It seems that to this operator one may assign an adjoint, provided only one has an inner product on the space of [math]p[/math]-vectors and that these vectors, as differential forms, are exact. Fine.

     Thus, for [math]\alpha,\,\beta \in \Lambda^p(V_n)[/math], and writing [math](\alpha,\beta)[/math] for the inner product, one may have that [math](d\alpha,\beta) = (\alpha, d^{\dagger} \beta)[/math]. Which is the usual way of expressing the adjoint of an operator, EXCEPT it seems that, whereas [math]d:\Lambda^p(V_n) \to \Lambda^{p+1}(V_n)[/math], one has that [math]d^{\dagger}: \Lambda^p(V_n) \to \Lambda^{p-1}(V_n)[/math]. I am having some difficulty making sense of this switching of dimensions. Whatever.

     The Hodge-de Rham operator is now defined as [math]\not{d} \equiv d + d^{\dagger}[/math]. Obviously the domain of this operator is [math]\Lambda^p(V_n)[/math], but from the above I cannot find the codomain. Is it simply [math]\Lambda^{p+1-1}(V_n)[/math]? This seems to fly in the face of the algebra of exponents, doesn't it?

     Anyway, finally, the square of the Hodge-de Rham operator is called the Laplacian by my text, ie it is what in "ordinary" vector calculus is a second-order differential operator that sends scalar and vector fields to scalar and vector fields, respectively. Simple algebra and the Lemma of Poincaré gives that [math]\Delta = \not{d}^2 = (d +d^{\dagger})(d +d^{\dagger}) = dd^{\dagger} + d^{\dagger}d[/math], but I cannot equate this to any expression of the Laplacian with which I am familiar.

     Specifically, and sticking with differential forms, it is not hard (and is in fact a fun exercise) to show that [math]\Delta = \ast d \ast d[/math], where the Hodge operator [math]\ast:\Lambda^p(V_n) \to \Lambda^{n-p}(V_n)[/math] and the exterior derivative is as above. More specifically, I want that [math]\ast d \ast d = dd^{\dagger}+ d^{\dagger}d[/math]. Know what? I cannot show this. Please help
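     A sketch that may help with both puzzles, offered with the caveat that sign conventions vary from text to text. On the codomain: since [math]d[/math] raises degree and [math]d^{\dagger}[/math] lowers it, the natural codomain of [math]\not{d}[/math] is the direct sum [math]\Lambda^{p+1}(V_n) \oplus \Lambda^{p-1}(V_n)[/math]; most texts simply take [math]\not{d}[/math] to act on the whole exterior algebra [math]\Lambda(V_n) = \bigoplus_p \Lambda^p(V_n)[/math], which serves as both domain and codomain. On the Laplacian: for a 0-form (scalar field) [math]f[/math] one has [math]d^{\dagger}f = 0[/math], there being no [math]\Lambda^{-1}[/math] to land in, so [math]\Delta f = (dd^{\dagger} + d^{\dagger}d)f = d^{\dagger}df[/math]; and with the usual identification [math]d^{\dagger} = \pm \ast d \ast[/math], this reduces (up to sign) to [math]\ast d \ast d f[/math], which on [math]\mathbb{R}^n[/math] with the Euclidean metric is [math]\pm\sum_i \partial^2 f/\partial x_i^2[/math]. So [math]\ast d \ast d[/math] matches [math]dd^{\dagger} + d^{\dagger}d[/math] on scalar fields, but on general [math]p[/math]-forms only the full symmetric combination gives the Hodge Laplacian.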
  6. I thank you both for your helpful comments, but note I used the qualifier "roughly speaking" when I said that continuous mappings send open sets to open sets. I confess I am too lazy to find the example requested by DrRocket, but will instead offer this:

     Suppose that [math]S,\,\,T[/math] are topological spaces and that [math]f:S \to T[/math]. Let [math]U \subseteq S[/math] be open. Then [math]f[/math] is an open mapping if [math]f(U) = V \subseteq T[/math] is open. Suppose now that [math]f[/math] is also continuous, that is, the preimage [math]f^{-1}(V) \subseteq S[/math] is open. It may then happen that there is some continuous open mapping, say [math]g:T \to S[/math], such that [math]g \circ f = id_S[/math] and [math]f \circ g = id_T[/math]. In this circumstance one says that these 2 functions are mutual continuous inverses, and that our spaces are homeomorphic, [math]S \simeq T[/math]. Now homeomorphic topological spaces are topologically equivalent, which (again speaking loosely) means that whatever I "do" to one must carry over to the other under the homeomorphic mapping.

     So I now consider the top. space [math]\mathbb{R}^1[/math] with the standard topology, and assert that whenever the set [math]U \subseteq \mathbb{R}^1[/math] is open then so is the set [math]U \times U \subseteq \mathbb{R}^1 \times \mathbb{R}^1 \equiv \mathbb{R}^2[/math]. I next consider the spaces [math]\mathbb{R}^1\setminus \{0\}[/math] and [math]\mathbb{R}^2 \setminus \{(0,0)\}[/math] (note that the punctured plane is not the same as the product of two punctured lines, which would remove both axes; the punctured plane is the space a putative homeomorphism would have to match up with the punctured line). Now [math]\mathbb{R}^1\setminus \{0\}[/math] is not connected, whereas [math]\mathbb{R}^2 \setminus\{(0,0)\}[/math] is connected, and I conclude that there can be no homeomorphism [math]\mathbb{R}^1 \simeq \mathbb{R}^2[/math], since connectedness is a topological property that is preserved under topological equivalence.

     The upshot being that I may have an open mapping between topological spaces which is not a homeomorphism; that is, not all open mappings are continuous.
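     For the record, here is the sort of example DrRocket asked for, with spaces of my own choosing. The projection [math]\pi:\mathbb{R}^2 \to \mathbb{R}^1,\ \pi(x,y) = x[/math] is both continuous and open, yet is no homeomorphism (it is not injective). Conversely, the identity map from [math]\mathbb{R}^1[/math] with the trivial topology to [math]\mathbb{R}^1[/math] with the standard topology is an open mapping (the only open sets in the domain are [math]\emptyset[/math] and [math]\mathbb{R}^1[/math], whose images are open), but it is not continuous, since the preimage of [math](0,1)[/math] is [math](0,1)[/math], which is not open in the trivial topology. So openness and continuity are genuinely independent properties.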
  7. I am thoroughly ashamed to be asking such an elementary question, but WTF. I'll start at the beginning, for want of a better place to start......

     Suppose that [math]S[/math] is a point set, and that [math]\mathcal{P}(S)[/math] is the powerset on [math]S[/math]. Then one defines a topology [math]\tau[/math] on [math]S[/math] by [math]\tau \subseteq \mathcal{P}(S)[/math], and one calls the pair [math]S(\tau)[/math] a topological space (though usually the parenthetical [math]\tau[/math] is omitted in favour of the assertion that we are dealing with a top. space). Elements in [math]\tau[/math] are called the open sets in [math]S(\tau)[/math], and sets whose complements are elements of [math]\tau[/math] are the closed sets (of course it may happen that a set is both open and closed).

     Right. Let's drop the [math]\tau[/math], and assert that [math]S,\,\,T[/math] are top. spaces. Then the mapping [math]f:S \to T[/math] is said to be continuous iff, whenever the set [math]V \subseteq T[/math] is open in [math]T[/math], the preimage [math]f^{-1}(V) \subseteq S[/math] is open in [math]S[/math] (this is the preimage BTW, not necessarily the inverse mapping). So we have that open sets map to open sets under continuous mappings, roughly speaking. So my first question, to which I am pretty sure I know the answer: It seems to me that this concept can be restated in terms of closed sets, id est continuous mappings send closed sets to closed sets. Correct or not? (leaving aside pathological topologies like the discrete or the trivial, for example).

     Now suppose that the set [math]S = V[/math] is a vector space, and insist that this will become a topological vector space iff a) singletons in [math]V(\tau)[/math] are closed, id est the complement of each [math]\{x\}[/math] is an element of [math]\tau[/math], and b) vector addition and scaling are continuous in the above sense.

     Umm..... assuming the standard topology on the vector space [math]\mathbb{R}^1[/math] (again dropping the parenthetical [math]\tau[/math]), id est the union of all open sets of the form [math](a,b)[/math], write, for example, vector addition as [math]+:\mathbb{R}^1 \times \mathbb{R}^1 \to \mathbb{R}^1[/math], so that for any [math]x,\,\,y \in \mathbb{R}^1[/math] we have [math](x,y) \in \mathbb{R}^1 \times \mathbb{R}^1 \mapsto x+y = z \in \mathbb{R}^1[/math]. Now if singletons are closed in [math]\mathbb{R}^1[/math], then presumably the singletons [math]\{(x,y)\}[/math] are likewise closed in the Cartesian product - or is this false reasoning? If not, then addition on the real numbers (as a vector space) is continuous since it sends closed sets to closed sets?

     Which quite unnecessarily long post brings me back to my point. First, is it true that ordered pairs of closed sets in the Cartesian product of vector spaces are closed in the product topology? Second, is it reasonable to define continuity in terms of closed sets rather than, as is more usual, in terms of open sets? Third, in this particular example, the "mapping" called addition is clearly surjective (there are more ways to the woods than one!), but is it true in general? Or am I way out to lunch?

     PS by edit: On proofing this I see I can answer my own questions. Yes, yes, yes and quite possibly. But I shall let it stand, as it may interest someone somewhere out there
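     On the closed-set question, the standard resolution (stated for preimages, which is the direction that actually works): [math]f:S \to T[/math] is continuous iff the preimage of every closed set is closed. This follows in one line from the set identity [math]f^{-1}(T \setminus C) = S \setminus f^{-1}(C)[/math], so the open-set and closed-set formulations are equivalent. One caution: continuous maps need not send closed sets forward to closed sets; for instance [math]x \mapsto 1/(1+x^2)[/math] maps the closed set [math]\mathbb{R}[/math] onto [math](0,1][/math], which is not closed.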
  8. Ya well, I seem to have effectively killed this thread. My apologies to the OPer. When I think I am being "interesting", like here, I am usually on an ego-trip. I cannot help myself - is counselling available? And lurkers beware, I am as often wrong as I am right, but in the absence of posted corrections, how would you know?
  9. Then I have a more Stalinist view than you. If what you call a lurker is unwilling to join this, or any other forum, as in, ask questions, seek clarification etc., then they are fully entitled to all the mis-information that abounds on these fora. And more fools they.

     For "intrigues" read "depresses" and I would agree. I have spent most of my adult life arguing against the proposition that here in the UK educational standards have fallen over time, hence more students get higher grades. I now freely confess I was wrong - students feel entitled to passing grades, funding bodies require a high percentage of passing grades for each dept; result - falling standards due to pressure from both sides.

     I remember in my first year of lecturing I failed two thirds of scripts, and was told rather firmly by our Chair that I couldn't do that, as it would essentially impoverish our department. "Please re-evaluate". What? Make something wrong into a half-right? Needless to say I did, toady that I am. Is disillusioned the word?
  10. Ha! I am going senile! Looking back I see my last post was just a re-hash of an earlier one of mine in this thread. Sorry about that. Anyhoo, to continue my thought train.......

     Recall I said that for [math]g_u \in V^*,\,\, v \in V[/math], it is permissible to think of the vector [math]v \in V[/math] as an element in [math]V^{**}[/math] (via a natural embedding) such that [math]v(g_u) = g_u(v) = g(u,v)[/math], provided only that [math]V[/math] is a finite-dimensional vector space. First, I believe the qualification above is false in all generality - there is a difficult (for me, at least) theorem of Riesz that states that this is also true of any Hilbert space of arbitrary dimension. Ho hum.....

     Anyway. Look at [math]v(g_u) = g_u(v)=g(u,v)[/math]. This seems mad, right? Vectors and covectors are simultaneously treated as functionals and each as arguments of the other. One restores sanity by defining the bilinear form, sometimes called the natural pairing, [math]\langle \, \cdot \,,\, \cdot\rangle: V \times V^* \to \mathbb{R}[/math], where [math]\langle g_u, v\rangle[/math] is called the scalar product of a covector and a vector. Note it cannot possibly be an inner product.

     So. Legend has it (true or false) that considerations like the above led P.A.M. Dirac to "invent" the so-called bra-ket notation. It goes like this..... Suppose, for now without prejudice (as lawyers say), that [math]\langle u|v \rangle[/math] defines an inner product in the obvious way. Then we might just as well regard this as the natural pairing above and say that the object [math]\langle u|[/math], the bra, is a covector dual to the ket [math]|v\rangle[/math], a vector. This notation is not without its drawbacks, which I don't have time right now to argue, but note that, in general, dual vectors (covectors, 1-forms) always exist and always act on vectors, but do not need an inner product to justify their existence.

     Now Dirac (as I understand) was trained as a mathematician, but made truly MAJOR contributions to physics, where inner products (or, more generally, metrics) are pretty much taken for granted, so the ambiguities introduced by this notation (at least as I see them) do not apply. Why am I such a wind-bag?
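     To make the pairing concrete, a toy example of my own: take [math]V = \mathbb{R}^2[/math] with [math]g[/math] the ordinary dot product, and [math]u = (1,2)[/math]. Then [math]g_u = g(u,\cdot)[/math] is the covector acting by [math]g_u(v) = v^1 + 2v^2[/math], and under the natural embedding [math]V \hookrightarrow V^{**}[/math] the vector [math]v = (3,1)[/math] acts on [math]g_u[/math] by evaluation: [math]v(g_u) = g_u(v) = 3 + 2 \cdot 1 = 5 = g(u,v)[/math]. The apparent madness is just evaluation read in both directions.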
  11. DrRocket, ajb, me: Why bother with this guy? This is the second question almost identical to the first. We have all tried to help in our own way, maybe others have too, but I forget. And what do we get for our trouble? A "thank you, I get it now"? (thanks are always welcome!). I didn't see one. Perhaps a "thanks but I don't quite follow - please explain" would be even better. Or maybe (s)he decided one, some or all of us were wrong. It would have been nice to know. But I seriously doubt there will be any more follow-up here than in the previous thread. Please slap my wrist if I ever try to help out in this sub again. Supplicants here are mostly leeches - they suck your knowledge and give zilch back, not even a "yummy, nice blood there fella".
  12. I don't understand either. But Schroedinger raises an interesting point. Lemme ramble a bit....

     A bilinear form is a mapping, say, [math]\langle\cdot \,,\,\cdot \rangle:V \times V \to \mathbb{R}[/math], that is linear in each argument taken separately (though our field does not need to be real, neither do we need always to work with vector spaces - any commutative ring will do). If there is a bilinear form acting on our vector space, [math]g:V \times V \to \mathbb{R}[/math] such that [math]g(u,v) \in \mathbb{R}[/math], one declares that we are in a "metric space". The construction [math]g(u,v)[/math] is called an "inner product" - it defines distance, length and angle in the loosest possible sense of these terms. By virtue of this mapping, the bilinear form [math]g[/math] is a type (0,2) tensor. Confused? You will be.... wait and I will try to explain notation.

     Nice thing is, it easily follows that [math]g(u,\cdot): V \to \mathbb{R}[/math] is a type (0,1) tensor, i.e. a covector, or one-form, or a linear functional, aka an element in [math]V^*[/math]. Hold on to this, Schroedinger......

     Writing [math]g(u,\cdot) \equiv g_u[/math], one has that [math]g_u(v) = g(u,v) \in \mathbb{R}[/math] for any [math]v \in V[/math] (though some peeps invert the order here). Provided our vector spaces are finite-dimensional, since elements of [math]V^{**}[/math] are maps [math]V^* \to \mathbb{R}[/math], it is permissible to identify [math]V^{**}[/math] with [math]V[/math], so one has [math]v(g_u) = g_u(v) = g(u,v)[/math], so that [math]v[/math] is acting as a type (1,0) tensor, a notion that comes as no surprise! Caveat - be aware there is a lot of hand-waving going on here. I can give chapter and verse if required (though I doubt it would be welcomed). Note also that while every type (1,0) tensor is a vector, the converse is not true (likewise tensors of type (0,1)).

     I should point out that in standard notation it is customary to refer to tensors by their scalar components, so if, say, [math]v =\sum\nolimits_i V^ie_i[/math] in some basis [math]\{e_i\}[/math], then one refers to this type (1,0) tensor as [math]V^i[/math]; similarly for type (n,0), type (0,n) and type (n,m) tensors, but with indices placed in the lower (or both upper and lower) positions accordingly. There is a good reason for this. Anyway, with a slight departure from this notation, the tensor [math]g_{ij}[/math] is called the "metric tensor".

     So finally. The vector potential is indeed a 1-form, as the notation [math]A_{\mu}[/math] hints, though this doesn't make it (or [math]g_u[/math] for that matter) a metric. Which should be obvious - a metric measures the "distance" or "angle" between 2 entities, not one. But I cannot see where kinetic energy comes in. Maybe this claim could be justified by the poster.
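     In components, with an illustrative choice of my own (the Euclidean metric on [math]\mathbb{R}^2[/math], so [math]g_{ij} = \delta_{ij}[/math]): [math]g(u,v) = g_{ij}u^iv^j = u^1v^1 + u^2v^2[/math], and the covector [math]g_u[/math] has components [math](g_u)_j = g_{ij}u^i[/math], which is exactly the familiar "index lowering" [math]u^i \mapsto u_j[/math]. The type (0,2) tensor [math]g_{ij}[/math] eats two vectors; feeding it only one leaves a type (0,1) tensor, as described above.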
  13. What?? If, as you claim, [math]L:V \to V, \ \ x \in V[/math], then the equation [math]Lx = nx[/math] defines [math]n[/math] as an eigenvalue and [math]x[/math] as its associated eigenvector. Where does [math]r[/math] enter the picture?

     I have no idea how you got this. Try [math]L^{-1}(L(x)) = x[/math], by the simple fact, as given by you, that the transformation is bijective. The identity operator/matrix acting on any vector is the vector itself. How can it be that [math]x = L^{-1}nx[/math]?

     So, I firmly believe that students should do their own homework, but here is a big hint. Assume that you mis-typed, and meant that [math]Lx = nx[/math] defines the eigenvalue(s) [math]n[/math] for this operator acting on this vector, and [math]L^{-1}x = rx[/math] defines the eigenvalue(s) [math]r[/math] for this operator acting on the same vector. So, first rearrange each of these 2 equalities, and using any, all or none of the above, find a relation between [math]r[/math] and [math]n[/math] such that your rearrangement (cleanly done by factorization) makes sense. Good luck!
  14. Which gets to the heart of this totally pointless thread. One third is exactly 0.333...., just the same way that 0.999... is exactly 1. There are no approximations, no equivocations; these facts have been rehearsed over and over, here and elsewhere.

     Agreed, if that is what was said, it's nonsense - I have no idea what an "unreal number" is. But what does this mean? Is one third aka 0.333... not finite? I am pretty sure that both these numbers, whether or not you (or anyone else, for that matter) agree they are equal, are still both greater than zero and less than 1, i.e. finite numbers.
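     For anyone who wants the two-line justification, it is the standard geometric series (nothing original here): [math]0.333\ldots = 3\sum_{k=1}^{\infty}10^{-k} = 3 \cdot \frac{1/10}{1 - 1/10} = 3 \cdot \frac{1}{9} = \frac{1}{3}[/math], and by the identical calculation [math]0.999\ldots = 9 \cdot \frac{1}{9} = 1[/math]. Exact equalities of real numbers, not approximations.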
  15. If it's all the same to you, I shan't. Or rather, if by "fraction" you mean a rational number, then these barely scratch the surface of all possible real numbers. And to claim that the whole set of real numbers (including the irrationals) is rather useful in describing what another poster somewhat pretentiously called "the universe" would be something of an understatement. Though to claim that they tell the full story, when the complex numbers are available, would just be wrong.
  16. I am given a definition: One defines a generalized function [math]\chi(x)[/math] as a sequence [math]f_n(x)[/math] of functions (with certain not very restrictive properties) such that for any other function [math]g(x)[/math] with the same property, the limit [math]\lim_{n \to \infty}\int_{-\infty}^{\infty}f_n(x)g(x)\,dx = \int_{-\infty}^{\infty}\chi(x) g(x)\,dx[/math] exists.

     It seems to me this makes some sort of sense; since taking limits and integration do not commute in general, it cannot be the case that [math]\chi(x) = \lim_{n \to \infty}f_n(x)[/math]. But I am having a hard time seeing what the limit of a sequence of integrals might be (though you might be forgiven for thinking this pulls the rug from under my assertion above). Moreover, I am told that it is permissible to treat a generalized function thus defined just as though it were an "ordinary" function, all of which is frying my brains.

     Please help with some intuition. And, oh, if it is important, this is in the context of the mis-named (Dirac) delta "function"
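     One concrete sequence that may supply the intuition (a standard choice, assuming [math]g[/math] is continuous and bounded): take [math]f_n(x) = \frac{n}{\sqrt{\pi}}e^{-n^2x^2}[/math]. Each [math]f_n[/math] is an ordinary function with [math]\int_{-\infty}^{\infty}f_n(x)\,dx = 1[/math], and as [math]n \to \infty[/math] the bump narrows and grows in such a way that [math]\lim_{n \to \infty}\int_{-\infty}^{\infty}f_n(x)g(x)\,dx = g(0)[/math]. The pointwise limit of the [math]f_n[/math] is no function at all (it blows up at [math]0[/math] and vanishes elsewhere), but the limit of the integrals is perfectly well defined, and that limit is precisely what the symbol [math]\int \delta(x)g(x)\,dx = g(0)[/math] abbreviates.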
  17. I think you may take it as a bad one. I assume it refers to the fact that 0.000.......1, where the ellipses represent an infinite string, has absolutely no meaning whatever. Moreover (though I doubt that DrRocket would subscribe to this), even if it did have meaning, your construction implies that 0.999..... both equals 1 and doesn't equal 1, a very strange state of affairs! Seriously, this problem crops up over and again on boards like this, and, to use your expression, has been dispatched ad infinitum.
  18. I have just been listening to a very interesting podcast from the BBC on this subject (which sadly may not be available to non-UKers). It seems that Rutherford (um, or was it Chadwick?) showed that the energy spectrum of beta decay is continuous, and it was generally realized that this experimental fact seemed to violate energy conservation.

     Question: How is this conclusion implied from the finding? It is not obvious to me.

     It further seems that W. Pauli, in the 1930's, proposed that beta decay "generated" (in a manner that is not clear to me) an hitherto un-detected particle called the neutrino that in some sense restored energy conservation, and that in the 1950's this particle was detected experimentally. What precisely was Pauli's argument?

     Question: Since the neutrino has non-zero but un-measurable mass/energy, and since the beta decay spectrum may be anything from essentially zero to whatever is its allowed maximum, how can these neutrinos make up the mass/energy deficit? Is it simply that when beta decay "carries away" from the nucleus its lowest allowable energy in the form of ejected electrons, there must be a whole load of neutrinos, and vice versa?

     Sorry for the naivety of these questions, but my physics is weak; nonetheless I found it a fascinating, albeit pop-sci, listen. Hope you all can hear it too
  19. Ha! My functional analysis text is a treasure! It first states the Thm of Cauchy: If [math]f(z)[/math] is analytic within and on the closed contour [math]C[/math], and the derivative [math]f'(z)[/math] is continuous throughout this region, then [math]\oint_C f(z) \,dz = 0[/math].

     (Parenthetically, given the amazing power of this thm., the proof is surprisingly straightforward. In fact I have seen two, one using Green's Thm and the other using the closely related Thm of Stokes. Um well - neither of these is so easily proved, I guess.)

     It then points out that Goursat showed that the statement about continuity is superfluous, and re-states the Thm as Cauchy-Goursat. It then comes out with this priceless gem: "Some authors (never mathematicians!) define an analytic function as a differentiable function with continuous derivatives.......But this is a mathematical fraud of cosmic proportions". No words minced there, then.
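     A sanity check on the simplest contour, worked by hand (my own example): take [math]f(z) = z^n[/math] with [math]n[/math] a non-negative integer and [math]C[/math] the unit circle, so [math]z = e^{i\theta},\ dz = ie^{i\theta}\,d\theta[/math]. Then [math]\oint_C z^n\,dz = i\int_0^{2\pi}e^{i(n+1)\theta}\,d\theta = 0[/math], just as the theorem demands. Compare the case [math]f(z) = 1/z[/math], which fails to be analytic at the origin: the same computation gives [math]2\pi i[/math] instead of zero.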
  20. Your meaning is not clear. In what sense is "the universe not defined"?

     Again, it is hard to attach any meaning to this assertion. What do you mean by "the numeric system"? Why is the "numeric system" of your choice "not definable"?

     By stating that "Bob can't count......" you are implicitly talking about a set of "numbers" with uncountable cardinality, say the Real numbers. Is this what you mean?

     Yes we can. A set is infinite if and only if it is not finite, which is precisely your starting point about Bob.
  21. I thank you for that. Without wishing to appear ungrateful, I had come to a similar conclusion: in [math]\overline{f}(z) \equiv \overline{f(z)}[/math] we are taking the conjugate of the image point in the codomain, whereas in [math]f(\overline{z})[/math] we are taking the conjugate of an element in the domain. Which is just a wordy way of re-stating your point.

     So do me another kindness, and see if the following floats your goat: We know from elementary analysis that [math]\overline{\sin(z)} = \sin (\overline{z})[/math], and that [math]\sin[/math] is analytic (indeed entire), and also that where [math]f(z) = z^n[/math] for any positive integer [math]n[/math], [math]f(z)[/math] is analytic also. Using fingers and toes only, it appears that for small [math]n[/math], [math]\overline{f(z)} \equiv \overline{f}(z) = f(\overline{z})[/math]. Likewise any other polynomial function. I somewhat rashly propose the following generalization: if a function is analytic then [math]\overline{f}(z) = f(\overline{z})[/math] always. True or false?

     I confess I am having trouble with the converse, namely that analyticity is required for this equality to hold. I use as an example [math]f(z) = |z|^2 \equiv z\overline{z}[/math], which is clearly not analytic, but where the equality seems to hold (unless I made a mistake). Is this gibberish?
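     A data point for the "true or false" (my own quick check): take the entire function [math]f(z) = iz[/math]. Then [math]\overline{f(z)} = \overline{iz} = -i\overline{z}[/math], while [math]f(\overline{z}) = i\overline{z}[/math], and these differ. So analyticity alone is not enough; what does suffice is that the Taylor coefficients be real, equivalently that [math]f[/math] be real-valued on the real axis (this is the territory of the Schwarz reflection principle, which is presumably why [math]\sin[/math] and real polynomials behave so well). And, as the [math]|z|^2[/math] example shows, the equality can also hold for non-analytic functions, so neither implication works.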
  22. This should be laughably easy, but I am a little confused here. Suppose that the function [math]f: \mathbb{C} \to \mathbb{C}[/math]. Suppose further that [math]z = x + iy \in \mathbb{C},\,\, x,\, y \in \mathbb{R}[/math]. What is meant by the complex conjugate of this function?

     My thoughts (such as they are!). Set [math]z = x +iy[/math], and set [math]f(z) = ax+iby[/math], so that [math]\overline{f(z)}= ax-iby[/math]. Apparently this can be written as the identity [math]\overline{f}(z) = \overline{f(z)}= ax-iby[/math], which I don't quite get. Moreover,...... how does this differ from, say, [math]f(\overline{z}) =\overline{ax+iby}[/math]?
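     A worked instance that may help (my own choice of function): take [math]f(z) = z^2[/math], so [math]f(z) = (x+iy)^2 = x^2 - y^2 + 2ixy[/math]. The conjugate function is [math]\overline{f}(z) \equiv \overline{f(z)} = x^2 - y^2 - 2ixy[/math]; that is, [math]\overline{f}[/math] is simply the name of the new function [math]z \mapsto \overline{f(z)}[/math], which is all the "identity" is saying. By contrast, [math]f(\overline{z}) = (x-iy)^2 = x^2 - y^2 - 2ixy[/math], which happens to agree here because the coefficients of [math]f[/math] are real; for a function like [math]f(z) = iz[/math] the two constructions give different answers.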
  23. Well, it has nothing to do with units, tyres, wind or anything else. I ME got close. Given the OPer's self-confessed lack of expertise in simple mathematics, I doubt that the following will quite hit the spot, though high-school graduation with a mathematical content should suffice. But, maybe, who knows, somebody somewhere may find it useful.....

     So, here's Einstein in 1905 (hugely paraphrased): Consider a material body B with energy content [math]E_{\text{initial}}[/math]. Let B emit a quantity of light for some fixed period of time [math]t[/math]. One easily sees that the energy content of B is reduced by [math]E_{\text{initial}} - E_{\text{final}}[/math], which depends only on [math]t[/math]. Let [math]E_{\text{initial}} - E_{\text{final}} = L[/math], i.e. the light energy "withdrawn" from B.

     Now, says Einstein, consider the situation from the perspective of some body moving uniformly at velocity [math]v[/math] with respect to B. Then, evidently, by Lorentz time dilation, the light energy withdrawn from [math]B[/math] is measured (from the perspective of the moving body) as [math]L'[/math], which likewise depends only on [math]t'[/math], which is [math]t(1 - \frac{v^2}{c^2})^{-\frac{1}{2}}[/math] (this is Lorentz time dilation). The difference between [math]L[/math] and [math]L'[/math] is simply [math]L' - L = L\left[\left(1 - \frac{v^2}{c^2}\right)^{-\frac{1}{2}} - 1\right][/math]. By expanding [math](1 - \frac{v^2}{c^2})^{-\frac{1}{2}}[/math] as a Taylor series, and dropping terms of order higher than 2 in [math]v/c[/math], he finds that [math]L' - L \approx L(1 + \frac{v^2}{2c^2} - 1) = L \frac{v^2}{2c^2} = \frac{1}{2}(\frac{L}{c^2})v^2[/math].

     With a flourishing hand-wave Einstein now says something like this: the above is an equation for the differential energy of bodies in relative motion; but so is [math]E = \frac{1}{2}mv^2[/math], the equation for kinetic energy, and these can only differ by an irrelevant additive constant. So from the above set [math]\frac{L}{c^2} = m[/math], and so [math]L = mc^2[/math]. But, says he, [math]L[/math] is simply a "quantity" of energy, light in this case, that now, from the above, depends only on [math]m[/math] and [math]c^2[/math], so......

     [math]E = mc^2[/math].

     It's fun, but slightly audacious of the old boy, wouldn't you say?

     PS. He wrote this down when he was 26 or so, and is said to have written to a friend something along the lines that "this seems inescapable, but maybe the gods are laughing at me!"
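     For completeness, the expansion being invoked is the standard binomial series, valid for [math]|x| < 1[/math]: [math](1 - x)^{-\frac{1}{2}} = 1 + \frac{1}{2}x + \frac{3}{8}x^2 + \cdots[/math]. With [math]x = \frac{v^2}{c^2}[/math], the neglected terms are of order [math]v^4/c^4[/math] and smaller, so at everyday speeds [math]L' - L = L\frac{v^2}{2c^2}[/math] to an extremely good approximation, which is what licenses the comparison with [math]\frac{1}{2}mv^2[/math].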