
hobz

Senior Members
  • Posts

    243
  • Joined

  • Last visited

Posts posted by hobz

  1. The product rule:

     

    [math]

    y=u(x)\cdot v(x)

    [/math]

    [math]

    y = u\cdot v

    [/math]

    [math]

    y+dy=(u+du)\cdot(v+dv)

    [/math]

    [math]

    y+dy=uv+udv+vdu+dudv

    [/math]

    now [math]dudv[/math] is discarded on the grounds of being "too small".

    If I were to include it, later on (by subtracting y and dividing through by dx), it would become [math]\frac{dudv}{dx}[/math]. What does that mean?
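    The discarded term can be checked numerically. A minimal sketch (example functions of my own choosing, [math]u(x)=x^2[/math] and [math]v(x)=x^3[/math]): after dividing through by [math]dx[/math], the leftover [math]\frac{dudv}{dx}[/math] shrinks along with [math]dx[/math], while the full quotient approaches the derivative of [math]uv=x^5[/math], namely [math]5x^4 = 5[/math] at [math]x=1[/math].

```python
# Numeric check that the discarded term du*dv/dx vanishes as dx -> 0.
# Example functions (my choice, not from the post): u(x) = x^2, v(x) = x^3.
def u(x): return x**2
def v(x): return x**3

x = 1.0
for dx in (1e-1, 1e-3, 1e-6):
    du = u(x + dx) - u(x)
    dv = v(x + dx) - v(x)
    full = (u(x) * dv + v(x) * du + du * dv) / dx  # (y + dy - y) / dx
    leftover = du * dv / dx                        # the "too small" term
    print(dx, full, leftover)  # full -> 5, leftover -> 0
```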

  2. I think you missed a key point. I said "What goes around comes around. The dot product and cross product for three vectors can be expressed in terms of the quaternion product."

     

    I showed how the quaternion product can be expressed in terms of the dot product and cross product. That is, like much of math and physics, an after-the-fact presentation. What happened in history was that the quaternion product was defined first, i.e. without using the dot and cross product. Gibbs & Heaviside developed our modern vector mathematics as an offshoot of Hamilton's quaternions. In particular, they showed how to first define the dot product and cross product in terms of the quaternion product. Only then did they show how to do it without the quaternion stuff.

     

    BTW, there are still a few vestiges of the quaternion heritage in our vector notation. Sometimes you will see the unit vectors designated as [math]\hat x[/math], [math]\hat y[/math], and [math]\hat z[/math], but other times as [math]\hat i[/math], [math]\hat j[/math], and [math]\hat k[/math]. The x,y,z hat stuff makes sense, but where does that i,j,k stuff come from? The answer is some graffiti Hamilton carved on a bridge in Dublin: [math]i^2=j^2=k^2=ijk=-1[/math]. His i, j, and k represented the three different imaginary units (cf. i in the complex numbers). His i, j, and k became [math]\hat i[/math], [math]\hat j[/math], and [math]\hat k[/math] in the initial development of vector analysis.

     

    Very interesting. So the quaternion came first.

    Can you recommend some reading on the history of this, and perhaps of math in general? It helps quite a bit to know the chronology and the history behind it.


    Merged post follows:

    Consecutive posts merged
    hobz, I wonder why you don't look for an explanation on Wikipedia; the explanation there is excellent.

     

    The vector dot product is a very common concept, and the web is full of it.

     

    I have. The explanation on Wikipedia is more of a general introduction, whereas I am seeking a more intuitive way of thinking about the dot product. For instance, from the definition (given at Wikipedia) all else follows, which in turn is the motivation for the definition. Sort of a chicken-and-egg to me. I enjoyed DH's historically based answer, which had a beginning and an end.

     

    I came across this when I tried to arrive at the definition of the dot product from the geometrical point of view (areas of parallelograms).

    Here it occurred to me that, for what the dot product is normally used for (at least to my knowledge), it could just as easily have been defined with the sine.

    So if [math]\vec{a}\cdot\vec{b} =|\vec{a}||\vec{b}| \sin \theta[/math]

    then [math] \vec{a} \perp \vec{b} \Rightarrow \vec{a}\cdot\vec{b} =|\vec{a}||\vec{b}|[/math]

     

    However, as DH pointed out, there are some disadvantages with this.
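    To make DH's historical point concrete, here is a small sketch (plain Python, helper names my own) of the quaternion product written in terms of the dot and cross product, checked against Hamilton's bridge relations [math]i^2=j^2=k^2=ijk=-1[/math]:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def quat_mul(p, q):
    # Hamilton product of (scalar, 3-vector) pairs, expressed via the
    # dot and cross product as DH described.
    a, u = p
    b, v = q
    scalar = a * b - dot(u, v)
    vector = tuple(a*vi + b*ui + ci
                   for ui, vi, ci in zip(u, v, cross(u, v)))
    return scalar, vector

# The three imaginary units as pure-vector quaternions.
i = (0, (1, 0, 0))
j = (0, (0, 1, 0))
k = (0, (0, 0, 1))
print(quat_mul(i, i))               # (-1, (0, 0, 0)): i^2 = -1
print(quat_mul(quat_mul(i, j), k))  # (-1, (0, 0, 0)): ijk = -1
```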

  3. Okay, so v and r are just to clarify the parts (although you already bolded the vectors).

     

    As I understand your snarky response (though yours was not so snarky), the cross product "naturally" arises in the "imaginary" part of the quaternion product, straight from the definition of quaternions. And the response 4 = 3 - 1 is because quaternions are 4-dimensional, while the cross product only lives in three of those dimensions?

     

    Thanks for the insight. I will read up on quaternions. It looks interesting.

  4. Suppose I have a small (infinitesimal) quantity [math]dy[/math] and another small quantity [math]dx[/math], and they are related by [math]dy = k \cdot dx[/math]. Does that automatically imply that [math]\frac{dy}{dx}=k[/math] is the derivative [math]\left(\frac{\mathrm{d}y}{\mathrm{d}x}=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)- f(x)}{\Delta x}\right)[/math] of [math]y[/math] with respect to [math]x[/math]?

     

    I have seen several examples of such things occurring in engineering textbooks, such as the electrical relation between the charge on a capacitor and the voltage across it. (I can't remember the details, and my notes are safely stored in the basement.)
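    The capacitor case can be sketched like this (the capacitance value is made up): the relation [math]q = C \cdot v[/math] is linear, so [math]dq = C\,dv[/math] and the difference quotient equals [math]C[/math] no matter how small [math]dv[/math] is made.

```python
# Sketch of the capacitor relation q = C*v mentioned above.
# The capacitance value is a made-up example.
C = 4.7e-6  # farads (hypothetical)

def q(v):
    return C * v

v = 3.0
for dv in (1e-1, 1e-4, 1e-8):
    dq = q(v + dv) - q(v)
    print(dv, dq / dv)  # stays at C = 4.7e-6 for every dv
```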

  5. I think Feynman is taking advantage of the fact that at arbitrarily small distances, any function looks linear. Does he present this as an actual proof of Stokes' Theorem or just as an explanation of why it works?

     

    From the tone of his lectures, the latter seems more correct.

     

    It's always perfectly reasonable to ignore negligible terms. So long as the remaining terms can be made arbitrarily much smaller than the first as you approach zero, they're safe to ignore.

     

    It seems like the second term is included to prevent the whole thing from being zero.

    If the third term is discarded, then it is really more of an approximation, although it is stated as an equation.

     

    So why not keep the third term when keeping the second term?
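    One way to see it numerically: after dividing by [math]\Delta y[/math], the second (first-order) term stays finite while the third ([math](\Delta y)^2[/math]) term still carries a factor of [math]\Delta y[/math] and dies in the limit. A sketch with an example function of my own choosing:

```python
import math

# Kept first-order term vs. neglected quadratic remainder, for an
# example function of my choosing: C_x(y) = sin(y) near y = 0.3.
def Cx(y):
    return math.sin(y)

y = 0.3
for dy in (1e-1, 1e-3, 1e-5):
    diff = Cx(y + dy) - Cx(y)   # exact difference C_x(3) - C_x(1)
    first = math.cos(y) * dy    # the kept first-order term
    # After dividing by dy: kept term stays finite, remainder -> 0.
    print(dy, first / dy, (diff - first) / dy)
```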

  6. While reading the Feynman lectures, I stumbled upon a passage which I do not quite understand.

     

    In Vol II, 3-9, Feynman is telling us about Stokes' theorem.

     

    He writes:

     

    [math]

    \oint \vec{C} \cdot d\vec{s} = C_x(1) \Delta x + C_y(2)\Delta y -C_x(3) \Delta x - C_y(4)\Delta y

    [/math]

    and he looks at

    [math]

    [C_x(1) -C_x(3)]\Delta x

    [/math]

    and writes:

    "You might think that to our approximation the difference is zero. That is true to the first approximation. We can be more accurate, however, and take into account the rate of change of [math]C_x[/math]. If we do, we may write

    [math]

    C_x(3)=C_x(1)+\frac{\partial C_x}{\partial y} \Delta y

    [/math]

    "If we included the next approximation, it would involve terms in [math](\Delta y)^2[/math], but since we will ultimately think of the limit as [math]\Delta y \rightarrow 0[/math], such terms can be neglected."

     

    How can the theorem be correct if he neglects terms?

    Why doesn't the neglecting logic apply to the first order derivative? (The derivative term vanishes when we let [math]\Delta y \rightarrow 0[/math])

    I thought [math]dy=\frac{\mathrm{d}y}{\mathrm{d}x}dx[/math] without any other terms.
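    A numeric sketch of Feynman's four-segment estimate may help. For an example field I picked, [math]\vec{C} = (-y^3, x^3)[/math] with z-curl [math]3x^2+3y^2[/math], the circulation around a small square of side [math]h[/math], divided by the area [math]h^2[/math], approaches the curl exactly as the neglected higher-order terms die off:

```python
# Four-segment circulation estimate around a square of side h centered
# at (x0, y0), as in the passage above. Example field (my choice):
# C = (Cx, Cy) = (-y**3, x**3), whose z-curl is 3*x**2 + 3*y**2.
def Cx(x, y): return -y**3
def Cy(x, y): return x**3

x0, y0 = 0.5, 0.5
for h in (0.5, 0.05, 0.005):
    circ = (Cx(x0, y0 - h/2) * h + Cy(x0 + h/2, y0) * h
            - Cx(x0, y0 + h/2) * h - Cy(x0 - h/2, y0) * h)
    print(h, circ / (h * h))  # -> 3*x0**2 + 3*y0**2 = 1.5
```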

  7. It is perhaps worth mentioning that Euler (and his contemporaries) did not have any geometric interpretation of the imaginary number. They treated it as a number with the property [math]i=\sqrt{-1}[/math].

     

    Think about the real number axis. The introduction of [math]i[/math] gives rise to numbers that do not exist on the real number axis, so these special [math]i[/math] numbers can be given their own axis.

    Placing the two axes perpendicular to each other gives an intuitive representation, whereby multiplication by a negative number changes the direction by [math]180[/math] deg, and multiplication by [math]i[/math] by [math]90[/math] deg. Thus the complex plane illustrates the property of [math]i[/math] very nicely.

     

    Of course it complicates matters a bit, since you might have functions that take complex numbers as input and produce complex numbers as output. And that can be hard to graph.
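    The rotation property is easy to check with Python's built-in complex numbers; a minimal sketch:

```python
import cmath

# Multiplying by 1j rotates a complex number by 90 degrees (pi/2)
# without changing its length, as described above.
z = 2 + 1j
w = 1j * z                              # w = -1 + 2j
print(cmath.phase(w) - cmath.phase(z))  # pi/2
print(abs(w), abs(z))                   # equal lengths: pure rotation
```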

  8. I have a flashlight angled at [math]\phi[/math] from zenith (the z-axis).

    The flashlight can be rotated around the z-axis, so the beam forms a cone (angled at [math]\phi[/math] from zenith).

     

    Moreover, the bulb in the flashlight can also be angled [math]\phi[/math], so the resulting angle from zenith can vary from [math]0[/math] to [math]2\phi[/math] degrees.

     

    The question is: how do I transform a Cartesian coordinate representation of a point in space into the two angles? So, for instance, [math](x,y,z)[/math] becomes [math](r,\theta_1,\theta_2)[/math], where [math]\theta_1[/math] is the angle of the flashlight itself, [math]\theta_2[/math] is the angle of the bulb, and [math]r[/math] is the radial distance, which is presumably [math]\sqrt{x^2+y^2+z^2}[/math].


    Merged post follows:

    Consecutive posts merged

    I just realized that several [math](\theta_1,\theta_2)[/math] combinations can give the same Cartesian coordinates. Perhaps it is better to define [math]\theta_2[/math] as the difference between the bulb angle and the flashlight angle. Thus [math]\theta_2 = 0[/math] when the resulting beam angle from zenith is [math]2\phi[/math].

    In that case, what would the transform then look like?
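    As a hedged starting point (not the full answer to the two-angle question), the ordinary Cartesian-to-spherical transform with the polar angle measured from zenith looks like this; how that zenith angle gets split between [math]\theta_1[/math] and [math]\theta_2[/math] is a separate step and depends on the convention chosen:

```python
import math

# Standard Cartesian -> spherical transform, polar angle measured from
# zenith (the z-axis). Splitting the zenith angle between the
# flashlight and bulb axes is a separate, convention-dependent step.
def to_spherical(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(z / r)  # angle from zenith, in [0, pi]
    phi = math.atan2(y, x)    # azimuth around the z-axis
    return r, theta, phi

print(to_spherical(0.0, 0.0, 1.0))  # straight up: theta = 0
print(to_spherical(1.0, 0.0, 0.0))  # in the xy-plane: theta = pi/2
```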
