Posts posted by hobz

There's always more complicated math.
What is today's most complicated math?
Great stuff!
Why is it necessary for the quaternions to have the real part, which is later set to zero? Why not just have the three imaginary axes?
Merged post follows:
Consecutive posts merged: Ahh... since Q1*Q2 = Q3 should hold, and the product of two purely imaginary quaternions produces a real part, which has no place to be in a "triternion", you need the real part.
I think you missed a key point. I said "What goes around comes around. The dot product and cross product for three vectors can be expressed in terms of the quaternion product."
I showed how the quaternion product can be expressed in terms of the dot product and cross product. That is, as is much of math and physics, an after the fact presentation. What happened in history was that the quaternion product was defined first; i.e. without using the dot and cross product. Gibbs & Heaviside developed our modern vector mathematics as an offshoot of the Hamilton's quaternions. In particular, they showed how to first define the dot product and cross product in terms of the quaternion product. Only then did they show how to do it without the quaternion stuff.
BTW, there are still a few vestiges of the quaternion heritage in our vector notation. Sometimes you will see the unit vectors designated as [math]\hat x[/math], [math]\hat y[/math], and [math]\hat z[/math], but other times as [math]\hat i[/math], [math]\hat j[/math], and [math]\hat k[/math]. The x,y,z hat stuff makes sense, but where does that i,j,k stuff come from? The answer to that is some graffiti Hamilton carved on a bridge in Dublin: [math]i^2=j^2=k^2=ijk=-1[/math]. His i, j, and k represented the three different imaginary units (cf. i in the complex numbers). His i, j, and k became [math]\hat i[/math], [math]\hat j[/math], and [math]\hat k[/math] in the initial development of vector analysis.
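A quick numeric sketch (my own, not from the thread; the helper name `quat_mul` is just for illustration) of the relation described above: the Hamilton product of two pure quaternions (zero real part) packages the dot product, negated, in the real part, and the cross product in the imaginary part.

```python
def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)

# Embed the 3-vectors as pure quaternions (0, a) and (0, b):
prod = quat_mul((0.0, *a), (0.0, *b))

dot = sum(ai * bi for ai, bi in zip(a, b))       # a . b = 32
cross = (a[1]*b[2] - a[2]*b[1],
         a[2]*b[0] - a[0]*b[2],
         a[0]*b[1] - a[1]*b[0])                  # a x b = (-3, 6, -3)

print(prod)   # (-32.0, -3.0, 6.0, -3.0) = (-a.b, a x b)
```

So the quaternion product really does carry both vector products at once, which is how Gibbs and Heaviside could later peel them apart.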
Very interesting. So the quaternion came first.
Can you recommend some reading on the history of this, and perhaps of math in general? It helps a great deal to know the chronology and the history behind it.
Merged post follows:
Consecutive posts merged: hobz, I wonder why you don't look for an explanation on Wikipedia; the explanation there is excellent. The vector dot product is a very common concept, and the web is full of it.
I have. The explanation on Wikipedia is more of a general introduction, whereas I am seeking a more intuitive way of thinking about the dot product. For instance, from the definition (given at Wikipedia) all else follows, which in turn is the motivation for the definition. Sort of a chicken-and-egg to me. I enjoyed DH's historically based answer, which had a beginning and an end.
I came across this when I tried to arrive at the definition of the dot product from the geometrical point of view (areas of parallelograms).
Here it occurred to me that, for what the dot product is normally used for (at least to my knowledge), it could just as easily have been defined with the sine.
So if [math]\vec{a}\cdot\vec{b} = |\vec{a}||\vec{b}| \sin \theta[/math]
then [math] \vec{a} \perp \vec{b} \Rightarrow \vec{a}\cdot\vec{b} = |\vec{a}||\vec{b}|[/math]
However, as DH pointed out, there are some disadvantages to this.
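For concreteness, here is a small numeric check (my own, not from the thread) of both conventions in the plane: the component-wise product matches [math]|\vec{a}||\vec{b}|\cos\theta[/math], while the sine version equals the signed parallelogram area, which is exactly what the 2-D cross product gives.

```python
import math

a = (3.0, 0.0)
b = (1.0, 1.0)

dot = a[0]*b[0] + a[1]*b[1]                  # component-wise dot product
na = math.hypot(*a)                          # |a|
nb = math.hypot(*b)                          # |b|
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])

# Cosine definition agrees with the component-wise formula:
print(abs(dot - na*nb*math.cos(theta)) < 1e-12)       # True

# The sine version is the 2-D cross product (parallelogram area):
cross_z = a[0]*b[1] - a[1]*b[0]
print(abs(cross_z - na*nb*math.sin(theta)) < 1e-12)   # True
```

The cosine version is the one with the simple coordinate formula; the sine version turns out to be the area/cross-product quantity instead.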
Okay, so v and r are just there to clarify the parts (although you already bolded the vectors).
As I understand your snarky (though not so snarky) response, the cross product "naturally" arises in the "imaginary" part of the quaternion product from the definition of quaternions. And the response that 4 − 1 = 3 is because quaternions are 4-dimensional, and the cross product only occurs in three of those dimensions?
Thanks for the insight. I will read up on quaternions. It looks interesting.
Interesting stuff!
A few questions.
What are "r" and "v" supposed to indicate in the quaternions?
I will repeat: "Okay, smartass. What's so special about 4 and 8?" I fail to see the connection between the 4 and 8 and the 3 and 7.
A lot of good arguments.
A comment on the cross product "making sense": the cross product only exists in those dimensions because it is required to be perpendicular to both of the vectors that are crossed. Right?
When defining the dot product, the cosine of the angle between two vectors is chosen. Why not the sine?
What advantages are there in choosing the cosine?
Suppose I have a small (infinitesimal) quantity [math]dy[/math] and another small quantity [math]dx[/math], and they are related by [math]dy = k \cdot dx[/math]. Will that automatically imply that [math]\frac{dy}{dx}=k[/math] is the derivative [math]\left(\frac{\mathrm{d}y}{\mathrm{d}x}=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x) - f(x)}{\Delta x}\right)[/math] of [math]y[/math] with respect to [math]x[/math]?
I have seen several examples of such things occurring in engineering textbooks, such as the electrical relation between the charge on a capacitor and the voltage across it. (I can't remember the details, and my notes are safely stored in the basement.)
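As a sanity check of the capacitor case (my own numbers; the capacitance value is made up for illustration): with the linear relation Q = C·V, the difference quotient ΔQ/ΔV equals C at every scale, so its limit, the derivative dQ/dV, is C as well.

```python
C = 4.7e-6  # hypothetical capacitance in farads (illustrative value)

def Q(V):
    """Charge on the capacitor as a function of voltage: Q = C*V."""
    return C * V

# The difference quotient is C (up to rounding) at every scale:
for dV in (1.0, 1e-3, 1e-6):
    dQ = Q(2.0 + dV) - Q(2.0)
    print(dQ / dV)

# For a nonlinear relation dy = k(x) dx, the same quotient only
# approaches k(x) in the limit; here no limit is even needed.
ratio = (Q(2.0 + 1e-6) - Q(2.0)) / 1e-6
```

So for a genuinely linear relation, writing dQ = C·dV and reading off dQ/dV = C is harmless; the subtlety only appears when k itself varies with x.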
So it (Stokes' theorem) is at best a good approximation?
I would agree with Feynman if he hadn't mentioned the higher-order approximations.
Wouldn't the first order derivative alone reveal how much the field [math]C[/math] changes along [math]\Delta y[/math]?
True.
But by that logic, [math]\lim_{\Delta x \rightarrow 0}\frac{\partial y}{\partial x}\Delta x = 0[/math], just as [math]\lim_{\Delta x \rightarrow 0}\frac{\partial^2 y}{\partial x^2}(\Delta x)^2 = 0[/math], which is discarded for its negligibility.
I think Feynman is taking advantage of the fact that at arbitrarily small distances, any function looks linear. Does he present this as an actual proof of Stokes' Theorem or just as an explanation of why it works?
From the tone of his lectures, the latter seems more correct.
It's always perfectly reasonable to ignore negligible terms. So long as, the closer you get to zero, the other terms can be made arbitrarily much smaller than the first, they're safe to ignore. It seems like the second term is included to prevent the whole thing from being zero.
If the third term is discarded, then it is really more of an approximation, although it is stated as an equation.
So why not keep the third term when keeping the second term?
Yes. Feynman has a nice drawing in his book which I didn't bother to include at first.
I have scanned it and attached it.
[math]C_x(1)[/math] is the tangential component of the vector field [math]C[/math] (denoted [math]1[/math] on the illustration.)
Considering what he is arriving at, it seems a bit informal to include one term and neglect others.
I thought Stokes' theorem was more formal/rigorous than what is explained here.
Interesting. Does "have to" correspond to some theorem in calculus?
While reading the Feynman lectures, I stumbled upon a passage which I do not quite understand.
In Vol. II, 3-9, Feynman is telling us about Stokes' theorem.
He writes:
[math]
\oint \vec{C} \cdot d\vec{s} = C_x(1) \Delta x + C_y(2)\Delta y - C_x(3) \Delta x - C_y(4)\Delta y
[/math]
and he looks at
[math]
[C_x(1) - C_x(3)]\Delta x
[/math]
and writes:
"You might think that to our approximation the difference is zero. That is true to the first approximation. We can be more accurate, however, and take into account the rate of change of [math]C_x[/math]. If we do, we may write
[math]
C_x(3)=C_x(1)+\frac{\partial C_x}{\partial y} \Delta y
[/math]
"If we included the next approximation, it would involve terms in [math](\Delta y)^2[/math], but since we will ultimately think of the limit as [math]\Delta y \rightarrow 0[/math], such terms can be neglected."
How can the theorem be correct if he neglects terms?
Why doesn't the neglecting logic apply to the first order derivative? (The derivative term vanishes when we let [math]\Delta y \rightarrow 0[/math])
I thought [math]dy=\frac{\mathrm{d}y}{\mathrm{d}x}dx[/math] without any other terms.
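One way to see why the neglected terms do no harm is to check the statement numerically. The sketch below is my own (the example field [math]\vec{C} = (-y, xy)[/math] is an arbitrary choice, not Feynman's): it computes the circulation of [math]\vec{C}[/math] around a small square centered at a point, divided by the area, and compares it with the curl [math]\frac{\partial C_y}{\partial x} - \frac{\partial C_x}{\partial y} = y + 1[/math] at that point. The [math](\Delta y)^2[/math] terms Feynman drops contribute nothing in the limit.

```python
def Cx(x, y):
    return -y

def Cy(x, y):
    return x * y

def circulation_over_area(x0, y0, h, n=200):
    """Line integral of C . ds around the square of side h centered at
    (x0, y0), traversed counterclockwise, divided by the area h*h.
    Each edge is integrated with the midpoint rule."""
    xl, xr = x0 - h/2, x0 + h/2
    yb, yt = y0 - h/2, y0 + h/2
    dt = h / n
    total = 0.0
    for i in range(n):
        t = xl + (i + 0.5) * dt
        total += Cx(t, yb) * dt      # bottom edge, going right
        total -= Cx(t, yt) * dt      # top edge, going left
    for i in range(n):
        t = yb + (i + 0.5) * dt
        total += Cy(xr, t) * dt      # right edge, going up
        total -= Cy(xl, t) * dt      # left edge, going down
    return total / (h * h)

# curl_z = dCy/dx - dCx/dy = y - (-1) = y + 1, so at (0.5, 2.0) it is 3.
for h in (0.1, 0.01, 0.001):
    print(circulation_over_area(0.5, 2.0, h))   # stays ~3 as h shrinks
```

The first-order terms survive because the circulation itself is of order [math]h^2[/math], the same order as the area we divide by; anything of order [math]h^3[/math] or smaller vanishes after the division.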
Also give Calculus Made Easy by Silvanus P. Thompson a look. It is almost 100 years old, but it focuses on differentials, which are almost completely ignored in modern texts.
Yes, I know. However, there's a twist, because the actual position is a combination of the angles.
No one an expert on coordinate transforms?
It is perhaps worth mentioning that Euler (and his contemporaries) did not have any geometric interpretation of the imaginary number. They treated it as a number that had the property [math]i=\sqrt{-1}[/math].
Think about the real number axis. The introduction of [math]i[/math] gave rise to numbers that did not exist on the real number axis, so these special [math]i[/math] numbers can be given their own axis.
Placing the two axes perpendicular to each other gives an intuitive representation, whereby multiplication by a negative number changes the direction by [math]180[/math] degrees, and multiplication by [math]i[/math] by [math]90[/math] degrees. Thus the complex plane illustrates the property of [math]i[/math] very nicely.
Of course it complicates matters a bit, since you might have functions that take complex numbers as input and produce complex numbers as output. And that can be hard to graph.
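A small check of the geometric picture using Python's built-in complex numbers (my own example point): multiplying by −1 rotates a point in the plane by 180 degrees, and multiplying by i (written `1j` in Python) rotates it by 90 degrees.

```python
import cmath
import math

z = 2 + 1j                # the point (2, 1) in the complex plane

print((-1) * z)           # (-2-1j): rotated 180 degrees
print(1j * z)             # (-1+2j): rotated 90 degrees

# The argument (angle) really advances by pi/2 under multiplication by i:
turn = cmath.phase(1j * z) - cmath.phase(z)
print(math.isclose(turn, math.pi / 2))   # True
```

Repeating the multiplication by `1j` four times returns to `z`, mirroring [math]i^4 = 1[/math].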
I have a flashlight angled at [math]\phi[/math] from zenith (the z-axis).
The flashlight can be rotated around the z-axis, so the beam forms a cone (angled at [math]\phi[/math] from zenith).
Moreover, the bulb in the flashlight can also be angled at [math]\phi[/math], so the resulting angle from zenith can vary from [math]0[/math] to [math]2\phi[/math].
The question is: how do I transform a Cartesian coordinate representation in space into the two angles? So, for instance, [math](x,y,z)[/math] becomes [math](r,\theta_1,\theta_2)[/math], where [math]\theta_1[/math] is the angle of the flashlight itself and [math]\theta_2[/math] the angle of the bulb. [math]r[/math] is the radial distance, which is probably [math]\sqrt{x^2+y^2+z^2}[/math].
Merged post follows:
Consecutive posts merged: I just realized that the same Cartesian coordinates have several solutions. Perhaps it is better to define [math]\theta_2[/math] as the difference between the bulb and the flashlight. Thus [math]\theta_2 = 0[/math] when the resulting beam angle from zenith is [math]2\phi[/math].
In that case, what would the transform then look like?
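One possible concretization (entirely my own model; the rotation order and the names `forward`/`inverse` are assumptions, not established notation): let [math]\theta_1[/math] rotate the flashlight about the z-axis and [math]\theta_2[/math] rotate the bulb about the flashlight's own axis, with [math]\theta_2 = 0[/math] giving the maximum angle [math]2\phi[/math] from zenith as in the post. The beam direction is then [math]R_z(\theta_1)\,R_y(\phi)\,R_z(\theta_2)\,R_y(\phi)\,\hat z[/math], and the inverse transform comes out in closed form:

```python
import math

def forward(r, theta1, theta2, phi):
    """Cartesian point at distance r along the beam for angles (theta1, theta2)."""
    s, c = math.sin(phi), math.cos(phi)
    # Bulb direction in the flashlight frame, rotated by theta2 about its axis:
    vx, vy, vz = s * math.cos(theta2), s * math.sin(theta2), c
    # Tilt by phi (the flashlight's own tilt from zenith):
    wx, wy, wz = vx * c + vz * s, vy, -vx * s + vz * c
    # Rotate the whole flashlight about the z-axis by theta1:
    c1, s1 = math.cos(theta1), math.sin(theta1)
    return r * (wx * c1 - wy * s1), r * (wx * s1 + wy * c1), r * wz

def inverse(x, y, z, phi):
    """Recover (r, theta1, theta2) from a Cartesian point, same model."""
    r = math.sqrt(x * x + y * y + z * z)
    alpha = math.acos(z / r)                 # beam angle from zenith
    # The model gives cos(alpha) = cos(phi)^2 - sin(phi)^2 * cos(theta2):
    c = (math.cos(phi) ** 2 - math.cos(alpha)) / math.sin(phi) ** 2
    theta2 = math.acos(max(-1.0, min(1.0, c)))
    # Azimuth offset contributed by the bulb before the theta1 rotation:
    bx = math.sin(phi) * math.cos(phi) * (1.0 + math.cos(theta2))
    by = math.sin(phi) * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(by, bx)
    return r, theta1, theta2

# Round trip with arbitrary test angles (phi = 0.3 rad):
x, y, z = forward(2.0, 0.7, 1.1, 0.3)
print(inverse(x, y, z, 0.3))   # ~ (2.0, 0.7, 1.1)
```

Note that [math]\theta_2[/math] is only recovered up to sign here (the acos branch), which reflects the multiple-solutions ambiguity mentioned in the post.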
Hmm... I just thought there was some way, like with overdetermined systems.
No one familiar with underdetermined systems?
But as [math]\Psi[/math]?
Differentials in the product rule
in Analysis and Calculus
Posted
The product rule:
[math]
y=u(x)\cdot v(x)
[/math]
[math]
y = u\cdot v
[/math]
[math]
y+dy=(u+du)\cdot(v+dv)
[/math]
[math]
y+dy=uv+udv+vdu+dudv
[/math]
now [math]du\,dv[/math] is discarded on the grounds of being "too small".
If I were to include it, later on (after subtracting [math]y[/math] and dividing through by [math]dx[/math]) it would become [math]\frac{du\,dv}{dx}[/math]. What does that mean?
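A numeric look at that discarded term (my own example; the choices of u and v are arbitrary): du·dv is a product of two first-order small quantities, so du·dv/dx behaves like u′(x)·v′(x)·dx, which shrinks linearly with dx and contributes nothing in the limit, which is why it can safely be dropped.

```python
import math

def u(x):
    return x ** 2

def v(x):
    return math.sin(x)

x = 1.0
for dx in (1e-2, 1e-3, 1e-4):
    du = u(x + dx) - u(x)
    dv = v(x + dx) - v(x)
    # The leftover term behaves like u'(x) * v'(x) * dx = 2*cos(1)*dx:
    print(du * dv / dx)

leftover = (u(x + 1e-4) - u(x)) * (v(x + 1e-4) - v(x)) / 1e-4
```

So [math]\frac{du\,dv}{dx}[/math] isn't meaningless: it is an infinitesimal of first order, and like the [math](\Delta y)^2[/math] terms in the Feynman passage, it vanishes once the limit is taken.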