
vectors


ydoaPs

Recommended Posts

I just want to know: why use the dot product? Why use the cross product?

 

This is like asking in arithmetic, when do I use multiplication and when do I use division? You use each when it is appropriate: when it makes sense to do so, you do it. In a mathematics course, you will be busy solving problem after problem simply to get you accustomed to the operation (the same way elementary school children practice subtraction and division again and again).

 

Applications come later. I can bring up many more examples if you would like. I've used the Lorentz force example earlier. matt's example was also a good exercise in seeing the usefulness of the dot product.

 

That, and this section of the book is called "VECTOR MULTIPLICATION".

 

Just to note (though I'm sure you already know), there's a difference between vector dot and cross products and vector multiplication. Also, as far as I know (I've read in another thread that there actually may be one) and as far as most applications are concerned, there is no vector or matrix division. You can cross two vectors to form a third vector, but given this third vector and one of the original vectors, you cannot determine the other original vector: there are infinitely many possibilities.
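To see that concretely, here is a small NumPy sketch (the particular a and x are made-up values): adding any multiple of a to x leaves a x x unchanged, so knowing a and the product cannot pin down x.

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
x = np.array([0.0, 1.0, 0.0])
c = np.cross(a, x)  # c = a x x = (0, 0, 1)

# Adding any multiple of a to x leaves the cross product unchanged,
# since a x a = 0 -- so c and a alone cannot determine x.
for t in (1.0, -2.5, 7.0):
    print(np.allclose(np.cross(a, x + t * a), c))  # True each time
```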


the title "vector multiplication" is at best a "title by analogy" and at worst misleading. Only with very few exceptions can you multiply vectors in a sense that makes the name reasonable and accurate in its suggestivity. The reason I dislike it is that it immediately makes people think you can divide vectors, and in general you cannot.

 

Here's another use for the dot product. Suppose I want to describe all points lying in some plane P. I can do it like this: let r be some point in the plane, and let n be a vector orthogonal to the plane. Imagine drawing the vector n with its tail at r. Then you can see that a point x is in the plane exactly when its displacement vector from r is orthogonal to n, that is

 

[math] P = \{ x : (x - r) \cdot n = 0 \} [/math]

 

is the description of the plane.
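A quick numerical check of this description, using NumPy; the particular r and n below are made-up values (they give the plane z = 2):

```python
import numpy as np

# Hypothetical plane through r with normal n (made-up example values)
r = np.array([1.0, 0.0, 2.0])   # a point in the plane
n = np.array([0.0, 0.0, 1.0])   # normal vector: the plane z = 2

def in_plane(x, r, n, tol=1e-9):
    """x lies in the plane exactly when (x - r) . n = 0."""
    return abs(np.dot(x - r, n)) < tol

print(in_plane(np.array([5.0, -3.0, 2.0]), r, n))  # True: z-coordinate is 2
print(in_plane(np.array([0.0, 0.0, 0.0]), r, n))   # False: z-coordinate is 0
```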

 

You can do the same thing with lines.

 

Let L be a line. Suppose that p is a displacement vector parallel to L, and that s is a point on the line. Then a point y is also on the line exactly when the displacement vector from s to y is parallel to p, that is

 

[math] L = \{ y : (y - s) \times p = 0 \} [/math]
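Same idea numerically for the line (again, s and p are made-up example values):

```python
import numpy as np

# Hypothetical line through s with direction p (made-up example values)
s = np.array([0.0, 1.0, 0.0])   # a point on the line
p = np.array([1.0, 2.0, 0.0])   # direction vector parallel to the line

def on_line(y, s, p, tol=1e-9):
    """y is on the line exactly when (y - s) x p = 0."""
    return np.allclose(np.cross(y - s, p), 0.0, atol=tol)

print(on_line(s + 3 * p, s, p))                  # True: s + t*p is on the line
print(on_line(np.array([1.0, 1.0, 1.0]), s, p))  # False
```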


I do not wish to detract from the approach that matt is taking with this subject, but (since we're talking about uses) I thought I'd throw in an example (from physics, sorry matt) to help illustrate when one might use a dot or cross product.

 

Consider an electric dipole of dipole moment [imath]\vec{p}[/imath] in an electric field [imath]\vec{E}[/imath]. There are two quantities that immediately spring to mind in this case, the interaction energy U (between the dipole and the field) and the torque [imath] \vec{ \tau }[/imath] (felt by the dipole). Both quantities have units which are the product of the units of the moment and the field. However, the torque is a vector quantity and the energy is a scalar. So, it should not come as a surprise that the torque is given by the vector (cross) product, while the energy is given by the scalar (dot) product.

 

[math]\vec {\tau} = \vec{p} \times \vec{E} [/math]

 

[math]U = - \vec{p} \cdot \vec{E} [/math]
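As a small numerical illustration of those two formulas (the component values of p and E are made up, units ignored), the dot product gives a scalar and the cross product a vector:

```python
import numpy as np

# Made-up component values purely for illustration (units ignored)
p = np.array([1.0, 0.0, 0.0])   # dipole moment
E = np.array([3.0, 2.0, 0.0])   # electric field

torque = np.cross(p, E)          # vector quantity: p x E
energy = -np.dot(p, E)           # scalar quantity: -p . E

print(torque)  # [0. 0. 2.]
print(energy)  # -3.0
```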

 

It is in this context that it makes some sense to refer to these operations as "multiplications". But, as cautioned earlier, they are not at all the same as the usual multiplication that is defined on a field like the reals (which are scalars).

 

Why a dot product produces a scalar while a cross product produces a vector has been answered somewhere before, I believe - because that is how they are defined.


That is a very simplistic gross anatomy of it, and not correct in general. For instance, a scalar can be recovered as a component of a vector (e.g. use of the vector product might yield the vector (1,0,0), from which the scalar 1 can be read off). That said, it isn't a bad "rough" idea to remember. There is more than one way to do any question.

 

Example: a vector orthogonal to a=(1,-1,0) and b=(2,0,1) can be found simply as axb (which is useful for finding the equations of planes), or you can solve the two equations

 

a.x=0 and b.x=0 simultaneously (there are infinitely many solutions for x). By inspection (1,1,-2) will do it, and is infinitely easier than using the cross product.
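Checking both routes in NumPy: the cross product and the vector found by inspection are orthogonal to both a and b, and agree up to a scalar multiple (as they must, since all solutions are parallel):

```python
import numpy as np

a = np.array([1.0, -1.0, 0.0])
b = np.array([2.0, 0.0, 1.0])

# Route 1: the cross product gives one orthogonal vector directly
c = np.cross(a, b)
print(c)                             # [-1. -1.  2.]

# Route 2: the vector found by inspection in the text
x = np.array([1.0, 1.0, -2.0])
print(np.dot(a, x), np.dot(b, x))    # 0.0 0.0 -- orthogonal to both

# The two answers are parallel: every solution is a scalar multiple
print(np.allclose(np.cross(c, x), 0.0))  # True
```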

 

One important note is that the dot product can be generalized to any dimension, but the vector product is peculiar to the 3d world.


One important note is that the dot product can be generalized to any dimension, but the vector product is peculiar to the 3d world.
Just gave this a moment of thought now... and I can't see why not. Given any n-1 vectors in n-space, why can't I find a vector that is normal to them all (even, if necessary, by solving a determinant)?

I meant the vector product as an operation on two vectors, in simple terms of its components. Of course, given any two vectors (or n-1) I can find some vector orthogonal to both (or all), but that isn't the generalization I meant. To be technical, it's because (deep breath):

 

/\^2(R^3) is isomorphic as a vector space to R^3, and this is a peculiarity of the fact that 3=2+1. Technically the isomorphism is via the Hodge dual, and indeed given any n-1 linearly independent vectors in R^n there is a nonzero element of /\(R^n), labelled y=x_1/\../\x_{n-1}, which yields a unique vector x_n such that y/\x_n is the unique volume form. This is the so-called Hodge star duality (x_n is written as *y).
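In symbols (a standard statement of the duality, included here for reference): for r-vectors [imath]\alpha, \beta[/imath] and the volume form [imath]\omega[/imath], the Hodge star [imath]*[/imath] is characterized by

[math] \beta \wedge {*}\alpha = \langle \beta, \alpha \rangle \, \omega [/math]

and in [imath]\mathbb{R}^3[/imath] it sends [imath]e_1 \wedge e_2 \mapsto e_3[/imath], which is exactly the cross product [imath]e_1 \times e_2 = e_3[/imath].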

 

Phew, ok?


Sorry that I didn't want to read up on the Hodge star operator (I already didn't understand that one some time ago), so I couldn't follow your argument, Matt.

 

However, back when I studied math I did one of my seminar talks about a "generalization" of the cross product:

From [math] w_i = \epsilon_{ijk} u^j v^k [/math] you simply generalize to:

[math] w_i = \epsilon_{i j_1 \dots j_m} v_{(1)}^{j_1} \dots v_{(m)}^{j_m} [/math]

where the v's with different labels are of course supposed to be different vectors.

 

Attributes like perpendicularity, multilinearity, and the alternating sign are directly inherited from the antisymmetric pseudotensor (I didn't realize that back then; that's why I had enough stuff for a talk :P).

 

Like I said, I don't see why the cross product cannot be generalized to an alternating map (R^n)^{n-1} -> R^n in the above way for any finite dimension n. The above generalization was good enough for a seminar talk at university, at least (ok, it was only 2nd semester, but the Prof was quite happy that I came up with a topic myself).
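A sketch of that alternating map in code (NumPy; the function name generalized_cross is my own): component i of the product of n-1 vectors in R^n is the epsilon contraction, computed here as a determinant with the i-th standard basis vector in the first row.

```python
import numpy as np

def generalized_cross(*vectors):
    """Generalized cross product of n-1 vectors in R^n: component i
    is the epsilon contraction, i.e. det([e_i; v_1; ...; v_{n-1}])."""
    vs = np.array(vectors, dtype=float)
    n = vs.shape[1]
    assert vs.shape[0] == n - 1, "need n-1 vectors in R^n"
    result = np.empty(n)
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        result[i] = np.linalg.det(np.vstack([e_i, vs]))
    return result

# In R^3 this reduces to the ordinary cross product:
a, b = np.array([1.0, -1.0, 0.0]), np.array([2.0, 0.0, 1.0])
print(generalized_cross(a, b))   # matches np.cross(a, b)

# In R^4 it takes three vectors and returns one orthogonal to all three
# (here -e4; the sign comes from the epsilon convention):
u = generalized_cross(np.array([1.0, 0, 0, 0]),
                      np.array([0, 1.0, 0, 0]),
                      np.array([0, 0, 1.0, 0]))
print(u)
```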

 

In fact, you can generalize the above further so that on the left side you do not have a vector but an object with any number of indices. Every vector extracted from this object by holding all but one of its indices constant will also be perpendicular to the vectors on the right-hand side.


To repeat myself: when I said you cannot generalize the cross product, I was talking about a bilinear "multiplication" of vectors, not a multilinear form.

 

Your epsilon with the subscripts is an unnecessarily complicated way of writing the sign of a permutation, but you have discovered the idea of the exterior algebra, if you didn't know it already.

 

The Hodge dual is, in spirit, this:

 

To integrate over a volume in R^n I need a priori to have picked a convention about which direction is "positive". In 1 dimension I take the integral in the usual sense: we know what we mean by a positive and a negative area, and if we reverse the limits of the integral we reverse its sign; similarly in R^2 and so on. So we pick our infinitesimal volume

 

dx dy dz ...

 

with n letters in some order, and so on. Now, roughly, given a set of r vectors we look at the r-dimensional volume they span and think how our infinitesimal volume dx dy dz ... restricts to this volume, and then try to add in other vectors until we make something that roughly agrees with dx dy dz ... on the infinitesimal level. The other vectors we pick, and the orientation we take for them, give us the "Hodge dual" of our original r vectors. This is not very accurate, but it is right in spirit. Example:

 

Suppose we pick our standard example in R^3 and take our integrals with respect to dx dy dz in the usual sense. Now let me take, for simplicity, the unit vectors i, j and work out the Hodge dual. Well, i and j span the dx dy bits of the volume, so I need to take a vector in the dz bit to make it a proper volume, that is, the vector k. If I were to do it for j, i in that order, I've switched the orientation of the span, so I need to take -k. Extend this linearly, and so on.

 

So the "Hodge dual" takes in r vectors and spews out n-r other vectors that allow us to do integrals over R^n with the correct sign (and gives zero if the r vectors are not linearly independent).

 

I am glossing over some facts, such as the fact that the Hodge dual is actually defined on /\^r, but then /\^r can be thought of as the r-tuples of vectors with a + or - sign, modulo some equivalence relation. The equivalence relation is related to the reordering and sign changes that you've come up with yourself.

