
Johnny5


Posts posted by Johnny5

  1. Not necessarily, though it is often presumed. The author was being more correct than the average mathematician, who usually presumes that when we say "we denote by FOO those things with property BAR" it is implicit that this uniquely characterizes them, i.e. that only FOOs have the property BAR. But that is again a convention, and you hate those.

     

    Not for nothing, but this made no sense.

     

    Definitions of the type found in mathematics involve 'if and only if'

     

    Let me think of another example.

     

     

    WRONG:

    Definition: A is a set if either there is at least one X, such that X is an element of A, or A is equal to the empty set.

     

    RIGHT:

    Definition: A is a set if and only if (either there is at least one X, such that X is an element of A, or A is equal to the empty set.)

     

    Now, perhaps the definitions above lead to a contradiction which is beyond Patrick Suppes' ability to detect, but that is another issue.

     

    My point has been made, and I'm of course right on this logical point.

  2. Well, I'd suggest everyone doing maths as an undergrad knows those definitions in the UK, as they are taught in the first semester. Moreover, most people doing high school mathematics are taught one-to-one and onto here too, so they understand them even if those words aren't always used. Besides, you are posting in a pseudo-authoritative manner on mathematics, so tough; you should know the basics.

     

    It's just that I couldn't remember the difference between 'injective' and 'surjective'

     

    I know what a one-to-one mapping is of course, because when you say it like that, the meaning is clear.

     

     But anyway, I went away and thought about it for a while, and will go so far as to say that the study of relations and functions is part of set theory, and so an individual who wants to use set theory to communicate should go to the trouble of learning which term goes with which. It's just that for some reason, I seem to have a hard time remembering which is which.

     

    I am not one who memorizes by rote.

     

     Something is lacking in the way set theory is laid out, and with an axiomatic set theory available, I doubt I would have any trouble remembering what an injective function is, or a surjective function is, or what a function is, for that matter.

     

     But, as I told you once long ago, I tried to learn axiomatic set theory from a fellow by the name of Patrick Suppes, and my conclusion was that "axiomatic set theory" is a bit off.

     

    I do not regard there as being a consistent and complete axiomatic set theory available to man.

  3. Coquina brought up a very good point, about the hip bone, which connects to the very reason I got involved in this thread in the first place... because almost twenty years ago, I figured out what killed the dinosaurs.

     

    I have sort of been dancing around the answer, to see what responses I'd get, but it seems this thread has died, so I will finally reveal my hidden motive for discussing the 'hip bone' issue, and the 'hopping' dinosaur issue.

     

     The theory came to me as I was reading an encyclopedia article on dinosaurs. It was an old encyclopedia, and there was an artist's rendition of a bunch of dinosaurs. The sketch was of a beach line, and showed some dinosaurs in the ocean. Off in the distance, there was a brontosaurus, with an extremely long neck. In the air, there was a pterodactyl flying.

     

    Now, I looked at this picture for a very long time, because something just didn't seem right, though I wasn't sure what.

     

    Then it hit me all at once.

     

    "How could something that big hold its neck up against the pull of gravity?How could something that heavy fly? Why were dinosaurs so big compared to the sizes of animals today?"

     

    Now at that time, I'd already learned the Newtonian theory of gravity, and I reasoned as follows:

     

    [math] F_{gravity} = G \frac{M_{earth} m_{dino}}{R^2} [/math]

     

    Dinosaur weight = w = mg, therefore

     

    [math] F_{gravity} = \frac{G}{g} \frac{M_{earth} w_{dino}}{R^2} [/math]
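
     A minimal numeric sketch (in Python) of the Newtonian weight formula above, assuming the standard modern values for G, Earth's mass, and Earth's radius; the 5000 kg "dinosaur" mass is only an illustration value, not a claim about any actual animal.

     [code]
     # Newtonian weight at Earth's surface: F = G * M_earth * m / R^2
     G = 6.674e-11        # gravitational constant, N m^2 / kg^2
     M_earth = 5.972e24   # mass of the Earth, kg
     R_earth = 6.371e6    # mean radius of the Earth, m
     m_dino = 5.0e3       # hypothetical dinosaur mass, kg (illustration only)

     F = G * M_earth * m_dino / R_earth**2
     print(F)             # about 4.9e4 N, i.e. m_dino * g with g ~ 9.8 m/s^2
     [/code]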

     

     

     Postulate: In the prehistoric past, before the moment of dinosaur extinction, the force of Earth's gravity was weaker.

     

     Let w denote the weight of the dinosaur before Earth's gravity changed, and let W denote the weight of the same dinosaur moments after Earth's gravity suddenly and dramatically changed.

     

     Right after something drastic happened, pterodactyls were too heavy to fly, and were pinned to the ground.

     

    The brontosaurus could no longer hold up his neck.

     

     Alligators, which used to be able to run, were now squashed.

     

     Snakes, which maybe used to have tiny legs, learned to slither on their bellies.

     

     Turtles now took to the water to ease the burden of the increase in gravity on their ability to move.

     

    Etc, you hopefully get the idea here.

     

     So then I thought, how much would be enough to devastate them?

     

    Doubling their weight seemed good, but possibly hard to achieve using the equation above.

     

    The idea ended up being this...

     

    Before the dinosaurs died, earth had a second moon, a prehistoric moon.

     

    Hence the prehistoric moon theory of dinosaur extinction.

     

     The moon spiraled in, blanketing the KT boundary with iridium as it spiraled around faster and faster.

     

     I figured that a 1.5-fold increase in their weight seemed appropriate, and doable, within the framework of Newton's gravity formula.

     

     So anyway, that's why I chimed in when I saw Coquina's comment about the hip bones. It made me think of my theory.

     

    Also one more thing.

     

     Before you go trying to shoot down a theory that can't possibly be shot down, let me send you one additional thing to think about, which is this...

     

    http://www.edwardtbabinski.us/mpm/struthers.html

     

     And here is the relevant quote:

     

    SEE THE TWO DIAGRAMS OF THE RIGHT WHALE'S PELVIS, FEMUR AND TIBIA

    (based on dissections by Struthers)

     

    "Nothing can be imagined more useless to the animal than rudiments of hind legs entirely buried beneath the skin of a whale, so that one is inclined to suspect that these structures must admit of some other interpretation. Yet, approaching the inquiry with the most skeptical determination, one cannot help being convinced, as the dissection goes on, that these rudiments [in the Right Whale] really are femur and tibia. The synovial capsule representing the knee-joint was too evident to be overlooked.

     

    The inescapable conclusion is this...

     

    Whales used to be able to walk on land.

     

    Which brings me to the very first post of this thread:

     

    does the idea of a huge meteor extinting 75% of all life forms satisfy you? yes or no.

     

    Nope.

     

     

    Regards

  4. It wasn't my personal choice, it's in the quadratic formula, as I explained to you.

     

     As for the product symbol, I use it in a manner consistent with the field axioms, nothing more, nothing less.

     

     And you aren't going to find too many non-professional mathematicians who will be able to explain the meanings of

     

    injection

    bijection

    surjection

     

    to you, and furthermore

     

     those definitions will boil down to a simple-minded usage of binary logic anyway; the point being that the definitions don't need to be known, since one can communicate using just binary logic, some set theory, and some first-order logic.

     

    Example given...

     

    More formally a function f : A → B is injective if, for every x and y in its domain A, if not (x =y) then also not( f(x) = f(y) ).

     

     And now I offer the following correction to this Wikipedia entry:

     

     Let f denote a function from set A to set B.

    f : A → B

     

     The function f is injective if and only if, for every x and y in its domain A, if not(x = y) then also not(f(x) = f(y)).
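
     As a small Python sketch of what that definition amounts to on a finite domain (the example mappings here are made up purely for illustration), injectivity just says distinct inputs never collide:

     [code]
     def is_injective(f, domain):
         # On a finite domain, f is injective exactly when the number of
         # distinct outputs equals the number of distinct inputs.
         return len({f(x) for x in domain}) == len(set(domain))

     print(is_injective(lambda x: 2 * x, range(5)))            # True
     print(is_injective(lambda x: x * x, [-2, -1, 0, 1, 2]))   # False: (-2)^2 == 2^2
     [/code]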

     

    When you define something, you also get the converse for free. It's not something that needs to be deduced.

     

    I guess my only point here, and it's rather minor, is that even if a person doesn't know the meaning of the term 'injective,' you can still discuss anything you want to about an injective function with that person, by using the definition.

     

    It may be more verbose, but you can still say anything you want, and furthermore you can tell them at that moment in time, that we call this kind of function injective.

     

     In my case, I remember the terms, but just not which term goes with which kind of function.

     

     And the reason for that was, I didn't like the material for some reason. I didn't like the way it was presented in any book I had, and after introducing the definition, the author never bothered to use it again.

     

    A seemingly pointless definition.

  5. In what sense would you say I am using it?

     

    Also keep in mind, the terms

     

    injective, bijective, surjective, never really took hold with me.

     

    I memorized the definitions, knew them for the test, and then dumped them.

     

     

     I know they are 'adjectives' which operate on 'functions'... classification of different types of functions and all, but I never committed the definitions to memory.

     

     I followed you about the 'principal' branch though. Not really an official definition, but better than nothing.

     

    I could swear that comes from complex variables though.

  6. no, what about the square root of 4? is it 2 or -2? no complex variables at all

     

     Suppose that x*x = 4.

     

    Then, either x=2 or x=-2.

     

    There are two roots to the equation.

     

    But if i see the following:

     

    [math] \sqrt {4} [/math]

     

    That single number will be 2. I don't look at that symbolism as representing two numbers.

     

     That's why in the quadratic formula we write:

     

    [math] \pm \sqrt{B^2 - 4AC} [/math]

     

    So if faced with the following

     

    Solve for x

     

    [math] x^2 = 4 [/math]

     

    The answer is expressed as

     

    [math] \pm \sqrt{4} = \pm 2 [/math]
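
     A small Python illustration of that convention: the radical (math.sqrt) returns only the principal, non-negative root, and the plus-or-minus is supplied separately, just as in the quadratic formula.

     [code]
     import math

     r = math.sqrt(4)       # the radical denotes the principal (non-negative) root
     print(r)               # 2.0, never -2.0

     solutions = [+r, -r]   # the "plus or minus" supplies both roots of x^2 = 4
     print(solutions)       # [2.0, -2.0]
     [/code]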

     

     So I still don't follow you about branches. I thought branches show up in complex variables, because a rotation through 2pi radians results in the same complex number.

  7.  

     you did omit branches when you claimed to define the power 1/3 or 1/7.

     

     Yes, well, branches as I recall show up when you use complex variables, right?

     

    Pick a complex number Z at random. Then, there are real numbers x,y such that:

     

     [math] Z = x + iy [/math]

     

    Where i is the square root of negative one. (which has its own problems)

     

     [math] Z = R e^{i \theta} = R[\cos \theta + i \sin \theta] [/math]

     

    Where

     

    [math] x^2 + y^2 = R^2 [/math]

     

     [math] \tan \theta = \frac{y}{x} [/math]
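
     A minimal Python sketch of that polar decomposition, using the standard library's cmath (the sample value 3 + 4i is arbitrary). It also hints at where branches come in: theta and theta + 2*pi describe the same complex number, so a branch is just a choice of which angle to report.

     [code]
     import cmath, math

     z = 3 + 4j
     r, theta = cmath.polar(z)      # R and the principal value of the angle
     print(r, theta)                # 5.0 and atan2(4, 3)

     # Adding 2*pi to the angle gives back (essentially) the same point,
     # which is why a branch has to be chosen for things like log and roots.
     print(cmath.rect(r, theta + 2 * math.pi))
     [/code]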

  8. In any case, none of this has indicated that you're explaining anything about 0^0 that is a genuine "funky" problem that we are glossing over in mathematics.

     

     That whole issue about 0^0 is secondary and deserves a thread of its own. Keep in mind, I was only answering someone else's question about the meaning of x^y.

     

    I am going to move this to a thread of its own.

  9. Incidentally, x^y is defined as exp{y log x}, defined for all x strictly positive, and even negative y, picking a branch of log. Branches being yet one more thing you forgot in your attempt to define powers.

     

     I wouldn't say I 'forgot', because I wouldn't use the natural log to define x^y, primarily because historically x^y preceded logarithms. To define it that way would be a bit anachronistic.

     

     Nonetheless, I am not sure why you keep writing that. I saw you write that somewhere else. Maybe you could explain why anyone would want to define x^y as exp{y log x}; at least then I could see where you are coming from.
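
     For what it's worth, here is a quick Python check of the identity being quoted, x^y = exp(y log x), for strictly positive x (the particular numbers are arbitrary):

     [code]
     import math

     x, y = 2.0, 0.37                          # any strictly positive x; any real y
     lhs = x ** y
     rhs = math.exp(y * math.log(x))
     print(lhs, rhs, math.isclose(lhs, rhs))   # the two agree up to rounding
     [/code]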

  10. You wrote

     

    [math]\prod_{k=1}^{y}x = x_1x_2\ldots x_y[/math]

     

     that is a false statement - the LHS and the RHS are not equal. There is nothing in your post that states what x_i is at all; we are forced to introduce what looks like y new variables.

     

     No, it's not false, you are not interpreting it properly.

     

     However, I do understand your point about notation, when you say, "what looks like y new variables." More frequently than not, the subscript under the same letter indicates a new variable... hmm.

     

     Yes, that notation is of my own choosing, so I suppose I have to state that the x_i are all equal, but I did that somewhere, after you commented on it earlier in the thread.

     

    But again, it's not false based upon the meaning.

     

    Do you have any better suggestion as regards notation for that?

     

    You do understand that the LHS and RHS are supposed to be equal at this point.

     

    Probably this would do just fine:

     

     [math] x^y \equiv \prod_{k=1}^{y} x = x_1 x_2 \ldots x_y [/math]

     

    where

     

    [math] x = x_1 = x_2 = \ldots = x_y [/math]

  11. Johnny5: I didn't ask for the meaning of x^y because I don't know what it means myself, but because you don't seem to have a total understanding of the expression.

     

    To be honest, x^y isn't hard to understand.

     

    Suppose that

     

    [math] A = x^{\frac{1}{3}} [/math]

     

    Now, cube both sides to obtain:

     

    [math] A^3 = x [/math]

     

    So pick any real number A, at random.

     

    Multiply A by itself y times.

     

    Suppose that y = 7, then we have:

     

    AAAAAAA=A^7

     

    Set that number equal to X

     

    X = A^7

     

    Now, take the seventh root of both sides, to obtain the number that you started with...

     

    [math] A = X^{\frac{1}{7}} [/math]

     

    So that gives you more of an idea about exponents.

     

     Really expressing the meaning of X^y is going to require a lot of work, but it can be done; it's not impossible.

     

    You just have to handle things one case at a time.

     

    Consider something like

     

    [math] A^{\sqrt{2}} [/math]

     

    Even this is allowable, and has a unique answer. You can approximate the answer by approximating root 2 as 1.414, and you can keep making a better approximation by using more decimals.
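
     A short Python sketch of that successive-approximation idea, with an arbitrary base A = 3: as the decimal approximation of root 2 improves, the computed power settles down toward A^sqrt(2).

     [code]
     import math

     A = 3.0   # arbitrary positive base, for illustration only
     for approx in (1.4, 1.41, 1.414, 1.4142, math.sqrt(2)):
         print(approx, A ** approx)
     # The printed values converge toward A ** math.sqrt(2).
     [/code]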

  12. were you able to invent something over the weekend, johnny?

     

     

    There's nothing to invent, I already know the basis of the argument. Let me see what the heck this thread was about again though.

     

    Ah yes this:

     

    Prove the following:

     

    [math]

    \Gamma (n+1) = n!

    [/math]

     

    By definition, the following statement is true:

     

     [math] \Gamma(n+1) \equiv \int_{t=0}^{t=\infty} t^n e^{-t} dt [/math]

     

    So we can focus on the integral.

     

    Here is the integration by parts formula:

     

    [math] \int u dv = uv- \int v du [/math]

     

    let dv = e^-t dt, whence it follows that v = -e^-t

    let u = t^n, thus du = n t^(n-1) dt

     

    Substituting we have:

     

    [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = -t^n e^{-t} |_{t=0}^{t=\infty} + n\int_{t=0}^{t=\infty} e^{-t} t^{n-1} dt [/math]

     

    Therefore:

     

    [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = 0 + n\int_{t=0}^{t=\infty} e^{-t} t^{n-1} dt [/math]

     

    Now, replace n by n-1 in the statement above, to obtain:

     

    [math] \int_{t=0}^{t=\infty} t^{n-1} e^{-t} dt = (n-1) \int_{t=0}^{t=\infty} e^{-t} t^{n-2} dt [/math]

     

    Thus, we have:

     

    [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1) \int_{t=0}^{t=\infty} e^{-t} t^{n-2} dt [/math]

     

    Continuing on in this manner, eventually, you will reach the following integral:

     

    [math] \int_{t=0}^{t=\infty} e^{-t} t dt [/math]

     

    This integral will be reached when n-k=1, hence when n=k+1.

     

    When k=1, the coefficient was n. When k=2, the coefficient was n(n-1). When k=3, the coefficient would be n(n-1)(n-2). Thus, when k=(n-1), the coefficient would be n(n-1)(n-2)(n-3)... (n-(n-1-1)) or rather

     

    n(n-1)(n-2)...(n-n+1+1)

     

    Therefore:

     

    [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)\int_{t=0}^{t=\infty} e^{-t} t dt [/math]

     

    Now, evaluate the following integral:

     

    [math] \int_{t=0}^{t=\infty} t e^{-t} dt [/math]

     

    Using the integration by parts formula, with u=t, and dv = e^-t dt we have:

     

     [math] \int_{t=0}^{t=\infty} t e^{-t} dt = -te^{-t} |_{t=0}^{t=\infty} + \int_{t=0}^{t=\infty} e^{-t} dt [/math]

     

    Whence it follows that:

     

    [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)\int_{t=0}^{t=\infty} e^{-t} dt [/math]

     

    Now, evaluate the following integral:

     

    [math] \int_{t=0}^{t=\infty} e^{-t} dt [/math]

     

    And this is easy.

     

     [math] \int_{t=0}^{t=\infty} e^{-t} dt = -e^{-t} |_{t=0}^{t=\infty} = e^{-t} |_{t=\infty}^{t=0} = 1 - 0 = 1 [/math]

     

    Hence:

     

     [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)(1) = n! [/math]

     

    The LHS is the gamma function of n+1, therefore:

     

    [math]

    \Gamma (n+1) = n!

    [/math]

     

    QED
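
     As a quick numerical cross-check of the identity just derived (not a proof, just a sanity check using Python's standard-library gamma function):

     [code]
     import math

     for n in range(6):
         # Gamma(n + 1) should equal n! for non-negative integers n
         print(n, math.gamma(n + 1), math.factorial(n))
     [/code]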

     

     Oh wait, that is the proof you cited, which was used at Wolfram; I just looked down far enough, and saw it.

     

     Man, I feel like I did all that for nothing.

     

    Regards

  13. No, you are introducing a deduction that the reader will guess x_i = x for all i.

     

     That's why I gave an example. You are just splitting hairs now. Some of your other comments are noteworthy, but this one is just splitting hairs.

  14. It is not necessary to define anything in mathematics, but so long as your definitions are consistent with everything you hold to be mathematically true, you are making no logical error, and then you can 'do some math'.

     

     The definitions which I have presented thus far were to answer someone else's question.

     

    They wanted to know the meaning of x^y.

     

     Now, the field axioms are being used, and this I feel goes without saying.

     

     

    As for the suffixes, they are not necessary at all, but they help you to understand how many times x is being multiplied by itself. In other words, there is nothing wrong with defining using the suffixes, since it is clear that

     

     [math] x_1 = x_2 = x_3 = \ldots = x [/math]

     

    The suffixes are helpful, though not necessary. But, if you omit them, then you are assuming that the reader can infer the number of times x is being multiplied by itself, or 1/x is being multiplied by itself. By using the suffixes, you are lessening the number of deductions to be made by the external reasoning agent.

  15. there should be no suffixes on the x's on the rhs

     

     

    Why not?

     

     No, that is not true - unless you are claiming 0^{-1} is a real number.

     

    I went back and fixed that.

     

     note the word DEFINE in there. There is no reason to assume x^0 to be defined for any x.

     

     Yes, I know. But x^y is being defined for as many cases as are possible, in stages, because his question was what does x^y mean.

     

     

    but this presumes that x^0 is defined a priori. there is no reason to assume that.

     

     No, x^0 was not being defined right there; that was a theorem that x^0 must equal 1 (given that not(x=0)), as a consequence of the fact that x^m times x^n must be equal to x^{m+n}.

  16. To both of you...

     

     I just read the last two posts (each once), and was quite impressed. Let me see if I have any worthwhile comments to make.

     

    There is sort of a blind alley with 0^0, sort of not-defined-thing. I must ask another time: What does the expression x^y mean?

     

    You ask me, what does x^y mean?

     

    In the case where y is a natural number, we have the following definition:

     

    Definition:

    [math] x^y = \prod_{k=1}^{k=y} x = x_1 \cdot x_2 \cdot x_3 ... x_y [/math]

     

    So for example, if y = 3, we have, using the definition above:

     

    [math] x^3 = \prod_{k=1}^{k=3} x = x \cdot x \cdot x [/math]

     

    In the case where y is a negative integer, and not(x=0), we have the following definition:

     

    Definition:

    [math] x^y = \prod_{k=1}^{k=-y} \frac{1}{x} = \frac{1}{x_1} \cdot \frac{1}{x_2} \cdot \frac{1}{x_3} ... \frac{1}{x_{-y}} [/math]

     

    Where [math] x_i =x [/math] for any i.

     

    So, for example if y is -4, we have:

     

    [math] x^{-4} = \prod_{k=1}^{k=4} \frac{1}{x} = \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} = \frac{1}{x^4} [/math]

     

     

     

     Now, we practically have the meaning of x^y for all integers y, since we have handled two out of three mutually exclusive and collectively exhaustive cases, and x was arbitrary in the case of y being a natural number (and nonzero in the case of y being a negative integer). The only integer left to define is y=0. Once this has been done, you will have the meaning of x^y for all integers y, and any real number x, with the one exception of the case (y a negative integer and x=0).

     

    Case I: y=0 and not(x=0)

     

    Using the field axioms, we can prove that x^0 = 1.

     

    Assume we have already proven that:

     

    [math] x^m \cdot x^n = x^{(m+n)} [/math]

     

    Let m=0, so that we have:

     

    [math] x^0 \cdot x^n = x^{(0+n)} = x^n [/math]

     

    given that not(x^n = 0) we have:

     

    [math] x^0 = x^n \cdot \frac{1}{x^n} = 1 [/math]

     

    Now, if x^n=0 then x=0, so we have handled the case properly, for x^0, given that not(x=0).
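
     A small Python sketch of the case-by-case definition above, written as repeated multiplication rather than with the built-in power operator (positive y, negative y with x nonzero, and the y = 0 case just proved):

     [code]
     def int_pow(x, y):
         # x^y for integer y, following the case split above
         if y > 0:
             result = 1
             for _ in range(y):       # multiply x into the product y times
                 result *= x
             return result
         if y < 0:
             result = 1
             for _ in range(-y):      # multiply 1/x into the product -y times
                 result *= 1 / x      # requires x != 0
             return result
         return 1                     # y == 0, with x != 0 understood

     print(int_pow(2, 3), int_pow(2, -4), int_pow(5, 0))   # 8, 0.0625, 1
     [/code]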

     

    Now, we have only the case 0^0 to consider.

     

    All I can offer you here, is the following...

     

     

     Theorem: if 0! = 1 then 0^0 = 1.

     

    Proof

     

    consider e^x

     

    [math] e^x = \frac{x^0}{0!} +\frac{x^1}{1!} +\frac{x^2}{2!} + ... [/math]

     

    In the case where x=0, we must have:

     

    [math] e^0 = \frac{0^0}{0!} [/math]

     

    And it has already been proven that A^0 must equal 1, if not(A=0), hence we must have:

     

    [math] 1 = \frac{0^0}{0!} [/math]

     

     Therefore, if we insist that 0! = 1, then it must follow that:

     

    [math] 1 = \frac{0^0}{1} = 0^0 [/math]

     

     Therefore, if 0! = 1 then 0^0 = 1.

     

    QED

  17.  

     if ij = a+bi+cj then multiplying by i on both sides means

     

    -j=ai-b+cij=ai-b+a+bi+cj

     

     thus c=-1, a-b=0 and a+b=0 or a=b=0 so ij = -j, but then i(1+j)=0 so we have zero divisors, which means it can't be a field, thus meaning it isn't that much like C.

     

    Introducing k removes this problem, though we lose commutativity so strictly we only have a division algebra not a field.

     

     All of these terms would pretty much have been meaningless to Hamilton, I imagine.

     

    I think you made a mistake up there. You seem to have lost a c?

     

    You start off with

     

    if ii=-1 & ij = a+bi+cj then

     

    iij=i(a+bi+cj)

     

    hence

     

    -j = ai-b+cij

     

     (see, I think you lost that boldfaced c right there)

     

    And since the assumption is that ij = a+bi+cj it would follow that:

     

    -j = ai-b+cij = ai-b+c(a+bi+cj )

     

     

    You say "thus c=-1" but i do not see how you drew that conclusion.

     

    If I assume that c=-1, then we have:

     

    -j = ai-b-ij = ai-b-(a+bi-j )

     

     

    ai-ij = ai-a-bi+j

     

    -ij = -a-bi+j

     

    ij = a+ bi - j

     

     

    Which oddly enough is true, if c=-1.

     

    But how did you draw the following three conclusions?

     

    1. c=-1

    2. a=0

    3. b=0

     

    ?

     

    I just don't see it.
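
     For what it's worth, here is the coefficient comparison done as a sketch in Python with sympy, keeping the c in question, and assuming a, b, c are real and that 1, i, j are linearly independent over the reals; on that reading the system has no real solution at all.

     [code]
     import sympy as sp

     a, b, c = sp.symbols('a b c', real=True)

     # With the c kept, i*(ij) = -j expands to
     #     -j = (c*a - b) + (a + c*b)*i + c**2 * j
     # Matching coefficients of 1, i and j (treating them as linearly
     # independent over the reals) gives three equations:
     eqs = [sp.Eq(c*a - b, 0), sp.Eq(a + c*b, 0), sp.Eq(c**2, -1)]

     print(sp.solve(eqs, [a, b, c]))   # [] -- no real a, b, c satisfy all three
     [/code]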

     

    Regards

  18. The product symbol cannot be used under any other circumstances, at least as long as we use the given convention, and that is indeed what we do.

     

    I don't see what is so hard to follow about the commutativity of multiplication.

     

     Perhaps I can demonstrate what I'm saying with addition, which is a simpler concept to understand than multiplication...

     

     

    Consider the following sum:

     

    [math] \sum_{n=1}^{n=3} n = 1+2+3 [/math]

     

    Consider the following sum:

     

    [math] \sum_{n=3}^{n=1} n = 3+2+1 [/math]

     

     The first is equal to 3+3=6, and the second is equal to 5+1, which is also equal to 6, hence the two sums are equivalent. Yes, we are permitted to draw the conclusion that they are equivalent, since 6=6.

     

    Through good old trial and error you can convince yourself that:

     

    [math] \sum_{n=a}^{n=b} f(n) = \sum_{n=b}^{n=a} f(n) [/math]

     

    The above can be interpreted as an instance of commutativity of addition, in summation notation.
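
     Here is that reading of the summation symbol as a small Python sketch (reversed bounds simply run the index downward); note that the more common textbook convention instead treats a sum whose lower index exceeds its upper index as empty, so this encodes the interpretation being argued for here.

     [code]
     def directed_sum(f, a, b):
         # When a > b, run the index downward from a to b instead of upward.
         step = 1 if a <= b else -1
         return sum(f(n) for n in range(a, b + step, step))

     print(directed_sum(lambda n: n, 1, 3))   # 1 + 2 + 3 = 6
     print(directed_sum(lambda n: n, 3, 1))   # 3 + 2 + 1 = 6, the same total
     [/code]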

     

     I have to ask another question: Why do you (Johnny5) have such severe problems with conventions? When people in some cases use the convention 0^0 = 1, this should not be a problem at all. Firstly, 0^0 is not defined, so the convention would not work against anything, and secondly, as Matt Grime already has written, people only use the convention when they say they use it, and not without saying. It seems that for you, it is the not-defined part that is the problem. But have you then asked yourself what the expression x^y means?

     

     It's not that I have a problem with conventions; there is something else going on with 0^0 that is not a "conventional" issue.

     

    Look at it this way...

     

    When x isn't equal to zero, there is a simple proof that x^0=1.

     

    And, when x=0, there is a simple proof that x*y=0, for any y.

     

    So there's a sort of blind alley with 0^0.

     

     A sort of 0-versus-1 thing.

     

    As I say, it's not that I have a problem with conventions, but few things in mathematics are conventions. Most of the structure of mathematics is purely logical, and this is why the subject attracts good minds IMHO.

     

     I'm not sure yet what the best way to handle 0^0 and 0! is. For now I do use the conventions, but still, in the back of my mind, something isn't right.

     

    Regards

     

     

     PS: And lastly, you can use whatever conventions you wish to, and vice versa, and as long as we state what they are, and that we are using them in such and such an instance, no confusion can result. But keep in mind that not all mathematical issues can be arbitrarily decided; when an issue isn't up to a random choice, or convention, logic must be used to make the decision... not human whim.

  19. Consider the product from k=1 to k=2 of some arbitrary function of k, f(k).

     

    [math] \prod_{k=1}^{k=2} f(k) = f(1)f(2) [/math]

     

     In the 'scalar multiplication' being considered here, both f(1) and f(2) are elements of the real number system. They are not matrices, or anything else which doesn't necessarily commute. Hence...

     

    f(1)f(2) = f(2)f(1)

     

     In the case where the lower index is greater than the upper index we have:

     

    [math] \prod_{k=2}^{k=1} f(k) = f(2)f(1) [/math]

     

    Since the kind of multiplication being considered here is commutative we have:

     

    [math] \prod_{k=1}^{k=2} f(k) = \prod_{k=2}^{k=1} f(k) [/math]

     

    Then an induction argument will prove the general case.

     

     

    A computer could carry out the algorithm as follows:

     

     First compare the indices. If the lower index is less than the upper index, the variable k will be incremented by one unit repeatedly, until k is equal to the upper index. On the other hand, if the lower index is greater than the upper index, then the variable k will be decremented by one unit repeatedly, until equaling the upper index.
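
     A small Python sketch of exactly that procedure (again, under the reading in which reversed bounds run the index downward rather than giving an empty product):

     [code]
     def directed_product(f, lower, upper):
         # Compare the indices, then step k toward the upper index one unit
         # at a time, multiplying f(k) into the running product.
         step = 1 if lower <= upper else -1
         result = 1
         k = lower
         while True:
             result *= f(k)
             if k == upper:
                 break
             k += step
         return result

     print(directed_product(lambda k: k, 1, 3))   # 1*2*3 = 6
     print(directed_product(lambda k: k, 3, 1))   # 3*2*1 = 6, the same by commutativity
     [/code]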

     

    And this is fine for the case where the multiplicand (that which is interior to the product symbol) is a real number.

     

     So the equality which I've repeatedly stated is a consequence of the field axioms. To say anything else would be to say, "This axiom is true, and it is false."

     

    In other words, I am not making a suggestion, or a convention. I am informing you that what I am saying must be true, if that which is interior to the product symbol is a real number.

     

    Regards to all.
