
johnny5 - an edited post.


matt grime


Here is an inline synopsis without using latex, and with lots of deletions to save space. The original is post 13 in the thread on manual evaluation of exponents. Johnny asked me to point out where I thought it was wrong, so here it is.

 

 

No, it is easy, once you understand your first problem. Let me finish the computation of the square root of two for you.

 

 

 

Now, focus on the following quantity:

 

[math] \prod_{k=0}^{k=1} \frac{\frac{3}{2}-k}{k} [/math]

 


 

this is not defined when k=0

 

When k=0 we appear to have a division-by-zero error; however, this is avoided because 0! is defined to be equal to 1. This requires some explanation though.

 

no, you are presuming that 0! is writable as a product (it is - the empty product, but I suspect you don't get that). You have rewritten the Taylor series and introduced a mistake. The next bit is you attempting to correct this self-introduced mistake.

 

Look at 1/n! for a moment.....

 

<unnecessary latex cut>

 

Multiplication is commutative, therefore:

 

 

yes, why tell us?

 

[math] \prod_{k=1}^{k=0} \frac{1}{k} = \prod_{k=0}^{k=1} \frac{1}{k} [/math]

 

Now, if we expand the product, we have:

 

[math] \prod_{k=0}^{k=1} \frac{1}{k} = (\frac{1}{0}) (\frac{1}{1}) [/math]

 

note this product is indexed from 1 to 0. This isn't allowed.

 

By transitivity, it now follows that:

 

[math] 1 = \frac{1}{0} [/math]

 

Which is false. This contradiction was caused by the definition 0!=1.

 

no, it was caused by you thinking 0! was a product from 0 to 1. At best that would be an empty product (there is nothing to multiply together), and the empty product is 1 as well.

 

However let us look at our situation...

 

<more editing>

 

But this leads to an error in Newton's binomial formula. Let me explain.

 

Here is Newton's binomial formula again:

 

[math] (1+x)^\alpha = \sum_{n=0}^{n=\infty} x^n \prod_{k=0}^{k=n} \frac{\alpha +1-k}{k} [/math]

 

 

Consider the case where alpha = 2, and x is an arbitrary real number.

 

By direct substitution, we have:

 

 

[math] (1+x)^2 = \sum_{n=0}^{n=\infty} x^n \prod_{k=0}^{k=n} \frac{2+1-k}{k} [/math]

 

Using only the axioms of the real number system, we can figure out what the left hand side is.

 

 

erm, you just multiply it out; there's no need to dress it up

 

[math] (1+x)^2 = (1+x)(1+x) = 1+x+x+xx = 1+2x+x^2 [/math]

 

Now figure out what the RHS must equal.

 

 

[math] \sum_{n=0}^{n=\infty} x^n \prod_{k=0}^{k=n} \frac{2+1-k}{k} = \sum_{n=0}^{n=\infty} x^n \prod_{k=0}^{k=n} \frac{3-k}{k} = \sum_{n=0}^{n=\infty} x^n \frac{1}{n!} \prod_{k=0}^{k=n} (3-k) [/math]

 

 

 

The first term of the series is:

 

 

[math] x^0 \frac{1}{0!} \prod_{k=0}^{k=0} (3-k) = x^0 (3-0) = 3x^0 [/math]

 

 

Using the axioms of algebra, it is easy to prove that

 

if not(x=0) then x^0 = 1.

 

Thus, in the case where x isn't equal to zero, we must have:

 

I thought you were explaining an error in Newton's formula? Well? What about it? Now you're onto something else entirely.

 

 

 

 

 

<more snipping>

 

So that, regardless of the value of x, it needs to be true that:

 

[math] 1 = \frac{0^0}{0!} [/math]

 

this is true in this context. Both of those expressions are declared equal to 1, and all of the stuff I've snipped is unnecessary.

 

 

Now, because we have started off the multiplication from 1, instead of zero, we are not faced with a division-by-zero error. The only problem is that the formula is not as compact as we would like it to be.

 

 

we never were faced with a problem; you were, owing to mixing up notations and not defining things properly.

 

Before going any further, we can try to use the formula above to compute the square root of two, which is the main point of this post.

 

Ha! Given how much time and bandwidth you just wasted, doesn't that strike you as odd? You don't even compute the square root of 2 in the first post!

 

 

However, from the top index, you can see that we have a division-by-zero error. Let us pull out the k, as n!, so that we have this:

 

if there were a division-by-zero error then simply manipulating the symbols wouldn't fix it. It is all a product of your choice of convention and of writing an expansion in an ill-advised way.

 

 

 

Suppose that we stipulate that the n=0 term of the series is equal to 1; then we can write:

 

we (the rest of the mathematical community) already do

 

From which it follows that we must define 0 factorial to be equal to 1.

 

it already is

 

 

 

Now, look at the definition of n factorial for a moment:

 

yes, when k is strictly greater than 0, and 0! is 1

 

<rest of unnecessary post snipped>

 

<end of post 1; note: no single calculation of sqrt(2)>

 

 

The second post calculates sqrt(2) in some sense, but does not actually prove the series converges; it merely evaluates a few terms.

 

Then there is a post on the radius of convergence. The ratio test does not show the series converges at x=1, nor does it show it diverges.

 

All of those posts could have been summed up as:

 

there is a formula for evaluating (1+x)^t when |x|<1:

 

(1+x)^t = 1 + tx + t(t-1)x^2/2! + t(t-1)(t-2)x^3/3! + ...

 

if t is a positive integer this agrees with the binomial expansion; if it is negative or fractional, the series converges for |x|<1. It may also converge when x=1, and it does when t=1/2, so that the square root of 2 can be written as

 

1 + 1/2 + (1/2)(-1/2)/2! + (1/2)(-1/2)(-3/2)/3! + ...

 

If you work out the first few terms and add them up, you'll see it is quite a good approximation after only a few additions. I won't prove it converges here.
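
For the curious, here is a minimal sketch of that arithmetic (Python; the code and names are mine, not anything from the posts above) that adds up partial sums of the series and compares them with sqrt(2):

[code]
import math

def binomial_partial_sum(t, x, n_terms):
    """Sum the first n_terms of 1 + t*x + t(t-1)x^2/2! + ..."""
    total = 0.0
    coeff = 1.0  # running value of t(t-1)...(t-n+1)/n!
    for n in range(n_terms):
        total += coeff * x**n
        coeff *= (t - n) / (n + 1)  # build the next coefficient from this one
    return total

# Partial sums for (1+1)^(1/2); they settle toward sqrt(2) ~ 1.41421...
for n in (2, 4, 8, 16, 32):
    s = binomial_partial_sum(0.5, 1.0, n)
    print(n, s, abs(s - math.sqrt(2)))
[/code]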

 

If you're interested, this comes from the "Taylor series" of (1+x)^t, a technique that lets us write lots of functions as power series like this.

 

 

 

6 lines? some unnecessary, really, no latex and quite clear.



There are so many things here, I cannot possibly address them all at once, so I will address them one at a time.

 

You mention this:

 

[math] \prod_{k=0}^{k=1} \frac{\frac{3}{2}-k}{k} [/math]

 

If you pull k out of the product symbol you have:

 

[math] \frac{1}{n!} \prod_{k=0}^{k=1} \left( \frac{3}{2}-k \right) [/math]

 

Which is fine at k=0. Now, when n=0, n! =1, because 0! is defined to be equal to 1, so consider the following...

 

[math] \prod_{k=0}^{k=n} \frac{\frac{3}{2}-k}{k} [/math]

 

The above is defined when k=n=0, because 0! is defined to be 1.

 

As for the product from k=0 to 1 of (3/2-k)/k, I know for a fact that division by zero isn't allowed, so the above isn't defined; I made a point of pointing out this problem in my post, and showed that it suggests there is a problem with defining 0!=1. I derived a contradiction somewhere.

 

Actually now I see what you are talking about, where you say I corrected a self-introduced mistake. That's not what I did. I am well aware of how to use the symbols I was manipulating. The first 'mistake' was intentional, to reveal a problem with 0!=1, which I know about.

 

The 'fix', as you put it, was merely me writing the actual formula which avoids confusion, by writing the first term of the series explicitly, and the rest of it as a series, without any worry about division-by-zero issues. I was well aware of what I was doing.

 

 

Someplace you say, "why tell us?" in reference to me using the fact that multiplication is commutative.

 

Are you asking me to explain to you why multiplication is commutative? I don't understand your question.

 

Next you say that you cannot index the product from 1 to zero. In general the product from k=1 to k=0 is equivalent to the product from k=0 to k=1, since multiplication is commutative. So unless there was some constraint somewhere that I missed, they are equivalent. I would need to see the line of my work which is bothering you.

 

I have no idea what the empty product is, so you are right about that. And I don't think that 0! is the product from 0 to 1 or whatever you are saying. In fact, I think there is a problem with 0!=1 and 0^0 = 1, as I recently explained to you.

 

The rest of your post seems to be you misreading something which I subtly glossed over, which is an issue about 0!=1 and 0^0=1, which I no longer wish to belabor.

 

Once upon a time I asked you to prove that 0!=1 and 0^0=1, which proof you never supplied. I am no longer waiting.

 

Regards


You mention this:

 

[math] \prod_{k=0}^{k=1} \frac{\frac{3}{2}-k}{k} [/math]

 

 

this is undefined so what follows is spurious.

 

to reveal a problem with 0!=1, which I know about.

 

there is no problem.

 

 

Someplace you say, "why tell us?" in reference to me using the fact that multiplication is commutative.

 

Are you asking me to explain to you why multiplication is commutative? I don't understand your question.

 

I am asking you to tell us why you thought it necessary to remind us that multiplication of real numbers is commutative.

 

Next you say that you cannot index the product from 1 to zero. In general the product from k=1 to k=0 is equivalent to the product from k=0 to k=1, since multiplication is commutative. So unless there was some constraint somewhere that I missed, they are equivalent. I would need to see the line of my work which is bothering you.

 

you may not reverse the meaning of the upper and lower terms; it is not like integration.

 

[math]\prod_{k=r}^{k=s}x_k[/math]

 

means the product

[math] x_rx_{r+1}\ldots x_s[/math]

 

this is not defined unless r <= s; you are just misusing notation, that is all I am saying.

 

 

 

Once upon a time I asked you to prove that 0!=1 and 0^0=1, which proof you never supplied. I am no longer waiting.

 

Regards

 

 

There is no proof of those since they are conventions.

 

I can prove the following:

 

lim x^0 as x tends to zero is 1; this fact is what we use in Taylor series when stating 0^0=1.

 

lim 0^x as x tends to 0 (from above) is 0; this is why we don't have a universal and unequivocal meaning for 0^0.

 

lim x^x as x tends to zero (again from above) is 1, since x^x is exactly exp(x log x) and x log x tends to 0 as x tends to zero.
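
A quick numeric illustration of those three limits (a Python sketch of my own, not part of the original post):

[code]
# x**0 stays at 1, 0**x stays at 0 (for x > 0), and x**x = exp(x*log(x))
# creeps up to 1 as x approaches 0 from above.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, x**0, 0**x, x**x)
[/code]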

 

 

Now, as for factorials, 0! is a useful convention and, as it happens, is the number of orderings of the empty set - it has exactly one, the empty ordering - but that is almost, again, a by-fiat definition.

 

I cannot prove to you 0!=1 since it is not something one proves, it is a convention.
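
For what it's worth, the empty-product convention is also what standard library routines follow; a small Python check (math.prod requires Python 3.8+):

[code]
import math

print(math.prod([]))      # 1 -- the product of no factors at all
print(math.factorial(0))  # 1 -- consistent with 0! as the empty product
[/code]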


you may not reverse the meaning of the upper and lower terms; it is not like integration.

 

[math]\prod_{k=r}^{k=s}x_k[/math]

 

means the product

[math] x_rx_{r+1}\ldots x_s[/math]

 

this is not defined unless r <= s; you are just misusing notation, that is all I am saying.

 

Exactly what are you saying?

 

In the end, after you expand a product, you have a sequence of terms being multiplied together:

 

ABCDEFG

 

And since multiplication is commutative:

 

ABCDEFG=GFEDCBA

 

Why in the world would you want the notation to force you to multiply in some specific order, and not allow an equation to be formed such as...

 

 

[math] \prod_{k=a}^{k=b} f(k) = \prod_{k=b}^{k=a} f(k) [/math]

 

Which statement is proven using nothing more than commutativity of multiplication.

 

 

Or to rephrase my question another way, prove that the statement above leads to a contradiction.


Why should it lead to a contradiction? I didn't say it did. I said you're ignoring the fact that the index of a product (or sum) is an ordinal, and indexing it from 1 to 0 isn't using an ordinal.

 

I could adopt other conventions, but they are just conventions, Johnny; they are not empirically true.


I didn't say it had to be an ordinal.

 

And whose convention? Not mine.

 

Let me google something, see what I find, not that it will change my stance.

 

Well, I didn't find anything, but I see absolutely no reason at all to choose some strange-minded convention, when I will always be able to use commutativity of multiplication to prove my statement.

 

I am at a loss as to what you are saying, I guess.


Well, you're doing mathematics using mathematical conventions, or you ought to be. If you are using your own interpretations of things but still concluding that mathematics is flawed rather than your attempts to do it, then that is even more wrong.

 

As it is, by DEFINITION 0!=1 in maths and this causes no problems.

 

By convention, in certain parts of mathematics we adopt the identification of 0^0=1 too. There are no issues since these conventions only apply within the areas where they are declared true.

 

As it is, I don't see this thread going anywhere, especially if you adopt a stance you won't change, one different from the mathematical ones, and then proceed to deduce that mathematics contains some "funky problems" with its definitions.

 

You have written an expression that involves dividing by zero, then manipulated it to get rid of this error, and you have issues with 0! and so on. This is a fault of your chosen conventions.


I said "ought" since you are drawing conclusions about the mathematical interpretation of things so you ought to adopt the mathematical meanings for those things, and accept their limits, and understand what they do.

 

If I attempt to do geometry in the hyperbolic sense using the laws of Euclidean geometry, I will not draw any good conclusions, will I?


If I attempt to do geometry in the hyperbolic sense using the laws of Euclidean geometry, I will not draw any good conclusions, will I?

 

 

Nope. But before digressing, what is 'wrong' with this:

 

 

[math]

\prod_{k=a}^{k=b} f(k) = \prod_{k=b}^{k=a} f(k)

[/math]

 

 

You don't catch what I'm saying. Since multiplication has to be commutative, the statement above must be true. There is no reason to adopt any other 'convention', since commutativity of multiplication forces the statement above upon us.

 

That's all I'm saying.


No, it must not force that upon us, and there is no reason to suppose it should; moreover, multiplication does not have to be commutative (it isn't for matrices). Even with real numbers, if the product, or summation, is over an infinite index then the order of the operations affects the outcome - rearrange the series for log(2) using the Taylor expansion and the outcome equals log(3/2).

 

You are just breaking a convention, nothing more.
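
To make the point about infinite rearrangements concrete, here is a sketch (Python, my own code; it uses a different, simpler rearrangement than the log(3/2) one mentioned above): taking one positive term of the alternating harmonic series followed by two negative ones changes the sum from log 2 to (1/2) log 2.

[code]
import math

def alternating(n_terms):
    # 1 - 1/2 + 1/3 - 1/4 + ... (the usual order; sums to log 2)
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ... (same terms, different order)
    total = 0.0
    for i in range(1, n_blocks + 1):
        total += 1 / (2 * i - 1) - 1 / (4 * i - 2) - 1 / (4 * i)
    return total

print(alternating(100000), "vs", math.log(2))      # ~0.693
print(rearranged(100000), "vs", math.log(2) / 2)   # ~0.347
[/code]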


There is no reason to adopt any other 'convention', since commutativity of multiplication forces the statement above upon us.

That's all I'm saying.

 

multiplication does not have to be commutative (it isn't for matrices). Even with real numbers, if the product, or summation, is over an infinite index then the order of the operations affects the outcome - rearrange the series for log(2) using the Taylor expansion and the outcome equals log(3/2).

 

 

Why must it not force that upon us?

 

did you not read his post at all?


Don't forget this:

 

Why should it lead to a contradiction? I didn't say it did. I said you're ignoring the fact that the index of a product (or sum) is an ordinal, and indexing it from 1 to 0 isn't using an ordinal.


Consider the product from k=1 to k=2 of some arbitrary function of k, f(k).

 

[math] \prod_{k=1}^{k=2} f(k) = f(1)f(2) [/math]

 

In the 'scalar multiplication' being considered here, both f(1) and f(2) are elements of the real number system. They are not matrices, or anything else which doesn't necessarily commute. Hence...

 

f(1)f(2) = f(2)f(1)

 

In the case where the lower index is greater than the upper index we have:

 

[math] \prod_{k=2}^{k=1} f(k) = f(2)f(1) [/math]

 

Since the kind of multiplication being considered here is commutative we have:

 

[math] \prod_{k=1}^{k=2} f(k) = \prod_{k=2}^{k=1} f(k) [/math]

 

Then an induction argument will prove the general case.

 

 

A computer could carry out the algorithm as follows:

 

First compare the indices. If the lower index is less than the upper index, the variable k will be incremented by one unit repeatedly, until k is equal to the upper index. On the other hand, if the lower index is greater than the upper index, then the variable k will be decremented by one unit repeatedly, until equalling the upper index.

 

And this is fine for the case where the multiplicand (that which is interior to the product symbol) is a real number.
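
In code, the algorithm described above might look something like this (a Python sketch; the function and names are mine, and it implements the bidirectional convention being proposed here, not the standard one):

[code]
def directed_prod(f, lower, upper):
    # Step k from lower toward upper, one unit at a time, in whichever
    # direction is needed, multiplying f(k) into the running product.
    step = 1 if lower <= upper else -1
    total = 1
    k = lower
    while True:
        total *= f(k)
        if k == upper:
            break
        k += step
    return total

f = lambda k: k + 1
print(directed_prod(f, 1, 3))  # f(1)f(2)f(3) = 2*3*4 = 24
print(directed_prod(f, 3, 1))  # f(3)f(2)f(1) = 24 as well, by commutativity
[/code]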

 

So the equality which I've repeatedly stated is a consequence of the field axioms. To say anything else would be to say, "This axiom is true, and it is false."

 

In other words, I am not making a suggestion, or a convention. I am informing you that what I am saying must be true, if that which is interior to the product symbol is a real number.

 

Regards to all.


I will ask a simple question, Johnny5: What does the product symbol mean?

 

And the answer: It is just a product symbol. By definition, this symbol can only be used under the following circumstance:

 

Suppose n and m are integers, and that [MATH]n \leq m[/MATH]. Also, suppose f is a function such that [MATH]f(i)[/MATH] is defined for all integers i that satisfy [MATH]n \leq i \leq m[/MATH], and suppose we have defined some multiplication on the set in which all [MATH]f(i)[/MATH] are members. Then we can use the product symbol, by which we mean that

 

[MATH]\prod_{i = n}^{m} f(i) := f(n) \cdot f(n + 1) \cdot ... \cdot f(m)[/MATH].

 

(I have possibly not given all the restrictions, which would be a result of my lack of knowledge.)
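
That restricted definition could be sketched like so (Python; the names are mine; note some texts would instead call the n > m case the empty product, 1, but the definition above leaves it undefined):

[code]
def prod_symbol(f, n, m):
    if n > m:
        raise ValueError("undefined: lower index exceeds upper index")
    result = 1
    for i in range(n, m + 1):  # f(n) * f(n+1) * ... * f(m)
        result *= f(i)
    return result
[/code]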

 

The product symbol cannot be used under any other circumstances, at least as long as we use the given convention, and that is indeed what we do. So to write that

 

[math]\prod_{k=a}^{b} f(k) = \prod_{k=b}^{a} f(k)[/math]

 

is in fact completely meaningless, unless [MATH]a = b[/MATH]. On the other hand, the expression

 

[math]\prod_{k=a}^{b} f(k) = \prod_{k=a}^{b} f(a + b - k)[/math]

 

in some situations makes sense (and there you have your commutative law).

 

 

I have to ask another question: Why do you (Johnny5) have such severe problems with conventions? When people in some cases use the convention 0^0 = 1, this should not be a problem at all. Firstly, 0^0 is not defined, so the convention would not work against anything, and secondly, as Matt Grime has already written, people only use the convention when they say they use it, and not without saying. It seems that for you, it is the not-defined part that is the problem. But have you then asked yourself what the expression x^y means?

 

(A little digression: The product symbol can also be used in expressions such as [MATH]\lim_{m\to\infty}\prod_{i = n}^{m} f(i)[/MATH].)


The product symbol cannot be used under any other circumstances, at least as long as we use the given convention, and that is indeed what we do.

 

I don't see what is so hard to follow about the commutativity of multiplication.

 

Perhaps I can demonstrate what I'm saying with addition, which is a simpler concept to understand than multiplication...

 

 

Consider the following sum:

 

[math] \sum_{n=1}^{n=3} n = 1+2+3 [/math]

 

Consider the following sum:

 

[math] \sum_{n=3}^{n=1} n = 3+2+1 [/math]

 

The first is equal to 3+3=6, and the second is equal to 5+1, which is also equal to 6; hence the two sums are equivalent. Yes, we are permitted to draw the conclusion that they are equivalent, since 6=6.

 

Through good old trial and error you can convince yourself that:

 

[math] \sum_{n=a}^{n=b} f(n) = \sum_{n=b}^{n=a} f(n) [/math]

 

The above can be interpreted as an instance of commutativity of addition, in summation notation.

 

I have to ask another question: Why do you (Johnny5) have such severe problems with conventions? When people in some cases use the convention 0^0 = 1, this should not be a problem at all. Firstly, 0^0 is not defined, so the convention would not work against anything, and secondly, as Matt Grime has already written, people only use the convention when they say they use it, and not without saying. It seems that for you, it is the not-defined part that is the problem. But have you then asked yourself what the expression x^y means?

 

It's not that I have a problem with conventions; there is something else going on with 0^0 that is not a "conventional" issue.

 

Look at it this way...

 

When x isn't equal to zero, there is a simple proof that x^0=1.

 

And, when x=0, there is a simple proof that x*y=0, for any y.

 

So there's a sort of blind alley with 0^0.

 

0=1 thing.

 

As I say, it's not that I have a problem with conventions, but few things in mathematics are conventions. Most of the structure of mathematics is purely logical, and this is why the subject attracts good minds IMHO.

 

I'm not sure yet what the best way to handle 0^0 and 0! is. For now I do use the conventions, but still, in the back of my mind, something isn't right.

 

Regards

 

 

PS: And lastly, you can use whatever conventions you wish to, and vice versa; as long as we state what they are, and that we are using them in such and such an instance, no confusion can result. But keep in mind that not all mathematical issues can be arbitrarily decided; when an issue isn't up to a random choice, or convention, logic must be used to make the decision... not human whim.


There is sort of a blind alley with 0^0, a sort of not-defined thing. I must ask again: What does the expression x^y mean?

 

Let us define the function f(x,y) = x^y at every position (x,y) where x^y is normally defined, with a possible exception for (0,0) (that is, we don't bother about whether 0^0 is defined or not). Then it is natural to say that for specific numbers a and b, a^b is only defined if the limit of f(x,y), as x and y get close to a and b respectively, exists. What do we then find for the case of a = 0, b = 0? That the limit does not exist, that is, 0^0 is not defined.

 

You say mathematics does not have many conventions? Are you sure about that? You know, we cannot just use pure logic without having anything to start with, so I would rather say mathematics is full of conventions, but, and this is important, neither more nor fewer conventions than logic can survive. For instance, is the definition of a group, or the name "group", based purely on logic? No, it was arrived at by the human mind, but in mathematics it is now stated completely by means of logic. "Human whim" indeed has much to say in mathematics; we are the ones exploring mathematics (or possibly creating it), and our way of thinking settles the way of exploring.

 

A little tip: Start by looking at the definitions (of, for instance, n!) before you go into something; then you would possibly avoid some of the worst flaws (for instance, that 1 = 1/0 if 1 = 0!; n! = 1 * 2 * 3 * ... * n only works for natural numbers n, not for every single integer).


It is also nothing more than a convention to declare that n! = n(n-1)...1 for n a positive integer, Johnny, and you have no problem with that.

 

No one is saying that multiplication of real numbers is not commutative, merely that you are using the sum and product symbols in an odd manner. All the problems and attempted fixes you created in writing out Newton's expansion are of your own making and are easily avoided.


To both of you...

 

I just read the last two posts (each once), and was quite impressed. Let me see if I have any worthwhile comments to make.

 

There is sort of a blind alley with 0^0, a sort of not-defined thing. I must ask again: What does the expression x^y mean?

 

You ask me, what does x^y mean?

 

In the case where y is a natural number, we have the following definition:

 

Definition:

[math] x^y = \prod_{k=1}^{k=y} x = x_1 \cdot x_2 \cdot x_3 ... x_y [/math]

 

So for example, if y = 3, we have, using the definition above:

 

[math] x^3 = \prod_{k=1}^{k=3} x = x \cdot x \cdot x [/math]

 

In the case where y is a negative integer, and not(x=0), we have the following definition:

 

Definition:

[math] x^y = \prod_{k=1}^{k=-y} \frac{1}{x} = \frac{1}{x_1} \cdot \frac{1}{x_2} \cdot \frac{1}{x_3} ... \frac{1}{x_{-y}} [/math]

 

Where [math] x_i =x [/math] for any i.

 

So, for example if y is -4, we have:

 

[math] x^{-4} = \prod_{k=1}^{k=4} \frac{1}{x} = \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} = \frac{1}{x^4} [/math]

 

 

 

Now, we practically have the meaning of x^y for all integers y, since we have handled two out of three mutually exclusive and collectively exhaustive cases (x arbitrary in the case where y is a natural number, and x nonzero in the case where y is a negative integer). The only integer left to define is y=0. Once this has been done, you will have the meaning of x^y for all integers y and any real number x, with the one exception of the case (y a negative integer and x=0).
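
A compact sketch of the case-by-case definition given so far (Python; the function name and error choices are mine). The y = 0 case is deliberately left out, since it is the case still to be settled below:

[code]
def int_power(x, y):
    if y > 0:                        # y a positive integer: x * x * ... * x
        result = 1
        for _ in range(y):
            result *= x
        return result
    if y < 0:                        # y a negative integer: (1/x) * ... * (1/x)
        if x == 0:
            raise ZeroDivisionError("0 to a negative power is undefined")
        result = 1
        for _ in range(-y):
            result *= 1 / x
        return result
    raise NotImplementedError("y = 0 is the case discussed next")
[/code]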

 

Case I: y=0 and not(x=0)

 

Using the field axioms, we can prove that x^0 = 1.

 

Assume we have already proven that:

 

[math] x^m \cdot x^n = x^{(m+n)} [/math]

 

Let m=0, so that we have:

 

[math] x^0 \cdot x^n = x^{(0+n)} = x^n [/math]

 

given that not(x^n = 0) we have:

 

[math] x^0 = x^n \cdot \frac{1}{x^n} = 1 [/math]

 

Now, if x^n=0 then x=0, so we have handled the case for x^0 properly, given that not(x=0).

 

Now, we have only the case 0^0 to consider.

 

All I can offer you here, is the following...

 

 

Theorem: if 0!=1 then 0^0 =1.

 

Proof

 

consider e^x

 

[math] e^x = \frac{x^0}{0!} +\frac{x^1}{1!} +\frac{x^2}{2!} + ... [/math]

 

In the case where x=0, every term with n ≥ 1 vanishes (each contains a factor 0^n = 0), so we must have:

 

[math] e^0 = \frac{0^0}{0!} [/math]

 

And it has already been proven that A^0 must equal 1, if not(A=0), hence we must have:

 

[math] 1 = \frac{0^0}{0!} [/math]

 

Therefore if we insist that 0!=1, then it must follow that:

 

[math] 1 = \frac{0^0}{1} = 0^0 [/math]

 

Therefore, if 0!=1 then 0^0=1.

 

QED


Definition:

[math] x^y = \prod_{k=1}^{k=-y} \frac{1}{x} = \frac{1}{x_1} \cdot \frac{1}{x_2} \cdot \frac{1}{x_3} ... \frac{1}{x_{-y}} [/math]

 

 

there should be no suffices on the x's on the rhs

 

Now, we practically have the meaning of x^y for all integers y, since we have handled two out of three mutually exclusive and collectively exhaustive cases, and x was arbitrary.

 

no, that is not true - unless you are claiming 0^{-1} is a real number

 

The only integer left to define is y=0. Once this has been done, you will have the meaning of x^y for all integers y, and any real number x.

 

note the word DEFINE in there. there is no reason to assume that x^0 has to be defined for any x.

 

Case I: y=0 and not(x=0)

 

Using the field axioms, we can prove that x^0 = 1.

 

Assume we have already proven that:

 

[math] x^m \cdot x^n = x^{(m+n)} [/math]

 

but this presumes that x^0 is defined a priori. there is no reason to assume that.

 

 

here, let me give an example that shows assumptions can be stupid.

 

 

The largest integer is 1: suppose L is the largest integer. Then L^2 >= L, but L is the largest, so L^2 = L; thus L = 0 or L = 1, and 1 is bigger than 0. QED.


there should be no suffices on the x's on the rhs

 

 

Why not?

 

no, that is not true - unless you are claiming 0^{-1} is a real number.

 

I went back and fixed that.

 

note the word DEFINE in there. there is no reason to assume that x^0 has to be defined for any x.

 

Yes, I know. But x^y is being defined for as many cases as possible, in stages, because his question was what x^y means.

 

 

but this presumes that x^0 is defined a priori. there is no reason to assume that.

 

No, x^0 was not being defined right there; that was a theorem that x^0 must equal 1 (given that not(x=0)), as a consequence of the fact that x^m times x^n must be equal to x^{m+n}.


johnny, you are, again, missing the point. Who states that x^0 must be defined for any x, or that it must follow that x^nx^m=x^{n+m} if n+m=0? It is clearly true from the definitions if n+m=/=0 (and x=/=0 as necessary).

 

And from that reasoning you can only demonstrate that x^0=1 makes sense, i.e. is consistent, which is what we're trying to do here, for x=/=0.

 

That last choice of "defined" you comment on was a bad one - better to say "exists" or perhaps "can be defined". If it can be defined unambiguously, and if it follows the other rules of indices, then x^0=1 for x=/=0.

And there should be no suffices on the right since there were none on the left in the product. But then you don't understand how the product symbol works.

