Everything posted by Johnny5

  1. And while I'm at it, is the axiom of choice true or false, Matt? Or is it one of those kinds of statements whose truth value can vary? Regards
  2. Not for nothing, but this made no sense. Definitions of the type found in mathematics involve 'if and only if'. Let me think of another example. WRONG: Definition: A is a set if either there is at least one X such that X is an element of A, or A is equal to the empty set. RIGHT: Definition: A is a set if and only if (either there is at least one X such that X is an element of A, or A is equal to the empty set). Now, perhaps the definitions above lead to a contradiction which is beyond Patrick Suppes' ability to detect, but that is another issue. My point has been made, and I'm of course right on this logical point.
  3. It's just that I couldn't remember the difference between 'injective' and 'surjective'. I know what a one-to-one mapping is, of course, because when you say it like that, the meaning is clear. But anyway, I went away and thought about it for a while, and will go so far as to say that the study of relations and functions is part of set theory, and so an individual who wants to use set theory to communicate should go to the trouble to learn which term goes with which. It's just that for some reason I seem to have a hard time remembering which is which. I am not one who memorizes by rote. Something is lacking in the way set theory is laid out, and with an axiomatic set theory available, I doubt I would have any trouble remembering what an injective function is, or a surjective function is, or what a function is for that matter. But, as I told you once long ago, I tried to learn axiomatic set theory from a fellow by the name of Patrick Suppes, and my conclusion was that "axiomatic set theory" is a bit off. I do not regard there as being a consistent and complete axiomatic set theory available to man.
  4. Coquina brought up a very good point about the hip bone, which connects to the very reason I got involved in this thread in the first place... because almost twenty years ago, I figured out what killed the dinosaurs. I have sort of been dancing around the answer, to see what responses I'd get, but it seems this thread has died, so I will finally reveal my hidden motive for discussing the 'hip bone' issue and the 'hopping' dinosaur issue. The theory came to me as I was reading an encyclopedia article on dinosaurs. It was an old encyclopedia, and there was an artist's rendition of a bunch of dinosaurs. The sketch was of a beach line, and showed some dinosaurs in the ocean. Off in the distance, there was a brontosaurus with an extremely long neck. In the air, there was a pterodactyl flying. Now, I looked at this picture for a very long time, because something just didn't seem right, though I wasn't sure what. Then it hit me all at once. "How could something that big hold its neck up against the pull of gravity? How could something that heavy fly? Why were dinosaurs so big compared to the sizes of animals today?" Now at that time, I'd already learned the Newtonian theory of gravity, and I reasoned as follows: [math] F_{gravity} = G \frac{M_{earth} m_{dino}}{R^2} [/math] Dinosaur weight = w = mg, so m = w/g, therefore [math] F_{gravity} = \frac{G}{g} \frac{M_{earth} w_{dino}}{R^2} [/math] Postulate: In the prehistoric past, before the moment of dinosaur extinction, the force of gravity of earth was less. Let w denote the weight of the dinosaur before earth's gravity changed, and let W denote the weight of the same dinosaur moments after earth's gravity suddenly and dramatically changed. Right after something drastic happened, pterodactyls were too heavy to fly, and were pinned to the earth. The brontosaurus could no longer hold up his neck. Alligators which used to be able to run were now squashed. Snakes which maybe used to have tiny legs learned to slither on their bellies.
Turtles now took to water to ease the burden of the increase in gravity on their ability to move. Etc.; you hopefully get the idea here. So then I thought, how much would be enough to devastate them? Doubling their weight seemed good, but possibly hard to achieve using the equation above. The idea ended up being this... Before the dinosaurs died, earth had a second moon, a prehistoric moon. Hence the prehistoric moon theory of dinosaur extinction. The moon spiraled in, blanketed the KT boundary with iridium as it spiraled around faster and faster. I figured that a 1.5 increase in their weight seemed appropriate, and doable within the framework of Newton's gravity formula. Well, so anyway, that's why I chimed in when I saw Coquina's comment about the hip bones. It made me think of my theory. Also, one more thing. Before you go trying to down a theory that can't possibly be downed, let me send you one additional thing to think about, which is this... http://www.edwardtbabinski.us/mpm/struthers.html And here is the relevant quote: The inescapable conclusion is this... Whales used to be able to walk on land. Which brings me to the very first post of this thread: Nope. Regards
  5. It wasn't my personal choice, it's in the formula for the quadratic as I explained to you. As for the product symbol, I use it in a manner consistent with the field axioms, nothing more, nothing less. And you aren't going to find too many non-professional mathematicians who will be able to explain the meanings of injection, bijection, and surjection to you, and furthermore those definitions will boil down to a simple-minded usage of binary logic anyway; the point being that the definitions aren't necessary to be known, since one can communicate using just binary logic, some set theory, and some first order logic. Example given... And now I offer the following correction to this wikipedia entry. Let f denote a function from set A to set B. f : A → B The function f is injective if and only if, for every x and y in its domain A, if not(x = y) then also not(f(x) = f(y)). When you define something, you also get the converse for free. It's not something that needs to be deduced. I guess my only point here, and it's rather minor, is that even if a person doesn't know the meaning of the term 'injective,' you can still discuss anything you want about an injective function with that person, by using the definition. It may be more verbose, but you can still say anything you want, and furthermore you can tell them at that moment in time that we call this kind of function injective. In my case, I remember the terms, but just not which term goes with which kind of function. And the reason for that was, I didn't like the material for some reason. I didn't like the way it was presented in any book which I had, and after introducing the definition, the author never bothered to use it again. A seemingly pointless definition.
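As an aside not in the original post, the definition quoted above can be checked mechanically on a finite domain: f is injective iff no two distinct inputs share an image. A minimal Python sketch (the helper name `is_injective` is my own):

```python
# Check injectivity on a finite domain, per the definition quoted above:
# if not(x = y) then not(f(x) = f(y)), i.e. no two inputs share an image.

def is_injective(f, domain):
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False     # two distinct inputs hit the same output
        seen[y] = x
    return True

square = lambda x: x * x
print(is_injective(square, range(0, 5)))    # True: squares of 0..4 are distinct
print(is_injective(square, range(-2, 3)))   # False: (-2)**2 == 2**2
```

Restricting the domain changes the verdict, which is exactly why the definition quantifies over the domain A.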
  6. In what sense would you say I am using it? Also keep in mind, the terms injective, bijective, and surjective never really took hold with me. I memorized the definitions, knew them for the test, and then dumped them. I know they are 'adjectives' which operate on 'functions'... a classification of different types of functions and all, but I never committed the definitions to memory. I followed you about the 'principal' branch, though. Not really an official definition, but better than nothing. I could swear that comes from complex variables, though.
  7. I don't remember the definition of 'principal branch', so if it's not too hard can you say what it is quickly? And which branch of mathematics does it show up in? I googled it, and didn't find a nice answer.
  8. Suppose that x*x = 4. Then either x=2 or x=-2. There are two roots to the equation. But if I see the following: [math] \sqrt {4} [/math] that single number will be 2. I don't look at that symbolism as representing two numbers. That's why in the quadratic formula we write: [math] \pm \sqrt{B^2 - 4AC} [/math] So if faced with the following: Solve for x [math] x^2 = 4 [/math] the answer is expressed as [math] \pm \sqrt{4} = \pm 2 [/math] So I still don't follow you about branches. I thought branches show up in complex variables, because a rotation through 2pi radians results in the same complex number.
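The point that the radical denotes a single number, with the ± supplied separately, can be made concrete. A small Python sketch (the helper name is my own invention):

```python
import math

# The radical sign denotes the principal (non-negative) square root,
# so the two solutions of x**2 = c are written +sqrt(c) and -sqrt(c).

def solve_x_squared_equals(c):
    """Both real solutions of x**2 = c, for c >= 0 (hypothetical helper)."""
    r = math.sqrt(c)      # a single number, never negative
    return (r, -r)

print(math.sqrt(4))                # 2.0 -- not "plus or minus 2"
print(solve_x_squared_equals(4))   # (2.0, -2.0)
```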
  9. Yes, well, branches as I recall show up when you use complex variables, right? Pick a complex number Z at random. Then there are real numbers x, y such that: [math] Z = x + iy [/math] Where i is the square root of negative one. (which has its own problems) [math] Z = R e^{i \theta} = R[\cos \theta + i \sin \theta] [/math] Where [math] x^2 + y^2 = R^2 [/math] [math] \tan \theta = \frac{y}{x} [/math]
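The polar decomposition above is easy to try numerically; this sketch (my own illustration, not from the post) uses the standard library's `cmath.polar`, which returns θ as the principal argument, restricted to (-π, π] — one concrete example of picking a branch:

```python
import cmath
import math

# Polar decomposition Z = R e^{i theta}, as in the post.
# cmath.polar returns (R, theta) with theta the principal argument.

Z = 3 + 4j
R, theta = cmath.polar(Z)

print(R)                              # 5.0, since 3^2 + 4^2 = 5^2
print(abs(theta - math.atan2(4, 3)))  # ~0: tan(theta) = y/x, correct quadrant
print(cmath.rect(R, theta))           # back to (approximately) 3+4j
```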
  10. No, it's not false, you are not interpreting it properly. However, I do understand your point about notation, when you say, "what looks like y new variables." More frequently than not, the subscript under the same letter indicates a new variable... hmm. Yes, that notation is of my own choosing, so I suppose I have to state that the x_i are all equal, but I did that somewhere, after you commented on it earlier in the thread. But again, it's not false based upon the meaning. Do you have any better suggestion as regards notation for that? You do understand that the LHS and RHS are supposed to be equal at this point. Probably this would do just fine: [math] x^y \equiv \prod_{k=1}^{y} x = x_1 x_2 \ldots x_y [/math] where [math] x = x_1 = x_2 = \ldots = x_y [/math]
  11. To be honest, x^y isn't hard to understand. Suppose that [math] A = x^{\frac{1}{3}} [/math] Now cube both sides to obtain: [math] A^3 = x [/math] So pick any real number A at random. Multiply A by itself y times. Suppose that y = 7; then we have: AAAAAAA = A^7 Set that number equal to X: X = A^7 Now take the seventh root of both sides, to obtain the number that you started with... [math] A = X^{\frac{1}{7}} [/math] So that gives you more of an idea about exponents. To really say that you have expressed the meaning of X^y is going to require a lot of work, but it can be done; it's not impossible. You just have to handle things one case at a time. Consider something like [math] A^{\sqrt{2}} [/math] Even this is allowable, and has a unique answer. You can approximate the answer by approximating root 2 as 1.414, and you can keep making a better approximation by using more decimals.
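The last claim — more decimals of √2 give a better approximation of A^√2 — can be demonstrated numerically. A small sketch (my own, with A = 3 chosen arbitrarily):

```python
# Approximating A ** sqrt(2) by rational exponents, as suggested above
# with 1.414: each extra decimal gives a better approximation.

A = 3.0
target = A ** (2 ** 0.5)

errors = [abs(A ** e - target) for e in (1.4, 1.41, 1.414, 1.4142)]
print(errors)  # strictly decreasing: the approximations close in on the value

assert errors == sorted(errors, reverse=True)
```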
  12. There's nothing to invent, I already know the basis of the argument. Let me see what the heck this thread was about again, though. Ah yes, this: Prove the following: [math] \Gamma (n+1) = n! [/math] By definition, the following statement is true: [math] \Gamma(n+1) \equiv \int_{t=0}^{t=\infty} t^n e^{-t} dt [/math] So we can focus on the integral. Here is the integration by parts formula: [math] \int u dv = uv- \int v du [/math] Let dv = e^{-t} dt, whence it follows that v = -e^{-t}. Let u = t^n, thus du = n t^{n-1} dt. Substituting we have: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = -t^n e^{-t} |_{t=0}^{t=\infty} + n\int_{t=0}^{t=\infty} e^{-t} t^{n-1} dt [/math] Therefore: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = 0 + n\int_{t=0}^{t=\infty} e^{-t} t^{n-1} dt [/math] Now, replace n by n-1 in the statement above, to obtain: [math] \int_{t=0}^{t=\infty} t^{n-1} e^{-t} dt = (n-1) \int_{t=0}^{t=\infty} e^{-t} t^{n-2} dt [/math] Thus, we have: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1) \int_{t=0}^{t=\infty} e^{-t} t^{n-2} dt [/math] Continuing on in this manner, eventually you will reach the following integral: [math] \int_{t=0}^{t=\infty} e^{-t} t dt [/math] This integral will be reached when n-k=1, hence when n=k+1. When k=1, the coefficient was n. When k=2, the coefficient was n(n-1). When k=3, the coefficient would be n(n-1)(n-2). Thus, when k=(n-1), the coefficient would be n(n-1)(n-2)(n-3)...(n-(n-1-1)), or rather n(n-1)(n-2)...(n-n+1+1), which is n(n-1)(n-2)...(2). Therefore: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)\int_{t=0}^{t=\infty} e^{-t} t dt [/math] Now, evaluate the following integral: [math] \int_{t=0}^{t=\infty} t e^{-t} dt [/math] Using the integration by parts formula, with u=t and dv = e^{-t} dt, we have: [math] \int_{t=0}^{t=\infty} t e^{-t} dt = -te^{-t} |_{t=0}^{t=\infty} + \int_{t=0}^{t=\infty} e^{-t}dt [/math] Whence it follows that: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)\int_{t=0}^{t=\infty} e^{-t} dt [/math] Now, evaluate the following integral: [math] \int_{t=0}^{t=\infty} e^{-t} dt [/math] And this is easy. [math] \int_{t=0}^{t=\infty} e^{-t} dt = -e^{-t} |_{t=0}^{t=\infty} = e^{-t} |_{t=\infty}^{t=0} = 1-0 [/math] Hence: [math] \int_{t=0}^{t=\infty} t^n e^{-t} dt = n(n-1)(n-2)...(2)(1) = n! [/math] The LHS is the gamma function of n+1, therefore: [math] \Gamma (n+1) = n! [/math] QED. Oh wait, that is the proof you cited, which was used at wolfram; I just looked down far enough and saw it. Man, I feel like I did all that for nothing. Regards
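A quick numerical cross-check of the identity proved above (my own addition, not part of the post): `math.gamma` evaluates the same integral, so Γ(n+1) should agree with n! for small n:

```python
import math

# Cross-check Gamma(n+1) = n! for small integers using the stdlib.

for n in range(1, 10):
    lhs = math.gamma(n + 1)
    rhs = math.factorial(n)
    assert abs(lhs - rhs) <= 1e-9 * rhs   # agree to floating-point accuracy
    print(n, lhs, rhs)
```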
  13. How is it false as it stands?
  14. It is not necessary to define anything in mathematics, but so long as definitions are consistent with everything which you hold to be mathematically true, there is no logical error you are making, and then you can 'do some math'. The definitions which I have presented thus far were to answer someone else's question. They wanted to know the meaning of x^y. Now, the field axioms are being used, and this I feel goes without saying. As for the suffixes, they are not necessary at all, but they help you to understand how many times x is being multiplied by itself. In other words, there is nothing wrong with defining using the suffixes, since it is clear that [math] x_1 = x_2 = x_3 = \ldots = x [/math] The suffixes are helpful, though not necessary. But if you omit them, then you are assuming that the reader can infer the number of times x is being multiplied by itself, or 1/x is being multiplied by itself. By using the suffixes, you are lessening the number of deductions to be made by the external reasoning agent.
  15. Why not? Yes, I know. But x^y is being defined for as many cases as are possible, in stages, because his question was what does x^y mean. No, x^0 was not being defined right there; that was a theorem that x^0 must equal 1 (given that not(x=0)), as a consequence of the fact that x^m times x^n must be equal to x^{m+n}.
  16. To both of you... I just read the last two posts (each once), and was quite impressed. Let me see if I have any worthwhile comments to make. You ask me, what does x^y mean? In the case where y is a natural number, we have the following definition: Definition: [math] x^y = \prod_{k=1}^{k=y} x = x_1 \cdot x_2 \cdot x_3 \cdots x_y [/math] So for example, if y = 3, we have, using the definition above: [math] x^3 = \prod_{k=1}^{k=3} x = x \cdot x \cdot x [/math] In the case where y is a negative integer, and not(x=0), we have the following definition: Definition: [math] x^y = \prod_{k=1}^{k=-y} \frac{1}{x} = \frac{1}{x_1} \cdot \frac{1}{x_2} \cdot \frac{1}{x_3} \cdots \frac{1}{x_{-y}} [/math] Where [math] x_i = x [/math] for any i. So, for example, if y is -4, we have: [math] x^{-4} = \prod_{k=1}^{k=4} \frac{1}{x} = \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} = \frac{1}{x^4} [/math] Now, we practically have the meaning of x^y for all integers y, since we have handled two out of three mutually exclusive and collectively exhaustive cases, and x was arbitrary (in the case of y being an element of the natural numbers, and nonzero in the case of y being an element of the negative integers). The only integer left to define is y=0. Once this has been done, you will have the meaning of x^y for all integers y, and any real number x, with the one exception of the case (y a negative integer and x=0). Case I: y=0 and not(x=0). Using the field axioms, we can prove that x^0 = 1. Assume we have already proven that: [math] x^m \cdot x^n = x^{(m+n)} [/math] Let m=0, so that we have: [math] x^0 \cdot x^n = x^{(0+n)} = x^n [/math] Given that not(x^n = 0), we have: [math] x^0 = x^n \cdot \frac{1}{x^n} = 1 [/math] Now, if x^n=0 then x=0, so we have handled the case properly for x^0, given that not(x=0). Now, we have only the case 0^0 to consider. All I can offer you here is the following... Theorem: if 0! = 1 then 0^0 = 1.
Proof: consider e^x. [math] e^x = \frac{x^0}{0!} +\frac{x^1}{1!} +\frac{x^2}{2!} + \ldots [/math] In the case where x=0, every term after the first vanishes, so we must have: [math] e^0 = \frac{0^0}{0!} [/math] And it has already been proven that A^0 must equal 1 if not(A=0), hence we must have: [math] 1 = \frac{0^0}{0!} [/math] Therefore, if we insist that 0! = 1, then it must follow that: [math] 1 = \frac{0^0}{1} = 0^0 [/math] Therefore, if 0! = 1 then 0^0 = 1. QED
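The series argument above can be tried out directly; a sketch of my own (the helper `exp_series` is not from the post), noting that Python itself adopts both conventions used in the proof:

```python
import math

# The power series for e^x evaluated at x = 0 forces the k = 0 term,
# 0^0 / 0!, to equal 1 if e^0 is to come out as 1.

def exp_series(x, terms=20):
    """Partial sum of e^x = sum_k x^k / k! (hypothetical helper)."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

print(0 ** 0)              # 1   -- Python's convention for 0^0
print(math.factorial(0))   # 1   -- and for 0!
print(exp_series(0.0))     # 1.0 -- consistent with e^0 = 1
```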
  17. It's not that I have a problem with conventions; there is something else going on with 0^0 that is not a "conventional" issue. Look at it this way... When x isn't equal to zero, there is a simple proof that x^0=1. And when x=0, there is a simple proof that x*y=0 for any y. So there's a sort of blind alley with 0^0: one line of argument pulls it toward 1, the other toward 0. As I say, it's not that I have a problem with conventions, but few things in mathematics are conventions. Most of the structure of mathematics is purely logical, and this is why the subject attracts good minds IMHO. I'm not sure yet what the best way to handle 0^0 and 0! is. For now I do use the conventions, but still, in the back of my mind, something isn't right. Regards PS: And lastly, you can use whatever conventions you wish to, and vice versa, and as long as we state what they are, and that we are using them in such and such an instance, no confusion can result. But keep in mind that not all mathematical issues can be arbitrarily decided; when an issue isn't up to a random choice, or convention, logic must be used to make the decision... not human whim.
  18. Consider the product from k=1 to k=2 of some arbitrary function of k, f(k). [math] \prod_{k=1}^{k=2} f(k) = f(1)f(2) [/math] In the 'scalar multiplication' being considered here, both f(1) and f(2) are elements of the real number system. They are not matrices, or anything else which doesn't necessarily commute. Hence... f(1)f(2) = f(2)f(1) In the case where the lower index is greater than the upper index we have: [math] \prod_{k=2}^{k=1} f(k) = f(2)f(1) [/math] Since the kind of multiplication being considered here is commutative, we have: [math] \prod_{k=1}^{k=2} f(k) = \prod_{k=2}^{k=1} f(k) [/math] Then an induction argument will prove the general case. A computer could carry out the algorithm as follows: First compare the indices. If the lower index is less than the upper index, the variable k will be incremented by one unit repeatedly, until k is equal to the upper index. On the other hand, if the lower index is greater than the upper index, then the variable k will be decremented by one unit repeatedly, until it equals the upper index. And this is fine for the case where the multiplicand (that which is interior to the product symbol) is a real number. So the equality which I've repeatedly stated is a consequence of the field axioms. To say anything else would be to say, "This axiom is true, and it is false." In other words, I am not making a suggestion, or a convention. I am informing you that what I am saying must be true, if that which is interior to the product symbol is a real number. Regards to all.
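The algorithm described in that post can be sketched directly (my own code, not the poster's): compare the indices, then repeatedly increment or decrement k until it reaches the upper index. For real (commutative) factors, both directions agree:

```python
# Product over k with either orientation of the bounds, as described above.

def product(f, lower, upper):
    step = 1 if lower <= upper else -1   # compare indices, pick direction
    result = 1.0
    k = lower
    while True:
        result *= f(k)
        if k == upper:
            break
        k += step
    return result

f = lambda k: k + 0.5
print(product(f, 1, 2))   # f(1)*f(2) = 1.5 * 2.5 = 3.75
print(product(f, 2, 1))   # f(2)*f(1) = 3.75, the same, by commutativity
```

For matrices or other non-commuting factors the two orientations would in general differ, which is exactly the caveat the post makes.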
  19. Why must it not force that upon us?
  20. I don't hate it, it's just not what I'm thinking of. I'll come up with something, then post it. Basically, what I have in mind uses integration by parts in the proof, but it's been so long since I formally proved it, I was hoping you knew the argument. Regards