
Posts posted by John


Well, the general argument seems reasonable, but a formal proof using the pumping lemma will require some fleshing out. You'll want to assume your language is regular (or context-free, etc., depending on which property you're trying to prove), then use the properties guaranteed by the appropriate pumping lemma to arrive at a contradiction, i.e. to show that the result of applying the pumping lemma to your language is a string not in the language.
Well, again, mathematical logic and metamathematics are probably the areas to check.
Rautenberg's A Concise Introduction to Mathematical Logic is decent reading so far, though I admit I haven't gotten terribly far into it. It seems much nicer than the text my logic class used, which is now out of print anyway.
Something like Kleene's Introduction to Metamathematics may be closer to what you're seeking, but I haven't read it at all.
It's difficult to really recommend very specific things to study. The various subfields of mathematics connect in sometimes surprising ways, and while foundational areas may be somewhat more isolated than others, the fact remains that the fruitful study of mathematics involves exposing oneself to a variety of topics. It sounds like you've already taken some initial steps. Now just strike out on some interesting path and see where it leads.
Mathematical logic is probably what you're looking for if you're trying to dig deep into the foundations of mathematics. It encompasses things like set theory, model theory, and proof theory, on which much of mathematics is built. Depending on how far down the rabbit hole you wish to go, there's also metamathematics to consider, and of course you can then get into the philosophy of mathematics itself.
"Foundations of mathematics" classes vary by school. Mine essentially consisted of propositional logic, methods of proof, set theory, and a bit of group theory. The latter doesn't strike me as something that really belongs in a foundations class, but there it was, anyway.
Okay, maybe I misunderstood the article.
Then what is a "formal" power series?
The formal power series is a generalization of the polynomial.
Formally, a polynomial in a single indeterminate [math]x[/math] can be expressed as the following sum:
[math]\sum_{i=0}^{\infty}a_{i}x^{i}[/math]
where each [math]a_{i}[/math] is a number (or, generally, an element of a ring), and with the restriction that only finitely many of the [math]a_{i}[/math]'s can be nonzero. While we're used to thinking of polynomials in the context of polynomial functions or equations, where we substitute in various values for [math]x[/math] to evaluate or solve a function or equation respectively, in this formal definition [math]x[/math] is simply [math]x[/math], i.e. it doesn't represent anything but itself. So while we might solve the polynomial equation
[math]x+6=3x[/math]
and find that "[math]x=3[/math]," the polynomials [math]x+6[/math] and [math]3x[/math] themselves are simply not equal.
A formal power series in a single indeterminate [math]x[/math] has a similar definition, except that it allows for infinitely many nonzero coefficients. Since [math]x[/math] doesn't really represent anything except itself, questions of convergence and divergence are meaningless.
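The "formal" view is easy to model in code: a polynomial is nothing but its coefficient sequence, and addition and multiplication manipulate coefficients without ever assigning a value to [math]x[/math]. A minimal sketch (the function names are my own):

```python
# A polynomial as a finite list of coefficients [a0, a1, a2, ...];
# the indeterminate x never receives a value.

def poly_add(p, q):
    # Add coefficient-wise, padding the shorter list with zeros.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # Convolution of coefficient sequences: (pq)_k = sum of p_i * q_(k-i).
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# x + 6 is [6, 1] and 3x is [0, 3]: distinct polynomials,
# even though x = 3 solves the equation x + 6 = 3x.
assert [6, 1] != [0, 3]
```

A formal power series is the same idea with an infinite coefficient sequence, which is why convergence never enters the picture.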
While tan(x) increases without bound as x approaches 90 degrees, tan(90) itself is undefined.
Therefore, thinking of cot(90) simply as 1/tan(90) isn't the way to go, since you're essentially trying to divide 1 by an undefined value. The better way to look at it is that since tan(x) = sin(x)/cos(x), then cot(x) = cos(x)/sin(x), which is defined at x = 90. Since cos(90) = 0 and sin(90) = 1, cot(90) = 0/1 = 0.
Edit: To elaborate a bit, when we say [math]\lim_{x\to{a}} f(x)=\infty[/math], we're simply saying that the value of f(x) increases without bound as x approaches a. In calculus, we actually say the limit doesn't exist in this case, but we can extend the real number line to include [math]\pm\infty[/math] and use these symbols as convenient shorthands, keeping in mind that they still aren't real numbers.
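The cos/sin computation is easy to check numerically (a quick sketch; note that floating-point cos(π/2) is only approximately zero, so the result is zero up to rounding):

```python
import math

x = math.radians(90)

# tan(90°) is undefined mathematically; float math returns a huge
# finite value instead because cos(pi/2) isn't exactly 0 in doubles.
# cot defined as cos/sin is perfectly well behaved at 90°:
cot = math.cos(x) / math.sin(x)
print(cot)  # ~6e-17, i.e. 0 up to floating-point rounding
```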
I think my favorite part was when Ken Ham said all scientific dating methods are fallible, but the eyewitness accounts recorded in the Bible are infallible. As I recall, Bill Nye had trouble really responding to Ken that time (but ultimately pulled it together), which is understandable.
The debate went better than I expected. I shared some of the doubts I'd read elsewhere regarding Bill's ability to handle a debate like this, but he did very well after all. Honestly, Ken did better than I expected him to as well, though of course his arguments were still pretty silly.
I didn't realize the YECs (or at least, Ken) had moved so close to actually accepting evolution. Of course, "fewer than a thousand 'kinds' evolved into millions of species within a few thousand years" isn't much better than "all species have always existed in their present forms," but it's something, I suppose.
I also had to cringe a few times when it seemed Bill's jokes fell flat.
It's unlikely that many minds were changed, but hopefully a few seeds were planted in a few minds and will force those few to reexamine their beliefs. I'm satisfied with how the debate went.
Edit: The Q&A had me worried for a bit as it seemed ready to essentially become an "I don't know" vs "It's in the Bible" exchange with not much substance, but fortunately that was ultimately averted for the most part.
You'll want to use a double factorial.
Using that, I have [math]\sum_{k=0}^{\infty} \frac{1}{(2k+1)!!}x^{2k+1}[/math].
Summing from 1 instead of from 0 (if you prefer that for whatever reason) simply requires changing each instance of 2k+1 to 2k-1.
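For anyone wanting to play with the series numerically, here's a sketch (function names are my own; the partial sum converges quickly for moderate x since the double factorial grows fast):

```python
def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 1 or 2.
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def series(x, terms=50):
    # Partial sum of sum_{k>=0} x^(2k+1) / (2k+1)!!
    return sum(x ** (2 * k + 1) / double_factorial(2 * k + 1)
               for k in range(terms))
```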
This proof is fairly straightforward, and therefore it's difficult to really help without just giving the whole thing away. Whatever your intentions, the purpose of this forum is to give or receive hints, not solutions.
Also, this website is giving me a headache.
I'm not sure what the first line means, but every natural number divides zero.
The second statement is true. For assume a divides a + b. Then a + b = na for some integer n. Thus b = na - a = (n - 1)a. Since n is an integer, n - 1 is an integer as well. Let m = n - 1. Then b = ma for some integer m, i.e. a divides b.
Now let a divide b. Then b = na for some integer n. Thus a + b = a + na = (1 + n)a. Since n is an integer, 1 + n is an integer as well. Let m = 1 + n. Then a + b = ma for some integer m, i.e. a divides a + b.
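Both directions of the equivalence can be spot-checked mechanically (a sketch; `divides` is my own helper, and a is kept nonzero to avoid division-by-zero questions):

```python
def divides(a, b):
    # a | b iff b = n*a for some integer n (a nonzero here).
    return b % a == 0

# Spot-check the equivalence a | (a + b) <=> a | b:
for a in range(1, 20):
    for b in range(0, 100):
        assert divides(a, a + b) == divides(a, b)
```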
With which reason?
If it's the main problem you're talking about...
[math]\left[a=0\oplus b=0\right]\Rightarrow \frac{(a+b)!}{a!+b!}\in\mathbb{N}[/math] isn't necessarily true:
[math]\frac{(5+0)!}{5!+0!}=\frac{120}{121}\notin\mathbb{N}[/math]
Yes, sorry. I should have been clearer in my language. For a = 0 or b = 0, (a + b)!/(a! + b!) is not a natural number. I was actually going to edit the post to reflect that, but figured my meaning would be clear.
Another option is to let either a or b equal 0.
Not sure what John is talking about. Python is no more difficult to learn than PHP, Ruby, or JavaScript, and it is easier than Java unless your background is strictly in C++.
My point was that Python is a fairly easy language to get into. Java isn't bad either, but it may be a bit more complicated initially.
I personally started with C++, so perhaps my perspective is skewed. However, I stand by my second paragraph. Most languages aren't that difficult to learn. Programming concepts are the bigger hurdle.
I don't know if anyone is still worried about this problem, but I was thinking about it today, and I've found what I believe is an accurate formula that may be easier to work with than the one I gave before. Here it is:
[math]f(m,n) = {n \choose n}n^{m} - \left[{n \choose {n-1}}(n-1)^{m} - \left[{n \choose {n-2}}(n-2)^{m} - \ldots - \left[{n \choose 2}2^{m} - {n \choose 1}1^{m}\right]\right]\right][/math]
This can be written as the following sum:
[math]f(m,n) = \sum_{i=0}^{n-1}\left[(-1)^{i}{n \choose {n-i}}(n-i)^{m}\right][/math]
The expanded formula can be simplified a bit if we realize that [math]{n \choose n}n^{m} = n^{m}[/math] and [math]{n \choose 1}1^{m} = n[/math], but I didn't make those simplifications, in hopes that the pattern is clearer than it would have been otherwise.
Example: Consider again the case of m = 4, n = 1 to 4. Then we have
[math]\begin{array}{rcl}f(4,1) & = & \sum_{i=0}^{0}\left[(-1)^{i}{1 \choose {1-i}}(1-i)^4\right] \\ & = & (1){1 \choose 1}(1)^{4} \\ & = & 1\end{array}[/math]
[math]\begin{array}{rcl}f(4,2) & = & \sum_{i=0}^{1}\left[(-1)^{i}{2 \choose {2-i}}(2-i)^4\right] \\ & = & (1){2 \choose 2}(2)^{4} + (-1){2 \choose 1}(1)^{4} \\ & = & 16 - 2 \\ & = & 14\end{array}[/math]
[math]\begin{array}{rcl}f(4,3) & = & \sum_{i=0}^{2}\left[(-1)^{i}{3 \choose {3-i}}(3-i)^4\right] \\ & = & (1){3 \choose 3}(3)^{4} + (-1){3 \choose 2}(2)^{4} + (1){3 \choose 1}(1)^{4} \\ & = & 81 - 48 + 3 \\ & = & 36\end{array}[/math]
[math]\begin{array}{rcl}f(4,4) & = & \sum_{i=0}^{3}\left[(-1)^{i}{4 \choose {4-i}}(4-i)^4\right] \\ & = & (1){4 \choose 4}(4)^{4} + (-1){4 \choose 3}(3)^{4} + (1){4 \choose 2}(2)^{4} + (-1){4 \choose 1}(1)^{4} \\ & = & 256 - 324 + 96 - 4 \\ & = & 24\end{array}[/math]
as we did using the other formula.
Reasoning: I was thinking again about the general method of taking all possible m-length numbers given n distinct digits and subtracting the ones that don't meet our criteria. It occurred to me that one method might be to simply subtract off the numbers that only contain (n-1) of the available digits. We'd have to subtract this for each possible combination of (n-1) digits, i.e. we'd need n^{m} - C(n, n-1)(n-1)^{m}.
But of course, this subtracts off too much, since some numbers will be counted multiple times (for instance, given m = 5 and n = 4, the digit combinations 123 and 124 will both include numbers like 12212). What must be done, then, is to exclude numbers containing only (n-2), (n-3), ..., 1 of the available digits, saving them to be subtracted later. This is similar to the reasoning used in the formula I gave a few posts back. Thus we have n^{m} - [C(n, n-1)(n-1)^{m} - [C(n, n-2)(n-2)^{m} - ... - [C(n, 2)(2)^{m} - C(n,1)]]].
I don't know if this explanation is extremely clear, but hopefully it conveys the idea to some extent.
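The closed-form sum is easy to sanity-check in code (a quick sketch; the function name f just mirrors the notation above):

```python
from math import comb

def f(m, n):
    # f(m,n) = sum_{i=0}^{n-1} (-1)^i * C(n, n-i) * (n-i)^m
    return sum((-1) ** i * comb(n, n - i) * (n - i) ** m
               for i in range(n))

# Matches the worked examples for m = 4:
print([f(4, n) for n in range(1, 5)])  # [1, 14, 36, 24]
```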
It may help if I explain my reasoning.
If we have n digits, and we want numbers of length m, then the number of possible numbers is n^{m}. However, this will include all the numbers that don't contain at least one of each digit, so we'll need to subtract those, leaving us with only the numbers that do contain at least one of each digit.
Of course, if n = 1, then it's pretty obvious that there is only one possible number.
But now consider the case where n = 2. Then we have 2^{m} possible numbers, but we must subtract the numbers that don't contain both digits, i.e. the numbers that contain only one digit. Since we have two digits, we can split this, essentially, into two cases of n = 1, and we've already shown that for n = 1, the number of possibilities is 1. Therefore, we have 2^{m} - 2(1) numbers that contain at least one of each digit.
Moving on to n = 3, we see that we have 3^{m} total possible numbers, but again we must subtract the numbers that don't contain all three digits. In this case, what needs to be subtracted are the numbers containing only one of the digits and the numbers containing only two of the digits. The first is fairly easy, as using the previous reasoning, we can see that this amounts to three cases of n = 1, so we first subtract 3(1), giving us 3^{m} - 3. To determine what else we must subtract, we must realize there are [math]{3\choose2} = 3[/math] ways to choose 2 of the three digits, so we also have three cases of n = 2 to subtract. From our discussion of n = 2, we see that this amounts to 3(2^{m} - 2), so in the end we have 3^{m} - 3(1) - 3(2^{m} - 2) total numbers that contain at least one of each digit.
For n = 4, we follow the same reasoning. Now we have our 4^{m} total possibilities, minus the 4 cases of n = 1, minus [math]{4\choose2} = 6[/math] cases of n = 2. We must finally subtract the cases of n = 3, which amount to [math]{4\choose3} = 4[/math]. This leaves us with the final formula 4^{m} - 4(1) - 6(2^{m} - 2(1)) - 4(3^{m} - 3(1) - 3(2^{m} - 2(1))).
Clearly, as n gets larger, the final formula becomes pretty unwieldy, which is why I defined it recursively in my previous post. What you may also notice is we're ending up with binomial coefficients. So for n = 2, we subtract 2 times some stuff. For n = 3, we subtract 3 times some stuff and 3 times some other stuff. For n = 4, we subtract 4 times some stuff, then 6 times some other stuff, then 4 times still other stuff. And so on.
If we allow multiple counting, I think the formula is [math]{{m}\choose{n}}{n!} \times n^{m-n}[/math].
My reasoning is as follows: In order to ensure that we have at least one of each digit, we must reserve [math]n[/math] spaces to each hold a different digit. Given an [math]m[/math]-digit number, there are [math]m \choose n[/math] ways to select these spaces, and for each of these, the number of possible orderings of our [math]n[/math] digits is [math]n![/math], thus the [math]n! {{m}\choose{n}}[/math].
For each of these, the remaining [math]m-n[/math] spaces can be occupied by any of our [math]n[/math] digits, giving us [math]n^{m-n}[/math] possibilities.
This would provide a solution to the original problem, except that certain possibilities are counted multiple times. For example, given [math]m = 4[/math] and [math]n = 3[/math], the possible spacings 123_ and 12_3 will both include 1233.
So for the case of [math]m = 4[/math] and [math]n = 3[/math], we'd have [math]{{4}\choose{3}}(3!)(3^{1}) = 4(6)(3) = 72[/math]. In this example, the actual solution is 36, so the formula doubles the desired result. But for example, if [math]m = 6[/math] and [math]n = 2[/math], then the formula yields 480 while we know the answer is 62. Finding a modification to subtract the excess for general [math]m[/math] and [math]n[/math] is proving trickier for me than it seems it should be. I'm probably missing something simple.
Note that I'm not absolutely confident in this, but I think my reasoning is sound.
Edit (because the forum software won't let me put this into a new post):
I may be on to something, but it's a tad complicated. If we let [math]f(m,n)[/math] be the function we're looking for, then I think
[math]f(m,n) = \begin{cases}1 & n = 1 \\ n^{m} - \sum_{i=1}^{n-1} \left({{n}\choose{i}} f(m,i)\right) & n>1\end{cases}[/math]
For instance, consider m = 4. Now, as mentioned previously, m cannot be less than n. Using the definition I just gave then, we have
[math]\begin{array}{rcl}f(4,1) & = & 1\end{array}[/math]
[math]\begin{array}{rcl}f(4,2) & = & 2^{4} - \sum_{i=1}^{1} \left({{2}\choose{i}} f(4,i)\right) \\ & = & 2^{4} - {{2}\choose{1}}f(4,1) \\ & = & 2^{4} - 2(1) \\ & = & 14\end{array}[/math]
[math]\begin{array}{rcl}f(4,3) & = & 3^{4} - \sum_{i=1}^{2} \left({{3}\choose{i}} f(4,i)\right) \\ & = & 3^{4} - \left[{{3}\choose{1}}f(4,1) + {{3}\choose{2}}f(4,2)\right] \\ & = & 3^{4} - 3(1) - 3(14) \\ & = & 36\end{array}[/math]
[math]\begin{array}{rcl}f(4,4) & = & 4^{4} - \sum_{i=1}^{3} \left({{4}\choose{i}} f(4,i)\right) \\ & = & 4^{4} - \left[{{4}\choose{1}}f(4,1) + {{4}\choose{2}}f(4,2) + {{4}\choose{3}}f(4,3)\right] \\ & = & 4^{4} - 4(1) - 6(14) - 4(36) \\ & = & 24\end{array}[/math]
each of which matches my manual counts.
If you want, test this for other m's and n's. I've tried a few and it seems to work out, but I've by no means proved that this formula is correct.
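In that spirit, the recursive definition can be cross-checked against a direct brute-force count for small m and n (a sketch; `brute` is my own helper name):

```python
from itertools import product
from math import comb

def f(m, n):
    # Recursive definition: f(m,1) = 1;
    # f(m,n) = n^m - sum_{i=1}^{n-1} C(n,i) * f(m,i) for n > 1.
    if n == 1:
        return 1
    return n ** m - sum(comb(n, i) * f(m, i) for i in range(1, n))

def brute(m, n):
    # Directly count length-m strings over n digits that
    # use every one of the n digits at least once.
    digits = range(n)
    return sum(1 for s in product(digits, repeat=m)
               if set(s) == set(digits))

# The two counts agree for all small cases tried here:
for m in range(1, 7):
    for n in range(1, m + 1):
        assert f(m, n) == brute(m, n)
```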
I think the last entry is incorrect, unless I'm completely misunderstanding the problem. With two digits, there are only 2^m possible numbers period, and 2^6 = 64 < 78. Given the pattern and some reasoning, I think the entry here should be 62, i.e. 2^6 - 2, which is the total possible variants minus the ones consisting of only one digit.
OK - I always start simple. I enumerated various m for n=2
n m variants
2 2 2
2 3 6
2 4 14
2 5 30
2 6 78
This may help with finding a general formula, but I'm at work and can't really look into it right now.
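A brute-force enumeration confirms the table, with the corrected m = 6 entry (a quick sketch; `variants` is my own name for the count):

```python
from itertools import product

def variants(m, n=2):
    # Count length-m strings over n digits that contain
    # each of the n digits at least once.
    return sum(1 for s in product(range(n), repeat=m)
               if set(s) == set(range(n)))

for m in range(2, 7):
    print(m, variants(m))
# Prints 2, 6, 14, 30, 62 -- i.e. 2^m - 2, so the m = 6 entry should be 62.
```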
Keep in mind you'll probably want to know the basics of linear algebra in addition to calculus. Some (most?) of the topics covered in introductory physics involve vector quantities, and some familiarity with manipulating vectors and matrices is handy. This will also be required for learning vector calculus down the line.
Of course, the resources I've seen for learning vector calculus and introductory physics briefly cover the linear algebra topics required anyway, so perhaps a dedicated course isn't essential (though it will be if you intend to seriously get into video game programming).
Khan Academy has video series covering elementary algebra, trigonometry and precalculus, and following those will probably be adequate preparation for learning calculus.
As for the tutorial currently being posted in the Mathematics forum:
1. It covers integral calculus in such a way that some familiarity with differential calculus is assumed. There is a tutorial on differential calculus here, and of course links to other calculus learning resources have been provided in the thread containing Endercreeper's tutorial.
2. While it is looking to be a handy reference, it is a work in progress and will probably be cleaned up and fleshed out a bit if it's to become a super official SFN tutorial. Especially given your lack of experience with calculus on top of this, don't fret too much over the fact that you're having trouble following it.
I don't see it. Also, another slight edit: In the antiderivatives section, you refer to the antiderivative as an "improper" integral. This should be "indefinite" integral. I'm assuming you'll discuss improper integrals later.
I'm assuming, given the way you've worded this, that any two or three people who were together on one day can still be together on the next, just not all four. For instance, if John, Jacob, Jingleheimer, and Schmidt were a group the first day, then the second day we could still have something like John, Jacob, Jingleheimer, and Gertrude. In that case, my first guess would be [math]\frac{{204\choose4}}{51}[/math].
My reasoning is as follows: On the first day, we have [math]204\choose4[/math] ways to choose four people given that order doesn't matter. We select 51 such groups to man our 51 machines. The second day, then, we have the original [math]204\choose4[/math] possible groups, minus the 51 we chose the day before. On the third day, we lose the 51 groups we had on day two. And so on. Thus, the number of days we can do this is the number of times we can subtract 51 from [math]204\choose4[/math], i.e. [math]\frac{{204\choose4}}{51}[/math].
If no two people can share a group twice, then the options are, of course, much more limited. On the first day, we still have [math]204\choose4[/math] possible groups. On the second day, we lose not just the 51 groups from the first day, but all the groups containing any two people who were grouped together the first day. This is a bit more complicated to work out after the second day, unless I'm missing something simple.
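Incidentally, the arithmetic in the first guess can be checked with Python's math.comb; under the looser interpretation, 51 happens to divide [math]204\choose4[/math] evenly (this only checks the numbers, not the counting argument itself):

```python
from math import comb

total_groups = comb(204, 4)    # ways to choose one group of 4 from 204 people
print(total_groups)            # 70058751
print(total_groups % 51)       # 0 -- so the division comes out even
print(total_groups // 51)      # 1373701 days under the looser interpretation
```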
Neither Python nor Java is particularly difficult to learn and understand. Both are commonly used as beginner languages in computer science curricula. Python in particular is very clear. Java is perhaps slightly more complex at the beginning, but it has the advantages of being fairly widely used and sharing a lot of syntax with handy languages like C/C++ and Perl.
In any case, the main challenge in learning to program is understanding programming concepts themselves. Outside of a few deliberately designed esoteric languages, learning syntax is a small part of the process and not generally difficult.
BASIC strikes me as an odd recommendation in 2013, heh. I suppose Visual Basic is handy if you're into that sort of thing, and TI-BASIC might be fun if you happen to have a TI graphing calculator and want to write new programs for it, but generally speaking, there are much more useful and still simple-to-learn languages available these days.
One good resource is Paul's Online Math Notes. The last couple of sections of Calculus I introduce integration, substitution (one of several techniques of integration), and applications of the integral. Calculus II then continues with more integration techniques and applications before getting into other calculus topics. There are also notes for Calculus III (which is multivariable calculus, though some of that material is introduced at the end of Calculus II) and Differential Equations. PDFs of the notes are available if you prefer those over the website, and Calculus I and II (but not III or DE) also come with practice problems and solutions.
There's also William Chen's lecture notes, which cover quite a few different mathematical subjects, including calculus. The notes include exercises, but I don't think worked solutions are available.
As for tutorials here, there has been some talk (though not really recently, that I know of) of adding more tutorials, but with the huge number of resources already available online for pretty much any math topic, adding more tutorials here probably isn't a priority.
The Relation of Mathematics and Physics
I thought this might be interesting to those who have participated in these types of discussions. It's the second Messenger Lecture delivered to students at Cornell by Richard Feynman in 1964, now hosted by Microsoft's Project Tuva. For those who don't have Microsoft's Silverlight plugin installed and aren't interested in installing it, I'm sure copies of the video can be found on YouTube and various other websites.
Although the entirety of the lecture (and indeed, the other lectures, and pretty much anything else by Feynman) is worth watching, the main portion of the lecture that applies to this thread starts just after the 48 minute mark. Of course, he makes reference to things discussed earlier in the lecture, but the main point comes across regardless.
Einstein was mathematically gifted from a young age. He struggled a bit with differential geometry, which he learned as part of his efforts in developing general relativity, and he consulted with others as part of this learning.
Most (all?) of physics that can be intuitively discovered has been discovered already. Modern theory relies heavily on mathematics, and an idea presented without mathematical backing is unlikely to be of real substance.
I'm not sure the phrase "something that existed before math existed" necessarily makes sense. While we do choose the axioms and rules of inference on which to build mathematical systems, these are typically justified (at least in the case of mathematics used to analyze physical systems) by how reality seems to work. In some sense, then, the conclusions drawn in mathematics are properties of reality itself, i.e. they have been the case for at least the entire history of the universe.
Almost again, heh. Remember that [math](F \vee G) \implies (I \wedge J)[/math] is true by assumption. You're on the right track, though.
proof by pumping lemma
in Homework Help
Posted · Edited by John
It's not really a matter of doing something wrong. It's more that some steps have been left out.
For instance, you mention pumping once, but the pumping lemma simply asserts that there exists some integer p ≥ 1 such that every string w in L with length at least p satisfies the various properties. So it's better to start by assuming that there does exist such a p, then continue on to show that, given the other properties guaranteed by the lemma, the result is a string not in L.
Wikipedia actually offers a decent example which is common enough that you may have seen it before.
Edit: Rereading your posts, it may be that you have a fully fleshed out proof written and simply haven't shared it all with us. If you'd like to post the entirety of what you have, it'd be appreciated.