Aeternus

Senior Members
  • Posts

    349
  • Joined

  • Last visited

About Aeternus

  • Birthday 01/09/1987

Contact Methods

  • Website URL
    http://aeternus.no-ip.org

Profile Information

  • Location
    South Wales
  • Favorite Area of Science
    Computer Science
  • Biography
    See Homepage
  • Occupation
    Student

Retained

  • Atom

Aeternus's Achievements

Atom

Atom (5/13)

12

Reputation

  1. I assume your alphabet is [math]\{1\}[/math]? In that case, you can just proceed by induction on the length [math]n[/math] of the string, taking a couple of base cases to make it simple. It makes things much easier if you first reduce (111*)* to 111*, since 111* clearly matches every string of length 2 or greater, which is the same set (111*)* matches apart from the empty word. In the case [math]n = 0[/math], the outermost Kleene star in both expressions puts the empty word in both languages. In the case [math]n = 1[/math], neither expression contains a string of length less than 2, so no amount of concatenation or choice can produce a string of length one, and "1" is in neither language. In the cases [math]n = 2[/math] and [math]n = 3[/math], the string is clearly in both languages from the definitions of Kleene star and +. For the induction hypothesis, assume the string [math]S[/math] of length [math]n[/math] and the string [math]S'[/math] of length [math]n+1[/math] are in both languages, with [math]n \ge 3[/math], and consider a string [math]S''[/math] of length [math]n+2[/math]. Clearly [math]S''[/math] is in the language of 111* (as that matches any string of length 2 or greater). If [math]n+2[/math] is odd, then [math]S[/math] of length [math]n[/math] is in the language of (11 + 111)*, and by the definition of the Kleene star [math]S11[/math] is also in the language; the same argument holds if [math]n+2[/math] is even. Basically, from 11 and 111 you can make any odd- or even-length string of 1s of length 2 or greater by simply appending 11 repeatedly (to make all the even or odd sizes).
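The equivalence argued above can be sanity-checked by brute force. A small Python sketch (an assumption on my part: that the two expressions being compared are (11 + 111)* and (111*)* over the alphabet {1}; the names A and B are mine):

```python
import re

# The two regular expressions from the induction argument, written in
# Python regex syntax ((?:...) is just a non-capturing group).
A = re.compile(r"(?:11|111)*")
B = re.compile(r"(?:111*)*")

def in_A(s):
    return A.fullmatch(s) is not None

def in_B(s):
    return B.fullmatch(s) is not None

# Both languages contain the empty word, neither contains "1", and both
# contain every string of 1s of length 2 or greater; check agreement on
# all lengths up to 30.
agree = all(in_A("1" * n) == in_B("1" * n) for n in range(31))
```

This is no substitute for the induction proof, but it catches slips in the base cases quickly.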
  2. Why not look at something such as http://en.wikipedia.org/wiki/Adder_(electronics) ? A nice, simple example of computation being done with logic gates.
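To illustrate the linked idea, here is a rough Python sketch of a one-bit full adder built from AND/OR/XOR operations, chained into a ripple-carry adder (the function names and 8-bit width are my own choices, not from the article):

```python
def full_adder(a, b, carry_in):
    # One-bit full adder from logic gates (^ = XOR, & = AND, | = OR).
    partial = a ^ b                              # XOR gate: partial sum
    total = partial ^ carry_in                   # XOR gate: sum bit
    carry_out = (a & b) | (partial & carry_in)   # two ANDs and an OR
    return total, carry_out

def ripple_add(x, y, width=8):
    # Chain full adders so the carry "ripples" from bit 0 upward,
    # adding two small non-negative integers bit by bit.
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

The same three-gate structure is what a hardware full adder implements; the loop plays the role of wiring the adders in series.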
  3. Note here that [math]n\sharp[/math] is not the same as [math]P_n\sharp[/math], and I imagine you should be considering [math]n\sharp[/math] if you are looking at all the primes up to [math]n[/math], which will suit your purpose much better. However, even with this improvement, the approach still has a space complexity of [math]O(n)[/math] (from the wiki), which is still worse than the Sieve of Atkin. Note that whatever your space complexity, your time complexity must be at least as large, since you need to write to that many memory locations. You still need to worry about time: the time to calculate just the final primorial (i.e. over the course of the entire algorithm) is too large. Even setting aside that your algorithm already has worse time and space complexity (bearing in mind this comes just from calculating that one number; I'm not assuming anything beyond the calculation of one primorial), you still need to calculate the GCD of these huge numbers, which in and of itself has quite a bad time complexity as far as I know; see http://en.wikipedia.org/wiki/Euclidean_algorithm#Running_time . I imagine there are faster algorithms, but from what I gather they are not much faster, and they would have to be at least O(log(n)) (afaik), since you need to read the numbers (assuming we treat [math]n[/math] as the number itself and not the size of the number; this is something to be careful about, as various resources approach it differently, and working in terms of the size of the number throughout would probably be more appropriate). If you are genuinely serious about trying to understand this, then I'd honestly suggest you read up on Big O notation and work through things, rather than hand-waving with "one huge number has to be better than lots of smaller numbers", which just doesn't seem to be true. If I've made some mistakes, please point them out.
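To make the O(n) space point concrete, here is a small stdlib-only Python sketch that estimates the decimal digit count of the primorial n# without ever building the huge number itself (helper names are mine); the digit count grows roughly linearly in n, which is exactly the linear space cost described above:

```python
import math

def primes_up_to(n):
    # Basic Sieve of Eratosthenes, used only to enumerate the primes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def primorial_digits(n):
    # Decimal digits of n# = product of all primes <= n, computed as a
    # sum of log10 terms so the enormous product is never materialised.
    return int(sum(math.log10(p) for p in primes_up_to(n))) + 1
```

Storing n# verbatim therefore costs O(n) digits of memory before any GCD work even begins.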
  4. If you look at http://en.wikipedia.org/wiki/Factorial#Rate_of_growth , you'll see that for larger [math]n[/math], [math]n![/math] has at least [math]n (\log(n) -1)[/math] digits. For a prime with thousands of digits, that means the factorial has not thousands of digits but on the order of [math]n\log(n)[/math] digits, where [math]n[/math] itself has thousands of digits. In other words, you are looking at a space complexity of at least [math]O(n \log(n))[/math] as far as I can see, which is worse than that of the Sieve of Atkin given at http://en.wikipedia.org/wiki/Sieve_of_Atkin#Computational_complexity , i.e. [math]O(n^{\frac{1}{2} + O(1)})[/math] (presumably the constant in the exponent is very small, as it is supposedly better than the Sieve of Eratosthenes; either way, the space complexity can't be worse than the time complexity), although I don't know the constants involved. (See http://en.wikipedia.org/wiki/Big_O_notation for a better idea of O notation. If you make another thread about anything you don't understand, people can explain.) You also have to look at the time it takes to calculate a factorial, remembering that multiplication is not a constant-time operation. If you look at http://en.wikipedia.org/wiki/Factorial#Computation , you'll see that calculating [math]n![/math] has far worse time complexity than the Sieve of Atkin (see http://en.wikipedia.org/wiki/Sieve_of_Atkin#Computational_complexity again). Even against the Sieve of Eratosthenes (one of the best-known and oldest algorithms for finding all the prime numbers up to a number), I believe your algorithm would still be worse. I'm not sure whether you are suggesting calculating factorials or primorials for each prime test and going through them, but again this is far worse than simply using the sieves or going through the list of numbers with individual primality tests.
If your algorithm only tests whether a single number is prime, and does not find all prime numbers less than that number, then there are MUCH faster algorithms for testing individual primality without calculating all primes; see http://en.wikipedia.org/wiki/Primality_test#Fast_deterministic_tests . I might be misunderstanding what you are suggesting. If so, could you please explain what I'm misunderstanding?
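The digit-growth claim above is easy to check numerically. A small Python sketch (function names are mine) that counts the digits of n! via the log-Gamma function, so the factorial itself is never computed, and compares against the n(ln(n) - 1) lower bound from the linked article:

```python
import math

def factorial_digits(n):
    # Decimal digits of n!: lgamma(n + 1) = ln(n!), converted to base 10.
    return int(math.lgamma(n + 1) / math.log(10)) + 1

def lower_bound_digits(n):
    # The n(ln(n) - 1) natural-log bound, converted to decimal digits.
    return n * (math.log(n) - 1) / math.log(10)
```

Already at n = 100 the factorial has 158 digits, so for an n that itself has thousands of digits the storage cost is astronomical.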
  5. Well, consider that given a number [math]n[/math], you need [math]\log_2(n)[/math] bits to represent it. So with [math]N = \log_2(n)[/math], you have [math]2^N = 2^{\log_2(n)} = n[/math], and you are left with [math]O(2^N)[/math] in terms of the input size [math]N[/math]. Two points to note: a) Big O and Big Theta are two different things. Big O is only an upper bound, whereas Big Theta is much tighter and must also be a lower bound. See http://en.wikipedia.org/wiki/Big_O_notation#The_family_of_Bachmann-Landau_notations . b) You only need to check for factors up to [math]\sqrt{n}[/math] rather than [math]\frac{n}{2}[/math]: if a number bigger than the square root divides [math]n[/math], you'd need to multiply it by a number smaller than the square root to get [math]n[/math], otherwise you'd get a number larger than [math]n[/math].
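Point b) above can be sketched directly. A minimal trial-division primality test in Python (my own sketch, not from the thread) that stops at the square root; note it is still exponential in the bit length N, as argued above:

```python
import math

def is_prime(n):
    # Trial division, checking candidate factors only up to sqrt(n):
    # any factor above sqrt(n) would pair with a cofactor below it,
    # which would already have been found.
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```

Going to sqrt(n) instead of n/2 cuts the work from O(n) divisions to O(sqrt(n)), though in the bit length N that is still O(2^(N/2)).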
  6. [hide] Or the fact that there is an eldest child, (potentially) implying there aren't 2 eldest children if you only consider integer ages. [/hide]
  7. Given personal experience, the experiences of friends, and also looking around online, I'd say that, while Macs USED to be higher quality than your average PC, they are probably about on par these days.
  8. Sounds right. Although you could probably get away with [math] (C \setminus A) \cup (B \setminus C) [/math] since [math]C \setminus A[/math] will contain the bits that you need from C.
  9. I think you have the wrong idea. From what I gather, when one talks of 10 dimensions, one is not talking about different universes or anything like that (you might be thinking of http://en.wikipedia.org/wiki/Many_worlds , but that suggests many more than 10 "universes"); one is talking about spatial dimensions, in the same way that you have 3 dimensions (i.e. height, width and breadth). The idea is that you don't have just 3 spatial dimensions but in fact 10 (this is very difficult to imagine, hence the video and various other resources trying to provide a way to picture it). From what I remember, the ideas that suggest a greater number of spatial dimensions usually say that the dimensions other than the 3 we normally consider are very small and unnoticeable, but that sort of thing is probably a massive oversimplification told to the general public in documentaries etc. (I'm not a physicist).
  10. The nice thing is that it's not limited to SQL-backed systems: you can use LINQ to run through various data structures you are already using, or implement a few interfaces (through IQueryable etc., I believe) and have your own data structure be queryable. That said, the underlying idea isn't completely original; as bascule mentioned previously, things like ActiveRecord and ADO.NET have done similar things (not specifically the query part, but bringing the handling of the data and types into the programming language), but it'll be interesting to use. Personally, I'm more interested in them bringing the various features that LINQ requires, such as lambda expressions and other functional concepts (of which LINQ seems to be part), to languages such as VB.NET. There are many times when you just want to do something in a functional manner because it is the quickest and most readable way, and a nice fusion of imperative and functional ideas in a language certainly benefits developers.
  11. You know you can preview your post, right?
  12. I don't think that's correct: you are averaging over distance, not time. I believe it can't be done. Take the speed we travel back at to be [math]x[/math]. We have [math] \textrm{average speed} = \frac{\textrm{total distance}}{\textrm{total time}} [/math]. The total distance is 60 miles, and the total time is the 1 hour already used plus the time it will take to cover the remaining 30 miles, i.e. [math]\frac{30}{x}[/math]. This gives us [math] 60 = \frac{60}{1 + \frac{30}{x}} [/math], hence [math]1 + \frac{30}{x} = 1 [/math] and so [math]\frac{30}{x} = 0 [/math]. As you can see, there is no solution. If you could travel for another hour at 90 mph, then it would be fine, but you only have 30 miles left to cover.
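The dead end above can also be seen numerically. A quick Python sketch (the scenario is the one from the post: 30 miles covered in the first hour, 30 miles remaining, target average 60 mph; the function name is mine):

```python
def average_speed(return_speed):
    # 30 miles already done in 1 hour, then 30 more miles at
    # return_speed mph; average = total distance / total time.
    total_distance = 60.0
    total_time = 1.0 + 30.0 / return_speed
    return total_distance / total_time
```

Even at absurd return speeds the average only creeps toward 60 mph without reaching it, because the first hour already used up all the time a 60 mph average over 60 miles allows.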
  13. Well, it's fairly inexpensive, so why not buy it, or get it from the local library or something? If you find it hard to read, it might make a reasonable doorstop.
  14. Strangely, it seems they are from different roots (one from Greek, one from Latin): http://en.wikipedia.org/wiki/Greek_and_Latin_roots_in_English