Everything posted by DaviFN

  1. Given a number n, what is the best approach to find the most compact representation of n as a sum of powers, n = beta_1^1 + beta_2^2 + ... + beta_k^k, where the betas are the bases and are limited in range (they are positive and can't surpass a given value, say, for instance, 2^32)? Among all possible representations, how could we find the one(s) that have the fewest non-zero betas? As an example instance of the problem, suppose n = 558545864083284009. Among all possible such representations of this number, there is at least one that requires only two betas: beta_1 = 2 and beta_21 = 7, since 2^1 + 7^21 = 558545864083284009. Is it possible to solve the problem? I mean... of course it's possible, but is there any approach better than brute-forcing? Thanks!
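     As a baseline to compare brute force against, here is a hedged sketch (my own, not from the thread) of a greedy heuristic in C with GMP: at each step it picks the power that leaves the smallest remainder, prints that term, subtracts it, and repeats. Greedy is not guaranteed to use the fewest betas, and for simplicity this sketch may reuse an exponent, which slightly relaxes the beta_i^i form; it only gives an upper bound on the number of terms.

         #include <stdio.h>
         #include <gmp.h>

         /* Greedy baseline: at each step pick the exponent i whose floor i-th
          * root obeys the base bound and leaves the smallest remainder, print
          * that term, subtract it, and repeat. Not guaranteed minimal. */
         static void greedy_powers(const mpz_t n, unsigned long base_max)
         {
             mpz_t rest, root, rem, best_rem, term;
             mpz_inits(rest, root, rem, best_rem, term, NULL);
             mpz_set(rest, n);

             while (mpz_cmp_ui(rest, 0) > 0) {
                 unsigned long bits = mpz_sizeinbase(rest, 2);
                 unsigned long best_i = bits;         /* i = bits gives root 1, always valid */
                 mpz_set(best_rem, rest);

                 for (unsigned long i = 1; i <= bits; i++) {
                     mpz_rootrem(root, rem, rest, i); /* root = floor(rest^(1/i)) */
                     if (mpz_cmp_ui(root, base_max) > 0)
                         continue;                    /* base would exceed the bound */
                     if (mpz_cmp(rem, best_rem) < 0) {
                         mpz_set(best_rem, rem);
                         best_i = i;
                     }
                 }
                 mpz_rootrem(root, rem, rest, best_i);
                 gmp_printf("beta_%lu = %Zd\n", best_i, root);
                 mpz_pow_ui(term, root, best_i);      /* term = root^best_i */
                 mpz_sub(rest, rest, term);
             }
             mpz_clears(rest, root, rem, best_rem, term, NULL);
         }

         int main(void)
         {
             mpz_t n;
             mpz_init_set_str(n, "558545864083284009", 10);
             greedy_powers(n, 4294967295UL);          /* bases capped at 2^32 - 1 */
             mpz_clear(n);
             return 0;
         }

     Compile with -lgmp. Each greedy step costs one mpz_rootrem per candidate exponent, so this is itself a (smarter) brute force, not an answer to the complexity question.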
  2. This is the binary search to find the lower bound for i:

         while (1) {
             if (executeBinarySearch == 0) {
                 break;
             }
             initialI = (rangeFinish + rangeStart) / 2;
             mpz_rootrem(ithRoot, rem, n, initialI);
             int logic = mpz_cmp_ui(ithRoot, 2147483647);
             if (mpz_cmp_ui(ithRoot, 1) == 0) {
                 // i is too large and we got 1, let's pick a smaller i
                 rangeFinish = initialI;
             }
             // logic > 0 means ithRoot is greater than 2147483647,
             // logic == 0 means it's equal, and logic < 0 means it's smaller.
             if (logic > 0) {
                 // i-th root is greater, so we have to pick a greater i
                 rangeStart = initialI;
             }
             if (logic < 0) {
                 // i-th root is smaller, so we have to pick a smaller i
                 rangeFinish = initialI;
             }
             if (rangeFinish <= rangeStart + 1) {
                 break;
             }
             if (logic == 0) {
                 break;
             }
             // parentheses are required here: `!x > 0` parses as `(!x) > 0`
             if (!(mpz_cmp_ui(ithRoot, 2147483647) > 0)) {
                 break;
             }
         }

     Similarly, this is the binary search to find the upper bound for i:

         rangeStart = initialI;
         while (1) {
             finalI = (rangeFinish + rangeStart) / 2;
             mpz_rootrem(ithRoot, rem, n, finalI);
             if (rangeFinish <= rangeStart + 1) {
                 break;
             }
             if (mpz_cmp_ui(ithRoot, 1) == 0) {
                 // i is too large and we got 1, let's pick a smaller i
                 rangeFinish = finalI;
             }
             if (mpz_cmp_ui(ithRoot, 1) > 0) {
                 // the root is still >= 2, so we can pick a larger i
                 rangeStart = finalI;
             }
         }

     And this is the loop brute-forcing every possible i-th root, updating the smallest remainder whenever a smaller one is found:

         // for (i = finalI; i >= initialI; i--) would scan in reverse
         for (i = initialI; i <= finalI; i++) {
             mpz_rootrem(ithRoot, rem, n, i);
             if (mpz_cmp(rem, smallestRem) < 0) {
                 // new smallest remainder
                 mpz_set(smallestRem, rem);
                 smallestRemI = i;
             }
         }

     My CPU is single-core. I've set the priority of the process to the highest one (not real-time), but it's still too slow. Is there a non-brute-force approach to the problem?
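     One observation that might help here (my own suggestion, not from the thread): both binary searches can be replaced by direct bounds computed from the bit length of n. The root floor(n^(1/i)) stays at least 2 exactly while i <= log2(n), and it drops below 2^31 once i exceeds roughly log2(n)/31, so mpz_sizeinbase gives the candidate range almost for free. A minimal sketch, assuming n >= 4 and the same base cap of 2147483647; the endpoint walk-down guards against the off-by-one cases:

         #include <gmp.h>

         /* Sketch: derive the exponent range [lo, hi] from the bit length of n.
          * 2^(bits-1) <= n < 2^bits, so the root at i = bits-1 is still >= 2,
          * and the root drops under 2^31 once i exceeds about (bits-1)/31. */
         static void exponent_range(const mpz_t n, unsigned long *lo, unsigned long *hi)
         {
             size_t bits = mpz_sizeinbase(n, 2);
             *hi = bits - 1;
             *lo = (bits - 1) / 31 + 1;

             /* The estimate for lo can be off by one; walk it down while the
              * root one step earlier still fits under the cap. */
             mpz_t root, rem;
             mpz_inits(root, rem, NULL);
             while (*lo > 1) {
                 mpz_rootrem(root, rem, n, *lo - 1);
                 if (mpz_cmp_ui(root, 2147483647) > 0)
                     break;
                 (*lo)--;
             }
             mpz_clears(root, rem, NULL);
         }

     This only removes the setup cost, though: the main loop still performs on the order of bits(n) calls to mpz_rootrem, which is what dominates the ~90-day runtime mentioned below.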
  3. Given a number n, what is the asymptotically fastest algorithm to express it as base^exponent + rem such that rem is the smallest possible and base is limited to the range from 2 up to the maximum number representable by an unsigned int? I'm currently using the GMP bignum library in C, and my algorithm is as follows: it first finds the smallest and the greatest possible exponent under these constraints (two binary searches are performed), and then we know the range of i. Then, for each possible i, we take the i-th root of n and the corresponding remainder; the smallest remainder is updated whenever the current remainder is smaller. The algorithm does work, but it's far too slow (even using a library as powerful as GMP). For comparison, for a number with roughly 6 megabits in its binary representation, the algorithm takes ~90 days to complete. Intuitively, I think there is a better way to solve the problem, i.e. a faster algorithm with smaller time complexity. What can be done to speed up this process? Obs: Problem: given n, find i such that the remainder n - floor(ithRootOf(n))^i is the smallest possible, subject to 2 <= floor(ithRootOf(n)) <= 2147483647.
  4. Endy0816, I do understand a general algorithm to compress everything doesn't exist. But, given a specific file, how can you compress it the most? That's the question.
  5. Sensei, this can be useful even if compression takes a long time. Primes and their powers are cheap to compute: a power is computed in time logarithmic in the exponent. Famous big files used worldwide could be compressed with joint effort, thus lowering the cost of data storage. If this can be done, even if compression takes a long time, decompression will be fast. And I have started: running for three days so far, I've managed to find a power of a prime whose binary representation happens to match a 34-bit substring of the famous Lena image. That's not significant at all, hence I came here for math help.
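     To make the "logarithmic time" claim above concrete: computing p^S needs only about log2(S) squarings plus a few extra multiplications, which is what binary exponentiation does (and essentially what GMP's mpz_pow_ui does internally). A minimal illustration, with the caveat that each bignum multiplication itself grows more expensive as the numbers grow:

         #include <gmp.h>
         #include <stdio.h>

         /* Binary exponentiation: result = base^exp using ~log2(exp) squarings.
          * GMP's mpz_pow_ui already covers this; shown only to illustrate
          * why huge powers are cheap to generate. */
         static void pow_by_squaring(mpz_t result, unsigned long base, unsigned long exp)
         {
             mpz_t acc;
             mpz_init_set_ui(acc, base);
             mpz_set_ui(result, 1);
             while (exp > 0) {
                 if (exp & 1)
                     mpz_mul(result, result, acc);  /* fold in the current square */
                 mpz_mul(acc, acc, acc);            /* square */
                 exp >>= 1;
             }
             mpz_clear(acc);
         }

         int main(void)
         {
             mpz_t r;
             mpz_init(r);
             pow_by_squaring(r, 17, 89435);   /* the example values from the thread */
             /* 17^89435 has about 89435 * log2(17), roughly 365,000, bits */
             printf("bits: %zu\n", mpz_sizeinbase(r, 2));
             mpz_clear(r);
             return 0;
         }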
  6. Sensei, the goal is compressing a specific file the most. Compression is indeed slow, but once compressed, the file is trivial to decompress. I'm aware of how the vast majority of compression takes place: search for patterns, build a probabilistic model, represent more likely things with fewer units of storage. That's not my approach; I'm taking a more numerical approach that may work on any (specific) data. A pre-calculated array of primes would take too much space in memory, but it could be stored on disk (the focus is on decompression, not on compression). Compression's goal here is to find a function, and parameters to it, such that the number that makes up the file is the output. Of course, there are infinitely many function-argument pairs that satisfy this; hence, we want one that doesn't take much space on disk. Is there any theory that states whether this approach is possible or impossible?
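     On the theory question at the end, a standard counting argument (my addition, not from the thread) makes Endy0816's point above quantitative. Descriptions, i.e. a function plus its parameters, however encoded, are themselves bitstrings, and there are strictly fewer short bitstrings than k-bit files:

         (bitstrings shorter than k bits) = 2^0 + 2^1 + ... + 2^(k-1) = 2^k - 1 < 2^k = (number of k-bit files)

     So no injective scheme can shrink every k-bit file; at best it compresses some inputs while necessarily expanding others. Kolmogorov complexity formalizes this: almost all strings have no description much shorter than themselves.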
  7. I'm interested in the following problem: given a random number n (n can be gigantic), how do we find a pair function+input(s) whose output is n, such that the input(s) are relatively small in size? This problem arises in data compression; consider the bits that make up a file (or a substring of bits of the file) and treat them as a number (i.e. the bits are the binary representation of this number). If we could write a pair function+input(s) whose output happens to be the substring, the whole substring could be replaced by the function+input(s). I've thought of expressing the number as sums (or differences) of relatively big powers of prime numbers. Is this a good approach? And, if not, what would be a good one, and how should I proceed? Motivation for the question: a simple function like raising a prime p to a power S can result (depending on the values of p and S) in various outputs, each of which is unique (given that any number has only one prime factorization). If we pick p = 17 and S = 89435, for example, that's computationally trivial to compute (it takes logarithmic time) and results in a somewhat gigantic number. We can then generate a file whose bits are the same as the binary representation of this number (or at least some of the bits are). (This is just a rough example.) The problem is going the other way: given a bit string (hence, a number), how do we express this specific bitstring with fewer bits (very few, actually) through a function whose output is the number? Any ideas/answers/comments are welcome!
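     To make the easy direction of this concrete, here is a minimal sketch (my own illustration; p and S are the example values from the post) of the decompression side: regenerate the number from the stored pair (p, S) and emit its bytes. It assumes GMP's default malloc-based allocator, so free() is valid on the exported buffer.

         #include <gmp.h>
         #include <stdio.h>
         #include <stdlib.h>

         /* "Decompression": rebuild the bitstring as the binary representation
          * of p^S and write its bytes out. The hard, open direction --
          * recovering (p, S) from an arbitrary bitstring -- is what the
          * post is asking about. */
         int main(void)
         {
             unsigned long p = 17, S = 89435;   /* example parameters from the post */
             mpz_t num;
             mpz_init(num);
             mpz_ui_pow_ui(num, p, S);          /* num = p^S, ~365,000 bits */

             /* Serialize the number's bytes, most significant byte first. */
             size_t count;
             unsigned char *bytes = mpz_export(NULL, &count, 1, 1, 1, 0, num);
             fwrite(bytes, 1, count, stdout);

             free(bytes);
             mpz_clear(num);
             return 0;
         }

     The catch, per the counting argument above, is that only a vanishingly small fraction of bitstrings are exact powers of small primes, so most files admit no such short pair without a correction term.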