
2^N (2 to the power of N)


fredreload


I need an efficient way to enumerate 2 to the power of N possibilities. N would be something like 1100, i.e. 1100 bits (one thousand one hundred bits), which makes 2^1100 possibilities. I want to design a program that loops through every single possibility, but unfortunately 2^1100 is too big a number. So how should I go about tackling this problem from a mathematical or statistical point of view?


A 32-bit long/int is limited to 32 bits,

a 64-bit long long is limited to 64 bits,

but you can make your own custom integers with "infinite" precision, limited only by how much memory your computer has.

Storing 1100 bits takes just 138 bytes.

Make your own custom C++ class that implements add/subtract/multiply/divide/and/or/xor/invert/left-shift/right-shift,

or whatever other operations you need on them.
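For concreteness, here is a minimal sketch of what such a class could look like (the name UInt1100 and the 18-word layout are just my illustration, not an existing library): 1100 bits fit in eighteen 64-bit words, addition propagates a carry word by word, and bitwise operations apply to each word independently.

```cpp
// Minimal sketch of a fixed-width "big" unsigned integer wide enough for 1100 bits.
// 18 x 64-bit words = 1152 bits of storage.
#include <array>
#include <cstdint>

class UInt1100 {
public:
    UInt1100() : words_{} {}                       // value 0

    // Addition with carry, word by word (least-significant word first).
    UInt1100& operator+=(const UInt1100& other) {
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < words_.size(); ++i) {
            std::uint64_t sum = words_[i] + other.words_[i];
            std::uint64_t out = sum + carry;
            // Carry out if either addition wrapped around.
            carry = (sum < words_[i]) || (out < sum) ? 1 : 0;
            words_[i] = out;
        }
        return *this;                              // overflow past bit 1151 is discarded
    }

    // Bitwise operations are applied independently to each word.
    UInt1100& operator^=(const UInt1100& other) {
        for (std::size_t i = 0; i < words_.size(); ++i) words_[i] ^= other.words_[i];
        return *this;
    }
    UInt1100& operator&=(const UInt1100& other) {
        for (std::size_t i = 0; i < words_.size(); ++i) words_[i] &= other.words_[i];
        return *this;
    }

    void set_bit(std::size_t n) { words_[n / 64] |= std::uint64_t{1} << (n % 64); }
    bool test_bit(std::size_t n) const { return (words_[n / 64] >> (n % 64)) & 1u; }

private:
    std::array<std::uint64_t, 18> words_;
};
```

In practice an existing, tested library such as GMP (suggested later in this thread) already provides all of these operations.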

 

Split the calculations across CPU threads (you will learn how to write multi-threaded programs),

or split them across GPU threads (you will learn how to write CUDA/OpenCL programs);

modern GPUs have 1024+ cores (how many does yours have?). A toy example of the CPU-thread split is sketched below.
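As a toy illustration of the CPU-thread split (with a deliberately small N = 24 and a made-up per-state test, since 2^1100 iterations are out of reach no matter how they are divided), each thread enumerates its own slice of the range:

```cpp
// Toy example (requires C++20 for std::popcount): enumerate all 2^24 bit patterns,
// split across hardware threads. Each thread counts how many patterns in its slice
// have exactly 12 bits set (an arbitrary stand-in for the real per-state work).
#include <algorithm>
#include <atomic>
#include <bit>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    constexpr std::uint64_t total = std::uint64_t{1} << 24;   // 2^24 states
    unsigned num_threads = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<std::uint64_t> hits{0};

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([=, &hits] {
            std::uint64_t begin = total * t / num_threads;       // this thread's slice
            std::uint64_t end   = total * (t + 1) / num_threads;
            std::uint64_t local = 0;
            for (std::uint64_t state = begin; state < end; ++state)
                if (std::popcount(state) == 12) ++local;          // per-state "work"
            hits += local;
        });
    }
    for (auto& w : workers) w.join();

    std::cout << hits << " of " << total << " states have 12 bits set\n";
    return 0;
}
```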

 

A statistical approach will not be precise.

More details are needed to give further advice.

Edited by Sensei

I want to design a program that loops through every single possibility, but unfortunately 2^1100 is too big a number. So how should I go about tackling this problem from a mathematical or statistical point of view?

 

Writing such a program is extremely easy: it's just a counter loop that counts from 0 to 2^1100 - 1 (yes, that really is easy, just do additions with carry). In a practical sense there's no point, because it takes far too long to count that high.
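A minimal sketch of such a counter (my own illustration of the "additions with carry" idea, not Thorham's actual code): the value is an array of 64-bit words, and incrementing propagates a carry through them.

```cpp
// Sketch: count upward over an 1100-bit value using add-with-carry.
#include <array>
#include <cstdint>

using Counter = std::array<std::uint64_t, 18>;   // 18 x 64 = 1152 bits, word 0 least significant

// Add 1; returns true when the counter wraps back to zero (i.e. it has visited every value).
bool increment(Counter& c) {
    for (auto& word : c) {
        if (++word != 0) return false;   // no carry out of this word, done
        // word wrapped to 0: carry into the next word
    }
    return true;
}

int main() {
    Counter c{};                          // start at 0
    std::uint64_t visited = 0;
    do {
        ++visited;                        // "visit" the current state here
    } while (!increment(c) && visited < 1'000'000);   // capped; the full loop would never finish
    return 0;
}
```

The loop itself is trivial; the problem, as the rest of this thread spells out, is purely the number of iterations.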

 

It's not practically possible unless you have an extremely large amount of time to wait for the program to complete. No way around it, I'm afraid.

Edited by Thorham

2^1100 is a lot more than there are particles in the known universe.

That's not relevant.

 

Counting to it is impossible.

 

It's not impossible in principle. It's just not practically manageable. In fact, any finite number can be counted to in principle.

Edited by Thorham

That's not relevant.

 

 

It's not impossible in principle. It's just not practically manageable. In fact, any finite number can be counted to in principle.

In principle, yes, but that doesn't help because, in practice, it's impossible, like I said.

 

Just work out how long it would take to count to 10^331 (which is roughly 2^1100, since 1100 × log10(2) ≈ 331).

That many seconds is more than 10^300 times the age of the universe.

 

So, do you accept that saying "well it's possible in principle" is completely unhelpful here?


So, do you accept that saying "well it's possible in principle" is completely unhelpful here?

 

I already said in my first post that it takes too long. You said it's impossible, which it's not, of course. Saying it's impossible is what's not helpful, because it's not correct, and doesn't show the problem with doing this.

Edited by Thorham

 

I already said in my first post that it takes too long. You said it's impossible, which it's not, of course. Saying it's impossible is what's not helpful, because it's not correct, and doesn't show the problem with doing this.

Pointing out that it is impossible (whether that's in principle or in practice) forces you to move on, and reconsider the problem. And that's why I asked "What question are you trying to answer?"

Saying that it's possible, but not actually possible, achieves little or nothing.

On the other hand, saying that the number is bigger than the number of particles in the universe does explain why it's never going to happen (because it's a vastly more time-consuming problem than counting all the particles in the universe), so it does show the problem with doing it.


@Op: Is this in relation to a programming class?

It's not a programming class, but I'd like to explore the problem from both a programming perspective and a mathematical perspective. For programming, pretty much as Endy said, I want the most efficient way, using the least amount of time, to solve this huge problem, if it is solvable at all. For mathematics, I'm looking for a way to reduce the equation.

The problem is about computing all possible states of neurons and their synapses. Each synapse has a binary state: either on, with an electrical signal running, or off, with no signal. The maximum number of synapses I found connecting to a single neuron is 1100 (one thousand one hundred), so with on and off states that neuron has a total of 2^1100 states. If you stack this neuron with another neuron, the states combine like this: imagine another neuron with 5 synapses connected to it, which has 2^5 states; the total for this neuron plus the previous 1100-synapse neuron is 2^1100 * 2^5 = 2^(1100+5). Now add another billion neurons and you have something like 2^(1100 * 1 billion) states, if every neuron has 1100 synapses. I'm not sure whether such a number is crunchable, or whether there is a way to reduce it. I'd like to hear what you guys think.
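For what it's worth, here is the arithmetic from the post above written out (taking its figures of roughly a billion neurons with 1100 synapses each at face value): the per-neuron state counts multiply, so the exponents add.

$$
\text{total states} \;=\; \prod_{i=1}^{N} 2^{k_i} \;=\; 2^{\sum_{i=1}^{N} k_i} \;\approx\; 2^{1100 \times 10^{9}} \;=\; 2^{1.1 \times 10^{12}} \;\approx\; 10^{3.3 \times 10^{11}}
$$

That is a number with roughly 330 billion decimal digits, so enumerating every state directly is out of the question; any tractable approach has to avoid the full enumeration.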


On the other hand, saying that the number is bigger than the number of particles in the universe does explain why it's never going to happen (because it's a vastly more time-consuming problem than counting all the particles in the universe), so it does show the problem with doing it.

 

It doesn't. The number of particles in the universe is just a big number, not necessarily a number you can't practically count to. Take the number of atoms in the universe. That's about 10^82. You can practically count to that with supercomputers. Stuff like that just doesn't mean anything.


If you want to do arithmetic with 1100-bit numbers, then I would recommend using one of the existing libraries for this. Anything beyond addition rapidly gets complicated and needs testing.

 

For example: https://gmplib.org
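For instance, a minimal sketch using GMP's C interface to build and inspect 2^1100 (the program is only an illustration; see the GMP manual for the full API):

```cpp
// Sketch: compute 2^1100 with GMP and print its size. Build with: g++ pow.cpp -lgmp
#include <gmp.h>
#include <cstdio>

int main() {
    mpz_t n;
    mpz_init(n);
    mpz_ui_pow_ui(n, 2, 1100);                    // n = 2^1100
    // mpz_sizeinbase gives the number of digits (exact, or at most one too many).
    std::printf("2^1100 needs about %zu decimal digits\n", mpz_sizeinbase(n, 10));
    gmp_printf("2^1100 = %Zd\n", n);              // print the full value
    mpz_clear(n);
    return 0;
}
```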

 

On the other hand, it sounds like you need to reconsider your approach.


 

It doesn't. The number of particles in the universe is just a big number, not necessarily a number you can't practically count to. Take the number of atoms in the universe. That's about 10^82. You can practically count to that with supercomputers. Stuff like that just doesn't mean anything.

" You can practically count to that with super computers."

No you can't.

The fastest computers run at about 100 petaflop- so then have a count rate of 100 PHz or so.

they can count to about 10^ 17 in a second.

So to count to 10^82 would take about 10^65 seconds

That's about 10^57 years

or about 10^47 times the age of the universe.

It's still impossible if we give everyone on earth one of these computers- that only takes a factor of about 10^10 off how wrong you are.

 

Now please stop pretending this is in any way helping the OP.

The task is actually impossible by counting, so they have to find a different way.


Even if you have 2^1100 possible states, you don't need to enumerate them all. You just need to evaluate each set of inputs as it happens, which is entirely practical. Simulations of microprocessors deal with more states than that.
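A rough sketch of that idea (the threshold rule below is a made-up placeholder, not anything from the OP's model): keep the 1100 synapse states in a bitset and evaluate the neuron only for input patterns that actually occur.

```cpp
// Sketch: evaluate only the synapse patterns that actually occur,
// instead of enumerating all 2^1100 possible patterns.
#include <bitset>
#include <cstddef>
#include <iostream>

constexpr std::size_t kSynapses = 1100;

// Placeholder activation rule: fire if enough synapses are active.
bool neuron_fires(const std::bitset<kSynapses>& inputs, std::size_t threshold) {
    return inputs.count() >= threshold;
}

int main() {
    std::bitset<kSynapses> state;          // all synapses off
    state.set(3);                          // a handful of synapses receive a signal
    state.set(42);
    state.set(1099);
    std::cout << (neuron_fires(state, 2) ? "fires" : "quiet") << '\n';
    return 0;
}
```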


how wrong you are.

Yeah, you're right, I overestimated supercomputers. It probably won't stay like that, of course.

 

Now please stop pretending this is in any way helping the OP

I'm not pretending anything. I already explained it in my first post.

Edited by Thorham

Yeah, you're right, I overestimated supercomputers. It probably won't stay like that, of course.

 

 

I'm sorry, but it probably will. Just to be able to count that high in 1 year, we would need a computer about 10^57 times faster (that's a 1 followed by 57 zeros) than a modern supercomputer.

 

That's unrealistic to expect to ever happen, frankly.


That's unrealistic to expect to ever happen, frankly.

 

I'm not expecting it to happen any time soon, of course, but who can say what's possible in a million years? Perhaps it'll be child's play by then.

 

Remember ENIAC? It could do about 5,000 additions per second. Now look at supercomputers. Astonishing difference.

Edited by Thorham

 

I'm not expecting it to happen any time soon, of course, but who can say what's possible in a million years? Perhaps it'll be child's play by then.

 

Remember ENIAC? It could do about 5,000 additions per second. Now look at supercomputers. Astonishing difference.

In this case, that's like a progression of candle > lightbulb > star.


In this case, that's like a progression of candle > lightbulb > star.

 

The first light bulb without a vacuum came in 1802, the first light bulb with a vacuum (both used platinum filaments) in 1840, and the first fusion device around 1950.


 

The first light bulb without a vacuum came in 1802, the first light bulb with a vacuum (both used platinum filaments) in 1840, and the first fusion device around 1950.

A fusion device is not an actual star.

 

Is this where a quantum computer comes in? Anyone know how that thing works?

Quantum computers are not actually faster computers. They are significantly faster at solving specific kinds of problems that take current computers a very long time to solve.

 

But if we're talking simple counting, or analyzing every possible solution to a problem rather than just narrowing it down to one specific solution, they're not really going to help.


I overestimated supercomputers. It probably won't stay like that, of course.

 

I'm not pretending anything. I already explained it in my first post.

You continue to pretend that your idea (that it was ever going to be possible) has any validity.

If we "magically" turned every particle in the universe into a supercomputer and tied them together then it would still be too slow to count to 2^1100 in the lifetime of the universe.

Not "slightly too slow", or a "bit slow, but we might get better computers".

Counting to 10^331 at 100 PHz takes 10^314 seconds.

Have every particle in the universe helping out and it's still 10^232 seconds.

Just to be utterly insane, turn every particle in the universe into another universe, then turn each of the particles in each of those universes into a supercomputer, and it still takes 10^150 seconds.

That's about 10^142 years,

or about 10^132 times the age of the universe.

 

Are you beginning to get a grip on how big this number is, and how wrong you were?
