First commercially viable quantum computer


bascule


Aside from raw speed, quantum computers may be just what the AI field needs to start producing actual 'intelligence' and to attempt object recognition systems that actually work.


How much faster are these things supposed to be compared to the ones we use today? And will this technology increase transfer speeds over the internet and such?

 

From what I remember reading, they wouldn't necessarily be faster except in certain kinds of tasks. Specifically, tasks which require checking different possibilities of something, like codebreaking. I might be wildly off about this, but it has something to do with every permutation existing in a probability function, then collapsing in such a way that only the right answer can be real. In other words, crazy magic.
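
For the curious, here's a toy classical simulation of that "collapse to the right answer" idea (Grover-style amplitude amplification). Everything here is illustrative: real hardware does this with quantum gates, and the problem size and target index are made up.

```python
# Toy classical simulation of Grover-style amplitude amplification.
# Illustrative only: a real quantum computer does this with gates, not NumPy.
import numpy as np

n_items = 16          # search space of 16 "permutations" (made-up size)
target = 11           # the one "right answer" (made-up index)

# Start in a uniform superposition: every permutation equally likely.
state = np.full(n_items, 1 / np.sqrt(n_items))

# Grover iterations: flip the target's sign, then reflect about the mean.
# Roughly (pi/4) * sqrt(N) iterations boost the target's probability.
for _ in range(int(np.pi / 4 * np.sqrt(n_items))):
    state[target] *= -1                  # oracle marks the answer
    state = 2 * state.mean() - state     # inversion about the mean

probabilities = state ** 2
print(f"P(target) = {probabilities[target]:.3f}")   # ~0.96, close to 1
```

The point is the iteration count: roughly sqrt(N) steps, instead of the ~N/2 a classical brute-force search needs on average.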

 

I suspect there would be more general increases in performance, though, since other processes could probably be done in a different way so as to exploit the quantum properties. Again, though, I don't know much about it.


From what I remember reading, they wouldn't necessarily be faster except in certain kinds of tasks.

The interesting point about quantum computers is that, due to their different operation mechanism, they allow for algorithms that are not possible on classical computers. These algorithms can, for some applications, vastly outperform classical ones.

Specifically, tasks which require checking different possibilities of something, like codebreaking.

I think that's just one example. Another one is, iirc, related to doing Fourier transforms and related tasks like frequency filtering very efficiently.
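
For what it's worth, the quantum Fourier transform is mathematically just the ordinary DFT matrix; here's a quick NumPy sketch (sizes are made up for the demo) showing the equivalence. The quantum win is circuit size, roughly O(n²) gates for N = 2ⁿ amplitudes versus O(N log N) classical operations, though the output amplitudes can't simply be read out.

```python
# Sketch: the quantum Fourier transform is, mathematically, the unitary
# DFT matrix F[j, k] = omega^(j*k) / sqrt(N) with omega = exp(2*pi*i/N).
# Illustrative only: real QFTs are built from O(n^2) gates on n qubits.
import numpy as np

n_qubits = 3              # made-up size for the demo
N = 2 ** n_qubits
omega = np.exp(2j * np.pi / N)

qft = np.array([[omega ** (j * k) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)

# Check against NumPy's FFT (sign/normalization conventions differ,
# so the QFT matches the orthonormal *inverse* FFT).
x = np.random.rand(N)
print(np.allclose(qft @ x, np.fft.ifft(x, norm="ortho")))  # True
```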


Probably shouldn't get too excited. After all, look at the Transputer from the '80s: it was damn good but died a death... though I hear it's been taken up again. :D

 

Representing more states simultaneously will make the quantum computer good for algorithms, but does nothing for AI.

AI... proper AI, not just programmed response, will take a machine that is split into cells working separately, but each aware of the whole, in order to think... if anything the PS3 is the best jump forward recently.


The ability to have weighted logic states as opposed to yes or no seems to lend itself to AI because the actual functioning of our neurons is based upon strength of connection between the neurons (weighted states).

 

Why do you suggest that doesn't help AI?


The ability to have weighted logic states as opposed to yes or no seems to lend itself to AI because the actual functioning of our neurons is based upon strength of connection between the neurons (weighted states).

 

Why do you suggest that doesn't help AI?

 

What you're suggesting is that an analog computer is better suited to the purpose.

 

That's not necessarily the case.


What you're suggesting is that an analog computer is better suited to the purpose.

 

That's not necessarily the case.

 

Well, limitations of analog computers made them all but obsolete in the digital age. The ability to have multiple states (not just 0 and 1) while enjoying the benefits and strengths of digital computers would make a quantum computer much better geared for AI than an analog computer.

 

I think it would be easier to model the neocortex with a quantum computer.
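
As a side note on what "multiple states" means here: a single qubit is a normalized pair of amplitudes rather than a plain 0 or 1. A toy sketch below; the amplitudes are arbitrary example values, and classical sampling like this mimics only the measurement statistics, not interference.

```python
# Sketch of the "weighted state" idea: a qubit is not 0 or 1 but a
# normalized pair of amplitudes; measurement samples from the weights.
import numpy as np

alpha, beta = 0.6, 0.8                       # amplitudes for |0> and |1>
probs = [abs(alpha) ** 2, abs(beta) ** 2]    # [0.36, 0.64], sums to 1

samples = np.random.choice([0, 1], size=10_000, p=probs)
print(samples.mean())                        # ~0.64: the weight on |1>
```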


From what I remember reading, they wouldn't necessarily be faster except in certain kinds of tasks. Specifically, tasks which require checking different possibilities of something, like codebreaking. I might be wildly off about this, but it has something to do with every permutation existing in a probability function, then collapsing in such a way that only the right answer can be real. In other words, crazy magic.

 

I suspect there would be more general increases in performance, though, since other processes could probably be done in a different way so as to exploit the quantum properties. Again, though, I don't know much about it.

 

What about computer graphics?

 

 

I think that field would be vastly affected by quantum computers. :)


The most surprising thing to me about this development is the technology they ended up using: superconducting material. http://www.dwavesys.com/index.php?page=hardware

 

I mean, it seems rather brilliant and maybe even obvious in hindsight. But as recently as three years ago, when I did my undergrad thesis on quantum computation, they were still toying around with (or even focusing on, at least based on my impression) other ideas such as atomic laser traps, optical circuits, or liquid NMR (the topic I chose).


Well, limitations of analog computers made them all but obsolete in the digital age. The ability to have multiple states (not just 0 and 1) while enjoying the benefits and strengths of digital computers would make a quantum computer much better geared for AI than an analog computer.

 

The important point to consider with non-binary computers is the complexity of programming them. Computers based on "trinary" (i.e. ternary, three-valued logic) have been lauded for some time, but are for many reasons impractical, even as languages begin to implement three-valued logic constructs.
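
For illustration, here's what ternary representation looks like in practice. Balanced ternary uses digits {-1, 0, +1}, often written {T, 0, 1}; this is just a toy conversion routine, not any particular language's construct.

```python
# Toy conversion to balanced ternary, the representation behind "trinary"
# machines like the Soviet Setun. Digits are {-1, 0, +1}, written T/0/1.
def to_balanced_ternary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:          # a digit of 2 becomes -1 with a carry
            r = -1
            n += 1
        digits.append("T" if r == -1 else str(r))
    return "".join(reversed(digits))

print(to_balanced_ternary(8))   # "10T" = 9 + 0 - 1
```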

 

With quantum algorithms, the complexity increases to the point that specific knowledge of quantum physics and field theory is necessary to design algorithms. While this may be great for finding "shortcuts" to NP-complete problems (which general intelligence no doubt is), I doubt AI will practically come from anything but a general purpose binary computer, as that's the target platform that all general purpose programming systems are currently abstracted on top of.

 

I think it would be easier to model the neocortex with a quantum computer.

 

Perhaps you meant "faster" in lieu of easier.


What about computer graphics?

 

I think that field would be vastly affected by quantum computers. :)

 

If you're talking about display devices, then no doubt. A new form of display technology, called a surface-conduction electron-emitter display (SED), uses quantum mechanical properties and nanotechnology to provide a cathode ray emitter per picture element. A lawsuit filed by Applied Nanotech has prevented the launch of SED displays.


Weighted decisions via analogue quantum states WOULD prove useful for decision making. Believe me, I've had to try to program decision making recently and it's hard, very hard.

But the Cell processor in the PS3 is a brilliant step towards fully distributed computing. For one, you can keep adding N cells and get an N-times performance increase; normally this drops off.

Each individual cell may not currently carry the pattern of the whole unit, but it's a step in the right direction.

 

Plus the technology is out there and in use, and quantum processing has been a pipe dream for longer than Cell processors. You tend to find that if there's a will there's a way, and someone has finally pushed hard enough for quantum computing, but you've got to wonder why the big companies haven't pushed the technology. Let's face it: if IBM, Intel, Toshiba or any big company wanted to make it commercially viable, they could.


I doubt AI will practically come from anything but a general purpose binary computer, as that's the target platform that all general purpose programming systems are currently abstracted on top of.

It is very doubtful that simple binary computers will result in true AI, whether or not the target platform for programming is binary computers...

 

 

Perhaps you meant "faster" in lieu of easier.

No, easier.

 

When neurons fire they connect with multiple other neurons with varying strength. The varying strength of connection between neurons creates the 'weighted' states. This would be easier to model with a logic system with weighted states as opposed to one without weighted states.
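
To make "weighted states" concrete, here's a minimal sketch of a neuron as a weighted sum against a threshold. All the numbers are arbitrary examples.

```python
# Minimal sketch of "weighted states": a neuron's output depends on inputs
# scaled by per-connection strengths, not on a bare binary vote.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, threshold: float = 1.0) -> int:
    # Fires (1) when the weighted sum of incoming activity crosses threshold.
    return int(inputs @ weights >= threshold)

inputs  = np.array([1.0, 0.0, 1.0])    # which upstream neurons fired
weights = np.array([0.9, 0.4, 0.3])    # connection strengths (the "weights")
print(neuron(inputs, weights))          # 1: 0.9 + 0.3 = 1.2 >= 1.0
```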


When neurons fire they connect with multiple other neurons with varying strength. The varying strength of connection between neurons creates the 'weighted' states. This would be easier to model with a logic system with weighted states as opposed to one without weighted states.

 

 

Definitely true; maybe a mix of this and cell processing?

 

The problem is making each cell aware of the entire pattern of thought


Possibly.

 

I have followed Numenta a little of late in their attempts at creating object recognition (the field I eventually want to migrate to) and they are approaching it by trying to model the neocortex similarly.


It is very doubtful that simple binary computers will result in true AI

 

So you don't think the computations involved in thought/perception are universal? (in the Church-Turing sense)

 

Also you didn't bother to justify this statement whatsoever

 

That said, I don't think thought involves quantum behavior (which would make it hypercomputational), and all that's needed is a programming model that provides for concurrency and asynchronous messaging (the Actor model).

 

When neurons fire they connect with multiple other neurons with varying strength. The varying strength of connection between neurons creates the 'weighted' states. This would be easier to model with a logic system with weighted states as opposed to one without weighted states.

 

You're thinking too low level, and even so, what you're describing is easily implemented using the Actor model on a traditional binary computer.
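
To back that up, here's a minimal sketch of the Actor model in plain Python: each "neuron" actor owns a private mailbox and reacts asynchronously to weighted messages, all on ordinary binary hardware. The class and threshold are hypothetical illustrations, not anyone's real API; production actor systems (Erlang and the like) add supervision, distribution, and much more.

```python
# Minimal actor sketch: each "neuron" actor owns a mailbox (queue) and
# reacts to asynchronous messages, on plain binary hardware.
import queue
import threading
import time

class NeuronActor(threading.Thread):
    """A toy 'neuron' actor with a private mailbox and a firing threshold."""
    def __init__(self, threshold=1.0):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()
        self.threshold = threshold
        self.level = 0.0

    def send(self, weight):
        # Asynchronous: the sender never blocks or shares state.
        self.mailbox.put(weight)

    def run(self):
        while True:
            self.level += self.mailbox.get()   # integrate weighted input
            if self.level >= self.threshold:
                print(f"fired at level {self.level:.2f}")
                self.level = 0.0               # reset after firing

actor = NeuronActor()
actor.start()
for w in (0.5, 0.3, 0.4):      # weighted "synaptic" messages
    actor.send(w)
time.sleep(0.1)                # let the daemon thread drain its mailbox
```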

 

I have followed Numenta a little of late in their attempts at creating object recognition (the field I eventually want to migrate to) and they are approaching it by trying to model the neocortex similarly.

 

Numenta has created Hierarchical Temporal Memory, an implementation of Jeff Hawkins' Memory-Prediction Framework. Note that they have completely abstracted away the low-level behavior of neurons, and are instead focusing on the high-level behavior of hierarchies of neocortical columns.

 

Also note that neocortical columns can be implemented as Actors.


So you don't think the computations involved in thought/perception are universal? (in the Church-Turing sense)

Eh? Not sure what you mean.

 

Also you didn't bother to justify this statement whatsoever.

We have had nearly 50 years to create AI with simple binary logic. And the proponents of such AI still insist more speed and computational power is needed, which thought experiments can disprove. Further, the human brain doesn't work in binary, thus if we want to model the operation of our brain and create intelligence we probably won't be able to do so with binary logic.

 

You're thinking too low level, and even so, what you're describing is easily implemented using the Actor model on a traditional binary computer.

Hardware is always faster than software. Modeling something in software limits performance, scalability, and accuracy in this case.

 

If we want to model the brain and create intelligence, there is no such thing as thinking on too low of a level.

 

Numenta has created Hierarchical Temporal Memory, an implementation of Jeff Hawkins' Memory-Prediction Framework. Note that they have completely abstracted away the low-level behavior of neurons, and are instead focusing on the high-level behavior of hierarchies of neocortical columns.

 

Also note that neocortical columns can be implemented as Actors.

Can doesn't mean should. And yes, Numenta is attempting to create intelligence/object recognition by modeling the neocortex and creating HTMs. They aren't modeling an entire brain, or at the level of neurons, but at the level of columns; this of course is a first step and not a culmination of years of modeling brains via electronics...


Eh? Not sure what you mean.

 

Any universal computer can emulate any other universal computer. If consciousness results from universal computations then any universal computer can be conscious, regardless of how it fundamentally operates.

 

We have had nearly 50 years to create AI with simple binary logic. And the proponents of such AI still insist more speed and computational power is needed

 

Correlation does not imply causation.

 

I'd say a more human-brain-focused approach is needed; the previous approaches, based on ad hoc guesstimates of how thought actually worked, were necessary because emulating the brain in software was too expensive computationally.

 

The computational power of the average human brain still exceeds the fastest supercomputers by multiple orders of magnitude. That's why a brain-focused approach, even one built on high degrees of abstraction, wasn't and still isn't immediately practical, and probably won't be for at least a decade.

 

However, none of that says anything about the soundness of the approach. Your attitude is: The approach hasn't worked until now, therefore it will never work!

 

which thought experiments can disprove.

 

What thought experiments are those? Searle's Chinese Room? I'm no fan of Searle, or of the neutral monists, and the Chinese Room proves nothing but Searle's inability to consider that all universal computers are capable of the epiphenomenalism he alleges only biological systems allow for, with little defense of why his theory of biological naturalism is applicable only to biological systems. Searle is incapable of seeing the epiphenomenal possibilities of computers. The simple answer to Searle's question of "what part of the Chinese room did the thinking?" is "The gestalt did."

 

Further, the human brain doesn't work in binary, thus if we want to model the operation of our brain and create intelligence we probably won't be able to do so with binary logic.

 

Any universal computer can emulate any other universal computer. It doesn't matter how the brain actually works; as long as the computations it's performing are universal, we can emulate them on any platform we like.

 

Hardware is always faster than software.

 

Custom hardware is stagnant. The commodity platform, driven by intense market demand, vastly outpaces all custom hardware solutions.

 

Software is universal. While ASICs can generally outperform software running on the commodity platform at a specific point in time, in a year those ASICs are obsolete, and the commodity hardware will have almost doubled in performance.

 

Moving beyond ASICs to FPGAs, even these models have failed to evolve as their hardware is continually rendered obsolete (quite ironic considering they're typically referred to as "evolvable hardware"). Hugo de Garis and Genobyte's CAM-Brain Machine (developed in my hometown of Boulder) is perhaps the foremost example; Genobyte has since gone out of business, because the approach was immensely expensive and never led to anything worthwhile.

 

The problem with a hardware model is that it requires considerable material investment with an almost nonexistent return.

 

Modeling something in software limits performance, scalability, and accuracy in this case.

 

Contrast Genobyte with Numenta, which is developing software; that requires comparatively little investment and allows their technology to evolve at the speed of general purpose hardware.

 

If we want to model the brain and create intelligence, there is no such thing as thinking on too low of a level.

 

There certainly is. The lower level you go, the more computations you waste emulating behaviors the brain is trying to abstract away.

 

While the BlueBrain project is starting with a molecular model, they certainly aren't going to stick with it. Their goal is to use knowledge gained from the molecular model to model the function of the neocortical column at higher and higher levels of abstraction. This means more computations can be spent actually doing the calculations the neocortical column is performing, rather than wasted moving individual molecules around in a molecular simulation.

 

Can doesn't mean should. And yes, Numenta is attempting to create intelligence/object recognition by modeling the neocortex and creating HTMs. They aren't modeling an entire brain, or at the level of neurons, but at the level of columns; this of course is a first step and not a culmination of years of modeling brains via electronics...

 

It really involves a scrapping of all previous approaches to the problem of AI, both ones using special-purpose hardware and ones which involved developing software based upon ad hoc observation of how thought functions, rather than intense scrutiny of neurophysiology. Jeff Hawkins has pursued the highest level model he could scientifically defend of the brain's thalamocortical/hippocampal systems, and is developing software which comes nowhere close to modeling individual neurons, but tries to model the neocortex as a whole at the highest level of abstraction.

