
Concerning Accelerating Technology...


Luminal


I'm sure this has been a subject debated to death in the past, and that I'm adding nothing new to the conversation; however, there is one small point I'd like to get across, and it's a point about which I feel very strongly.

 

Most of you have probably heard of Ray Kurzweil and his arguments about accelerating returns in technology, specifically in computer science. Supposedly, this will eventually lead to a "singularity" of technological progress in the 21st century, where the exponential graph goes "straight up".

 

Even the most wildly optimistic transhumanists often assume that the majority of (economically feasible) computers in the world need to be operating at the necessary rates to achieve the "singularity" that launches greater-than-human intelligence. With that assumption, we get a time-frame in the 2040s for the "straight up" section of the exponential trend of technology. I disagree.

 

For those unaware, it's been estimated that the human brain operates at between 100 trillion "cps" (computations per second) and 10 quadrillion cps. It's also been predicted that, based on the current progression of Moore's Law, we'll meet even the highest of those estimates in supercomputers in under 6 years, and reach it in a $1000 computer in under 15 years.
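To make the arithmetic behind projections like that explicit, here's a quick sketch. The starting capacities and the doubling time below are illustrative assumptions I've picked, not figures from any particular source, and the answers are extremely sensitive to them (note how the $1000-computer estimate changes with the assumed starting point):

```python
import math

# Moore's-Law-style projection: how long until capacity `current` doubles
# its way up to `target`?  All starting values are illustrative assumptions.
BRAIN_CPS_HIGH = 1e16       # high estimate of the brain: 10 quadrillion cps
SUPERCOMPUTER_CPS = 1e15    # assumed current top supercomputer capacity
PC_1000_CPS = 1e12          # assumed current $1000 machine capacity
DOUBLING_YEARS = 1.5        # the classic ~18-month doubling time

def years_to_reach(target, current, doubling=DOUBLING_YEARS):
    """Years until `current` cps reaches `target` under steady doubling."""
    return doubling * math.log2(target / current)

print(f"Supercomputers: ~{years_to_reach(BRAIN_CPS_HIGH, SUPERCOMPUTER_CPS):.1f} years")
print(f"$1000 machines: ~{years_to_reach(BRAIN_CPS_HIGH, PC_1000_CPS):.1f} years")
```

With these toy inputs the supercomputer figure lands near the "under 6 years" claim, while the $1000-machine figure comes out longer than 15 years; whether the claim holds depends entirely on the assumed starting capacity and doubling rate.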

 

Yet, I want to make several points clear.

 

1) The majority of computers don't need to be running at greater-than-human speeds to initiate "Seed A.I." All it takes is a single computer with the correct programming.

 

2) Nowhere near the whole human brain needs to be mapped in order to tap into the intelligence required to generate new technology. Primarily, we need to simulate human pattern recognition and creativity. Phenomena of human intelligence we don't need to generate:

 

A) Emotions

B) Personality

C) Sensory perceptions (although we would still need to process audio and visual information)

D) Regulation of internal bodily functions, including heartbeat, breathing, and so on.

E) Physical and/or intellectual skills.

 

All we need is a program that can functionally simulate the pattern recognition needed to advance technology (or its own code). Since computers are vastly more efficient at storing and instantly recalling vast amounts of information, they will be able to create new technology constantly.

 

As you can probably surmise on your own, just one computer with such a program would initiate a run-away effect that would not end until the physical limits of computation and the complexity of matter had been reached. That would put it dozens of orders of magnitude beyond biological intelligence. Remember, less than 3 orders of magnitude separate mice and humans. Can you imagine what a being performing 10^55 computations per second would be capable of achieving?
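The run-away loop described here can be sketched as a toy model: each generation, the system redesigns itself and gets faster, until it hits a physical ceiling. The per-cycle multiplier and the ceiling are pure assumptions chosen for illustration:

```python
# Toy model of recursive self-improvement.  The 10x-per-cycle gain and the
# 10^55 cps ceiling are illustrative assumptions, not derived figures.
speed = 10**16          # start at roughly human-brain scale (cps)
ceiling = 10**55        # assumed physical limit of computation
generations = 0
while speed < ceiling:
    speed *= 10         # each self-redesign multiplies speed tenfold (assumption)
    generations += 1
print(f"Ceiling reached after {generations} self-improvement cycles")  # → 39
```

The point isn't the specific numbers; it's that under any fixed multiplicative gain per cycle, the gap from brain-scale to the physical ceiling closes in a few dozen cycles.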


One criticism I would raise would be that the faster these technologies evolve, the less meaningful our predictions about them become. We really have no idea what computers will or will not be capable of at given computational capacities, and we don't yet even understand human creativity well enough to make more than the most general of analogies with AI.

 

Further, there is actually no reason something like exponential technological growth will continue in all the necessary areas. Moore's Law has applied for a surprisingly long time, and that's amazing, but who's to say a) that it won't drastically slow quite soon, or b) that computational speed is the only factor that needs to advance? The whole notion of accelerating change --> singularity is based on essentially static analysis, which is a notoriously unreliable predictor. In other words, exponential growth for one period of time does not necessitate, or even imply, that it will continue indefinitely. The humorous illustration often cited is that static analysis predicts disposable razors will have an infinite number of blades by the year 2015. Similar predictions include the entire Earth being covered in Walmarts by 2035, etc.
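The razor-blade joke can literally be run as code: fit a growth trend to a handful of data points and project it forward. The blade counts and dates below are rough recollections of real products, and the exponential fit is my own arbitrary choice of model; the absurd projection is the whole point:

```python
import math

# Naive trend extrapolation: fit log(blades) = a*year + b by least squares,
# then project.  Data points are approximate product dates, for illustration.
data = [(1903, 1), (1971, 2), (1998, 3), (2006, 5)]  # blades per razor

n = len(data)
xs = [year for year, _ in data]
ys = [math.log(blades) for _, blades in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b = ybar - a * xbar

def predict(year):
    """Blades per razor the fitted trend 'predicts' for a given year."""
    return math.exp(a * year + b)

print(f"Projected blades in 2100: {predict(2100):.0f}")
print(f"Projected blades in 2500: {predict(2500):.0f}")
```

The fit is perfectly valid over the data it was trained on and complete nonsense far outside it, which is exactly the criticism being made of straight-line singularity timelines.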

 

Now don't get me wrong. I'm not saying any of this is impossible. I believe that strong AI will make its appearance, and that its appearance will have effects that can't possibly be predicted yet. I believe, also, that we'll experience forms of exponential growth that will lead to truly bizarre changes in the way we live, and I look forward to it with deep optimism. They just almost certainly won't be the changes we expect. Vintage science fiction is full of predictions of the wrong kinds of accelerating progress: we don't have Martian cities or flying cars or home nuclear reactors or global landfills, but we do have the internet. True AI could be right around the corner, or it could be far, far away. But something is right around the corner...


I agree mostly.

 

With Moore's Law, though, the exponential curve has actually steepened slightly rather than slowing down. There's nothing to indicate that the shrinking of transistors will stop before they've hit the molecular scale. That limit, by itself, is easily enough to create strong A.I., but it certainly isn't the limit of computation: there's also 3-D computing, quantum computing, and other technologies scientists have yet even to ponder.

 

Now, about being unable to predict the right kinds of progress: I do certainly agree with this, but on the other hand, I think "greater-than-human intelligence" is a broad enough term to cover whatever technologies might be generated in the future. BCIs, genetics, strong A.I., global networks that "awake"... who knows? But I think we can be sure that, with the information and computing advances of the last 20 years projected forward, we'll be seeing some amazing technologies.

 

Consider the difference between true genius and merely "gifted" intelligence. Only a few dozen I.Q. points separate a bagboy from someone capable of producing General Relativity or Superstring Theory. Now, if such a relatively minute degree of higher intelligence can cause such a massive advance in all of science, can you imagine what a few orders of magnitude would cause?


Even that, though, relies on the basic assumption that achieving AI analogous to human intelligence is just a matter of speeding up conventional computers, which I find highly questionable. The fundamental structure of the way we currently build computers might well be a dead end as far as true AI goes, and that's the only thing Moore's Law applies to. Certainly the human brain, which is really the only example of true sentience we have, operates on completely different principles from man-made computers. We might have to build organic-brain-like computers, and we don't even really know how the brain works! I suspect the quickest route there would be simulating a much more rudimentary biological brain, then "evolving" it at the fastest rate we can manage, according to criteria that we hope will make it think like us. Really, though, that would be even more unpredictable...


 

Well, I believe that with the growing number of programmers, game designers, web designers, CE/CS majors, and generally technologically-oriented populations in the world, whatever the increase in potential computing power, the programs that use every bit of that power are only months (at most, years) behind.

 

So I believe that when the power to emulate the human brain is at the fingertips of almost every programmer in the world, it won't be long at all before some individual, game studio, business, or research group takes advantage and actually does it. And of course, as I've said, not even close to the entire human brain needs to be emulated to initiate a run-away effect of technological progression.


In its literal form, Moore's Law will be dead soon, since it applies to transistor counts, and the useful life of the transistor as the heart of the microprocessor is nearing its end.

 

The world's first commercially viable quantum computer was just launched, for example:

 

http://discovermagazine.com/2007/may/quantum-leap

 

Regarding the OP: yes, it's wrong for Kurzweil to put any kind of date on the Singularity. It could happen tomorrow, or it could be 100 years away.


Sisyphus, I agree completely. Achieving intelligence in computers has very little to do with processor speed.

 

A human can look at a picture and in less than half a second respond with "yes that is a cat" or "no, that is not a cat".

 

In that time only 100 sequential neurons could have fired in the brain.

 

There is no computer or software that can determine in 100 steps whether a cat is or isn't in a picture (object recognition). Thus, AI isn't dependent upon speed.
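The "100 sequential neurons" figure falls out of simple arithmetic: recognition takes about half a second, and each neuron needs on the order of 5 ms to fire and pass its signal along. Both timing values below are rough order-of-magnitude estimates, not measurements:

```python
# Rough arithmetic behind the "100 sequential steps" figure.
# Both timings are order-of-magnitude estimates.
recognition_time_s = 0.5   # time to answer "is that a cat?"
neuron_step_s = 0.005      # ~5 ms for one fire-and-propagate step

max_sequential_steps = recognition_time_s / neuron_step_s
print(f"At most ~{max_sequential_steps:.0f} sequential neural steps")  # → ~100
```

However many neurons fire in parallel, no causal chain within that half-second can be longer than about 100 links, which is what makes the comparison with sequential program steps interesting.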

 

Moore's law is close to having "run its course", as transistor sizes are approaching physical limitations. However, that doesn't equate to the end of progress in computing.


Huh?

 

Of course you can look at sequential firing of neurons as "steps".....

 

A "step" isn't CS terminology anyway, it is general terminology describing sequencing of events.

 

The comparison is between a sequence of events within the brain which can activate vast numbers of neurons, however only 100 SEQUENTIALLY due to the time constraint. Like a datapath.....

 

And the process can't currently be handled by computer architecture or software in 100 SEQUENTIAL operations. Well, for that matter, object recognition can't be done, period.

 

Therefore, proponents of "computers aren't fast enough for AI yet" and "the brain works in a massive parallel architecture which is why it is so fast" are wrong.


Don't get me wrong, I get where you're coming from. I'm guessing you're pulling this straight from On Intelligence, as Jeff Hawkins covered this exact subject (the shortness of the object-identification sequence).

 

The comparison is between a sequence of events within the brain which can activate vast numbers of neurons, however only 100 SEQUENTIALLY due to the time constraint. Like a datapath.....

 

It would be comparable to a chain of 100 Actors classifying an object (each sending one message to the next in sequence). In a proper Actor implementation of the NCC, my belief is it would take fewer (imagine an Actor-per-NCC approach).

 

And the process can't currently be handled with computer architecture or software in 100 SEQUENTIAL operations...Therefore, proponents of "computers aren't fast enough for AI yet" and "the brain works in a massive parallel architecture which is why it is so fast" are wrong.

 

You don't see the ENORMOUS non-sequitur in this line of thinking?

 

Your argument is "A can't do B at present, therefore it never will."

 

This is pretty much the same line of reasoning that people used to argue that heavier-than-air machines could not fly.

 

We had empirical proof that heavier-than-air objects could fly under their own power: birds can fly. Nature figured it out.

 

Despite this, we had people like Lord Kelvin opining in 1895: "Heavier-than-air flying machines are impossible."

 

Yes, for some reason, it was entirely conceivable that nature could do it, but man could not. How dare you compare a bird to a man-made contraption! Nature is somehow special, and technology distinct, with separate limits. Surely they don't obey the same physics!

 

This is the same argument being leveled against brains, especially by individuals like Searle and Penrose. While I'm sure Kelvin would have admitted the same physics apply to birds and flying machines, just as Searle and Penrose would admit the same physics apply to brains and thinking machines, they all see the technological replication of a natural capability as impossible.

 

If you don't think consciousness is computable on a universal computer, what do you think it is? Would you classify it as hypercomputational, or non-computable? You are essentially saying it has been proven that consciousness is not computable on a universal computer, so will you take a stab at what you actually think it is?
