Do Raymond Kurzweil's ideas have any basis in fact?


A Tripolation


He basically suggests that to survive the coming technological singularity, we'll have to merge with AI machines in a synthetic evolution, or some such nonsense.

 

He also suggests that:

...and in fact evolutionary processes move in an exponential fashion in these very properties, never becoming infinite, never becoming quite god-like but moving to become more god-like. And so, in that regard, you could say that evolution is a spiritual process

 

I don't think this is how evolution works. It isn't moving along a path toward some god-like entity; it only produces change better suited to the current environment. Is he right?

 

And also, as a random aside, would an AI that had achieved singularity-level intellect still be able to understand anything the way humans do? Or would it transcend us?


I want to preface this by saying that while Kurzweil should get credit for keeping AI issues in the popular press, his popular sci-fi view does misrepresent what legitimate AI thinkers are doing.

 

That being said, the problem of "surviving" a technological singularity is [hypothetically] real. If and when self-improving tech is created, computers (or whatever you want to call them) will become better at building computers than humans are. That's why Moore's law would break down and evolutionary forces wouldn't really apply - mostly because the time scale would be extremely fast.
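To make the timescale point concrete, here's a toy numerical sketch (Python, with every constant invented purely for illustration - no claim that these are real growth rates): a fixed Moore's-law-style doubling versus a process whose rate of improvement scales with its own current capability. The second leaves the first behind almost immediately.

    # Toy sketch only -- all numbers are made up for illustration.
    # "moore" doubles at a fixed rate each step; "recursive" improves at a rate
    # that depends on its own current capability, so the gap explodes quickly.
    moore, recursive = 1.0, 1.0
    for step in range(1, 11):
        moore *= 2                         # fixed exponential growth (Moore's-law style)
        recursive += 0.5 * recursive ** 2  # self-improvement: rate scales with capability
        print(f"step {step:2d}:  fixed doubling = {moore:6.0f}   self-improving = {recursive:.3g}")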

 

The biggest problem is that it's very difficult [impossible] to predict what a self-improving artificial intelligence's motivation and utility function will look like. If there's no term for humans in an AI's utility function, then to it we are just atoms that an unfriendly AI can use for something else. And since these AIs would be smarter than us and would self-improve exponentially faster, there wouldn't be much we could do to prevent this global existential risk.
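To illustrate what "no term for humans" means, here's a deliberately silly toy in Python - the function names and numbers are mine, not anything from the AI literature. An optimizer just picks whatever scores highest under its utility function, so anything the function doesn't mention is simply raw material.

    # Purely illustrative toy -- nothing like a real AI; names and numbers invented.
    def paperclip_utility(state):
        return state["paperclips"]                                 # no term for humans at all

    def friendly_utility(state):
        return state["paperclips"] + 1_000_000 * state["humans"]   # human welfare weighted heavily

    def best_action(utility, state, actions):
        # the "agent": pick whichever action scores highest under the given utility
        return max(actions, key=lambda act: utility(act(state)))

    start = {"paperclips": 0, "humans": 100}
    actions = [
        lambda s: {"paperclips": s["paperclips"] + 10, "humans": s["humans"]},  # mine some ore
        lambda s: {"paperclips": s["paperclips"] + 50, "humans": 0},            # use *everything* as feedstock
    ]

    print(best_action(paperclip_utility, start, actions)(start))  # chooses the second action
    print(best_action(friendly_utility, start, actions)(start))   # chooses the first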

 

I think what Kurzweil is implying is that uploading and becoming self-improving AI ourselves will solve this problem, though I'm not convinced. Our best bet (seems to be) making sure we only create AIs that are friendly in the first place.

 

Some more intelligent readings along these lines:

general definitions: http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence

 

http://singinst.org/upload/artificial-intelligence-risk.pdf

http://sl4.org/wiki/KnowabilityOfFAI

 

why people don't take global existential risks seriously: http://singinst.org/upload/cognitive-biases.pdf


I take the risks seriously; I just don't see us ever being able to have singularity-like intelligence. Our brains wouldn't be able to handle that.

 

Our brains would no longer be organic after upload, so the distinction between AI and human would be meaningless. That's Kurzweil's view anyway.

 

Whole brain emulation: http://www.fhi.ox.ac.uk/Reports/2008-3.pdf

 

I'm not sure what you mean by "evolution" in this context.
