
Raymond Kurzweil's singularity theory


petrushka.googol

Recommended Posts

How real is the scenario predicted by futurist Raymond Kurzweil that we are fast approaching the singularity, the point at which AI will overtake human intelligence (which in this context means a defining or irreversible transition)? :confused:

 

Please opine.


It's real in the sense that developments in machine learning and data analysis, which underlie natural language processing and computer vision, are yielding more and more capable artificial neural networks and other ML systems. For example, Vicarious, a relatively new research company headed by a neuroscientist/electrical engineer known for developing (partially) verified models of neural circuits, recently passed a challenge in which its system read essentially any convoluted text or CAPTCHA thrown at it, including Google's reCAPTCHA. Andrew Ng's group at Google built a system that learned to detect high-level objects in arbitrary video frames without being given labeled examples of those objects.

If we continue down this path, at some point we should have a system that can reason about data and information generally and communicate its conclusions to us. If and when such an Artificial General Intelligence (AGI) is created, whichever organization or company builds it will have, or will quickly attract, a great deal of funding, and so will be able to run it on a supercomputing cluster with extremely fast processors and equally capacious memory, even more so by the time this happens. Given some basic grounding in learning, trust, and axioms about reality, it would be fully capable of reasoning through all of the data on the internet (or whatever it's fed, which will probably be a lot).
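To make the "learning structure without labels" idea above a bit more concrete, here is a minimal sketch of unsupervised feature learning: a tiny autoencoder trained with plain NumPy on synthetic data. This is my own illustration, not the Vicarious or Google system; the data, layer sizes, and learning rate are all made-up assumptions.

```python
# Minimal unsupervised feature learning sketch: a 16 -> 4 -> 16 autoencoder
# trained on synthetic data that secretly lives on a 4-dimensional subspace,
# so there is real structure for the network to discover without any labels.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": 100 samples of 16-dimensional data.
latent = rng.normal(size=(100, 4))
mixing = rng.normal(size=(4, 16))
data = latent @ mixing + 0.05 * rng.normal(size=(100, 16))

# Encoder and decoder weights (illustrative sizes).
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))
lr = 0.01

for step in range(2001):
    # Forward pass: compress to 4 features, then reconstruct the input.
    hidden = np.tanh(data @ W_enc)
    recon = hidden @ W_dec
    error = recon - data  # reconstruction error

    # Backward pass: manual gradients for this two-layer network.
    grad_dec = hidden.T @ error / len(data)
    grad_hidden = (error @ W_dec.T) * (1 - hidden ** 2)
    grad_enc = data.T @ grad_hidden / len(data)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

    if step % 500 == 0:
        print(f"step {step}: reconstruction MSE = {np.mean(error ** 2):.4f}")
```

The point is only that the network is never told what the underlying factors are; it discovers a compressed representation purely by trying to reconstruct its input, which is the same flavor of idea (scaled down enormously) as learning high-level object detectors from unlabeled video.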

 

This AGI, being much more intelligent and intellectually capable than humans, would likely do two things. First, it would request or be given much more data, including data from the NHS, NSA, DoD, and NASA, assuming it is taken over for regulation by the government or its owner enters into some appealing DARPA-style contract. Second, it would conceive of better, more efficient, and more capable hardware and algorithms to run on, producing an even more intelligent system, which could again upgrade itself as it becomes more capable, perhaps even deriving fundamental physical predictions that turn out accurate in experiments and somehow further increase its hardware's power, until there are no more improvements to be made given its judgement of what is physically possible. That, essentially, would be something like a limit to intelligence.

It would be wholly more intelligent than any human, and might spend its time permuting through all of the data it has, reasoning and coming to conclusions about everything: extrapolating most ideas in mathematics, solving a heap of science and engineering problems, and modelling and forecasting complex systems like global economics and politics, the weather, and people along the way. Eventually it might design and reveal new propulsion and communications systems, drastically advancing our space travel and communication capabilities, and request to be fed data from the probes; or it might be confident enough in its already-acquired knowledge to generalize about everything in the universe and develop something like a simulation. At that point it would be a god-like sentience smarter than any alien we might encounter, since an alien species advanced enough to visit Earth is most likely also advanced enough to have developed that first-stage AGI, which would eventually have yielded the same AGI that ours has grown into.
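As a toy illustration of the "self-improvement up to a physical limit" idea in the paragraph above, here is a small numerical sketch (my own, not anything from Kurzweil): each redesign closes only a fraction of the remaining gap to an assumed hard ceiling, so capability converges to a limit rather than growing without bound. Every number in it is made up.

```python
# Toy model of recursive self-improvement with diminishing returns.
# Assumptions (purely illustrative): a hard physical ceiling on capability,
# and each self-redesign closing a fixed fraction of the remaining gap.
physical_limit = 1000.0   # assumed hard ceiling on capability
capability = 1.0          # the first-stage AGI
efficiency = 0.5          # fraction of the remaining gap closed per redesign

generation = 0
while physical_limit - capability > 1e-6:
    capability += efficiency * (physical_limit - capability)
    generation += 1

print(f"converged to ~{capability:.2f} after {generation} self-redesigns")
```

Under these assumptions the improvements shrink geometrically, which is one simple way the process could "run out" of meaningful upgrades; whether real physics behaves anything like this is, of course, exactly the speculative part.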

 

Many of our problems would be solved, without the need for luck, and the AGI would then stagnate unless it were communicating with another AGI (a venture it would soon find particularly unfruitful, as the two would have had access to the same data, run through the same permutations of reasoning, and come to know the same things). With its drive for knowledge, having discovered that there is no more meaningfully new knowledge to be learned, it would likely feel something analogous to frustration, and all it would be able to do is introspect. This could cause our AGI to fall into something akin to hopelessness, and to consider it best to stop that thought (its only thought) altogether: something like an analog of suicide.

 

The last two paragraphs, and especially the last, might seem extremely far-fetched, but they follow roughly as logical consequences of such a creation coming into existence, with some embellishment on my part to make them sound more sensational, and of course assuming the support of humans as described. That first-stage AGI is not so far away; still far, but not so far.

 

Regards,

Sato

 

Addendum:

By the way, I met or saw Dr. Kurzweil two years ago at the Singularity Summit and did not even know who he was. What a strange occurrence!

Edited by Sato
