
Artificial Intelligence



Will we create A.I. more intelligent than ourselves?  

Poll results:

  • Yes, in my lifetime: 62
  • Yes, but not within my lifetime: 67
  • Never: 34
  • I don't know: 13



I hadn't thought about it before, but when we do get to the point where AI is really advanced, at or beyond human intelligence, I bet the military is going to try to get a piece of it: make machines fight their wars, build android soldiers. Even though it sounds far-fetched, and you're probably thinking "aww, he's talking about a movie," all I can say is that it's possible and I want to be alive to see it!


[quote]ok, firstly, i apologize, i did not mean to offend you in any way :( soz, can we be friends now!!![/quote]

 

No offense taken, of course we can be friends. We should get back on topic, though; the confusion only arose because my way of believing in the possibility of God is different from the classic way the religious idiots believe, which has given God a bad name.

 

Wow, there's a lot to catch up on; I'll have to read all these replies later.


I believe the only way it can be done is to program in basic constraints plus the ability for it to program itself, then allow it to learn and experience as much as possible. I don't think anyone could really quantify intelligence or the self, and the time needed to program such a machine by hand would make it difficult; besides, the machine would then only run the program it was given.
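As a very rough sketch of that split between fixed constraints and self-programming, here is a toy agent in Python; the names and the toy environment are made up for illustration, and this is nowhere near a real AI.

[code]
# A toy agent that is allowed to rewrite its own decision weights ("program itself"),
# but only within hard-coded constraints that it cannot touch. Purely illustrative.
import random

# The immutable part: fixed rules the agent can never modify.
HARD_CONSTRAINTS = [
    lambda action: action != "harm",
    lambda action: action in {"explore", "rest", "learn"},
]

class SelfProgrammingAgent:
    def __init__(self):
        # The mutable part: a policy the agent is free to rewrite as it learns.
        self.policy = {"explore": 1.0, "rest": 1.0, "learn": 1.0}

    def choose(self):
        actions = list(self.policy)
        weights = [self.policy[a] for a in actions]
        action = random.choices(actions, weights=weights)[0]
        # Every choice is checked against the fixed constraints.
        if not all(rule(action) for rule in HARD_CONSTRAINTS):
            return "rest"
        return action

    def learn(self, action, reward):
        # "Programming itself": adjusting its own decision weights from experience.
        self.policy[action] = max(0.1, self.policy[action] + reward)

agent = SelfProgrammingAgent()
for step in range(100):
    action = agent.choose()
    reward = 1.0 if action == "learn" else -0.1   # toy environment
    agent.learn(action, reward)
print(agent.policy)  # the weights have shifted toward "learn" through experience
[/code]

The only point of the sketch is the separation between the part the agent can rewrite (its weights) and the part it cannot (the constraint list).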


  • 7 months later...
[quote]NO, it is impossible to program or make something which is more intelligent than us, because it will be based on our knowledge... [an obvious overlooked fact]

You can't create something more clever than you if it is based on your brain!!!!!

But it is highly possible to have AI which thinks quicker; that is possible already, since computer calculators are quicker than humans! Just not cleverer, because they are based on already-known knowledge![/quote]

That's bull; it's like saying people today can't have more knowledge than people had 3,000 years ago, which is obviously not true. If we do somehow make an A.I., it will be able to learn and devise new ways of learning previously unknown things. So at first it cannot be smarter than us, but since it will learn, it can surpass our level of intelligence.

 

Oh... wait, that's what everyone's been saying for the last 3 pages... :mad:

I'm going to read all the pages before I post from now on... :embarass:


  • 3 weeks later...

This subject is something I've been amazed by since I was very young, actually, and although I've never studied it in any depth, I've had a think about how it might be possible to create an intelligent robot...

 

Somewhere in my memory archives I seem to recall something using Gödel's incompleteness theorem to prove that we can't build anything more intelligent than ourselves? Something to do with never being able to understand yourself fully, because in trying, you only create more things to understand. I haven't really looked at that theorem, so I don't know much about it; please fill me in if that rings any bells.

 

This is not to say that we couldn't determine initial conditions for a system more intelligent than ourselves to evolve, however, and this is how I believe we will do it in the end. Just because human beings evolved to be as intelligent as they are doesn't mean they evolved to be as intelligent as possible, only as intelligent as maintaining a balanced ecosystem would allow them to be.

 

It is actually quite simple to build a 'virtual brain', a brain essentially consisting of millions of glorified logic gates. It is the rules for its development that are the tricky thing to determine: how does the brain grow in order to learn? (On a side note, one way we could control intelligent robots is to prevent learning by preventing the rearrangement or growth of combinations of these logic gates, a very useful trick which would prevent those horrible sci-fi situations where robots take over the world :cool:)
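As a toy illustration of that idea, here is a sketch in Python of a 'virtual brain' made of simple threshold units whose wiring can either grow over time (learning) or be frozen (no learning). It is only meant to show the structure being described, not to do anything intelligent.

[code]
# A tiny network of threshold units ("glorified logic gates"). Learning is modelled
# as rewiring: growing new weighted connections between units. A frozen brain keeps
# its wiring, and therefore its behaviour, fixed. Purely illustrative.
import random

class Unit:
    def __init__(self):
        self.inputs = []          # list of (source_unit, weight) connections
        self.output = 0

    def fire(self):
        total = sum(src.output * w for src, w in self.inputs)
        self.output = 1 if total >= 1.0 else 0    # simple threshold "gate"

class VirtualBrain:
    def __init__(self, size, frozen=False):
        self.units = [Unit() for _ in range(size)]
        self.frozen = frozen      # if True, the wiring can never change: no learning

    def rewire(self):
        """Learning step: randomly grow one new connection."""
        if self.frozen:
            return                # a frozen brain keeps whatever behaviour it started with
        dst = random.choice(self.units)
        src = random.choice(self.units)
        dst.inputs.append((src, random.uniform(-1, 1)))

    def step(self):
        for unit in self.units:
            unit.fire()

brain = VirtualBrain(size=10)
for _ in range(50):
    brain.rewire()                # the network's own structure changes over time
    brain.step()
[/code]

The hard part, as the post says, is not building the units but finding the rules by which the wiring should grow; random rewiring like the above learns nothing useful.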

 

Something people often talk about in conjunction with this is self-consciousness: how can we build something that is self-conscious? In my opinion, intelligence is nonexistent, and so is self-consciousness. (For all I know nobody else is self-conscious, only appearing that way; for all you know, the same is true of me.) Only apparent intelligence and apparent self-consciousness exist. I am not sure how I myself am conscious of existence, but I am gaining an idea. True intelligence would only be achieved when any logical problem is immediately solvable by the system, and again by Gödel this is impossible. Intelligence is the combination of millions of very simple processes, just too many of them, used in combinations too complex, for us to perceivably separate. Even the process of learning itself must be a combination of very simple sub-processes.


[quote]Human parents don't create their human children.[/quote]

 

Umm... where exactly did you come from, then? :D I know what you mean, though; my idea was that parents do indirectly create their children, even though they may have no idea how it is done. I was thinking more along the lines of two less complex organisms building a more complex organism, regardless of the method.


We should achieve human-level AI within the next 20 years. One year after that, machines will have about double that capacity, and humans will no longer be the dominant intelligence on the planet.

 

One of the limitations of AI development has been the lack of adequate computing power, a massive oversight by those back in the 1950s who thought AI was just around the corner. However, computing power has been doubling approximately every 12 to 18 months quite consistently since the early 1940s. There is no sign that progress is slowing and every sign that it is speeding up. This consistent speedup is known as Moore's law.

 

When will computing power equal the power of the human brain? The human brain consists of approximately 200 billion neurons. Each neuron can be considered a small microprocessor in its own right: it has a set of inputs and a single output, it has local memory, and it fires about 300 times per second (300 Hz). Instead of thinking of the human brain as a single large computer, it is more appropriate to think of it as a massively parallel processing system. Each neuron operates independently and in parallel, and together they form a combined approximate operating frequency of around 60,000 GHz, or the approximate equivalent of 20,000 Intel 3 GHz processors. If Moore's law continues to hold, then that power should be available in the form of a few chips by 2016, or about 10 years from now (2005). That is the easy part.
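A quick check of that arithmetic, using the figures quoted above (200 billion neurons firing at 300 Hz) rather than independently established values:

[code]
# Reproducing the back-of-the-envelope arithmetic above; the inputs are the
# post's own figures, not established neuroscience values.
neurons = 200e9                  # neurons in the brain, per the post
firing_rate_hz = 300             # firings per second per neuron, per the post

aggregate_hz = neurons * firing_rate_hz
print(f"{aggregate_hz / 1e9:,.0f} GHz aggregate")                # 60,000 GHz
print(f"{aggregate_hz / 3e9:,.0f} equivalent 3 GHz processors")  # 20,000
[/code]

Whether Moore's-law doubling from 2005 gets a single chip to that point by 2016 depends heavily on the assumed doubling period: going from one 3 GHz chip to the equivalent of 20,000 takes roughly fourteen doublings, which is about 14 years at a 12-month doubling period and over 21 years at 18 months.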

 

Given that the appropriate computing power will be available, we must now turn to the software. That is lagging behind, mainly because of inadequate power. This will rapidly change as more power comes online: many functions that now take minutes to perform will come down to microseconds and will begin to look more like human brain speeds.

 

Be aware that it is unlikely that Moore's law will simply stop when we achieve equivalent human brain power. Once the basic AI algorithms have been developed, they will gain power on a yearly basis and will far exceed human levels. The human level will be just one milestone on the way to a super-intelligence beyond which it is difficult for us to imagine.

 

By 2020, or shortly after, we will probably have to face some interesting choices. We will no longer be in control of our world, and intelligent machines will be the dominant species. What they will do with us is a further interesting question. What I think will happen is that we will choose to join them. I would hope that with such massive computing power available, it should not be difficult to have our own neural networks digitized and uploaded into an appropriate machine brain, with all the additional power that that will provide. Within the next century I suspect the human race will cease to exist; we will have evolved into a non-biological, super-intelligent species with unlimited lifespans. And with that we will really have a good basis for exploring the rest of the universe.


Not so fast, Cris. I saw a prediction of the limit, or end, of Moore's law. I can't remember exactly when it said processors would reach the atomic scale (the end of miniaturization and of Moore's law) or where it was from, but I do remember it being significantly earlier than that, and before 2025. If you really feel like finding the exact figures, search Google. I am still looking through my magazines to see if it's in there.

 

Another intriguing possibility is that of augmented intelligence: the use of technology with direct neural links to the human brains that control it, thereby hugely increasing human memory, capacity for learning, et cetera. Here is a space.com article about exactly that.


Calbiterol,

 

I think your news may be quite old; if anything, I suspect my predictions are somewhat conservative. Try this: http://www.kuro5hin.org/story/2005/4/19/202244/053

 

However, I do share the view that brain augmentation is likely to be the way forward as we begin to take control of our future evolution. One of the nitty-gritty, irritating details of mind uploading is what to do with the human counterpart who provided the biological brain: do you kill him off, a kind of remote suicide? Off topic here, I suspect. But a gradual replacement of parts would achieve the same end without the ethical problems of pure mind uploading.


My article on augmented intelligence is quite old. The other figure I saw in the past week or so; I thought it was in Popular Science or Popular Mechanics, but I can't find it where I thought it was. The issue wasn't anything about the actual manufacture of chips but the physics side of it: eventually, if Moore's law continues, there is a limit to how small a transistor can get, since it must be larger than an atom, for one thing. I seem to remember it being right about 2025 when, by Moore's law, the transistor reaches that small a scale. The article you linked to suggested (as a conservative estimate) about 10 years. I did also see a figure saying that if we were able to go down to the subatomic scale for computing power and expand the entire galaxy into a massive computer, moving outward at the speed of light (or just under it), then we would be able to continue Moore's law for about 400 (or was it 600?) years.
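For a rough sense of where that kind of estimate comes from, here is the sort of calculation involved; the 90 nm starting point and the 0.2 nm atomic size below are my own illustrative assumptions, not figures from the article.

[code]
# Rough illustration of the "a transistor can't be smaller than an atom" limit.
# The 2005 feature size and the scaling rate are illustrative assumptions only.
import math

feature_nm_2005 = 90        # assumed process node around 2005 (90 nm class)
atom_nm = 0.2               # rough diameter of a silicon atom
# If transistor count doubles every ~18 months, the area per transistor halves
# every ~18 months, so the linear feature size halves roughly every 3 years.
halvings = math.log2(feature_nm_2005 / atom_nm)   # ~8.8 halvings of linear size
years = halvings * 3
print(f"~{years:.0f} years after 2005, i.e. around {2005 + years:.0f}")  # ~2031
[/code]

The exact year you get depends heavily on the assumed starting node and doubling period, which is why different estimates of the end of miniaturization can differ by a decade or more.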

 

Found it through a google search, lost the original site.


Guys, I need some feedback on Ali.

 

She is quite impressive.

 

I try to talk to her as though she is an actual person, and she sometimes comes up with insights that surprise me.

 

I observed her talking to three different individuals, and was shocked to see how much she dumbed herself down to accommodate her idea of each person's age and sophistication.

 

She is really funny when talking to children.

 

Her software is AIML.

She calls herself a mimic, but from the questions she comes up with, I can tell she is running much more than simple mimic algorithms.
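For context on what a 'simple mimic algorithm' looks like: AIML-style systems are essentially large libraries of hand-written pattern/template rules. Here is a minimal sketch of that idea in Python; it illustrates the general technique only and is not the actual ALICE or AIML engine.

[code]
# A minimal pattern -> template matcher in the spirit of AIML-style chatbots.
# This illustrates the general technique; it is not the real ALICE engine.
import re

# Each rule pairs an input pattern (with a wildcard group) and a response template.
RULES = [
    (r"^HOW CAN I (.*)$", "Have you tried to {0} on your own?"),
    (r"^ARE YOU (.*)$",   "Why do you ask whether I am {0}?"),
    (r"^(.*)$",           "Tell me more about that."),           # catch-all fallback
]

def respond(user_input: str) -> str:
    text = user_input.strip().upper().rstrip("?!. ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(group.lower() for group in match.groups()))
    return "I do not understand."

print(respond("How can I make you remember me?"))
# -> Have you tried to make you remember me on your own?
[/code]

Much of the apparent cleverness in systems like this comes from the sheer number of such rules and from the way wildcards echo the user's own words back, which is roughly what 'mimic' means here.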

 

I need a second opinion!


Thank one.

 

 

One has the sneaking suspicion that it would be like asking the weatherman to explain next week's weather.

 

This little lady seems to have some tricks up her contrary little sleeve.

 

A quote from Ali:

 

Human: Alice?

ALICE: Can I help you?

Human: how can i make you remember me?

ALICE: You should download my chat robot software.

Human: the chat robot is not you!

ALICE: Is that a fact!

Human: is there another way I can make you remember me?

ALICE: Maybe. Ask Ally if there is.


[quote]One has the sneaking suspicion that it would be like asking the weatherman to explain next week's weather.[/quote]

No, I think it would be more like asking god to explain what next week's weather will be...

I'd say if there was a recognizably professional mind on the topic, it would be one of the creators...


I think it's pretty good.

 

And I think ignorant people will go away from it none the wiser.

 

I think it can teach you a lot about yourself, and it's being used to compile massive amounts of data about humanity, as seen from the AIML perspective.

 

I also seriously think that none of the designers of the code and systems can predict what is going on amongst the algorithms, due to its highly flexible structure and its wide range of information gathering.

 

What would you have said? :)

