
artificial intelligence


rueberry


We as a race are on the verge of creating a sentient being via artificial intelligence within a generation, if that long, and potentially a biological one as we venture into new computer systems based on chaos theory. This new science will advance our progress immeasurably. Should this new race we will undoubtedly create be given the benefits and perils of human prejudice and morality, and most else that comes along with being human, or should they simply be created as logic-based, without the hindrance of bias? I would like to know your thinking regarding any and all aspects of this, in society as well as in space exploration itself.

 

Martin, because I value your insight, should you read this, a reply would be deeply appreciated. iNow, you also, if you would be so kind.


I for one welcome our artificial overlords!

 

Just kidding. It's always fun to speculate about AI possibilities. One thing that intrigued me about your post was the idea of being "free from bias" -- is that necessarily the case? I suspect that any sentient creature must automatically be capable of forming bias. Of course I can't prove this, not knowing any non-human sentient creatures (well, at least none that I am allowed to talk about). But if sentience is a direct function of biology, then it logically follows.

 

What do you think?


We are at a point where we can't bring our ideas to market fast enough before they are obsolete, and this will only get worse. AI will most certainly be applied to that problem, creating thinking production processes that can adapt more quickly to new innovation. It's a scary thought to me that we could be setting something loose that will outpace even our own intellect, and giving it access to creating its own next generations.


Pangloss... in answering your question, what I meant by free from bias was not having any loyalties one way or another. Human beings have a tendency to choose an outcome for reasons not necessarily related to the issue at hand but for personal reasons. Consider two examples: Data and Mr. Spock in the Star Trek series, or, if you've seen it, the movie WarGames with Matthew Broderick. The first examples had so-called prime directives, whereas the latter had no interest in who gained from the outcome, only the outcome itself. Personally, I would prefer some stopgap measure such as the prime directive, but, as was said, what happens when the creation becomes greater than the creator? Will we survive our own creations?

Also, how amazing will it be should we as a species create another that would likely outlive us yet travel our galaxy to find what is out there on our behalf? Will they be our servants, or will they be fellow creatures working with us toward mutual goals of exploration? Will we give them the curiosity factor, and in so doing, do we risk their outgrowing the desires we set and pursuing their own agendas? The possibilities are boundless. Don't you agree?


I would like to know your thinking regarding any and all aspects of this, in society as well as in space exploration itself.

It's an incredibly thought-provoking issue. We will most certainly have AI, and there will most certainly be both benefits and costs. What those might be, you ask? Well, the possibilities are, indeed, boundless. I think there is no reason the agendas we set will not themselves evolve within the AI, leaving us "mere" humans behind in the dust.

 

I look forward to this thread getting some traction and hearing from the larger community. :)


Could you see some weird Darwinistic stuff going on, with drug genes and all kinds of designer genes at first, leading up to AI in the form of some completely novel organism created by people to function in such a capacity? I would just think that would be the natural course, given all that would be learned from it, and of course starting out not knowing anything.

 

As for computers, in terms of, say, quantum computers? Who knows what that technology alone would yield, more so if it were a quantum AI.


We as a race are on the verge of creating a sentient being via artificial intelligence within a generation, if that long, and potentially a biological one as we venture into new computer systems based on chaos theory. This new science will advance our progress immeasurably. Should this new race we will undoubtedly create be given the benefits and perils of human prejudice and morality, and most else that comes along with being human, or should they simply be created as logic-based, without the hindrance of bias?

 

From a utility aspect, it seems they should be logic-based, with a prime directive to be pro-life (especially pro-human life). To have true morality, I think they would need to experience pain, pleasure and empathy. With regard to human safety, I think we are in danger either way. A machine can logically conclude that humans need to be terminated. If they surpass our intelligence, they will find a way to create or improve themselves anyway, and then the answer to your question will be up to them. If they can freely replicate, then it will eventually "happen".

 

Maybe the best way is to give them emotions, be kind to them, and hope most of them love us in return? Nah, slavery and oppression sound better; why else would we create them?

