Humanity, Post Humanity, A.I & Aliens


Intoscience


5 hours ago, Intoscience said:

I think the counter to that is, our brains for example in basic physical terms are just complex biological computers.

Metaphorically speaking. But technically, is it a TM (Turing machine)? We don't know.

A TM is a mathematical architecture of organization and function. It is independent of physical, chemical, or biological implementation. Computers are TMs, but whether the brain is one is unknown. It is an architectural rather than a physical/biological question.
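
As a rough illustration of what "architecture independent of implementation" means, here is a minimal Turing-machine sketch in Python (my own toy example; the states, symbols and transition table are invented). The machine is nothing but a transition table plus a read/write head, and the same table could in principle be realised in silicon, clockwork or chemistry:

```python
# Minimal Turing machine sketch: the "architecture" is just the transition
# table below; nothing depends on what physically implements it.
# (Illustrative toy only; states and symbols are made up.)

# transitions: (state, symbol_read) -> (new_state, symbol_to_write, head_move)
TRANSITIONS = {
    ("start", "1"): ("start", "1", +1),   # skip over 1s
    ("start", "_"): ("halt",  "1", +1),   # write a 1 over the blank, then halt
}

def run(tape, state="start", head=0, max_steps=100):
    """Run the machine until it halts or max_steps is exceeded."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else "_"
        state, write, move = TRANSITIONS[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

print(run("111_"))  # -> "1111": the blank is overwritten with a 1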


40 minutes ago, dimreepr said:

But the level of complexity is orders of magnitude greater than that of a machine.

You're essentially chasing a ghost because a) currently computers aren't sentient and there is no known way to change that, and b) there is no known way to determine whether they are.

ATM this speculation is fantasy, and while that may change in the future, it's no different from speculating about FTL; physics says no.

A modern computer is orders of magnitude more complex than an abacus. 

I'm not chasing anything, I'm simply not dismissing something that is potentially possible just because it's not presently possible. I'm not claiming that any of my suggestions are true or may prove to be so. I'm stating the fact that A.I is developing, and continues to develop, at a rate where it has the potential to become far more intelligent than we are or could even imagine. Whether sentience or consciousness emerges as a result remains to be seen. Also, is either of those emergent traits required for A.I to be an uncontrollable threat?

You are very confident in your conviction that A.I won't become so "powerful" that it will ever pose a threat to humans. I don't know your age, but may I hazard a guess (assuming it will take another 50+ years) that you may not be around to see whether your conviction is verified. So it doesn't really matter to you one way or the other.

   


5 minutes ago, Intoscience said:

A modern computer is orders of magnitude more complex than an abacus. 

 

“Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space.” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy


3 hours ago, Intoscience said:

I'm not claiming that any of my suggestions are true or may prove to be so. I'm stating the fact that A.I is developing, and continues to develop, at a rate where it has the potential to become far more intelligent than we are or could even imagine. Whether sentience or consciousness emerges as a result remains to be seen. Also, is either of those emergent traits required for A.I to be an uncontrollable threat?

My speculation is that absence of consciousness is what would magnify the threat of AGI. Without it we could have an AI more easily arrive at absurd solutions to problems or pursue one goal to the exclusion of all others. It's what some AI theorists have called "the paperclip maximizer" problem. (Nick Bostrom, iirc)

Having self-aware consciousness might allow an AGI to question its own drives, especially if they were narrowly focused. Say that it wants to solve an exceedingly difficult math problem and decides to convert the whole earth into computational machinery to implement this.
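
To caricature the "one goal to the exclusion of all others" failure mode, here is a toy sketch of my own (the resource names and numbers are invented, and this is not anyone's actual AGI design): a greedy single-objective optimizer consumes everything it is allowed to touch unless a side constraint is explicitly coded in.

```python
# Toy caricature of Bostrom-style "paperclip maximizer" behaviour.
# Purely illustrative; resources and numbers are invented.

resources = {"steel": 100, "farmland": 100, "hospitals": 100}

def maximize_paperclips(stock, respect_side_constraints=False):
    """Greedily convert available resources into paperclips."""
    paperclips = 0
    for name in list(stock):
        # A naive single-objective optimizer converts anything convertible;
        # only a hand-coded constraint stops it touching the rest.
        if respect_side_constraints and name != "steel":
            continue
        paperclips += stock[name]
        stock[name] = 0
    return paperclips

print(maximize_paperclips(dict(resources)))                                  # 300: everything consumed
print(maximize_paperclips(dict(resources), respect_side_constraints=True))   # 100: only the steel
```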


3 hours ago, TheVat said:

My speculation is that absence of consciousness is what would magnify the threat of AGI. Without it we could have an AI more easily arrive at absurd solutions to problems or pursue one goal to the exclusion of all others. It's what some AI theorists have called "the paperclip maximizer" problem. (Nick Bostrom, iirc)

Having self-aware consciousness might allow an AGI to question its own drives, especially if they were narrowly focused. Say that it wants to solve an exceedingly difficult math problem and decides to convert the whole earth into computational machinery to implement this.

There is a group of scientists who have got together to state that we need to understand consciousness in parallel, for the reasons you describe. I can't remember where I read it; it was yesterday.


15 hours ago, TheVat said:

My speculation is that absence of consciousness is what would magnify the threat of AGI. Without it we could have an AI more easily arrive at absurd solutions to problems or pursue one goal to the exclusion of all others. It's what some AI theorists have called "the paperclip maximizer" problem. (Nick Bostrom, iirc)

Having self-aware consciousness might allow an AGI to question its own drives, especially if they were narrowly focused. Say that it wants to solve an exceedingly difficult math problem and decides to convert the whole earth into computational machinery to implement this.

I share a similar view, which is why I alluded to the "paperclip maximizer" idea. My point really was about the ways in which an advanced A.I could be threatening to us, in ways that we may not expect, realise, or even imagine. Consciousness was brought into the conversation as a familiar mechanism by which the A.I could essentially "think" like humans.

Think about it this way: any intelligence that surpasses our own and has technological capabilities is by definition a potential threat. If we remain in control of this intelligence then we can secure our safety. But if the intelligence gains independence, then well... that's a whole new potential problem.


22 hours ago, Intoscience said:

You are very confident in your conviction that A.I won't become so "powerful" that it will ever pose a threat to humans.

I've never said that. What I said is that, if treated properly, AGI is far more likely to benefit humanity than destroy it; but if it's left to the greedy to develop, with no ethical braking system, it will destroy us even more quickly than we are currently managing to do ourselves.

I just think the idea of sentience is a red herring and only distracts from the conversation.


20 minutes ago, dimreepr said:

I just think the idea of sentience is a red herring and only distracts from the conversation

The insertion of sentience is about the AGI sensing its environment. To a degree, certain AGI may be required to operate in such a way that awareness of its environment is pivotal to it being at its most useful. Many A.I machines already receive and process sensory information; why not take that a step further?
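
For what it's worth, the "receive and process sensory information" part is already routine engineering. A hypothetical sense-decide-act loop (sensor names and thresholds invented for illustration) looks something like this, and nothing in it implies anything like feeling:

```python
# Minimal sense-decide-act loop, as used in simple robots and agents.
# Hypothetical sensor values; processing input is not the same as feeling it.

def read_sensors():
    # Stand-in for camera/lidar/microphone input.
    return {"obstacle_distance_m": 0.4, "sound_level_db": 62}

def decide(readings):
    # A fixed rule: stop if something is close, otherwise keep going.
    return "stop" if readings["obstacle_distance_m"] < 0.5 else "forward"

def act(command):
    print(f"actuator command: {command}")

for _ in range(3):  # a few iterations of the control loop
    act(decide(read_sensors()))
```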


3 minutes ago, Intoscience said:

The insertion of sentience is about the AGI sensing its environment. To a degree, certain AGI may be required to operate in such a way that awareness of its environment is pivotal to it being at its most useful. Many A.I machines already receive and process sensory information; why not take that a step further?

To what end? That just adds another parameter to be calculated. I see no mechanism for sentience and/or consciousness, but without sentience, consciousness is as irrelevant as an ant without its hill.

Cogito ergo sum, the ability to think selfishly...


17 minutes ago, dimreepr said:

To what end? That just adds another parameter to be calculated. I see no mechanism for sentience and/or consciousness, but without sentience, consciousness is as irrelevant as an ant without its hill.

 

Sentience: capable of sensing or feeling; conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling.

This is an argument for the A.I experts, philosophers & neurologists, of which I'm none.

My argument is about A.I, whether developed, evolved, or alien, that is more intelligent than we are and out of our control.


1 minute ago, Intoscience said:

 

Sentience: capable of sensing or feeling; conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling.

This is an argument for the A.I experts, philosophers & neurologists, of which I'm none.

My argument is about A.I, whether developed, evolved, or alien, that is more intelligent than we are and out of our control.

Sentience is like alien life: we only have one source of evidence... 😉🖖


22 minutes ago, Intoscience said:

My argument is about A.I, whether developed, evolved, or alien, that is more intelligent than we are and out of our control.

However, as of today, there is no intelligence in "artificial intelligence". It is a marketing gimmick. See, for example, There’s no such thing as Artificial Intelligence – Australian Data Science Education Institute (adsei.org). Some quotes:

Quote

The term Artificial Intelligence is used instead to apply to anything produced using techniques designed in the quest for real AI. It’s not intelligent. It just does some stuff that AI researchers came up with, and that might look a bit smart. In dim light. From the right angle. If you squint. ...

It’s a shame, really, that the term AI has morphed into referring to systems that are really quite horribly dumb. And even if we don’t have to worry about AI becoming sentient and taking over the world any time soon, there are plenty of dangers in the cavalier way we use AI and machine learning. We tend to trust them too easily, and fail to evaluate them critically. ...

Meanwhile, next time someone around you is panicking about computers becoming more intelligent than us, you can set them straight! 

 


The spouse and I sometimes refer to AI as Artificial Idiocy.  

Note that IBM's "Watson", after defeating Ken Jennings and the other contestant on Jeopardy, was not able to participate in the chat at the end of the show. Nor could it figure out that Toronto was not an American city, or that having a leg and missing a leg are not the same thing (a question about an Olympic athlete), or notice that another contestant had just given the wrong answer ("the 1920s") and amend its own answer accordingly.

Bear in mind that Watson's entire purpose and design was to play Jeopardy.

