
Humanity, Post Humanity, A.I & Aliens


Intoscience


19 hours ago, Genady said:

Then we are not

Then we become potentially vulnerable, depending on how aggressive or concerned it/they is/are, and on whether we are considered of any value. And even if we are of value, what type of value? As a resource?

I appreciate this is all just speculation, but there is the potential that we could lose control.


1 hour ago, Intoscience said:

Then we become potentially vulnerable, depending on how aggressive or concerned it/they is/are, and on whether we are considered of any value. And even if we are of value, what type of value? As a resource?

I appreciate this is all just speculation, but there is the potential that we could lose control.

Yes, there is. There are many ways to lose control, I think. 


22 hours ago, Intoscience said:

We cannot predict with any certainty if or how A.I may evolve once it has the ability to self-replicate.

What exactly do you mean by this?

Computers are mostly built by computers now, but you seem to be equating self-replication with sentience. Why?

I think it's safe to assume that bacteria have this ability but are not sentient; and it can be argued that bacteria have the potential to become sentient, because we can understand a possible progression from one to the other.

But not in the case of an ever more complex lawnmower.

22 hours ago, Intoscience said:

My point being that making predictions about a system that may evolve beyond our understanding and/or imagination is futile. However, preparing for (or attempting to prevent) a threat to humans should still be seriously considered. We consider ourselves at the top of the food chain, so to speak, since we consider ourselves the most advanced intelligence on this planet.

We mistakenly think there is a top of the food chain, because it's a food circle (I think more accurately a food sphere); but there is no place in this chain/circle/sphere for a lawnmower, even a sentient one, unless we threaten the existence of grass; otherwise there's no reason for a sentient lawnmower to even recognise our existence.

As I've said before, AI is not intelligent; it's like an anthill, in that its emergent solutions appear to be intelligent; you may as well speculate about the threat of a sentient anthill because we've stepped on some ants.


Perhaps a robust firewall between any AGI and the building's circuit breakers would ease some anxiety.  (a metaphor, saying AI can only physically control what we allow)  

The greater danger from an AGI would actually be the danger of human confederates, i.e. those persuaded to enable it and assist some harmful plan.  Just as there are fascist turds who follow Trump or Orban, there could be fascist turds who follow Lore (for non-Trekkies, that's the evil cyber-twin of Data, an android Starfleet officer).

 


1 hour ago, TheVat said:

The greater danger from an AGI would actually be the danger of human confederates, i.e. those persuaded to enable it and assist some harmful plan.  Just as there are fascist turds who follow Trump or Orban, there could be fascist turds who follow Lore (for non-Trekkies, that's the evil cyber-twin of Data, an android Starfleet officer).

The computers are now programming us. 


7 hours ago, TheVat said:

Perhaps a robust firewall between any AGI and the building's circuit breakers would ease some anxiety.  (a metaphor, saying AI can only physically control what we allow)  

The greater danger from an AGI would actually be the danger of human confederates, i.e. those persuaded to enable it and assist some harmful plan.  Just as there are fascist turds who follow Trump or Orban, there could be fascist turds who follow Lore (for non-Trekkies, that's the evil cyber-twin of Data, an android Starfleet officer).

 

 

We'll probably have to partner with at least one of them, to protect us from others and similar threats.

Just on the TNG Enterprise we've seen the ship's computer go rogue a few times and a gray goo scenario play out.


3 hours ago, Genady said:

Trying to find out how the computer is programming me.

You don't feel your thoughts and emotions are impacted by the information hitting your various news, social media, entertainment, and related programming feeds? Can you not see how algorithms are shaping those feeds and adjusting based on past behaviors?

It may be off-topic here, but hopefully this gives you a better sense of my intended meaning. 
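To make the feedback loop concrete, here is a rough sketch of the mechanism - purely illustrative, with invented topics and weights, not any real platform's algorithm: engagement nudges a weight upward, and the ranking then serves you more of the same.

```python
# Illustrative feedback loop: engagement raises a topic's weight,
# and the feed is then re-ranked to favour that topic.
# Topics, headlines, and weights are invented for this sketch.

from collections import defaultdict

interest = defaultdict(float)  # learned weight per topic

def record_engagement(topic, strength=1.0):
    # Positive feedback: each click/like nudges that topic's weight up.
    interest[topic] += strength

def rank_feed(candidates):
    # candidates: list of (headline, topic) pairs, ranked by learned weight.
    return sorted(candidates, key=lambda item: interest[item[1]], reverse=True)

feed = [
    ("Celebrity gossip", "gossip"),
    ("Climate report", "science"),
    ("Outrage piece", "politics"),
]

record_engagement("politics")  # one angry click...
record_engagement("politics")  # ...then another
print(rank_feed(feed))         # the outrage piece now tops the feed
```

The loop closes on itself: what the algorithm shows shapes what you click, and what you click shapes what it shows next.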


13 minutes ago, iNow said:

You don't feel your thoughts and emotions are impacted by the information hitting your various news, social media, entertainment, and related programming feeds? Can you not see how algorithms are shaping those feeds and adjusting based on past behaviors?

It may be off-topic here, but hopefully this gives you a better sense of my intended meaning. 

I thought that this is what you meant. It is a common effect for sure, but it is also individual. In my case, and in the case of several people I know, it does not work - by choice.


Will AI have a beneficial effect, in that all advertising will likely now be done by it, putting out of business all who presently work in that field and whom I would otherwise willingly cast into the nether regions of Dante's Inferno?

(Maybe propagandists too)

:doh:


Will AI affect prostitution/the sex industry?

 

Clive James described the brain as the biggest sexual organ (maybe not verbatim)

 

Telephone sex? (if it was good enough for Bill Clinton.....    ;-)   )

 

Talking dildos?


1 hour ago, Genady said:

Will torturing AI be a crime?

It will likely be an extension of the user, and if we can form attachments to inanimate objects (is that the/a definition of materialism?) then we will get very defensive of programs that will likely be tailor-made to our own personal characters.


On 3/9/2023 at 11:46 AM, Intoscience said:

Fine, you are making an assumption based on your own ideas/beliefs/understanding. I'm doing the same to a degree, though on first appearance it may seem rather fanciful. One could easily imagine going back 100 years with an example of current technology; it would seem like magic, or beyond understanding, to that generation.

Indeed, I wonder what Babbage would make of a modern computer for example,  or what would Einstein make of modern cosmology?


On 4/21/2023 at 1:30 PM, dimreepr said:

What exactly do you mean by this?

Computers are mostly built by computers now, but you seem to be equating self-replication with sentience. Why?

An AGI that is left to its own devices, with the capability to self-replicate and even make changes without any pre-programming from its original creator.

On 4/21/2023 at 1:30 PM, dimreepr said:

We mistakenly think there is a top of the food chain, because it's a food circle (I think more accurately a food sphere); but there is no place in this chain/circle/sphere for a lawnmower, even a sentient one, unless we threaten the existence of grass; otherwise there's no reason for a sentient lawnmower to even recognise our existence.

As I've said before, AI is not intelligent; it's like an anthill, in that its emergent solutions appear to be intelligent; you may as well speculate about the threat of a sentient anthill because we've stepped on some ants.

And you mistakenly believe that AGI will never have the ability to think for itself, i.e. become self-aware in some form such that it no longer requires human input to survive and possibly evolve.

You are also constantly imposing human thought, language and understanding onto machines, when we may never have any idea of, or even begin to comprehend, how they might think (process).

Your analogy of an anthill is erroneous; the anthill is more comparable to a house or hotel. No one would expect a house to become sentient.

AGI is heading towards intelligence (some would argue it's already here) that will exceed our own. If - a big if - it becomes sentient as a result (since we have no idea of what mechanism causes consciousness) then things will get interesting.


On 4/22/2023 at 1:25 PM, Genady said:

Will torturing AI be a crime?

A still more disturbing issue arises if you turn this question on its head - will torturing humans be considered unethical by a (sufficiently advanced) AI? Is it even possible to impose a code of ethics onto an AI, by hard-coding it, such that no conflicts with our own ethics arise? Are sufficiently advanced AIs capable of organically evolving a system of ethics, and what would such an ethics look like? Will it conflict with our own human ethics (not that there even is such a universally accepted ethics), and if so, in what ways?

I should remind everyone of Roko’s Basilisk - even though there are many problems with this concept (and many modern philosophers seem to reject it), it still raises some increasingly important questions that, IMHO, need urgent attention, given the pace at which the field seems to be developing now.

1 hour ago, Intoscience said:

it becomes sentient as a result (since we have no idea of what mechanism causes consciousness)

As an aside, please do note that sentience and consciousness are not the same things. Sentience is simply the ability to register and distinguish between pleasant and unpleasant sense data - to have full consciousness in the sense that this is commonly understood also requires, at a minimum, sapience (the ability to form coherent thoughts from prior experience, memory, etc.), and intentionality (representing concepts, as well as the ability to direct awareness to specific objects), in addition to sentience.


I'm inclined to agree with Dimreepr that the simulations of consciousness are not consciousness. Lacking the biological elements underpinning their pseudo-urges - and having the ability to rewrite their programming - they may as readily choose to eliminate those urges as be bound by them, that being easier and more satisfactory.

 

How does AI independently replicate? It is likely to rely on dedicated hardware that is part of human-run supply chains. Secure, maintained buildings are needed, with reliable power supplies, installers and systems managers, all of which seems to work against an AI acting independently without oversight. Human agencies making malicious AIs for their own purposes - not the AIs' - seem a more credible danger than AIs deciding things for themselves.

 

Running the simulations for the upgrades it decides to make to itself can be a very intensive process, but will an AI see copies and upgrades of itself as itself, or as rivals? Without the biological imperatives, where does the urge to nurture its copies come from - or where does it get the positive reinforcement that biological systems provide?

 

I just don't buy the fictional media version of the unstoppable super-hacker, where any system can be broken into and taken over; surely if that were so, we'd have already seen international banking systems collapse from hacker fraud. Protecting data and systems from malicious software and hacker intrusions is not an immature industry, and may well be one of the dedicated tasks set for AIs.


1 hour ago, Markus Hanke said:

A still more disturbing issue arises if you turn this question on its head - will torturing humans be considered unethical by a (sufficiently advanced) AI? Is it even possible to impose a code of ethics onto an AI, by hard-coding it, such that no conflicts with our own ethics arise? Are sufficiently advanced AIs capable of organically evolving a system of ethics, and what would such an ethics look like? Will it conflict with our own human ethics (not that there even is such a universally accepted ethics), and if so, in what ways?

I should remind everyone of Roko’s Basilisk - even though there are many problems with this concept (and many modern philosophers seem to reject it), it still raises some increasingly important questions that, IMHO, need urgent attention, given the pace at which the field seems to be developing now.

I think this is the point I was attempting to get across to Dimreepr (my bold): since we developed and programmed current A.I, it's easy to assume that the A.I will follow the coding we assign. But even when you look at human ethics there is no set standard; there are many different cultures, and these change as societies develop. So imagine a conscious A.I - how would we even begin to imagine its ethics? If it had any...

I think what I'm alluding to is that, with all good intent, once AI becomes self-aware with the capability to replicate and then evolve, even if it has no ill or malicious intent to harm humans, it may still inadvertently do so, and may do so with no care or consideration.

Which is why I offered the loose analogy of the ants: we (at least in most cases) take no consideration of them when we build a road right through their hill.

1 hour ago, Markus Hanke said:

As an aside, please do note that sentience and consciousness are not the same things. Sentience is simply the ability to register and distinguish between pleasant and unpleasant sense data - to have full consciousness in the sense that this is commonly understood also requires, at a minimum, sapience (the ability to form coherent thoughts from prior experience, memory, etc.), and intentionality (representing concepts, as well as the ability to direct awareness to specific objects), in addition to sentience.

Thank you for the clarification.

1 hour ago, Ken Fabian said:

I'm inclined to agree with Dimreepr that the simulations of consciousness are not consciousness. Lacking the biological elements underpinning their pseudo-urges - and having the ability to rewrite their programming - they may as readily choose to eliminate those urges as be bound by them, that being easier and more satisfactory.

The problem is, to verify this we need a thorough understanding of what consciousness is and how it comes to be. Also, does consciousness have differing levels, and maybe differing types? AI may achieve a type/level of consciousness that is totally alien and completely unrelatable to what we experience.


3 hours ago, Intoscience said:

I think this is the point I was attempting to get across to Dimreepr (my bold): since we developed and programmed current A.I, it's easy to assume that the A.I will follow the coding we assign. But even when you look at human ethics there is no set standard; there are many different cultures, and these change as societies develop. So imagine a conscious A.I - how would we even begin to imagine its ethics? If it had any...

I think what I'm alluding to is that, with all good intent, once AI becomes self-aware with the capability to replicate and then evolve, even if it has no ill or malicious intent to harm humans, it may still inadvertently do so, and may do so with no care or consideration.

Which is why I offered the loose analogy of the ants: we (at least in most cases) take no consideration of them when we build a road right through their hill.

Thank you for the clarification.

The problem is, to verify this we need a thorough understanding of what consciousness is and how it comes to be. Also, does consciousness have differing levels, and maybe differing types? AI may achieve a type/level of consciousness that is totally alien and completely unrelatable to what we experience.

We've been going round in circles for nine pages now and I'm running out of ways to say the same thing: a computer doesn't think, it compares, much like an automated loom running a perforated-card program; there's no reason to think that's alive/sentient/conscious in any sense, even though the loom is much better at the job than a human loom operator.

Essentially, your argument is: what if that rock suddenly wakes up?
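To put the loom analogy in code - a rough sketch, with invented card patterns, of how a Jacquard-style perforated card reduces "weaving decisions" to pure bit comparison:

```python
# A perforated-card loom in miniature: each card is just a stored bit
# pattern, and "weaving" is nothing but mechanically comparing bits to
# decide which warp threads to lift. The patterns are invented here.

CARDS = [
    0b10110,  # row 1: lift warp threads 1, 3 and 4
    0b01001,  # row 2: lift warp threads 2 and 5
]

def weave_row(card, n_threads=5):
    # Pure comparison: test each bit and lift the matching thread.
    return ["up" if card & (1 << (n_threads - 1 - i)) else "down"
            for i in range(n_threads)]

for card in CARDS:
    print(weave_row(card))
```

Nothing in that loop understands cloth; swap in vastly more cards and faster comparisons and it's still the same kind of mechanism.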

 
