Artificial Consciousness Is Impossible


AIkonoklazt


6 hours ago, dimreepr said:

In the context of this thread, it means you can't argue about something you don't understand; like a pedant arguing that a peanut is actually a legume, while the peanut thinks, "of course I'm a nut, the clue's in the title 🙄"...

A philosopher's job is to "make sense" (of reality, and explain it to me).

Why do you claim that I don't understand, for example, the difference between intelligence and consciousness?

7 hours ago, Genady said:

 

 

Yes, it is.

 

 

Back up your claim. I've already given the example of the color "red."

Edited by AIkonoklazt
Link to comment
Share on other sites

11 minutes ago, AIkonoklazt said:

Why do you claim that I don't understand, for example, the difference between intelligence and consciousness?

Please note I have not claimed this (though I can't speak for others), I have simply claimed that this difference is important but has not received the attention it needs.


5 hours ago, TheVat said:

The Hard Problem seems more about epistemic limits.  Many scientific theories are underdetermined, but we still accept that they work.  Conscious experience, however, can only be directly known from the "inside" (qualia, subjectivity), so a skeptical stance may always be taken as regards any other being's consciousness - you, the King of England, a sophisticated android that claims to be conscious.  There is no scientific determination that a being designed by natural selection is conscious (I'm using "design" broadly, in the sense that a design, a functional pattern, doesn't have to have a conscious designer but may arise by chance), so we won't get that with an artificial consciousness either.  Bernoulli's principle is NOT underdetermined, because when we design a wing using it we can witness that the plane actually flies (to use @mistermack's example).  Any principle of the causal nature of a conscious mind, its volitional states, its intentionality, is likely to be underdetermined.  But that isn't equivalent to saying it is impossible for such states to develop in an artificial being.

The impossibility is three-fold. See this reply:

 

10 minutes ago, studiot said:

Please note I have not claimed this (though I can't speak for others), I have simply claimed that this difference is important but has not received the attention it needs.

That was my reply to dimreepr, not to you.

5 hours ago, TheVat said:

This presupposes that machines can never be developed with cortical architecture, plasticity and heuristics modeled on natural systems and thus be able to innovate and possibly improve their own design.  The designed becomes the designer - wasn't this argued earlier in the thread and sort of passed over?

 

"Their own"? What is the algorithm responsible for the ability? You can't hatch your way out of programming. What you're doing is no different than everyone else saying things like "but the algorithm is evolutionary"

4 hours ago, studiot said:

Do you know the difference between a theory and a hypothesis ?

You do not have a theory.

Please use the correct terminology.

 

You're in such a hurry that you didn't even notice that the passage was about me NOT using a theory, but principles and observations. Slow down and then maybe I'll consider the rest. I'm not going to machine-gun with everyone. You yourself said you're confused in the rest of the reply.

Edited by AIkonoklazt

23 minutes ago, AIkonoklazt said:

"Their own"? What is the algorithm responsible for the ability? You can't hatch your way out of programming. What you're doing is no different than everyone else saying things like "but the algorithm is evolutionary"

You are the one claiming that there has to be an algorithm.

 

The American mathematician Jordan Ellenberg has a really good chapter on this subject in one of his books, concerning exactly this controversy, which raged at the beginning of the 20th century, mostly in Europe but particularly in Russia.
The names Markov and Nekrasov are particularly prominent, one for showing this not to be true, the other for offering a famous but false proof.

Others involved were Poincaré and Bachelier in France;

Ross, Pearson and Lord Kelvin in the UK.

And Einstein got a Nobel for it in Germany.

 

 

Edited by studiot

4 hours ago, studiot said:

It would be nice to see you discussing the fundamental ones I have made, instead of ignoring them.

It seems this was simply too much to ask. 
 

 

16 minutes ago, AIkonoklazt said:

maybe I'll consider the rest. I'm not going to machine-gun with everyone.

 


4 minutes ago, studiot said:

You are the one claiming that there has to be an algorithm.

 

The American mathematician Jordan Ellenberg has a really good chapter on this subject in one of his books, concerning exactly this controversy, which raged at the beginning of the 20th century, mostly in Europe but particularly in Russia.
The names Markov and Nekrasov are particularly prominent, one for showing this not to be true, the other for offering a famous but false proof.

 

 

You didn't state what their point is.

iNow just quoted me out of context. He is clowning. Gonna just let him clown.


2 hours ago, AIkonoklazt said:

The impossibility is three-fold. See this reply:

I think there's a basic problem, which I alas can't seem to pin down, with your leveraging of underdetermination into impossibility.  I will reexamine your paper and try to revisit this later.  I respect the work you are doing even if I'm uncertain about your conclusions.

2 hours ago, AIkonoklazt said:

"Their own"? What is the algorithm responsible for the ability? You can't hatch your way out of programming. What you're doing is no different than everyone else saying things like "but the algorithm is evolutionary"

Well, some algorithms are evolutionary, such as those found in metaheuristics.  I think it's worthwhile to be acquainted with genetic algorithms.  Not all machine states, even at our present primitive level of tech, are simple execution of a line of code.  IOW, they do not originate from what IT folks call expert rule systems.  (I think Searle was quite right to dismiss such ERS coding as incapable of sentience.)

https://www.turing.com/kb/genetic-algorithm-applications-in-ml
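For readers who haven't met the idea, a genetic algorithm can be sketched in a few dozen lines. This is a toy "onemax" example with arbitrary parameters (population size, mutation rate, etc.), not any production metaheuristic; the point is that selection, crossover and mutation reach the optimum without any hand-written rule describing it:

```python
import random

# Toy genetic algorithm: evolve a bit-string toward all ones ("onemax").
# All parameter values here are arbitrary illustrations.

TARGET_LEN = 20

def fitness(genome):
    """Number of 1-bits; maximal (20) when the genome is all ones."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                      # elitist replacement
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs to, or very near, the optimum of 20
```

No line of this code states the solution; it emerges from selection pressure. Whether that counts as escaping "programming" is, of course, exactly the point in dispute in this thread.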

Also, the general structure of an argument against algorithmic paths to conscious cognition seems, again, susceptible to the reductio of:

It ultimately disallows the coded signals between living neurons from ever emerging as a conscious process, i.e. an absurd position. I keep pointing to this vulnerability in "AI consciousness is impossible" arguments because I think it's a serious one.


2 hours ago, AIkonoklazt said:

No, you need to state your points.

I even used the example of a catapult in my argument. I'm curious now.

How is this an answer to me?

You asked about their point in the singular.

I gave you as straightforward an answer as possible.

3 hours ago, studiot said:

 

There is no requirement for an algorithm.

What did you not understand about that?

Yet you reply

No - and then you order me to "state my points" in the plural.

 

I have no doubt you know many things, quite a few that I do not, but this is no way to treat other people who know things that you do not.

 

No one knows it all.


(my previous post continued)

For one thing, we know the fundamental component of a human brain is a neuron.  Neurons use symbol systems, mindlessly, through activation thresholds, firing rates and so on.  You wrote (in the article in Towards Data Science; handsome fellow in the author picture):

Quote

The basic nature of programs is that they are free of conscious associations which compose meaning. Programming codes contain meaning to humans only because the code is in the form of symbols that contain hooks to the readers’ conscious experiences. Searle’s Chinese Room argument serves the purpose of putting the reader of the argument in place of someone that has had no experiential connections to the symbols in the programming code. 

But we also have no experiential connection to the symbols that neurons send each other or the DNA strings that developed them.  Those little blobs of jelly are, from my conscious perspective, all syntax and no semantics.  They just go click-click-click at each other.  They are unsentient electrochemical machines which know nothing of meaning.  The meaning lies in the domain of that emergent process I casually call "me" or "Paul" or "my wife's unpaid handyman."  In emergent processes, meaning doesn't travel all the way down the various operational levels.  Lacking semantics at one functional level does not prevent its emergence at another.
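The "activation thresholds and firing rates" picture can be caricatured in a few lines. This is a toy leaky integrate-and-fire unit with made-up parameters, not a biophysical model; nothing in the loop has any access to what its inputs "mean":

```python
# Toy leaky integrate-and-fire unit. Parameter values are arbitrary
# illustrations, not biological constants.

def integrate_and_fire(inputs, weights, threshold=1.0, leak=0.9):
    """Return a spike train (0/1 per time step) for a stream of inputs."""
    potential = 0.0
    spikes = []
    for x in inputs:
        # Decay the membrane potential, then add the weighted input.
        potential = potential * leak + sum(w * xi for w, xi in zip(weights, x))
        if potential >= threshold:
            spikes.append(1)     # fire...
            potential = 0.0      # ...and reset
        else:
            spikes.append(0)
    return spikes

# Three input channels, stimulated identically for six time steps:
train = integrate_and_fire([(1, 0, 1)] * 6, weights=(0.3, 0.5, 0.2))
print(train)  # → [0, 0, 1, 0, 0, 1]
```

The unit just accumulates numbers and crosses thresholds; any "meaning" would have to live at a level above this loop.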

You also wrote, 

To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine...

 

 

So this machine is akin to a neuron.  The neuron mindlessly handles inputs, sequences to execute.  But when we put 85 billion of them working together, we get Paul or David.  If we have 40 billion we get Trump.  Meaning and understanding emerge gradually as one goes from one electrochemical machine to billions.  There is no fundamental reason this could not happen with billions of virtual machines or billions of processors made of gold leaf and compressed air (IIRC, that's a Ted Chiang story).  Unless there is something magical about biology, some remainder of Bergson's Vitalism that turned out to be real.  

 

Edited by TheVat
post broken

1 hour ago, TheVat said:

Also, the general structure of an argument against algorithmic paths to conscious cognition seems, again, susceptible to the reductio of:

It ultimately disallows the coded signals between living neurons from ever emerging as a conscious process, i.e. an absurd position. I keep pointing to this vulnerability in "AI consciousness is impossible" arguments because I think it's a serious one.

That's utilizing functionalism on a neuron (saying something like "the function is to encode and decode"... Isn't this computationalism all over again?).

The entire thing about heuristics... what determines it? The selection criteria somehow aren't themselves a program? The created "populations" didn't come from programs? Programming is everywhere in a machine, right down to the bare metal. Machine "evolution" isn't "evolution" at all. Once any design is involved, it's over. Who designed the genetic algorithm itself? All this is kicking the can labelled "programming" down the road, hoping it disappears into the rhetorical background.

The second part of your reply continues the functionalism of neurons. It's using "symbol systems"? Nope... the computational/IP conception really needs to die off. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer You're reaching for a technological parallel based on computers. It's the latest in one long chain of bad analogies based on the latest tech of the day, starting from hydraulics, then telephones, then electrical fields, and now computers and web networks (or what's being called "neural networks..." when there's nothing "neural" about those).

As for DNA, I've addressed that issue in the article by saying DNA differs in functional compartmentalization (i.e. the lack thereof) as well as scope. DNA works nothing like machine programming code.

 

 

 

 

1 hour ago, studiot said:

How is this an answer to me?

You asked about their point in the singular.

I gave you as straightforward an answer as possible.

What did you not understand about that?

Yet you reply

No - and then you order me to "state my points" in the plural.

I have no doubt you know many things, quite a few that I do not, but this is no way to treat other people who know things that you do not.

 

No one knows it all.

You gave a bunch of names; I don't know what points they made. You have to tell me. That's what I meant. You then pick at singular and plural. Great.

Then you said algorithms are not needed and there are other mechanisms. Like what?

What do you mean, "treat other people who know other things than you do"? I simply asked you for the points those people you named made, plus what those "other mechanisms" are. Excuse me, but what's so unreasonable about the request?

 

 

Edited by AIkonoklazt

11 minutes ago, Genady said:

I'm not participating in this thread anymore but just want to thank you for this reference.

You're very welcome.
Feel free to PM me if you need articles on specific topics you have in mind surrounding consciousness, AI, and philosophy of mind. I might have what you want buried somewhere in my web-link archives.


1 hour ago, AIkonoklazt said:

Nope... the computational/IP conception really needs to die off. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer You're reaching for a technological parallel based on computers. It's the latest in one long chain of bad analogies...

I've started reading this - one of the most fascinating papers in cognitive science I've seen.  The anti-representational view of brains interacting with the world certainly deserves consideration.

I thank you for sharing that.  I was already aware that information processing was an imperfect analogy for what biological brains do, so I'm curious how the author will steer away from it.  Don't know yet if I can agree with abandoning that model completely but will try to finish, check some related sources and get back here tomorrow.  

I will confess I always enjoy watching a paradigm get shaken up, even if it's one I subscribe to.  😀

 

52 minutes ago, Genady said:

I'm not participating in this thread anymore....

Sorry to hear that.  Your perspective is valuable IMO.

Edited by TheVat
tpyo

10 hours ago, AIkonoklazt said:

First, let me also say thank you for posting this essay, which I had not heard of. +1

It is fascinating and touches on many things that have been discussed over at SF in the past. Some of these things you have also introduced in your thread, as have I, but I have gained the impression, like many other members, that your analysis is very (von Neumann) computer-oriented and thus straitjacketed by it.

I am inclined to think your reference deserves a discussion thread all of its own, especially as I have some criticism of it, particularly about memory, algorithms and the human brain/mind.

As time presses I will return with more detailed answers and for the moment just present an extract from Ellenberg.

[attached image: extract from Ellenberg]


15 hours ago, AIkonoklazt said:

Why do you claim that I don't understand, for example, the difference between intelligence and consciousness?

An anthill is intelligent, but it can't be conscious because it's a house; is that about the size of it?

Which throws up an interesting question: which part of the human body is considered the house (the mobile anthill)?

Which ultimately comes back to @Genady's often-repeated question, which you have yet to answer; that's why I claimed that you don't understand, because if you did, you'd at least try to explain. It should be a rule on this forum: "I make the claim, I should explain"...


10 hours ago, TheVat said:
11 hours ago, Genady said:

I'm not participating in this thread anymore....

Sorry to hear that.  Your perspective is valuable IMO.

Thank you. I am not participating in this thread because it became clear to me that the topic is not scientific but rather belongs to the philosophy of engineering. I 'woke up' only because this recent reference is about a scientific hypothesis regarding a brain function.

Edited by Genady

19 minutes ago, Genady said:

Thank you. I am not participating in this thread because it became clear to me that the topic is not scientific but rather belongs to the philosophy of engineering. I 'woke up' only because this recent reference is about a scientific hypothesis regarding a brain function.

 

11 hours ago, TheVat said:

Sorry to hear that.  Your perspective is valuable IMO.

Yes, I agree with TheVat; it is a shame, because you are setting a good example by using correct terminology: for instance, 'hypothesis' instead of that much-abused term, 'theory'.

 

 


2 hours ago, dimreepr said:

An anthill is intelligent, but it can't be conscious because it's a house; is that about the size of it?

Which throws up an interesting question: which part of the human body is considered the house (the mobile anthill)?

Which ultimately comes back to @Genady's often-repeated question, which you have yet to answer; that's why I claimed that you don't understand, because if you did, you'd at least try to explain. It should be a rule on this forum: "I make the claim, I should explain"...

You, mrmack and I have all queried the difference between intelligence and consciousness, looking for straight answers.

 

Perhaps we should examine it more closely?

I don't pretend to fully understand any of these concepts, but here are some thoughts I consider useful.

 

Firstly, consider some entity in its surroundings, environment or universe, as in Fig 1.

 

So we have three things: the entity, the environment and the interaction between the two.

Perhaps the entity feels too hot in the sun, so it gets under the tree for shade.

 

It is tempting to think that the entity must be self-aware to be conscious, and conscious to be intelligent, and that the whole sequence must be nested like Russian dolls, as in the Venn diagram in Fig 2.

But this doesn't hold logical water.

As mrmack says, there are scales of these things.

 

Self-awareness

I am not normally aware of the touching of my feet on the ground, the feel of my clothes or the working of my kidneys.

Yet I can define and describe myself.

Consciousness

Am I conscious or self-aware when I am asleep?

Ditto after 10 pints or whiskies.

Intelligence

I leave that up to your consideration.

[attached image: Venn diagrams (Figs 1 and 2)]

 

 

 

 

 


5 hours ago, studiot said:

Firstly, consider some entity in its surroundings, environment or universe, as in Fig 1.

Your figure of a being and an environment reminds me of Markov blankets, which relate a set of internal and external states as conditionally independent from each other. In this framework, I think the distinctions between intelligence, consciousness and self-awareness are on a continuum and so not qualitatively different - unless there is some kind of 'phase transition' when Markov blankets are embedded in one another to a sufficient extent. (The free energy principle, from which this model is derived, draws heavily from physics, so it might be of interest to you.)
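A rough numerical illustration of the blanket idea (a toy three-variable chain with invented probabilities, not the free-energy-principle formalism): once the blanket state is held fixed, the internal state is statistically screened off from the external one.

```python
import random

# external -> blanket -> internal: the blanket "tracks" the external state,
# and the internal state tracks the blanket. The 0.8 couplings are invented.

def sample(n, seed=1):
    random.seed(seed)
    rows = []
    for _ in range(n):
        ext = random.random() < 0.5
        blanket = (random.random() < 0.8) == ext        # tracks external
        internal = (random.random() < 0.8) == blanket   # tracks blanket
        rows.append((ext, blanket, internal))
    return rows

def p_internal_given(rows, blanket, ext=None):
    """Empirical P(internal | blanket[, ext])."""
    picked = [r for r in rows
              if r[1] == blanket and (ext is None or r[0] == ext)]
    return sum(r[2] for r in picked) / len(picked)

rows = sample(200_000)
# Conditioning additionally on the external state barely moves the
# internal distribution once the blanket is fixed (both are near 0.8):
print(p_internal_given(rows, True, ext=True))
print(p_internal_given(rows, True, ext=False))
```

Internal and external states still correlate overall; the independence only appears conditional on the blanket, which is the whole content of the concept.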


8 hours ago, dimreepr said:

An anthill is intelligent, but it can't be conscious because it's a house; is that about the size of it?

Which throws up an interesting question: which part of the human body is considered the house (the mobile anthill)?

Which ultimately comes back to @Genady's often-repeated question, which you have yet to answer; that's why I claimed that you don't understand, because if you did, you'd at least try to explain. It should be a rule on this forum: "I make the claim, I should explain"...

I don't see how the article didn't make it clear, since the purpose of that section ("intelligence versus consciousness") is to distinguish the two.

Intelligence is an ability, while consciousness is a phenomenon.

Intelligence, as in the term "artificial intelligence," is performative and not attributive; this has been pointed out often by experts in AI, yet it remains a point of continual confusion. A machine performs tasks that seem intelligent; it is not "being intelligent." I really thought the distinction was clear. I suppose I could throw more rhetoric at it, but I chose not to.

This makes "artificial intelligence" a technically specialized term. It's NOT common vernacular, because if it were, AI would literally possess intelligence instead of merely exhibiting symptoms of it. https://www.merriam-webster.com/dictionary/intelligence

I've seen a lot of comments from experts, especially from Bender (co-author of the now-famous "stochastic parrots" paper, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922; she coined the term to describe LLMs such as ChatGPT/Bard), repeatedly complaining about the conflation of concepts and terms surrounding this.

"Intelligence" and "learning" in machines are technical terms referring to their performance, not their attributes. I've pointed this out very clearly in the article using a passage from an AI textbook:

Quote

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

“For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word “learning,” we will simply adopt our technical definition of the class of programs that improve through experience.”

Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].
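To see just how broad that technical definition is, here is a hypothetical sketch (the class and names are mine, not from any textbook): a bare key-value store already "improves its performance at answering queries based on the experience gained from updates," yet nobody would say it learns in the everyday sense.

```python
# A "learning system" under the textbook's technical definition only:
# its query performance improves with the experience gained from updates.

class TrivialLearner:
    def __init__(self):
        self.facts = {}

    def update(self, key, value):
        """The 'experience': store a new fact."""
        self.facts[key] = value

    def query(self, key):
        """The 'performance': answers improve as updates accumulate."""
        return self.facts.get(key, "unknown")

db = TrivialLearner()
print(db.query("capital_of_france"))  # → unknown
db.update("capital_of_france", "Paris")
print(db.query("capital_of_france"))  # → Paris
```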

In my opinion, the field of AI is in such a mess because of the constant anthropomorphisation, conflation of concepts, and abuse of terminology.

Back to the question of the anthill:

First, the anthill itself, as you've said, is a building. A building isn't a machine in the first place. Are you including the ants? If you're just talking about the "anthill building," what's intelligent about it? It's not even performing any intelligent-seeming task. It seems to me that you really need to start from the basics and look at exactly what you are referring to when you use a certain term; I can't stress this enough. When you referred to the human body as a "mobile anthill," what exactly did you mean by that? I hope you realize the question is already loaded, because you're already saying something about the anthill and the ants. NO, this isn't "question dodging," this is clarification.

6 hours ago, studiot said:

You, mrmack and I have all queried the difference between intelligence and consciousness, looking for straight answers.

Should I add to the very first section of my article ("Intelligence versus consciousness") the words "Intelligence is an ability, while consciousness is a phenomenon"? I seriously thought people would get it right off the bat. It went past two different editors at two publications, and neither told me the distinction wasn't clear.

10 hours ago, studiot said:

First, let me also say thank you for posting this essay, which I had not heard of. +1

It is fascinating and touches on many things that have been discussed over at SF in the past. Some of these things you have also introduced in your thread, as have I, but I have gained the impression, like many other members, that your analysis is very (von Neumann) computer-oriented and thus straitjacketed by it.

I am inclined to think your reference deserves a discussion thread all of its own, especially as I have some criticism of it, particularly about memory, algorithms and the human brain/mind.

As time presses, I will return with more detailed answers and for the moment just present an extract from Ellenberg.

 

It's much more than von Neumann (see the section of the article “Your argument only applies to Von Neumann machines,” where I explained how the argument even applies to catapults). It only seems VN-ish because:

  1. I'm speaking from an engineering perspective, as in "you can't make this thing, and here's why." When I do that, I have to use language that people understand, and people are most exposed to VN-ish things.
  2. I had to use practical examples, and most of that stuff is VN-related.

As for the Ellenberg passage... my eyes are kinda going bad, so that photo was hard for me to read. It's really short and I can't tell much from it. Yes, he talked about Markov chains, but I don't know what ultimate point he was making with it. As in, what does the discussion have to do with referents?


33 minutes ago, studiot said:

Rather than a tart reply, I will just ask for a link to your passage on catapults.

Perhaps I could understand that.

No machine does anything "by itself"... It has nothing to do with the architecture.
 

Quote

“Your argument only applies to Von Neumann machines”

It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights. The programming language of a catapult is contained within the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more “does things on its own” than a catapult flings by itself.
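To make the analogy concrete, here is a toy model (the formulas are idealized physics and the numbers invented; this is not a real siege-engine simulation) in which the catapult's entire "program" is its physical configuration, and the shot follows mechanically from those parameters:

```python
import math

G = 9.81  # m/s^2

def launch_speed(counterweight_kg, drop_m, projectile_kg, efficiency=0.6):
    """Launch speed if a fixed fraction of the counterweight's potential
    energy goes into the projectile (a crude idealization)."""
    energy = efficiency * counterweight_kg * G * drop_m
    return math.sqrt(2 * energy / projectile_kg)

def flight_range(speed, release_angle_deg):
    """Ideal projectile range on flat ground, no drag."""
    theta = math.radians(release_angle_deg)
    return speed ** 2 * math.sin(2 * theta) / G

# "Reprogramming" the catapult = changing its physical parameters:
v = launch_speed(counterweight_kg=500, drop_m=3.0, projectile_kg=10)
print(round(flight_range(v, 45), 1))  # → 180.0
```

Change the counterweight or the release angle and the range changes deterministically; the machine itself has no say in the matter.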

 


9 minutes ago, AIkonoklazt said:

No machine does anything "by itself"... It has nothing to do with the architecture.
 

 

Thank you.

When have I ever said otherwise?

The only thing I have said about machines is that they have no business in this thread and that you were misusing the scientific and engineering definitions of a machine.

 

I did offer you a more interesting concept: that of a self-diagnosing construct.

In this case it is not programmed but is constructed to be self-diagnosing.

This is the car tyre example I asked you about, and which you did not reply to.

2 hours ago, Prometheus said:

Your figure of a being and an environment reminds me of Markov blankets, which relate a set of internal and external states as conditionally independent from each other. In this framework, I think the distinctions between intelligence, consciousness and self-awareness are on a continuum and so not qualitatively different - unless there is some kind of 'phase transition' when Markov blankets are embedded in one another to a sufficient extent. (The free energy principle, from which this model is derived, draws heavily from physics, so it might be of interest to you.)

Thank you, I had not heard of this so I will have to look more into it. +1

However, it does introduce another interesting concept, relevant to consciousness, intelligence, etc.

The concept of life.

Again we have an ill-defined concept, but one characteristic I was taught is that of 'response to stimulus'.

However, I observe many such responses in what I would call non-living things.

For instance, rocks in the desert respond to the stimulus of thermal cycling in the hot sun and the cold desert night by exfoliating due to alternate expansion and contraction stresses.

 

So no, I do not accept the continuum concept for these, because it is demonstrably possible to have life without consciousness, with or without intelligence, and with or without self-awareness (my car tyre example provides an instance of non-living self-awareness).

 

What I was leading up to with my Venn diagrams was that we have lots of concepts or categories, with some overlap but some separation.

My initial criticism of the opening of this thread is that it tries to be absolute, i.e. to cover all cases, and to me therein lies its downfall.

