
Artificial Consciousness Is Impossible


AIkonoklazt


3 minutes ago, iNow said:

No. It's largely a relic term, absent from peer-reviewed literature and used only to discuss topics with laypeople and to bait clicks.

Is there a replacement term, or is the 'subconscious' no longer considered a valid concept?


2 hours ago, zapatos said:

Is there a replacement term, or is the 'subconscious' no longer considered a valid concept?

All generalizations are wrong all the time, so I really should be cautious here, but the concepts coming from the literature I engage most with focus on activity in certain regions, saturation of various chemicals, spikes in oxygen uptake, neural conductance and the like. 

I suspect the term unconscious is still used as a shorthand for laypeople even in clinical settings like therapy sessions and Vogue articles, but it’s not very useful because it’s not very descriptive and you likely wouldn’t see those same clinicians using it at conference talks or in brief hallway watercooler problem solving sessions.

When you’re sitting at a table that wobbles, me telling you that it’s wobbling because “there’s a gap between the floor and the table leg” isn’t exactly useful information. My stance is that calling parts of the mind “unconscious” is similarly lacking in value.

However, saying that Leg 3 is 1/8" longer than Legs 1, 2, and 4, all of which are equally long, IS useful. It's more precise and specific, like when we speak of oxygen uptake spikes in the parietal lobe, nerve conductance rates up the spine, or feedback signals from the gut.

IMO, "unconscious" is a remnant of times past when our understanding was more tightly constrained. It's a dying term used more for poetry than precision.


I won't mention "body language" and those closed postures; we do that consciously!

What I want to say is that THIS thing, the subconscious, is what makes the difference between AI and a human. This is where our heart lives. And I think that AC is possible; all those movies look into the future. But not an artificial subconscious. Somehow, someday, it will be possible to measure consciousness, I think, because that is a measurement of rational reactions. The subconscious is irrational and immeasurable.

If you don't agree, think of art, and why it has such an influence on us.

 

 


I agree with iNow that the term "subconscious" is largely useless.

3 hours ago, mar_mar said:

 

What I want to say is that THIS thing, the subconscious, is what makes the difference between AI and a human. This is where our heart lives. And I think that AC is possible; all those movies look into the future. But not an artificial subconscious. Somehow, someday, it will be possible to measure consciousness, I think, because that is a measurement of rational reactions. The subconscious is irrational and immeasurable.

If you don't agree, think of art, and why it has such an influence on us.

 

 

I would say what makes the practical difference between AI and a human is referents. Algorithms, by their nature, can't refer to anything. This is one of the examples I used in my article to illustrate the point:

Quote

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?
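To put that same example in code (a toy Python sketch of my own, purely illustrative and not from the article): a lookup table can produce the "correct" shapes for any input it has memorized, while none of the shapes refer to anything at all.

# A toy "rulebook": input shape sequences map to prescribed output sequences.
# The "answers" come out in the prescribed order, yet no symbol refers to
# anything in the world, so no meaning is ever learned or used.
RULEBOOK = {
    ("triangle", "square", "circle"): ("circle", "circle", "triangle"),
    ("square", "square"): ("triangle",),
}

def answer(shapes):
    # Pure pattern lookup; an unrecognized sequence just gets an empty reply.
    return RULEBOOK.get(tuple(shapes), ())

print(answer(["triangle", "square", "circle"]))  # ('circle', 'circle', 'triangle')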

In other news, Bishop read my article, and this is what he had to say:

Quote

Great read; only one minor point where you say "The Chinese Room argument points out the legitimate issue of symbolic processing not being sufficient for any meaning (syntax doesn’t suffice for semantics)"; whereas the CRA specifically targets **any program**, not just those motivated by classical symbolic AI. As JS says, ".. whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything" ..

I had missed the part where Searle said that the CRA applies to any formalism in a machine, not just symbols, so I didn't broaden the CRA's scope when I talked about it. What I did was just make it harder for its detractors to argue against (e.g., certain criticisms directed at the CRA for including a third person and/or a specific language).


9 hours ago, AIkonoklazt said:

I agree with iNow that the term "subconscious" is largely useless.

Don't listen to imaginative music, don't watch imaginative movies or look at paintings, and don't read imaginative literature. Only documentaries and science fiction. And AI music.

https://m.youtube.com/@aiva1828


1 hour ago, mar_mar said:

Don't listen to imaginative music, don't watch imaginative movies or look at paintings, and don't read imaginative literature. Only documentaries and science fiction. And AI music.

https://m.youtube.com/@aiva1828

Human spirituality means nothing to AI, much like AI imagination means nothing to us.

As I've previously mentioned, it would be like talking to God without the magic guy 'voice of god' (translator).

AI consciousness will always be a mystery to us, but never impossible...


8 hours ago, mar_mar said:

Don't listen to imaginative music, don't watch imaginative movies or look at paintings, and don't read imaginative literature. Only documentaries and science fiction. And AI music.

I think you missed the point entirely. The term "subconscious" is in question these days because it's been used as a catch-all term for "things we aren't aware of," which is actually the definition of "unconscious." "Preconscious" is more accurate, and there is current debate about dropping the term "subconscious" professionally. At least that's what I've read.

Your reply would be more appropriate if people were rejecting feelings and imagination in favor of pure reason, but they're NOT.


Searle's point has always been that formal programs, as a set of coded instructions, can only embody the syntactic elements of expertise or knowledge, without the semantics (understanding of meaning). Put differently, computers enact only unconscious processes, be they symbol-based or stochastic (the current emphasis in deep machine learning). While I take the points here regarding the usage of "unconscious" (it hearkens back to Jungian woo, or to the unhelpful wobbly-table vagueness that @iNow mentioned), it can refer in a literal way to operations like those involuntary ones in the autonomic nervous system and up to the brainstem. Such operations aren't preconscious because they never rise into the spotlight of conscious attention or deliberation.

I have been AFK a lot the past two weeks so will try to catch up a bit more on the threads before saying more.  


6 hours ago, TheVat said:

formal programs, as a set of coded instructions, can only embody the syntactic elements of expertise or knowledge, without the semantics (understanding of meaning)

Q* (pronounced Q-star) may be changing this soon-ish. TBD ATM 

 

6 hours ago, TheVat said:

[Unconscious] can refer in a literal way to operations like those involuntary ones in the autonomic nervous system and up to the brainstem.

An entirely fair and valid point. Thank you for amplifying it as a relevant correction of my own. 


7 minutes ago, iNow said:

Q* (pronounced Q-star) may be changing this soon-ish. TBD ATM 

I can't say anything definitive right now because I haven't seen any real literature out there about it, but unless it somehow breaks out of formalism I can't see it happening.

Wolfram did a very good job explaining LLMs here: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

A "feature" discussed therein isn't any specific idea/concept as a referent; It isn't any "cat-ness" of a cat or "ear-ness" of an ear, for example. Instead of "thing-ness" it's more like "mathematical-text-labelled-signal-value-correspondence level", in which this level can be easily manipulated using adversarially-generated samples (in the case of image identifcation, pixels invisible to the naked eye):
 

[Image: an adversarial-attack example, in which an imperceptible pixel perturbation changes the model's classification]
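As a rough illustration of that manipulation, here is a toy numpy sketch of my own (a linear stand-in for a classifier, nothing like a real vision model): a per-pixel change far too small to see pushes the score across the decision boundary and flips the label.

import numpy as np

# Stand-in "image classifier": score = w . x, label = 1 if the score is positive.
rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # fixed "trained" weights
x = rng.normal(size=784) * 0.1           # a flattened 28x28 "image"
x -= (w @ x) / (w @ w) * w               # remove the component of x along w ...
x += w / (w @ w)                         # ... then add back just enough for a score of +1.0

label = lambda v: int(w @ v > 0)

# FGSM-style perturbation: nudge every pixel by 0.01 against the gradient sign.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(np.abs(x_adv - x).max())           # 0.01 per pixel -- imperceptible
print(label(x), label(x_adv))            # 1 0  (the label flips)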


3 minutes ago, AIkonoklazt said:

Wolfram did a very good job explaining LLMs here

That was a year ago. May as well be describing mainframes. 

Or vacuum tubes. 

Or horses and buggies. 


4 minutes ago, iNow said:

That was a year ago. May as well be describing mainframes. 

Or vacuum tubes. 

Or horses and buggies. 

I don't know what prompted you to say that.

The paper "Attention Is All You Need" is around 7 years old now.


50 minutes ago, AIkonoklazt said:

I don't know what prompted you to say that.

The paper "Attention Is All You Need" is around 7 years old now.

And the wolfram link you shared, the one I directly quoted you mentioning, was shared a year ago… and is already badly outdated. Anything from 7 years ago may as well have been typed on a GameBoy

 

 

 



8 hours ago, mar_mar said:

They are.

That's why I responded that way.

Can you quote the people in this thread that are advocating for giving up their feelings and imagination? I've read through the whole thread, and I couldn't find a single one. So what are you talking about?


On 1/2/2024 at 6:26 AM, mar_mar said:

Don't listen to imaginative music, don't watch imaginative movies or look at paintings, and don't read imaginative literature. Only documentaries and science fiction.

Don't you realize that science fiction is imaginative movies and literature?  The fiction part of science fiction should have been a big clue.


46 minutes ago, Phi for All said:

Can you quote the people in this thread that are advocating for giving up their feelings and imagination? I've read through the whole thread, and I couldn't find a single one. So what are you talking about?

I didn't say a word about feelings. It was the subconscious, the concept, which was being underrated by some members of the forum. The thing is that one can't create a new work without the participation of one's subconscious.

"imagination /ɪˌmadʒɪˈneɪʃn/ noun the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses."

 

33 minutes ago, Bufofrog said:

Don't you realize that science fiction is imaginative movies and literature?  The fiction part of science fiction should have been a big clue.

My mistake. I meant science literature and articles.


11 hours ago, AIkonoklazt said:

I don't know what prompted you to say that.

The paper "Attention Is All You Need" is around 7 years old now.

I think I know, but apart from that... 

If one is a fruit fly, 7 years' worth of generations would take them back to 'before the stone age'; it's all a matter of perspective... 🧐

Not that you'll be able to read this, which is ironic...

14 minutes ago, mar_mar said:

I didn't say a word about feelings. It was the subconscious, the concept, which was being underrated by some members of the forum. The thing is that one can't create a new work without the participation of one's subconscious.

"imagination /ɪˌmadʒɪˈneɪʃn/ noun the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses."

 

So, what are you trying to say?

Because AI is really good at forming new images from external sources.


12 hours ago, iNow said:

Q* (pronounced Q-star) may be changing this soon-ish. TBD ATM 

 

An entirely fair and valid point. Thank you for amplifying it as a relevant correction of my own. 

Thanks.   On Q* I am not excited yet.  It relies on a Markov decision process, so it's still a stochastic algorithm and builds no meaningful model of the world.  That model I see as essential to AGI.  But maybe Q (why do I see John DeLancie's mug whenever I refer to it) will do better at limited domain expertise than I expect.  I always have doubts about reinforcement learning, maybe due to its roots in Behaviorism.   It's all very lab rat-ty.  🙂
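For anyone wondering what "reinforcement learning over a Markov decision process" looks like at its simplest, here is a generic tabular Q-learning sketch (my own toy example, nothing to do with whatever Q* actually is). The agent only ever updates a table of state-action values from reward; it never builds or consults a model of its world.

import random

# Model-free (tabular) Q-learning on a 5-state corridor: start at 0, reward at 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                        # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(1000):
    s = 0
    while s != GOAL:
        explore = random.random() < eps or Q[(s, -1)] == Q[(s, 1)]
        a = random.choice(ACTIONS) if explore else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Temporal-difference update: nudge Q toward reward + discounted best next value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print({s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)})  # learned: always step right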


10 minutes ago, dimreepr said:

I think I know, but apart from that... 

If one is a fruit fly, 7 years' worth of generations would take them back to 'before the stone age'; it's all a matter of perspective... 🧐

Not that you'll be able to read this, which is ironic...

So, what are you trying to say?

Because AI is really good at forming new images from external sources.

I read that a group of artists sued an AI company for using their works.

AI doesn't have many new ideas of its own.

 


1 minute ago, mar_mar said:

I read that a group of artists sued an AI company for using their works.

AI doesn't have many new ideas of its own.

 

I read that some artists would forge a work that 'was' new, once upon a time...


12 hours ago, iNow said:

And the wolfram link you shared, the one I directly quoted you mentioning, was shared a year ago… and is already badly outdated. Anything from 7 years ago may as well have been typed on a GameBoy

 

 

 


I don't think you know why I mentioned the paper.

Every LLM out on the market right now operates on the same "self-attention" principle detailed in that 7-year-old paper. None of them are "badly outdated."
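For reference, the core of that principle is small enough to sketch; here is a minimal numpy version of scaled dot-product self-attention (my own toy illustration: a single head, no masking, no training).

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model). Project to queries, keys, and values, then let
    # every position attend over every other position, weighted by similarity.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                          # 5 tokens, d_model = 8
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                                     # (5, 8)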

Again, I don't know what prompted you to say what you said. I don't think you knew what you were talking about.

Please substantiate your statements.


15 minutes ago, AIkonoklazt said:

I don't think you knew what you were talking about.

Duly noted

15 minutes ago, AIkonoklazt said:

Every LLM out on the market right now operates on the same "self-attention" principle detailed in that 7-year-old paper

Please substantiate this statement, especially given how many thousands of new LLMs are being deployed each week. 


The LLM is still essentially what it was: a stochastic parrot which makes ranked lists of word probabilities and word-pairing probabilities. There is zero modeling of an actual world. The only thing it's doing is modeling next-word probabilities; its only "world" is the statistical frequencies of strings within blocks of text. It is so very, very far from AGI that all the hype around it is just ludicrous. They understand nothing. I would recommend anything by Emily Bender, who coined the phrase "stochastic parrot," on this topic. The fundamental algorithms of LLM machines have not changed in the past year. If they have, @iNow could post some citation on that.
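A toy bigram model makes the point (my own minimal illustration; a real LLM is a transformer over tokens and vastly larger, but its "world" is the same kind of thing, string statistics):

import random
from collections import Counter, defaultdict

# "Stochastic parrot" in miniature: rank next-word candidates purely by how
# often they followed the current word in the training text, then sample.
corpus = "the cat sat on the mat and the cat ate the fish".split()
following = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    following[w1][w2] += 1

def next_word(w):
    candidates = following[w]
    if not candidates:                    # dead end: last word of the corpus
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

print(following["the"].most_common())     # ranked next-word list after "the"
w, generated = "the", ["the"]
for _ in range(6):
    w = next_word(w)
    if w is None:
        break
    generated.append(w)
print(" ".join(generated))                # fluent-looking output that models nothing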

 

2 hours ago, AIkonoklazt said:

Every LLM out on the market right now operates on the same "self-attention" principle detailed in that 7-year-old paper. None of them are "badly outdated."

Yep. Self-attention is a basic aspect of much of modern machine learning, especially natural language processing, and of pattern recognition, as in computers doing visual discrimination of objects.

