
Artificial Consciousness Is Impossible


AIkonoklazt


5 hours ago, Phi for All said:

Moderator Note

OK, 17 pages into this discussion, and I'd like to know if any of the input you've gotten has persuaded you to soften your position, or if it's been of no value and you stand by it adamantly. Nobody wants to discuss any subject with a preacher, someone who has no intention of being persuaded by any argument. Please give a brief summary of what, if anything, you've taken on board wrt this discussion. 

 

There has been one correction to my article published on Towards Data Science as a result of my activity on this board (I tried getting Mindmatters to make the change to their republication but they didn't):

Original sentence:

Quote

For example, orbital satellites could still function without considering relativistic effects

Corrected sentence:

Quote

For example, orbital satellites could still function without considering certain relativistic effects

As I have previously stated, that doesn't change the point, because models never reflect the actual. This has been a common observation, reflected in various related aphorisms.

Also, as I have previously stated in the thread, at least 50% of my article came from discussions on public forums, namely the entire section "Responses to counterarguments."

This contrasts with the utterly useless quips from certain users like iNow and StringJunky above. They are evidently only interested in the "meta" (e.g. "You are wasting your time," "He has a chip on his shoulder," "He is overconfident," and other junk remarks). Please remind them to focus on the points, not on pseudopsychoanalytic prods at a perceived modus operandi.

Sure, people on Reddit did that quite a bit, but at least even they coughed up useful items to add (re: exhaustive modeling, answering accusations of intuition pumping from Dennett fanboys, etc.). In this sense, iNow and StringJunky are being less than useless, worse than Reddit. Heck, even the idiot moderator on Ars(e) Technica gave me the lone "gift" of a special-pleading accusation before he banned me without allowing me to reply even once (because he thought that was the last word or something). Did I mention that he even deleted my post (yes, the article I wrote) after banning me?

P.S. If they had even bothered reading the thread, they'd have already SEEN exactly what the "agenda" is. I've already mentioned the 2021 UN agency ban on AI legal personhood multiple times.

1 hour ago, iNow said:

Thanks for the interesting link, Vat. 

When I hear claims like those above, they sound to my ear like someone 100 years ago similarly proclaiming loudly that planes will NEVER go supersonic or get us to the moon. 

Bad analogy when it comes to machine consciousness since anyone who has digested the points of the argument would see that intentionality has nothing to do with any kind of machine performance.

4 hours ago, TheVat said:

I went back and read the Chomsky interview, at an MIT symposium about ten years ago, where he talks about cognitive science and AI.  I think he has a lot to say to this thread.  I will try to find a PW free screenshot if possible.

https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

Ah!  Found a nice clean archive screenshot....

https://archive.is/2023.10.18-114835/https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

 

This interview uncovers an important ongoing debate in the philosophy of science, as to how best to get at the deeper workings of things.  I don't think you will regret reading it.

Well, iNow sure didn't read it. I doubt anyone else besides us would read it.

Believe me, loads of people on my LinkedIn are also saying that the LLM approach to AI is a dead end. Chomsky mentioned GOFAI, and guess what? There is talk of going back to it. Not entirely, but incorporating it. That's one way to get any semblance of grounding. I saw a recording of a debate/discussion that occurred at the start of this year, and Chomsky said similar things, as far as I can remember. Let me see if I can find a link to it. It's chock full of today's authorities on AI.

Edited by AIkonoklazt
Did I mention that the Arse Technica mod deleted my post (same as the original post of this thread) after banning me without allowing me to reply? Ah yes, the spirit of discussion...

This was the debate that I saw the video recording of:

https://www.zdnet.com/article/ai-debate-3-everything-you-need-to-know-about-artificial-general-intelligence/

Gary Marcus moderated it (he's on the same level as LeCun but a much more likeable guy), and Chomsky was one of the participants.

Marcus did a vid for Wired recently, geared towards the layman:
https://www.wired.com/video/watch/tech-support-ai-expert-answers-ai-questions-from-twitter

Awesome! I'm again up to 30 red; now I have my badge of honor back!


Thanks for the Zoom session report. So much to chew on there. Marcus has always struck me as a voice of reason in confronting some of the breathless claims for AI. Like his observations on what AI theorists call factuality, as deep learning programs can maintain no world model from which to build the kind of understanding that conscious beings do. Without world models, you are left with only syntax, and it's back to the old Chinese Room.

And the problems of symbol grounding are something that cognitive scientists/linguists like Chomsky can really help sharpen our view of.  Rich models of the world, innate pathways for meaning and semantics, deep structures that allow an instinctive grasp of a world out there...this is Chomsky territory.  (and Chomsky is still exploring at age 95, still showing up for symposia and interviews, he's an amazing guy)  There are places for both GOFAI and machine learning to shake hands in solving such problems.  

 

I would hope everyone here can keep focus on the many issues of AI, rather than on personalities.   Leave the past burns behind. 


1 hour ago, iNow said:

You mean the link I thanked him for sharing since I found it to be interesting?

Based on that analogy you made, evidently you didn't.

2 hours ago, TheVat said:

Thanks for the Zoom session report. So much to chew on there. Marcus has always struck me as a voice of reason in confronting some of the breathless claims for AI. Like his observations on what AI theorists call factuality, as deep learning programs can maintain no world model from which to build the kind of understanding that conscious beings do. Without world models, you are left with only syntax, and it's back to the old Chinese Room.

And the problems of symbol grounding are something that cognitive scientists/linguists like Chomsky can really help sharpen our view of.  Rich models of the world, innate pathways for meaning and semantics, deep structures that allow an instinctive grasp of a world out there...this is Chomsky territory.  (and Chomsky is still exploring at age 95, still showing up for symposia and interviews, he's an amazing guy)  There are places for both GOFAI and machine learning to shake hands in solving such problems.  

 

I would hope everyone here can keep focus on the many issues of AI, rather than on personalities.   Leave the past burns behind. 

Chomsky referred to looking at the wrong thing as barking up the wrong tree. To me, that wrong tree is really the entire reverse-engineering paradigm. The mind isn't a tool to be reverse engineered, because it isn't a designed artifact in the first place. To make progress, performance in any form should be pursued, rather than "reverse engineering" perceived anthropomorphisms. I've said to other people that the "optimizations" of the evolved mind aren't of the same kind possessed by engineered tools. In fact, these mental optimizations are even looked at as deficiencies, so are those all going to be "reverse engineered" too? See this list: https://en.wikipedia.org/wiki/List_of_cognitive_biases

As for personalities, I like following Marcus's blog for the drama between him and the other AI experts; it's actually pretty hilarious.


A mention of externalism, among other theories of mind, in a New Yorker book review from about four years ago, caught my attention recently.  It relates somewhat to the seminal (for this thread) Aeon paper by Epstein, in regard to models of cognition that do not use the information processor metaphor.  

https://www.newyorker.com/books/under-review/do-we-have-minds-of-our-own

Manzotti, who has become famous for appearing in panels and lecture halls with his favorite prop, an apple, counts himself among the “externalists,” a group of thinkers that includes David Chalmers and the English philosopher and neuroscientist Andy Clark. The externalists believe that consciousness does not exist solely in the brain or in the nervous system but depends, to various degrees, on objects outside the body—such as an apple. According to Manzotti’s version of externalism, spread-mind theory, which Parks is rather taken with, consciousness resides in the interaction between the body of the perceiver and what that perceiver is perceiving: when we look at an apple, we do not merely experience a representation of the apple inside our mind; we are, in some sense, identical with the apple. As Parks puts it, “Simply, the world is what you see. That is conscious experience.” Like Koch’s panpsychism, spread-mind theory attempts to recuperate the centrality of consciousness within the restrictions of materialism. Manzotti contends that we got off to a bad start, scientifically, back in the seventeenth century, when all mental phenomena were relegated to the subjective realm. This introduced the false dichotomy of subject and object and imagined humans as the sole perceiving agents in a universe of inert matter.

The article looks at Tim Parks's book, "Out of My Head: On the Trail of Consciousness," and may be of interest here.  Parks explores several theories which depart from the computational model of mind.

I also found a good summary piece on these issues in MIT Technology Review....

https://www.technologyreview.com/2021/08/25/1030861/is-human-brain-computer/

It has a nice pro-and-con exchange on the legitimacy of the IP (information-processing) model of mind.

 


On 12/23/2023 at 11:13 AM, TheVat said:

I also found a good summary piece on these issues in MIT Technology Review....

https://www.technologyreview.com/2021/08/25/1030861/is-human-brain-computer/

It has a nice pro-and-con exchange on the legitimacy of the IP (information-processing) model of mind.

 

It's been a complaint of mine that some people (or a lot of 'em, I don't know) just have no idea what algorithms are or how they work, and that article is a perfect example.

Points I've previously made regarding algorithms:

Quote

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?

All programs manipulate symbols this way. Program code itself contains no meaning. To machines, it consists of sequences to be executed with their payloads and nothing more.

 

Quote

 

The relationship between the shapes and sequences is arbitrarily defined and not causally determined. Operational rules are simply whatever is programmed in; they don't necessarily match any sort of worldly causation, because any such link would be an accidental feature of the program and not an essential one (i.e., present by happenstance, not by necessity). The program could be given any input to resolve, and the machine would comply not because it “understands” any worldly implications of either the input or the output, but simply because it's following the dictates of its programming.

A very rough example of pseudocode to illustrate this arbitrary relationship:

let p = "night"
input R
if R = "day" then print p + " is " + R

Now, if I type “day”, the output would be “night is day”. Great. Absolutely “correct output” according to its programming. It doesn’t necessarily “make sense”, but it doesn’t have to, because it’s the programming! The same goes for any other input that gets fed into the machine to produce output, e.g., “nLc is auS”, “e8jey is 3uD4”, and so on.

To the machine, code and inputs are nothing more than items and sequences to execute; the sequencing and execution activity carries no meaning for it. To the programmer, there is meaning, because he or she conceptualizes and understands variables as representative placeholders for conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.
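To make the arbitrariness concrete, here is a quick runnable Python sketch of the same point as the “shapes and sequences” example and the pseudocode above. This is my own illustration for this thread; the rule table and the fallback string are made-up examples, nothing more.

# A lookup-table "responder": it returns the prescribed output for each input
# purely by programmed rule, with no access to what any symbol means.
RULES = {
    "day": "night is day",      # arbitrary pairing chosen by the programmer
    "nLc": "nLc is auS",        # nonsense pairs are handled in exactly the same way
    "e8jey": "e8jey is 3uD4",
}

def respond(symbols):
    # Whether an entry "makes sense" to a human is irrelevant to execution;
    # the function never consults anything outside its rule table.
    return RULES.get(symbols, "no rule for: " + symbols)

print(respond("day"))    # -> night is day
print(respond("e8jey"))  # -> e8jey is 3uD4
print(respond("tree"))   # -> no rule for: tree

Swap in any other strings and the program behaves identically; nothing in its execution depends on the symbols meaning anything to it.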

 

Bishop pointed at one of his papers on his LinkedIn; I haven't had a chance to read it yet: https://www.researchgate.net/publication/225794615_Why_Computers_Can't_Feel_Pain


2 hours ago, StringJunky said:

Great, you are citing a position that you haven't read yet.

Great, you’re still trying hard to be useful in this thread.

People can’t be arsed to read my original post either, and I don’t see you pointing that out.


32 minutes ago, AIkonoklazt said:

Great, you’re still trying hard to be useful in this thread.

People can’t be arsed to read my original post either, and I don’t see you pointing that out.

You can't be arsed to answer anything that doesn't fit your view, however reasonable; it's a yin-yang thing.


Dennett fans, here's your Dennett paper!

https://web.ics.purdue.edu/~drkelly/DCDWhyCantMakeComputerFeelPain1978.pdf

In it he sure wrote a lot, but ended up just dumping the whole issue onto functionalism, which of course is a fail.

Why didn't he just change the title to "Why You Can't Make a Computer That Does Consciousness" since every single statement he makes in there referencing pain / pain production / whatever could just be rewritten to reference consciousness / consciousness "production"?

Someone can correct me if I'm wrong, provided that anyone reads it (nah)


19 hours ago, AIkonoklazt said:

Great, you’re still trying hard to be useful in this thread.

People can’t be arsed to read my original post either, and I don’t see you pointing that out.

Be nice... or take a long walk off a short pier.

 

6 hours ago, AIkonoklazt said:

Dennett fans, here's your Dennett paper!

https://web.ics.purdue.edu/~drkelly/DCDWhyCantMakeComputerFeelPain1978.pdf

In it he sure wrote a lot, but ended up just dumping the whole issue onto functionalism, which of course is a fail.

Why didn't he just change the title to "Why You Can't Make a Computer That Does Consciousness" since every single statement he makes in there referencing pain / pain production / whatever could just be rewritten to reference consciousness / consciousness "production"?

Someone can correct me if I'm wrong, provided that anyone reads it (nah)

Why are you bothering with us? Given your supposed experience in science discussions, you are awfully hot-headed. Do you think we should fawn and help you present your pet ideas? If you assert, expect resistance... tis science.


On 11/22/2023 at 1:57 PM, dimreepr said:

From your link:

Quote

“we know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”

To the best of our knowledge, this claim is still valid today.

Thanks! For some reason I did not initially notice your quote, @dimreepr.
It provides an answer to my initial question in this topic: is there a scientific law or theorem that makes artificial consciousness impossible?


19 hours ago, StringJunky said:

Why are you bothering with us? Given your supposed experience in science discussions, you are awfully hot-headed. Do you think we should fawn and help you present your pet ideas? If you assert, expect resistance... tis science.

Why are you replying if you don't have a point? Are you forgetting that "making a point" means "making a point regarding the topic at hand"?

Do you have the ability not to respond to a thread in which you have no actual argument to contribute?

Your first sentence is beyond ironic.

18 hours ago, Ghideon said:


It provides an answer to my initial question in this topic: is there a scientific law or theorem that makes artificial consciousness impossible?

If you count "there is no such thing as programming without programming" as a law in computational science, then there is still "a scientific law or theorem that makes artificial consciousness impossible".

UNLESS you subscribe to epiphenomenalism, which would push the issue onto functionalism, which can only be refuted via philosophical principles and not scientific law.


On 12/26/2023 at 12:40 AM, StringJunky said:

Be nice... or take a long walk off a short pier.


Moderator Note

To clarify, this Reported statement, along with similar ("Go jump in a lake"), is NOT a suggestion that a person commit suicide. It's an old-time admonition to go elsewhere, stop bothering us, get lost, scram. 

 

15 hours ago, AIkonoklazt said:

If you count "there is no such thing as programming without programming" as a law in computational science, then there is still "a scientific law or theorem that makes artificial consciousness impossible".

I do not count that statement ("there is no such thing as programming without programming") as a law in computational science. 


9 hours ago, Phi for All said:

Moderator Note

To clarify, this Reported statement, along with similar ("Go jump in a lake"), is NOT a suggestion that a person commit suicide. It's an old-time admonition to go elsewhere, stop bothering us, get lost, scram. 

 

I will continue to post relevant points made by people other than me, since I myself have already made all of my own points.

I am blocking @StringJunky, and this note is to let him know so he can block me as well. I'm not forcing anyone to be here.

4 hours ago, Ghideon said:

I do not count that statement ("there is no such thing as programming without programming") as a law in computational science. 

Then it is subject to philosophical principles, as I have indicated. This is still the philosophy subforum after all.


3 hours ago, AIkonoklazt said:

I will continue to post relevant points made by people other than me, since I myself have already made all of my own points.

Well, that makes no sense as a response to what I posted. I was trying to acknowledge a Reported Post without naming anyone, and felt the phrase in question needed some explanation since it was so badly misinterpreted. 

3 hours ago, AIkonoklazt said:

I am blocking @StringJunky, and this note is to let him know so he can block me as well. I'm not forcing anyone to be here.

You're blocking someone for basically saying "Be civil or get lost", a rephrasing of our prime rule? Got it.

 


18 hours ago, Phi for All said:

Well, that makes no sense as a response to what I posted. I was trying to acknowledge a Reported Post without naming anyone, and felt the phrase in question needed some explanation since it was so badly misinterpreted. 

You're blocking someone for basically saying "Be civil or get lost", a rephrasing of our prime rule? Got it.

 

My response is for anyone else who may feel trapped and obligated to read.

No. I'm blocking him because he's losing his marbles (e.g., feeling trapped by me merely posting to this thread, or whatever it is that's bugging him).


19 minutes ago, mar_mar said:

Does science operate with such a term as "subconscious"?

No. It’s largely a relic term, absent from peer-reviewed literature and used only to discuss topics with laypeople and to bait clicks.

