
Artificial Consciousness Is Impossible


AIkonoklazt


 

Alko:

The problem I find with your core thesis is that one can use the same argument to deny consciousness to any matter, even matter that grows from DNA coded instructions and which we call a person.  To clarify, let's take your opening comment,

"This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. "

Now I can substitute DNA coded life into that paragraph, like this:

This article is an attempt to explain why the cherished fiction of conscious beings is an impossibility. The very act of reproduction, resulting in DNA-directed design, is a transmission of impetus as an extension of the parents' desire, and not an infusion of conscious will.

Do you see the problem here?  Your formulation seems to be unwittingly sneaking in a sort of Cartesian dualism, where something immaterial must be "infused" in some mystical process.

But really, what does it matter (no pun intended) whether hardware that has the self-modifying features of a neural network (a connectome, in current parlance) is initiated in nucleotide chains or in some inorganic substrate?  Your thesis begs the question.


4 hours ago, TheVat said:

Alko:

The problem I find with your core thesis is that one can use the same argument to deny consciousness to any matter, even matter that grows from DNA coded instructions and which we call a person.  To clarify, let's take your opening comment,

etc

A much posher reply than my post +1

 

@Alkonoklazt

The Drake equation does not refer to any algorithmic method or Von Neumann machine.

It attempts to evaluate probability.

 

So please don't patronise me or try to foist explanations about matters I didn't mention.

 

 


On 5/29/2022 at 7:01 AM, AIkonoklazt said:

Informal introduction:

I've tried other places of debate and discussion (most notably Reddit and LinkedIn), but they inevitably devolve into hostility. Some are hostile and insulting from the get-go; others descend into it after a few messages. The Ars Technica forum locked me before I could even respond to questions. I'm going to give this a go one last time before giving online discussion forums a rest.

Purpose of Discussion:

To advance this specific topic through challenge. As of now, avenues of counterargumentation seem to have been exhausted; additional arguments I've received after the publication of my article all fell into categories that I've already addressed. I'm looking for types of counterarguments that I haven't seen.

The original article is linked for reference only (full text below): https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46


Full text of my article:


Artificial Consciousness Is Impossible
Conscious machines are staples of science fiction that are often taken for granted as articles of future fact, but they are not possible.

This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. The latter half of the article is dedicated to addressing counterarguments. Lastly, some implications of the title thesis are listed.

Intelligence versus consciousness
Intelligence is the ability of an entity to perform tasks, while consciousness refers to the presence of a subjective phenomenon.

Intelligence[1]:

“…the ability to apply knowledge to manipulate one’s environment”

Consciousness[2]:

“When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view.”

Requirements of consciousness
A conscious entity, i.e., a mind, must possess:

1. Intentionality[3]:

“Intentionality is the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs.”

Note that this is not a mere symbolic representation.

2. Qualia[4]:

“…the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia.”

Meaning and symbols
Meaning is a mental connection between something (concrete or abstract) and a conscious experience. Philosophers of mind call the power of the mind that enables these connections intentionality. Symbols only hold meaning for entities that have made connections between their conscious experiences and the symbols.

The Chinese Room, reframed
The Chinese Room is a philosophical argument and thought experiment published by John Searle in 1980[5]. In brief: a person who knows no Chinese sits in a room and, by following a rulebook of symbol-matching instructions, sends out appropriate Chinese replies to Chinese messages passed in, so that to outside observers the room appears to understand Chinese.



As it stands, the Chinese Room argument needs reframing. The person in the room has never made any connections between his or her conscious experiences and the Chinese characters; therefore neither the person nor the room understands Chinese. The central issue should be the absence of connecting conscious experiences, not whether there is a proper program that could turn anything into a mind (which amounts to saying that if a program X were good enough, it would understand statement S; a program is never going to be “good enough” precisely because it is a program, as I explain in a later section). The original, vague framing derailed the argument and made it more open to attacks. (One such attack resulting from the derailment was Sloman’s[6].)

The Chinese Room argument points out the legitimate issue that symbolic processing is not sufficient for meaning (syntax doesn’t suffice for semantics), but its framing leaves too much wiggle room for objections. Instead of asking whether a program could be turned into a mind, we delve into the fundamental nature of programs themselves.

Symbol Manipulator, a thought experiment
The basic nature of programs is that they are free of the conscious associations which compose meaning. Programming code contains meaning for humans only because the code is in the form of symbols that contain hooks to the readers’ conscious experiences. Searle’s Chinese Room argument serves the purpose of putting the reader of the argument in the place of someone who has had no experiential connections to the symbols in the programming code. Thus, the Chinese Room is a Language Room. The person inside the room doesn’t understand the meaning behind the programming code, while to the outside world it appears that the room understands a particular human language.

The Chinese Room Argument comes with another potentially undermining issue. The person in the Chinese Room was introduced as a visualization device to get the reader to “see” from the point of view of a machine. However, since a machine isn’t conscious and therefore can’t have a “point of view,” placing a person in the room invites the objection that “there’s a conscious person in the room doing conscious things.”

I will work around the POV issue and clarify the syntax versus semantics distinction by using the following thought experiment:

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?
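To make the exercise concrete, here is a minimal sketch in Python (the “shapes” and the response rules are made up purely for illustration):

# A bare symbol manipulator: memorized input sequences map to memorized
# output sequences. Nothing here ties any symbol to an experience;
# it is lookup and recital, nothing more.
RESPONSE_RULES = {
    ("circle", "square"): ("triangle", "triangle"),
    ("square", "star"): ("circle",),
}

def respond(shapes):
    # Return the prescribed answer sequence, or nothing if the input
    # sequence was never memorized.
    return RESPONSE_RULES.get(tuple(shapes), ())

print(respond(["circle", "square"]))  # ('triangle', 'triangle')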

All programs manipulate symbols this way. Program codes themselves contain no meaning. To machines, they are sequences to be executed with their payloads and nothing more, just like how the Chinese characters in the Chinese Room are payloads to be processed according to sequencing instructions given to the Chinese-illiterate person and nothing more.

Not only does the Symbol Manipulator thought experiment generalize programming code; with its sequences and payloads, it is also a generalization of an algorithm: “A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.[7]”

The relationship between the shapes and sequences is arbitrarily defined and not causally determined. Operational rules are simply whatever is programmed in; they need not match any worldly causation, because any such link would be an accidental rather than an essential feature of the program (i.e., present by happenstance, not by necessity). The program could be given any input to resolve, and the machine would comply not because it “understands” any worldly implications of the input or the output but simply because it is following the dictates of its programming.

A very rough example of pseudocode to illustrate this arbitrary relationship:

let p = "night"
input R
if R = "day" then print p + " is " + R

Now, if I type “day”, then the output would be “night is day”. Great. Absolutely “correct output” according to its programming. It doesn’t necessarily “make sense” but it doesn’t have to because it’s the programming! The same goes with any other input that gets fed into the machine to produce output e.g., “nLc is auS”, “e8jey is 3uD4”, and so on.
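For anyone who wants to run it, a rough Python rendering of the same toy (again, purely illustrative) would be:

# The mapping from input to output is whatever the programmer wrote;
# "making sense" never enters into it.
p = "night"
R = input()
if R == "day":
    print(p + " is " + R)  # typing "day" prints "night is day"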

To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine. To the programmer, there is meaning because he or she conceptualizes and understands variables as representative placeholders of their conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.

Understanding Rooms — Machines ape understanding
The room metaphor extends to all artificially intelligent activities. Machines only appear to deal with meaning; they ultimately translate everything into machine-language instructions at a level that is devoid of meaning before and after execution and is concerned with execution alone (this is the mechanism underlying all machine program execution, illustrated by the shape-memorization thought experiment above; a program only contains meaning for the programmer). The Chinese Room and Symbol Manipulator thought experiments show that while our minds understand and deal with concepts, machines don’t; they deal only with sequences and payloads. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. Machines that appear to understand language and meaning are by their nature “Understanding Rooms” that only take on the outward appearance of understanding.

Learning Rooms — Machines never actually learn, partly because the mind isn’t just a physical information processor
The direct result of a machine’s complete lack of any possible genuine comprehension and understanding is that machines can only be Learning Rooms that appear to learn but never actually learn. Considering this, “machine learning” is a widely misunderstood and arguably oft-abused term.

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:
 


Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].

Possessing only physical information and doing so without comprehension, machines hack the activity of learning by engaging in ways that defy the experiential context of the activity. A good example is how a computer artificially adapts to a video game with brute force instead of learning anything[10].

In the case of “learning to identify pictures,” machines are shown anywhere from a couple hundred thousand to millions of pictures, and through many failures of seeing “gorilla” in bundles of “not gorilla” pixels they eventually come to match bunches of on-screen pixels to the term “gorilla”… except that they don’t even do it that well all of the time[11].

Needless to say, “increasing performance of identifying gorilla pixels” through intelligence is hardly the same thing as “learning what a gorilla is” through conscious experience. Mitigating this sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything[12].

“Learning machines” are “Learning Rooms” that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning and simulate the result of learning, but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla for those that compose an image of a dark-skinned human being. Machines don’t learn; they pattern match and only pattern match. There’s no actual personal experience associating a person’s face with that of a gorilla’s. When was the last time a person honestly mistook an animal’s face for a human’s? Sure, we may see resemblances and deem those animal faces to be human-like, but we recognize them as resemblances and not actual matches. Machines are fooled by “abstract camouflage,” adversarially generated images, for the same reason[13]. These mistakes are mere symptoms of a lack of genuine learning; machines still wouldn’t be learning even if they gave perfect results. Fundamentally, “machine learning” is every bit as distant from actual learning as the simple spreadsheet database updates mentioned in the AI textbook earlier.
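As a toy illustration of what “pattern matching and only pattern matching” amounts to, consider a bare nearest-neighbour labeller (a sketch, not any production system): it returns whichever stored label is numerically closest, with no notion of what either label refers to.

# Toy nearest-neighbour labeller: stores example pixel vectors with labels
# and returns the label of the closest stored example. There is no concept
# of "gorilla" or "car" here, only distances between numbers.
def nearest_label(examples, pixels):
    best_label, best_dist = None, float("inf")
    for stored_pixels, label in examples:
        dist = sum((a - b) ** 2 for a, b in zip(stored_pixels, pixels))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

examples = [([0.9, 0.8, 0.7], "gorilla"), ([0.2, 0.3, 0.1], "car")]
print(nearest_label(examples, [0.85, 0.75, 0.72]))  # "gorilla"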

Volition Rooms — Machines can only appear to possess intrinsic impetus
The fact that machines are programmed dooms them to being appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it. There’s no such thing as a “design without a design” or “programming without programming.” A machine’s operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as “a program/machine evolved” (who designed the evolutionary algorithm?), “no one knows how the resulting program in the black box came about” (who programmed the program which produced the resultant code?), “the neural net doesn’t have a program” (who wrote the neural net’s algorithm?), “the machine learned and adapted” (it doesn’t “learn”; who determined how it would adapt?), and “there’s self-modifying code” (what determines the behavior of this so-called “self-modification”? It isn’t the “self”). There’s no hiding or escaping from what ultimately produces the behaviors: the programmers’ programming.
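To make the point about “evolved” programs concrete, here is a deliberately tiny evolutionary search (a sketch, not any particular system). The final value isn’t written anywhere in advance, yet every rule that produces it (the fitness function, the mutation step, the selection policy) was supplied by the programmer:

import random

random.seed(0)  # fixed seed so the run is repeatable

# Everything that shapes the outcome is programmer-supplied:
def fitness(x):
    return -abs(x - 42)               # the programmer decided what counts as "good"

def mutate(x):
    return x + random.uniform(-1, 1)  # the programmer decided how candidates vary

population = [random.uniform(0, 100) for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]          # the programmer's selection rule
    population = [mutate(random.choice(parents)) for _ in range(20)]

print(round(max(population, key=fitness)))  # settles near 42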

Let’s take another look at Searle’s Chinese Room. Who or what wrote the program that the man in the Chinese Room followed? Certainly not the man, because he doesn’t know Chinese, and certainly not the Chinese Room itself. As indicated earlier in the passage regarding learning, this Chinese Room didn’t “learn Chinese” just by having instructions placed into the room, any more than a spreadsheet “learns” items written onto it. Neither the man nor the Chinese Room was “speaking Chinese”; they were merely following the instructions of the Chinese-speaking programmer of the Chinese Room.

It’s easy to see how terms such as “self-driving cars” aren’t exactly apt when programmers programmed their driving. This means that human designers are ultimately responsible for a machine’s failures when it comes to programming; anything else would be an attempt to shirk responsibility. “Autonomous vehicles” are hardly autonomous. They no more learn how to drive or drive themselves than a Chinese Room learns Chinese or speaks Chinese. Designers and programmers are the sources of a machine’s apparent volition.

Consciousness Rooms — Conclusion, machines can only appear to be conscious
Artificial intelligence that appears to be conscious is a Consciousness Room, an imitation with varying degrees of success. As I have shown, such systems are capable of neither understanding nor learning. Not only that, they are incapable of possessing volition. Artificial consciousness is impossible due to the extrinsic nature of programming, which is bound to syntax and devoid of meaning.



Responses to counterarguments
The following segments are responses to specific categories of counterarguments against my thesis. Please note that these responses do not stand on their own and can only be seen as supporting my main arguments above. Each response only applies to those who hold the corresponding objections.

Circularity
From the conclusion, operating beyond syntax requires meaning derived from conscious experience. This may make the argument appear circular (assuming what it’s trying to prove) when conscious experience was mentioned at the very beginning of the argument as a defining component of meaning.

However, the initial proposition defining meaning (“Meaning is a mental connection with a conscious experience”) wasn’t given validity as a result of the conclusion or anything following the conclusion; it was an observation independent of the conclusion.

Functionalist objections (My response: They fail to account for underdetermination)
Many objections come in one form of functionalism or another. That is, they all go along one or more of these lines:

· If we know what a neuron does, then we know what the brain does.

· If we can copy a brain or reproduce collections of neurons, then we can produce artificial consciousness

· If we can copy the functions of a brain, we can produce artificial consciousness

No functionalist arguments work here, because to duplicate any function there must be ways of ensuring all functions and their dependencies are visible and measurable. There is no “copying” something that’s underdetermined. The functionalist presumptions of “if we know/if we can copy” are invalid.

Underdetermination entails that no such exhaustive modeling of the brain is possible, as explained by the following passage from the SEP (emphasis mine)[14]:
 


In short, we have no assurances that we could engineer anything “like X” when we can’t have total knowledge of this X in the first place. There could be no assurances of a complete model due to underdetermination. Functionalist arguments fail because correlations in findings do not imply causation, and those correlations must be 100% discoverable to have an exhaustive model. There are multiple theoretical strikes against a functionalist position even before looking at actual experiments such as this one:

Repeated stimulations of identical neuron groups in the brain of a fly produce random results. This physically demonstrates the underdetermination[15]:
 

In the above-quoted passage, note all instances of the phrases “may be” and “could be.” They are indications of underdetermined factors at work. No exhaustive modeling is possible when there are multiple possible explanations from random experimental results.

Functionalist Reply: “…but we don’t need exhaustive modeling or functional duplication”
Yes, we do, because there isn’t any assurance that consciousness is produced otherwise. A plethora of functions and behaviors can be produced without introducing consciousness; there are no real measurable external indicators of success. See the section “Behaviorist objections” below.

Behaviorist objections
These counterarguments generally say that if we can reproduce conscious behaviors, then we have produced consciousness. For instance, I completely disagree with a Scientific American article claiming the existence of a test for detecting consciousness in machines[16].

Observable behaviors don’t mean anything, as the original Chinese Room argument had already demonstrated. The Chinese Room only appears to understand Chinese. The fact that machine learning doesn’t equate to actual learning also attests to this.

Emergentism via machine complexity
Counterexamples to complexity emergentism are easy to find: a phone’s processor contains orders of magnitude more transistors than a fruit fly’s brain has neurons, so why isn’t a smartphone more conscious than a fruit fly? What about supercomputers that have millions of times more transistors? How about space launch systems that are even more complex in comparison… are they conscious? Consciousness doesn’t arise out of complexity.

Cybernetics and cloning
If living entities are involved then the subject is no longer that of artificial consciousness. Those would be cases of manipulation of innate consciousness and not any creation of artificial consciousness.

“Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”
The substrate has nothing to do with the issue. All artificially intelligent systems require algorithms and code. All are subject to programming in one way or another. It doesn’t matter how far in the future one goes or what substrate one uses; the fundamentally syntactic nature of machine code remains. Name one single artificial intelligence project that doesn’t involve any code whatsoever. Name one way that an AI can violate the principle of noncontradiction and possess programming without programming (see the section “Volition Rooms” above).

“We have DNA and DNA is programming code”
DNA is not programming code. Genetic makeup only influences and does not determine behavior. DNA doesn’t function like machine code, either. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows their workings to be underdetermined, while programming code is functionally determinate in contrast (There’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do. See section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

“But our minds also manipulate symbols”
Just because our minds can deal with symbols doesn’t mean they operate symbolically. We can experience and recollect things for which we have not yet formulated proper descriptions[18]. In other words, we can have indescribable experiences. We start with non-symbolic experiences, then subsequently concoct symbolic representations for them in our attempts to rationally organize and communicate those experiences.

A personal anecdotal example: My earliest childhood memory is of lying on a bed looking at an exhaust fan on a window. I remember what I saw back then, even though at the time I was too young to have learned words and terms such as “bed,” “window,” “fan,” “electric fan,” or “electric window exhaust fan.” Sensory and emotional recollections can be described with symbols, but the recollected experiences themselves aren’t symbolic.

Furthermore, the medical phenomenon of aphantasia demonstrates visual experiences to be categorically separate from descriptions of them[19].

Randomness and random number generators
Randomness is a red herring when it comes to serving as an indicator of consciousness (not to mention the dubious nature of all external indicators, as shown by the Chinese Room Argument). A random number generator inside a machine would simply be providing another input, ultimately serving only to generate more symbols to manipulate.
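A quick sketch of the point (illustrative only): whatever the random number generator emits is consumed by exactly the same rule-following machinery as any other symbol.

import random

# The random draw is just another symbol; the machine applies the same
# fixed rules to it that it applies to any other input.
RULES = {"circle": "triangle", "square": "star", "star": "circle"}
drawn = random.choice(list(RULES))
print(drawn, "->", RULES[drawn])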

“We have constructed sophisticated functional neural computing models”
The existence of sophisticated functional models in no way helps functionalists escape the functionalist trap. Those models are still heavily underdetermined, as shown by a recent example of an advanced neural learning algorithm[20].

The model is very sophisticated, but note just how much underdetermined couching it contains:
 

Models are far from reflecting the functioning neural groups present in living brains; I highly doubt that any researcher would make such a claim, for that’s not their goal in the first place. Models can and do produce useful functions and be practically “correct,” even if those models are factually “wrong” in that they don’t necessarily correspond to actuality in function. In other words, models don’t have to correspond 100% to reality for them to work, thus their factual correctness is never guaranteed. For example, orbital satellites could still function without considering relativistic effects because most relativistic effects are too small to be significant in satellite navigation[21].

“Your argument only applies to Von Neumann machines”
It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights. The programming language of a catapult is contained within the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more “does things on its own” than a catapult flings by itself.

“Your thought experiment is an intuition pump”
In order to take this avenue of criticism, one would have to demonstrate the alleged abuse in reasoning I supposedly engage in. Einstein also used “folk” concepts in his thought experiments regarding reference frames[23], so are thought experiments being discredited en masse here, or just mine? It’s a failure to field a clear criticism, and a vague reply of “thought experiments can be abused” is unproductive. Do people think my analogy is even worse than the stale stratagem of casting the mind as an analog of the prevailing technology of the day: first hydraulics, then telephones, then electrical fields, and now computers[24]? Would people feel better if they performed my experiment with patterned index cards they can hold in their hands instead? The criticism needs to be specific.

Lack of explanatory power (My response: Demonstrating the falsity of existing theories doesn’t demand yet another theory)
Arguing for or against the possibility of artificial consciousness doesn’t give much of an inroad into the actual nature of consciousness, but that doesn’t detract from the thesis, because the goal here isn’t to explicitly define the nature of consciousness. “What consciousness is” (i.e., its nature) isn’t being explored here as much as “what consciousness doesn’t entail,” which can still be determined via its requirements. There have been theories surrounding the differing “conscious potential” of various physical materials, but those theories have largely shown themselves to be bunk[25]. Explanatory theories are neither needed for my thesis nor productive in proving or disproving it. The necessary fundamental principles were already provided (see the section “Requirements of consciousness”).

On panpsychism
(A topic that has been popular on SA in recent years[26])

I don’t subscribe to panpsychism, but even if panpsychism is true, the subsequently possible claim that “all things are conscious” is still false because it commits a fallacy of division. There is a difference in kind between everything taken as a whole and every single thing. The purported universal consciousness of panpsychism, if it exists, would not be of the same kind as the ordinary consciousness found in living entities.

Some examples of such categorical differences: Johnny sings, but his kidneys don’t. Johnny sees, but his toenails don’t. Saying that a lamp is conscious in one sense of the word simply because it belongs in a universe that is “conscious” in another would be committing just as big of a category mistake as saying that a kidney sings or a toenail sees.

A claim that all things (including an AI) are conscious as a result of universal consciousness would be conflating two categories simply due to the lack of terms separating them. Just because the term “consciousness” connects all things for adherents of universal consciousness doesn’t mean the term itself should be used equivocally. Panpsychist philosopher David Chalmers writes[27]:
 


“If it looks like a duck…” (A tongue-in-cheek rebuke to a tongue-in-cheek behaviorist challenge)
If it looks like a duck, swims like a duck, and quacks like a duck, but you know that the duck is an AI duck, then you have a fancy duck automaton. “But hold on, what if no one could tell?” Then it’s a fancy duck automaton that no one could tell from an actual duck, probably because all of its manufacturing documentation was destroyed and the programmer died without being able to tell anyone that it’s an AI duck… It’s still not an actual duck, however. Cue responses such as “Then we can get rid of all evidence of manufacturing” and other quips which I deem grasping at straws and intellectually dishonest. If someone constructs a functionally perfect and visually indistinguishable artificial duck just to prove me wrong, then that’s a waste of effort; its identity would have to be revealed for the point to be “proven.” At that point, the revelation would prove me correct instead.

The “duck reply” is another behaviorist objection rendered meaningless by the Chinese Room Argument (see the section “Behaviorist objections” above).

“You can’t prove to me that you’re conscious”
This denial games the same empirically non-demonstrable fact as the non-duck duck objection above. We’re speaking of metaphysical facts, not the mere ability or inability to obtain them. That being said, either acknowledging or skeptically denying consciousness should start with the question “Do I deny the existence of my own consciousness?” and not “Prove yours to me.”

There is no denying the existence of one’s own consciousness, and it would be an exercise in absurdity to question it in other people once we acknowledge ourselves to be conscious. When each of us encounters another person, do we first assume the possibility that we’re merely encountering a facsimile of a person, then check to see whether that person is a person, before finally starting to think of the entity as a person upon satisfaction? No, not unless one is suffering from delusional paranoia. We wouldn’t want to create a world where this absurd paranoia becomes feasible, either (see the section below).

Some implications with the impossibility of artificial consciousness
1. AI should never be given moral rights. Because they can never be conscious, they are less deserving of rights than animals. At least animals are conscious and can feel pain[28].

2. AI that takes on extremely close likeness to human beings in both physical appearance, as well as behavior (i.e., crossing the Uncanny Valley), should be strictly banned in the future. Allowing them to exist only creates a world immersed in absurd paranoia (see section above). Based on my observations, many people are confused enough on the subject of machine consciousness as-is, by the all-too-common instances of what one of my colleagues called “bad science fiction.”

3. Consciousness could never be “uploaded” into machines. Any attempt at doing so and then “retiring” the original body before its natural lifespan would be an act of suicide. Any complete Ship of Theseus-styled bit-by-bit machine “replacement” would gradually result in the same.

4. Any disastrous AI “calamity” would be caused by bad design/programming and only bad design/programming.

5. Human beings are wholly responsible for the actions of their creations, and corporations should be held responsible for the misbehavior of their products.

6. We’re not living in a simulation. Those speculations are nonsensical per my thesis:

Given that artificial consciousness is impossible:

- Simulated environments are artificial (by definition.)

- Should we exist within such an environment, we must not be conscious. Otherwise, our consciousness would be part of an artificial system- Not possible due to the impossibility of artificial consciousness.

- However, we are conscious.

- Therefore, we’re not living in a simulation.
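One way to spell out the structure, with S standing for “we live in a simulation,” C for “we are conscious,” and A for “an artificial system hosts consciousness,” is as a simple modus tollens:

\[ (S \land C) \rightarrow A, \qquad \neg A, \qquad C \;\;\vdash\;\; \neg S \]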

References
[1] merriam-webster.com, “Intelligence” (2021), https://www.merriam-webster.com/dictionary/intelligence

[2] Internet Encyclopedia of Philosophy, “Consciousness” (2021), https://iep.utm.edu/consciou/

[3] Stanford Encyclopedia of Philosophy, “Intentionality” (2019), https://plato.stanford.edu/entries/intentionality/

[4] Stanford Encyclopedia of Philosophy, “Qualia” (2017), http://plato.stanford.edu/entries/qualia/

[5] Stanford Encyclopedia of Philosophy, “The Chinese Room Argument” (2020), https://plato.stanford.edu/entries/chinese-room/

[6] A. Sloman, Did Searle Attack Strong Strong or Weak Strong AI? (1985), Artificial Intelligence and Its Applications, A.G. Cohn and J.R. Thomas (Eds.) John Wiley and Sons 1986.

[7] Oxford English Dictionary, “algorithm” (2021), https://www.lexico.com/en/definition/algorithm

[8] T. Mitchell, Machine Learning (1997), McGraw-Hill Education (1st ed.)

[9] Stanford Encyclopedia of Philosophy, “Qualia: The Knowledge Argument” (2019), https://plato.stanford.edu/entries/qualia-knowledge/

[10] V. Highfield, AI Learns To Cheat At Q*Bert In A Way No Human Has Ever Done Before (2018), https://www.alphr.com/artificial-intell ... one-before

[11] J. Vincent, Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech (2018), https://www.theverge.com/2018/1/12/1688 ... gorithm-ai

[12] H. Sikchi, Towards Safe Reinforcement Learning (2018), https://medium.com/@harshitsikchi/towar ... b7caa5702e

[13] D. G. Smith, How to Hack an Intelligent Machine (2018), https://www.scientificamerican.com/arti ... t-machine/

[14] Stanford Encyclopedia of Philosophy, “Underdetermination of Scientific Theory” (2017), https://plato.stanford.edu/entries/scie ... rmination/

[15] L. Sanders, Ten thousand neurons linked to behaviors in fly (2014), https://www.sciencenews.org/article/ten ... aviors-fly

[16] S. Schneider and E. Turner, Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware (2017), https://blogs.scientificamerican.com/ob ... elf-aware/

[17] V. Greenwood, Theory Suggests That All Genes Affect Every Complex Trait (2018), https://www.quantamagazine.org/omnigeni ... -20180620/

[18] D. Robson, The ‘untranslatable’ emotions you never knew you had (2017), https://www.bbc.com/future/article/2017 ... ew-you-had

[19] C. Zimmer, Picture This? Some Just Can’t (2015), https://www.nytimes.com/2015/06/23/scie ... blind.html

[20] R. Urbanczik, Learning by the dendritic prediction of somatic spiking (2014), Neuron. 2014 Feb 5;81(3):521–8.

[21] Ž. Hećimović, Relativistic effects on satellite navigation (2013), Tehnicki Vjesnik 20(1):195–203

[22] K. Patowary, Vladimir Lukyanov’s Water Computer (2019), https://www.amusingplanet.com/2019/12/v ... puter.html

[23] Stanford Encyclopedia of Philosophy, “Thought Experiments” (2019), https://plato.stanford.edu/entries/thought-experiment/

[24] M. Cobb, Why your brain is not a computer (2020), https://www.theguardian.com/science/202 ... sciousness

[25] M. A. Cerullo, The Problem with Phi: A Critique of Integrated Information Theory (2015), PLoS Comput Biol. 2015 Sep; 11(9): e1004286. Konrad P. Kording (Ed.)

[26] Various authors, Retrieved list of scientificamerican.com articles on Panpsychism for illustrative purposes (2021 April 22), https://www.scientificamerican.com/sear ... anpsychism

[27] D. J. Chalmers, Panpsychism and Panprotopsychism, The Amherst Lecture in Philosophy 8 (2013): 1–35

[28] M. Bekoff, Animal Consciousness: New Report Puts All Doubts to Sleep (2018), https://www.psychologytoday.com/us/blog ... ubts-sleep

The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will.

Very interesting opening premise.  I see some connections with your ideas and Roger Penrose's work.  


27 minutes ago, Alex_Krycek said:

The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will.

Very interesting opening premise.  I see some connections with your ideas and Roger Penrose's work.  

If you can't tell the difference from the behaviour, there is no way of knowing, so it's conscious.  Your assertion invokes metaphysics. I'm with Vat:


But really, what does it matter (no pun intended) whether hardware that has the self-modifying features of a neural network (a connectome, in current parlance) is initiated in nucleotide chains or in some inorganic substrate?

 


6 minutes ago, StringJunky said:

If you can't tell the difference from the behaviour, there is no way of knowing, so it's conscious.  Your assertion invokes metaphysics. I'm with Vat:

 

This assumes that reality is purely subjective based on one's own perception.  I disagree.  


25 minutes ago, Alex_Krycek said:

This assumes that reality is purely subjective based on one's own perception.  I disagree.  

There is no point in invoking an external reality because the best we have is intersubjective consensus, which can include experiments as part of that. Basically, if we all agree, it is what it is until we know otherwise.

An analogy: Picture yourself floating in space with nothing else in sight and you feel no acceleration. A rock is heading in your direction and it gets bigger. Is it getting bigger because it is moving towards you, or is it because you are moving towards it, or are both of you moving towards each other? What's the reality? The underlying reality is unknowable. The same with this subject. If it walks like a duck... what more can you ask?


3 hours ago, StringJunky said:

There is no point in invoking an external reality because the best we have is intersubjective consensus, which can include experiments as part of that. Basically, if we all agree, it is what it is until we know otherwise.

This argument brings to mind a film called Ex Machina, which I saw recently.  An employee of a fictional big tech firm is tasked with assessing whether a robot truly possesses AI.  Part of the dilemma is whether the robot has been purposefully crafted to present the illusion of AI - that is, to present to the observer all the criteria that would qualify it as true AI.  The issue, of course, is that this would be a mere charade, a parlor trick.

So the creator of the robot had a secret test: to see if the robot could manipulate the human observer into letting it escape the research facility.  The observer, being somewhat gullible and inexperienced with the opposite sex, was duped by the robot, thus it was able to escape, and passed the test as being truly "intelligent".  


14 hours ago, Endy0816 said:

But it wouldn't remain just an input. The rules being applied can change the rules themselves in a nondeterministic fashion.

Programs do all exist in some form physically too. They're not really the symbols we might represent their doings as.

What is this "it" that you're speaking of? What rules, and what exactly is changing them? You're being too vague; please clarify.

It doesn't matter what form programs take. I've already listed some examples, including catapults and water computers.

10 hours ago, studiot said:

A much posher reply than my post +1

 

@Alkonoklazt

The Drake equation does not refer to any algorithmic method or Von Neumann machine.

It attempts to evaluate probability.

 

So please don't patronise me or try to foist explanations about matters I didn't mention.

 

 

I'm not patronizing you. I wasn't talking about probability in the article and I don't know what made you think so.

14 hours ago, TheVat said:

 

Alko:

The problem I find with your core thesis is that one can use the same argument to deny consciousness to any matter, even matter that grows from DNA coded instructions and which we call a person.  To clarify, let's take your opening comment,

"This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. "

Now I can substitute DNA coded life into that paragraph, like this:

This article is an attempt to explain why the cherished fiction of conscious beings is an impossibility. The very act of reproduction, resulting in DNA-directed design, is a transmission of impetus as an extension of the parents' desire, and not an infusion of conscious will.

Do you see the problem here?  Your formulation seems to be unwittingly sneaking in a sort of Cartesian dualism, where something immaterial must be "infused" in some mystical process.

But really, what does it matter (no pun intended) whether hardware that has the self-modifying features of a neural network (a connectome, in current parlance) is initiated in nucleotide chains or in some inorganic substrate?  Your thesis begs the question.

You skipped the section titled “We have DNA and DNA is programming code”

This was already addressed. The passage explained why DNA is a bad analogy.

I had already mentioned how substrate isn't an issue. See section titled “Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”

I hope this isn't going to be another instance of me just pointing back to whatever was already addressed in the article.

Also, I should point out that DNA isn't participating in "directed design" because evolution isn't a process brought about by design (unless you're arguing for intelligent design).

Not sure what you're talking about when you refer to dualism. The difference between subjective feelings and non-subjective data isn't a difference between material and "immaterial." I'm not the one inserting a particular metaphysic here.

There's no "self-modification" in machine hardware or software. This was covered in section "Volition Rooms — Machines can only appear to possess intrinsic impetus"

7 hours ago, StringJunky said:

If you can't tell the difference from the behaviour, there is no way of knowing, so it's conscious.  Your assertion invokes metaphysics. I'm with Vat:

 

See the sections "Behaviorist objections" and "The Chinese Room, reframed." By that logic, the person outside has no way of knowing what's in the Chinese Room, so there must be someone who understands Chinese inside the room. However, that's not the case.


On 5/29/2022 at 7:01 AM, AIkonoklazt said:

Informal introduction:

I've tried other places of debate and discussion (most notably Reddit and LinkedIn), but they inevitably devolve into hostility. Some are hostile and insulting from the getgo, others descend into it after a few messages. Ars Technica forum locked me even before I could even respond to questions. I'm going to give this a go one last time before giving online discussion forums a rest.

Purpose of Discussion:

To advance this specific topic through challenge. As of now, avenues of counterargumentation seem to have been exhausted; Additional arguments I've received after the publication of my article all fell into categories that I've already addressed. I'm looking for types of counterarguments that I haven't seen.

Link to the original article is linked for reference only (full text below): https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46


Full text of my article:


Artificial Consciousness Is Impossible
Conscious machines are staples of science fiction that are often taken for granted as articles of future fact, but they are not possible.

This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. The latter half of the article is dedicated to addressing counterarguments. Lastly, some implications of the title thesis are listed.

Intelligence versus consciousness
Intelligence is the ability of an entity to perform tasks, while consciousness refers to the presence of a subjective phenomenon.

Intelligence[1]:

“…the ability to apply knowledge to manipulate one’s environment”

Consciousness[2]:

“When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view.”

Requirements of consciousness
A conscious entity, i.e., a mind, must possess:

1. Intentionality[3]:

“Intentionality is the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs.”

Note that this is not a mere symbolic representation.

2. Qualia[4]:

“…the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia.”

Meaning and symbols
Meaning is a mental connection between something (concrete or abstract) and a conscious experience. Philosophers of Mind describe the power of the mind that enables these connections intentionality. Symbols only hold meaning for entities that have made connections between their conscious experiences and the symbols.

The Chinese Room, reframed
The Chinese Room is a philosophical argument and thought experiment published by John Searle in 1980[5]:



As it stands, the Chinese Room argument needs reframing. The person in the room has never made any connections between his or her conscious experiences and the Chinese characters, therefore neither the person nor the room understands Chinese. The central issue should be with the absence of connecting conscious experiences, and not whether there is a proper program that could turn anything into a mind (Which is the same as saying if a program X is good enough it would understand statement S. A program is never going to be “good enough” because it’s a program as I will explain in a later section). This original vague framing derailed the argument and made it more open to attacks. (One of such attacks as a result of the derailment was Sloman’s[6])

The Chinese Room argument points out the legitimate issue of symbolic processing not being sufficient for any meaning (syntax doesn’t suffice for semantics) but with framing that leaves too much wiggle room for objections. Instead of looking at whether a program could be turned into a mind, we instead delve into the fundamental nature of programs themselves.

Symbol Manipulator, a thought experiment
The basic nature of programs is that they are free of conscious associations which compose meaning. Programming codes contain meaning to humans only because the code is in the form of symbols that contain hooks to the readers’ conscious experiences. Searle’s Chinese Room argument serves the purpose of putting the reader of the argument in place of someone that has had no experiential connections to the symbols in the programming code. Thus, the Chinese Room is a Language Room. The person inside the room doesn’t understand the meaning behind the programming code, while to the outside world it appears that the room understands a particular human language.

The Chinese Room Argument comes with another potentially undermining issue. The person in the Chinese Room was introduced as a visualization device to get the reader to “see” from the point of view of a machine. However, since a machine can’t have a “point of view” because it isn’t conscious, having a person in the room creates a problem where the possible objection of “there’s a conscious person in the room doing conscious things” arises.

I will work around the POV issue and clarify the syntax versus semantics distinction by using the following thought experiment:

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?

All programs manipulate symbols this way. Program codes themselves contain no meaning. To machines, they are sequences to be executed with their payloads and nothing more, just like how the Chinese characters in the Chinese Room are payloads to be processed according to sequencing instructions given to the Chinese-illiterate person and nothing more.

Not only does it generalizes programming code, the Symbol Manipulator thought experiment, with its sequences and payloads, is a generalization of an algorithm: “A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.[7]”

The relationship between the shapes and sequences is arbitrarily defined and not causally determined. Operational rules are what’s simply programmed in, not necessarily matching any sort of worldly causation because any such links would be an accidental feature of the program and not an essential feature (i.e., by happenstance and not necessity.) The program could be given any input to resolve and the machine would follow not because it “understands” any worldly implications of either the input or the output but simply because it’s following the dictates of its programming.

A very rough example of pseudocode to illustrate this arbitrary relationship:

let p=”night”

input R

if R=”day” then print p+”is”+R

Now, if I type “day”, then the output would be “night is day”. Great. Absolutely “correct output” according to its programming. It doesn’t necessarily “make sense” but it doesn’t have to because it’s the programming! The same goes with any other input that gets fed into the machine to produce output e.g., “nLc is auS”, “e8jey is 3uD4”, and so on.

To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine. To the programmer, there is meaning because he or she conceptualizes and understands variables as representative placeholders of their conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.

Understanding Rooms — Machines ape understanding
The room metaphor extends to all artificially intelligent activities. Machines only appear to deal with meaning, when they ultimately translate everything to machine language instructions at a level that is devoid of meaning before and after execution and is only concerned with execution alone (The mechanism underlying all machine program execution illustrated by the shape memorization thought experiment above. A program only contains meaning for the programmer). The Chinese Room and the Symbol Manipulator thought experiments show that while our minds understand and deal with concepts, machines don’t and only deal with sequences and payloads. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. Machines that appear to understand language and meaning are by their nature “Understanding Rooms” that only take on the outward appearance of understanding.

Learning Rooms- Machines never actually learn, partly because the mind isn’t just a physical information processor
The direct result of a machine’s complete lack of any possible genuine comprehension and understanding is that machines can only be Learning Rooms that appear to learn but never actually learn. Considering this, “machine learning” is a widely misunderstood and arguably oft-abused term.

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:
 


Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].

Possessing only physical information and doing so without comprehension, machines hack the activity of learning by engaging in ways that defy the experiential context of the activity. A good example is how a computer artificially adapts to a video game with brute force instead of learning anything[10].

In the case of “learning to identify pictures”, machines are shown a couple hundred thousand to millions of pictures, and through lots of failures of seeing “gorilla” in bundles of “not gorilla” pixels to eventually correctly matching bunches of pixels on the screen to the term “gorilla”… except that it doesn’t even do it that well all of the time[11].

Needless to say, “increasing performance of identifying gorilla pixels” through intelligence is hardly the same thing as “learning what a gorilla is” through conscious experience. Mitigating this sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything[12].

“Learning machines” are “Learning Rooms” that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning as well as simulate the result of learning but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla with those that compose an image of a dark-skinned human being. Machines don’t learn- They pattern match and only pattern match. There’s no actual personal experience associating a person’s face with that of a gorilla’s. When was the last time a person honestly mistakes an animal’s face with a human’s? Sure, we may see resemblances and deem those animal faces to be human-like, but we only recognize them as resemblances and not actual matches. Machines are fooled by “abstract camouflage”, adversarially generated images for the same reason[13]. These mistakes are mere symptoms of a lack of genuine learning; machines still wouldn’t be learning even if they give perfect results. Fundamentally, “machine learning” is every bit as distant from actual learning as the simple spreadsheet database updates mentioned in the AI textbook earlier.

Volition Rooms — Machines can only appear to possess intrinsic impetus
The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it. There’s no such thing as a “design without a design” or “programming without programming.” A machine’s operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as “a program/machine evolved,” (Who designed the evolutionary algorithm?) “no one knows how the resulting program in the black box came about,” (Who programmed the program which produced the resultant code?) “The neural net doesn’t have a program,” (Who wrote the neural net’s algorithm?) “The machine learned and adapted,” (It doesn’t “learn…” Who determined how it would adapt?) and “There’s self-modifying code” (What determines the behavior of this so-called “self-modification,” because it isn’t “self.”) There’s no hiding or escaping from what ultimately produces the behaviors- The programmers’ programming.

Let’s take another look at Searle’s Chinese Room. Who or what wrote the program that the man in the Chinese Room followed? Certainly not the man because he doesn’t know Chinese, and certainly not the Chinese Room itself. As indicated earlier in the passage regarding learning, this Chinese Room didn’t “learn Chinese” just by having instructions placed into the room any more than a spreadsheet “learns” items written onto it. Neither the man nor the Chinese Room was “speaking Chinese;” They were merely following the instructions of the Chinese-speaking programmer of the Chinese Room.

It’s easy to see how terms such as “self-driving cars” aren’t exactly apt when programmers programmed their driving. This means that human designers are ultimately responsible for a machine’s failures when it comes to programming; Anything else would be an attempt to shirk responsibility. “Autonomous vehicles” are hardly autonomous. They no more learn how to drive or drive themselves than a Chinese Room learn Chinese or speak Chinese. Designers and programmers are the sources of a machine’s apparent volition.

Consciousness Rooms — Conclusion, machines can only appear to be conscious
Artificial intelligence that appears to be conscious is a Consciousness Room, an imitation with varying degrees of success. As I have shown, such systems are capable of neither understanding nor learning. Beyond that, they are incapable of possessing volition. Artificial consciousness is impossible due to the extrinsic nature of programming, which is bound to syntax and devoid of meaning.



Responses to counterarguments
The following segments are responses to specific categories of counterarguments against my thesis. Please note that these responses do not stand on their own and can only be seen as supporting my main arguments above. Each response only applies to those who hold the corresponding objections.

Circularity
From the conclusion, operating beyond syntax requires meaning derived from conscious experience. This may make the argument appear circular (assuming what it’s trying to prove) when conscious experience was mentioned at the very beginning of the argument as a defining component of meaning.

However, the initial proposition defining meaning (“Meaning is a mental connection with a conscious experience”) wasn’t given validity as a result of the conclusion or anything following the conclusion; it was an observation independent of the conclusion.

Functionalist objections (My response: They fail to account for underdetermination)
Many objections come in one form of functionalism or another. That is, they all go along one or more of these lines:

· If we know what a neuron does, then we know what the brain does.

· If we can copy a brain or reproduce collections of neurons, then we can produce artificial consciousness

· If we can copy the functions of a brain, we can produce artificial consciousness

No functionalist arguments work here, because to duplicate any function there must be ways of ensuring all functions and their dependencies are visible and measurable. There is no “copying” something that’s underdetermined. The functionalist presumptions of “if we know/if we can copy” are invalid.

Underdetermination entails no such exhaustive modeling of the brain is possible, as explained by the following passage from SEP (emphasis mine)[14]:
 


In short, we have no assurances that we could engineer anything “like X” when we can’t have total knowledge of this X in the first place. There could be no assurances of a complete model due to underdetermination. Functionalist arguments fail because correlations in findings do not imply causation, and those correlations must be 100% discoverable to have an exhaustive model. There are multiple theoretical strikes against a functionalist position even before looking at actual experiments such as this one:

Repeated stimulations of identical neuron groups in the brain of a fly produce random results. This physically demonstrates the underdetermination[15]:
 

In the above-quoted passage, note all instances of the phrases “may be” and “could be.” They are indications of underdetermined factors at work. No exhaustive modeling is possible when there are multiple possible explanations from random experimental results.

Functionalist Reply: “…but we don’t need exhaustive modeling or functional duplication”
Yes, we do, because there isn’t any assurance that consciousness is produced otherwise. A plethora of functions and behaviors can be produced without introducing consciousness; there are no real measurable external indicators of success. See the section “Behaviorist objections” below.

Behaviorist objections
These counterarguments generally say that if we can reproduce conscious behaviors, then we have produced consciousness. For instance, I completely disagree with a Scientific American article claiming the existence of a test for detecting consciousness in machines[16].

Observable behaviors don’t mean anything, as the original Chinese Room argument had already demonstrated. The Chinese Room only appears to understand Chinese. The fact that machine learning doesn’t equate to actual learning also attests to this.

Emergentism via machine complexity
Counterexamples to complexity emergentism include the number of transistors in a phone processor versus the number of neurons in the brain of a fruit fly. Why isn’t a smartphone more conscious than a fruit fly? What about supercomputers that have millions of times more transistors? How about space launch systems that are even more complex in comparison… are they conscious? Consciousness doesn’t arise out of complexity.
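For concreteness, here is the rough arithmetic behind the comparison, using ballpark, publicly cited orders of magnitude (the figures are approximate and vary by source and by chip):

```python
# Rough orders of magnitude, for illustration only; exact counts vary by source.
fruit_fly_neurons = 1e5        # ~100,000 neurons in a fruit fly brain
phone_soc_transistors = 1e10   # ~10 billion transistors in a recent phone processor

print(f"{phone_soc_transistors / fruit_fly_neurons:,.0f}x more switching elements")
# -> roughly 100,000x more elements in the phone, yet no one argues the phone
#    is the more conscious of the two.
```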

Cybernetics and cloning
If living entities are involved then the subject is no longer that of artificial consciousness. Those would be cases of manipulation of innate consciousness and not any creation of artificial consciousness.

“Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”
The substrate has nothing to do with the issue. All artificially intelligent systems require algorithms and code. All are subject to programming in one way or another. It doesn’t matter how far in the future one goes or what substrate one uses; the fundamental syntactic nature of machine code remains. Name one single artificial intelligence project that doesn’t involve any code whatsoever. Name one way that an AI can violate the principle of noncontradiction and possess programming without programming (see the section “Volition Rooms” above.)

“We have DNA and DNA is programming code”
DNA is not programming code. Genetic makeup only influences and does not determine behavior. DNA doesn’t function like machine code, either. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows their workings to be underdetermined, while programming code is functionally determinate in contrast (There’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do. See section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

“But our minds also manipulate symbols”
Just because our minds can deal with symbols doesn’t mean they operate symbolically. We can experience and recollect things for which we have yet to formulate proper descriptions[18]. In other words, we can have indescribable experiences. We start with non-symbolic experiences, then subsequently concoct symbolic representations for them in our attempts to rationally organize and communicate those experiences.

A personal anecdotal example: my earliest childhood memory is of lying on a bed looking at an exhaust fan in a window. I remember what I saw back then, even though at the time I was too young to have learned words and terms such as “bed”, “window”, “fan”, “electric fan”, or “electric window exhaust fan”. Sensory and emotional recollections can be described with symbols, but the recollected experiences themselves aren’t symbolic.

Furthermore, the medical phenomenon of aphantasia demonstrates visual experiences to be categorically separate from descriptions of them[19].

Randomness and random number generators
Randomness is a red herring when it comes to serving as an indicator of consciousness (not to mention the dubious nature of all external indicators, as shown by the Chinese Room Argument). A random number generator inside a machine would simply provide another input, ultimately serving only to generate more symbols to manipulate.
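A minimal sketch of the point (a toy responder invented for illustration): seeding the generator differently changes the output, but the “choice” is still nothing more than indexing on a number the procedure was handed.

```python
import random

# Toy sketch: a random number generator only hands the procedure more symbols
# to manipulate. The "choice" below is just indexing on a number; changing the
# seed changes the output without introducing anything like volition.

def respond(prompt: str, rng: random.Random) -> str:
    canned = ["Yes.", "No.", "Perhaps."]
    return canned[rng.randrange(len(canned))]

rng = random.Random(7)                  # another input, chosen by the programmer
print(respond("Shall we proceed?", rng))
```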

“We have constructed sophisticated functional neural computing models”
The existence of sophisticated functional models in no way helps functionalists escape the functionalist trap. Those models are still heavily underdetermined, as shown by a recent example of an advanced neural learning algorithm[20].

The model is very sophisticated, but note just how much underdetermined couching it contains:
 

Models are far from reflecting the functioning neural groups present in living brains; I highly doubt that any researcher would make such a claim, for that’s not their goal in the first place. Models can and do produce useful functions and be practically “correct,” even if those models are factually “wrong” in that they don’t necessarily correspond to how things actually function. In other words, models don’t have to correspond 100% to reality in order to work, so their factual correctness is never guaranteed. For example, orbital satellites could still function without considering relativistic effects, because most relativistic effects are too small to be significant in satellite navigation[21].

“Your argument only applies to Von Neumann machines”
It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights. The programming language of a catapult is contained within the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more “does things on its own” than a catapult flings by itself.
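A toy rendering of the catapult analogy (idealized projectile formula, made-up settings): the “program” is just the configuration its builder chooses, and the throw that results is fully determined by it.

```python
import math
from dataclasses import dataclass

# Toy sketch of the catapult analogy. The "program" is nothing but the settings
# the builder chooses; the resulting throw is fully determined by them.

@dataclass
class CatapultProgram:
    launch_angle_deg: float   # "positioning of the pivots"
    launch_speed_ms: float    # stands in for tension and counterweight

def throw_range_m(p: CatapultProgram) -> float:
    theta = math.radians(p.launch_angle_deg)
    return p.launch_speed_ms ** 2 * math.sin(2 * theta) / 9.81   # ideal range formula

print(throw_range_m(CatapultProgram(45.0, 20.0)))   # ~40.8 m, exactly as configured
```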

“Your thought experiment is an intuition pump”
In order to take this avenue of criticism, one would have to demonstrate the alleged abuse in reasoning I supposedly engage in. Einstein also used “folk” concepts in his thought experiments regarding reference frames[23], so are thought experiments being discredited en masse here, or just mine? It’s a failure to field a clear criticism, and a vague reply of “thought experiments can be abused” is unproductive. Do people think my analogy is even worse than the stale stratagem of casting the mind as an analog of the prevailing technology of the day: first hydraulics, then telephones, then electrical fields, and now computers[24]? Would people feel better if they performed my experiment with patterned index cards they can hold in their hands instead? The criticism needs to be specific.

Lack of explanatory power (My response: Demonstrating the falsity of existing theories doesn’t demand yet another theory)
Arguing for or against the possibility of artificial consciousness doesn’t give much of an inroad into the actual nature of consciousness, but that doesn’t detract from the thesis, because the goal here isn’t to explicitly define the nature of consciousness. “What consciousness is” (e.g., its nature) isn’t being explored here as much as “what consciousness doesn’t entail,” which can still be determined via its requirements. There have been theories surrounding the differing “conscious potential” of various physical materials, but those theories have largely shown themselves to be bunk[25]. Explanatory theories are neither needed for my thesis nor productive in proving or disproving it. The necessary fundamental principles were already provided (see the section “Requirements of consciousness.”)

On panpsychism
(A topic that has been popular on SA in recent years[26])

I don’t subscribe to panpsychism, but even if panpsychism is true, the subsequently possible claim that “all things are conscious” is still false, because it commits a fallacy of division. There is a difference in kind between everything and every single thing. The purported universal consciousness of panpsychism, if it exists, would not be of the same kind as the ordinary consciousness found in living entities.

Some examples of such categorical differences: Johnny sings, but his kidneys don’t. Johnny sees, but his toenails don’t. Saying that a lamp is conscious in one sense of the word simply because it belongs in a universe that is “conscious” in another would be committing just as big of a category mistake as saying that a kidney sings or a toenail sees.

A claim that all things are conscious (including an AI) as a result of universal consciousness would be conflating two categories simply due to the lack of terms separating them. Just because the term “consciousness” connects all things for the adherents of universal consciousness doesn’t mean the term itself should be used equivocally. Panpsychist philosopher David Chalmers writes[27]:
 


“If it looks like a duck…” (A tongue-in-cheek rebuke to a tongue-in-cheek behaviorist challenge)
If it looks like a duck, swims like a duck, and quacks like a duck, but you know that the duck is an AI duck, then you have a fancy duck automaton. “But hold on, what if no one could tell?” Then it’s a fancy duck automaton that no one can tell from an actual duck, probably because all of its manufacturing documentation was destroyed and the programmer died without telling anyone it’s an AI duck… It’s still not an actual duck, however. Cue responses such as “Then we can get rid of all evidence of manufacturing” and other quips, which I deem grasping at straws and intellectually dishonest. If someone constructs a functionally perfect and visually indistinguishable artificial duck just to prove me wrong, then that’s a waste of effort; its identity would have to be revealed for the point to be “proven.” At that point, the revelation would prove me correct instead.

The “duck reply” is another behavioralist objection rendered meaningless by the Chinese Room Argument (see section “Behaviorist Objections” above.)

“You can’t prove to me that you’re conscious”
This denial is gaming the same empirically non-demonstrable fact as the non-duck duck objection above. We’re speaking of metaphysical facts, not the mere ability or inability to obtain them. That being said, the starting point of either acknowledging OR skeptically denying consciousness should start with the question “Do I deny the existence of my consciousness?” and not “Prove yours to me.”

There is no denying the existence of one’s own consciousness, and it would be an exercise in absurdity to question it in other people once we acknowledge ourselves to be conscious. When each of us encounters another person, do we first assume the possibility that we’re merely encountering a facsimile of a person, then check to see whether that person is a person, before finally starting to think of the entity as a person upon satisfaction? No, not unless someone is suffering from delusional paranoia. We wouldn’t want to create a world where this absurd paranoia becomes feasible, either (see the section below.)

Some implications with the impossibility of artificial consciousness
1. AI should never be given moral rights. Because they can never be conscious, they are less deserving of rights than animals. At least animals are conscious and can feel pain[28].

2. AI that takes on an extremely close likeness to human beings in both physical appearance and behavior (i.e., crossing the Uncanny Valley) should be strictly banned in the future. Allowing them to exist only creates a world immersed in absurd paranoia (see the section above). Based on my observations, many people are confused enough on the subject of machine consciousness as-is, thanks to the all-too-common instances of what one of my colleagues called “bad science fiction.”

3. Consciousness could never be “uploaded” into machines. Any attempt at doing so and then “retiring” the original body before its natural lifespan would be an act of suicide. Any complete Ship of Theseus-styled bit-by-bit machine “replacement” would gradually result in the same.

4. Any disastrous AI “calamity” would be caused by bad design/programming and only bad design/programming.

5. Human beings are wholly responsible for the actions of their creations, and corporations should be held responsible for the misbehavior of their products.

6. We’re not living in a simulation. Those speculations are nonsensical per my thesis (a compact formalization follows the list):

Given that artificial consciousness is impossible:

- Simulated environments are artificial (by definition.)

- Should we exist within such an environment, we must not be conscious; otherwise, our consciousness would be part of an artificial system, which is not possible due to the impossibility of artificial consciousness.

- However, we are conscious.

- Therefore, we’re not living in a simulation.
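For readers who prefer the argument spelled out formally, here is one compact way to render it (my own notation, added only for clarity):

```latex
% S: we live in a simulation   C: we are conscious   A: artificial consciousness exists
\begin{align*}
\text{P1. } & (S \land C) \rightarrow A && \text{a conscious inhabitant of a simulation would be an artificial consciousness}\\
\text{P2. } & \lnot A                   && \text{artificial consciousness is impossible (the thesis)}\\
\text{P3. } & C                         && \text{we are conscious}\\
\text{C1. } & \lnot (S \land C)         && \text{from P1 and P2, modus tollens}\\
\text{C2. } & \lnot S                   && \text{from C1 and P3}
\end{align*}
```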

References
[1] merriam-webster.com, “Intelligence” (2021), https://www.merriam-webster.com/dictionary/intelligence

[2] Internet Encyclopedia of Philosophy, “Consciousness” (2021), https://iep.utm.edu/consciou/

[3] Stanford Encyclopedia of Philosophy, “Intentionality” (2019), https://plato.stanford.edu/entries/intentionality/

[4] Stanford Encyclopedia of Philosophy, “Qualia” (2017), http://plato.stanford.edu/entries/qualia/

[5] Stanford Encyclopedia of Philosophy, “The Chinese Room Argument” (2020), https://plato.stanford.edu/entries/chinese-room/

[6] A. Sloman, Did Searle Attack Strong Strong or Weak Strong AI? (1985), Artificial Intelligence and Its Applications, A.G. Cohn and J.R. Thomas (Eds.) John Wiley and Sons 1986.

[7] Oxford English Dictionary, “algorithm” (2021), https://www.lexico.com/en/definition/algorithm

[8] T. Mitchell, Machine Learning (1997), McGraw-Hill Education (1st ed.)

[9] Stanford Encyclopedia of Philosophy, “Qualia: The Knowledge Argument” (2019), https://plato.stanford.edu/entries/qualia-knowledge/

[10] V. Highfield, AI Learns To Cheat At Q*Bert In A Way No Human Has Ever Done Before (2018), https://www.alphr.com/artificial-intell ... one-before

[11] J. Vincent, Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech (2018), https://www.theverge.com/2018/1/12/1688 ... gorithm-ai

[12] H. Sikchi, Towards Safe Reinforcement Learning (2018), https://medium.com/@harshitsikchi/towar ... b7caa5702e

[13] D. G. Smith, How to Hack an Intelligent Machine (2018), https://www.scientificamerican.com/arti ... t-machine/

[14] Stanford Encyclopedia of Philosophy, “Underdetermination of Scientific Theory” (2017), https://plato.stanford.edu/entries/scie ... rmination/

[15] L. Sanders, Ten thousand neurons linked to behaviors in fly (2014), https://www.sciencenews.org/article/ten ... aviors-fly

[16] S. Schneider and E. Turner, Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware (2017), https://blogs.scientificamerican.com/ob ... elf-aware/

[17] V. Greenwood, Theory Suggests That All Genes Affect Every Complex Trait (2018), https://www.quantamagazine.org/omnigeni ... -20180620/

[18] D. Robson, The ‘untranslatable’ emotions you never knew you had (2017), https://www.bbc.com/future/article/2017 ... ew-you-had

[19] C. Zimmer, Picture This? Some Just Can’t (2015), https://www.nytimes.com/2015/06/23/scie ... blind.html

[20] R. Urbanczik, Learning by the dendritic prediction of somatic spiking (2014), Neuron. 2014 Feb 5;81(3):521–8.

[21] Ž. Hećimović, Relativistic effects on satellite navigation (2013), Tehnicki Vjesnik 20(1):195–203

[22] K. Patowary, Vladimir Lukyanov’s Water Computer (2019), https://www.amusingplanet.com/2019/12/v ... puter.html

[23] Stanford Encyclopedia of Philosophy, “Thought Experiments” (2019), https://plato.stanford.edu/entries/thought-experiment/

[24] M. Cobb, Why your brain is not a computer (2020), https://www.theguardian.com/science/202 ... sciousness

[25] M. A. Cerullo, The Problem with Phi: A Critique of Integrated Information Theory (2015), PLoS Comput Biol. 2015 Sep; 11(9): e1004286. Konrad P. Kording (Ed.)

[26] Various authors, Retrieved list of scientificamerican.com articles on Panpsychism for illustrative purposes (2021 April 22), https://www.scientificamerican.com/sear ... anpsychism

[27] D. J. Chalmers, Panpsychism and Panprotopsychism, The Amherst Lecture in Philosophy 8 (2013): 1–35

[28] M. Bekoff, Animal Consciousness: New Report Puts All Doubts to Sleep (2018), https://www.psychologytoday.com/us/blog ... ubts-sleep

@OP.  Have you considered the Holonomic Brain Theory proposed by David Bohm and Karl Pribram?  If so, how would their ideas factor into your hypothesis?

https://en.wikipedia.org/wiki/Holonomic_brain_theory

http://www.scholarpedia.org/article/Holonomic_brain_theory

 


4 hours ago, AIkonoklazt said:

 

 

Also, I should point out that DNA isn't participating in "directed design" because the process of evolution isn't a process brought about by design (unless you're arguing for intelligent design)

Not sure what you're talking about when you refer to dualism. The difference between subjective feelings and non-subjective data isn't a difference between material and "immaterial." I'm not the one inserting a particular metaphysic here.

 

When you speak of an "infusion of conscious will" you are engaging in metaphysics.  Not sure how to make that clearer.  

I didn't say DNA engages in directed design, I said that a blind evolutionary process can in effect design a molecular machine, and the burden is on you to prove that is somehow different from any other design, so far as conscious cognition is concerned.

You're entering a special pleading for biological neural nets, that only they can modify their own software and hardware.  Yet current AI research has been moving in that direction for decades.  It's as if you're saying no future innovation is possible, an assertion that the history of science has proved to be laughable, over and over.

You can't keep moving the goalposts, saying, sorry, consciousness is whatever I do, and not what you do.

 

 

 

And it seems to me the Emergentist argument makes the semantic argument (the Chinese Room) obsolete.  Will try to get back to that later.  


6 hours ago, AIkonoklazt said:

I'm not patronizing you. I wasn't talking about probability in the article and I don't know what made you think so.

But I was. The Drake equation calculates probability.
It's fine that you answered no, although you might have been straight about it the first time round rather than offering that sideways dismissal.

Anyway, I agree with others that the problem here is your use of the absolute.
Far too many promising ideas falter at the first counterexample someone brings up because they have been couched in absolute terms.
Proving a negative is incredibly difficult.

I recommend reading this (short) thread, several very good points about this are made there.

https://www.scienceforums.net/topic/119871-what-is-falsifiability-exactly/#comment-1209560

These two deserve particular attention.

On 8/21/2019 at 5:56 PM, ccdan said:

Then there's the nonsense that a theory can only be "disproved" and never "proved"

Why is it nonsense? A single black swan is enough to disprove a statement like "all swans are white". But if I say "all swans are black or white", I can't prove that, not even if I check every living swan. I'd have to check every swan there ever was, or ever will be, and I'd have to be sure swans didn't live on any other planet in the universe as well. 

This is the basis for theory, the idea that we can only accumulate evidence in support rather than "proving" an idea. It's what keeps us searching for the most supported explanations, rather than answers we decide on and never go back to check.

On 8/21/2019 at 5:56 PM, ccdan said:

We can even try to formulate theories related to imaginary friends: "In every room in every building on earth, there's a green unicorn!"

The key word here is "every"...

On 8/21/2019 at 5:56 PM, ccdan said:

Finding just one room without an unicorn would make the theory "scientific".

No, it would not. It would show that "In every room in every building on earth, there's a green unicorn!" is a false statement.

 


More on the emergentist argument against the Chinese Room.

No iron-clad analogy between a computer program and a mind is required here. Therefore, the semantic argument becomes obsolete: even though a program as a syntactical construct doesn’t create semantics (and therefore couldn’t be equal to a mind), it doesn’t follow that a program can’t create semantic contents in the course of its execution.

Moreover, this emergentist argument is not that the computer hardware is the carrier of the mental processes. The hardware is not enabled to think this way. Rather, the computer creates the mental processes as an emergent phenomenon, similarly to how the brain creates mental processes as an emergent phenomenon. So, if one considers the question in the title of Searle’s original essay “Can Computers Think?”, the answer would be “No, but they might create thinking.”

In order to make this more plausible, imagine a program that exactly simulates the trajectories and interactions of the elementary particles in the brain of a Chinese speaker. This way, the program not only creates the same outputs for the same inputs as the Chinese speaker’s brain, but proceeds completely analogously. There is no immediate way to exclude the possibility that the simulated brain creates a mind in exactly the same way as a real brain does. The only assumption here is that the physical processes in a brain are deterministic.

Searle's argument ultimately veers into metaphysics because it is one with causal implications, namely that only a biological brain can cause a conscious mind.  This seems to confer a special causal power upon brains, which the OP and others have yet to demonstrate.  When other molecular machines engage in complex internal signaling between elements, Searle would insist that it's only syntax and nothing like a mind can emerge.  And yet, very strangely, there are executive parts of my brain which help me to understand English but are in themselves not at all conscious - they route signals, handle symbols, but do not attach meaning to them.  Indeed, my understanding of English seems to emerge from these unconscious processes and does not happen in a specific cluster of cells.  Those executive areas, like the person in the Chinese Room, do not understand English at all, but we don't say that I (the totality of my neurological processes) don't understand English.  Hmm.

 


On 6/6/2022 at 5:58 AM, Alex_Krycek said:

@OP.  Have you considered the Holonomic Brain Theory proposed by David Bohm and Karl Pribram?  If so, how would their ideas factor into your hypothesis?

https://en.wikipedia.org/wiki/Holonomic_brain_theory

http://www.scholarpedia.org/article/Holonomic_brain_theory

 

  • What I have put forward is a thesis, not a hypothesis (see respective definitions).
  • I am not willing to deal with theoretics, as I have mentioned in another reply. The reason I stick strictly to principles is that attempting to disprove theories with yet another theory would be akin to attempting to topple a sandcastle with a ball of sand. Theories are nowhere near as solid as principles. I anticipate that the principles I have illustrated, if they are to be successfully countered, would be countered with other principles in turn.
On 6/6/2022 at 7:57 AM, TheVat said:

When you speak of an "infusion of conscious will" you are engaging in metaphysics.  Not sure how to make that clearer.  

I didn't say DNA engages in directed design, I said that a blind evolutionary process can in effect design a molecular machine, and the burden is on you to prove that is somehow different from any other design, so far as conscious cognition is concerned.

You're entering a special pleading for biological neural nets, that only they can modify their own software and hardware.  Yet current AI research has been moving in that direction for decades.  It's as if you're saying no future innovation is possible, an assertion that the history of science has proved to be laughable, over and over.

You can't keep moving the goalposts, saying, sorry, consciousness is whatever I do, and not what you do.

 

 

 

And it seems to me the Emergentist argument makes the semantic argument (the Chinese Room) obsolete.  Will try to get back to that later.  

Look at the whole sentence. I said "...and not an infusion of conscious will." It was a statement against a metaphysical assumption and not a statement supporting one.

"I said that a blind evolutionary process can in effect design a molecular machine"

That process isn't design. See what the word means. There's no plan in randomness.

"You're entering a special pleading for biological neural nets, that only they can modify their own software and hardware."

See section: "Volition Rooms — Machines can only appear to possess intrinsic impetus" This was already addressed.

"You can't keep moving the goalposts, saying, sorry, consciousness is whatever I do, and not what you do."

I don't know what you're saying here. Please clarify.

On 6/6/2022 at 9:13 AM, studiot said:

But I was. The Drake equation clalculates probability.
It's fine that you answered no, although you might have been straight about it first time round rather than that sideways dismissal you offered.

Anyway I agree with others that the problem here is you use of the absolute.
Far too many promising ideas falter at the first counterexample someone brings up because they have been couched in absolute terms.
Proving the negative is incredibly difficult.

I recommend reading this (short) thread, several very good points about this are made there.

https://www.scienceforums.net/topic/119871-what-is-falsifiability-exactly/#comment-1209560

 

Nothing can violate the law of noncontradiction, agreed? This is one of the principles used. "Programming without programming" and "design without design" are self-contradictory concepts. The idea of artificial consciousness, upon deeper examination, is a self-contradictory concept. The law of noncontradiction is absolute.

On 6/6/2022 at 9:52 AM, TheVat said:

More on the emergentist argument against the Chinese Room.

No iron-clad analogy between a computer program and a mind is required here. Therefore, the semantic argument becomes obsolete: even though a program as a syntactical construct doesn’t create semantics (and therefore couldn’t be equal to a mind), it doesn’t follow that a program can’t create semantic contents in the course of its execution.

Moreover, this emergentist argument is not that the computer hardware is the carrier of the mental processes. The hardware is not enabled to think this way. Rather, the computer creates the mental processes as an emergent phenomenon, similarly to how the brain creates mental processes as an emergent phenomenon. So, if one considers the question in the title of Searle’s original essay “Can Computers Think?”, the answer would be “No, but they might create thinking.”

In order to make this more plausible, imagine a program that exactly simulates the trajectories and interactions of the elementary particles in the brain of a Chinese speaker. This way, the program not only creates the same outputs for the same inputs as the Chinese speaker’s brain, but proceeds completely analogously. There is no immediate way to exclude the possibility that the simulated brain creates a mind in exactly the same way as a real brain does. The only assumption here is that the physical processes in a brain are deterministic.

Searle's argument ultimately veers into metaphysics because it is one with causal implications, namely that only a biological brain can cause a conscious mind.  This seems to confer a special causal power upon brains, which the OP and others have yet to demonstrate.  When other molecular machines engage in complex internal signaling between elements, Searle would insist that it's only syntax and nothing like a mind can emerge.  And yet, very strangely, there are executive parts of my brain which help me to understand English but are in themselves not at all conscious - they route signals, handle symbols, but do not attach meaning to them.  Indeed, my understanding of English seems to emerge from these unconscious processes and does not happen in a specific cluster of cells.  Those executive areas, like the person in the Chinese Room, do not understand English at all, but we don't say that I (the totality of my neurological processes) don't understand English.  Hmm.

 

That's a theoretical counterargument (e.g., about the nature of consciousness), to which I would simply say, "That's a theory, but what about my straightforward demonstrations regarding principles of symbolic operation?" The demonstration superseding the Chinese Room Argument (which my article admits is inadequate and therefore needed reframing) is the Symbol Manipulator thought experiment, in which the key question "Now, did you just learn any meaning behind any language?" was asked of the reader. This was followed by another demonstration featuring pseudocode, demonstrating the arbitrary nature of algorithms.

This pits your theory against my principle. I don't have to engage in theoretics when I already have a principle I can demonstrate.

Re: Searle. His Chinese Room Argument deals with syntax versus semantics, with the "biology" part being only a possible implication. Sure, Searle himself may believe in "only biological," but that's not what the Chinese Room argues (and not what my main argument does either; it attempts to illustrate the nature of computation and algorithms).


On 6/6/2022 at 3:22 PM, Peterkin said:

Ooooh, she's a beauty! Lots of potential for war and crime. 

Needs a human to program it for war or crime.

To err is human; to really screw things up requires the help of a machine tool.


4 hours ago, AIkonoklazt said:

Needs a human to program it for war or crime.

Obviously. I never suggested that the boat was conscious or had criminal tendencies of its own, only that it's a versatile tool.


12 hours ago, AIkonoklazt said:

This pits your theory against my principle. I don't have to engage in theoretics when I already have a principle I can demonstrate.

Theory is as good as it gets.  You don't have a theory until it's recognised as one by qualified others. At best you have a hypothesis, regardless of how solid you think it is. Your peers decide, not you.


Just to note that demonstrating some task to be impossible can be done without contorted logical reasoning.

 


The fundamental incompatibility of the duration of the lunar cycle and the length of a year means that a "perfect" calendar, in which the solstices and equinoxes always fall on the same date, synchronised with the phases of the moon, is impossible to construct.

Thank you Chad Orzel  - always worth reading his stuff.

 


On 6/9/2022 at 12:54 PM, StringJunky said:

Theory is as good as it gets.  You don't have a theory until it's recognised as one by qualified others. At best you have a hypothesis, regardless of how solid you think it is. Your peers decide, not you.

I illustrated principles. No theories or hypotheses involved.

On 6/9/2022 at 3:32 PM, studiot said:

 

You need far more than first order logic to support your point.

I did. See the entire rest of the article.

On 6/10/2022 at 5:30 AM, studiot said:

Just to note that demonstrating some task to be impossible can be done without contorted logical reasoning.

 

That's rhetoric until you show me exactly what's wrong with my argumentation.


11 minutes ago, AIkonoklazt said:
On 6/9/2022 at 11:32 PM, studiot said:

 

You need far more than first order logic to support your point.

I did. See the entire rest of the article.

On 6/10/2022 at 1:30 PM, studiot said:

Just to note that demonstrating some task to be impossible can be done without contorted logical reasoning.

 

That's rhetoric until you show me exactly what's wrong with my argumentation.

 

I already told you exactly what was wrong with your argument, although I am not bound to.
The onus of proof lies with the proposer, not the listener.

 

However I will repeat my statement that you tried to misapply first order logic.

On 6/9/2022 at 8:10 AM, AIkonoklazt said:

The law of noncontradiction is absolute.

This law (also called the law of the excluded middle) is derived from more fundamental axioms which are what you have actually tried to misapply, notably

The axiom schema of specification.
 

In set terminology this axiom prevents Russell's and other similar paradoxes by defining a 'restriction'.

 

The law you refer to is not and never can be absolute.


26 minutes ago, studiot said:

 

I already told you exactly what was wrong with your argument, although I am not bound to.
The onus of proof lies with the proposer, not the listener.

 

However I will repeat my statement that you tried to misapply first order logic.

This law (also called the law of the excluded middle) is derived from more fundamental axioms which are what you have actually tried to misapply, notably

The axiom schema of specification.
 

In set terminology this axiom prevents Russell's and other similar paradoxes by defining a 'restriction'.

 

The law you refer to is not and never can be absolute.

Exactly which proof I provided are you referring to? ("The onus...")

It remains that programming without programming is an impossibility.

In other words, to refute the above you'd have to explain how there can be a program that's not a program. This is a straightforward contradiction in terms so why would we need set theory for that? Please elaborate.


1 hour ago, AIkonoklazt said:

Exactly which proof I provided are you referring to? ("The onus...")

It remains that programming without programming is an impossibility.

In other words, to refute the above you'd have to explain how there can be a program that's not a program. This is a straightforward contradiction in terms so why would we need set theory for that? Please elaborate.

I said nothing whatsoever about programming.

Quite the reverse: my thesis has always been that you have not demonstrated and discounted a random occurrence as impossible.

I regard a random occurrence as an unprogrammed/programmable or not programmed/programmable occurrence.

 

By introducing a program, you have assumed (in part) what you set out to prove.

 

As to your attempt to avoid the issue of your own actual words, which I quote yet again:

On 6/9/2022 at 8:10 AM, AIkonoklazt said:

The law of noncontradiction is absolute.

 

I said, first quite subtly and then not so subtly, that you should go away and look up the conditions of applicability of your 'law'.

I even offered a suggestion as to the part of General Philosophy to look in, since this is where you have started this thread.

 


As far as the OP goes, I find it a huge word salad, full of statements with fuzzy meanings, and impossible to follow. The first two definitions, of intelligence and consciousness, I find weird and unclear, and certainly not self-evident.

If it's not self-evident, then you need to establish it from basic self-evident principles.

Consciousness is a word. But unlike other words, it doesn't have a meaning that's blindingly obvious. I had a look at the wikipedia definition, and they seem to be describing the human experience of consciousness, which is a bit specialised. 

For me, the meaning comes from the contrast between being conscious and unconscious. Describe the difference, and that's what consciousness is. I think awareness of self is a highly developed state of consciousness, it's not necessary or vital for consciousness to exist, just as anti-lock braking isn't necessary for a car to exist.

Artificial consciousness is here and working as far as I'm concerned. My computer is conscious of my mouse and keyboard input, it's conscious of what programs have been opened and closed, my monitor is conscious of what's coming or not coming down the cable, and whether it is powered on or not. 

Animal consciousness is just a huge increase in the range of stimuli that are monitored, and how they are processed. 

But at the end of the day, the difference between a conscious rat, and an unconscious rat, is the level of reaction to stimuli, and the amount of processing going on.

