
Artificial Consciousness Is Impossible



Informal introduction:

I've tried other places of debate and discussion (most notably Reddit and LinkedIn), but they inevitably devolve into hostility. Some are hostile and insulting from the get-go; others descend into it after a few messages. The Ars Technica forum locked me out before I could even respond to questions. I'm going to give this a go one last time before giving online discussion forums a rest.

Purpose of Discussion:

To advance this specific topic through challenge. As of now, avenues of counterargumentation seem to have been exhausted; additional arguments I've received since the publication of my article have all fallen into categories I had already addressed. I'm looking for types of counterarguments that I haven't seen.

The original article is linked for reference only (full text below): https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46


Full text of my article:


Artificial Consciousness Is Impossible
Conscious machines are staples of science fiction that are often taken for granted as articles of future fact, but they are not possible.

This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. The latter half of the article is dedicated to addressing counterarguments. Lastly, some implications of the title thesis are listed.

Intelligence versus consciousness
Intelligence is the ability of an entity to perform tasks, while consciousness refers to the presence of a subjective phenomenon.

Intelligence[1]:

“…the ability to apply knowledge to manipulate one’s environment”

Consciousness[2]:

“When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view.”

Requirements of consciousness
A conscious entity, i.e., a mind, must possess:

1. Intentionality[3]:

“Intentionality is the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs.”

Note that this is not a mere symbolic representation.

2. Qualia[4]:

“…the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia.”

Meaning and symbols
Meaning is a mental connection between something (concrete or abstract) and a conscious experience. Philosophers of mind call the power of the mind that enables these connections intentionality. Symbols hold meaning only for entities that have made connections between their conscious experiences and those symbols.

The Chinese Room, reframed
The Chinese Room is a philosophical argument and thought experiment published by John Searle in 1980[5]:


“Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.”



As it stands, the Chinese Room argument needs reframing. The person in the room has never made any connections between his or her conscious experiences and the Chinese characters; therefore neither the person nor the room understands Chinese. The central issue should be the absence of connecting conscious experiences, not whether there is a proper program that could turn anything into a mind (which amounts to saying that if a program X were good enough, it would understand statement S; a program is never going to be “good enough” precisely because it is a program, as I will explain in a later section). The original, vaguer framing derailed the argument and left it more open to attacks. (One such attack resulting from the derailment was Sloman's[6].)

The Chinese Room argument points out the legitimate issue of symbolic processing not being sufficient for any meaning (syntax doesn’t suffice for semantics), but its framing leaves too much wiggle room for objections. Instead of asking whether a program could be turned into a mind, we should delve into the fundamental nature of programs themselves.

Symbol Manipulator, a thought experiment
The basic nature of programs is that they are free of the conscious associations that compose meaning. Programming code contains meaning to humans only because the code is in the form of symbols that contain hooks into the readers’ conscious experiences. Searle’s Chinese Room argument serves the purpose of putting the reader in the place of someone who has had no experiential connections to the symbols in the programming code. Thus, the Chinese Room is a Language Room. The person inside the room doesn’t understand the meaning behind the programming code, while to the outside world it appears that the room understands a particular human language.

The Chinese Room Argument comes with another potentially undermining issue. The person in the Chinese Room was introduced as a visualization device to get the reader to “see” from the point of view of a machine. However, since a machine can’t have a “point of view” (because it isn’t conscious), having a person in the room invites the objection that “there’s a conscious person in the room doing conscious things.”

I will work around the POV issue and clarify the syntax versus semantics distinction by using the following thought experiment:

You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?
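This shape game can be put in runnable form as a minimal sketch (Python, purely illustrative; the "shape" tokens and the rule table are made up): a lookup table pairs input sequences with prescribed answer sequences, and semantics appears nowhere.

```python
# Hypothetical "shapes" encoded as opaque tokens. The table pairs input
# sequences with prescribed output sequences; nothing here carries meaning
# to the machine executing it.
RULES = {
    ("△", "□", "○"): ("○", "○"),
    ("□", "□"): ("△",),
}

def manipulate(shapes):
    # Return the prescribed answer sequence, or nothing if no rule matches.
    # This is pure sequence matching: no referent, no understanding.
    return RULES.get(tuple(shapes), ())

print(manipulate(["△", "□", "○"]))  # -> ('○', '○')
```

Swapping the shape tokens for Chinese characters, or for any other symbols, changes nothing about what the procedure does.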

All programs manipulate symbols this way. Program codes themselves contain no meaning. To machines, they are sequences to be executed with their payloads and nothing more, just like how the Chinese characters in the Chinese Room are payloads to be processed according to sequencing instructions given to the Chinese-illiterate person and nothing more.

The Symbol Manipulator thought experiment not only generalizes programming code; with its sequences and payloads, it is a generalization of an algorithm: “A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”[7]

The relationship between the shapes and sequences is arbitrarily defined, not causally determined. Operational rules are simply what’s programmed in, not necessarily matching any sort of worldly causation, because any such links would be an accidental feature of the program and not an essential one (i.e., present by happenstance, not necessity). The program could be given any input to resolve, and the machine would follow along, not because it “understands” any worldly implications of either the input or the output but simply because it is following the dictates of its programming.

A very rough example of pseudocode to illustrate this arbitrary relationship:

let p = "night"

input R

if R = "day" then print p + " is " + R

Now, if I type “day”, the output will be “night is day”. Great. Absolutely the “correct output” according to its programming. It doesn’t necessarily “make sense”, but it doesn’t have to, because it’s the programming! The same goes for any other input that gets fed into the machine to produce output, e.g., “nLc is auS”, “e8jey is 3uD4”, and so on.
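The pseudocode above can be rendered as a runnable sketch (Python here; the input-output pairing remains just as arbitrary):

```python
# Illustrative sketch: the rule below is whatever the programmer wrote,
# not anything the machine "understands" about days or nights.
p = "night"

def respond(r: str) -> str:
    # Arbitrary mapping: the "correct output" is correct only relative
    # to the programming, not to any worldly sense.
    if r == "day":
        return p + " is " + r
    return ""

print(respond("day"))  # -> "night is day"
```

Replace the strings with any other tokens and the program executes exactly the same way, which is the point.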

To the machine, code and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity for the machine. To the programmer, there is meaning, because he or she conceptualizes and understands variables as representative placeholders for conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.

Understanding Rooms — Machines ape understanding
The room metaphor extends to all artificially intelligent activities. Machines only appear to deal with meaning; they ultimately translate everything into machine-language instructions at a level that is devoid of meaning before and after execution and is concerned with execution alone (the mechanism underlying all machine program execution, illustrated by the shape-memorization thought experiment above; a program contains meaning only for the programmer). The Chinese Room and the Symbol Manipulator thought experiments show that while our minds understand and deal with concepts, machines don’t, and deal only with sequences and payloads. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. Machines that appear to understand language and meaning are by their nature “Understanding Rooms” that only take on the outward appearance of understanding.

Learning Rooms — Machines never actually learn, partly because the mind isn’t just a physical information processor
The direct result of a machine’s complete lack of any possible genuine comprehension and understanding is that machines can only be Learning Rooms that appear to learn but never actually learn. Considering this, “machine learning” is a widely misunderstood and arguably oft-abused term.

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

“For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word “learning,” we will simply adopt our technical definition of the class of programs that improve through experience.”
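The textbook's database example can be made concrete with a trivial sketch (Python; the class and names are mine, purely illustrative). By the technical definition quoted above, even this bare key-value store "learns": its performance at answering queries improves through the "experience" of updates.

```python
# A trivial "learning system" per the textbook's technical definition:
# it improves at answering queries based on experience gained from updates.
class Store:
    def __init__(self):
        self.data = {}

    def update(self, key, value):
        # The "experience": recording a new entry.
        self.data[key] = value

    def query(self, key):
        # The "task" whose performance improves with experience.
        return self.data.get(key)

s = Store()
s.update("capital of France", "Paris")
print(s.query("capital of France"))  # -> Paris
```

Nothing resembling learning in the everyday sense is happening here, which is precisely the textbook's concession.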


Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].

Possessing only physical information, and doing so without comprehension, machines hack the activity of learning by engaging in it in ways that defy the experiential context of the activity. A good example is how a computer artificially adapts to a video game through brute force instead of learning anything[10].

In the case of “learning to identify pictures”, machines are shown hundreds of thousands to millions of pictures, and through many failures of seeing “gorilla” in bundles of “not gorilla” pixels they eventually come to correctly match bunches of pixels on the screen to the term “gorilla”… except that they don’t even do that well all of the time[11].

Needless to say, “increasing performance of identifying gorilla pixels” through intelligence is hardly the same thing as “learning what a gorilla is” through conscious experience. Mitigating this sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything[12].

“Learning machines” are “Learning Rooms” that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning and simulate the result of learning, but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla for those that compose an image of a dark-skinned human being. Machines don’t learn; they pattern match and only pattern match. There’s no actual personal experience associating a person’s face with that of a gorilla’s. When was the last time a person honestly mistook an animal’s face for a human’s? Sure, we may see resemblances and deem those animal faces to be human-like, but we recognize them as resemblances, not actual matches. Machines are fooled by “abstract camouflage”, adversarially generated images, for the same reason[13]. These mistakes are mere symptoms of a lack of genuine learning; machines still wouldn’t be learning even if they gave perfect results. Fundamentally, “machine learning” is every bit as distant from actual learning as the simple spreadsheet database updates mentioned in the AI textbook earlier.
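The "pattern match and only pattern match" point can be illustrated with a toy sketch (Python; the 4-pixel "images" and labels are entirely made up): a classifier that picks whichever stored bit pattern differs least from its input, relating pixel vectors to label strings that are opaque tokens to it.

```python
# Toy nearest-pattern "classifier": hypothetical 4-pixel images mapped
# to label strings. The program compares bit patterns; the labels are
# opaque tokens to it, not concepts.
EXAMPLES = {
    (0, 0, 1, 1): "gorilla",
    (1, 1, 0, 0): "not gorilla",
}

def classify(pixels):
    # Choose the stored pattern with the fewest differing positions
    # (Hamming distance) and return its associated label.
    best = min(EXAMPLES, key=lambda p: sum(a != b for a, b in zip(p, pixels)))
    return EXAMPLES[best]

print(classify((0, 1, 1, 1)))  # closest stored pattern wins
```

Scale the table up to millions of images and the mechanism is unchanged: proximity between symbol patterns, with no experience connecting any label to anything.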

Volition Rooms — Machines can only appear to possess intrinsic impetus
The fact that machines are programmed dooms them to being appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it. There’s no such thing as a “design without a design” or “programming without programming.” A machine’s operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as “a program/machine evolved” (Who designed the evolutionary algorithm?), “no one knows how the resulting program in the black box came about” (Who programmed the program which produced the resultant code?), “the neural net doesn’t have a program” (Who wrote the neural net’s algorithm?), “the machine learned and adapted” (It doesn’t “learn”… Who determined how it would adapt?), and “there’s self-modifying code” (What determines the behavior of this so-called “self-modification”? Because it isn’t “self.”). There’s no hiding or escaping from what ultimately produces the behaviors: the programmers’ programming.

Let’s take another look at Searle’s Chinese Room. Who or what wrote the program that the man in the Chinese Room followed? Certainly not the man, because he doesn’t know Chinese, and certainly not the Chinese Room itself. As indicated earlier in the passage regarding learning, this Chinese Room didn’t “learn Chinese” just by having instructions placed into the room, any more than a spreadsheet “learns” the items written onto it. Neither the man nor the Chinese Room was “speaking Chinese”; they were merely following the instructions of the Chinese-speaking programmer of the Chinese Room.

It’s easy to see how terms such as “self-driving cars” aren’t exactly apt when programmers programmed the driving. This means that human designers are ultimately responsible for a machine’s failures insofar as programming is concerned; anything else would be an attempt to shirk responsibility. “Autonomous vehicles” are hardly autonomous. They no more learn how to drive or drive themselves than a Chinese Room learns Chinese or speaks it. Designers and programmers are the sources of a machine’s apparent volition.

Consciousness Rooms — Conclusion, machines can only appear to be conscious
Artificial intelligence that appears to be conscious is a Consciousness Room, an imitation with varying degrees of success. As I have shown, such machines are capable of neither understanding nor learning. Nor are they capable of possessing volition. Artificial consciousness is impossible due to the extrinsic nature of programming, which is bound to syntax and devoid of meaning.



Responses to counterarguments
The following segments are responses to specific categories of counterarguments against my thesis. Please note that these responses do not stand on their own and can only be seen as supporting my main arguments above. Each response only applies to those who hold the corresponding objections.

Circularity
From the conclusion, operating beyond syntax requires meaning derived from conscious experience. This may make the argument appear circular (assuming what it’s trying to prove), since conscious experience was mentioned at the very beginning of the argument as a defining component of meaning.

However, the initial proposition defining meaning (“Meaning is a mental connection with a conscious experience”) wasn’t given validity as a result of the conclusion or anything following the conclusion; it was an observation independent of the conclusion.

Functionalist objections (My response: They fail to account for underdetermination)
Many objections come in one form of functionalism or another. That is, they all run along one or more of these lines:

· If we know what a neuron does, then we know what the brain does.

· If we can copy a brain or reproduce collections of neurons, then we can produce artificial consciousness.

· If we can copy the functions of a brain, we can produce artificial consciousness.

No functionalist argument works here, because to duplicate any function there must be a way of ensuring that all functions and their dependencies are visible and measurable. There is no “copying” something that’s underdetermined. The functionalist presumptions of “if we know / if we can copy” are invalid.

Underdetermination entails that no such exhaustive modeling of the brain is possible, as explained by the following passage from the SEP (emphasis mine)[14]:

“…when Newton’s celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation…

“…This strategy bore fruit, notwithstanding the falsity of Newton’s theory…

“…But the very same strategy failed when used to try to explain the advance of the perihelion in Mercury’s orbit by postulating the existence of “Vulcan”, an additional planet…

“…Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised in response to a failed empirical test or false implication.


In short, we have no assurance that we could engineer anything “like X” when we can’t have total knowledge of X in the first place. There can be no assurance of a complete model, due to underdetermination. Functionalist arguments fail because correlations in findings do not imply causation, and those correlations would have to be 100% discoverable to yield an exhaustive model. There are multiple theoretical strikes against the functionalist position even before looking at actual experiments, such as this one:

Repeated stimulation of identical neuron groups in the brain of a fly produces random results, physically demonstrating underdetermination[15]:

“…some neuron groups could elicit multiple behaviors across animals or sometimes even in a single animal.

Stimulating a single group of neurons in different animals occasionally resulted in different behaviors. That difference may be due to a number of things, Zlatic says: “It could be previous experience; it could be developmental differences; it could be somehow the personality of animals; different states that the animals find themselves in at the time of neuron activation.”

Stimulating the same neurons in one animal would occasionally result in different behaviors, the team found.”

In the above-quoted passage, note all instances of the phrases “may be” and “could be.” They indicate underdetermined factors at work. No exhaustive modeling is possible when random experimental results admit multiple possible explanations.

Functionalist Reply: “…but we don’t need exhaustive modeling or functional duplication”
Yes, we do, because otherwise there is no assurance that consciousness has been produced. A plethora of functions and behaviors can be produced without introducing consciousness; there are no real, measurable external indicators of success. See the section “Behaviorist objections” below.

Behaviorist objections
These counterarguments generally say that if we can reproduce conscious behaviors, then we have produced consciousness. For instance, I completely disagree with a Scientific American article claiming the existence of a test for detecting consciousness in machines[16].

Observable behaviors don’t mean anything, as the original Chinese Room argument already demonstrated. The Chinese Room only appears to understand Chinese. The fact that machine learning doesn’t equate to actual learning also attests to this.

Emergentism via machine complexity
Counterexamples to complexity emergentism include the number of transistors in a phone processor versus the number of neurons in the brain of a fruit fly. Why isn’t a smartphone more conscious than a fruit fly? What about supercomputers that have millions of times more transistors? How about space launch systems that are even more complex in comparison… are they conscious? Consciousness doesn’t arise out of complexity.

Cybernetics and cloning
If living entities are involved then the subject is no longer that of artificial consciousness. Those would be cases of manipulation of innate consciousness and not any creation of artificial consciousness.

“Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”
The substrate has nothing to do with the issue. All artificially intelligent systems require algorithms and code. All are subject to programming in one way or another. It doesn’t matter how far in the future one goes or what substrate one uses; the fundamentally syntactic nature of machine code remains. Name a single artificial intelligence project that doesn’t involve any code whatsoever. Name one way an AI could violate the principle of noncontradiction and possess programming without programming (see the section “Volition Rooms” above).

“We have DNA and DNA is programming code”
DNA is not programming code. Genetic makeup only influences, and does not determine, behavior. Nor does DNA function like machine code. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows the workings of genes to be underdetermined, while programming code is, in contrast, functionally determinate (there’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do; see the section discussing “Volition Rooms”) and heavily compartmentalized by comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

“But our minds also manipulate symbols”
Just because our minds can deal with symbols doesn’t mean they operate symbolically. We can experience and recollect things for which we have yet to formulate proper descriptions[18]. In other words, we can have indescribable experiences. We start with non-symbolic experiences, then subsequently concoct symbolic representations for them in our attempts to rationally organize and communicate those experiences.

A personal anecdotal example: my earliest childhood memory is of lying on a bed looking at an exhaust fan in a window. I remember what I saw back then, even though at the time I was too young to have learned words and terms such as “bed”, “window”, “fan”, “electric fan”, or “electric window exhaust fan”. Sensory and emotional recollections can be described with symbols, but the recollected experiences themselves aren’t symbolic.

Furthermore, the medical phenomenon of aphantasia demonstrates that visual experiences are categorically separate from descriptions of them[19].

Randomness and random number generators
Randomness is a red herring when it comes to serving as an indicator of consciousness (not to mention the dubious nature of all external indicators, as shown by the Chinese Room Argument). A random number generator inside a machine would simply provide another input, ultimately serving only to generate more symbols to manipulate.

“We have constructed sophisticated functional neural computing models”
The existence of sophisticated functional models in no way helps functionalists escape the functionalist trap. Those models are still heavily underdetermined, as shown by a recent example of an advanced neural learning algorithm[20].

The model is very sophisticated, but note just how much underdetermined couching it contains:

”possibly a different threshold”

”may share a common refractory period”

”will probably be answered experimentally”

Models are far from reflecting the functioning neural groups present in living brains; I highly doubt any researcher would make such a claim, for that isn’t their goal in the first place. Models can and do produce useful functions and be practically “correct” even if they are factually “wrong,” in that they don’t necessarily correspond to actuality in function. In other words, models don’t have to correspond 100% to reality in order to work, so their factual correctness is never guaranteed. For example, orbital satellites can still function without accounting for relativistic effects, because most relativistic effects are too small to be significant in satellite navigation[21].

“Your argument only applies to Von Neumann machines”
It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights; the “programming language” of a catapult is contained in the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more “does things on its own” than a catapult flings by itself.

“Your thought experiment is an intuition pump”
To take this avenue of criticism, one would have to demonstrate the alleged abuse of reasoning I supposedly engage in. Einstein also used “folk” concepts in his thought experiments regarding reference frames[23]; are thought experiments being discredited en masse here, or just mine? A vague reply of “thought experiments can be abused” is an unproductive failure to field a clear criticism. Do people think my analogy is even worse than the stale stratagem of casting the mind as an analog of the prevailing technology of the day (first hydraulics, then telephones, then electrical fields, and now computers[24])? Would people feel better if they performed my experiment with patterned index cards they could hold in their hands instead? The criticism needs to be specific.

Lack of explanatory power (My response: Demonstrating the falsity of existing theories doesn’t demand yet another theory)
Arguing for or against the possibility of artificial consciousness doesn’t offer much of an inroad into the actual nature of consciousness, but that doesn’t detract from the thesis, because the goal here isn’t to explicitly define the nature of consciousness. “What consciousness is” (i.e., its nature) isn’t being explored here as much as “what consciousness doesn’t entail,” which can still be determined via its requirements. There have been theories surrounding the differing “conscious potential” of various physical materials, but those theories have largely shown themselves to be bunk[25]. Explanatory theories are neither needed for my thesis nor productive in proving or disproving it. The necessary fundamental principles were already provided (see the section “Requirements of consciousness”).

On panpsychism
(A topic that has been popular on SA in recent years[26])

I don’t subscribe to panpsychism, but even if panpsychism were true, the subsequent claim that “all things are conscious” would still be false, because it commits a fallacy of division. There is a difference in kind between everything taken as a whole and every single thing. The purported universal consciousness of panpsychism, if it exists, would not be of the same kind as the ordinary consciousness found in living entities.

Some examples of such categorical differences: Johnny sings, but his kidneys don’t. Johnny sees, but his toenails don’t. Saying that a lamp is conscious in one sense of the word simply because it belongs in a universe that is “conscious” in another would be committing just as big of a category mistake as saying that a kidney sings or a toenail sees.

A claim that all things are conscious (including an AI) as a result of universal consciousness would be conflating two categories simply due to the lack of terms separating them. Just because the term “consciousness” connects all things for adherents of universal consciousness doesn’t mean the term itself should be used equivocally. Panpsychist philosopher David Chalmers writes[27]:

“Panpsychism, taken literally, is the doctrine that everything has a mind. In practice, people who call themselves panpsychists are not committed to as strong a doctrine. They are not committed to the thesis that the number two has a mind, or that the Eiffel tower has a mind, or that the city of Canberra has a mind, even if they believe in the existence of numbers, towers, and cities.”


“If it looks like a duck…” (A tongue-in-cheek rebuke to a tongue-in-cheek behaviorist challenge)
If it looks like a duck, swims like a duck, and quacks like a duck, but you know that the duck is an AI duck, then you have a fancy duck automaton. “But hold on, what if no one could tell?” Then it’s a fancy duck automaton that no one can tell from an actual duck, probably because all of its manufacturing documentation was destroyed and the programmer died without telling anyone that it’s an AI duck… It’s still not an actual duck, however. Cue responses such as “then we can get rid of all evidence of manufacturing” and other quips, which I deem grasping at straws and intellectually dishonest. If someone constructs a functionally perfect and visually indistinguishable artificial duck just to prove me wrong, that’s a waste of effort; its identity would have to be revealed for the point to be “proven,” and at that point the revelation would prove me correct instead.

The “duck reply” is another behaviorist objection rendered meaningless by the Chinese Room Argument (see the section “Behaviorist objections” above).

“You can’t prove to me that you’re conscious”
This denial games the same empirically non-demonstrable fact as the non-duck duck objection above. We’re speaking of metaphysical facts, not the mere ability or inability to obtain them. That being said, the starting point of either acknowledging or skeptically denying consciousness should be the question “Do I deny the existence of my consciousness?” and not “Prove yours to me.”

There is no denying the existence of one’s own consciousness, and it would be an exercise in absurdity to question it in other people once we acknowledge ourselves to be conscious. When each of us encounters another person, do we first entertain the possibility that we’re merely encountering a facsimile of a person, then check whether that person is a person, and only upon satisfaction finally begin to think of the entity as one? No, unless we are suffering from delusional paranoia. Nor would we want to create a world in which this absurd paranoia becomes feasible (see the section below).

Some implications with the impossibility of artificial consciousness
1. AI should never be given moral rights. Because they can never be conscious, they are less deserving of rights than animals. At least animals are conscious and can feel pain[28].

2. AI that takes on an extremely close likeness to human beings in both physical appearance and behavior (i.e., crossing the Uncanny Valley) should be strictly banned in the future. Allowing it to exist would only create a world immersed in absurd paranoia (see the section above). Based on my observations, many people are confused enough about machine consciousness as it is, thanks to the all-too-common instances of what one of my colleagues called “bad science fiction.”

3. Consciousness could never be “uploaded” into machines. Any attempt at doing so and then “retiring” the original body before the end of its natural lifespan would be an act of suicide. Any complete Ship of Theseus-styled bit-by-bit machine “replacement” would gradually result in the same.

4. Any disastrous AI “calamity” would be caused by bad design/programming and only bad design/programming.

5. Human beings are wholly responsible for the actions of their creations, and corporations should be held responsible for the misbehavior of their products.

6. We’re not living in a simulation. Those speculations are nonsensical per my thesis:

Given that artificial consciousness is impossible:

- Simulated environments are artificial (by definition).

- Should we exist within such an environment, we must not be conscious; otherwise, our consciousness would be part of an artificial system, which is impossible given the impossibility of artificial consciousness.

- However, we are conscious.

- Therefore, we’re not living in a simulation.
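For what it’s worth, the four steps above form a valid syllogism; here is a minimal propositional rendering as a mechanical logic check (the proposition names are my own, offered only as an illustration):

```lean
-- Propositional sketch of the simulation syllogism above
-- (proposition names are mine, not the article's).
variable (Simulated Artificial Conscious : Prop)

example
    (h1 : Simulated → Artificial)      -- simulated environments are artificial
    (h2 : Artificial → ¬ Conscious)    -- artificial systems cannot be conscious
    (h3 : Conscious)                   -- we are conscious
    : ¬ Simulated :=
  fun hs => h2 (h1 hs) h3
```

The conclusion follows purely from the premises; the substantive work is all in defending premise h2, which is the burden of the article itself.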

References
[1] merriam-webster.com, “Intelligence” (2021), https://www.merriam-webster.com/dictionary/intelligence

[2] Internet Encyclopedia of Philosophy, “Consciousness” (2021), https://iep.utm.edu/consciou/

[3] Stanford Encyclopedia of Philosophy, “Intentionality” (2019), https://plato.stanford.edu/entries/intentionality/

[4] Stanford Encyclopedia of Philosophy, “Qualia” (2017), http://plato.stanford.edu/entries/qualia/

[5] Stanford Encyclopedia of Philosophy, “The Chinese Room Argument” (2020), https://plato.stanford.edu/entries/chinese-room/

[6] A. Sloman, Did Searle Attack Strong Strong or Weak Strong AI? (1985), Artificial Intelligence and Its Applications, A.G. Cohn and J.R. Thomas (Eds.) John Wiley and Sons 1986.

[7] Oxford English Dictionary, “algorithm” (2021), https://www.lexico.com/en/definition/algorithm

[8] T. Mitchell, Machine Learning (1997), McGraw-Hill Education (1st ed.)

[9] Stanford Encyclopedia of Philosophy, “Qualia: The Knowledge Argument” (2019), https://plato.stanford.edu/entries/qualia-knowledge/

[10] V. Highfield, AI Learns To Cheat At Q*Bert In A Way No Human Has Ever Done Before (2018), https://www.alphr.com/artificial-intell ... one-before

[11] J. Vincent, Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech (2018), https://www.theverge.com/2018/1/12/1688 ... gorithm-ai

[12] H. Sikchi, Towards Safe Reinforcement Learning (2018), https://medium.com/@harshitsikchi/towar ... b7caa5702e

[13] D. G. Smith, How to Hack an Intelligent Machine (2018), https://www.scientificamerican.com/arti ... t-machine/

[14] Stanford Encyclopedia of Philosophy, “Underdetermination of Scientific Theory” (2017), https://plato.stanford.edu/entries/scie ... rmination/

[15] L. Sanders, Ten thousand neurons linked to behaviors in fly (2014), https://www.sciencenews.org/article/ten ... aviors-fly

[16] S. Schneider and E. Turner, Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware (2017), https://blogs.scientificamerican.com/ob ... elf-aware/

[17] V. Greenwood, Theory Suggests That All Genes Affect Every Complex Trait (2018), https://www.quantamagazine.org/omnigeni ... -20180620/

[18] D. Robson, The ‘untranslatable’ emotions you never knew you had (2017), https://www.bbc.com/future/article/2017 ... ew-you-had

[19] C. Zimmer, Picture This? Some Just Can’t (2015), https://www.nytimes.com/2015/06/23/scie ... blind.html

[20] R. Urbanczik, Learning by the dendritic prediction of somatic spiking (2014), Neuron. 2014 Feb 5;81(3):521–8.

[21] Ž. Hećimović, Relativistic effects on satellite navigation (2013), Tehnicki Vjesnik 20(1):195–203

[22] K. Patowary, Vladimir Lukyanov’s Water Computer (2019), https://www.amusingplanet.com/2019/12/v ... puter.html

[23] Stanford Encyclopedia of Philosophy, “Thought Experiments” (2019), https://plato.stanford.edu/entries/thought-experiment/

[24] M. Cobb, Why your brain is not a computer (2020), https://www.theguardian.com/science/202 ... sciousness

[25] M. A. Cerullo, The Problem with Phi: A Critique of Integrated Information Theory (2015), PLoS Comput Biol. 2015 Sep; 11(9): e1004286. Konrad P. Kording (Ed.)

[26] Various authors, Retrieved list of scientificamerican.com articles on Panpsychism for illustrative purposes (2021 April 22), https://www.scientificamerican.com/sear ... anpsychism

[27] D. J. Chalmers, Panpsychism and Panprotopsychism, The Amherst Lecture in Philosophy 8 (2013): 1–35

[28] M. Bekoff, Animal Consciousness: New Report Puts All Doubts to Sleep (2018), https://www.psychologytoday.com/us/blog ... ubts-sleep


10 hours ago, AIkonoklazt said:

Informal introduction:

I've tried other places of debate and discussion (most notably Reddit and LinkedIn), but they inevitably devolve into hostility. Some are hostile and insulting from the getgo, others descend into it after a few messages. Ars Technica forum locked me even before I could even respond to questions. I'm going to give this a go one last time before giving online discussion forums a rest.

+1 for reading the rules here and posting appropriately.

What a good start for your discussion.

You have posted a lot of material so it will probably take some time for folks to read and digest.

There are several members interested in aspects of AI here.


Posted (edited)

Impressive bibliography. As someone whose work involved some AI for a while in the late eighties, I have to say many of us moved away from Searle's Chinese Room because it was based more on older computer architectures - linear, user-coded, nonparallel systems that bear far more relation to Searle's imagined room than do cutting-edge neural networks with plasticity, massive parallelism, self-modification and code creation, analog-digital integration, etc. Modern AI has looked at brains and is learning more about how they work and which functions transcend substrate. There is more openness to strong emergentism in architectures that reveal novel features not deducible from their components. As there should be.

The simplest argument I can offer you is: the emergence of artificial consciousness is possible because the consciousness we all know intimately has in fact emerged from matter, molecules which evolved the ability both to represent and to create information. Searle's model (which he himself has somewhat recanted in recent years) is based on simple linear machines that only represent information; neural nets have the potential to do more than simply execute code. We don't just process the world, we actively create it (a bit of metaphor there, no worries) by creating the information that informs our models.

Edited by TheVat
Minor

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public, and obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


30 minutes ago, grantcas said:

By this I mean, can any particular theory be used to create a human adult level conscious machine.

I don't see why. I mean, why does God have to start with Adam instead of pond-scum?

42 minutes ago, grantcas said:

A machine with primary consciousness will probably have to come first.

I think that must be so. And it would be an accidental byproduct of spaghetti code, built up from multiple layers of upgrades and add-ons, and of a number of machines connected in a network; the programmer wouldn't intend it or even know about it.

Once it's recognized, its evolution might be directed, or at least influenced by humans. But there is no guarantee that it would be detected and recognized early in its development. I don't see that it has to follow the biological pattern of brain formation - though I imagine it would have to follow the formation of simple-to-complex abstraction. But since its physical requirements and reproductive process are non-biological, it could evolve very much faster.  

IOW Windows 123


Posted (edited)

I find some points problematic.

i.e., that no human will mistake an ape for a person but might apparently mistake a machine for one. That rather suggests an issue with training data and/or limited senses. Plato's Cave.

Likewise, you dismiss randomness as mere symbols, but those symbols can also be the entire program. One could easily create an Infinite Monkey Program and have it generate every other program.

At some point, feel can be a co-creation. There's our code, and then there's this outside input that can find its way in.

 

Do have to say it's an excellent first post.

Edited by Endy0816

Posted (edited)
1 hour ago, Peterkin said:

Once it's recognized, its evolution might be directed, or at least influenced by humans. But there is no guarantee that it would be detected and recognized early in its development. I don't see that it has to follow the biological pattern of brain formation - though I imagine it would have to follow the formation of simple-to-complex abstraction. But since its physical requirements and reproductive process are non-biological, it could evolve very much faster.  

IOW Windows 123

AFAIK, this would be the case because we can't predict a previously unknown emergent phenomenon; we can only know ex post facto.

Edited by StringJunky

1 hour ago, StringJunky said:

AFAIK, this would be the case because we can't predict a prior unknown emergent phenomenon; we can only know ex post facto.

That's kind of what I think. We do recognize the consciousness of a fox or a zebra, even though they don't exhibit the intelligence of a computer. Because we have evolution and biology in common with all other animals, their consciousness is like ours. And of course we're very much aware of the consciousness and intelligence of dogs, because they're culturally close to us and reflect us - that is, their intelligence is informed by our input. This latter would also be the case with computers. Their brand (for want of a better term) of intelligence is informed by our input, which would make communication easier than with any biological entity.


Posted (edited)
31 minutes ago, Peterkin said:

That's kind of what I think. We do recognize the consciousness of a fox or a zebra, even though they don't exhibit the intelligence of a computer. Because we have evolution and biology in common with all other animals, their consciousness is like ours. And of course we're very much aware of the consciousness and intelligence of dogs, because they're culturally close to us and reflect us - that is, their intelligence is informed by our input. This latter would also be the case with computers. Their brand (for want of a better term) of intelligence is informed by our input, which would make communication easier than with any biological entity.

Yes, naturally, we will tend to make AI an extended reflection of our own abilities. There could come a time, though, if/when they can self-program, when they evolve in their own direction. At that point, I think we can say humanity has shed its mortal coil... evolution of our species becomes non-biological in the physical sense. There's no difference between similar molecules residing in a biological entity or in a complex machine... they are, I think, both capable of ultimately performing the same functions. I tend to think of consciousness as just an emergent function of sufficient signalling complexity.

I think of Alzheimer's and how easily that human complexity can be undone as evidence that we are not some seamless, holistic entity. Operationally, we seem to be the sum of many parts.

Edited by StringJunky

29 minutes ago, StringJunky said:

There could be a time though, if/when they can self-program, they can evolve in their own direction.

A departure, maybe; not a cleavage. So did we depart from the apes, without having shed the first five billion years of programming.

 

32 minutes ago, StringJunky said:

At that point, I think we can say humanity has shed its mortal coil.... evolution  of our species becomes non-biological in the physical sense.

Again, I don't see that it's necessary. Lots of species coexist with their evolutionary predecessors - that we wipe out the great apes is no indication that AI has to wipe us out or subsume us in order to fulfill its destiny. Maybe once it has an independent consciousness, it won't need us anymore - but neither will it necessarily want to merge with us - or have anything to do with us. It might decide to go find its own planet and start over. It might just ignore us. It might want to stay friends. We don't know what a new unprecedented, non-biological entity may desire. 

37 minutes ago, StringJunky said:

I think of Alzheimer's and how easily that human complexity can be undone as evidence that we are not some seamless, holistic entity. Operationally, we seem to be the sum of many parts.

I rather think AI will not have that problem; it is much better set up to repair, augment and rationalize itself.


5 minutes ago, Peterkin said:

A departure, maybe; not a cleavage. So did we depart from the apes, without having shed the first five billion years of programming.

 

Again, I don't see that it's necessary. Lots of species coexist with their evolutionary predecessors - that we wipe out the great apes is no indication that AI has to wipe us out or subsume us in order to fulfill its destiny. Maybe once it has an independent consciousness, it won't need us anymore - but neither will it necessarily want to merge with us - or have anything to do with us. It might decide to go find its own planet and start over. It might just ignore us. It might want to stay friends. We don't know what a new unprecedented, non-biological entity may desire. 

 

I never said it would be a cleavage; our signalling processes would just change architecture. The 'wet' human forms would still co-exist as long as conditions allowed it, same with chip-based forms.

Quote

Again, I don't see that it's necessary. 

Evolution is a blind watchmaker, we do what we do. Necessity is moot. On the large scale it's not directed. Evolutionary predecessors will co-exist as long as conditions allow.

You seem not to have grasped what I'm saying.

 All I'm saying is autonomous AI becomes a new path of human manifestation that may or may not evolve away from us. That does not mean AI needs to destroy humans.

Quote

I rather think AI will not have that problem; it is much better set up to repair, augment and rationalize itself.

I was rhetorically saying in my post that we don't have a metaphysical basis, as Alzheimer's demonstrates.


8 hours ago, TheVat said:

The simplest argument I can offer you is: the emergence of artificial consciousness is possible because the consciousness we all know intimately has in fact emerged from matter, molecules which evolved the ability to both represent and to create information. 

That process doesn't sound like an engineering process. One of the principles I'm attempting to convey is that consciousness may not necessarily be a "thing" to be "added" to X, or to "emerge from" X, in order for X to be conscious. If algorithmic functional design (which includes "evolutionary" functions) deprives consciousness, then there is something (attribute or not) that must be removed from X in order for X to be conscious. This is the reverse of the conventional assumption.

I don't know whether it's okay for me to post separate replies to separate responses, or I am required to consolidate replies. Excuse me in advance if I'm not adhering to forum convention. I'm going to get to everything one by one.


Posted (edited)
15 minutes ago, AIkonoklazt said:

That process doesn't sound like an engineering process. One of the principles I'm attempting to convey is that consciousness may not necessarily be a "thing" to be "added" to X, or to "emerge from" X, in order for X to be conscious. If algorithmic functional design (which includes "evolutionary" functions) deprives consciousness, then there is something (attribute or not) that must be removed from X in order for X to be conscious. This is the reverse of the conventional assumption.

I don't know whether it's okay for me to post separate replies to separate responses, or I am required to consolidate replies. Excuse me in advance if I'm not adhering to forum convention. I'm going to get to everything one by one.

You can post point by point if you want. If you quote each person in separate posts, provided no one else has posted in the interim, they will automatically be merged if it occurs within a certain time. Just try things out. There is a sandbox to play with forum controls in:

https://www.scienceforums.net/forum/99-the-sandbox/

Edited by StringJunky

Posted (edited)
6 hours ago, Peterkin said:

I don't see why. I mean, why does God have to start with Adam instead of pond-scum?

I think that must be so. And it would be an accidental byproduct of spaghetti code on a buildup of multiple layers of upgrades and add-ons and a number of machines connected in a network; the programmer wouldn't intend it or even know about it.

Shouldn't the byproduct of functions/upgrades be yet another function/"upgrade," whether software or hardware?

I don't see how or why opacity would induce a change in nature. Opacity is simply opacity.

6 hours ago, Endy0816 said:

I find some points problematic.

ie. That no human will mistake an ape for a person, but might apparently mistake a machine for one. That rather suggests an issue with training data and/or limited senses. Plato's Cave.

Likewise dismiss randomness as mere symbols, but those symbols can also be the entire program. One could easily create an Infinite Monkey Program and have it generate every other program.

At some point feel can be a co-creation. There's our code and then there's this outside input that can find it's way in.

 

Do have to say is an excellent first post.

Even if the machine gives perfect results, it still wouldn't be learning, as the article shows.

It is theoretically possible for an AGI to perform all tasks it is given perfectly (i.e. perfect intelligence) without it ever being conscious. Intelligence (performance) and consciousness (presence of subjective phenomena), as stressed at the beginning of the article, are separate matters.

I don't see how every possible algorithm being created leads to an algorithm not being an algorithm. An algorithm's fundamental nature remains.

Edited by AIkonoklazt
i.e... not e.g.

2 hours ago, StringJunky said:

You seem not to have grasped what I'm saying.

Evidently. It sounded to me as if, once AI becomes autonomous, humans as we know them will cease to exist. I see no reason why this should be.

 

2 hours ago, StringJunky said:

I was rhetorically saying in my post that we don't have a metaphysical basis, as Alzheimer's demonstrates.

Of course we don't have a metaphysical basis. Nothing has a metaphysical basis. Metaphysics is as far from basics as human imagination gets, short of gods and composite fabled beasties.

 

39 minutes ago, AIkonoklazt said:

Shouldn't the byproduct of functions/upgrades be yet another function/"upgrade," whether software or hardware?

I don't see how or why opacity would induce a change in nature. Opacity is simply opacity.

Yeah. Until it turns into something else. Like mud was just mud, until the pond-scum started moving to the warm end. All I mean is, you don't know and can't predict what accidental byproducts may emerge from complexity.

It's improbable. It may never happen. But life was improbable and happened anyway.


Posted (edited)
34 minutes ago, Peterkin said:

Yeah. Until it turns into something else. Like mud was just mud, until the pond-scum started moving to the warm end. All I mean is, you don't know and can't predict what accidental byproducts may emerge from complexity.

It's improbable. It may never happen. But life was improbable and happened anyway.

That's not a valid parallel. Organic matter reorganized itself through configuration changes, and while there were external pressures, the process was innate. The same couldn't be said of an algorithm: the impetus is still external. See the article's section, "Volition Rooms — Machines can only appear to possess intrinsic impetus."

8 hours ago, grantcas said:

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first. The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing. I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. 
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

The theory is basically a variety of functionalism. See the article's section, "Functionalist objections (My response: They fail to account for underdetermination)."

In addition, consciousness isn't required for perfect visible functionality as I've indicated in my previous reply to Endy0816.

10 hours ago, TheVat said:

Searle's model (which he himself has somewhat recanted in recent years)

An aside: may I have a pointer or two to literature describing his recantation? I'm just feeling extra disappointed in Searle right now. First the sexual harassment scandal, and now this.

Edited by AIkonoklazt
typo. to, two

Posted (edited)
15 hours ago, AIkonoklazt said:

....

An aside- May I have a pointer or two to literature describing his recantation? I'm just feeling extra disappointed in Searle right now. First the sexual harassment scandal, and now this.

It's been a while, so all I can say is that, while still asserting that a traditional programmer-coded computer, uncomprehendingly manipulating symbols on the basis of syntax rather than meaning, would never be conscious, he somewhat softened on his bio-chauvinism (where he had formerly been insistent that only biology can have intentionality) and allowed that a neural net with self-plasticity and so on could perhaps be conscious.

I think he tried to preserve his earlier position by saying that if you have to mimic biological structures and dynamics so much to engineer a truly sentient machine, then you have admitted that purely syntactical processing never will suffice. I find it all a bit circular: if I can define machine intelligence narrowly enough, I can demonstrate it is not conscious. Searle, in the final analysis, only proves the limits of his own definition of AI.

He liked to say that water doesn't gush from a computer simulating a rainstorm, which was superficially clever, but that always seemed to require us to ignore that computers really can move information around, so if we simulate something that moves information around, like a brain, that's a rather different thing than simulating raindrops.  

Your smartphone calculator doesn't simulate doing math.  It actually does math.  An AI doesn't have to simulate wet and squishy just because brains are wet and squishy.  It has only to think, and think with meanings and intentions.  Like a brain.

 

I guess one could take a Penrose stance and posit that consciousness must be non-algorithmic, able to intuitively overcome Gödelian incompleteness and self-reference via some quantum states of superposition unique to wetware brains. But why unique to biology? For me, Penrose's non-algorithmic processes simply beg the question of why a quantum computer couldn't step up and fill those intuition shoes with its massive states of superposition. If the task is simply to perceive the ambiguity of a myriad of superposed states, why are we certain this perceptive process could not be engineered?

Edited by TheVat
Adds

Can I ask if this paper is a version of the Drake equation, suggesting that the probability of any artificially constructed system becoming self-aware, sentient, or conscious is so low that it can't happen?


Posted (edited)
On 5/29/2022 at 9:11 PM, AIkonoklazt said:

Shouldn't the byproduct of functions/upgrades be yet another function/"upgrade," whether software or hardware?

I don't see how or why opacity would induce a change in nature. Opacity is simply opacity.

Even if the machine gives perfect results, it still wouldn't be learning as the article shows.

It is theoretically possible for an AGI to perform all tasks it is given perfectly (i.e. perfect intelligence) without it ever being conscious. Intelligence (performance) and consciousness (presence of subjective phenomena), as stressed at the beginning of the article, are separate matters.

I don't see how every possible algorithm being created leads to an algorithm not being an algorithm. An algorithm's fundamental nature remains.


The author Jorge Luis Borges wrote a story about the main character finding the infinite and randomly ordered 'Book of Sand'.

Would one's transcription of page numbers from that book be an act of purely human creation?

If you create a program incorporating an external source of randomness, are the results, potentially every program and every calculation, purely your doing?

 

Speaking for myself, this suggests algorithms can be as independent as we are and possess some separate agency. At the same time, without that external randomness they can be as deterministic as a rock. Just depends.

On learning I'd say there's simply multiple options for finding a solution within a search space. Sometimes we lack previous knowledge to build upon too.

They might still be pretty dumb for a long time, though. Like trying to compare an insect brain to ours: very, very fast, but without the ridiculous number of connections per neuron.
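The "Infinite Monkey Program" idea raised earlier in the thread can be sketched in a few lines (a purely hypothetical toy; the function name and use of Python's `random` module are my own illustration): a short, fixed generator of random text would, given unbounded time, eventually emit the source of any program, yet the generator itself remains an ordinary algorithm with randomness as just one more input.

```python
import random
import string

def monkey_text(rng, length):
    """Emit one random string: a single burst of 'monkey keystrokes'."""
    return "".join(rng.choice(string.printable) for _ in range(length))

# Given unbounded time, such a generator would eventually produce the
# source text of every possible program, yet it is itself a short,
# fixed, ordinary procedure with randomness supplied as an input.
rng = random.Random(0)  # stand-in for an external randomness source
samples = [monkey_text(rng, 12) for _ in range(3)]
print(samples)
```

Whether that enumeration amounts to independent agency, or just more symbol manipulation, is exactly the point under dispute in this thread.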

Edited by Endy0816

Posted (edited)
On 5/30/2022 at 10:14 AM, TheVat said:

It's been awhile, so all I can say is that while still asserting that a traditional programmer-coded computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning, would never be conscious, he somewhat softened on his bio-chauvinism (where he had formerly been insistent that only biology can have intentionality) and allowed that a neural net with self-plasticity and so on, could perhaps be conscious.  

I think he tried to preserve his earlier position by saying that if you have to mimick biological structures and dynamics so much to engineer a truly sentient machine, then you have admitted that purely syntactical processing never will.  I find it all a bit circular: if I can define machine intelligence narrowly enough, I can demonstrate it is not conscious.  Searle, in the final analysis, only proves the limits of his own definition of AI.

He liked to say that water doesn't gush from a computer simulating a rainstorm, which was superficially clever, but that always seemed to require us to ignore that computers really can move information around, so if we simulate something that moves information around, like a brain, that's a rather different thing than simulating raindrops.  

Your smartphone calculator doesn't simulate doing math.  It actually does math.  An AI doesn't have to simulate wet and squishy just because brains are wet and squishy.  It has only to think, and think with meanings and intentions.  Like a brain.

 

I guess one could take a Penrose stance and posit that consciousness must be non algorithmic, can intuitively overcome Godelian incompleteness, and self-reference via some quantum states of superposition unique to wetware brains.  But why unique to biology?  For me Penrose's non algorithmic processes simply beg the question of why a quantum computer couldn't step up and fill those intuition shoes with its massive states of superposition.   To simply perceive the ambiguity of a myriad of superposed states, why are we certain this perceptive process could not be engineered?  

If that's what Searle did to his Chinese Room then I'd rather just ignore his recantation and go with his original argument, because his new version doesn't really make sense to me. It seems to complicate the argument without improving his position. I don't know. If he allowed something like "self-plasticity" to change his opinion then he messed up. Reference my article, section "Volition Rooms — Machines can only appear to possess intrinsic impetus." There's no more "self-plasticity" in any theoretical machine than there is "self-drive" in a car.

Anyhow, I think my "it's intrinsic impetus" might sound more reasonable to people than Searle's "it's biology."

I wouldn't go for Penrose's stance, because I prefer to steer completely clear of theorizing in my argumentation. I'll put it this way: trying to disprove a theory by using yet another theory would be like trying to hit a sand castle with a ball of sand. It's much easier to defend my position if I stick to fundamental principles and self-evident observations. The less I speculate, the more solid my footing.

On 5/30/2022 at 3:38 PM, studiot said:

Can I ask if this paper is a version of the Drake equation suggesting that the probability of any artificially constructed system becoming self aware, sentient or conscious is so low that it can't happen ?

It's not. It denies consciousness for all algorithmic entities equally.

41 minutes ago, Endy0816 said:

If you create a program incorporating an external source of randomness, are the results, potentially every program and every calculation, purely your doing?

Randomness only provides another input to the algorithm. Again, the result is more symbol manipulation. It's not doing anything different, as the article indicated in the section "Randomness and random number generators."
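To make that concrete, here's a toy sketch of my own (the function and names are illustrative, not from the article): an "agent" fed a value from an external randomness source, whose output is nonetheless a fixed function of its inputs.

```python
import hashlib

def respond(state: str, user_input: str, noise: int) -> str:
    """Toy 'agent' fed by an external randomness source.

    The random value is just a third argument: once it arrives, the
    mapping from (state, user_input, noise) to output is fixed by this
    code. It's symbol manipulation all the way down.
    """
    digest = hashlib.sha256(f"{state}|{user_input}|{noise}".encode()).hexdigest()
    return f"reply-{digest[:8]}"

# Different noise values select different outputs, but the process is
# the same deterministic mapping either way: same inputs, same output.
print(respond("s0", "hello", 42) == respond("s0", "hello", 42))  # True
```

Swap the hash for any function you like; the randomness never becomes anything other than one more argument.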


On 6/1/2022 at 3:40 AM, AIkonoklazt said:
On 5/30/2022 at 11:38 PM, studiot said:

Can I ask if this paper is a version of the Drake equation suggesting that the probability of any artificially constructed system becoming self aware, sentient or conscious is so low that it can't happen ?

It's not. It denies consciousness for all algorithmic entities equally.

I don't recall either you or I specifying an 'algorithmic' entity.

If you wish to limit your 'entities' to algorithmic ones, you should specify this as the issue becomes quite different.

 


3 hours ago, studiot said:

I don't recall either you or I specifying an 'algorithmic' entity.

If you wish to limit your 'entities' to algorithmic ones, you should specify this as the issue becomes quite different.

 

The issue is the same. Reference section: “Your argument only applies to Von Neumann machines”

A fundamental concept here is intrinsic impetus being denied through the process of design.

Any and all automation involves algorithms.


On 5/31/2022 at 10:40 PM, AIkonoklazt said:

Randomness only provides another input to the algorithm. Again, the result is more symbol manipulation. It's not doing anything different, as the article indicated in section "Randomness and random number generators"

But it wouldn't remain just an input. The rules being applied can rewrite the rules themselves, in a nondeterministic fashion.

Programs do all exist in some physical form, too. They're not really the symbols we use to represent what they do.
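That kind of rule-rewriting can be sketched in a few lines (a toy illustration, not anyone's proposed architecture): a dispatch table whose rules, selected by an external randomness source, can add new rules to the table.

```python
import secrets

# Toy rule table mapping rule names to integer functions.
rules = {"double": lambda x: 2 * x, "negate": lambda x: -x}

def step(x: int) -> int:
    # An external randomness source picks which rule fires...
    name = secrets.choice(sorted(rules))
    # ...and a fired rule may rewrite the rule set itself.
    if name == "double":
        rules["triple"] = lambda x: 3 * x  # the program grows a new rule
    return rules[name](x)
```

Whether this amounts to anything more than a larger mapping over (state, random input) is, of course, exactly the point under dispute.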

