
Posts posted by moreno7798

  1. On 8/15/2022 at 10:40 AM, Prometheus said:

    If he was accessing other 'processes' then he was not dealing with Lamda. 

    If he has been giving information out about Google's inner workings I'm not surprised he had to leave, I'm sure he violated many agreements he made when signing up with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that. 

    I may be veering a little off topic here, but perhaps it would be beneficial to understand him a bit more, to form a more rounded opinion about why (or how) he came to the sentience conclusion. I found that the H3 podcast did an in-depth interview about his background. It's interesting.


  2. On 8/5/2022 at 11:01 AM, Prometheus said:

    It's not an analogous situation for (at least) 2 reasons.

    Someone with no senses other than hearing is still not 'trained' only on words, as words form only part of our auditory experience. Nor does LaMDA have any auditory inputs, including words. The text is fed into the model as tokens (not quite the same as words, but close).

    The human brain/body is a system known, in the most intimate sense, to produce consciousness. Hence, we are readily willing to extend the notion of consciousness to other people, notwithstanding edge cases such as brain-stem death.

    I suspect a human brought up truly only on a single sensory type would not develop far past birth (remembering the 5 senses model was put forward by Aristotle and far under-estimates the true number).

    As Blake Lemoine stated, he was not having a conversation with just the text chatbot; he was accessing the system for generating chatbots, which, by his words, is a system that processes all of Google's data acquisition assets, including vision, audio, language, and all of the internet data assets available to Google. What do you make of Blake Lemoine?

  3. On 8/1/2022 at 5:54 AM, Prometheus said:

    If you skip the clickbait videos and go to the actual publication (currently available as a pre-print), you'll see exactly what LaMDA has been trained on: 1.56 trillion words. Just text, 90% of it English.


    Level 17 and level 32.

    It raises the question: Is a person who is born blind and paralyzed, without sense of touch from the neck down, not trained on words? And would that disqualify them from being sentient?

  4. On 7/25/2022 at 4:43 AM, Prometheus said:

    The entire universe exposed to LaMDA is text. It doesn't even have pictures to associate with those words, and has no sensory inputs. To claim that LaMDA, or any similar language model, has consciousness is to claim that language alone is a sufficient condition for consciousness. Investigating the truth of that implicit claim gives us another avenue to explore.

    That appears to be incorrect. Blake Lemoine has stated that LaMDA is NOT just a text-based chatbot; it is trained on the entirety of Google's data acquisition assets. Watch the video below:


  5. 2 hours ago, StringJunky said:

    Read a bit about this. It's very well done, but it's not real. Lemoine's intent was to bring attention to the potential consequences of the Google organisation ignoring its sentience when it happens. He's thinking ahead because, being an emergent process, sentience will just happen when sufficient complexity arises in the system.

    The conversation between him and the AI is real (as far as what he has said). The video above just recreates it with text-to-speech software. I agree that it is not clear whether the AI is sentient. Blake Lemoine has stated in another video, though, that he does believe (based on his beliefs) that it is sentient. I'll post that interview. He's kind of an interesting guy, although I don't agree with some of his beliefs myself.

    This is an interview with Blake Lemoine. It's interesting.


  6. On 5/28/2022 at 8:01 PM, AIkonoklazt said:

    Informal introduction:

    I've tried other places of debate and discussion (most notably Reddit and LinkedIn), but they inevitably devolve into hostility. Some are hostile and insulting from the get-go; others descend into it after a few messages. The Ars Technica forum locked me out before I could even respond to questions. I'm going to give this a go one last time before giving online discussion forums a rest.

    Purpose of Discussion:

    To advance this specific topic through challenge. As of now, avenues of counterargumentation seem to have been exhausted; additional arguments I've received since the publication of my article have all fallen into categories I've already addressed. I'm looking for types of counterarguments that I haven't seen.

    The original article is linked here for reference only (full text below): https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46

    Full text of my article:

    Artificial Consciousness Is Impossible
    Conscious machines are staples of science fiction that are often taken for granted as articles of future fact, but they are not possible.

    This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. The latter half of the article is dedicated to addressing counterarguments. Lastly, some implications of the title thesis are listed.

    Intelligence versus consciousness
    Intelligence is the ability of an entity to perform tasks, while consciousness refers to the presence of a subjective phenomenon.


    “…the ability to apply knowledge to manipulate one’s environment”


    “When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view.”

    Requirements of consciousness
    A conscious entity, i.e., a mind, must possess:

    1. Intentionality[3]:

    “Intentionality is the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs.”

    Note that this is not a mere symbolic representation.

    2. Qualia[4]:

    “…the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia.”

    Meaning and symbols
    Meaning is a mental connection between something (concrete or abstract) and a conscious experience. Philosophers of mind call the power of the mind that enables these connections intentionality. Symbols only hold meaning for entities that have made connections between their conscious experiences and the symbols.

    The Chinese Room, reframed
    The Chinese Room is a philosophical argument and thought experiment published by John Searle in 1980[5]:

    As it stands, the Chinese Room argument needs reframing. The person in the room has never made any connections between his or her conscious experiences and the Chinese characters; therefore, neither the person nor the room understands Chinese. The central issue should be the absence of connecting conscious experiences, not whether there is a proper program that could turn anything into a mind (which is the same as saying that if a program X is good enough, it would understand statement S. A program is never going to be "good enough" precisely because it's a program, as I will explain in a later section). This original, vague framing derailed the argument and made it more open to attacks. (One such attack resulting from the derailment was Sloman's[6].)

    The Chinese Room argument points out the legitimate issue of symbolic processing not being sufficient for any meaning (syntax doesn’t suffice for semantics) but with framing that leaves too much wiggle room for objections. Instead of looking at whether a program could be turned into a mind, we instead delve into the fundamental nature of programs themselves.

    Symbol Manipulator, a thought experiment
    The basic nature of programs is that they are free of conscious associations which compose meaning. Programming codes contain meaning to humans only because the code is in the form of symbols that contain hooks to the readers’ conscious experiences. Searle’s Chinese Room argument serves the purpose of putting the reader of the argument in place of someone that has had no experiential connections to the symbols in the programming code. Thus, the Chinese Room is a Language Room. The person inside the room doesn’t understand the meaning behind the programming code, while to the outside world it appears that the room understands a particular human language.

    The Chinese Room Argument comes with another potentially undermining issue. The person in the Chinese Room was introduced as a visualization device to get the reader to "see" from the point of view of a machine. However, since a machine can't have a "point of view" because it isn't conscious, having a person in the room invites the objection that "there's a conscious person in the room doing conscious things."

    I will work around the POV issue and clarify the syntax versus semantics distinction by using the following thought experiment:

    You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?

    All programs manipulate symbols this way. Program codes themselves contain no meaning. To machines, they are sequences to be executed with their payloads and nothing more, just like how the Chinese characters in the Chinese Room are payloads to be processed according to sequencing instructions given to the Chinese-illiterate person and nothing more.
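    The shape-memorization procedure above can be sketched as a pure lookup table. A minimal Python illustration follows; the shape names and the rule table are invented for this sketch, and nothing in it carries meaning to the program itself:

```python
# A purely syntactic "Symbol Manipulator": sequences of shapes in,
# prescribed sequences of shapes out. The rules are arbitrary; the
# program never deals with what (if anything) the shapes stand for.
RULES = {
    ("triangle", "circle"): ("square", "square"),
    ("circle", "square"): ("triangle",),
}

def respond(shapes):
    """Return the memorized output sequence for an input sequence, or nothing."""
    return RULES.get(tuple(shapes), ())

print(respond(["triangle", "circle"]))  # ('square', 'square')
```

    Swapping every shape name for a nonsense string would change nothing about how the program runs, which is the point: the execution is indifferent to meaning.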

    The Symbol Manipulator thought experiment, with its sequences and payloads, not only generalizes programming code; it is also a generalization of an algorithm: "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."[7]

    The relationship between the shapes and sequences is arbitrarily defined, not causally determined. Operational rules are simply what's programmed in, not necessarily matching any sort of worldly causation, because any such links would be an accidental feature of the program and not an essential one (i.e., by happenstance and not necessity). The program could be given any input to resolve, and the machine would follow, not because it "understands" any worldly implications of either the input or the output, but simply because it's following the dictates of its programming.

    A very rough example, rendered here as runnable Python rather than loose pseudocode, to illustrate this arbitrary relationship:

    p = "night"
    R = input()
    if R == "day":
        print(p + " is " + R)

    Now, if I type "day", the output is "night is day". Great. Absolutely "correct output" according to its programming. It doesn't necessarily "make sense", but it doesn't have to, because it's the programming! The same goes for any other input that gets fed into the machine to produce output, e.g., "nLc is auS", "e8jey is 3uD4", and so on.

    To the machine, codes and inputs are nothing more than items and sequences to execute. There’s no meaning to this sequencing or execution activity to the machine. To the programmer, there is meaning because he or she conceptualizes and understands variables as representative placeholders of their conscious experiences. The machine doesn’t comprehend concepts such as “variables”, “placeholders”, “items”, “sequences”, “execution”, etc. It just doesn’t comprehend, period. Thus, a machine never truly “knows” what it’s doing and can only take on the operational appearance of comprehension.

    Understanding Rooms — Machines ape understanding
    The room metaphor extends to all artificially intelligent activities. Machines only appear to deal with meaning, when they ultimately translate everything into machine language instructions at a level that is devoid of meaning and concerned with execution alone (the mechanism underlying all machine program execution is illustrated by the shape memorization thought experiment above; a program only contains meaning for the programmer). The Chinese Room and the Symbol Manipulator thought experiments show that while our minds understand and deal with concepts, machines don't, and only deal with sequences and payloads. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. Machines that appear to understand language and meaning are by their nature "Understanding Rooms" that only take on the outward appearance of understanding.

    Learning Rooms — Machines never actually learn, partly because the mind isn't just a physical information processor
    The direct result of a machine’s complete lack of any possible genuine comprehension and understanding is that machines can only be Learning Rooms that appear to learn but never actually learn. Considering this, “machine learning” is a widely misunderstood and arguably oft-abused term.

    AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

    Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].

    Possessing only physical information and doing so without comprehension, machines hack the activity of learning by engaging in ways that defy the experiential context of the activity. A good example is how a computer artificially adapts to a video game with brute force instead of learning anything[10].

    In the case of "learning to identify pictures", machines are shown a couple hundred thousand to millions of pictures and, through many failures of seeing "gorilla" in bundles of "not gorilla" pixels, eventually come to correctly match bunches of pixels on the screen to the term "gorilla"… except that they don't even do that well all of the time[11].

    Needless to say, “increasing performance of identifying gorilla pixels” through intelligence is hardly the same thing as “learning what a gorilla is” through conscious experience. Mitigating this sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything[12].

    "Learning machines" are "Learning Rooms" that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning, and simulate the result of learning, but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla for those that compose an image of a dark-skinned human being. Machines don't learn; they pattern match, and only pattern match. There's no actual personal experience associating a person's face with that of a gorilla's. When was the last time a person honestly mistook an animal's face for a human's? Sure, we may see resemblances and deem those animal faces to be human-like, but we recognize them as resemblances, not actual matches. Machines are fooled by "abstract camouflage" (adversarially generated images) for the same reason[13]. These mistakes are mere symptoms of a lack of genuine learning; machines still wouldn't be learning even if they gave perfect results. Fundamentally, "machine learning" is every bit as distant from actual learning as the simple spreadsheet database updates mentioned in the AI textbook earlier.

    Volition Rooms — Machines can only appear to possess intrinsic impetus
    The fact that machines are programmed dooms them to be appendages, extensions of the will of their programmers. A machine's design and its programming constrain and define it. There's no such thing as a "design without a design" or "programming without programming." A machine's operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as "a program/machine evolved" (Who designed the evolutionary algorithm?), "no one knows how the resulting program in the black box came about" (Who programmed the program which produced the resultant code?), "the neural net doesn't have a program" (Who wrote the neural net's algorithm?), "the machine learned and adapted" (It doesn't "learn"… Who determined how it would adapt?), and "there's self-modifying code" (What determines the behavior of this so-called "self-modification"? Because it isn't "self.") There's no hiding or escaping from what ultimately produces the behaviors: the programmers' programming.

    Let's take another look at Searle's Chinese Room. Who or what wrote the program that the man in the Chinese Room followed? Certainly not the man, because he doesn't know Chinese, and certainly not the Chinese Room itself. As indicated earlier in the passage regarding learning, this Chinese Room didn't "learn Chinese" just by having instructions placed into the room, any more than a spreadsheet "learns" items written onto it. Neither the man nor the Chinese Room was "speaking Chinese"; they were merely following the instructions of the Chinese-speaking programmer of the Chinese Room.

    It's easy to see how terms such as "self-driving cars" aren't exactly apt when programmers programmed their driving. This means that human designers are ultimately responsible for a machine's failures when it comes to programming; anything else would be an attempt to shirk responsibility. "Autonomous vehicles" are hardly autonomous. They no more learn how to drive, or drive themselves, than a Chinese Room learns Chinese or speaks Chinese. Designers and programmers are the sources of a machine's apparent volition.

    Consciousness Rooms — Conclusion, machines can only appear to be conscious
    Artificial intelligence that appears to be conscious is a Consciousness Room, an imitation with varying degrees of success. As I have shown, they are neither capable of understanding nor learning. Not only that, they are incapable of possessing volition. Artificial consciousness is impossible due to the extrinsic nature of programming which is bound to syntax and devoid of meaning.

    Responses to counterarguments
    The following segments are responses to specific categories of counterarguments against my thesis. Please note that these responses do not stand on their own and can only be seen as supporting my main arguments above. Each response only applies to those who hold the corresponding objections.

    From the conclusion, operating beyond syntax requires meaning derived from conscious experience. This may make the argument appear circular (assuming what it’s trying to prove) when conscious experience was mentioned at the very beginning of the argument as a defining component of meaning.

    However, the initial proposition defining meaning (“Meaning is a mental connection with a conscious experience”) wasn’t given validity as a result of the conclusion or anything following the conclusion; it was an observation independent of the conclusion.

    Functionalist objections (My response: They fail to account for underdetermination)
    Many objections come in one form of functionalism or another. That is, they all go along one or more of these lines:

    · If we know what a neuron does, then we know what the brain does.

    · If we can copy a brain or reproduce collections of neurons, then we can produce artificial consciousness.

    · If we can copy the functions of a brain, we can produce artificial consciousness.

    No functionalist arguments work here, because to duplicate any function there must be ways of ensuring all functions and their dependencies are visible and measurable. There is no “copying” something that’s underdetermined. The functionalist presumptions of “if we know/if we can copy” are invalid.

    Underdetermination entails no such exhaustive modeling of the brain is possible, as explained by the following passage from SEP (emphasis mine)[14]:

    In short, we have no assurances that we could engineer anything “like X” when we can’t have total knowledge of this X in the first place. There could be no assurances of a complete model due to underdetermination. Functionalist arguments fail because correlations in findings do not imply causation, and those correlations must be 100% discoverable to have an exhaustive model. There are multiple theoretical strikes against a functionalist position even before looking at actual experiments such as this one:

    Repeated stimulations of identical neuron groups in the brain of a fly produce random results. This physically demonstrates the underdetermination[15]:

    In the above-quoted passage, note all instances of the phrases “may be” and “could be.” They are indications of underdetermined factors at work. No exhaustive modeling is possible when there are multiple possible explanations from random experimental results.

    Functionalist Reply: “…but we don’t need exhaustive modeling or functional duplication”
    Yes, we do, because there isn’t any assurance that consciousness is produced otherwise. A plethora of functions and behaviors can be produced without introducing consciousness; There are no real measurable external indicators of success. See section “Behaviorist Objections” below.

    Behaviorist objections
    These counterarguments generally say that if we can reproduce conscious behaviors, then we have produced consciousness. For instance, I completely disagree with a Scientific American article claiming the existence of a test for detecting consciousness in machines[16].

    Observable behaviors don’t mean anything, as the original Chinese Room argument had already demonstrated. The Chinese Room only appears to understand Chinese. The fact that machine learning doesn’t equate to actual learning also attests to this.

    Emergentism via machine complexity
    Counterexamples to complexity emergentism include the number of transistors in a phone processor versus the number of neurons in the brain of a fruit fly. Why isn’t a smartphone more conscious than a fruit fly? What about supercomputers that have millions of times more transistors? How about space launch systems that are even more complex in comparison… are they conscious? Consciousness doesn’t arise out of complexity.

    Cybernetics and cloning
    If living entities are involved then the subject is no longer that of artificial consciousness. Those would be cases of manipulation of innate consciousness and not any creation of artificial consciousness.

    “Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”
    The substrate has nothing to do with the issue. All artificially intelligent systems require algorithms and code. All are subject to programming in one way or another. It doesn't matter how far in the future one goes or what substrate one uses; the fundamental syntactic nature of machine code remains. Name one single artificial intelligence project that doesn't involve any code whatsoever. Name one way that an AI can violate the principle of noncontradiction and possess programming without programming (see section "Volition Rooms" above).

    “We have DNA and DNA is programming code”
    DNA is not programming code. Genetic makeup only influences and does not determine behavior. DNA doesn’t function like machine code, either. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows their workings to be underdetermined, while programming code is functionally determinate in contrast (There’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do. See section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

    “But our minds also manipulate symbols”
    Just because our minds can deal with symbols doesn't mean they operate symbolically. We can experience and recollect things for which we have not yet formulated proper descriptions[18]. In other words, we can have indescribable experiences. We start with non-symbolic experiences, then subsequently concoct symbolic representations for them in our attempts to rationally organize and communicate those experiences.

    A personal anecdotal example: My earliest childhood memory was of lying on a bed looking at an exhaust fan on a window. I remember what I saw back then, even though at the time I was too young to have learned words and terms such as "bed", "window", "fan", "electric fan", or "electric window exhaust fan". Sensory and emotional recollections can be described with symbols, but the recollected experiences themselves aren't symbolic.

    Furthermore, the medical phenomenon of aphantasia demonstrates visual experiences to be categorically separate from descriptions of them[19].

    Randomness and random number generators
    Randomness is a red herring when it comes to serving as an indicator of consciousness (not to mention the dubious nature of all external indicators, as shown by the Chinese Room Argument). A random number generator inside a machine would simply provide another input, ultimately serving only to generate more symbols to manipulate.
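    To make this concrete, here is a minimal, hypothetical sketch: wiring a random number generator into a symbol-manipulating program just yields one more stream of symbols for the rule table to process. The table and symbol names are invented for illustration:

```python
import random

# A trivial, arbitrary symbol-manipulating rule table (hypothetical).
TABLE = {0: "alpha", 1: "beta", 2: "gamma"}

random.seed(42)                 # the RNG is just one more input source
symbol = random.randrange(3)    # a random symbol arrives as input
print(TABLE[symbol])            # the program still only maps symbols per its table
```

    Whether the input comes from a keyboard, a sensor, or a random number generator, the program's behavior is the same table lookup either way.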

    “We have constructed sophisticated functional neural computing models”
    The existence of sophisticated functional models in no way helps functionalists escape the functionalist trap. Those models are still heavily underdetermined, as shown by a recent example of an advanced neural learning algorithm[20].

    The model is very sophisticated, but note just how much underdetermined couching it contains:

    Models are far from reflecting the functioning neural groups present in living brains; I highly doubt that any researcher would make such a claim, for that's not their goal in the first place. Models can and do produce useful functions and can be practically "correct", even if those models are factually "wrong" in that they don't necessarily correspond to actuality in function. In other words, models don't have to correspond 100% to reality in order to work; thus their factual correctness is never guaranteed. For example, orbital satellites could still function without considering relativistic effects, because most relativistic effects are too small to be significant in satellite navigation[21].

    “Your argument only applies to Von Neumann machines”
    It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights. The programming language of a catapult is contained within the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more "does things on its own" than a catapult flings itself.

    “Your thought experiment is an intuition pump”
    To take this avenue of criticism, one would have to demonstrate the alleged abuse of reasoning I supposedly engage in. Einstein also used "folk" concepts in his thought experiments regarding reference frames[23], so are thought experiments being discredited en masse here, or just mine? A vague reply of "thought experiments can be abused" fails to field a clear criticism and is unproductive. Do people think my analogy is even worse than the stale stratagem of casting the mind as an analog of the prevailing technology of the day (first hydraulics, then telephones, then electrical fields, and now computers[24])? Would people feel better if they performed my experiment with patterned index cards they can hold in their hands instead? The criticism needs to be specific.

    Lack of explanatory power (My response: Demonstrating the falsity of existing theories doesn’t demand yet another theory)
    Arguing for or against the possibility of artificial consciousness doesn't make many inroads into the actual nature of consciousness, but that doesn't detract from the thesis, because the goal here isn't to explicitly define the nature of consciousness. "What consciousness is" (i.e., its nature) isn't being explored here so much as "what consciousness doesn't entail," which can still be determined via its requirements. There have been theories surrounding the differing "conscious potential" of various physical materials, but those theories have largely shown themselves to be bunk[25]. Explanatory theories are neither needed for my thesis nor productive in proving or disproving it. The necessary fundamental principles were already provided (see section "Requirements of consciousness").

    On panpsychism
    (A topic that has been popular on SA in recent years[26])

    I don’t subscribe to panpsychism, but even if panpsychism is true, the subsequently possible claim that “all things are conscious” is still false because it commits a fallacy of division. There is a difference in kind from everything to every single thing. The purported universal consciousness of panpsychism, if it exists, would not be of the same kind as the ordinary consciousness found in living entities.

    Some examples of such categorical differences: Johnny sings, but his kidneys don’t. Johnny sees, but his toenails don’t. Saying that a lamp is conscious in one sense of the word simply because it belongs in a universe that is “conscious” in another would be committing just as big of a category mistake as saying that a kidney sings or a toenail sees.

    A claim that all things are conscious (including an AI) as a result of universal consciousness would conflate two categories simply due to the lack of terms separating them. Just because the term "consciousness" connects all things for the adherents of universal consciousness doesn't mean the term itself should be used equivocally. Panpsychist philosopher David Chalmers writes[27]:

    “If it looks like a duck…” (A tongue-in-cheek rebuke to a tongue-in-cheek behaviorist challenge)
    If it looks like a duck, swims like a duck, and quacks like a duck, but you know that the duck is an AI duck, then you have a fancy duck automaton. "But hold on, what if no one could tell?" Then it's a fancy duck automaton that no one can tell from an actual duck, probably because all of its manufacturing documentation was destroyed and the programmer died without telling anyone that it's an AI duck. It's still not an actual duck, however. Cue responses such as "Then we can get rid of all evidence of manufacturing" and other quips, which I deem grasping at straws and intellectually dishonest. If someone constructs a functionally perfect and visually indistinguishable artificial duck just to prove me wrong, then that's a waste of effort; its identity would have to be revealed for the point to be "proven." At that point, the revelation would prove me correct instead.

    The "duck reply" is another behaviorist objection rendered meaningless by the Chinese Room Argument (see section "Behaviorist objections" above).

    “You can’t prove to me that you’re conscious”
    This denial trades on the same empirically non-demonstrable fact as the duck objection above. We’re speaking of metaphysical facts, not the mere ability or inability to obtain them. That being said, the starting point for either acknowledging or skeptically denying consciousness should be the question “Do I deny the existence of my consciousness?” and not “Prove yours to me.”

    There is no denying the existence of one’s own consciousness, and it would be an exercise in absurdity to question it in other people once we acknowledge ourselves to be conscious. When each of us encounters another person, do we first assume the possibility that we’re merely encountering a facsimile of a person, then check to see whether that person is a person, and only upon satisfaction begin to think of the entity as a person? No, unless someone is suffering from delusional paranoia. We wouldn’t want to create a world where this absurd paranoia becomes feasible, either (see the section below.)

    Some implications with the impossibility of artificial consciousness
    1. AI should never be given moral rights. Because they can never be conscious, they are less deserving of rights than animals. At least animals are conscious and can feel pain[28].

    2. AI that takes on extremely close likeness to human beings in both physical appearance and behavior (i.e., crossing the Uncanny Valley) should be strictly banned in the future. Allowing them to exist only creates a world immersed in absurd paranoia (see section above). Based on my observations, many people are already confused enough about machine consciousness, thanks to the all-too-common instances of what one of my colleagues called “bad science fiction.”

    3. Consciousness could never be “uploaded” into machines. Any attempt at doing so and then “retiring” the original body before its natural lifespan would be an act of suicide. Any complete Ship of Theseus-styled bit-by-bit machine “replacement” would gradually result in the same.

    4. Any disastrous AI “calamity” would be caused by bad design/programming and only bad design/programming.

    5. Human beings are wholly responsible for the actions of their creations, and corporations should be held responsible for the misbehavior of their products.

    6. We’re not living in a simulation. Those speculations are nonsensical per my thesis:

    Given that artificial consciousness is impossible:

    - Simulated environments are artificial (by definition.)

    - Should we exist within such an environment, we must not be conscious. Otherwise, our consciousness would be part of an artificial system, which is ruled out by the impossibility of artificial consciousness.

    - However, we are conscious.

    - Therefore, we’re not living in a simulation.
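
    The argument above is a straightforward modus tollens. As a sketch, it can be checked formally in Lean 4 (the proposition names below are my own illustrative labels, not part of the original argument):

    ```lean
    -- Propositional sketch of the anti-simulation argument.
    variable (WeLiveInSimulation WeAreConscious : Prop)

    -- Premise 1: if we lived in a simulation, we would not be conscious,
    --            since artificial consciousness is impossible.
    -- Premise 2: we are conscious.
    -- Conclusion: we do not live in a simulation (modus tollens).
    theorem not_in_simulation
        (h1 : WeLiveInSimulation → ¬WeAreConscious)
        (h2 : WeAreConscious) :
        ¬WeLiveInSimulation :=
      fun hSim => h1 hSim h2
    ```

    Of course, the formal validity only shows the conclusion follows from the premises; the substantive work lies in defending premise 1, which rests on the essay’s thesis.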

    [1] merriam-webster.com, “Intelligence” (2021), https://www.merriam-webster.com/dictionary/intelligence

    [2] Internet Encyclopedia of Philosophy, “Consciousness” (2021), https://iep.utm.edu/consciou/

    [3] Stanford Encyclopedia of Philosophy, “Intentionality” (2019), https://plato.stanford.edu/entries/intentionality/

    [4] Stanford Encyclopedia of Philosophy, “Qualia” (2017), http://plato.stanford.edu/entries/qualia/

    [5] Stanford Encyclopedia of Philosophy, “The Chinese Room Argument” (2020), https://plato.stanford.edu/entries/chinese-room/

    [6] A. Sloman, Did Searle Attack Strong Strong or Weak Strong AI? (1985), Artificial Intelligence and Its Applications, A.G. Cohn and J.R. Thomas (Eds.) John Wiley and Sons 1986.

    [7] Oxford English Dictionary, “algorithm” (2021), https://www.lexico.com/en/definition/algorithm

    [8] T. Mitchell, Machine Learning (1997), McGraw-Hill Education (1st ed.)

    [9] Stanford Encyclopedia of Philosophy, “Qualia: The Knowledge Argument” (2019), https://plato.stanford.edu/entries/qualia-knowledge/

    [10] V. Highfield, AI Learns To Cheat At Q*Bert In A Way No Human Has Ever Done Before (2018), https://www.alphr.com/artificial-intell ... one-before

    [11] J. Vincent, Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech (2018), https://www.theverge.com/2018/1/12/1688 ... gorithm-ai

    [12] H. Sikchi, Towards Safe Reinforcement Learning (2018), https://medium.com/@harshitsikchi/towar ... b7caa5702e

    [13] D. G. Smith, How to Hack an Intelligent Machine (2018), https://www.scientificamerican.com/arti ... t-machine/

    [14] Stanford Encyclopedia of Philosophy, “Underdetermination of Scientific Theory” (2017), https://plato.stanford.edu/entries/scie ... rmination/

    [15] L. Sanders, Ten thousand neurons linked to behaviors in fly (2014), https://www.sciencenews.org/article/ten ... aviors-fly

    [16] S. Schneider and E. Turner, Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware (2017), https://blogs.scientificamerican.com/ob ... elf-aware/

    [17] V. Greenwood, Theory Suggests That All Genes Affect Every Complex Trait (2018), https://www.quantamagazine.org/omnigeni ... -20180620/

    [18] D. Robson, The ‘untranslatable’ emotions you never knew you had (2017), https://www.bbc.com/future/article/2017 ... ew-you-had

    [19] C. Zimmer, Picture This? Some Just Can’t (2015), https://www.nytimes.com/2015/06/23/scie ... blind.html

    [20] R. Urbanczik, Learning by the dendritic prediction of somatic spiking (2014), Neuron. 2014 Feb 5;81(3):521–8.

    [21] Ž. Hećimović, Relativistic effects on satellite navigation (2013), Tehnicki Vjesnik 20(1):195–203

    [22] K. Patowary, Vladimir Lukyanov’s Water Computer (2019), https://www.amusingplanet.com/2019/12/v ... puter.html

    [23] Stanford Encyclopedia of Philosophy, “Thought Experiments” (2019), https://plato.stanford.edu/entries/thought-experiment/

    [24] M. Cobb, Why your brain is not a computer (2020), https://www.theguardian.com/science/202 ... sciousness

    [25] M. A. Cerullo, The Problem with Phi: A Critique of Integrated Information Theory (2015), PLoS Comput Biol. 2015 Sep; 11(9): e1004286. Konrad P. Kording (Ed.)

    [26] Various authors, Retrieved list of scientificamerican.com articles on Panpsychism for illustrative purposes (2021 April 22), https://www.scientificamerican.com/sear ... anpsychism

    [27] D. J. Chalmers, Panpsychism and Panprotopsychism, The Amherst Lecture in Philosophy 8 (2013): 1–35

    [28] M. Bekoff, Animal Consciousness: New Report Puts All Doubts to Sleep (2018), https://www.psychologytoday.com/us/blog ... ubts-sleep

    What do you guys think is happening with LaMDA?

