
Artificial Consciousness Is Impossible


AIkonoklazt


42 minutes ago, studiot said:

Thank you.

When have I ever said otherwise?

The only thing I have said about machines is that they have no business in this thread and that you were misusing the scientific and engineering definitions of a machine.

 

I did offer you a more interesting concept: that of a self-diagnosing construct.

In this case it is not programmed but is constructed to be self-diagnosing.

This is the car tyre I asked you about, and to which you did not reply.

You said I was "misusing" a definition and yet never stated how or why. Back up your assertion.

How is a car tyre a machine? I don't think I'm the one misusing a term. Which definition of a machine are you using and from where?

(Searched this thread for the word "tyre" and I only found my reply and the reply that I just quoted.)


If you really want a discussion you must respect the other side and stop this dodging and diving.

 

I SPECIFICALLY CALLED THE CAR TYRE A CONSTRUCT NOT A MACHINE.

 

If you can't stop trying to put words I did not say into my mouth I will stop here and now.


14 minutes ago, studiot said:

If you really want a discussion you must respect the other side and stop this dodging and diving.

 

I SPECIFICALLY CALLED THE CAR TYRE A CONSTRUCT NOT A MACHINE.

 

If you can't stop trying to put words I did not say into my mouth I will stop here and now.

Stop your hysterics.

How is a car tyre "self diagnosing" (since I can't find your reference) and how does that even fit any definition of intentionality and/or qualia, which my article stated to be the basic requirements for consciousness?

and by the way, again, how the heck am I misusing the term "machine?"

iNow trying to troll. Cute.


45 minutes ago, iNow said:

He’s just another time waster 

That may depend on where one wants to land on the question of AI and consciousness.  I have found his paper quite thoughtful and it is nudging me to review my notions of the popular analogies between human brains and digital processors as we know them.  The Epstein paper he linked also dashed some cold water in my face, especially regarding how little we know about the causal operations of brains.  I want to marinate for a few days on that one.

That said, I am disappointed when anyone uses terms like "hysterics" against anyone.  That's a putdown rooted in misogyny and myths about the psyche, but maybe its roots are being forgotten.  Hope we can move past that.

Talking of self-diagnosing tires seems a little off the topic, but maybe not.  Whatever consciousness refers to, it seems to be something emergent in highly complex and multilayered systems, so that seems like the place to turn the light and try to discern causal efficacy.


1 hour ago, TheVat said:

 

Talking of self-diagnosing tires seems a little off the topic, but maybe not.  Whatever consciousness refers to, it seems to be something emergent in highly complex and multilayered systems, so that seems like the place to turn the light and try to discern causal efficacy.

 

I don't buy into emergentism, especially complexity emergentism which I addressed in my article.

This is what I got off of two quick searches from MS Bing:

Quote
Apple A17 Pro Processor: 19 billion transistors
Brain of fruit fly: 3,016 neurons and 548,000 synapses
GPT-4.5: transistor count not available at this time; its predecessor, GPT-3, has 175 billion parameters
Frontier Supercomputer: 9,472 AMD Epyc 7453 "Trento" 64-core 2 GHz CPUs (606,208 cores) and 37,888 Radeon Instinct MI250X GPUs (8,335,360 cores)

Quote
AMD Epyc 7453: 16,600 million transistors
Radeon Instinct MI250X GPU: 58,200 million transistors

Frontier has about 9,707,648,000 + 48,598,272,000 = 58,305,920,000, or over 58 billion transistors. (Edit: oops, it looks like I severely undercounted this figure, since the CPUs alone in that thing have over 150 trillion transistors, but let's just be pessimistic about computers.)

Let's discount any connections between transistors, don't even design anything, just plop all of them down on a slab substrate or something.

Let's allow connections in the fruit fly brain but not in the computer chips, because we need "margin." Because I didn't even get an answer out of Bing, I went to Perplexity and got this:
 

Quote
To calculate the number of connections between 3,016 neurons and 548,000 synapses, we need to know the average number of synapses per neuron. According to the search results, the brain of a larval fruit fly contains 548,000 connections between a total of 3,016 neurons [2][5]. Therefore, the average number of synapses per neuron in a fruit fly's brain is approximately 182. This means that each neuron in the fruit fly's brain is connected to approximately 182 other neurons through synapses. Using this average, we can estimate that the number of connections between 3016 neurons and 548,000 synapses would be approximately 55,272,512.

Okay. That's about 55 million versus 58 billion with a big margin built in. Why isn't a supercomputer more conscious than a fruit fly?
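As a quick sanity check on the quoted per-neuron average (a sketch assuming the thread's figures of 3,016 neurons and 548,000 synapses are right):

```python
# Check the quoted fruit-fly larva average (neuron/synapse counts assumed from the thread).
neurons = 3_016
synapses = 548_000

avg_per_neuron = synapses / neurons
print(round(avg_per_neuron))  # 182, matching the quoted average
```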

There goes the complexity argument, but what about other varieties of emergentism?

I don't buy those either, and others apparently also don't. This is what someone else has to say (he leads an applied AI team at a robotics company): https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf

There is another discussion from a prominent systems scientist, but it's behind a signup wall: https://iai.tv/articles/the-absurdity-of-emergence-auid-2552?_auid=2020

I think it's hand-waving, and so do they when it comes to how the idea gets abused. If the issue is about system behavior, as Cabrera points out, then what separates it from behaviorism?


Let's redo the completely screwed up math I did earlier...

CPU: 16.6 billion times 9,472 is 157,235,200,000,000 (about 157 trillion)

GPU: 58.2 billion times 37,888 is 2,205,081,600,000,000 (about 2.2 quadrillion)

GPU dwarfs CPU so we'll just forget the CPUs.

Like I said before, ignore all connections between transistors on-chip (and discount everything else that's not on the chips, like boards, memory, storage, interfaces, controllers, etc.). That leaves 2 quadrillion bare transistors in the supercomputer versus 55 million connections in the fruit fly brain (where, again, I'm generously counting connections between all neurons and synapses), plus whatever other bonus handicaps I'm giving. The supercomputer is still multiple orders of magnitude more complex than the brain of a fruit fly, yet the fruit fly is more conscious than the supercomputer... The complexity-emergence argument evidently holds no merit.
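The redone tally above, in a few lines of Python (a sketch using the per-chip transistor counts quoted earlier from Bing; those counts are rough vendor figures, not exact):

```python
# Rough Frontier transistor tally from the per-chip counts quoted earlier in the thread.
cpu_transistors = 16_600_000_000  # AMD Epyc 7453: ~16.6 billion
gpu_transistors = 58_200_000_000  # Radeon Instinct MI250X: ~58.2 billion
num_cpus, num_gpus = 9_472, 37_888

cpu_total = cpu_transistors * num_cpus  # 157,235,200,000,000 (~157 trillion)
gpu_total = gpu_transistors * num_gpus  # 2,205,081,600,000,000 (~2.2 quadrillion)

# The GPU total dwarfs the CPU total by more than an order of magnitude.
print(f"CPUs: {cpu_total:,}  GPUs: {gpu_total:,}")
```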

Don't discount the complexity of a modern superscalar microprocessor... It takes a design team of at least hundreds of people SEVERAL YEARS to churn out one, and that's just the chip design; it doesn't include process development, i.e. the manufacturing-technology side of things.


17 minutes ago, AIkonoklazt said:

the supercomputer is still multiple orders of magnitudes more complex than a brain of a fruit fly, yet the fruit fly is more conscious than the supercomputer... The complexity emergence argument evidently just holds no merit.

You're comparing apples with oranges. Computers don't work the same way as a living brain. From what I remember, brain cells multiply their capacity by vast amounts, by using combinations of connections where a computer uses one connection to do one thing. 

So five brain connections have 120 possible combinations, compared to a computer's five. That's an over-simplification of what I'm saying, but that's the sort of thing the material I read was saying.

In any case, computers are not designed with consciousness as the target. But in fruit flies, consciousness, or the ability to respond to the environment very quickly, is a survival benefit, so their brains evolved towards that ability.
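For what it's worth, the "120 possible combinations" figure for five connections looks like the number of orderings (permutations) of five elements, 5! = 120; that reading is my assumption, not something stated in the post above:

```python
import math

# 5! = 120: the number of distinct orderings of five connections.
# Reading "120 combinations" as permutations is an assumption, not stated in the source.
print(math.factorial(5))  # 120
```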


20 minutes ago, mistermack said:

You're comparing apples with oranges. Computers don't work the same way as a living brain. From what I remember, brain cells multiply their capacity by vast amounts, by using combinations of connections where a computer uses one connection to do one thing. 

So five brain connections have 120 possible combinations, compared to a computer's five. That's an over-simplification of what I'm saying, but that's the sort of thing the material I read was saying.

In any case, computers are not designed with consciousness as the target. But in fruit flies, consciousness, or the ability to respond to the environment very quickly, is a survival benefit, so their brains evolved towards that ability.

The calculations include all connections in a fruit fly brain (each to 182 others via synapses) while counting none in the machine whatsoever.

 


9 minutes ago, AIkonoklazt said:

The calculations include all connections in a fruit fly brain (each to 182 others via synapses) while counting none in the machine whatsoever.

I haven't studied the calculations, but the fundamental difference in how they work is still relevant. The brain could be using sequences of connections in real time, instead of one connection pattern. Until you have a full understanding of what it's doing, you can't compare meaningfully. And as I said, the computer isn't designed to be conscious. 


Just now, mistermack said:

I haven't studied the calculations, but the fundamental difference in how they work is still relevant. The brain could be using sequences of connections in real time, instead of one connection pattern. Until you have a full understanding of what it's doing, you can't compare meaningfully. And as I said, the computer isn't designed to be conscious. 

Yeah, because a computer is designed and a brain isn't.

So would you agree if I say something like "comparing artificial consciousness to natural consciousness would be comparing apples with oranges, so to expect consciousness the way everyone has been talking about from machines would be nonsense?"

Hey, I would agree with that 100%! 😁 It'd be "emulated symptomatic blinky-lights consciousness" instead of "actual consciousness"


1 minute ago, AIkonoklazt said:

Hey, I would agree with that 100%! 😁 It'd be "emulated symptomatic blinky-lights consciousness" instead of "actual consciousness"

Since you still haven't given a short, concise definition in plain English, "actual consciousness" is still a mystery. It's going to be different in every animal anyway. Human consciousness isn't likely to be the same as in a fruit fly larva.


29 minutes ago, mistermack said:

Since you still haven't given a short, concise definition in plain English, "actual consciousness" is still a mystery. It's going to be different in every animal anyway. Human consciousness isn't likely to be the same as in a fruit fly larva.

It's not a definition (I'm not going to give a theory; everyone can use the regular English dictionary definition), but as I've already indicated earlier in the thread, it's a matter of necessary and sufficient conditions for consciousness (intentionality and qualia). Without those you don't have consciousness.

 


14 hours ago, AIkonoklazt said:

Intelligence, as in the term "artificial intelligence," is performative and not attributive; this has been pointed out a lot by experts in AI, yet it is a point of continual confusion. A machine performs tasks that are seemingly intelligent; it is not "being intelligent." I really thought the distinction was clear. I suppose I could throw more rhetoric at it, but I chose not to.

If the topic title was "artificial-consciousness-is-impossible, today", I'd have nothing to say on the subject and completely agree with the above statement; which also agrees with my anthill analogy.

The point that you continue to ignore is that 10,000 years ago artificial intelligence was impossible, and then we invented the loom.

The point is, emergence depends on complexity and an anthill depends on simplicity, until it doesn't.

21 hours ago, studiot said:

Both you, mrmack and I have all queried the difference between intelligence and consciousness, looking for straight answers.

 

Perhaps we should examine it more closely?

I don't pretend to fully understand any of these concepts, but here are some thoughts I consider useful.

Firstly, consider some entity in its surroundings, environment or universe, as in Fig 1.

 

So we have three things. The entity, the environment and the interaction between the two.

Perhaps the entity feels too hot in the sun, so it gets under the tree for shade.

It is tempting to think that the entity must be self-aware to be conscious, and conscious to be intelligent, and that the whole sequence must be nested like Russian dolls, as in the Venn diagram in Fig 2.

But this doesn't hold logical water.

As mrmack says, there are scales of these things.

 

Self-awareness

I am not normally aware of the touching of my feet on the ground, the feel of my clothes or the working of my kidneys.

Yet I can define and describe myself.

Consciousness

Am I conscious, or self-aware, when I am asleep?

Ditto after 10 pints or whiskies.

Intelligent

I leave that up to your consideration.

[Attached image: venn1.jpg]

I think Fig 2 should be a Venn diagram with a triple intersection labelled 'me'.


2 hours ago, mistermack said:

Computers don't work the same way as a living brain.

 

+1

 

1 hour ago, mistermack said:

Since you still haven't given a short, concise definition in plain English, "actual consciousness" is still a mystery. It's going to be different in every animal anyway. Human consciousness isn't likely to be the same as in a fruit fly larva.

Fruit fly consciousness is actually very interesting on account of their behaviour, as you have already observed.

 

12 hours ago, AIkonoklazt said:

How is a car tyre "self diagnosing" (since I can't find your reference) and how does that even fit any definition of intentionality and/or qualia, which my article stated to be the basic requirements for consciousness?

 

I already said that I don't know about US law, but the point is that UK law requires tyres to be fabricated with tell-tale markers at the minimum legal tread depth. Hence the tyre self-diagnoses when it is too worn.

However, the tyre in this respect is not even a machine, let alone living or conscious, so we can take it no further.

 

 

12 hours ago, AIkonoklazt said:

 

and by the way, again, how the heck am I misusing the term "machine?"

 

 

We have been through all that with the discussion about my accidental wedge.

 

12 hours ago, AIkonoklazt said:

Stop your hysterics.

 

This whole thread appears to be one long litany of rejection.

The opening post starts with a hypothesis of rejection, "Artificial Consciousness is Impossible", and carries on from there.

You seem to have rejected pretty well all matters germane to the discussion of this hypothesis, at times quite rudely to others.

Your current score on matters germane appears to be: you nearly 100%, others nearly 0%.

Do you think this likely for any human analysis?


11 hours ago, AIkonoklazt said:

Okay. That's about 55 million versus 58 billion with a big margin built in. Why isn't a supercomputer more conscious than a fruit fly?

I think the stakeholders in complexity emergentism are going with the assumption that the type of complexity matters. I would agree a rigid structure of transistors pushed together like Legos is highly unlikely to be the sort of complexity one might find in an animal connectome. We are very distant from understanding the connectome or its idiosyncrasies, as Epstein points out, so I could agree that invoking its complexity is, at this time in history, hand waving.


8 hours ago, dimreepr said:

If the topic title was "artificial-consciousness-is-impossible, today", I'd have nothing to say on the subject and completely agree with the above statement; which also agrees with my anthill analogy.

The point that you continue to ignore is that 10,000 years ago artificial intelligence was impossible, and then we invented the loom.

The point is, emergence depends on complexity and an anthill depends on simplicity, until it doesn't.

I think Fig 2 should be a Venn diagram with a triple intersection labelled 'me'.

Yeah "we invented the loom" because the loom is designed, while the brain isn't.

I've stressed over and over again that evolution isn't a process of design.

Once you design anything it's game over- The "designing a non-design" contradiction occurs, and volition is denied.

7 hours ago, studiot said:

 

I already said that I don't know about US law, but the point is that UK law requires tyres to be fabricated with tell-tale markers at the minimum legal tread depth. Hence the tyre self-diagnoses when it is too worn.

However, the tyre in this respect is not even a machine, let alone living or conscious, so we can take it no further.

 

We have been through all that with the discussion about my accidental wedge.

 

 

This whole thread appears to be one long litany of rejection.

 

The tyre isn't the one doing the diagnosing. A person would be the one reading and interpreting the marker. You're fooled by the nomenclature.

Your "accidental wedge" doesn't itself contain moving parts, so exactly which definition are you even using? Of course "we've been through this".

https://www.merriam-webster.com/dictionary/machine?src=search-dict-hed

Impossibility is a "rejection." Refutations are "rejections."

If you go through the whole thread, you'll see plenty of instances where people weren't exactly "polite" to me to begin with. Pardon me if you were mixed up in all the jeering and tomato-tossing, but I generally treat others as I'm treated.

 

6 hours ago, TheVat said:

I think the stakeholders on complexity emergentism are going with the assumption that type of complexity matters.  I would agree a rigid structure of transistors pushed together like Legos is highly unlikely to be the sort of complexity one might find in an animal  connectome.  We are very distant from understanding the connectome or its idiosyncrasies, as Epstein points out, so I could agree that invoking its complexity is, at this time in history, hand waving.  

The final numbers of 2 quadrillion versus 55 million were extremely generous, since I was counting unconnected transistors (and only on-processor too, ignoring the entire rest, including the CPUs; only counting GPU transistors at that) while at the same time counting every connection in a fruit fly brain, averaging 182 going out from _each_ neuron to the others.

Microprocessor circuit connections are also very complex. We'd have to mark down exactly what kinds of connection "complexities" we're talking about if we talk complexity; otherwise it will always be hand-waving, because the criteria aren't firm. Complexity may just be a moot concept here.


On 9/16/2023 at 12:11 PM, AIkonoklazt said:
  1. Actually, scientific studies support the presence of underdetermined factors themselves (the neuronal stimulation experiment on fly neuronal groups). The progression of science itself (this is a big one) demonstrates the underdetermination of scientific theories as a whole (the passage from SEP re: discovery of planets in our solar system). My argument is also evidential.
  2. The impossibility, as demonstrated, is multifaceted. A) The problem isn't a scientific problem but an engineering as well as an epistemic problem (i.e. no complete model), as previously mentioned. B) There's also the logical contradiction mentioned. The act of design itself creates the issue. A million years from now, things will still have to be designed, and as soon as you design anything, volition is denied from it. (Of course you can gather up living animals and arrange them into a "computer," but any consciousness there wouldn't be artificial consciousness. Why not just cut out animal brains and make cyborgs? It's cheaper and simpler that way anyhow, if people are so desperate for those kinds of things... I seriously hope not.) C) The nature of computation forbids engagement with meaning, as demonstrated in the Symbol Manipulator thought experiment (which is derived from Searle's Chinese Room Argument; instead of refuting behaviorism/computationalism like the CRA did, it now shows the divorce of machine activity from meaning) and the pseudocode programming example.

Is the argument air-tight? I wouldn't know unless people show me otherwise. This is why I've posted the article. This is why I've been trying to set up debates with experts. (One journalist agreed to help a while ago; I haven't heard back since. Usually people are really busy. I make the time because this has become my personal mission, especially since court cases are starting to crop up as I expected. The UN agency UNESCO banned AI personhood in its AI ethics guidelines, but who knows to what extent the member countries will actually follow it.)

I thought I did think up a loophole myself a few months back, but after some discussion with a neuroscience research professor (he's a reviewer in an academic journal) I realized that the possible counterargument just collapses into yet another functionalist argument.

Thanks, this is useful for further discussion, it rules out categories of counter arguments that one could think of. Do we agree on the following statement?
"Under the assumption that we agree that Artificial Consciousness is a logical contradiction given the definitions in your article then any introduction of counter arguments from the natural sciences is pointless; such arguments do not apply."
 


16 minutes ago, Ghideon said:

Thanks, this is useful for further discussion, it rules out categories of counter arguments that one could think of. Do we agree on the following statement?
"Under the assumption that we agree that Artificial Consciousness is a logical contradiction given the definitions in your article then any introduction of counter arguments from the natural sciences is pointless; such arguments do not apply."
 

 

If you want to stress the scientific point of view, try this:

"Scientific methods are only useful in confirming the theories behind supports and refutes. The actual question of consciousness isn't amenable to science and its methods because phenomenal consciousness is off-limits to scientific investigation."

When I use scientific underdetermination, that's a philosophical argument, based on scientific evidence yes, but still a philosophical argument.

"What it is like to see the color red" is off-limits to empirical science. Dennett and his followers may keep disagreeing and keep holding up the "intuition pump" card (which I also addressed in my article), but I think the Knowledge Argument holds. The counterarguments Dennett and others have against the Monochrome Room thought experiment are silly (like the one with a differently colored banana... if anyone finds that entire argument again, give it to me and I'll explain why it's silly): https://plato.stanford.edu/entries/qualia-knowledge/


17 hours ago, AIkonoklazt said:

I've stressed over and over again that evolution isn't a process of design.

So what? How does that argue your point?

 

17 hours ago, AIkonoklazt said:

Once you design anything it's game over- The "designing a non-design" contradiction occurs, and volition is denied.

Somewhat like your fundamental argument and your subsequent attempts to support it; contradictions occur and questions are ignored... 🤔


5 hours ago, dimreepr said:

So what? How does that argue your point?

 

Somewhat like your fundamental argument and your subsequent attempts to support it; contradictions occur and questions are ignored... 🤔

 

Design takes away volition. We're not products of design; the loom that you raised as an example was.

TheVat as well as other people have raised the same issue separately, which I had to answer over and over:

"Evolutionary" or "genetic" algorithms don't escape design and determination. You don't "design volition into something," you automatically take volition away when designing. People have this idea in their head that this "artificial consciousness" is just a thing that's passively "there"- No, it has to be designed and made. This process is a lock-in. There's no "instruction without instruction." Animals derive behavior from an entirely different avenue from machines- Their behavior is influenced and not a result of any design.

Again, I don't understand people's stubborn refusal to deal with the article. Several professors, two of them very senior (Professor Emeritus and Distinguished Professor), took the time out of their extremely busy schedules to read every last word so why aren't people here able to? This is the section:

Quote

Volition Rooms — Machines can only appear to possess intrinsic impetus

The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it. There’s no such thing as a “design without a design” or “programming without programming.” A machine’s operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as “a program/machine evolved,” (Who designed the evolutionary algorithm?) “no one knows how the resulting program in the black box came about,” (Who programmed the program which produced the resultant code?) “The neural net doesn’t have a program,” (Who wrote the neural net’s algorithm?) “The machine learned and adapted,” (It doesn’t “learn…” Who determined how it would adapt?) and “There’s self-modifying code” (What determines the behavior of this so-called “self-modification,” because it isn’t “self.”) There’s no hiding or escaping from what ultimately produces the behaviors- The programmers’ programming.

 

(The forum software won't let me do a separate reply and would just tack this onto the end of my last reply but I'm making a note here that I will be heading out of the country for two weeks very soon and won't be looking at this forum while on the trip. I'll address things sometime after I come back)


On 9/19/2023 at 6:33 PM, AIkonoklazt said:

 

Design takes away volition. We're not products of design; the loom that you raised as an example was.

TheVat as well as other people have raised the same issue separately, which I had to answer over and over:

"Evolutionary" or "genetic" algorithms don't escape design and determination. You don't "design volition into something," you automatically take volition away when designing. People have this idea in their head that this "artificial consciousness" is just a thing that's passively "there"- No, it has to be designed and made. This process is a lock-in. There's no "instruction without instruction." Animals derive behavior from an entirely different avenue from machines- Their behavior is influenced and not a result of any design.

Again, I don't understand people's stubborn refusal to deal with the article. Several professors, two of them very senior (Professor Emeritus and Distinguished Professor), took the time out of their extremely busy schedules to read every last word so why aren't people here able to? This is the section:

 

(The forum software won't let me do a separate reply and would just tack this onto the end of my last reply but I'm making a note here that I will be heading out of the country for two weeks very soon and won't be looking at this forum while on the trip. I'll address things sometime after I come back)

Two things:

1. The question I asked is: how does this support your topic thesis?

2. Everything that works is by design, from the ant to the human. For instance, the anthill is designed by ants unconsciously; why is that any different to a loom or a planet?


  • 2 weeks later...

 

Phlegm theories 

Article in the Atlantic a few years back on the explanatory holes in many theories of consciousness.  Popped back on my radar...an excerpt:

Quote

According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.

The phlegm theory has more problems than just a few factual errors. After all, suppose you had a beaker of phlegm and injected it into a person. What exactly is the mechanism that leads to a lazy personality? The proposal resonates seductively with our intuitions and biases, but it doesn’t explain anything.

In the modern age we can chuckle over medieval naiveté, but we often suffer from similar conceptual confusions. We have our share of phlegm theories, which flatter our intuitions while explaining nothing. They’re compelling, they often convince, but at a deeper level they’re empty.

One corner of science where phlegm theories proliferate is the cognitive neuroscience of consciousness. The brain is a machine that processes information, yet somehow we also have a conscious experience of at least some of that information. How is that possible? What is subjective experience? It’s one of the most important questions in science, possibly the most important, the deepest way of asking: What are we? Yet many of the current proposals, even some that are deep and subtle, are phlegm theories....

https://www.theatlantic.com/science/archive/2016/03/phlegm-theories-of-consciousness/472812/

If you want to read it and you get paywall blocked, LMK and I'll put up a screenshot link.  


  • 2 weeks later...
On 9/22/2023 at 5:22 AM, dimreepr said:

Two things:

1. The question I asked is: how does this support your topic thesis?

2. Everything that works is by design, from the ant to the human. For instance, the anthill is designed by ants unconsciously; why is that any different to a loom or a planet?

It supports my thesis because it presents a contradiction- A design without a design.

The anthill analogy is a really bad analogy. Let me requote it here:

Quote

An anthill is intelligent, but it can't be conscious because it's a house; is that about the size of it?

Which throws up an interesting question, which part of the human body is considered the house (mobile anthill)?

How is an anthill "intelligent"? I think you're confusing the anthill with the ants that are in it.

On 10/1/2023 at 2:33 PM, TheVat said:

 

Phlegm theories 

Article in the Atlantic a few years back on the explanatory holes in many theories of consciousness.  Popped back on my radar...an excerpt:

https://www.theatlantic.com/science/archive/2016/03/phlegm-theories-of-consciousness/472812/

If you want to read it and you get paywall blocked, LMK and I'll put up a screenshot link.  

(I could still access the article directly, probably because I'm still under a quota. Maybe the next time I click it I'd be paywalled. I've saved the article to a bookmark app in the meantime)

The recognition of being surrounded by bad analogies is a start. As you've seen in the earlier article I shared, information processing itself is a bad analogy. With qualia, we aren't dealing with physical information: the first-person phenomena of conscious experience aren't in the form of physical information. The "color" yellow isn't anywhere in the physical world, because that "color" is a conscious experience rather than a physical property, and thus not a piece of physical information: https://www.extremetech.com/archive/49028-color-is-subjective. This is covered by the Knowledge Argument (reference only; it's long, far longer than my article, so I don't expect anyone here to digest it): https://plato.stanford.edu/entries/qualia-knowledge/
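To make the point concrete, here's a minimal sketch of my own (not from either article; the constants and function name are illustrative assumptions): to a machine, "yellow" is exhausted by numbers such as RGB components. That is physical information, with no experience attached to it anywhere in the process.

```python
# A machine's "yellow" is nothing but numbers -- physical information
# with no accompanying experience. Illustrative sketch only.

YELLOW_RGB = (255, 255, 0)   # how a display encodes "yellow"

def is_yellowish(rgb):
    """Classify a pixel as 'yellow' by pure arithmetic on its components."""
    r, g, b = rgb
    return r > 200 and g > 200 and b < 100

print(is_yellowish(YELLOW_RGB))  # True -- yet nothing here "sees" yellow
```

The program sorts byte triples; at no point does anything in it have the first-person experience the word "yellow" names for us.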

I am currently working on some writings that attempt to explain some of the conceptual conflations that go on. Just some rough notes:

Quote

Perceived vs inherent

There is great confusion in the field of AI between the attributive and performant meanings of terms such as intelligence.

 

Possible indication / possible result

 

Performant intelligence is merely a possible indication and a possible result OF attributive intelligence. An equation between the two should NOT be drawn, yet it is drawn all the time. The word "intelligence" in "artificial intelligence" is constantly being abused.

 

There’s no such thing as "performant consciousness" (behaviorism and so-called "tests" for AI consciousness); consciousness isn't something that is "done" (consciousness-as-act). Consciousness is akin to life, in that life is a state of being (i.e., "being alive"). Something that is alive and something that isn't can have the exact same chemical/physical composition and be subject to the same physical forces, yet one is alive while the other isn't; consciousness is in a similar situation. (Note that this parallel is for illustrative purposes only, to convey its stature.)

 

We don’t want to proliferate new senses of words that only a limited number of people know and most people misinterpret, leading to further obfuscation of concepts.

 

Machine learning was one of those terms, and perhaps so was artificial intelligence. (The term "artificial intelligence" is out of the bag; it is far too late to stop its usage, but there needs to be an educational campaign surrounding it. Like "machine learning," "artificial intelligence" is a technical term that has been absorbed into popular vernacular. Apparently certain professionals are also confused by it.)

 

This isn’t arguing semantics; I’m pointing out an endemic misappropriation of terms that leads to widespread conflation and misrepresentation of concepts.

 

What’s called "attention" in LLMs isn’t actual attention but simulated attention
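As a rough illustration of why I call it simulated: the "attention" mechanism in transformers is, at bottom, weighted averaging over vectors. A minimal sketch of my own (scaled dot-product form, not any particular library's implementation; all names are illustrative):

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1 -- pure arithmetic."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product 'attention': score, normalize, average.
    Nothing here attends to anything; it is weighted averaging."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

Every step is a dot product, an exponential, or a sum; calling the result "attention" imports a mental notion the math doesn't contain.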

 

There's nothing "neural" in "neural networks" [facepalm]
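The same goes for the "neuron": an artificial "neuron" is a weighted sum pushed through a squashing function. A sketch of my own, with a logistic activation chosen purely for illustration:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """An artificial 'neuron' is a weighted sum plus a bias, passed through
    a squashing function -- arithmetic with no biology in it."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic activation
```

No membranes, no neurotransmitters, no spikes: the biological vocabulary is a loose metaphor for a multiply-add.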

 

Everything I said about learning applies to intelligence and everything surrounding it, including attention: theoretical models, never actuality. Again, "all models are wrong, but some are useful."

 

Anthropomorphic designations / references persist in the field of AI partly because of systematic terminological abuses within it

 

Machines operate in a very limited dimension that doesn’t include anything like concepts or percepts. We are tempted to misattribute their purely mechanistic workings to our own operational realm of understanding. (Remember that old Nova episode where a stick figure in a 2D world suddenly has to confront a 3D world? Machines trying to handle "meaning" are like that.)

 

Mentality (and thus meaning) is another dimension altogether from the very limited dimension that machine code / machine ops exist in. Machines are completely isolated from the world in that manner (ref. the pseudocode example I raised in my article).
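A sketch in the spirit of that point (my own, not the actual example from my article; the table and strings are illustrative): a program can emit the "right" answer by matching byte sequences against a rule table, while none of the symbols mean anything to it.

```python
# "Answering" by pure symbol lookup. The program matches byte sequences;
# it has no concept of bananas or heat. Illustrative sketch only.

RULE_TABLE = {
    "what color is a banana?": "yellow",
    "is fire hot?": "yes",
}

def answer(symbols):
    """Return whatever string the rule table pairs with the input string."""
    return RULE_TABLE.get(symbols.lower(), "no rule matches")

print(answer("What color is a banana?"))  # "yellow", produced without understanding
```

The output is indistinguishable from an understood answer, yet the machine's operations never leave the dimension of uninterpreted symbols.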

 

 
