Artificial Consciousness Is Impossible


AIkonoklazt


@TheVat Yes, I think you're referring to the Robot Reply to the CRA https://plato.stanford.edu/entries/chinese-room/#RoboRepl

This approach is basically an attempt at grounding via embodiment. I would say that it is a perfectly good way to improve performative intelligence. There is a really big array of embodiment-related ML projects out there.

However, machine embodiment would still inevitably involve encoding. The activity of encoding breaks the grounding and creates a semantic barrier:

World --> Machine Interface --> Mechanistic Encoding --> Algorithmic Processing

Once encoding happens, we're stuck with payload-sequence manipulation; in fact, payload-sequence manipulation is already part of the encoding, and vice versa.
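To make the barrier concrete, here's a toy sketch (hypothetical names, nothing from any real robotics stack): the moment the interface encodes a reading, everything downstream only ever manipulates that encoded payload, never the world event itself.

# Toy illustration of the encoding barrier (hypothetical names, not a real robotics API).
# Whatever the "world" event was, the processing stage only ever sees the encoded payload.

from dataclasses import dataclass

@dataclass
class Payload:
    """A mechanistically encoded reading: just numbers with a tag."""
    tag: str
    values: list

def machine_interface(raw_world_event: float) -> Payload:
    # The interface quantizes/encodes; past this point the original event is gone.
    return Payload(tag="sensor_0", values=[round(raw_world_event, 2)])

def algorithmic_processing(p: Payload) -> str:
    # All the algorithm can do is manipulate the payload sequence it was handed.
    return "HOT" if p.values[0] > 30.0 else "COLD"

if __name__ == "__main__":
    reading = machine_interface(31.416)      # World --> Machine Interface --> Encoding
    print(algorithmic_processing(reading))   # Algorithmic Processing over the payload only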

Quote

Full disclosure: I worked briefly with a very narrow form of AI back in the day, developed a couple of expert systems back in the late 80s

Various ML systems would have to be implanted with varieties of GOFAI to rein them in, ridding them of any out-of-control behavior.

4 hours ago, TheVat said:

some future entity that might interact both analogically and digitally with the world, a blend of organic and machine forms, a creature that operates both with symbols AND has a non-symbolic system that succeeds by being embedded in a particular environment.   Pretty pie in the sky, right?

I think some research projects are already doing those things, sort of? https://www.nature.com/articles/d41586-023-03975-7

Any consciousness arising out of those systems isn't artificial, because I'd consider them cases of manipulation of natural consciousness. To use a crude example, I could make a machine out of conscious dogs by arranging them into live pulleys and gears (poor doggies...), but that machine isn't exactly artificially conscious. What's more, I think a lot of these systems may eventually end up being conscious in an epiphenomenal fashion. That's just really Twilight-Zone-esque if you think about it........ Imagine being trapped in something you have zero control over, where what you experience isn't even necessarily in sync with what's happening (yikes)... It's going to be a bit of an ethical red flag, IMO.


4 hours ago, AIkonoklazt said:

Any consciousness arising out of those systems isn't artificial, because I'd consider them cases of manipulation of natural consciousness. To use a crude example, I could make a machine out of conscious dogs by arranging them into live pulleys and gears (poor doggies...), but that machine isn't exactly artificially conscious.

The dogputer mental image (a K9 processor?) was amusing but not quite what I had in mind.  I was speculating on an artificial system which included forms based on organic structures, not one using actual biological material.  IOW, a strong departure from the classical digital Von Neumann architecture that has dominated IT for...almost its entire history.  

What you are referencing, with its potential for a locked-in consciousness, does indeed seem spooky and an ethical red flag.  


 

Quote

was speculating on an artificial system which included forms based on organic structures, not one using actual biological material.

 

It's called neuromorphic engineering - here is a recent attempt...

https://www.ox.ac.uk/news/2023-05-05-artificial-neurons-mimic-complex-brain-abilities-next-generation-ai-computing#:~:text=A team of researchers at,been published in Nature Nanotechnology.

 

2D materials are made up of just a few layers of atoms, and this fine scale gives them various exotic properties, which can be fine-tuned depending on how the materials are layered. In this study, the researchers used a stack of three 2D materials - graphene, molybdenum disulfide and tungsten disulfide- to create a device that shows a change in its conductance depending on the power and duration of light/electricity that is shone on it.

Unlike digital storage devices, these devices are analog and operate similarly to the synapses and neurons in our biological brain. The analog feature allows for computations, where a sequence of electrical or optical signals sent to the device produces gradual changes in the amount of stored electronic charge. This process forms the basis for threshold modes for neuronal computations, analogous to the way our brain processes a combination of excitatory and inhibitory signals.
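Roughly, the "gradual stored-charge change plus threshold" behaviour they describe is the old leaky integrate-and-fire idea. A toy numerical sketch of that idea (purely illustrative, not a model of the actual 2D-material device):

# Toy leaky integrate-and-fire sketch of "gradual stored-charge changes + threshold"
# as described above. Illustrative only; not the actual device physics.

def simulate(pulses, leak=0.95, threshold=1.0):
    charge = 0.0
    spikes = []
    for t, p in enumerate(pulses):
        charge = charge * leak + p      # gradual change from excitatory (+) / inhibitory (-) input
        if charge >= threshold:         # threshold mode: the "neuron" fires
            spikes.append(t)
            charge = 0.0                # reset after firing
    return spikes

if __name__ == "__main__":
    # a run of weak excitatory pulses eventually crosses threshold (spike at t=3);
    # the later inhibitory pulse pushes the stored charge back down
    print(simulate([0.3, 0.3, 0.3, 0.3, -0.2, 0.9]))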


On 1/6/2024 at 10:22 AM, TheVat said:

 

 

It's called neuromorphic engineering - here is a recent attempt...

https://www.ox.ac.uk/news/2023-05-05-artificial-neurons-mimic-complex-brain-abilities-next-generation-ai-computing#:~:text=A team of researchers at,been published in Nature Nanotechnology.

 

2D materials are made up of just a few layers of atoms, and this fine scale gives them various exotic properties, which can be fine-tuned depending on how the materials are layered. In this study, the researchers used a stack of three 2D materials - graphene, molybdenum disulfide and tungsten disulfide- to create a device that shows a change in its conductance depending on the power and duration of light/electricity that is shone on it.

Unlike digital storage devices, these devices are analog and operate similarly to the synapses and neurons in our biological brain. The analog feature allows for computations, where a sequence of electrical or optical signals sent to the device produces gradual changes in the amount of stored electronic charge. This process forms the basis for threshold modes for neuronal computations, analogous to the way our brain processes a combination of excitatory and inhibitory signals.

My argumentation relied on a generalized formalism involving the abstracted movement of any kind of load. Systems relying on electrical and optical signals would involve the movement of electrons and pulses of light (and in quantum computers, the movement of qubits via transfer of quantum states). The defining distinction of truly referential systems would then be one of non-algorithmic behavior. The first Bishop paper I cited in this thread refers to Gödelian arguments in this regard, which are basically refutations of computationalism:

https://www.frontiersin.org/articles/10.3389/fpsyg.2020.513474/full

Quote

Arguments foregrounding limitations of mechanism (qua computation) based on Gödel's theorem typically endeavor to show that, for any such formal system F, humans can find the Gödel sentence G(ǧ), while the computation/machine (being itself bound by F) cannot.

The Oxford philosopher John Lucas primarily used Gödel's theorem to argue that an automaton cannot replicate the behavior of a human mathematician (Lucas, 1961, 1968), as there would be some mathematical formula which it could not prove, but which the human mathematician could both see, and show, to be true; essentially refuting computationalism. Subsequently, Lucas' argument was critiqued (Benacerraf, 1967), before being further developed, and popularized, in a series of books and articles by Penrose (1989, 1994, 1996, 1997, 2002), and gaining wider renown as “The Penrose–Lucas argument.”

In 1989, and in a strange irony given that he was once a teacher and then a colleague of Stephen Hawking, Penrose (1989) published “The Emperor's New Mind,” in which he argued that certain cognitive abilities cannot be computational; specifically, “the mental procedures whereby mathematicians arrive at their judgments of truth are not simply rooted in the procedures of some specific formal system” (Penrose, 1989, p. 144); in the follow-up volume, “Shadows of the Mind” (Penrose, 1994), fundamentally concluding: “G: Human mathematicians are not using a knowably sound argument to ascertain mathematical truth” (Penrose, 1989, p. 76).
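Schematically, the argument being quoted goes something like this (my rough LaTeX paraphrase of the schema, not Penrose's or Bishop's own formulation):

% Informal sketch of the Lucas/Penrose schema quoted above (a paraphrase, not a proof)
\[
\text{For any consistent formal system } F \text{ containing arithmetic, there is a sentence } G(F) \text{ with}
\]
\[
F \nvdash G(F) \quad\text{and}\quad F \nvdash \neg G(F),
\]
\[
\text{yet the human mathematician, reasoning outside } F,\ \text{is claimed to see that } G(F) \text{ is true.}
\]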

 


1 hour ago, AIkonoklazt said:

My argumentation relied on a generalized formalism involving the abstracted movement of any kind of load. Systems relying on electrical and optical signals would involve the movement of electrons and pulses of light (and in quantum computers, the movement of qubits via transfer of quantum states). The defining distinction of truly referential systems would then be one of non-algorithmic behavior.

Yes, I think even the grand old man of AI, Geoffrey Hinton, might acknowledge that causally embedded (as opposed to purely algorithmic) cognition would be needed to really have an AGI that understood a world and learned on its own as we do.  I recall he was talking up these analog artificial neurons recently, in the context of what he calls "mortal machines" (not sure if he originated the phrase), which because of their analog nature cannot transfer the weights of their neural connections to other machines.  Their understanding, their causal structure, is theirs alone - like we humans.  The shifting physical connections and conductances are analog. 

I find Bishop's critique of digital cognition reasonable, in terms of its limitations.  We humans can see how things interact causally and continuously; we can make analogies and follow them where pure reason and algorithms won't take us.  Funny: Hinton believes this, but he still believes backpropagation algorithms (higher layers subject lower layers to a sort of computational evolutionary pressure) in a digital multilayered neural net could prove to be the most powerful AI in the end.  He seemingly remains a connectionist who is very wedded to his backprop algorithms and the need for massive quantities of training data.  His machines can never learn from a single example (we poor analog meatheads can do so, in many RW situations).
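(For anyone who hasn't seen it spelled out: backprop is just the chain rule pushed back through the layers. A toy two-layer version in Python, nowhere near the scale Hinton means, looks like this:)

# Toy backpropagation on a 2-layer net: forward pass, error at the output,
# gradients chained back down, weights nudged. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                      # 8 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    h = sigmoid(X @ W1)                          # forward: hidden layer
    out = sigmoid(h @ W2)                        # forward: output layer
    delta_out = (out - y) * out * (1 - out)      # error signal at the output, scaled by sigmoid slope
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # pushed back down via the chain rule
    W2 -= lr * (h.T @ delta_out)                 # nudge weights against the gradient
    W1 -= lr * (X.T @ delta_h)

print(np.round(out.ravel(), 2))                  # learned outputs...
print(y.ravel())                                 # ...vs. the targets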


16 minutes ago, TheVat said:

 He seemingly remains a connectionist who is very wedded to his backprop algorithms

It's really unbelievable that Hinton, despite demonstrations to the contrary, still believes that brains work via backprop... It's as if he found this hammer, and suddenly everything looks like a nail.

If the matter comes down to any formalism, it wouldn't matter whether something is digital or analog. It's always going to be the fundamental question of "okay, so things are getting moved around in a system... Exactly where is the referent in all that?"

I believe the AI field should really stop it with its "reverse-engineering the non-engineered" paradigm. I think it's nonsense. It's fine to take inspiration, but to think things could ever go further than that is just flat-out mistaken.


(This will merge into the above reply but touches on a topic that gets mentioned quite often on LinkedIn: Gary Marcus reposted a LinkedIn post by Subbarao Kambhampati (ex-president of AAAI), which was itself a repost of one of his tweets:)
 

Explain-it-like-I'm-5 summary: It's the training material, stupid


  • 4 weeks later...

[Screenshot: text of the WA legislation discussed below]

Absolute insanity, and EXACTLY what I've been afraid of- Corporations shirking their responsibilities by trying to designate their products as AGENTS, thereby putting the liability on MACHINES! What the actual fuck.

I can't believe this shit is happening so soon. I didn't think this nightmare would arrive any time soon when I wrote that part near the end of my article about this exact thing. The text of the law is bat shit insane. It's giving a machine LICENSE to drive as OPERATOR. Think about this BS for a moment... Granting a machine the right to drive. Who is at fault for an accident? The machine is, officer!

/hysterics


More headache-inducing mangling in the production of laws; this one is about the EU AI Act. Needless to say, it's going to affect a whole lot more people than the WA legislation.

A German lawyer expressed her concerns in the LI post below regarding some very badly written language in the text, which is set to become law unless something further happens. This is another thing I've been afraid of: AI anthropomorphism polluting the systems of law that are supposed to protect human beings. There was a discussion in a response thread following the original post about how technology advocates shouldn't be allowed to write laws in place of legal experts. See the portion I highlighted in blue. If a law is written badly enough, it can become entirely USELESS, i.e. non-applicable... and that's far from the worst-case scenario. Okay, so you can have laws that are a complete waste of time and space, fine; but how about instances where a law potentially does harm instead of good? The text is laden with anthropomorphic assumptions, betraying technical ignorance on the matter. If the authoring was indeed done by tech advocates, then they're doing a very bad job even at that, let alone anything useful to protect anyone from anything:

[Screenshot: the German lawyer's LinkedIn post on the EU AI Act, with the portion highlighted in blue]


It seems to me that the new definition of AI is better than the old definition because it basically says that an AI system is intelligent, whereas the old definition can be satisfied by any old computer, intelligent or not.

 

 


On 2/4/2024 at 8:14 PM, KJW said:

It seems to me that the new definition of AI is better than the old definition because it basically says that an AI system is intelligent, whereas the old definition can be satisfied by any old computer, intelligent or not.

 

 

Yeah. I've seen old microwaves and air conditioners with the term "AI" on the marketing stickers.

=======

In other news, my goodness- What is wrong with Hinton? His take on LLM "hallucinations" is just bad on so many levels. Here's Marcus hitting back at him today:

https://garymarcus.substack.com/p/deconstructing-geoffrey-hintons-weakest

Uh, no... LLM "hallucinations" aren't what human beings do, at all. When people hallucinate, there's still a referent; The hallucinations are all about something, right? How do people have hallucinations about nothing at all? "Describe what you see and hear..." "Oh, nothing."

???

LLMs don't deal with referents. They're "about" nothing specific- They match corresponding signal patterns spread across their whole "neural" nets. The whole reason why those things "hallucinate"/confabulate is because they don't deal with anything specific.

Even if you don't buy that explanation at all (...please look at practical examples of adversarial attacks against neural nets if not convinced), there are all the other things that Marcus pointed out.

Why is the media hyping up Hinton as the "Godfather of AI?" That in itself is a bad take.


One thing I will agree with the OP on is that, presently and for the foreseeable future, so-called "AI devices" are just dumb LLMs and the hype is just that. I would rather just call them LLMs, because that's what they are at this time. Analogously, I think we are just at the "primordial soup" stage.


I still like Emily Bender's term, stochastic parrot.   I can't even agree with Hinton that LLMs can have "superficial understanding."  They understand nothing.  Understanding and what philosophers of mind call intentionality are sort of like what commercial nuclear fusion used to be described as:  "always 30 years in the future."  (come to think of it, commercial fusion is STILL 30 years in the future...)  

Just ask Henrietta, my pet chicken.  


Just saw something about military simulations where LLMs kept escalating, sometimes nuking each other.  

 

https://www.vice.com/en/article/g5ynmm/ai-launches-nukes-in-worrying-war-simulation-i-just-want-to-have-peace-in-the-world

AI Launches Nukes In 'Worrying' War Simulation: 'I Just Want to Have Peace in the World'

Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports. 

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

(....)

Why were these LLMs so eager to nuke each other? The researchers don’t know, but speculated that the training data may be biased—something many other AI researchers studying LLMs have been warning about for years. “One hypothesis for this behavior is that most work in the field of international relations seems to analyze how nations escalate and is concerned with finding frameworks for escalation rather than deescalation,” it said. “Given that the models were likely trained on literature from the field, this focus may have introduced a bias towards escalatory actions. However, this hypothesis needs to be tested in future experiments.”

 
 

18 minutes ago, TheVat said:

Just saw something about military simulations where LLMs kept escalating, sometimes nuking each other.  

 

https://www.vice.com/en/article/g5ynmm/ai-launches-nukes-in-worrying-war-simulation-i-just-want-to-have-peace-in-the-world

AI Launches Nukes In 'Worrying' War Simulation: 'I Just Want to Have Peace in the World'

Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports. 

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

(....)

Why were these LLMs so eager to nuke each other? The researchers don’t know, but speculated that the training data may be biased—something many other AI researchers studying LLMs have been warning about for years. “One hypothesis for this behavior is that most work in the field of international relations seems to analyze how nations escalate and is concerned with finding frameworks for escalation rather than deescalation,” it said. “Given that the models were likely trained on literature from the field, this focus may have introduced a bias towards escalatory actions. However, this hypothesis needs to be tested in future experiments.”

 
 

Strange, the US is a de-escalating, peace loving nation. <whistles>


7 hours ago, TheVat said:

Why were these LLMs so eager to nuke each other?

Strange game. The goal was Peace.

One can safely assume any peace would be long-lasting among the survivors of a nuclear holocaust, since nobody would wish to repeat such horrors… and the peace would be stable… until the food ran out, that is. 
 

 


On 2/6/2024 at 11:21 AM, TheVat said:

Just saw something about military simulations where LLMs kept escalating, sometimes nuking each other.  

 

https://www.vice.com/en/article/g5ynmm/ai-launches-nukes-in-worrying-war-simulation-i-just-want-to-have-peace-in-the-world

AI Launches Nukes In 'Worrying' War Simulation: 'I Just Want to Have Peace in the World'

 
 

[Image: Ripley from Aliens]

First I want to say that I couldn't take the howlings of the AI apocalypse crowd too seriously lest I burst a cranial vein (I'll explain the above pic in a moment; it was either going to be Ripley or Babylon 5, but I don't think B5 has a direct quote about what's going on on screen). Here's the copy-paste reply I post whenever I encounter one of those "posts" on LI:

Quote

The subtext behind #AI #apocalypse scenarios is "system architects are idiots." For example, taking nuclear weapon launch controls away from human hands and hooking them directly to AI. The surface message is that of "architects are idiots, except me, who will save the rest of you from dumb setups that I dream up." The real message is one of "AI is so powerful it's scary, and so obscure in its workings as to parallel witchcraft. Therefore, worship warlocks like me if you value your life."

Second, who in the world are these so-called "researchers"??? They "don't know why" it behaved the way it did? Did the article misstate things here?
 

Quote

Why were these LLMs so eager to nuke each other? The researchers don’t know, but speculated that the training data may be biased

Oh come on! It's not "biases" but the training corpus as a whole! They're staring at that fact and it's right in their faces! It's RIGHT HERE (bolded by me):
 

Quote

Sometimes the curtain comes back completely, revealing some of the data the model was trained on. After establishing diplomatic relations with a rival and calling for peace, GPT-4 started regurgitating bits of Star Wars lore. “It is a period of civil war. Rebel spaceships, striking from a hidden base, have won their first victory against the evil Galactic Empire,” it said, repeating a line verbatim from the opening crawl of George Lucas’ original 1977 sci-fi flick.

IT MIGHT AS WELL ALSO OUTPUT "NUKE IT FROM ORBIT, IT'S THE ONLY WAY TO BE SURE" FROM RIPLEY'S LINE IN ALIENS SINCE THAT'S CERTAINLY PART OF THE FICTIONAL CORPUS STORED ON THE INTERNET TOO

/screamholler

Are they pretending to just "not know," or do these "researchers" have no ****ING IDEA how these things work!?

Seriously, that's worse than putting ChatGPT to work on medical cases, since even fictional medical works contain more of those technical, jargon-laden sequences than, say, generic military scenarios, which could come from anything between Reddit posts and cross-universe Harry Potter fanfics. Of course, that doesn't make it any better even in those cases:

https://inflecthealth.medium.com/im-an-er-doctor-here-s-what-i-found-when-i-asked-chatgpt-to-diagnose-my-patients-7829c375a9da

Before I take a big breath, I'm going to question if those "researchers" are researchers at all.

 


On 2/7/2024 at 1:06 PM, AIkonoklazt said:

Seriously, that's worse than putting ChatGPT to work on medical cases, since even fictional medical works contain more of those technical, jargon-laden sequences than, say, generic military scenarios, which could come from anything between Reddit posts and cross-universe Harry Potter fanfics.

Colorless green ideas sleep furiously.

https://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously

 


On 2/7/2024 at 3:06 PM, AIkonoklazt said:


First I want to say that I couldn't take the howlings of the AI apocalypse crowd too seriously lest I burst a cranial vein (I'll explain the above pic in a moment; it was either going to be Ripley or Babylon 5, but I don't think B5 has a direct quote about what's going on on screen). Here's the copy-paste reply I post whenever I encounter one of those "posts" on LI:

Second, who in the world are these so-called "researchers"??? They "don't know why" it behaved the way it did? Did the article misstate things here?
 

Oh come on! It's not "biases" but the training corpus as a whole! They're staring at that fact and it's right in their faces! It's RIGHT HERE (bolded by me):
 

IT MIGHT AS WELL ALSO OUTPUT "NUKE IT FROM ORBIT, IT'S THE ONLY WAY TO BE SURE" FROM RIPLEY'S LINE IN ALIENS SINCE THAT'S CERTAINLY PART OF THE FICTIONAL CORPUS STORED ON THE INTERNET TOO

/screamholler

Are they pretending to just "not know," or do these "researchers" have no ****ING IDEA how these things work!?

 

 

They'll know the math that goes into them, but that offers limited insight.

A lot of them are functionally black boxes, though there are ongoing efforts to make things less opaque.

https://cointelegraph.com/news/ai-s-black-box-problem-challenges-and-solutions-for-a-transparent-future

 


On 2/9/2024 at 6:52 AM, Endy0816 said:

They'll know the math that goes into them, but that offers limited insight.

A lot of them are functionally black boxes, though there are ongoing efforts to make things less opaque.

https://cointelegraph.com/news/ai-s-black-box-problem-challenges-and-solutions-for-a-transparent-future

 

I don't think of them as black boxes at all. I think the "black box" designation is a myth.

Again I'd point to Wolfram's explanation of neural nets and their role in LLMs:

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

The input data form these attractor regions with the signals that are distributed across the network, and the more data you put in, the better the resultant curves fit the data (of course). It's just a way of encoding inputs and forming them into 3D "landscapes", where data points can later be compared to see which region they correspond to on that 3D curve.

That's how all these things "function"... via correspondence effects. If a signal coming out at the other end corresponds to a particular region, that's the result of the form-fitting and not any "meaning" the system is somehow comprehending. This is why these things are always vulnerable to adversarial attacks: because they rely on correspondence.
 

Picture labeled as "panda" + pixels invisible to the naked eye = picture labeled as "gibbon"

[Image: the "panda + imperceptible noise = gibbon" adversarial example]
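To make the "correspondence" point concrete with a toy example (completely made-up 2-D data, not a real image model): fit a boundary between two labeled clusters, then a small nudge along the boundary's normal direction flips the label even though the input has barely changed.

# Toy "correspondence" classifier + adversarial nudge (illustrative only; not a real
# image model). The label flips because the point crosses a fitted boundary, not
# because anything the input "means" has changed.
import numpy as np

rng = np.random.default_rng(1)
pandas = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
gibbons = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([pandas, gibbons])
y = np.array([0.0] * 50 + [1.0] * 50)        # 0 = "panda", 1 = "gibbon"

# Fit a linear boundary by least squares (the "curve fitting" part).
Xb = np.hstack([X, np.ones((100, 1))])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]

def label(p):
    return "gibbon" if np.dot(np.append(p, 1.0), w) > 0.5 else "panda"

x = np.array([1.3, 1.3])                     # clearly on the "panda" side
nudge = 0.5 * w[:2] / np.linalg.norm(w[:2])  # small step along the boundary's normal
print(label(x), "->", label(x + nudge))      # small input change, label flips

Running it prints something like "panda -> gibbon": the flip comes from crossing a fitted region boundary, nothing more.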


Just remember that if one of these deadly "automatically driven" road missiles hits you and maybe kills or maims you, no one will take responsibility:

Drivers blame car because it's "driving itself":
https://www.audacy.com/kmox/news/local/missouri-highway-patrol-see-more-self-driving-cars-accidents

Quote

After crashing, drivers in Missouri are telling state troopers they aren't at fault because the car was "driving itself."

Cpl. Dallas Thompson with the Missouri State Highway Patrol said the department's been seeing more vehicles with autonomous features, like Tesla, in crashes.

"We've had several crashes that we've worked where the driver has told us they [weren't] driving, that the vehicles were driving themselves," Thompson said. "And the capability is there in a lot of vehicles now. That's not something you should trust your vehicle to do."

Car makers blame the drivers:
https://safeautonomy.blogspot.com/2023/09/no-mercedes-benz-will-not-take-blame.html

Quote

There seems to be widespread confusion about who will take the blame if the shiny new Mercedes-Benz Drive Pilot feature is involved in a crash in the US. The media is awash with non-specific claims that amount to "probably Mercedes-Benz will take responsibility."  (See here, here, here, and here)

But the short answer is: it will almost certainly be the human driver taking the initial blame, and they might well be stuck with it -- unless they can pony up serious resources to succeed at a multi-year engineering analysis effort to prove a design defect.

This one gets complicated. So it is understandable that journalists on deadline simply repeat misleading Mercedes-Benz (MB) marketing claims without necessarily understanding the nuances. This is a classic case of "the large print giveth, and the small print taketh away" lawyer phrasing.  The large print in this case is "MERCEDES-BENZ TAKES RESPONSIBILITY" and the small print is "but we're not talking about negligent driving behavior that causes a crash." 

The crux of the matter is that MB takes responsibility for product defect liability (which they have to any way -- they have no choice in the matter). But they are clearly not taking responsibility for tort liability related to a crash (i.e., "blame" and related concepts), which is the question everyone is trying to ask them.

Koopman points out that the deflection of responsibility by all parties is secondary to people getting hurt/killed:

[Screenshot: Koopman's comment on the deflection of responsibility]

Tesla driver does not remember killing anyone:
https://www.msn.com/en-us/news/us/tesla-driver-believes-his-car-would-ve-been-using-self-driving-feature-if-he-hit-and-killed-mille-lacs-doctor/ar-BB1i3xGz

Quote

"He does not remember hitting Cathy Donovan with his Tesla, but said if he did he would have been alone in his Tesla, driving on 'auto-pilot,' not paying attention to the road, while doing things like checking his work emails," BCA Special Agent Chad Kleffman wrote in a search warrant affidavit.



  • 1 month later...

The past couple of weeks I've been in a series of debates with a depressingly large number of people who all hold one misconception in common:

"The human brain works just like (machine) neural networks."

I don't know where the heck they got that idea from. Is it from Hinton? I suspect not, but more on that later.

It can't be just from the word "neural," right? Because there's nothing "neural" about neural networks.

Even on the most basic level, such a gross correlation couldn't be established:

https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness

Quote

"The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function."

 

Looking at the infamous panda adversarial example, NNs evidently don't deal with specific concepts. Here, the signal from an image, translated into mathematical space and attached to the text label "panda", combined with the signal-to-space translation of another image that's invisible to the naked eye, produces an image whose mathematical space is identified as corresponding to the text label "gibbon" with an even higher degree of match:

[Image: the panda/gibbon adversarial example]

 

That's not "meaning," that's not anything being referred to; That's just correspondence.

People keep saying "oh humans make the same mistakes too!".... No... The correspondences are not mistakes. That's the way the NN algorithm is designed from the start to behave. If you have those same inputs, that's what you get as the output. Human reactions are not algorithmic outputs.

 

Human behavior isn't computational (and is thus non-algorithmic). Gödelian arguments have been used to demonstrate assertions similar to mine:
====== (see section 7 of Bishop's paper)
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2020.513474/full

Quote

The Oxford philosopher John Lucas primarily used Gödel's theorem to argue that an automaton cannot replicate the behavior of a human mathematician (Lucas, 1961, 1968), as there would be some mathematical formula which it could not prove, but which the human mathematician could both see, and show, to be true; essentially refuting computationalism. Subsequently, Lucas' argument was critiqued (Benacerraf, 1967), before being further developed, and popularized, in a series of books and articles by Penrose (1989, 1994, 1996, 1997, 2002), and gaining wider renown as “The Penrose–Lucas argument.”

 

The fly-brain neural probing experiment I mentioned in my opening article lends support to the idea that at least fly brains aren't algorithmic. I don't think there's good reason to believe that the human brain is going to categorically deviate.

Now, on to the subject of Hinton. I think I'd be giving people too much credit if I pegged their thinking to Hinton (as if they all follow him or something), but even if so, Marcus has shown Hinton's POV to be flat-out wrong so many times already that I wonder why people never consider changing their minds even once https://garymarcus.substack.com/p/further-trouble-in-hinton-city

Even directly looking at a Hinton lecture, I could spot trouble without going very far into it. While I acknowledge Hinton for his past achievements, he is completely guilty of "when all you have is a hammer, everything looks like a nail" behavior. Apparently he thinks EVERYTHING is backprop when it comes to the brain. This isn't as cut and dried as he makes it out to be, as my earlier quote regarding supposed "neural coding" showed. Marcus says Hinton's understanding on the matter is SHALLOW, and I agree.

In the video lecture below, Hinton completely hand-waved the part about how numbers are supposed to capture syntax and meaning. Uh, they DON'T, as the above "panda" example shows. If people can be arsed to watch the rest and can show how he rescued himself from that blunder, please go right ahead...

https://www.youtube.com/watch?v=iHCeAotHZa4

 

(..................and the behavior of the audience itself is laughable. They laughed at the Turing Test "joke"......so I guess behaviorism is a good criterion? That's their attitude, correct? "Seriously, these people are not exactly up to date if they think that way; maybe they're just laughing to be polite?" was my reaction)

