
How does ChatGPT work?



Facts are sometimes like neutrinos: they pass right through people with zero interaction.

On 5/4/2023 at 12:59 PM, wtf said:

When you (or anyone) claims that a program implemented on standard computing hardware might somehow achieve consciousness, by means of "supervenience" or "emergence," -- two impressive-sounding words that in my opinion convey no meaning and explain nothing -- you (or they, if you didn't say this) are making the claim that consciousness is computable. There is no evidence that this is true. 

The "evidence" consists generally of equivocation of words. They'll say that "Minds process information, and computers process information, therefore minds are computers," without noticing that the processing in question is qualitatively different. 

I agree that many equivalences that are drawn, most notably with artificial neural nets, are not warranted.  I would also note that we can distinguish between simulations of a process and an actual process - e.g. when a computer simulates a thunderstorm, water doesn't gush from its hardware nor does it shoot deadly electrical discharges at us.  That said, information is a little different from a bit of nasty weather.  Machines can process information, they aren't simulating processing information.  Information processing is a genuine causal power of computing devices.  The same, we presume, is true of other people's brains.  So I am drawn to Integrated Information Theory, in which a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively).  If we can fully uncover the causal properties of a brain, there would seem no obstacle in principle to designing a machine with the same causal properties.  (and I agree that present digital computers do not have such causal properties)

I append a relevant snatch from the wiki article....

IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The limitations on the physical system for consciousness to exist are unknown and consciousness may exist on a spectrum, as implied by studies involving split brain patients[10] and conscious patients with large amounts of brain matter missing.[11]

Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").


This short article sheds more light on the OP question:

Quote

Labelers will tag particular items (be they distinct visual images or kinds of text) so that machines can learn to better identify them on their own. ...

Artificial intelligence may seem like magic—springing to life and responding to user requests as if by incantation—but, in reality, it’s being helped along by droves of invisible human workers who deserve better for their contribution.

OpenAI's ChatGPT Powered by Human Contractors Paid $15 Per Hour (gizmodo.com)


I tried ChatGPT and chatBing with the following:

Geometry problem:  Semi-circle inside triangle:  Triangle with known length sides a, b, c where a is the longest.  Place inside the triangle a semi-circle with diameter resting on side a. What is semicircle radius of largest possible semi-circle in terms of side lengths?  Position of diameter center along a?

Neither worked - chatGPT gave no answer, while chatBing got it wrong.
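For what it's worth, here is a minimal numeric sketch (Python) of what the correct answer appears to be, under the assumption, not stated in the prompt, that the largest such semicircle must be tangent to the other two sides. On that assumption the radius comes out as r = 2·Area/(b + c), with the diameter's center at distance a·c/(b + c) from the vertex joining sides a and c; note this is strictly larger than the triangle's inradius, which is one of the wrong answers reported later in the thread.

```python
import math

def largest_semicircle(a, b, c):
    """Toy check for the thread's geometry puzzle.

    Assumption (mine, not from the original posts): the largest semicircle
    whose diameter lies on the longest side a is tangent to the other two
    sides. Then r = 2*Area/(b + c), with the diameter's center at distance
    a*c/(b + c) from the vertex joining sides a and c.
    """
    s = (a + b + c) / 2.0                                # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))    # Heron's formula
    r = 2.0 * area / (b + c)                             # conjectured semicircle radius
    x = a * c / (b + c)                                  # center position along side a
    return r, x

# Example: 3-4-5 right triangle, hypotenuse a = 5.
r, x = largest_semicircle(5, 4, 3)
print(r, x)   # ~1.714 and ~2.143; the inradius is 1.0, so "inradius" is indeed wrong here
```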


On 5/8/2023 at 1:01 AM, Eise said:

I am not aware of using the word 'computability'. Obviously you filled that in. If that helps: no, I do not think there will be an algorithm for consciousness. 'Complexity' surely is a much better description, even if it sounds vaguer. But e.g. Daniel Dennett makes a well-argued case in his Consciousness Explained, making it less vague than it sounds.

Glad you asked. By the way, what did Dennett say? 

So, computability is what algorithms do. The key thing about algorithmic computability is that the speed of the computation doesn't matter. So if you have a machine that is not conscious when run slowly, but is conscious when run quickly, then whatever consciousness is, it's not a computation. 

On the other hand there's complexity theory, which is concerned with the efficiency of a given computation. It may well be the case that if you run an algorithm slowly it's not conscious, but if you run it quickly it is. That just means that whatever consciousness is, it's not a computation. But it might be something else. 

One interesting idea is analog computation. It seems to me that the wet goo processing in our brains is more of an analog process. The operation of our neurotransmitters seems more analog than digital to me. As I understand it, people are interested in the question of whether analog systems can compute things that digital (i.e. discrete) algorithms can't. Perhaps our brains do something that digital computers don't, but it's still natural, and yet it's not necessarily quantum, if you don't like Penrose's idea. 

 

 

 

On 5/8/2023 at 1:01 AM, Eise said:

So if your elevator is just executing algorithms, without having these algorithms unwanted side effects, then it is not conscious. Exactly like a neuron, or a small set of neurons.

Ok then you agree that an elevator composed of switches and executing an algorithm, is not conscious. 

But neurons are a lot different IMO. Neurons are not digital switches at all. They're not on/off switches. They're complex biological entities, and the real action is in the synapses where the neurotransmitters get emitted and reabsorbed in ways that are not fully understood. It's not anything at all like how digital computers work. Digital switches are NOT like small sets of neurons, not at all. 

 

On 5/8/2023 at 1:01 AM, Eise said:

Added bold. Does that say that there were no discussions about this topic? Nope. 

Ok, minor misunderstanding.

On 5/8/2023 at 1:01 AM, Eise said:

And panpsychism is not my cup of tea. Should we also adhere to 'panvivism'? Because living organisms exist, should we suppose that all atoms are at least a little bit alive? 

I'm not necessarily a panpsychist, but there's something to be said for the idea. If a small pile of atoms is not conscious and a large pile, arranged in just the right manner is, then where's the cutoff point? And if it's a gradual process, maybe an individual atom has some micro-quantity of consciousness, just waiting to be unleashed when the atom is joined by others in just the right configuration. Just an idle thought.

On 5/8/2023 at 1:01 AM, Eise said:

Maybe you should explain what 'existential' means.

Oh, ok. I said that (in my opinion) the current generation of AI systems will be socially transformative but not existential. That means that these systems will profoundly change society, just as fire and the printing press and the Internet did. But they will not destroy us all, as some seem to believe and are claiming out loud these days. I don't think that will happen. We came out of caves and built all this, and I would not bet against us humans. We invented AI as a tool. A lot of people get killed by cars, another transformative technology. 3000 a month in the US, 100 every day. Did you know that? Another 100 today, another 100 tomorrow. Somehow we have accommodated ourselves to that, although in my opinion we should crack down on the drunk drivers. We're way too tolerant of drunks. Maybe a lot of people will get killed by AI.  Just as with cars, we'll get used to it. It's not the end of the world and it's not the end of humanity. That's what I meant by "transformative but not existential."

 

On 5/8/2023 at 1:31 PM, TheVat said:

Facts are sometimes like neutrinos: they pass right through people with zero interaction.

Hello Mr. Vat. I was a member on your other site under a different handle. Sad about whatever happened, it was a good site. 

 

On 5/8/2023 at 1:31 PM, TheVat said:

I agree that many equivalences that are drawn, most notably with artificial neural nets, are not warranted.  I would also note that we can distinguish between simulations of a process and an actual process - e.g. when a computer simulates a thunderstorm, water doesn't gush from its hardware nor does it shoot deadly electrical discharges at us. 

Yes yes. I've used the same analogy myself. That a simulation of gravity does not attract nearby bowling balls. 

 

 

On 5/8/2023 at 1:31 PM, TheVat said:

That said, information is a little different from a bit of nasty weather.  Machines can process information, they aren't simulating processing information.  Information processing is a genuine causal power of computing devices. 

Ok, I agree that "information processing" is different. A simulation of information processing is not different from information processing. The gravity analogy breaks down.

But, I do think you may be doing that semantic equivalence thing ... My laptop processes information, my brain processes information, therefore there must be some analogy or likeness between how my laptop and my brain work. But this is not true. There's an equivocation of the phrase "information processing." In particular, in computer science, information has a specific meaning. It's a bitstream. A string of discrete 1's and 0's, which are processed in discrete steps. Brains are a lot different. Neurons and all that. Neurotransmitter reuptake. That is not a digital process. It's analog. 

 

 

On 5/8/2023 at 1:31 PM, TheVat said:

The same, we presume, is true of other people's brains.

Brains just aren't digital computers. And also, we're not talking about brains, but rather minds, which are different things entirely. Suppose we made a neuron-by-neuron copy of a brain out of digital circuitry. It might even appear identical to a brain from the outside. Give it a visual stimulus and the right region of the visual cortex lights up. But would it have a mind? I have no idea. Nobody does. But I think we should be careful with this machine analogy and especially with the "information processing" analogy. Elevators process information, as I've noted. They're not conscious, they're not intelligent. But they do "decide" and "remember." These are semantic issues. We use the same words to mean very different things.

On 5/8/2023 at 1:31 PM, TheVat said:

So I am drawn to Integrated Information Theory, in which a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively).  If we can fully uncover the causal properties of a brain, there would seem no obstacle in principle to designing a machine with the same causal properties.  (and I agree that present digital computers do not have such causal properties)

I've heard of Tononi's IIT, where he has some mathematical function that figures out how conscious something is as a function of its complexity. That's literally all I know, which isn't much. 

On 5/8/2023 at 1:31 PM, TheVat said:

The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience.

I confess I do not understand this sentence. "If the formal properties .. can be fully accounted for ..." I understand. But what does it mean that the properties of the physical system must be constrained by the experience? That seems backward. The experience must be constrained by the mechanism, not vice versa. Maybe I'm just misunderstanding. Or not understanding.

On 5/8/2023 at 1:31 PM, TheVat said:

consciousness may exist on a spectrum, as implied by studies involving split brain patients

Back to panpsychism. Maybe an atom is a tiny little bit conscious, and all we need to do is put enough of them together in just the right configuration.

I remember the split brain experiments of the 60's, but I thought I read that the idea's been debunked. 

On 5/8/2023 at 1:31 PM, TheVat said:

Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").

As more of a math person, axioms and postulates are synonymous to me. 

But then there's the flying analogy. Birds fly and airplanes fly but the mechanisms are radically different. Even the underlying physical principles are not the same. Planes don't fly by flapping their wings. The Wright brothers, as far as I know, did not study birds. It would have been a blind alley.

Why should machines think the way people do?

Edited by wtf

1 hour ago, wtf said:

One interesting idea is analog computation. It seems to me that the wet goo processing in our brains is more of an analog process. The operation of our neurotransmitters seems more analog than digital to me. As I understand it, people are interested in the question of whether analog systems can compute things that digital (i.e. discrete) algorithms can't. Perhaps our brains do something that digital computers don't, but it's still natural, and yet it's not necessarily quantum, if you don't like Penrose's idea.

So you think it'd require using ternary(or other base) or an analog computer then?

What about using neurons on a chip? Is intelligence just a matter of scaling things up and/or creating hardware capable of performing the same?


3 hours ago, Endy0816 said:

So you think it'd require using ternary(or other base) or an analog computer then?

 

The encoding makes no difference at all. Any positional notation like decimal or ternary is equivalent to binary for purposes of defining computation. As far as analog computing, that's what some people think may enable us to break out of the limitations of digital computing. But the idea is speculative.
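To illustrate the point about encodings (a toy sketch, with made-up numbers, nothing from the thread): base conversion is itself computable, so the "same" computation can be routed through binary, ternary, or decimal encodings and the result never changes. What can be computed does not depend on the base.

```python
def to_base(n, base):
    """Encode a non-negative integer as a digit string in the given base (2-10)."""
    digits = "0123456789"
    out = ""
    while True:
        n, d = divmod(n, base)
        out = digits[d] + out
        if n == 0:
            return out

# The same multiplication, routed through three different encodings.
x, y = 29, 13
for base in (2, 3, 10):
    xe, ye = to_base(x, base), to_base(y, base)    # encode the operands
    product = int(xe, base) * int(ye, base)        # decode, compute
    print(base, to_base(product, base), product)   # re-encode; the value is 377 every time
```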

 

3 hours ago, Endy0816 said:

What about using neurons on a chip?

Well, biological neurons can't be replicated on a chip. That's the point. 

What the AI folks call "neurons" are mathematical models of abstract neurons. Signals go in, signals go out, nodes have weights and paths have probabilities and so forth. Here's the Wiki write-up.

https://en.wikipedia.org/wiki/Artificial_neuron

It's not a new idea. The McCulloch-Pitts neuron dates from 1943.

https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1
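To show how abstract these "neurons" are, here is a minimal sketch of a McCulloch-Pitts-style unit (the weights and threshold are made up for the example): it just sums weighted binary inputs and fires if the sum reaches a threshold. Nothing in it resembles neurotransmitter chemistry.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """1943-style abstract neuron: weighted sum plus a hard threshold.
    Inputs and output are 0/1."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: an AND gate built from one such unit (weights/threshold chosen by hand).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcculloch_pitts((a, b), (1, 1), threshold=2))
```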

So yes, you could put digital neurons on a chip, and it would make the computations go faster. I wouldn't be surprised if a lot of the modern AI models are already implemented partially using custom chips. I can't see how it would make a substantial difference. I'm sure they already do every performance tweak they can.

 

3 hours ago, Endy0816 said:

Is intelligence just a matter of scaling things up and/or creating hardware capable of performing the same?

Well, nobody knows, right? But if we take "hardware" in its most general form, we ourselves are hardware, in the sense that we're made of "stuff," whatever stuff is in these days of probability waves of quantum fields. But if we accept materialism, which we need to do in order to get the conversation off the ground, then we ourselves are machines. So in the end, there must be some kind of machine that instantiates or implements consciousness, since we are that type of machine.

My argument in the last couple of posts is that we just don't happen to be computational machines, in the sense of Turing machines or algorithms/programs.

Since we are conscious, that means that

(1) Either we are doing something that digital computers can't (my opinion, though I'm hard-pressed to identify the nature of the secret sauce); or else

(2) We ourselves operate the same way digital computers do, programs implementing algorithms. I don't believe that, but some people do. I'm not dogmatic about my opinion, I'd just be personally horrified to find out I'm just a character in a video game, or somebody's word processor. Many other people these days already believe that they are and don't seem to mind. I think they're selling humanity short. At least I hope they are.

I found a nice article about this the other day. This paragraph articulates the difference between what probabilistic large language models like ChatGPT do, and what creative human artists do.

Quote

AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.

 

"Why AI Will Never Rival Human Creativity"

https://www.persuasion.community/p/why-ai-will-never-rival-human-creativity

That's what humans do well that machines don't. Make the choices that have never been made.
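To make the quoted contrast concrete, here is a toy sketch (the words and scores are invented; this is not how any real model is tuned): a language model assigns scores to candidate next words, and the sampling "temperature" decides whether it almost always takes the high-probability choice or occasionally makes the unexpected, low-probability one.

```python
import math, random

# Made-up scores for candidate next words after "The sky is ..."
scores = {"blue": 5.0, "clear": 3.5, "falling": 1.0, "embarrassed": 0.2}

def sample_next(scores, temperature):
    """Softmax over the scores, then sample; low temperature is greedy, high is adventurous."""
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    r = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word

print([sample_next(scores, 0.2) for _ in range(5)])  # almost always "blue"
print([sample_next(scores, 2.0) for _ in range(5)])  # low-probability words show up sometimes
```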

Edited by wtf

7 hours ago, wtf said:

Glad you asked. By the way, what did Dennett say? 

Multiple drafts model

7 hours ago, wtf said:

One interesting idea is analog computation. It seems to me that the wet goo processing in our brains is more of an analog process.

Sure. But this would also be an argument against any physics simulations: most laws of physics are continuous, so 'analog'. By your argument, computer models of physical processes would be useless. So why would a digital computer not be able to be precise enough to simulate analog brain processes? Follow that up with TheVat's idea that there is no difference between information processing and a simulation of information processing. 

7 hours ago, wtf said:

I'm not necessarily a panpsychist, but there's something to be said for the idea. If a small pile of atoms is not conscious and a large pile, arranged in just the right manner is, then where's the cutoff point?

A small pile of atoms cannot calculate, so computers, made of atoms, cannot calculate either (or the other way round: computers can calculate, so every atom can calculate a bit). Protons, neutrons and electrons do not have colour, so nothing built from them can have colour. Etc.

7 hours ago, wtf said:

That's what I meant by "transformative but not existential."

Thanks for the clarification. I was reading 'existential' more in the way existentialists use it. And in that sense, you are already showing the first symptoms of 'existential fear':

2 hours ago, wtf said:

I'm not dogmatic about my opinion, I'd just be personally horrified to find out I'm just a character in a video game, or somebody's word processor.

Read Dennett. He is also a strong defender of the idea that we have free will (no, not libertarian free will, not plain (quantum) randomness). And he wrote a chapter in Intuition Pumps and Other Tools for Thinking about the 'just-operator' (made bold in your sentence).


Follow-up to ChatGPT - the answer was the triangle inradius, NOT the semicircle radius. Why?

------------------------------------------------------------------

-Geometry problem:  Semi-circle inside triangle:  Triangle with known length sides a, b, c where a is the longest.  Place inside the triangle a semi-circle with diameter resting on side a. What is semicircle radius of largest possible semi-circle in terms of side lengths?  Position of diameter center along a?
-------------------------------------------


30 minutes ago, mathematic said:

Follow-up to ChatGPT - the answer was the triangle inradius, NOT the semicircle radius. Why?

------------------------------------------------------------------

-Geometry problem:  Semi-circle inside triangle:  Triangle with known length sides a, b, c where a is the longest.  Place inside the triangle a semi-circle with diameter resting on side a. What is semicircle radius of largest possible semi-circle in terms of side lengths?  Position of diameter center along a?
-------------------------------------------

If you go back to your first post on this question, https://www.scienceforums.net/topic/131410-how-does-chatgpt-work/?do=findComment&comment=1238821, and follow about a dozen or so comments after it, you will find several different wrong answers that the commenters got from the bot.


4 hours ago, Genady said:

If you go back to your first post on this question, https://www.scienceforums.net/topic/131410-how-does-chatgpt-work/?do=findComment&comment=1238821, and follow about a dozen or so comments after it, you will find several different wrong answers that the commenters got from the bot.

I've been hearing that they're coming out with a Wolfram Alpha plugin for it.

That really seems more ideal until GAI is developed. Hand off the bits it can't or shouldn't do.

Edited by Endy0816

On 4/30/2023 at 11:01 PM, mathematic said:

Posed the following and got no answer:

Geometry problem:  Semi-circle inside triangle:  Triangle with known length sides a, b, c where a is the longest.  Place inside the triangle a semi-circle with diameter on side a. What is radius of largest possible semi-circle in terms of side lengths?  Position of diameter center along a?

 

On 5/1/2023 at 3:37 AM, Sensei said:

He objected to the title of this thread i.e. "How does ChatGPT work?" giving an example that it does not work i.e. gives wrong answers..

 

The foundation models of chat-GPT aren't trying to be factual.

A common use of chat-GPT is for science fiction writers - they will at times want accurate science and maths and at other times want speculative, or simply 'wrong', science and maths in service of a story. Which you want will determine what you consider a 'right' or good answer. 

Prompt engineering is the skill of giving inputs to a model such that you get the type of answers you want, i.e. learning to steer the model. A badly driven car still works.

Or wait for the above-mentioned Wolfram Alpha API, which will probably make steering towards factually correct maths easier.

 

BTW, question for the thread, are we talking about chat-GPT specifically, LLMs or just potential AGI in general? - they all seem to get conflated at different points of the thread.


On 5/10/2023 at 2:28 AM, Eise said:

Multiple drafts model

I gave that Wiki page a fair-minded try. I really did. I just didn't understand any of it, and the small parts I did understand, I disagreed with. Starting from the first sentence:

"Daniel Dennett's multiple drafts model of consciousness is a physicalist theory of consciousness based upon cognitivism, which views the mind in terms of information processing."

Here we have the same old problem of equivocating "information." An algorithm does information processing in the sense of processing a stream of bits, one bit at a time. The machine is in a particular state. If the next bit is a 0, it goes to one state; if a 1, it goes to a different state. All algorithmic processing can be reduced to that idea.
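That "next bit decides the next state" picture can be written out directly. Here is a toy sketch (my own example, not anything specific to ChatGPT): a finite-state machine that reads a bit stream and tracks whether it has seen an even or odd number of 1s, one discrete state update per bit.

```python
# A tiny state machine in the sense described above: the current state plus
# the next bit fully determine the next state. This one tracks the parity of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def run(bitstream, state="even"):
    for bit in bitstream:
        state = transitions[(state, bit)]
    return state

print(run("1101001"))  # "even": four 1s seen, one state update per bit
```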

Now when I go outside and see the blue sky and feel the soft breeze and smell the fresh-cut grass, I am doing no such thing. There is no bit stream, there is no algorithm. Subjective mental experience is nothing of the sort. 

I tried to read the rest of it. Some paragraphs several times. It felt like drowning in maple syrup. I just can't read this kind of prose. Even the Wiki version. And their excerpts from Dennett himself were worse. Must be me. 

 

On 5/10/2023 at 2:28 AM, Eise said:

Sure. But this would also be an argument against any physics simulations: most laws of physics are continuous, so 'analog'. Using your argument computer models of physical process would be useless. So why would a digital computer not be able to be precise enough to simulate analog brain processes? Followup with the TheVat's idea that there is no difference between information processing and a simulation of information processing.

Not at all. We can easily simulate continuous phenomena with discrete ones, as when we go to a traditional (analog) movie, which is nothing more than a sequence of still images that depend on a quirk of the visual system to give the illusion of motion. Likewise with modern digital video imagery. A bitstream, a long string of 0's and 1's, gives the illusion of motion. Any physical process can be simulated by a discrete system. Nonlinear systems can be approximated to any desired degree of accuracy by linear ones, as in calculus. All this is commonplace.
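A minimal example of that point (standard textbook Euler stepping, nothing specific to brains): the continuous law dx/dt = -x is approximated by finitely many discrete updates, and the error shrinks as the step size shrinks.

```python
import math

def euler_decay(x0, t_end, steps):
    """Discrete approximation of the continuous law dx/dt = -x."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)          # finitely many discrete updates
    return x

exact = math.exp(-1.0)          # true continuous solution at t = 1
for steps in (10, 100, 10000):
    approx = euler_decay(1.0, 1.0, steps)
    print(steps, approx, abs(approx - exact))  # error shrinks as the grid gets finer
```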

Perhaps we could even simulate, or approximate, the function of a brain. It might behave correctly from the outside: give it a visual stimulus and the correct region of the visual cortex lights up.

But mind ... that's something entirely different. There's no evidence that we can simulate a mind, by any means at all. Approximating brain function would not (in my opinion) implement a mind. And there is no evidence at all that it would.

So I'll agree with you that it's possible that we could simulate brain function. But that is not remotely the same as simulating mind. Our simulated brain would light up the correct region of the visual cortex. But would it then have a subjective experience of seeing? That's the "hard problem" of Chalmers. We don't know, and we have no idea how to even approach the problem.

 

On 5/10/2023 at 2:28 AM, Eise said:

A small pile of atoms cannot calculate, so computers, made of atoms cannot either (or the other way round: computers can calculate, so every atom can calculate a bit). Protons, neutrons and electrons do not have colour, so nothing built from them can have colour. Etc.

I think you are agreeing with me. Or else falling back on emergence. Small pile of atoms can't calculate but big pile can. But "emergence" explains nothing. It's only a label for something we don't understand. If we understood it, we wouldn't have to use a label as a substitute for understanding.

 

On 5/10/2023 at 2:28 AM, Eise said:

Thanks for the clarification. I was reading 'existential' more in the way existentialists use it. And in that sense, you are already showing the first symptoms of 'existential fear':

Yes, good point. I was thinking of the political meaning, as we often read these days that the Ukraine war is existential for Russia: their very existence depends on it. Or, as in the Google definition that comes up when you type in "existential": relating to existence, as in "the climate crisis is an existential threat to the world."

And not their second definition: concerned with existence, especially human existence as viewed in the theories of existentialism, as in "the existential dilemma is this: because we are free, we are also inherently responsible."

 

 

On 5/10/2023 at 2:28 AM, Eise said:

Read Dennett. He is also a strong defender of the idea that we have free will. (No not libertarian free will, not plain (quantum) randomness). And he wrote a chapter in Intuition Pumps And Other Tools for Thinking about the 'just-operator' (made it bold in your sentence)

Will give him a try sometime, thanks for the pointer.


14 hours ago, Genady said:

When people ask the bot a question, they don't care if the answer is wrong? If so, it tells about the people more than about the bot.

50% of US voters didn’t care that their president was  actively lying to them about easily verifiable facts. Your bar seems much too high, sir. Look around. 

Edited by iNow

12 hours ago, wtf said:

Now when I go outside and see the blue sky and feel the soft breeze and smell the fresh-cut grass, I am doing no such thing.

But your neurons fire like they always do, "just" in another pattern. 

12 hours ago, wtf said:

But mind ... that's something entirely different. There's no evidence that we can simulate a mind, by any means at all.

That is true. But there is also no evidence that we can't.

12 hours ago, wtf said:

Perhaps we could even simulate, or approximate, the function of a brain. It might behave correctly from the outside: give it a visual stimulus and the correct region of the visual cortex lights up.

Now assume that we are able to simulate a complete brain: that means the simulation can also report on what it sees. And then, being able to do everything that a natural brain can do, it can report that it does not like what it sees. And when asked why, it can reveal some of its reasons. But that means it has inner states, or even stronger, is aware of its inner states. Then it becomes difficult to argue that it has no consciousness. And if it cannot give its reasons? Well, then it was not a good simulation, or at least incomplete.

13 hours ago, wtf said:

But would it then have a subjective experience of seeing? That's the "hard problem" of Chalmers.

Of course it would! I am convinced that if all the 'easy problems' are solved, there is no hard problem left. Qualia have no causal powers, so they might just as well be non-existent. Maybe it helps if you ponder why nobody today thinks we need 'élan vital' to explain life anymore. 

13 hours ago, wtf said:

Small pile of atoms can't calculate but big pile can.

Piles? Nope. It is the structure and kind of processes that run on this structure. But that is probably what you meant.

13 hours ago, wtf said:

But "emergence" explains nothing.

Well, if one drops the word "emergence" just like that, I agree. But if you have a model on how higher level phenomena can be explained by the workings of a lower level, then "emergence" is a sensible description of that.


16 hours ago, Eise said:

And then, being able to do everything that a natural brain can do

But isn't this just conjecture? How can we be sure that a simulated brain is functionally indistinguishable from a biological one? So far no one has succeeded in accurately simulating even a single neuron (except as rough approximations), since, when you look at it more closely, it's actually an incredibly complex system. How can we be sure that it isn't the case that some part of the biological hardware is actually necessary for a brain to function like a brain?

I'm not claiming it can't be done (I don't know, and I'm not an expert in this either), I'm only urging caution with this assumption. I think it needs to be questioned.

16 hours ago, Eise said:

And when asked why, it can reveal some of its reasons. But that means it has inner states, or even stronger, is aware of its inner states.

I don't understand this inference - why does being able to verbally articulate something imply that there are necessarily inner states? And why does the presence of inner states imply awareness?

Any and all human languages have a finite number of elements, and a finite number of ways these elements can be combined, due to the presence of rules of grammar etc. It is conceivable to me that one can write software that simply trawls through the entirety of all written and spoken material that has ever been digitally captured, and, based on this, will be able to verbally respond to any question you pose to it in seemingly meaningful ways, based on precedents and probabilities within already existing media. In fact, if my understanding is correct, this is roughly what ChatGPT does. However, this is purely a mechanical and computational process, and I wouldn't agree that there are any kind of 'inner states' or 'awareness' involved in this. Of course there could be, but how can we be sure?
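Something in the spirit of what's described above can be mocked up in a few lines. This is a toy word-level Markov chain over a tiny made-up corpus; real LLMs are vastly more sophisticated, but the "precedents and probabilities" flavour is the same.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat saw the dog the dog sat on the rug".split()

# Record which words have followed each word in the "already existing media".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start, length=8):
    """Generate text purely from precedent: each next word is drawn from
    the words that actually followed the current word in the corpus."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:          # dead end: no precedent for what comes next
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(babble("the"))
```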

My feeling is that any sufficiently complex language model will eventually become externally indistinguishable from a conscious agent (based on verbal interactions), even though it is entirely mechanical in nature. I guess this is just the classical 'philosophical zombie' thing.


1 hour ago, Markus Hanke said:

My feeling is that any sufficiently complex language model will eventually become externally indistinguishable from a conscious agent (based on verbal interactions), even though it is entirely mechanical in nature. I guess this is just the classical 'philosophical zombie' thing.

Does it matter if it responds as you would expect? How do I know that you personally  don't have an AI source running inside you?


2 hours ago, Markus Hanke said:

software that simply trawls through the entirety of all written and spoken material that has ever been digitally captured, and, based on this, will be able to verbally respond to any question you pose to it in seemingly meaningful ways, based on precedents and probabilities within already existing media.

I don't think so. Such response requires more than written and spoken material for the foundation.

1 hour ago, StringJunky said:

Does it matter if it responds as you would expect?

If it is a language model, I don't think this will ever happen.


2 hours ago, Genady said:

I don't think so. Such response requires more than written and spoken material for the foundation.

Why?

What more? It's a machine fulfilling its objective. 

4 hours ago, StringJunky said:

Does it matter if it responds as you would expect? 

That's an interesting question, I think it would matter, because of all the noise it would create; where does the next human breakthrough come from?

It strikes me as the echo chamber version 2.0.


20 hours ago, Genady said:

I don't think so. Such response requires more than written and spoken material for the foundation.

Why not? What else would it require but a model of natural language, and a sufficiently large set of precedents? I don’t see why verbal interactivity cannot be simulated by a machine, to such a degree that it becomes indistinguishable from a conscious agent - at least in principle.

22 hours ago, StringJunky said:

How do I know that you personally  don't have an AI source running inside you?

You can never be sure, based purely on verbal interaction. In fact, if you were to interact with me directly in the real world, you’d probably be left wondering; I am autistic, so my face-to-face communication style is - shall we say - unconventional and not quite what you would expect from an ‘ordinary’ person, so you’d be forgiven for mistaking me for an AI :)

 
 



1 hour ago, Markus Hanke said:

Why not? What else would it require but a model of natural language, and a sufficiently large set of precedents? I don’t see why verbal interactivity cannot be simulated by a machine, to such a degree that it becomes indistinguishable from a conscious agent - at least in principle.

I think the gold standard amongst the AI-consuming public for what constitutes an acceptable human-like response will be much lower. If it tells you what you think you want to hear with a human level of consistency, i.e. not perfectly consistent, I think it will be enough for most folk. In fact, I think a 'perfect' response every time would actually reveal its source. There probably needs to be jitter built in.

I feel that some who are sceptical of 'sufficiently complex' and emergent-process models are inadvertently backing themselves into a Cartesian duality stance. 

Edited by StringJunky

You can ask ChatGPT about itself. It probably knows a bit more about it than scienceforums.net users. Parts of its answers:

Quote

As an AI language model, I work by using a neural network to analyze large amounts of text and identify patterns and relationships between words and phrases. [...]

The training process involves feeding the neural network vast amounts of text data, such as books, articles, and websites, and then adjusting the weights and biases of the network based on the patterns and relationships it identifies in that data. This allows the network to gradually learn how to generate text that is similar to the text it has been trained on.
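The "adjusting the weights and biases" step it mentions can be sketched at toy scale. Here is a single linear unit trained by gradient descent on made-up data; real language-model training does the same kind of nudging across billions of weights, driven by next-token prediction errors rather than a simple line fit.

```python
# Toy version of "adjust the weights and biases based on patterns in the data":
# one linear unit, trained by gradient descent to fit y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b
        error = pred - y
        w -= lr * error * x     # nudge each parameter against its error gradient
        b -= lr * error

print(round(w, 3), round(b, 3))  # approaches 2.0 and 1.0
```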

Much, but not all, of what it says is just based on what other people have said. For example, if you ask it a science question that is repeatedly misunderstood across the internet, and is not something some higher-weighted "expert system" would describe correctly, you will likely get the same commonly misunderstood wrong answer.

Months ago I asked it if the Twin Paradox required general relativity, and it told me that it does, sounding just like a post you might see anywhere across the internet. Now if I ask it, it says it doesn't. I don't know if that's better data, better model, better weightings, or if it is actively being corrected on answers it commonly gets wrong. There is a small army of humans providing it with specific data, not just scraped stuff.

Quote

As an AI language model, I have been programmed to perform a wide range of mathematical operations, including:

  • Addition, subtraction, multiplication, and division of numbers.
  • Solving equations and inequalities.
  • Calculating percentages, ratios, and proportions.
  • Finding the prime factorization of a number.
  • Calculating logarithms and exponents.
  • Solving systems of linear equations.
  • Performing calculus operations such as differentiation and integration.

It is not as simple as taking everything that's been written and scraped, and generating something similar. It is programmed with a bunch of different specific abilities. For example, it can multiply 2 numbers no one has ever multiplied before, with no data in existence to suggest what the probable answer is based only on what's been written before. It could conceivably learn the steps to do something it's not programmed to, but in this case it has specific programming for it.

Quote

Yes, as an AI language model, I was programmed with a special code specifically for generating poetry. This code is designed to help me identify and replicate the various structural and stylistic elements that are common in poetry.

The code for generating poetry involves using a neural network to analyze existing poems and identify patterns in their structure, such as the number of lines, the number of syllables per line, and the rhyme scheme. The neural network also identifies patterns in the language used in poetry, such as the use of metaphor, simile, and other figurative language.

Once the neural network has learned the patterns and structures of poetry, it can generate new poems by using those patterns and structures as a guide. The generated poems may not be identical to existing poems, but they will share many of the same features and will often have a similar tone, mood, and theme.

The general and most basic functionality is to mimic what it has seen before, and that alone can answer a lot of questions correctly and carry on a conversation similar to ones that have happened before. But there is a lot more additional programming that it hasn't just "learned" by itself. Certainly, additional programmed capabilities will be added over time.

 

 

An AI like this can create new ideas. For example, if someone somewhere has associated A with X, and someone else has associated B with X, it's possible for an AI to associate A with B, even if no human has ever done that.
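A crude sketch of that kind of indirect association (the mini "corpus" below is made up; real models use learned embeddings rather than raw co-occurrence counts): A and B never appear together, but both co-occur with X, so their context vectors end up similar and the system can link them.

```python
from collections import Counter

# Three made-up "documents": "aspirin" and "ibuprofen" never co-occur,
# but both co-occur with "headache".
docs = [
    "aspirin relieves headache pain",
    "ibuprofen relieves headache pain",
    "bananas are yellow fruit",
]

def context_vector(word):
    """Count which words appear in the same document as `word`."""
    counts = Counter()
    for doc in docs:
        tokens = doc.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = lambda c: sum(x * x for x in c.values()) ** 0.5
    return dot / (norm(u) * norm(v))

print(cosine(context_vector("aspirin"), context_vector("ibuprofen")))  # ~1.0: linked via shared context
print(cosine(context_vector("aspirin"), context_vector("bananas")))    # 0.0: no shared context
```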

When people say an AI doesn't, or never will, "understand" something like a human does, I wonder how they define understanding, or feeling, thinking, etc., without using "like a human does," in their definition. How do we know that human understanding is more than just the learned connections between a very large set of concepts and knowledge?

Edited by md65536

16 minutes ago, md65536 said:

 How do we know that human understanding is more than just the learned connections between a very large set of concepts and knowledge?

My thought too. That needs to be eliminated first before looking for more exotic explanations. 


5 hours ago, Markus Hanke said:

Why not? What else would it require but a model of natural language, and a sufficiently large set of precedents?

I think that our use of language is determined not only by the intra-language connections, but also by connections between the language and our sensory / motor / affective experiences. IOW, intra-language connections themselves don't have enough information to generate verbal responses indistinguishable from humans. The precedents should also include sensory / motor / affective precedents related to the linguistic experiences.

Edited by Genady

I have been following this interesting discussion thread and would like to add the following.

 

I have just listened to a most enlightening interview on our local radio with Nello Cristianini, an Italian chap who has been working in AI for 30 years and is now at the University of Bath.

His definition (and the definition used by workers in the field) of 'intelligence' is much wider than has been used here, and quite different from the stuffy theoretical definition from the abstract philosophers' camp.

He has just published a book explaining much:

The Shortcut

Why Intelligent Machines Do Not Think Like Us

published by CRC Press

 

I haven't yet had a chance to read it but hopefully it has far more solid detail than the interview.

He did make some good points about ChatGPT etc., including a good example of how a false, possibly illegal, conclusion could arise from using an AI to select candidates for a mechanical engineering job.

ChatGPT is based on statistical data comparison, so trawling the internet would soon reveal that the majority of mechanical engineers are male. This could lead to the rejection of an otherwise excellent female candidate.
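A toy sketch of the failure mode he described (all data below is made up, purely to show how "statistical data comparison" can smuggle in an irrelevant attribute): if past hires are mostly male, a naive model that scores candidates by similarity to past hires will penalize an equally qualified female candidate.

```python
from collections import Counter

# Made-up historical hires: a qualification plus an attribute that should be irrelevant.
past_hires = [("mech_eng_degree", "male")] * 9 + [("mech_eng_degree", "female")] * 1

def naive_score(candidate):
    """Score = how often each of the candidate's attributes appeared among past hires.
    This is the problem: 'gender' gets counted just like a real qualification."""
    counts = Counter(attr for hire in past_hires for attr in hire)
    return sum(counts[attr] for attr in candidate)

print(naive_score(("mech_eng_degree", "male")))    # 19
print(naive_score(("mech_eng_degree", "female")))  # 11, despite identical qualifications
```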

 

 

 

 

 

 

