Is an AI really a Turing Machine?


studiot


I know the official line is that so-called AI programs are nothing more than glorified TMs.

But is that really true?

One of the characteristics of a TM is reproducibility.

That is, a given input always results in an identical (and, in theory, predictable) output.

Yet we have noted that repeating a question to, say, ChatGPT sometimes results in several quite different answers.

A true TM should not exhibit this behaviour.

 

I wonder what is happening when the AI program does its data search and subsequent statistical 'pattern matching'. Does the dataset used for comparison vary each time the search is conducted, or is the same dataset invoked each time, with additional limiting or cutoff instructions in the program meaning that a different subset is actually compared on each run?

 

 

 


15 minutes ago, studiot said:

That is a given input always results in an identical (and predictable in theory) output.

Yes, it is a TM. The statement above is incorrect: in a TM, the next state of the machine depends on its current state as well as on the input. While the input to the AI might be the same, the machine is not in the same state when the question is repeated, and this leads to a different output.
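To make that concrete, here is a minimal sketch in Python of a deterministic TM transition; the states, symbols and rules are invented purely for illustration:

# Transition table of a deterministic TM:
# (state, symbol read) -> (next state, symbol to write, head move)
delta = {
    ('q0', '0'): ('q0', '1', +1),   # flip 0 -> 1, move right
    ('q0', '1'): ('q0', '0', +1),   # flip 1 -> 0, move right
    ('q0', '_'): ('halt', '_', 0),  # blank cell: halt
}

def step(state, tape, head):
    # The next action is fully determined by (current state, symbol under the head).
    next_state, write, move = delta[(state, tape[head])]
    tape[head] = write
    return next_state, head + move

tape = list('0110_')
state, head = 'q0', 0
while state != 'halt':
    state, head = step(state, tape, head)
print(''.join(tape))   # '1001_' -- identical on every run

Same (state, input) pair, same action, every time. Variation can only come from the machine starting in a different state.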

 

19 minutes ago, studiot said:

the same dataset is invoked each time

Yes, it is so in the current systems.


48 minutes ago, Genady said:

Yes, it is a TM. The statement above is incorrect: in a TM, the next state of the machine depends on its current state as well as on the input. While the input to the AI might be the same, the machine is not in the same state when the question is repeated, and this leads to a different output.

 

Yes, it is so in the current systems.

 

The switch-on routine in a current computer is meant to place the machine in a 'standard' state.

This could be further strengthened by hard-wiring the AI routine into that standard state.


12 minutes ago, studiot said:

 

The switch-on routine in a current computer is meant to place the machine in a 'standard' state.

This could be further strengthened by hard-wiring the AI routine into that standard state.

When a user connects to an AI and starts a session, the AI is not a routine that is switched on; it keeps running, and starting a new session changes its state.


Artificial intelligence has "states", and "weights" ("probabilities of true/false") attached to those states. These affect other states in a complex hierarchical tree.

"States" and "weights" are updated when you and other people talk, whether to a human (where the update happens in the brain) or to an artificial intelligence.

"Learning by repetition" ("repeat a lie often enough and it becomes the truth"; see Russia these days) increases the "weights" of some "state" ("information").

https://www.google.com/search?q=state+weights+artificial+intelligence

A normal computer has only bits, each 0 or 1; there is no other alternative.

 

P.S. Other chatbots have ended their lives as racists, misogynists and nationalists, simply because the people talking to them provided exactly that content and anybody could influence them (just as evil dictators/political leaders do to people).

https://www.google.com/search?q=chatbot+racist+answers

e.g.

https://en.wikipedia.org/wiki/Tay_(chatbot)

Quote

Tay was an artificial intelligence chatbot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch.[1] According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter.[2] It was replaced with Zo.

 


5 hours ago, studiot said:

That is, a given input always results in an identical (and, in theory, predictable) output.

 

https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine

NTMs have exactly the same computational power as standard (deterministic) TMs. Whether they can be essentially more efficient in terms of time complexity is the famous P versus NP problem.


On 5/31/2023 at 9:49 AM, studiot said:

Yet we have noted that repeating a question to, say, ChatGPT sometimes results in several quite different answers.

I wonder what is happening when the AI program does its data search and subsequent statistical 'pattern matching'. Does the dataset used for comparison vary each time the search is conducted, or is the same dataset invoked each time, with additional limiting or cutoff instructions in the program meaning that a different subset is actually compared on each run?

 

Pseudorandom values and past communication for that session are being used in the background.
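A small Python illustration of that point; the variation is itself deterministic underneath, it just depends on hidden state (the seed value 42 below is arbitrary):

import random

rng = random.Random(42)                             # fixed seed: the same hidden state every run
print([round(rng.random(), 3) for _ in range(3)])   # identical list every time

rng = random.Random()                               # seeded from the environment: new state each session
print([round(rng.random(), 3) for _ in range(3)])   # differs from run to run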


3 hours ago, Endy0816 said:

 

Pseudorandom values and past communication for that session are being used in the background.

And not only for that session, but also for past sessions of that user.


On 5/31/2023 at 2:49 PM, studiot said:

A true TM should not exhibit this behaviour.

What is a true Scotsman?

Turing also asked: how could we tell?

The machine he predicted is general AI; AI as we know it is a 'potential' baby GAI, or, as the late great Douglas Adams would say, '42'.


Thanks to all for offering some discussion points rather than taking up stances.

 

On 5/31/2023 at 8:46 PM, wtf said:

https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine

NTMs have exactly the same computational power as standard (deterministic) TMs. Whether they can be essentially more efficient in terms of time complexity is the famous P versus NP problem.

 

Good information listing many extensions to a basic TM (all carrying a qualifying adjective). +1

However these are extensions rather than restrictions, as I don't think it is possible to simplify a TM further.

 

1 hour ago, dimreepr said:

What is a true Scotsman?

Turing also asked: how could we tell?

The machine he predicted is general AI; AI as we know it is a 'potential' baby GAI, or, as the late great Douglas Adams would say, '42'.

 

It is sometimes difficult to follow your oblique thinking, but if you mean 'potential' as the Greeks used potential in relation to infinity, then no, that would not be a TM.

Note that Turing did not predict his machine, he specified it, and finiteness is a very important part of that specification.

Probability itself is philosophically problematic here, since TMs cannot directly handle the infinite, and the probability function has an infinite (though bounded) range of values.

I'm not sure how many appreciate the implications of finiteness.

Part of the specification includes a finite character set that can be input or output by a TM.
The same applies to the instruction set and the set of operations that may be carried out.
Finally, there is the 'one character at a time' requirement.

This is why a TM cannot take in a multicharacter 'message' all at once.

This, in turn, leads to the probability pattern matching in use by current 'AIs'.


9 minutes ago, studiot said:

This, in turn, leads to the probability pattern matching in use by current 'AIs'.

I think that probability pattern matching is an interpretation of what the AI is doing, rather than its actual functioning. There are, AFAIK, no explicit steps of probability pattern matching, either during its training or during its run.

During the training, parameters (aka weights) of a function of a specific form are calculated from a set of input-output pairs. During the run, an output of this function is calculated according to the accumulated history of inputs and responses. This is the basic mechanism, although there are variations, additions, modifications, randomizations, human interventions, etc.
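A toy Python sketch of what 'parameters of a function of a specific form are calculated from input-output pairs' means; the function shape, data and learning rate below are invented for illustration:

# Fit y = w*x + b to input-output pairs by gradient descent on the error.
pairs = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # invented training data (follows y = 2x + 1)
w, b = 0.0, 0.0
lr = 0.05                                       # learning rate
for _ in range(2000):
    for x, y in pairs:
        err = (w * x + b) - y
        w -= lr * err * x                       # nudge the weights to shrink the error
        b -= lr * err
print(round(w, 2), round(b, 2))                 # approaches w = 2, b = 1; after training,
                                                # the 'run' phase just evaluates w*x + b

Real systems have billions of weights and a far more elaborate function, but the two-phase structure (calculate weights, then only evaluate) is the same.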


6 hours ago, Genady said:

I think that probability pattern matching is an interpretation of what the AI is doing, rather than its actual functioning. There are, AFAIK, no explicit steps of probability pattern matching, either during its training or during its run.

During the training, parameters (aka weights) of a function of a specific form are calculated from a set of input-output pairs. During the run, an output of this function is calculated according to the accumulated history of inputs and responses. This is the basic mechanism, although there are variations, additions, modifications, randomizations, human interventions, etc.

Although it is not my name for the AI process, I think it is quite a good, simple, general one, so I use it.

Consider the following situation.

A Turing machine has a small defect: it does not have a symbol for the character 9 in its set of allowable characters.

So it can calculate 2 + 3, 2 + 4, 2 + 5 and 2 + 6, but not 2 + 7, whose result it cannot write; yet it can again calculate 2 + 8, since 10 is built from the symbols 1 and 0.

Now the question arises: "Is an AI capable of deducing that 2 + 7 exists, and so needs a symbol for its result, when asked to calculate 2 + 8?"
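The defect is easy to render as a toy program (Python, invented purely to illustrate the hypothetical):

ALLOWED = set('012345678')   # the machine's character set: no symbol '9'

def add(a, b):
    result = str(a + b)
    if not set(result) <= ALLOWED:
        raise ValueError("no symbol available to write " + result)
    return result

print(add(2, 6))   # '8'  -- fine
print(add(2, 8))   # '10' -- fine, built from the symbols '1' and '0'
print(add(2, 7))   # raises: the machine cannot write '9'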

 


9 minutes ago, studiot said:

Although it is not my name for the AI process, I think it is quite a good, simple, general one, so I use it.

Consider the following situation.

A Turing machine has a small defect: it does not have a symbol for the character 9 in its set of allowable characters.

So it can calculate 2 + 3, 2 + 4, 2 + 5 and 2 + 6, but not 2 + 7, whose result it cannot write; yet it can again calculate 2 + 8, since 10 is built from the symbols 1 and 0.

Now the question arises: "Is an AI capable of deducing that 2 + 7 exists, and so needs a symbol for its result, when asked to calculate 2 + 8?"

 

I am not sure how to apply this hypothetical scenario to an AI of today, because it is a TM with only two symbols in its character set, namely 0 and 1.


23 hours ago, studiot said:

I'm not sure how many appreciate the implications of finiteness.

For me it means a paradox can't exist; for Turing, the machine will inevitably surpass us, as with 'Watson' or whatever the 'go' winner was called.

What we can't say is that its behaviour is wrong...


On 5/31/2023 at 6:49 AM, studiot said:

I wonder if what is happening is that when the AI program does its data search and subsequent statistical 'pattern matching' this happens because the dataset for comparison varies each time the search is conducted or that the same dataset is invoked each time but there are additional limiting or cutoff instructions in the program to mean that each time a different dataset is actually compared.

As I understand it, in the case of the Large Language Models (LLMs), the system is "trained" on a body of data. Once the training is done, the system is "locked in," so to speak. It's a big map of text strings and their likely follow-ups, ranked by statistical frequency.

In other words: first they do a massive statistical analysis of a body of text (the "corpus"). Once they're done analyzing, they build out their graph of which text strings are likely follow-ups to which others, then they run it.

The initial statistical analysis doesn't change, nor does the body of text. It's done in a single phase, as they analyze the text and experimentally assign "weights" to the various likely continuations. For example, if you always pick the most popular continuation, your chatbot will be boring; and if you give too high a preference to the unlikely continuations, your chatbot will appear to be insane, or extremely hip, or something.
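For what it's worth, here's a rough Python sketch of that trade-off; the continuation scores are fabricated for illustration (real systems derive them from the corpus), but the boring-versus-unhinged knob works the same way:

import math, random

scores = {'the': 5.0, 'a': 3.0, 'wibble': 0.5}   # invented next-word scores

def sample(scores, temperature):
    # Softmax with a temperature: low T -> almost always the favourite (boring);
    # high T -> unlikely continuations come up often (insane / extremely hip).
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

print(sample(scores, 0.1))   # nearly always 'the'
print(sample(scores, 5.0))   # 'wibble' now shows up regularly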

If the corpus is biased, the AI will be biased. And all bodies of text are biased. Therefore all chatbots and all statistically-based AIs are biased, and that's all of the AIs these days, LLMs or neural nets. That's important to remember; it's as true today as it was decades ago. Garbage in, garbage out.

Hope that was reasonably on topic to what you were asking. 

 

On 5/31/2023 at 6:49 AM, studiot said:

I know the official line is that so called AI programs are nothing more than glorified TMs

In short, TMs are abstract models of computing, not physical machines. They have unbounded memory, which no physical computer can have. AIs running on physical computers are actually Finite State Machines (FSMs) or Pushdown Automata (PDAs). Both are strictly weaker than TMs, in the sense that TMs can solve a wider class of problems than FSMs or PDAs.

See 

https://en.wikipedia.org/wiki/Finite-state_machine

Even the fanciest deep neural net or LLM runs on conventional computing hardware, so it's an FSM (or PDA), or a physical implementation of a TM subject to space constraints.

[This is a limitation of my knowledge. I don't know whether computer programs running on real-life hardware are FSMs or PDAs. Either way, they're strictly weaker than TMs, because real-life hardware is finite.]

In other words, in principle there is no difference between the world's fanciest AI program and your first "Hello World!" program. No matter what it is they're doing, they are executing on conventional CPU and memory chips. Ultimately, at the machine level, they're just chucking bits, like your Solitaire program on Windows.

Many people have mystical beliefs in this regard, for example that neural nets are "more than regular programs because they're not strictly programmed", or some such nonsense. It's not true. At the chip level the hardware can't tell your AI from an LolCat.

 


8 hours ago, wtf said:

Many people have mystical beliefs in this regard, for example that neural nets are "more than regular programs because they're not strictly programmed", or some such nonsense. It's not true. At the chip level the hardware can't tell your AI from an LolCat.

That doesn't negate Turing's hypothesis; even an automated loom or anthill surpasses a human on some level. Strictly speaking, a TM only has to pass a human test...


12 minutes ago, dimreepr said:

That doesn't negate Turing's hypothesis; even an automated loom or anthill surpasses a human on some level. Strictly speaking, a TM only has to pass a human test...

You seem to be conflating two different things; Turing's test and a Turing machine.
As far as I know, no Turing machine would pass Turing's test.


8 hours ago, wtf said:

They have unbounded memory, which no physical computer can have. AIs running on physical computers are actually Finite State Machines (FSMs) or Pushdown Automata (PDAs).

Are you sure about that? My doubts are as follows:

Although a TM has an infinite tape, at any given stage of its run only a finite part of that tape is in use. The infinity of the tape allows the TM to use as much as needed, but it never actually uses an infinite amount of tape.

Physical computers are the same: at any given stage of their run they use finite memory, but memory can be, and is, dynamically added as needed.
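That 'as much as needed, never actually infinite' idea is exactly how one would program a tape; a minimal Python sketch:

from collections import defaultdict

tape = defaultdict(lambda: '_')   # every cell reads as blank until it is touched
tape[0] = '1'
tape[1_000_000] = '1'             # a far-away cell springs into existence on demand
print(len(tape))                  # 2 -- only the visited cells are ever stored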


1 minute ago, John Cuthber said:

You seem to be conflating two different things; Turing's test and a Turing machine.
As far as I know, no Turing machine would pass Turing's test.

That's given me something to think about, thanks.


On 6/8/2023 at 8:10 AM, Genady said:

Are you sure about that? My doubts are as follows:

Yes, I will say I'm sure, and I'll try to address your doubts. I'll also mention that I'm not an expert in CS theory, I've just picked up a little online. But I'm pretty sure I've got this right.

 

On 6/8/2023 at 8:10 AM, Genady said:

Although a TM has an infinite tape, at any given stage of its run only a finite part of that tape is in use. The infinity of the tape allows the TM to use as much as needed, but it never actually uses an infinite amount of tape.

That's true. In fact, it's a point where a lot of people are imprecise: they'll say it's an infinite tape, but it's not, it's an unbounded tape. Exactly as you say, it's always as long as it needs to be to perform a given computation.

But FSMs (finite state machines) start out in life with a fixed, finite number of states; say 47, or a googolplex, or Graham's number, or any of the other humongous finite numbers computer scientists play with.

No matter what the number is, there's some problem that cannot be solved in that many states; there's always a problem that requires more states than you have. That's the limitation of FSMs. The TM just grows as much as it needs to, so a TM can always solve more problems than an FSM can.
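The textbook example is recognising balanced parentheses (the language a^n b^n in disguise): any one FSM with k states fails on some input nested deeper than k, while a program with an unbounded counter handles any depth. A Python sketch, assuming the input contains only parentheses:

def balanced(s):
    depth = 0                     # an unbounded counter, not a fixed set of states
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False          # a ')' with nothing to match
    return depth == 0

print(balanced('(()())'))   # True
print(balanced('((('))      # False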

Now I think I see what you mean by considering ALL the FSMs with 1, 2, 3, 4, 5, ... states. Given any problem, SOME FSM must be able to solve it. I don't know the answer to this question, and now you have me confused on this point.

Nevertheless, I have some references that agree with me, so I'll post them after the next paragraph, and we'll both have to defer to authority. I'll see if I can understand what's going on.

 

 

 

 

On 6/8/2023 at 8:10 AM, Genady said:

Physical computers are the same: at any given stage of their run they use finite memory, but memory can be, and is, dynamically added as needed.

There are only [math]10^{78}[/math] atoms in the observable universe. At some point there is no more memory to be had. Physical computers are definitely much more limited than Turing machines. 

So while we're not too sure about the computational strength of the set of ALL FSMs, we're sure that any ONE FSM is strictly weaker than a TM, because there are problems it can't solve; namely, problems requiring more states than it has.

I started out thinking I'd explain this but now I'm unsure about the set of all FSMs. I'll have a look at these links myself. I haven't curated these, they just looked interesting from a Google search.

https://news.ycombinator.com/item?id=7684027

https://www.tutorialspoint.com/distinguish-between-finite-automata-and-turing-machine

https://cs.stackexchange.com/questions/16315/difference-between-a-turing-machine-and-a-finite-state-machine

 

 

 

 

 

On 6/8/2023 at 7:51 AM, dimreepr said:

That doesn't negate Turin's hypothesis, even an automated loom/anthill surpasses a human on some level; strictly speaking a TM only has to pass, a human test...

As @John Cuthber noted, the Turing test and Turing machines are two different things. Just to flesh that out a bit, Turing machines were devised by Turing in 1936 as a mathematical formalization of an idealized computer. 

It turns out that many/most computer scientists believe that, informally, "Anything that a human being can compute, can already be computed by a Turing machine." This is not a mathematical theorem or fact that is capable of proof. Rather, it's the Church-Turing thesis. Although it was formulated in the 1930s, it still stands unrefuted today. 

Some think -- I think -- that the next breakthrough must be the breaking of the Church-Turing thesis. Finding some mode of computation, whatever that means, that exceeds the power of Turing machines. I think that might be the secret sauce to moving further towards consciousness. I think human minds are doing something that TMs aren't, yet that may still be explainable by physical means. But that's only a belief for which I have no proof.

So much for philosophy. I also wanted to comment on the idea of the Turing test. It turns out that by far the weakest spot in the test is the humans. From the days of the very first primitive chatbot, ELIZA (https://en.wikipedia.org/wiki/ELIZA), humans have been all too willing to imagine that chatbots understand them. The Turing test is actually far too weak, because humans are just too easy to fool. They want to believe the machine is human, so they do believe. I just wanted to mention that.


3 hours ago, wtf said:

There are only [math]10^{78}[/math] atoms in the observable universe. At some point there is no more memory to be had. Physical computers are definitely much more limited than Turing machines.

So, if we assume that the observable universe has a limited number of states, then AI is a limited TM. This answers the OP question, correct?


1 hour ago, Genady said:

So, if we assume that the observable universe has a limited number of states, then AI is a limited TM. This answers the OP question, correct?

Correction. I should've said, a bounded TM.


10 hours ago, Genady said:

So, if we assume that the observable universe has a limited number of states, then AI is a limited TM. This answers the OP question, correct?

A TM whose tape is bounded (linearly, in terms of its input) is called a linear bounded automaton (LBA). LBAs are strictly weaker than TMs, although it did take me a bit of searching to find a reference.

https://www.cs.princeton.edu/courses/archive/spr01/cs126/comments/17turing.html#:~:text=Note that Turing machines are,ability to recognize more languages.

That same highlighted paragraph says that everyday computers are as powerful as TMs. That can't possibly be, since everyday computers are limited by the finiteness of the observable universe. I don't understand their remark, and to the best of my understanding it's wrong. So apparently the OP's question remains unanswered. Theoretical CS is a bit of a rabbit hole.

 


3 minutes ago, wtf said:

A TM whose tape is bounded (linearly, in terms of its input) is called a linear bounded automaton (LBA). LBAs are strictly weaker than TMs, although it did take me a bit of searching to find a reference.

https://www.cs.princeton.edu/courses/archive/spr01/cs126/comments/17turing.html#:~:text=Note that Turing machines are,ability to recognize more languages.

We needn't assume the observable universe is finite; that's regarded as a proven fact: a finite number of atoms, a finite radius of the observable universe, a finite amount of stuff in it, a finite amount of energy in it.

Thanks. I knew about FSA and PDA, but not about LBA. Of course, they are weaker than TM.

If the expansion of the universe slowed, the observable universe would grow with time; in that case it's not clear that it would be bounded. However, with the accelerated expansion, the observable universe actually shrinks; in that case it is bounded for certain.

I understood the OP question as asking whether AI is stronger than a TM. It seems that we agree on the answer: no, it is not.

 

