Church-Turing thesis is outdated



Hello. Some speculate that a quantum mechanical system which somehow uses an infinite superposition of states could compute a noncomputable function. This is not possible using the standard qubit machine, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).

What is potentially non-deterministic is extracting the output of a computation in classical terms. The same final quantum state may be measured as different classical states with varying probabilities. However, if you choose your computation such that the final state is an eigenstate of whatever value you intend to measure, such that the probability of one particular classical output is 1 and all others are 0, then it is effectively deterministic. This is not always possible or practical to do, in which case you may need to run the quantum algorithm many times to extract the effective classical probability distribution, but effective quantum computation depends on structuring your algorithm to boost the amplitude of the intended output as much as possible, while getting all of the wrong answers to destructively interfere, such that you don't have to re-run your quantum computation an impractically large number of times.
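As a toy numerical sketch of that last point (plain Python, not a real quantum simulator; the two example state vectors are made up for illustration): the Born rule turns squared amplitude magnitudes into outcome probabilities, and a state that is an eigenstate of the measured value gives one outcome with probability 1, i.e. an effectively deterministic readout.

```python
import random

def measure(amplitudes, shots, rng):
    """Sample classical outcomes from a state vector via the Born rule."""
    probs = [abs(a) ** 2 for a in amplitudes]  # |amplitude|^2
    return rng.choices(range(len(probs)), weights=probs, k=shots)

rng = random.Random(0)

# Equal superposition of |0> and |1>: repeated measurement gives random outcomes.
superposition = [2 ** -0.5, 2 ** -0.5]
print(set(measure(superposition, 100, rng)))  # both outcomes occur (with overwhelming probability)

# Eigenstate |1>: probability 1 for outcome 1, so every shot agrees.
eigenstate = [0.0, 1.0]
print(set(measure(eigenstate, 100, rng)))     # {1}
```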

I guess there are nondeterministic models (as noted above), which are just that, models of a hypothetical device sometimes called a hypercomputer.  But in the RW, computations are deterministic. 


1 hour ago, Ghideon said:

I gave you examples. 

Again: according to what sources? This thread is interesting and it would be unfortunate if a good opportunity for discussion were missed due to different definitions being used.

There are many sources, here's a Wikipedia article on computability:

Quote

Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem.

and

Quote

The most widely studied models of computability are the Turing-computable and μ-recursive functions, and the lambda calculus, all of which have computationally equivalent power. 

Note that neither Turing machines, μ-recursive functions, nor the lambda calculus are non-deterministic, so they obviously cannot be used to model a non-deterministic system; and since they are used to model mathematics, it follows that mathematics must be deterministic.

Again, I'm not aware of any aspect of mathematics, computability theory or algorithm design that supports a concept of non-determinism.

Every computable function in mathematics yields the same result from the same input, all the time, every time.

 

 

1 hour ago, TheVat said:

Hello. Some speculate that a quantum mechanical system which somehow uses an infinite superposition of states could compute a noncomputable function. This is not possible using the standard qubit machine, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).

What is potentially non-deterministic is extracting the output of a computation in classical terms. The same final quantum state may be measured as different classical states with varying probabilities. However, if you choose your computation such that the final state is an eigenstate of whatever value you intend to measure, such that the probability of one particular classical output is 1 and all others are 0, then it is effectively deterministic. This is not always possible or practical to do, in which case you may need to run the quantum algorithm many times to extract the effective classical probability distribution, but effective quantum computation depends on structuring your algorithm to boost the amplitude of the intended output as much as possible, while getting all of the wrong answers to destructively interfere, such that you don't have to re-run your quantum computation an impractically large number of times.

I guess there are nondeterministic models (as noted above), which are just that, models of a hypothetical device sometimes called a hypercomputer.  But in the RW, computations are deterministic. 

Exactly: quantum mechanics is regarded as wholly deterministic whereas observations of these systems are not. The same is true for chaos theory, which is fully deterministic.

1 hour ago, Kartazion said:

As long as you consider an algorithm to be a computation. 

Nondeterministic algorithm - Wikipedia

Yes, I'm aware of this but such algorithms do not play a role - to my knowledge - in computability theory, and a Turing Machine does not support non-determinism.

So far as I can see all computations (as understood in mathematics) can be converted into algorithms.

That article points out that a race condition in a concurrent software system can behave or perform non-deterministically, but I don't regard that as arising from the deterministic rules used for the code. Instead, that behavior arises from the unpredictability of the code's scheduling in the presence of other factors.
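The scheduling point can be made concrete with a deterministic simulation: the sketch below fixes the program text and varies only the interleaving, which stands in for whatever order a real scheduler happens to choose (the two hard-coded schedules are illustrative, not real thread traces).

```python
def run_counter(schedule):
    """Two logical threads each do: read the counter, then write read+1.
    'schedule' lists which thread takes the next step."""
    counter = 0
    local = {1: None, 2: None}          # each thread's private copy after its read
    steps = {1: ["read", "write"], 2: ["read", "write"]}
    for tid in schedule:
        op = steps[tid].pop(0)
        if op == "read":
            local[tid] = counter
        else:
            counter = local[tid] + 1    # write back the stale local value + 1
    return counter

# Serial interleaving: thread 1 finishes before thread 2 starts.
print(run_counter([1, 1, 2, 2]))  # 2

# Interleaved: both read 0 before either writes, so one update is lost.
print(run_counter([1, 2, 1, 2]))  # 1
```

Same code, same inputs, different observable results: the nondeterminism lives entirely in the schedule, not in the program's own rules.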

Edited by Holmes

1 hour ago, Holmes said:

Again, I'm not aware of any aspect mathematics, computability theory or algorithm design that supports a concept of non-determinism.

At last, a statement respectful towards Mathematics (and mathematicians).

You are not aware of.......

It is not appropriate in this thread as it is specific to Turing, but rational thought (including Mathematics) can be divided into two main divisions, and here we are dealing with only one of them.
The second is so often forgotten in statements about rational thought in Science, Mathematics, Engineering and so on, especially by Scientists.

I have spent a good deal of my working life dealing with that second aspect, so naturally I often draw examples from there.



 

1 hour ago, Holmes said:

There are many sources, here's a Wikipedia article on computability:

That page also references Non-deterministic pushdown automata and Nondeterministic finite automaton. 

 

1 hour ago, Holmes said:

Note, neither Turing machines, μ-recursive functions nor Lambda calculus are non-deterministic, so they obviously cannot be used to model a non-deterministic system, since they are used to model mathematics it follows that mathematics must be deterministic.

How does it follow that mathematics must be deterministic? Can you clarify how that conclusion is valid as it seems to disagree with @studiot's post?  (scienceforums.net/1180863)

 

edit: x-post with @studiot

Edited by Ghideon
x-post

4 minutes ago, Ghideon said:

 

That page also references Non-deterministic pushdown automata and Nondeterministic finite automaton. 

It does, but I suspect it needs to be corrected or questioned; for example, it says this:

Quote

Another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it is possible to prove that any NFA is reducible to an equivalent DFA.

The emphasized text bothers me, because a deterministic system cannot behave non-deterministically, so this looks like an error: a contradictory statement.

An example is random number generation algorithms: these are actually called pseudo-random number generators, because we cannot get randomness from non-random operations.
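A minimal linear congruential generator makes the point (a sketch; the multiplier and increment are the widely quoted Numerical Recipes constants): the output looks random, but the same seed always reproduces exactly the same sequence.

```python
def lcg(seed):
    """Pseudo-random generator: a purely deterministic recurrence."""
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2 ** 32
        yield state

gen1, gen2 = lcg(42), lcg(42)
first = [next(gen1) for _ in range(5)]
second = [next(gen2) for _ in range(5)]
print(first == second)  # True: same seed, same "random" sequence
```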

Also, the article does not speak of a non-deterministic PDA, only of PDAs. I've designed and written PDAs; they underpin the parsing of the context-free grammars seen in programming languages. They are powerful in what they can parse, but they are wholly deterministic.

4 minutes ago, Ghideon said:

How does it follow that mathematics must be deterministic? Can you clarify how that conclusion is valid as it seems to disagree with @studiot's post?  (scienceforums.net/1180863)

edit: x-post with @studiot

Well, I suppose mathematics is a discipline that supports the formal concept of a proof; that is, propositions are either true or false (though we don't always have the proof). If a set of steps in a proof sometimes gave the result "true" but at other times gave the result "false", then the very term "proof" would become meaningless.


54 minutes ago, Holmes said:

Well, I suppose mathematics is a discipline that supports the formal concept of a proof; that is, propositions are either true or false (though we don't always have the proof). If a set of steps in a proof sometimes gave the result "true" but at other times gave the result "false", then the very term "proof" would become meaningless.

I suspect that you know full well what I am talking about.

It is the same part of rational thinking that I use to reply to Physicists who try to insist that Physics must be mathematical.

You are only addressing the area of analysis.

There is a whole area of synthesis you are ignoring.


1 hour ago, Holmes said:

It does, but I suspect it needs to be corrected or questioned,

Ok. Maybe you can provide another source, preferably a peer reviewed paper? 

 


10 minutes ago, Ghideon said:

Ok. Maybe you can provide another source, preferably a peer reviewed paper? 

 

If I said that I question the claim 1 + 3 = 76.9, you would not ask for a "peer reviewed paper"; so this is a case where something that's obvious to me is beyond your knowledge. It is your ignorance (I'm allowed to say this, apparently) that is the real issue for you.

 



Out of interest I just had a quick flick through my old textbook

Computability and Logic

Boolos and Jeffrey

Cambridge University Press.

 

I immediately note that most of the undecidable problems (the correct term is 'undecidable', not 'indeterminate') refer to synthesis, not analysis.

For example in Ramsey's Theorem there is a perfectly good analytical proof of the existence of certain numbers.

But when asked the synthetical question of finding a specific one for specific cases we struggle as there are only a few known sets for very small numbers.

 

Then we have the dyadic undecidability theorem

There is no effective method for deciding the validity of an arbitrary pure dyadic sentence.

The book abounds with counterexamples to your claim, most of which I had forgotten because this is not really my chosen area of mathematics.

Edited by studiot

21 minutes ago, Holmes said:

If I said that I question the claim 1 + 3 = 76.9, you would not ask for a "peer reviewed paper"; so this is a case where something that's obvious to me is beyond your knowledge. It is your ignorance (I'm allowed to say this, apparently) that is the real issue for you.

You were asked to proved an alternative source, preferably something peer reviewed. Your reply merely evades this request and shows you’re either unable or unwilling. Which is it? 


53 minutes ago, iNow said:

You were asked to proved an alternative source, preferably something peer reviewed. Your reply merely evades this request and shows you’re either unable or unwilling. Which is it? 

The phrase "you were asked to proved" betrays a poor grasp of English grammar.

I'm unwilling; as already stated, the request indicates ignorance on the part of Ghideon, and of course of you too for perpetuating it.

As I explained (not that I'd expect you to be able to follow along without pencil and paper) only through ignorance would someone ever demand a peer reviewed paper to prove that 1 + 3 = 76.9 is false, an error.

The Wikipedia article appears to be in error, that's my position.

 

Edited by Holmes

48 minutes ago, Holmes said:

If I said that I question the claim 1 + 3 = 76.9, you would not ask for a "peer reviewed paper"; so this is a case where something that's obvious to me is beyond your knowledge. It is your ignorance (I'm allowed to say this, apparently) that is the real issue for you.

This thread is about concepts I have not worked with a lot. I am open to learn new things about computer science, preferably things from the scientific mainstream. I am less interested in personal assumptions, incorrect conclusions or unbacked claims.


51 minutes ago, Ghideon said:

This thread is about concepts I have not worked with a lot. I am open to learn new things about computer science, preferably things from the scientific mainstream. I am less interested in personal assumptions, incorrect conclusions or unbacked claims.

Well, back to the facts at hand. I don't actually post in forums like this for petty bickering and name calling, as is the case with iDontKNow - sorry - iNow; that person delights in avoiding objective arguments because it's too much of a stretch for him.

So, Turing.

Note what that Wikipedia article on computability said:

Quote

However, it is possible to prove that any NFA is reducible to an equivalent DFA.

But the article cites no source, as you can see, and I'm not personally convinced this is true, at least in the sense that I'd expect such a claim to be true.

A state machine (FSM or FA) is a fascinating and powerful way to solve certain problems; state machines are easy to understand, but with a large set of states they can be hard to fully grasp.

A Turing machine is essentially a finite state machine equipped with a hypothetical infinite memory (a "tape"). In fact the prefix "finite" is redundant: all real state machines have a finite set of states, so "finite" is implied.

Now the article on Nondeterministic state machines says:

Quote

Using the subset construction algorithm, each NFA can be translated to an equivalent DFA;

Which raises the question - in my mind anyway - is the NFA then not really already, inherently, a DFA?

To do that we must look at the "subset construction algorithm", which I admit I know nothing about.

In that article we read:

Quote

It is important in theory because it establishes that NFAs, despite their additional flexibility, are unable to recognize any language that cannot be recognized by some DFA.

So they are saying - it seems - that an NFA cannot do anything that a DFA can't do. So why is it even there? What can an NFA do that a DFA cannot? That's the interesting question for me.

As I read these articles I am forming the impression that the term "nondeterministic" in their names is not what I think of as non-deterministic.

I read this too:

Quote

Again, an NFA consumes a string of input symbols, one by one. In each step, whenever two or more transitions are applicable, it "clones" itself into appropriately many copies, each one following a different transition. If no transition is applicable, the current copy is in a dead end, and it "dies". If, after consuming the complete input, any of the copies is in an accept state, the input is accepted, else, it is rejected.

Now, that description does not seem to incorporate an element of unpredictability or randomness; it is an algorithm. Creating the conceptual "clones", each of which is in one of these "alternate" states, is not indeterminate so far as I can see.
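Indeed, that "cloning" procedure can be carried out by a perfectly deterministic program: just track the set of states the clones currently occupy. A sketch (the two-state example NFA, accepting strings over {a, b} that end in 'b', is made up for illustration):

```python
def nfa_accepts(transitions, start, accepting, string):
    """Simulate an NFA deterministically by tracking the set of active 'clones'."""
    active = {start}
    for symbol in string:
        # Every clone follows every applicable transition; dead ends simply vanish.
        active = {t for s in active
                  for t in transitions.get((s, symbol), set())}
    return bool(active & accepting)

# NFA for "strings ending in b": from q0 stay in q0 on any symbol,
# and on 'b' also "guess" that the input is ending and move to q1.
trans = {
    ("q0", "a"): {"q0"},
    ("q0", "b"): {"q0", "q1"},
}
print(nfa_accepts(trans, "q0", {"q1"}, "aab"))  # True
print(nfa_accepts(trans, "q0", {"q1"}, "aba"))  # False
```

Nothing here is random: the same input string always produces the same set of active states and the same accept/reject answer.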

I think we have misleading language here. Non-determinism, to me, means there is no rule that tells us the output for a given input; in the definitions of an NFA it seems it is just the presence of alternative next states at a given transition that earns the name non-deterministic.

My position here is this: if some "thing" we call an NFA is logically always equivalent to some other DFA, then it really is a DFA, just expressed differently.

What these guys are calling an NFA seems to be just a way to implement a DFA that has a large number of states.
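For what it's worth, the subset construction mentioned in the article is short enough to sketch: the resulting DFA's states are sets of NFA states, which is exactly the sense in which every NFA has an equivalent (possibly exponentially larger) DFA. The example NFA, accepting strings over {a, b} that end in 'b', is made up for illustration:

```python
def subset_construction(transitions, alphabet, start, accepting):
    """Build a DFA equivalent to an NFA; DFA states are frozensets of NFA states."""
    dfa_start = frozenset({start})
    dfa_trans = {}
    todo, seen = [dfa_start], {dfa_start}
    while todo:
        state = todo.pop()
        for symbol in alphabet:
            # The DFA successor is the union of all NFA successors.
            nxt = frozenset(t for s in state
                            for t in transitions.get((s, symbol), set()))
            dfa_trans[(state, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_accepting = {s for s in seen if s & accepting}
    return dfa_start, dfa_trans, dfa_accepting

def dfa_accepts(dfa, string):
    start, trans, accepting = dfa
    state = start
    for symbol in string:
        state = trans[(state, symbol)]
    return state in accepting

# NFA for "strings ending in b", now determinized.
trans = {("q0", "a"): {"q0"}, ("q0", "b"): {"q0", "q1"}}
dfa = subset_construction(trans, "ab", "q0", {"q1"})
print(dfa_accepts(dfa, "aab"))  # True
print(dfa_accepts(dfa, "aba"))  # False
```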

Edited by Holmes

53 minutes ago, Holmes said:

The phrase "you were asked to proved" betrays a poor grasp of English grammar.

Perhaps, but in this case it was merely a typo. It should be obvious in context that I'd intended to type "provide," but I do appreciate you offering me an opportunity to clarify. 

56 minutes ago, Holmes said:

As I explained (not that I'd expect you to be able to follow along without pencil and paper)

Actually, will you please use crayon... you know, to be sure you're speaking to me at the appropriate level and to help me overcome my ignorance?

In fact, a picture would be best. Much obliged. 


7 minutes ago, Holmes said:

So they are saying - it seems - that an NFA cannot do anything that a DFA can't do. So why is it even there? What can an NFA do that a DFA cannot? That's the interesting question for me.

An NFA allows a machine to transition from having n active internal states to n + x active internal states, up to any number of internal states. An input tape is accepted if any of the currently active internal states of the non-deterministic automaton results in an accepting state. A person doesn't have a finite number of cells in the body; we can easily generate new cells to combat viruses etc.

A DFA cannot create new internal states. It has a fixed number of internal states. It has been shown that there will always exist a theoretical DFA that can be constructed from an NFA with a known number of internal states.

There is probably a scientific answer. One could argue that the body's ability to defend against infection qualifies as something non-deterministic that is probably not possible for a DFA. You could say that your body has learned to recognize a new tape called a virus. So we could assume that there are some things a DFA cannot do but that we can't easily prove, because it lies outside our understanding of science.


39 minutes ago, fiveworlds said:

An NFA allows a machine to transition from having n active internal states to n + x active internal states, up to any number of internal states. An input tape is accepted if any of the currently active internal states of the non-deterministic automaton results in an accepting state. A person doesn't have a finite number of cells in the body; we can easily generate new cells to combat viruses etc.

A DFA cannot create new internal states. It has a fixed number of internal states. It has been shown that there will always exist a theoretical DFA that can be constructed from an NFA with a known number of internal states.

There is probably a scientific answer. One could argue that the body's ability to defend against infection qualifies as something non-deterministic that is probably not possible for a DFA. You could say that your body has learned to recognize a new tape called a virus. So we could assume that there are some things a DFA cannot do but that we can't easily prove, because it lies outside our understanding of science.

Thank you for telling me some stuff I didn't know.  +1
This thread has some use after all.

Very interesting illustrative example in the cells of the body.
I would suggest a correction to the use of the word 'finite'.

The body has a finite though large number of cells. Even with the ability to add new ones it will never generate an infinite number.

Though finite, the number of cells is not specific and is continually changing, as cells slough off etc. and new ones are, as you say, generated.

So I would suggest 'not specific' or 'not specified' instead of 'not finite'.

Edited by studiot


 

Thanks for the reply. 

13 hours ago, Holmes said:

But the article cites no source, as you can see, and I'm not personally convinced this is true, at least in the sense that I'd expect such a claim to be true.

There are plenty of sources* for the proof of NFA and DFA equivalence. And again:

14 hours ago, Ghideon said:

I am less interested in personal assumptions, incorrect conclusions or unbacked claims.

 

13 hours ago, Holmes said:

As I read these articles I am forming the impression that the term "nondeterministic" in their names is not what I think of as non-deterministic.

I was curious about nondeterminism in the context of this thread: Carl Hewitt's papers and the mathematics used in the Actor model of computation. It is hard to apply your arguments about nondeterminism if you define nondeterminism differently from the authors this thread is about. I addressed this earlier:

On 7/6/2021 at 12:11 AM, Ghideon said:

Maybe you are using different definitions than used in the papers discussed in this thread.

In the theory of computation there is more than one mathematical model, for instance the Turing machine and the lambda calculus. The actor model, the topic of this thread, is a concurrent model. That needs to be taken into account when discussing determinism and nondeterminism.

 

15 hours ago, Holmes said:

this is a case where something that's obvious to me is beyond your knowledge. It is your ignorance (I'm allowed to say this, apparently) that is the real issue for you.

Yes you are allowed to say that. "Allowed" does not mean it is necessary or appropriate to say it. 

 

Example: https://courses.grainger.illinois.edu/cs373/fa2013/Lectures/lec05.pdf

Edited by Ghideon


An addition in case someone is interested in further discussion of the original topic: since the Actor Model has concurrency, a process calculus* such as the π-calculus** applies to the discussion.

From Wikipedia (bold by me)

Quote

In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.

My understanding (so far) is that to discuss Hewitt's models one needs to have the above in mind. I'll do some more reading and maybe post a follow-up on the claim in the title of the thread, "Church-Turing thesis is outdated".

*) https://en.wikipedia.org/wiki/Process_calculus
**) https://en.wikipedia.org/wiki/Π-calculus
Hewitt paper mentioning π-calculus https://arxiv.org/vc/arxiv/papers/1008/1008.1459v8.pdf
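To make the concurrency aspect tangible, here is a minimal mailbox-style sketch in plain Python threads and queues (loosely inspired by the actor idea, not Hewitt's formal model): the order in which messages arrive is not fixed by the program text, yet the aggregate result is still well-defined.

```python
import threading
import queue

def actor(mailbox, results):
    """A tiny 'actor': processes messages from its mailbox one at a time."""
    order, total = [], 0
    while True:
        msg = mailbox.get()
        if msg is None:              # sentinel: shut down
            break
        sender, amount = msg
        order.append(sender)
        total += amount
    results["order"] = order
    results["total"] = total

mailbox, results = queue.Queue(), {}
worker = threading.Thread(target=actor, args=(mailbox, results))
worker.start()

def sender(name):
    for _ in range(100):
        mailbox.put((name, 1))

senders = [threading.Thread(target=sender, args=(n,)) for n in ("A", "B")]
for t in senders:
    t.start()
for t in senders:
    t.join()
mailbox.put(None)                    # tell the actor to stop
worker.join()

print(results["total"])              # always 200
# results["order"] interleaves A and B messages; the interleaving varies run to run
```

The total is deterministic, while the message arrival order depends on scheduling: the kind of distinction the process-calculus literature is set up to reason about.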

Edited by Ghideon
correction
