
quantum consciousness


gib65


I'm not so sure you can assert that something simulated on a Turing machine is not equivalent to a Turing machine. It would seem to me that the Church-Turing Thesis would suggest that this is exactly the case.

 

Algorithms run on Turing machines. Not all algorithms are equivalent to Turing machines.

 

The brain is NOT equivalent to a Turing machine running an algorithm, but it is deterministic (probably) and therefore can be converted into an algorithm and run on a Turing machine itself.

 

The Church-Turing thesis, put simply, states that any algorithm (that satisfies certain requirements) can be run on a Turing-machine-equivalent device given sufficient time and storage. It also states that these computers have equal computational power.
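
As an aside, the simulation direction of that claim is easy to make concrete. Below is a minimal Turing-machine simulator - my own sketch, not anything from the thread - showing that any machine given as a transition table runs on an ordinary computer, given enough time and storage. The example machine and all the names are invented for illustration.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    transitions: (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})
    """
    cells = dict(enumerate(tape))   # sparse tape, indexed by integer position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells.get(i, blank)
                           for i in range(min(cells), max(cells) + 1))
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    raise RuntimeError("no halt within step budget")

# Example: a one-state machine that flips bits until it hits a blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))   # prints 1001_
```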

 

Undecidability, the halting problem, etc., is not really an issue for the brain in my opinion. It seems to assume that the main function of our nervous system is to determine the truth of particular statements or act as a theorem prover or something of the sort. It is strange, then, that most people are pretty poor at abstract logic problems.

 

The way that most people model nervous tissue at the moment is by using sets of dynamic equations (plus other bits and bobs), so we could compare this kind of simulation with weather forecasting programmes. I assume that they use dynamic equations. If we imagine a huge weather forecasting computer, where in particular would undecidability come in? I think it's a lot easier to ignore the fact that the brain performs cognitive tasks and think about it instead as a non-linear dynamic system.
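
To make that comparison concrete, here is a minimal sketch (my own illustration, not from the thread) of exactly this kind of model: a single neuron as a pair of coupled dynamic equations - the FitzHugh-Nagumo model - stepped forward with plain Euler integration, just as a weather code steps its own equations forward. The parameter values are conventional illustrative choices.

```python
def fitzhugh_nagumo(v, w, i_ext, a=0.7, b=0.8, tau=12.5):
    """One evaluation of the two FitzHugh-Nagumo derivatives."""
    dv = v - v**3 / 3 - w + i_ext   # fast membrane-potential variable
    dw = (v + a - b * w) / tau      # slow recovery variable
    return dv, dw

v, w, dt = -1.0, 1.0, 0.01
trace = []
for _ in range(50_000):
    dv, dw = fitzhugh_nagumo(v, w, i_ext=0.5)
    v, w = v + dt * dv, w + dt * dw   # forward-Euler step
    trace.append(v)

# 'trace' now holds a spiking voltage trajectory: deterministic, non-linear
# dynamics of exactly the sort a weather model integrates, with no notion
# of "deciding" or "proving" anything in sight.
```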



Undecidability, the halting problem, etc., is not really an issue for the brain in my opinion. It seems to assume that the main function of our nervous system is to determine the truth of particular statements or act as a theorem prover or something of the sort. It is strange, then, that most people are pretty poor at abstract logic problems.

 

Frankly I see that as a general argument against any purely deterministic model of thought.

 

It would seem what you are suggesting, then, is that a state of consciousness is somehow a network of chaotic attractors. What I have seen written on this topic (removing all of the New Age BS) seems to me to suffer from the same general criticism that has been leveled against the quantum models: both seem to be cases of 'minimization of mysteries'; that is, they are seen as explanations for thought more because they look intuitively linked to it than because there is any good evidence supporting the claims.


I just wanted to post this little tidbit from Daniel Dennett's Consciousness Explained that I absolutely loved:

 

"Why, Dan," ask the people in Artificial Intelligence, "d you waste your time conferring with those neuroscientists? They wave their hands about 'information processing' and worry about where it happens, and which neurotransmitters are involved, and all those boring facts, but they haven't a clue about the computational requirements of higher cognitive functions." "Why," ask the neuroscientists, "do you waste your time on the fantasies of Artificial Intelligence? They just invent whatever machinery they want, and say unpardonably ignorant things about the brain." The cognitive psychologists, meanwhile, are accused of concocting models with neither biological plausibility nor proven computational powers; the anthropologists wouldn't know a model if they saw one, and the philosophers, as we all know, just take in each other's laundry, warning about confusions they themselves have created, in an arena bereft of both data and empirically testable theories. With so many idiots working on the problem, no wonder consciousness is still a mystery.

 

He does go on to say that he was just joking and that the people working on developing a complete model of consciousness are unabashedly brilliant.


In regard to Gödel, I'd like to mention that an ontology (IS) can contain conflicting information...

 

I mean, surely everyone in here has learned something was wrong, forgotten it was wrong, said the thing that was wrong again, then remembered learning that it was wrong?


Well I am not yet convinced that the Limitive Theorems can be dismissed that lightly.

 

All of them suggest that once the ability to represent the structure of thought (by the process of thought) reaches a certain critical point, that is the kiss of death: it guarantees that it can never be fully described and will always be incomplete.

 

Because assuming that thought is consistent and the level of modeling is below some critical level, it is incomplete by hypothesis. Or it must reach a point where the Limitive Theorems (or their metaphorical analogues) kick in, and it is incomplete in some Gödelian way.

 

The more likely case is that thought is inconsistent and thus cannot be deterministic.


I just wanted to say that the flaw in Penrose's argument, at least as I understand it, is that the supposed paradoxical knowledge a mechanical thinker could learn is a description of that thinker's thought capabilities. But since knowledge of that description alters the thought capabilities (it's... a meme!), it is no longer applicable to the mechanical thinker after it has been learned, and thus there is no paradox.


originally posted by DV8 2XL

The more likely case is that thought is inconsistent and thus cannot be deterministic.

Some of my thoughts are consistent and deterministic and complete, and so I would have to reject your hypothesis in its general form, while allowing that I may be an exception that fails to prove the rule. What would you need to know so that you could allow someone to dismiss the Limitive Theorems lightly? Or even with a struggle?


What would you need to know so that you could allow someone to dismiss the Limitive Theorems lightly? Or even with a struggle?

 

I'm still confused as to exactly what he means by "Limitive Theorems".

 

According to Google:

 

Your search - "limitive theorems" - did not match any documents.

 

Did you mean: "limited theorem"

 

My only exposure to the use of Godel's Incompleteness Theorem in order to attempt to prove that consciousness must be inherently non-computational was Roger Penrose's "proof", which Dennett deconstructed in Darwin's Dangerous Idea (the one I touched on in my last post).


Here's another debunking of Roger Penrose which points out the same fundamental fallacy in his thinking:

 

http://www.1729.com/consciousness/godel.html

 

In his book "Shadows of the Mind", Roger Penrose details an argument based on Godel's Incompleteness theorem to prove that the mathematical capabilities of human mathematicians are non-computable.

 

Godel's Incompleteness Theorem

 

Here is a brief description and proof of Godel's Incompleteness Theorem.

 

  • Let X and Y be members of a data language D whose members can be used to represent executable programs that act on elements of D.
  • A program X applied to input data Y is written X(Y), and may or may not terminate in a finite number of steps.
  • Let pair be a function which maps D × D to some subset of D and is invertible. The function pair and its inverse should be computable. By an abuse of notation, abbreviate F(pair(X,Y)) as F(X,Y).
  • Let F be a function with the following property - if F(X,Y) terminates, then X(Y) does not terminate. Such an F is to be called consistent.
  • If the converse holds for such an F, i.e.
    for all X and Y, X(Y) does not terminate => F(X,Y) terminates,
    then we call F complete.
  • Godel's theorem tells us that there is no F which is both consistent and complete. Proof:
    • Let F be consistent, and define G such that G(X)=F(X,X).
    • Then G(G) = F(G,G), and if G(G) terminates then this implies G(G) does not terminate.
    • Which is a contradiction, so we conclude that G(G) does not terminate.
      Thus F is not complete. Q.E.D. Also F can be extended to a new F', which is in a sense more complete, with F' defined as F'(X,Y) = {if ((X=G) and (Y=G)) then terminate else F(X,Y)}.

  • The F' defined in the above proof is consistent, and it recognises a larger set of non-terminating programs, i.e. the set that F recognises, and G(G).
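
For readers who think better in code, here is a toy rendering of that diagonal construction (my own sketch; any real F would be an enormous program, and this stub only 'decides' one easy case):

```python
def loop_forever(y):
    while True:
        pass

def F(X, Y):
    """Hypothetical consistent prover: returns (i.e. terminates) only when it
    can prove that X(Y) does not terminate. This stub handles one easy case
    and otherwise searches forever, so it is consistent but very incomplete."""
    if X is loop_forever:
        return "proved: X(Y) does not terminate"
    while True:   # stands in for an endless, fruitless proof search
        pass

def G(X):
    # The diagonal program from the proof above: G(X) = F(X, X)
    return F(X, X)

# G(G) = F(G, G). If G(G) terminated, F would thereby have proved that G(G)
# does not terminate -- a contradiction. So G(G) runs forever, and F never
# proves it: F is consistent but incomplete. (Don't actually call G(G).)
```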

 

F can be interpreted as a theorem prover

 

We can regard F as representing a set of logical axioms and rules of proof in a mathematical theory that enables us to prove theorems about the non-termination of computer programs. Thus an execution of F(X,Y) is a "proof" of the theorem that X(Y) does not terminate. The system F is incomplete, in that it does not prove all statements about non-termination which are true, and F can be explicitly extended to prove a larger set of true statements. Of course the extended F is still incomplete (for exactly the same reason) and can itself be extended, and so on.
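
That extension step is mechanical enough to write down. A sketch, continuing the toy representation from the previous snippet (functions standing in for programs):

```python
def extend(F, G):
    """Given a consistent prover F and its diagonal program G, return the F'
    from the proof above, which additionally 'proves' G(G) non-terminating."""
    def F_prime(X, Y):
        if X is G and Y is G:
            return "proved: G(G) does not terminate"
        return F(X, Y)
    return F_prime

# F' has its own diagonal program G'(X) = F'(X, X), so the process repeats:
# F, extend(F, G), extend(F', G'), ... -- every stage strictly more complete
# than the last, no stage ever complete.
```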

 

Interpreting F as a robot mathematician

 

We can regard F as an algorithm that specifies the operation of a robotic mathematician tasked with proving theorems about non-terminating programs. Godel's Incompleteness theorem tells us that merely by inspecting the design of such a robot, we can "know" something that the robot does not know, i.e. that G(G) as defined above is non-terminating.

Interpreting F as a description of the capabilities of a human mathematician

 

Let us make the following assumptions -

  1. Human beings operate according to the laws of physics.
  2. The laws of physics are computable, i.e. the behaviour of all physical systems can be predicted using algorithmic calculations.
  3. From the behaviour of a human mathematician we can extract reliably correct theorems about non-terminating programs (e.g. when the mathematician writes a theorem down, submits it to a respected journal, and states when asked that they are "really, really sure" that the proof of their theorem is correct, then we assume the theorem is reliably correct).

 

From these assumptions we come to the conclusion that there is some algorithm F which is equivalent to the ability of a human mathematician to state correct theorems about non-terminating programs.

 

But we can derive G from F, as above, and know that G(G) is a non-terminating program.

 

But the reader is a human, so we have just proved that a human can know something that a human cannot know.

 

This is a contradiction, so one of the initial assumptions is wrong.

 

Roger Penrose wants to come to the conclusion that assumption 2 is incorrect. Some of his opponents challenge assumption 3, i.e. the infallibility of human mathematicians.

 

I am willing to concede that assumption 3 is correct, for the sake of argument, with the caveat that it may cause difficulties in practice if an attempt is made to carry out procedures based on that assumption. I want to concede the correctness of assumption 3, because I believe that the real fallacy in Penrose's argument lies elsewhere.

 

The Fallacy: Extra Hints

 

To make the contradiction obvious, let the human mathematician who understands that G(G) is non-terminating be the same human mathematician for whom F determines their mathematical ability. If the mathematician was a robot, telling them that G(G) is non-terminating would cause a genuine increase in their mathematical ability. But Roger Penrose claims that the mathematician already knows G(G) is non-terminating, because they understand the Godelian argument.

 

I will show that this is not the case. We must return to basic principles. The task assigned to the function F is the following -

  • Given program X and data Y, determine if X(Y) does not terminate.

 

The human mathematician is given X=G and Y=G, with G defined such that G(X)=F(X,X), where F is the function that describes the mathematician's mathematical ability. The mathematician does not know that F is the function that describes their mathematical ability, unless we tell them so. If they did recognise F, then their understanding of the Godelian argument would allow them to determine that the G(G) derived from F is non-terminating.

 

If we tell the mathematician that F is the program determining their mathematical ability, then we are giving them extra information, and that is what enables them to state that G(G) is non-terminating, apparently going beyond the capability determined by F.

 

We can just as easily program a robot mathematician to accept claims made by trustworthy parties about things that the robot does not already know, for example that a function F is the function that determines that robot's mathematical ability. But the moment that the robot accepts that information, F goes out of date as a description of that robot's mathematical ability.
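
A minimal sketch of that last point (hypothetical names, nothing from the article): the robot's total ability is a moving target, and ingesting a description of itself moves it.

```python
class RobotMathematician:
    """Toy robot whose proving ability = built-in prover + accepted claims."""

    def __init__(self, prover):
        self.prover = prover          # the built-in ability that some F describes
        self.trusted_facts = []       # knowledge accepted from trusted parties

    def accept_trusted_claim(self, claim):
        # The moment this runs, any F formulated beforehand no longer
        # describes the robot: its ability now includes this claim too.
        self.trusted_facts.append(claim)

robot = RobotMathematician(prover=None)
robot.accept_trusted_claim("F determines your mathematical ability")
robot.accept_trusted_claim("therefore G(G) does not terminate")
# The original F is now out of date, exactly as it would be for a human.
```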

 

Is this a practical procedure for increasing human mathematical ability?

 

If we ignore the difficulties of describing a mathematician's brain precisely enough to be able to formulate F, and we also ignore the question of infallibility of human mathematical ability, then the answer is "yes". There are two things that can happen to a mathematician's understanding of mathematics after they learn of an F that describes their mathematical ability:

  1. F is simple enough that the mathematician can commit it to memory without sacrificing any of their existing mathematical ability. In which case the mathematician's ability to prove theorems entirely on their own has definitely increased.
  2. F is so large that the mathematician cannot commit it to memory. Their mathematical ability is only increased if they are allowed to retain a copy of F on some external medium and refer to it as necessary.

 

The fact that the F that bounds a mathematician's mathematical ability can be increased just by giving them a (very large) bit of paper with something written on it alerts us to the necessity of being very specific about what resources are available to a mathematician when we attempt to determine an algorithmic bound on the mathematical ability of that mathematician.

 

What about the mathematical ability of all human mathematicians combined?

 

It's just the same argument, but with a different starting point.

 

What if we gave the mathematician an apparatus for analysing the construction of their own brain, so that they could derive F without any outside help?

 

Yet another starting point. The fallacy is obvious if we apply this improvement to a robot mathematician. The derived F will take account of everything except the existence of the analysing apparatus.


No, no alas, no.

 

This is a grave misinterpretation of the Incompleteness Theorem.

 

Gödel's first incompleteness theorem basically says that:

 

For any consistent formal logical system within which basic arithmetic can be done, it is possible to construct an arithmetical statement that is true but cannot be proved within the system. That is, any consistent logical system of a certain expressive strength is incomplete.

 

Note the qualification about expressive strength, because this is the critical point: no sufficiently complex system escapes, and that is where this bites the hardest.

 

Gödel's second incompleteness theorem goes on to say:

 

For any consistent formal logical system within which basic arithmetic can be done, and which also has a method of formal provability, a statement of its own consistency is provable if and only if the system is inconsistent.

 

Gödel's second incompleteness theorem implies that a system satisfying the technical conditions outlined above can't prove the consistency of any other system which proves the consistency of the first system. This is because the first system can prove that if the second system proves the consistency of the first system, then the first system is in fact consistent.
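
To spell out that step in symbols (my notation, not the poster's): write Con(S) for the arithmetized consistency statement of a system S, and Prov_S for its provability predicate. If the second system S2 proves Con(S1), then S1 can verify that proof, and under the same technical conditions

```latex
S_1 \vdash \mathrm{Prov}_{S_2}\!\big(\ulcorner \mathrm{Con}(S_1) \urcorner\big)
\;\Longrightarrow\;
S_1 \vdash \mathrm{Con}(S_2) \rightarrow \mathrm{Con}(S_1).
```

So if S1 also proved Con(S2), it would prove Con(S1), and by the second theorem that forces S1 to be inconsistent.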

 

Again, note the qualification in the statement of the theorem.

 

So the only way that a system of logic - and that includes any machine built to do logic - can 'know everything about itself', as it were, is if it is so simple that it can't do enough logic to do arithmetic. And any system that can analyse a system complex enough to do arithmetic has to be at least as powerful itself, and so can't be analysed by the first. Worse, no matter how many meta-systems are constructed to analyse the ones before them, one that can be analysed by the first can never be constructed to close the loop.

 

And you just can't beat it.

 

Now if thought is a pure (deterministic) formal system, such that it can be modeled on a Turing machine, it is subject to these limitations. Thus, by Gödel's second theorem, a human mind cannot formally prove its own consistency. The only way out is to assume that some other (non-deterministic) algorithms are at work, because then this doesn't apply.

 

Or, as Hilary Putnam puts it:

 

"Let T be a Turing machine which "represents" me in the sense that T can prove just the mathematical statements I prove. Then using Gödel's technique I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T "represents" me, hence I am not a Turing machine."

 

 

Now, that doesn't make Penrose and Hameroff's conclusion that quantum phenomena are responsible right, nor do I think the Orch-OR theory is right, but I am hard pressed to see how any deterministic theory can be made to work.

 

P.S.

 

Limitive Theorems is a catch-phrase that D. R. Hofstadter used to describe Gödel's Incompleteness Theorem, Church's Undecidability Theorem, Turing's Halting Theorem, and Tarski's Truth Theorem.


Why do we have to prove the logical consistency of consciousness in order to implement it as a computer program?

 

This is not saying that we have to prove the logical consistency of consciousness; that's not relevant. By Gödel's second theorem, a human mind cannot formally prove its own consistency...

What computer program is logically consistent?

 

...because all arguments about the consciousness implications of Gödel's theorems are really arguments about whether the Church–Turing thesis is true.

 

This is a non-issue...

 

No it's not a non-issue, I'm afraid; it is about as big an issue as there is in this matter.


I must say that I am sorry that I didn't get involved in this thread earlier. But I would like to make a contribution about consciousness and about being a dualist or not...

I don't like to think of myself as a dualist, but I totally agree with Gib65 when he says that qualia, or subjective experience, are what makes consciousness a "hard" problem. This is the core of consciousness for me. And it should be clear that while the functioning of one neuron, or 1000 neurons, or 50,000 neurons firing in sync, may be the physical correlate of consciousness, it has some very different properties than the subjective qualia, and I have never seen any explanation of how these physical properties cause the properties of the subjective qualia. So, I must admit that I am a property dualist, but that doesn't mean that I am a metaphysicist.


Well, I haven't been arguing a dualist position, only that more is going on than can be modeled on a Turing machine. Whether a physical correlate of consciousness can or cannot model the subjective qualia is a somewhat different question, and here bascule and I agree that it can. Our debate is on mechanisms.


3 weeks later...

Victor J. Stenger, writing in The Humanist, May/June 1992, harshly dismissed the Quantum Mind, saying:

 

"The overwhelming weight of evidence, from seven decades of experimentation, shows not a hint of a violation of reductionist, local, discrete, non-superluminal, non-holistic relativity and quantum mechanics - with no fundamental involvement of human consciousness other than in our own subjective perception of whatever reality is out there. Of course our thinking processes have a strong influence on what we perceive. But to say that what we perceive therefore determines, or even controls, what is out there is without rational foundation. The world would be a far different place for all of us if it was just all in our heads - if we really could make our own reality as the New Agers believe. The fact that the world rarely is what we want it to be is the best evidence that we have little to say about it. The myth of quantum consciousness should take its place along with gods, unicorns, and dragons as yet another product of the fantasies of people unwilling to accept what science, reason, and their own eyes tell them about the world."[/quote']

 

Victor J. Stenger obviously hasn't noticed that we DO 'make our own reality.' If I choose to do something, reality becomes one where I have chosen to do that something. If Hitler had not decided to enter politics, WW2 wouldn't have happened. Obviously we can alter reality by our actions - and IMO that is done by altering reality in our brains, i.e., collapsing wave functions by using our consciousness. The consciousness of one individual can only affect very small things with a high degree of randomness. I cannot make (by effort of willpower) a car change into a banana, but I can probably alter the position or direction or state of an electron - especially one in my brain. This is how we are able to control our thoughts. This is the point. This is the reason. This is why the double-slit experiment is such a puzzle. Of course brains function on a quantum level. It's useful - why d'you think people are desperately trying to build a quantum computer! Evolution got there years ago!

 

The alternative is that everything is predetermined, and consciousness and free will are just a biochemical illusion. Every single thought or action in the universe is pre-determined and we are lucky enough not to realize.

 

Either one could be true, but I know which one I'd rather believe. Ultimately it's all down to faith, you know...


It's as big an issue as there is in the entire universe, but it's totally irrelevant to the computability of consciousness...

 

Yes, it is as big an issue as there is in the entire universe - and that's the point!


...that is done by altering reality in our brains, i.e., collapsing wave functions by using our consciousness.

 

At this time no proof of QC exists; unlike some, however, I am not willing to dismiss it outright. I do recognise that there may be other mechanisms at work, such as thermal randomness, to provide the necessary non-deterministic input to the system.

 

The alternative is that everything is predetermined, and consciousness and free will are just a biochemical illusion. Every single thought or action in the universe is pre-determined and we are lucky enough not to realize.

 

Well that's the crux of the discussion we were having.

 

Either one could be true, but I know which one I'd rather believe. Ultimately it's all down to faith, you know...

 

No, it will 'boil down' to proof via some yet-to-be-determined quantifiable, reproducible experiment testing a falsifiable hypothesis.


At this time no proof of QC exists; unlike some, however, I am not willing to dismiss it outright. I do recognise that there may be other mechanisms at work, such as thermal randomness, to provide the necessary non-deterministic input to the system.

 

Thermal randomness does not explain why human willpower alone can affect the production of random numbers by a computer. There have been numerous experiments which show this, but the results have largely been ignored by the scientific community as they can't be explained by traditional scientific explanations. QC would explain this.

 

Well that's the crux of the discussion we were having.

 

Agreed, and a big crux it is too! The biggest, I suppose.

 

No, it will 'boil down' to proof via some yet-to-be-determined quantifiable, reproducible experiment testing a falsifiable hypothesis.

 

What if it cannot be proven because the act of observation affects the results?


Thermal randomness does not explain why human willpower alone can affect the production of random numbers by a computer. There have been numerous experiments which show this, but the results have largely been ignored by the scientific community as they can't be explained by traditional scientific explanations. QC would explain this.

 

No, they are ignored by the scientific community because they are not reproducible. And even if they were, these tests have not been framed as tests of QC, so they cannot be said to explain anything about this theory.

 

What if it cannot be proven because the act of observation affects the results?

 

Well, if the effect cannot be detected, or denied, by observation, then the issue is moot, since no useful predictions can be made. Either way the theory dies.


No, they are ignored by the scientific community because they are not reproducible. And even if they were, these tests have not been framed as tests of QC, so they cannot be said to explain anything about this theory.

Well, if the effect cannot be detected, or denied, by observation, then the issue is moot, since no useful predictions can be made. Either way the theory dies.

 

Actually, they are reproducible - and HAVE been reproduced, even 'on demand' - but they are still ignored.

 

Superstring theory is unprovable, but it is still one of the leading hypotheses around at the moment.

 

Anyway, all I will say is that we will await the results of current QC experiments - I'm sure if they prove something we will hear about it!

