
Computational Theory of Mind


bascule

Do you subscribe to the computational theory of mind?

10 members have voted

  1. Do you subscribe to the computational theory of mind?

    • Yes
    • No, I think consciousness is non-computable but still explained by natural (e.g. quantum) processes
    • No, I think consciousness is supernatural
    • Dur dee durr


Recommended Posts

I'm a proponent of the computational theory of mind, which proposes that the brain is an information processing system and can therefore be emulated on a Turing-machine type of computer.
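Since the post leans on the notion of a "Turing-machine type of computer," a minimal sketch may help readers who haven't seen one: a finite rule table driving a read/write head over an unbounded tape. This is a toy illustration of the concept only (the state names and the bit-flipping task are invented for the example); it says nothing about brains.

```python
# A minimal Turing machine: a finite rule table plus an unbounded tape.
# Toy example only: it flips every bit on the tape, then halts.
# Illustrates the "Turing-machine type of computer" referred to above;
# nothing here models a brain.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move)."""
    tape = dict(enumerate(tape))       # sparse tape; missing cells are blank
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Flip bits left to right until the blank marks end of input, then halt.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("0110", flip_rules))  # -> 1001
```

The point of the sketch is only that an extremely simple rule-following device like this is, given enough tape and time, computationally universal — which is the premise the computational theory of mind builds on.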

 

This philosophical approach to consciousness meshes well with modern neuroscience, which so far has not found any mechanisms in the brain which cannot be understood by classical mechanics.

 

If the computational theory of mind is correct, our thinking process is of course deterministic.

 

Who here subscribes to the computational theory of mind? If you don't, why not?

 

A counterexample to the computational theory of mind which still relies on natural processes would be Penrose/Hameroff's Orch-OR proposal, which hypothesizes that microtubules in neurons exhibit distinctly quantum mechanical behavior which is significant to consciousness. In Orch-OR, the brain functions as a hypercomputer.

Edited by bascule

Since the human mind created the computer, and the computer did not create the human mind, the human mind may have created the computer in its own image. A hammer is an extension of the arm and fist. Even science can lose track of cause and effect and place the effect as the cause. The question I would ask is how well we did at unconsciously simulating the mind with the computer, and whether the modern computer still falls short.

 

Clothes are like the hide of an animal. The hides of animals did not form in nature to look like designer clothes, although the way they fit is getting closer to the natural.


At this time, I don't believe that "quantum computation" is necessary to produce the human mind. Certainly quantum processes exist within the atoms of the brain, but nothing that specifically results in consciousness.

 

Didn't that computational/scanned model of the mouse brain work with ordinary computation? I don't think we're that special; we just need faster computers (and/or friendly AI).


  • 1 month later...
Stuart Hameroff covers a lot on the quantum computations of mind. I subscribe to it.

 

He reminds me of Ray Comfort describing why evolution is wrong. The interviewer is asking him really good questions though.

 

By the way, you can embed youtube videos with the

tag:

 

y4y8mTRqXAo


Well, a real question would be "Has nature given us the ability to do something undetermined, such as a random act within nature?" People are still trying to figure out how evolution occurred.

 

In general, I would have to say that many things are deterministic, and that perhaps nature has given organisms the ability to complete random tasks. In theory, I suspect a person could make something similar to a human, but in reality, I doubt it could occur. A computer has deterministic limitations, even when creating random processes. You could have a computer simulate that, but it would be a limited mockery of the human process. You could make something extremely similar, but it would never be the same unless the computer had DNA and the other little physical properties that come with molecules.

 

I don't really think science is going to attempt to get that far. I think it's a novel concept, but people would more than likely walk away from the ordeal. Maybe if you created an AI system and tasked it with making itself as human as possible, it would attempt to do so. Otherwise, I do not believe that humans in the long run would be interested in getting every single physical detail down, even if it could be understood within the realm of physics now and in the future.

 

I'm not saying human -> cyborg -> android can't occur. If a person still feels like him/herself in the end, good enough. What I'm trying to say is that I do not believe we would know all the physics that could ever exist to make a virtual human.

 

Curious how the man called the cytoskeleton the paramecium's nervous system. I'm not sure if I can agree with that.

Edited by Genecks

  • 3 weeks later...
Stuart Hameroff covers a lot on the quantum computations of mind. I subscribe to it.

 

I ran into this article today:

 

http://www.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb#ixzz0oDpV3szG

 

Not to poison the well, but it's written by a guy who's an obvious proponent of Penrose/Hameroff. I could go through and point out the numerous fallacies and falsehoods in this article, but I'll just focus on one:

 

Many computer scientists take it on faith that one day machines will become conscious

 

This is a very common argument surrounding this school of thought, and one I see commonly made by evolution deniers as well: they try to conflate science with faith, ascribing a religiosity to scientific positions.

 

However, this ignores the fact that we have seen no evidence to date of nonclassical, distinctly quantum mechanical behavior in the brain. It simply does not exist. If there's any "faith" to be had, it's proceeding with the assumption that the brain is a classical physical system.

 

Call that faith if you will, but proponents of Orch-OR have a different hypothesis, namely that the brain does operate with distinctly quantum mechanical behavior. They even claim to know where such behavior takes place, and have identified a way to falsify their claims. However, they have not done the necessary experiments to look for evidence of those claims: they've done everything it takes for their ideas to qualify as scientific except actually find supporting evidence. That doesn't stop them or their supporters from declaring proponents of the computational theory of mind "wrong." To me, that's true faith, and an arrogant one at that. The computational theory of mind falls perfectly in line with mainstream neuroscience; it's Orch-OR and the other quantum mind hypotheses that suggest an extraneous element not known or understood by mainstream neuroscience.

 

"These techno-utopians should pay closer attention to developments in neuroscience"... the obvious ad hominem aside, the author of this article says that, then advances Orch-OR, a hypothesis that is not well supported in mainstream neuroscience.


  • 1 month later...

"Many computer scientists take it on faith that one day machines will become conscious"

 

You know, they are exploring bacteria now as a means of storing information and doing computer processing. Not that it has a bearing on anything, but it is interesting that it's a living computer.

 

Living Computer

 

 

But I think, as a premise, we don't have to assume that consciousness cannot be realized by a computer system. Once we are able to understand the processes that lead to consciousness, it would be easier to make a computer that exhibits consciousness.

 

There could be some incredibly simple and elegant explanation of what it is exactly that defines why we are even aware of ourselves.

 

It is only when we assume what it is that we have a problem with it.

 

Your world view is mediocre compared to the school of thought Hameroff comes from. It took Hameroff a lifetime to even scratch the surface of these topics.

 

There is a lot more to reality than just 1+1=2


Here is my theory of the mind. Consciousness has a connection to entropy, since consciousness allows additional degrees of freedom compared to machines. A computer can only do what it is programmed to do. If we could design a computer that could step outside its programming, and do more than it was programmed for, it would have the additional degrees of freedom or the extra entropy for consciousness.

 

What is unique about neural memory, compared to computer memory, is that rather than being an on-off switch like computer memory, neural memory is a dynamic switch: the synapses fire constantly and generate energy.

 

To increase entropy, one needs to add energy. The firing of the memory generates the energy used for the entropy of consciousness. With the development of language, the amount of conscious memory increased, and so did the entropy of consciousness: choice and willpower.

 

If you look at brain waves, these reflect a system-wide firing of memory, so a wide range of memory sets up the background energy for the entropy of consciousness. If the brain waves get faster, this generates more energy per unit time and increases the entropy of consciousness.

 



I think quantum does indeed apply
Only in the same way that we would apply QM to the path of a baseball. Neurons are classical structures.

 

we might be able to program a computer to emulate consciousness, but it would be a simulation like any other computer simulation...

Our brains are deterministic physical systems, as are computers. If our neocortex (a pattern storing/recalling device) causes consciousness, why would a Turing machine be unable to? Is our consciousness, then, a simulation? What's the difference?


Only in the same way that we would apply QM to the path of a baseball. Neurons are classical structures.

 

Penrose would disagree...

 

http://ase.tufts.edu/cogstud/papers/penrose.htm

 

Our brains are deterministic physical systems as are computers. If our neocortex(a pattern storing/recalling device) causes consciousness, why would a turing machine be unable? Is our consciousness, then, a simulation? What's the difference?

 

What's the difference between virtual reality and reality?


By the way, you can embed youtube videos with the

tag:

 

y4y8mTRqXAo

 

The stupid, it hurts.

Wetness of water is an emergent property. Many people think consciousness emerges as a new, novel property at a high level of complexity in this hierarchical system that we call the brain. The problem with that, I think, is that none of these other emergent phenomena are conscious. They don't have conscious experience.

There's no way he's serious with this objection. This is so dumb. No one claims water should be conscious (well, except that crazy guy in 'What the Bleep Do We Know' who says that water not only has feelings, but can read Japanese).

 

Wetness and consciousness are two completely different emergent properties and have nothing at all to do with each other. The only way that they are similar is that they are convenient ways to describe large scale behaviour of an incredible number of small scale interactions.

 

He might as well claim that water can't be wet because it's not also conscious. If he did, he'd be justifiably laughed at. The same justification here. If all his work is this bad, this guy is a joke. Let's resume the video.

 

Moreover there's no real prediction of at what level of complexity consciousness might occur in the brain. And if that were the case, computers should be conscious already *dramatic decrease in volume* or should soon be.*volume restores*

Again, no. Not at all. While we would need better computers to run an HTM (hierarchical temporal memory) large enough to simulate the activity of an entire neocortex, that is rather irrelevant to this point, as the neocortex and Turing machines run on completely different principles. That's why, rather than modeling the brain structures themselves, it is more efficient to make the HTMs do what the neocortical structures do.

 

cCdbZqI1r7I

 

The CPU lacks the hierarchical nature of which consciousness is said to be an emergent property. If you are interested in how the neocortex actually works, read 'On Intelligence' by Jeff Hawkins; however, I will do my best to summarize the basic idea of the book here.

 

The brain works in a fundamentally different manner than computers do; however, we can draw an analogy between consciousness and computer programming.

 

Our minds are analogous to computer programming. We can look at computer programming from various levels. You can look at it from the low level of electrons moving about on wires; in the same manner, you can look at the brain as ion currents through neurons. At a higher level, you have logic gates in computers and neural hierarchies in the brain. Then you have higher-level programming languages like Python; the analog in the brain is an idea.

Our actions, our choices, are all based upon our beliefs, our values, preconceived notions, etc. It's algorithmic (albeit VERY complicated). All of these things come at the lowest level from deterministic physics, since the neurons involved are classical structures.

 

Our consciousness comes from a thin covering of the "old brain" called the neocortex. It works hierarchically (with many more feedback connections than feedforward ones) to produce a working model of the world. Instead of creating trillions of files to save what every object looks like under every condition (that would be utterly ridiculous, as the pattern on your retina is never the same), the cortical-thalamo-cortical loops use a time delay to form invariant auto-associative memories, which are used recursively in hierarchical feedback loops to provide a model of our world. This is how our senses are cleaned up; for example, these auto-associative memories fill in our blind spot. This model is what we experience. Most of our experiences are what we expect to experience rather than what we actually experience.
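The auto-associative recall described above — recovering a whole stored pattern from a partial or noisy cue — can be sketched with a classic Hopfield-style network. This is a generic textbook toy, not Hawkins' actual HTM code; the pattern and sizes are invented for illustration.

```python
# Toy Hopfield-style auto-associative memory: store +1/-1 patterns via
# Hebbian learning, then recover a stored pattern from a corrupted cue.
# A textbook sketch of "auto-association", not Hawkins' HTM implementation.

def train(patterns):
    n = len(patterns[0])
    # Hebbian outer-product learning with a zero diagonal.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    s = list(cue)
    for _ in range(steps):  # synchronous threshold updates
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, 1, 1, -1, -1, -1]
w = train([stored])
noisy = [1, -1, 1, -1, -1, -1]   # cue with one bit flipped
print(recall(w, noisy))          # -> [1, 1, 1, -1, -1, -1]
```

The network "fills in" the corrupted bit the same way the post describes the cortex filling in the blind spot: the stored pattern is an attractor, so nearby inputs fall back to it.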

 

We can illustrate this last point with a simple experiment you can do at home(it's easier to do around Halloween, though).

 

What you'll need:

1) A barrier of some sort (a cardboard box will do)

2) A fake arm (that's why I said it's easier at Halloween)

3) An assistant

4) A chair

5) A table

 

Now, sit down at the table and place your arms on the table. Block one arm from your view with the barrier and place the fake arm on the table next to your arm such that you can see it. Have your assistant sit opposite you at the table. The assistant will now touch both the fake arm and the hidden arm simultaneously in the same manner. If you poke the fake arm, poke the real arm in the same point at the same time. Poke them, stroke them, shake them, whatever. After a while of watching your fake arm while the assistant manipulates both the fake arm and the real arm, your neocortex will assimilate the fake arm into your model of self.

 

Now here comes the creepy part. Have the assistant (at a point in time unknown to you) stop manipulating the hidden arm and keep manipulating the fake arm. You will still feel it.

 

For that reason, plus the fact that that would take away any possibility of Free Will

 

Determinism is not at odds with free will. In fact, free will depends upon some level of determinism. It doesn't make much sense to speak of someone making a choice when the choice is based on the roll of the dice.

 

Let's just take some elementary thought about what free will is for a moment. Free will is the ability for 'you' to contemplate multiple options and choose one output. For any 'you' to be distinct from 'Bob' or 'Ashley' or 'him' or 'her', there must be some regularity; there must be a pattern in the choices (otherwise the phrase 'out of character' is rather meaningless, no?). In fact, that is exactly what we see in practice. If you spend enough time around someone, you can pretty well predict their choices given a set of circumstances.

 

How do we make choices? A basic overview of psychology (and just common sense) reveals that our choices are, quite unsurprisingly, based upon factors including our beliefs, values, and past experiences. These can be seen as some of the inputs into the decision-generating algorithm we call Free Will.

 

So, we can see that Free Will:

1) produces a predictable pattern of results

2) requires known inputs

3) functions in a classical rather than quantum computational device

 

That sounds pretty deterministic to me.
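The "decision-generating algorithm" picture can be caricatured in a few lines: an agent whose choices are a fixed function of its stored values will show exactly the predictable, in-character pattern described above. The agent, traits, and scoring rule here are all invented for illustration; this is not a model of real psychology.

```python
# Caricature of choice-as-algorithm: the output depends only on the
# agent's stored values and the situation, so it is deterministic and
# predictable. Purely illustrative; not a model of actual psychology.

def choose(agent, options):
    # Score each option by how well its traits match the agent's values.
    def score(option):
        return sum(agent["values"].get(trait, 0) for trait in option["traits"])
    return max(options, key=score)["name"]

bob = {"values": {"safe": 3, "cheap": 1, "fun": 0}}
options = [
    {"name": "skydiving", "traits": ["fun"]},
    {"name": "hiking",    "traits": ["safe", "cheap"]},
]

# Same agent, same circumstances -> same choice, every time.
print(choose(bob, options))  # -> hiking
assert all(choose(bob, options) == "hiking" for _ in range(100))
```

The point is the one made above: regularity of choice ("in character") is exactly what a deterministic decision function produces, so predictability and free will are not obviously in tension.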

 

If you're interested in Free Will and whether it conflicts with determinism, I suggest reading 'Freedom Evolves' by Daniel Dennett.

 

It also wouldn't give us this property of binding: how we bind everything together into one sense of self or unity of consciousness, and how we transition from the preconscious or subconscious to consciousness itself. These problems, I think, suggest that there's something more to consciousness than being an emergent property of computation. The brain is more than a simple classical computer.

The binding process is inherent in the hierarchical nature of the neocortex, as the multiple senses feed into each other in associative nodes. This guy doesn't even understand that against which he is attempting to argue; bascule was right in saying that he's similar to a YEC.

 

I've already said that the brain is NOT like a classical computer. He's equivocated here. Notice that in the beginning, he was talking about the idea that consciousness is an emergent property of a hierarchical system like the neocortex, but all his objections are about Turing machines, which aren't hierarchical.

 

Interviewer: "When you say 'computers should be conscious by now or soon', does that mean they're doing as many computations as our brain is doing right now, or will be soon?"

 

Well, they will be within the next 20 years or so, and people make these predictions that when the computer reaches a certain level of computation equivalent to the brain, it should be conscious. Of course they'll hedge and hedge and say "well, it's not organized the same" and they'll just keep pushing the boundary back.

It's not pushing anything back; it's saying what we've said from the get-go: the brain is not a computer.

 

But the first problem with that is that AI people (artificial intelligence people) who make these predictions assume that the brain works along the lines of the computer, in that the neurons in the brain and their connections, the synapses, are the fundamental units. So, for example, we have roughly 10 billion neurons with a thousand connections each, with thousands of switches to other neurons, which gives us about 10^15 operations per second with each neuron operating as a fundamental unit. The problem with that is that each neuron is much more complex than a simple switch. For example, consider a single cell like a paramecium (a single-celled organism) that swims around and finds food. It learns; if you suck it into a capillary tube, it escapes, and if you do it again, it gets out quicker and quicker each time, so it learns. It can find mates; it has a sex life. It can do many kinds of things, but it doesn't have any synapses whatsoever; it's just one cell.

 

Interviewer: And yet it's conscious

 

I'm not sure if it's conscious or not, but it's certainly intelligent and it does complex things without any synapses.

I'm not well versed in paramecia, but, based on the rest of the video, it is extremely likely that he is overstating the case (and there are probably very simple answers that have been known for a while). Nonetheless, a neuron is not a paramecium.

 

So, if a paramecium-one cell-can do all those things, why should we think that a neuron is a simple on-off switch or that a synapse is a simple on-off switch? The capacity of a neuron is much greater than that.

That's what we've observed it do. That's what the evidence says. The burden of proof is upon you to say it is something else.

 

If we go back to the paramecium, how does it do that? It uses its internal structure, its cytoskeleton: what seems like structural support, but is also the nervous system within each cell. It is comprised mainly of microtubules (hollow cylindrical polymers that are seemingly perfectly designed to be information processing devices at the molecular level, a scale below that of neurons). They are the nervous system within each cell. These proteins (microtubules are made of proteins) are much faster than neurons, and there are more of them: like 10 million of them within each cell, switching in a nanosecond. If we think of processing at that level, there's as much processing in one neuron as there is in the whole brain according to these AI guys' estimates.

Really? Neurons are cells with structure for internal functions that all life has? Who would have thought? This is a giant red herring.

 

The cells in my big toe have the same microtubules, but no one is suggesting that it is in any way conscious. We know fairly well, iirc, how neurons fire. Unless he can present any evidence that these microtubules are relevant, this whole part of his rant is moot. Let's continue.

 

So, if we think that the information processing in the brain goes down to that level, we increase the information capacity from 10^15 to 10^27, so that pushes the goal way farther for the AI people.
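The quoted back-of-envelope figures can be checked with the speaker's own numbers. Note that a per-synapse rate must be assumed to reach his 10^15 (a rate on the order of 100 Hz makes it work), and his microtubule numbers actually give about 10^26, an order of magnitude short of the 10^27 he states:

```python
# Reproducing the speaker's own back-of-envelope arithmetic.
# These are his figures (plus one assumed rate), not established numbers.

neurons = 1e10     # "roughly 10 billion neurons"
synapses = 1e3     # "a thousand connections each"
rate_hz = 1e2      # assumed ~100 operations/sec per synapse to reach 10^15

print(f"{neurons * synapses * rate_hz:.0e}")    # -> 1e+15 ops/sec

tubulins = 1e7     # "like 10 million of them within each cell"
switch_hz = 1e9    # "switching in a nanosecond"

# Comes out to ~1e26, an order of magnitude below the quoted 10^27.
print(f"{neurons * tubulins * switch_hz:.0e}")  # -> 1e+26 ops/sec
```

Either way, the exponent is beside the point the replies below make: raw operation counts say nothing about whether the operations produce experience.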

 

The problem is that even if that were the case, 10^27 operations per second, even if the microtubules are the fundamental computers of consciousness, that still doesn't tell us why we have experience, why we have an inner life, why we have emotions, feelings, what philosophers call qualia. That's just more reductionism, more computation, but it doesn't solve the problem, nor does it solve the other problems like binding, the transition from preconscious processes to consciousness, the problem of free will, and so forth.

For once, I agree. Your irrelevant little rant doesn't explain anything.

 

And actually, I worked on the idea that microtubules inside neurons and other cells are information processors (for almost 20 years), suggesting that to understand consciousness, to understand the brain, we need to go inside the neuron, down to the level where we consider all the information processing.

I'm sorry you wasted your time. We don't need to model the inside of the cells. We don't even need to model the cells themselves. We just need to replicate what the cells working together do. And, in fact, we've begun doing that, but our HTMs are nowhere near the size of the neocortex.

 

And yet people would say, "Okay, maybe you're right. So what? How does that solve the Hard Problem of Consciousness, as it's now known? How do you explain conscious experience from just further reductionism?" And I had to admit that they were right. Even if the capacity of the brain were squared, it still didn't tell us why we have consciousness, because the same arguments against emergence that I mentioned before still held. So, at about that point, in 1990, I read a book by Roger Penrose, the Oxford mathematical physicist, called The Emperor's New Mind, and it was kind of a challenge to Artificial Intelligence (AI being the computer industrial complex pushing the idea that larger and larger computers will attain consciousness). And Roger's idea was based on the idea that our minds, our conscious minds, do something that is beyond the realm of regular computation. He called it non-computable. Basically, the idea is that we know things other than through algorithms. It's through Gödel's Theorem, and it's mathematical and philosophical; to be honest, I didn't even really understand all those arguments.
The brain is not a Turing machine; we know this. We base the HTM theory on it. However, Turing machines can simulate anything computable, so we can still make HTMs. The neocortex doesn't compute answers, it remembers them, so we're developing software to do the same.

 

But, um, my gut feeling was that he was right. He argued that, to explain consciousness, to explain how we can have this non-computability, which is really another word for free will, or along the lines of free will, or going in the direction of free will. Because, if the brain is just a computer, everything is deterministic; we're just reacting to things in our environment.

 

Interviewer: Which means we should be completely predictable

 

Completely predictable. Correct maybe with some randomness, but certainly with no free will and we would be, as the philosopher Huxley said, merely helpless spectators. We would be epiphenomena just along for the ride. We wouldn't be in control of anything. We would just be epiphenomena. Just going along with our actions and observing basically without really having a say in what was going on. We might think that we did, but it was an illusion.

I've already shown that he doesn't have a clue about free will.

 

Not only that, he seems to think that his desire for free will is a reason to negate the hypothesis (even though free will necessitates determinism). The universe doesn't care what you want to be true.

 

Roger's idea was that the only thing in nature that could give us this noncomputable element was a quantum mechanism, specifically a quantum gravity mechanism. And this seems so tangential to the idea of what is going on in the brain that most people couldn't really buy it. It's a difficult concept, but to me, there's intuitively something to it, because what he said was, well, he likened the brain to a quantum computer. And that brings us into the world of quantum theory, which is a very difficult subject. In fact, Richard Feynman once said that anyone who claims to understand quantum theory is either lying or crazy, because it's so bizarre. For example, if you go down to the quantum realm, say down to the level of atoms, maybe larger, but let's just talk about atoms and subatomic particles, things are completely different than they are in our classical world, where things are firm and real and in one place. At the quantum level, things can be in multiple places at the same time. Particles can be smeared out and act like waves. Things can be interconnected over great distances. Time is smeared out. Everything is kind of different at that level.

So much for the coin toss.

 

Regardless, QM doesn't apply, because the calculations were done and the relevant brain structures are classical.

 

It's hard to see how anyone can take Hameroff seriously. Bascule was right; he argues quite a bit like YECs:

1) Straw men everywhere

2) A demonstrated complete lack of understanding of the relevant issues

3) Dismissal based on what is desired to be true

4) Attempts to tear down with no building up

 

Thanks, truedeity, for the amusing video.



What's the difference between virtual reality and reality?

 

That's a weird objection, as it has nothing to do with why the output of one machine would be 'real' and the other 'simulated' if the machines are made to do the same thing.

Edited by ydoaPs

 

That's a weird objection, as it has nothing to do with why the output of one machine would be 'real' and the other 'simulated' if the machines are made to do the same thing.

 

I'm a weird guy; IMHO there is always a real difference between reality and a simulation. If nothing else, a simulation can be repeated and reality cannot, but sadly I cannot take the argument any further...


I'm a weird guy; IMHO there is always a real difference between reality and a simulation. If nothing else, a simulation can be repeated and reality cannot, but sadly I cannot take the argument any further...

 

You've not said why the computer doing the same thing the brain does would be a simulation, but the brain doing it isn't.


You've not said why the computer doing the same thing the brain does would be a simulation, but the brain doing it isn't.

 

A computer simulation can be programmed to do the exact same thing over and over; the brain might try to do the same thing, but the pathways are always different, and the outcome is never exactly the same due to fluctuations that cannot be controlled in any real way.

 

A computer also has to be programmed, and the program will always run to the same conclusion along the same pathways; given the same programming, a human will never come to the exact same conclusion the exact same way.
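The repeatability point is easy to demonstrate on the computer side: even "random" behavior, when seeded identically, replays exactly. A minimal sketch using Python's standard library (the seed value is arbitrary):

```python
# A seeded pseudo-random run is exactly repeatable: same program,
# same seed, same "random" outcome every single time.

import random

def run(seed):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

a = run(42)
b = run(42)
print(a == b)  # -> True: the "simulation" replays bit-for-bit
```

Whether this repeatability is what separates a simulation from the real thing, as claimed above, is the open question; the code only shows that the computer half of the claim is true.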


  • 3 years later...

When you consider philosophers such as David Chalmers, it seems very unlikely that consciousness will ever be explainable using classical explanations.

 

 

When Chalmers speaks about qualia and the hard problem, it really hits home for me; I don't see anything classical that can account for consciousness.

 

In my view, Orch-OR is currently the best theory to explain consciousness; the theory is still holding up and remains the ONLY testable hypothesis. Computational and AI groups do not produce a testable hypothesis.

 


An example that I like to use to point out the problem with computational theories of consciousness is Deep Blue vs. Garry Kasparov. A computer can be programmed to play chess, and even to defeat the best human chess player, but the difference between the human player and the computer is that the human knows he's playing chess and the computer doesn't.

 

However, if the Orch-OR model is correct, one day computers could become conscious, but they won't be classical; it will have to be a truly quantum-based computer, and certain questions emerge, such as how many qubits it would require.


It should also not be bizarre that quantum processes take place in the brain. Plants use quantum processes during photosynthesis, and human smell is another proposed example, which may employ quantum tunneling.


But to look only at human brain neurons would be a mistake (almost as bad as thinking animals aren't conscious). There is a "quality" to all life that is ineffable; you can see it even in the smallest living organisms. The single-celled amoeba is able to swim, find food, learn, and multiply. I would argue that there is a primitive consciousness or proto-consciousness there, and I think the correlation with Orch-OR is that amoebas are part of the eukaryote domain, all eukaryotic cells have microtubules, and I find this to be an important correlation. In Orch-OR, microtubules play the key role in brain neurons; inside microtubules, tubulin subunits act as effective qubits, e.g. they are either open or shut, or in a superposition (open and shut).
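The "open, shut, or superposition" language can be made concrete with a one-qubit state-vector toy: a qubit is a pair of complex amplitudes, and a Hadamard gate turns a definite state into an equal superposition. This is a plain-Python sketch of standard qubit math for illustration; whether tubulin actually behaves as a qubit is Orch-OR's conjecture, not established fact.

```python
# One-qubit toy: a state is two amplitudes; a Hadamard gate turns a
# definite "shut" state into an equal superposition of shut and open.
# Illustrates the qubit language used above. Whether tubulin behaves
# this way is Orch-OR's conjecture, not established neuroscience.

import math

def hadamard(state):
    a, b = state            # amplitudes of |0> ("shut") and |1> ("open")
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    # Born rule: measurement probability is |amplitude| squared.
    return tuple(round(abs(amp) ** 2, 3) for amp in state)

shut = (1.0, 0.0)           # definitely |0>
superposed = hadamard(shut)
print(probabilities(shut))        # -> (1.0, 0.0)
print(probabilities(superposed))  # -> (0.5, 0.5)
```

After the gate, a measurement finds the qubit "shut" or "open" with equal probability, which is all the superposition claim amounts to at the level of a single two-state system.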

 

I'll leave it here for now, and redirect most of the difficult questions to Hameroff and Penrose, most of which have been answered if you Google it or watch the multitude of videos that are available on the subject.

 

On the other hand, what I found interesting coming from the classical perspective was Rodolfo Llinás' "I of the Vortex". There will have to be a marriage of the classical and quantum perspectives on consciousness in order for there to be a working model of consciousness.

 

 

Hameroff/Penrose plus Llinás is the best complete picture of consciousness; it's a good marriage, IMO.


The poll should also include "Both".

 

 

Indeed.

 

...Perhaps also "neither" and "it's far more complicated than that".

 

In very real ways, the human brain is a computer that processes information, just like an electronic computer. The biggest difference is that the human brain is hardwired to process information that is important to humans, just as a rabbit's brain processes information unique to rabbits. The language used by the brain is flexible and ever-changing, and it is the format in which the brain operates. The number of possible language systems is likely infinite (or must be thought of this way). The way an individual uses any language system is unique.

 

The sort of "math" to which the brain "aspires" is not the analytical math of computers (the math that allows their operation), but is geared to achieving best results by human standards and individual standards.

 

Much of what comprises an individual human, from the standpoint of beliefs and actions, is determined by accumulated knowledge and beliefs, which owe far more to experience, education, and superstition than to rapid and accurate thought.

 

Modern language obscures such things but is highly flexible and well suited to science and the communication of scientific results.

Link to comment
Share on other sites

Correct me if I'm wrong, and excuse any arrogance in saying this, but I think I may be the leading expert on this particular subject... at the very least, on these forums.

 

I can't tell you how excited I was when I saw the title of this thread in new content.

 

The reason I say that is that I know the computational aspect and have gone more in depth with it than anyone else I've witnessed.

 

If I do get into it from a programmatic perspective, please understand how hard it actually is to write something like that out on a cellphone.

I ran into this article today:

 

http://www.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb#ixzz0oDpV3szG

 

Not to poison the well, but it's written by a guy who's an obvious proponent of Penrose/Hameroff. I could go through and point out the numerous fallacies and falsehoods in the article, but I'll focus on just one:

 

Many computer scientists take it on faith that one day machines will become conscious

This is a very common argument in this school of thought, and one I also see made by evolution deniers: they try to equate faith with science and ascribe a religiosity to the opposing view.

 

However, this ignores the fact that we have seen no evidence to date of nonclassical, distinctly quantum mechanical behavior in the brain. It simply has not been found. If any "faith" is involved, it is in proceeding on the assumption that the brain is a classical physical system.

 

Call that faith if you will, but proponents of Orch-OR have a different hypothesis, namely that the brain does operate with distinctly quantum mechanical behavior. They even claim to know where such behavior takes place, and they have proposed ways their claims could be falsified. However, they have not done the experiments needed to find supporting evidence. They have done everything it takes for their ideas to count as scientific except actually find the evidence. That doesn't stop them or their supporters from declaring proponents of the computational theory of mind "wrong." To me, that is true faith, and an arrogant one at that. The computational theory of mind falls perfectly in line with mainstream neuroscience; it is Orch-OR and the other quantum mind hypotheses that posit an extra element not known to or understood by mainstream neuroscience.

 

"These techno-utopians should pay closer attention to developments in neuroscience"... the obvious ad hominem aside, the author says that and then advances Orch-OR, a hypothesis not well supported in mainstream neuroscience.

This post is completely moot! If you knew anything about a point of interest, you would know that it's completely quantum when it's received. A classical computer could very well be operating quantum mechanically at this very moment! The one you have right now, before your eyes! Explain how our computers can access ONE UNIT in a list (a list that can have "infinite paper") instantaneously. I've witnessed it. Nothing astounds me more than how efficient this particular part of the generative algorithm is.

 

Stuart Hameroff covers the quantum computations of mind at length. I subscribe to it.

I'd absolutely love to see the rest of this discussion. Citation, please.

 

The list that I call knowledge IS, NECESSARILY, quantum. I cannot be swayed from that position for any known or predictable reason.

Points of interest prompt units of knowledge instantaneously (upon recognition) and, computationally, can only be parametrized statistically, with no arbitrary component.

 

When you get into time, it's best suited, IMO, as a tuple, which means it does "smear"; but because of the "smear" it gets treated as classical. I believe that time is prompted and that you will have no recollection of non-occurrences.

Edited by Popcorn Sutton
Link to comment
Share on other sites

When you consider philosophers such as David Chalmers, it seems very unlikely that consciousness will ever be explainable in classical terms.

 

 

When Chalmers speaks about qualia and the hard problem, it really hits home for me; I don't see anything classical that can account for consciousness.

 

In my view Orch-OR is currently the best theory of consciousness; it is still holding up, and it remains the ONLY testable hypothesis. The computational and AI groups do not produce a testable hypothesis.

 

 

An example I like to use to point out the problem with computational theories of consciousness is Deep Blue vs. Garry Kasparov. A computer can be programmed to play chess, and even to defeat the best human chess player, but the difference between the human player and the computer is that the human knows he's playing chess and the computer doesn't.

 

However, if the Orch-OR model is correct, computers could one day become conscious, but they won't be classical; it will have to be a truly quantum computer, and certain questions emerge, such as how many qubits it would require.
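On the "how many qubits" question, one reason the number matters: the state space grows exponentially with qubit count. A back-of-the-envelope sketch (the byte figure assumes one 16-byte complex amplitude per basis state; none of these numbers come from Orch-OR itself):

```python
# n qubits require 2**n complex amplitudes to describe in general,
# which is why simulating them classically blows up exponentially.
def amplitudes_needed(n_qubits: int) -> int:
    return 2 ** n_qubits

def sim_memory_gib(n_qubits: int) -> float:
    # Assumes 16 bytes per amplitude (complex128); illustrative only.
    return amplitudes_needed(n_qubits) * 16 / 2**30

print(amplitudes_needed(10))  # 1024
print(sim_memory_gib(30))     # 16.0 GiB -- feasible on a workstation
print(sim_memory_gib(50))     # 16777216.0 GiB -- far beyond any classical machine
```

This is the usual argument for why a genuinely quantum machine, rather than a classical simulation of one, would be needed past a few dozen qubits.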

 

It should also not seem bizarre that quantum processes take place in the brain. Plants exploit quantum effects during photosynthesis, and human smell is another candidate example, hypothesized to involve quantum tunneling.

 


That last video you posted blew my mind twice at approximately the same part. I don't know why, but I thought I might've sensed ignorance or hatred.

Link to comment
Share on other sites

If the computational theory of mind is correct, our thinking process is of course deterministic.

But if that's true, we should be able to predict every action someone will take in any situation from their genes, since genes are responsible for how the brain is shaped and how it adjusts to different environments. And when you say that out loud, it just sounds ridiculous.

I will give you one thing, though: even though consciousness arises in different places, we seem to be able to recognize it, so there must be some pattern involved.

But another question is: where is the cut-off? A rock may not carry biological information like a brain, but there are still units of information in it, like photon spin states and vibrations being transferred through it. And if consciousness occurred in machines, as you were speculating, it would prove you can have consciousness without being biologically alive, which makes the cut-off even less clear.

In fact, can something be a "thing" outside of physical space-time? Can I say the correlation "1+1=2" exists? If so, can I say that consciousness is a network of patterns formed by correlations, rather than the chemicals themselves, which would account for it existing in abiotic objects and thus not being tangible? I can't touch "a+b=c"; all I can really do is put markings on paper and convince myself those markings represent the values denoted a, b, and c, but I will never touch what "c" actually is.

So I think there's more to consider when trying to boil consciousness down to math, because mathematics and physical reality don't always agree. What I think, and where science seems to be heading with things like putting someone's mind in a computer and teleporting people's consciousness, is that instead of saying consciousness is a collection of chemicals, we should say it is a pattern that can be formed by chemicals or any other arrangement, which lets consciousness have both tangible and intangible qualities. Matter and energy can follow a pattern, which is where we see a physical quality, but no amount of physical matter and energy in the universe will make the statement "6/3=2" untrue, meaning the statement doesn't physically react with physical processes.

Edited by SamBridge
Link to comment
Share on other sites

We probably can't predict behavior from people's genes. Stuart Hameroff makes a good point in his lecture, and I think he's right: "[Microtubules are the containers of knowledge]." He backs it up with the example of a single-celled organism performing complex tasks (such as having a sex life, learning, and navigating its environment). From my experience (computationally), to predict the output of any computational mechanism you need access to a few things:

1. In the simplest scenario, you need to know there is ABSOLUTELY no alternative input reaching the mechanism, because if there were, the system would be exactly twice as complex as you anticipated.

2. You need to know how the mechanism is organized.

3. You need to know the fundamental units associated with knowledge (commonly called "units," "units of knowledge," "bits of information," etc.) and the boundaries between those units.

Currently we don't have a single instrument that can measure even the simplest central system required for output (algorithmically) in any known organism, and that's before we even consider what the input may be. I expect we will have such a tool one day, but it will need to be extremely precise and able to distinguish noise from the thing actually being measured. A common complaint in neuroscience is that the brain is just too noisy and our systems can't get past that yet. It's getting better thanks to pattern recognition and Bayesian/statistical inference, but I think we are still years away from accurately and recognizably measuring anything remotely as complex as our analytical system, particularly language (which is BY FAR the most complex thing any of us achieves).
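As a toy illustration of the signal-versus-noise point (all numbers invented; this is plain averaging rather than a full Bayesian treatment): repeated noisy measurements of a hidden quantity can recover it, but only with enough samples and an idea of the noise.

```python
import random

random.seed(0)  # deterministic for the example

true_signal = 3.0   # the hidden quantity we want to recover
noise_sigma = 1.0   # Gaussian measurement noise

# 10,000 noisy observations of the same underlying signal.
measurements = [true_signal + random.gauss(0.0, noise_sigma)
                for _ in range(10_000)]

# Averaging shrinks the noise by roughly 1/sqrt(N).
estimate = sum(measurements) / len(measurements)
print(abs(estimate - true_signal) < 0.1)  # True: the noise averaged out
```

Real neural recordings are far harder than this, of course: the "noise" isn't independent or Gaussian, and the underlying signal isn't a single constant, which is exactly why the fancier statistical machinery gets brought in.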

Link to comment
Share on other sites
