Brain vs. Computer


ajaysinghgoshiyal

Recommended Posts

 

There is absolutely no evidence that the mind is a product of the brain. Killing someone or inflicting brain damage is not evidence since the mind is subjective and your objective observation of someone else's 'mind' does not equal evidence.

 

 

I disagree. The fact that drugs and/or brain damage can make you a completely different person is evidence. The shotgun test might not communicate the afterlife to those still living, but it is the only way "you" can know, and that only applies if you are correct. If there is no life after death, then no one can ever know...


What evidence is there that I have a mind?


It's really not. See, people can easily conceive of inconsistent systems. That means they can conceive of that which is logically impossible. All other modalities are built on logical possibility. Nothing which is logically impossible is metaphysically possible.

 

We do? What do you mean by "free will"? Dollars to donuts, your idea of free will is metaphysically confused at best and logically impossible at worst. How does it exclude nonorganic things from having free will in principle? After all, your OP claims that anything which is conceivable is possible. We can conceive of robots with free will. Aren't you, then, committed to allowing the possibility of robots with free will? Hint: the answer is 'yes'.

I am simply not saying that I am an automaton that possesses free will. I do not believe that a robot or machine can ever think, let alone make decisions for us (watch I, Robot, starring Will Smith). If robots, machines, or other such automatons actually had free will, then science fiction would have become a reality. This is not possible right now; it may be possible in the distant future. But I do not fall into the camp of materialist reductionists, so I cannot believe that 'yes, they do have free will'.

 

 

My views might sound heretical to some. I am open to debate over this.

Edited by ajaysinghgoshiyal

Human brains think via metaphor and analogy. Computers, so far, do not.

That is related to why computers are so bad at certain things humans find easy, such as routing a journey.

In principle, one would expect a sufficiently complex neural-net learning setup to eventually think for itself, much as we do.
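The phrase "neural-net learning setup" covers everything from toy models to brain-scale systems. At the toy end, the idea that a network learns a rule from examples rather than being programmed with it can be sketched in a few lines. This is purely illustrative; it is a single perceptron, nowhere near the complexity the post has in mind:

```python
# Toy perceptron: learns the AND function from examples alone,
# a minimal stand-in for the "neural net learning setup" idea.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias

for _ in range(20):  # a few passes over the data suffice here
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred   # perceptron learning rule:
        w[0] += err * x1      # nudge weights toward the target
        w[1] += err * x2
        b += err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in samples])  # learned AND: [0, 0, 0, 1]
```

The rule was never written into the program; it emerged from the examples, which is the (very weak) sense in which even this trivial network "learns".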

Have you seen I, Robot, starring Will Smith? Watch it and you will understand why science fiction is still just that... computers don't think.

I think it depends on how you define "think". When one chess master beats another, we tend to say the winner thought through the moves better. But when a computer wins, we tend to say the machine computes better. But really, what is the difference? In other words, can you precisely define "think" and "compute" in terms that are not anthropomorphic and not self-referential, in such a way that the set of things included as "thinking" does not intersect with the set of things included in "computing"? It is possible to say thinking is biological and computing is not, but I think that is not precise and borders on being anthropomorphic.
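The "computing" half of that contrast is easy to exhibit. A chess engine is, at its core, exhaustive look-ahead; the same idea fits in a few lines for a toy game (Nim: take 1 or 2 stones, and whoever takes the last stone wins). Whether this search counts as "thinking" is exactly the question at issue:

```python
# Minimax in miniature: toy Nim (take 1 or 2 stones; whoever
# takes the last stone wins). The "chess computer" idea reduced
# to exhaustive look-ahead.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win."""
    # A move is winning if it leaves the opponent in a losing position.
    return any(not can_win(stones - take)
               for take in (1, 2) if take <= stones)

def best_move(stones):
    """Return a winning move, or None if every move loses."""
    for take in (1, 2):
        if take <= stones and not can_win(stones - take):
            return take
    return None

print(can_win(3), best_move(4))  # False 1  (leave a multiple of 3)
```

The program "plays perfectly" without anything resembling understanding, which is one way of making the thinking/computing distinction feel intuitive, even if it remains hard to define precisely.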

 

Isn't thinking associated with understanding? In computing terms, all the understanding (and the logic) is done by the biological programmer, isn't it? The computer just computes based on rules that have been fed to it by a thinking mind.

There are programs that learn, for example Watson, but none exist that anyone considers to be a thinking program. However, work is progressing on several fronts to make Artificial General Intelligence that can pass the Turing test. One of these programs is a complete simulation of the brain using physiology from research currently being done. When (if) the Singularity occurs, a program will be able to write programs.
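"Programs that learn" need not be exotic. In the weakest sense, even a few lines that build a statistical model from the data they are shown qualify. A hypothetical sketch (a bigram predictor that picks up which word tends to follow which):

```python
# A minimal "program that learns": a bigram model that picks up
# word-succession statistics from the text it is shown.

from collections import Counter, defaultdict

model = defaultdict(Counter)

def learn(text):
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1  # count how often b follows a

def predict(word):
    """Most frequently observed successor of `word`."""
    return model[word].most_common(1)[0][0]

learn("the cat sat on the mat and the cat slept")
print(predict("the"))  # "cat" - learned from the data, not programmed in
```

Systems like Watson are incomparably more sophisticated, but the gap between this kind of statistical learning and what anyone would call a "thinking program" is the gap the post is pointing at.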

 

I just looked up the brain simulation you talked about. That sounds fascinating. Thanks.

 

Those who insist that computers could never think must explain: what is the extra special source that makes the brain something other than a neural-network machine? What is it that biology can do that engineering cannot?

 

If we are talking about consciousness, then since we understand so little about how brains produce consciousness I think it's hard to speculate whether non-biological brains could ever produce the same. Again it depends on what we mean by think. Also how would we ever 'know' that a robot is conscious? That could be a big problem for science.


You are welcome.

 

 


 

I suppose we could use the mirror test, but IMO it is not important whether we consider robots to be conscious or not. Everyone seems to understand what consciousness is, but no one can precisely define it. I think the mirror test only determines whether an organism recognizes a mirror, and that every organism has some measure of consciousness; although it depends on how one defines consciousness. For example, even an ant feels pain when it is injured, and that is a kind of consciousness: being conscious of pain. Being conscious of one's thoughts is similar, and I suspect all animals are conscious of their thoughts, although some animals have only primitive responses, with limited learning ability and no reasoning.


 

But, I am not a scientist, and scientists do consider consciousness important. You are probably right about the ambiguity of robot consciousness confounding scientists.


I think in terms of ethics it would be important to know whether a robot/simulation is conscious or not. If you are conscious then you can suffer.

 


 

I agree that pain is a kind of consciousness. I would say anything with an experience element is consciousness. However I don't think we can know for certain whether ants feel pain, or whether they are conscious at all. I would suspect that they do have some level of consciousness - but that is all we can ever do in the realm of consciousness: suspect; and the further away we get from our own species the harder it is to know (whether things are conscious at all, or what the nature of their consciousness is i.e. what it *feels* like).


 

I agree that suspicion is all we can ever have there.

On the ethics of knowing whether a robot is conscious: interesting issue.

 

Is simulated pain actually pain and does it qualify as suffering?

 

Is simulated happiness enough to motivate a robot to "want" to live?

 

Can we build circuits into robots that allow robots to turn off their simulated pain/happiness? Should we?

 

I have always assumed that a robot can replace its parts, including electronics, and survive (live) indefinitely. If a robot learns the Universe will eventually end, will it decide living is futile and decide to end it all?


I gathered from rereading the text that they were saying qualia cannot, as of now, be accounted for by a physical brain state on any machine. It suffices to say that the mind is possibly a distinct entity from the brain that produces it, and that the two can interact in both directions. Personally speaking, that sounds lunatic to me. How they can say these kinds of things will require more research on my part before I throw in the towel.

 

 

Forgetting qualia just for a minute: how does matter (the tangible brain) produce mind (the intangible)?

Actually, I think that is a common argument for where our consciousness comes from: that the brain produces the mind, the mind being our consciousness (meaning our ability to be aware of what is happening around us).

Relating to the OP, I don't believe robots will ever be 'conscious', as to be conscious would mean to be aware. Robots will never be aware of what is happening; they are merely programmed to fulfil tasks, in an automated, robotic fashion, with no emotion or consciousness of their actions.


Let's get this straight: although they are too timid to say the S word, those who argue that there is something humans can do that a machine cannot are presumably arguing that we have a mystical soul, correct? Otherwise we are just a neural-network machine, which a sufficiently powerful computer and software can emulate.


There is nothing in principle that prevents one from building - or more likely growing - a neural network capable of supporting the same patterns of activity that the human brain does. Then all it would need would be a long period of environmental stimulus and so forth.

 

But we are so far away from that, technologically, that the question of whether it is even possible still has a place. A more interesting question, since we are still in the speculation stage, is whether boredom and error and sleep are essential properties of such a network. Also, does some part of the sensory or operating stuff need to be analog, in practice? Of course any such patterns could be emulated on a Turing machine - but not in this universe: we only have so many billions of years left.

 

Computers right now think so differently from humans that emulating human mental patterns in a computer is a very inefficient, slow, bug-ridden policy.


It's really not that simple; you're making unfounded assertions with a statement like 'just a neural-network machine'. The mind, for one, doesn't automatically correlate to a physical entity just because you/we can't imagine/understand otherwise.


So when people say "impossible", I should read "very hard"? Thanks for clearing that up, because generally I reserve the word impossible for things like free-energy devices.

 

 


 

When you say "The mind for one doesn't automatically correlate to a physical entity just because you/we can't imagine/understand otherwise," it sounds to me like you're talking about a soul. I won't hold that against you; I am open to positive results from psi experiments, or evidence of souls or reincarnation, if good proof could be found. But if you are, it would make the debate much clearer if you admitted that was the direction you were coming from. An argument goes nowhere if you don't establish its precepts.


If you create a copyable neural net as an evolving software algorithm, you would only need to school the first few AIs; you could copy the rest you need from those first templates.
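The copying idea is trivially true for software: once one net is trained, its learned parameters are just data that can be duplicated into any number of fresh instances. A toy sketch with a hand-rolled perceptron class (hypothetical, for illustration only):

```python
# Train one tiny "net", then clone its learned weights into a copy:
# the "school the first AI, duplicate the rest" idea in miniature.

import copy

class Perceptron:
    def __init__(self):
        self.w, self.b = [0.0, 0.0], 0.0

    def predict(self, x):
        return 1 if self.w[0] * x[0] + self.w[1] * x[1] + self.b > 0 else 0

    def train(self, samples, epochs=20):
        for _ in range(epochs):
            for x, target in samples:
                err = target - self.predict(x)  # perceptron rule
                self.w[0] += err * x[0]
                self.w[1] += err * x[1]
                self.b += err

teacher = Perceptron()
teacher.train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])

pupil = copy.deepcopy(teacher)  # no schooling needed: copy the weights
print(pupil.predict((1, 1)), pupil.predict((0, 1)))  # 1 0
```

The pupil behaves identically without ever seeing a training example. Whether this would scale to a brain-sized network (or to one grown in hardware rather than software) is, of course, the open question in the thread.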

