Eise

Senior Members
  • Content Count
    1036
  • Joined
  • Last visited
  • Days Won
    7

Eise last won the day on March 25

Eise had the most liked content!

Community Reputation

328 Beacon of Hope

About Eise

  • Rank
    Organism

Profile Information

  • Location
    the old world
  • Favorite Area of Science
    Physics, Astronomy
  • Biography
    University degree in philosophy, subsidiary subject physics
  • Occupation
    Database administrator, a bit of Linux too

Recent Profile Visitors

6441 profile views
  1. Yep. And therefore one should neither ascribe features belonging to persons to the level that lies underneath, nor search that level for such features and then, on not finding them there, conclude that persons do not have these features. This is what is called a category error in philosophy.
  2. Ah... Do neurons decide or choose? 'Decision' and 'choice' are higher-level descriptions of what the brain as a whole does. Firing neurons only causally affect other neurons. You do not find decisions or choices in the brain. Persons make decisions and choices. To call the tipping of the scale a decision is, well, a bit of a stretch. But it is, mutatis mutandis, just as big a stretch when you apply it to neurons. Just to follow up: I'm curious to better understand your thinking on this part. I do not agree with calling these examples of coercion. According to Wiktionary: So it is, again, a bit of a stretch to apply 'coercion' to lower levels of description. 'Coercion' simply does not apply at the level of the examples you mention: it applies to persons only. And persons, a bit simplified, are the complete functioning brain. Pity, I do not have much time now. But maybe you will like this. This is the so-called 'four-case argument' of Derk Pereboom, also known as 'the cases of Professor Plum'. For more, read the accompanying text on the website from which I took the illustrations. Would that agree with your position?
  3. Eise

    Split from AI sentience

    Are you sure? I agree that we agree that not every trait organisms have is (or was) evolutionarily advantageous. BUT: this is what I said: Do you then also agree that consciousness is evolutionarily advantageous? If so, how does this work if consciousness is just a byproduct? Or why shouldn't we call it an 'epiphenomenon'?
  4. Input to what? Note that you also mention 'neural structure'. To what is the neural structure input?
  5. Eise

    Split from AI sentience

    You should know me better than to think that I would defend the claim that all organisms are conscious. And I think it should be clear from my follow-up sentences: So consciousness is an evolutionary advantage, just like the trunk of the elephant, eukaryotes, the continual regrowth of sharks' teeth, etc. In short, I would say that consciousness is the capability of an animal to anticipate possible futures depending on what it will do.
  6. This is not about free will. It is about the predictability of our choices. But predictability and free will have next to nothing to do with each other. To give a simple example: my wife knows I like whiskey more than brandy. So if there is a party tomorrow, and there is a choice between whiskey and brandy, she will already know a day in advance what I will choose. I am very predictable in this respect. But it is still my free choice to drink whiskey. Under the threat of somebody killing my wife if I do not drink the brandy, however, I will drink the brandy. But then I do not act according to my own wish (whiskey!), but according to that person's wish that I drink brandy. And that makes the action coerced, so not from my will. I think the general critique of such 'Libet-like' experiments is still valid, even if 'prediction 11 seconds before' sounds impressive.
  7. Eise

    Split from AI sentience

    Yep. But as you said some postings above, the kind of 'free will' you are referring to is one that does not fit a causally closed universe: This is a nice rhetorical trick, especially this 'expansive'. Is it also 'expansive' that e.g. the colour of objects applies only at the molecular (or even higher) level, but not to electrons? Yet these electrons are still responsible for the colour of the objects of which they are part: it is the relationship with their environment (nucleus, other electrons...) that gives objects their colours. If you have a naturalistic world view, e.g. you think that the universe is causally closed, then none of the discoveries of neuroscience is a surprise in this respect. Science, which of course includes neuroscience, has as one of its big assumptions that more or less everything is determined, i.e. develops according to laws of nature. So not finding a soul, or a non-causally-determined subsystem in the brain, is a no-brainer. (Of course, I am convinced that the non-determinism of QM has nothing to do with free will.) Neuroscience discovers how determinism works its way through the brain, but not that it works its way through the brain: that is the presumption behind any science, otherwise science would be impossible. Well, if I understand you correctly, when this 'labeling' is done consistently by somebody who acts, and by his social environment, and even more, when the label itself has causal impact ('He did it voluntarily, so he is guilty'), then the causality of the labeling is given, even when it is implemented in a 'deterministic machine'. One cannot understand a chess-playing computer by studying the quantum physics of the semiconductors in the computer. That is true, but I do not think consciousness is a byproduct; it is an essential factor in evolution. The capability to picture your environment and to see possible futures depending on how you will act (which of course includes the capability to distinguish between yourself, as actor, and your environment), also based on previous experiences, seems a tremendous evolutionary advantage to me. And I have great difficulty not calling these capabilities 'consciousness'. I fully agree. But I am not driven to my viewpoint by romanticism, but by the drive to understand the world around me, and in me. Can you elaborate? I do not get your points.
  8. Eise

    Split from AI sentience

    Why do you use 'predetermined'? Is 'determined' not enough? Or what would be the difference, according to you? Well, without the 'pre': yes, of course. 'Free' does not mean not-determined (or not predictable...). It means that you can act according to your own motives and world view. When you are forced to act against them, you are not free. If an organism or object does not have, or cannot have, motives and a world view, then the concepts 'free' and 'not-free' simply do not apply.
  9. Eise

    Split from AI sentience

    Here is the methodological problem: the discourse about free will is only concerned with humans, not with their interiors. So the question whether some action was free applies only to the bag of water and chemicals as a whole. If I introduced you to Keith Jarrett, the great jazz pianist, would you say he is no piano player because you do not find 'piano-playing neurons'? So if we are capable of acting freely, i.e. according to our own motives and world view, should you then look into the brain for 'free-will neurons'? You see the trees, but you do not see the forest. Another way of seeing it: if you dive into the brain, you surely find no 'free neurons', but you do not find a non-free soul either. And when there is no 'soul' inside, there is also nothing to which 'free' or 'non-free' even applies. So on this level these concepts simply have no meaning. Another point you should consider is that we, as conscious organisms, were somehow selected for in evolution. This is difficult to understand if consciousness has no impact on the survivability of organisms. That means that somehow consciousness must have causal impact, so it can't be as passive as you think. Maybe it is not as mind-boggling as you think. It is a fact that you did not choose your genes, your parents, the culture into which you were born, etc. All these made you who you are. But that is not what free will is about. Free will is about your capability to act according to your motives, values, and world view, wherever they come from. It could be genetically determined that you do not like Brussels sprouts. But whether you decide to eat them yourself (e.g. as a demonstration that you have a strong will...) or are forced to eat them is the difference between a free and a forced action. So you are doing exactly what I warned you about:
  10. Eise

    Split from AI sentience

    Why? If I am sure I want to do something, and then I do it, then it is a free action. Being able to perform free actions means you have free will. I do not see what a 'nondeterministic function' would have to do with it. Or do you think free actions are random actions? The problem with your thinking is that you stick to the 'fundamental level'. It is a bit like saying "evolution does not exist, because the components of which organisms are built cannot copy themselves, let alone introduce 'copy errors'; so if there is no evolution at the basic level, there can't be any at a higher level". This is obviously wrong. Therefore I invited you to first look at your daily life, and see how you differentiate between free and coerced actions. Again, I am sure you do. You know when you are forced to do something against your will, by somebody or by circumstances, and when you did something of your own free will. In the case of penal law, the difference can mean going to prison or not! However, as soon as you dive into the chemical details, you will be lost.
  11. Eise

    Split from AI sentience

    Really? If you think about your life, do you not see any difference between coerced actions ('Your money or your life') and free actions (donating money to Oxfam)? Let's take an absurd example: 'there is no difference between objects at all, they all have mass'. Or: 'reading is the same as running, because the underlying chemistry is consistent'. If you abstract enough, everything is the same. The 'sameness relationship' always holds under a certain abstraction; otherwise things are only the same as themselves. Sure, neurons work the way they do in all kinds of actions, but that does not make all (kinds of) actions the same. So I would like you to flesh out how a coerced action differs from a free action. I am 100% sure you make this distinction in daily life. I call the kind of abstraction you use here a symptom of the 'philosopher's disease': abstract ideas ('it's all chemistry') do not match the ideas one uses in daily life, and so one defends the theoretical idea in (philosophical) discussions, but keeps on using concepts in daily life that show one does not live according to those abstract ideas. I am sure you hate being forced (by somebody, or by circumstances) to do something.
  12. Eise

    Split from AI sentience

    Fully agree. That kind of free will does not exist. One could say this concept already went overboard when we abandoned the idea that we have a soul. But does that imply e.g. that there is no distinction between coerced and free actions? Or between actions following from an addiction and actions following from a conscious, reasoned decision? How does a simulation of intelligence differ from intelligence? What does randomisation have to do with being free? Aren't you mixing up predictability and free will?
  13. Eise

    Split from AI sentience

    Thanks to you. Still, I think you make it a bit too simple. I know you have a naturalist world view, just as I do. It could well be that if we discussed our world views at length, we would come very close. We might agree on which capabilities humans (or human brains) have, but still... I would say we have free will, and you say we don't. So I think it is essential, when you write such things as above, that you add what you understand by free will. Just as a stupid example: say somebody says he believes in God. When you ask him, he explains that what he calls 'God' is the whole of the laws of nature (so God for him is not the 'historical' Yahweh or Shiva; it is an abstract concept). You can object to his using the word 'God' in this way; you might even say you do not believe in God (but then you must say you mean entities like the traditional gods); but that doesn't make you a disbeliever in the laws of nature. So in my opinion you should explicitly define the kind of free will that you deny. I think you would discover that it is not the same concept as most people use in daily life, or in political discourse. PS The first one who tries to stop the discussion with 'it is just semantics' gets a negative reputation point from me...
  14. Wouldn't that mean that, if there were dancing chocolate elephants enclosed by the event horizon, we would also notice that because of oscillations of the event horizon? Just think about the different end stages of stars. Depending on the mass, there are several possibilities (a small numerical sketch of the horizon condition follows after this list):
      • brown dwarfs (this one barely counts): not enough mass to get hydrogen fusion started, so they are compressed until even hydrogen behaves as a metal
      • white dwarfs: the end stage of small to medium-sized stars. The compression is stopped by electron degeneracy: according to the Pauli exclusion principle no two electrons can be in exactly the same state, and this sets a limit to further compression
      • neutron stars: under very high pressure, protons and electrons combine into neutrons. But again the Pauli principle sets the upper limit: as neutrons are fermions, just like electrons, no two neutrons can be in exactly the same state
      • black holes: under even higher pressure, neutron degeneracy pressure no longer suffices, and we know of nothing that would stop further compression. And once the object is compressed enough that it is surrounded by an event horizon, we will never be able to investigate it, as nothing can leave the inside of a black hole.
      So I think we are already going too far if we say that all the mass is compressed into a point, the singularity. That could be the case, but without a quantum theory of gravity we have no hint of what is inside the event horizon.
  15. Eise

    AI sentience

    LISP
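
A quick numerical aside to item 14 (a minimal sketch, not part of the original post): the condition "compressed enough that it is surrounded by an event horizon" can be made concrete with the Schwarzschild radius r_s = 2GM/c^2. The Python snippet below computes r_s for a few illustrative masses and checks whether an object of a given mass and radius already sits inside its own horizon; the example values and function names are my own, not taken from the post.

# Minimal sketch: Schwarzschild radius r_s = 2*G*M / c**2, the size below
# which a mass M is surrounded by an event horizon. Example masses are
# illustrative only, not values from the post above.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (in metres) below which the given mass forms an event horizon."""
    return 2.0 * G * mass_kg / C**2

def inside_own_horizon(mass_kg: float, radius_m: float) -> bool:
    """True if the object is already compressed within its Schwarzschild radius."""
    return radius_m <= schwarzschild_radius(mass_kg)

if __name__ == "__main__":
    for m in (1.0, 1.4, 2.5, 10.0):                # solar masses, illustrative
        r_km = schwarzschild_radius(m * M_SUN) / 1000.0
        print(f"{m:5.1f} M_sun -> r_s ~ {r_km:5.1f} km")
    # The Sun (radius ~ 7e8 m) is nowhere near its ~3 km horizon:
    print(inside_own_horizon(M_SUN, 6.96e8))       # False

For a solar mass this gives roughly 3 km, which is why an ordinary star is nothing like a black hole until it is compressed by many orders of magnitude.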