Free will



2 hours ago, iNow said:

I wouldn’t waste your time, Eise. He’s just here to stir up trouble

And now he’s not here at all since he got banned. Carry on...

Thank you, @iNow. +1. @Eise, please do carry on. I'm thinking about more arguments to offer you (I'm designing a toy-model scenario for you to discuss) and I'm doing my homework on Daniel Dennett and free will. Plus reviewing all the previous arguments.


8 hours ago, iNow said:

I wouldn’t waste your time, Eise. He’s just here to stir up trouble

And now he’s not here at all since he got banned. Carry on...

Another Übermensch biting the dust... I already felt irritated that he seemed to claim he was absolutely autonomous. And then he vented standard racist ideas in the George Floyd thread. Very autonomous indeed. I'm glad I spent only a 'yes' on his question. Normally I say that in philosophy one should always give arguments, but I felt that in this case it was no use.

6 hours ago, joigus said:

Thank you, @iNow. +1. @Eise, please do carry on. I'm thinking about more arguments to offer you (I'm designing a toy-model scenario for you to discuss) and I'm doing my homework on Daniel Dennett and free will. Plus reviewing all the previous arguments.

Seems you have an awful lot to do... I hope you do all this freely. I do not want to force anybody...


23 hours ago, joigus said:

I'm thinking about more arguments to offer you (I'm designing a toy-model scenario for you to discuss) and I'm doing my homework on Daniel Dennett and free will. Plus reviewing all the previous arguments.

Hi Joigus: Based on your previous posts, I suspect that the arguments you present will fail to strike at the heart of Eise’s position. I believe you two will remain at an impasse. I’m going to take John Searle’s position as an example because I think his to some degree reflects yours (and also those of many other posters who’ve contributed here over the past few pages).

Searle considers the compatibilist view a cop-out.1 He thinks its advocates avoid "the problem", which he defines in the form of a question:

“Is it the case for every decision I make that the antecedent causes were sufficient to determine that very decision?” 1) If yes, then no free will. 2) If there is a decision where the antecedent causes were not sufficient to determine the outcome, then, he says, “there is a possibility of free will”. 

For 2) to be the case, according to Searle, there would have to be a gap of sorts between antecedent events and the act of deciding; one wherein you/I/we could participate (I can already hear Eise crying: “Dualist trap!”). Searle does acknowledge that there can be an experiential gap during which the higher level reasoning takes place, but then he adds, “If the neurobiological level is causally sufficient to determine your behaviour, then the fact that you have the experience of freedom at the higher level is…irrelevant.” And of course Searle points to the evidence of neurobiology as highly suggestive that it is sufficient cause.

Eise seems to be saying the opposite: “…the 'determining relation' between us and our constituents is an emergence relationship, not a causal relationship” (my emphasis). Not only is the neurological level not “causally sufficient” to determine the decision, it is not even causally related to it! (And this sends many of us reeling!)2

Although Dennett thinks we should focus on the biological level (and not that of physics), here https://www.youtube.com/watch?v=joCOWaaTj4A he appears to care little about the relationship between the neurobiological level and our decisions. He says what matters is making a distinction between “determined” and “inevitable” which, upon first hearing, is also enough to send one reeling (or worse, provoke one to slap him in the face!). Of course he does go on to elaborate: 

“To be clear on this we have to see what “inevitable” means—it means unavoidable. So then we have to get clear on what avoiding is, and then we can begin to see the biological dimension.” 

He then characterizes evolution over earth history as “an explosion of avoidance”—everything from dissolution, to being eaten, to starving to death. Avoidance is the result of “anticipating and taking corrective measures”; the ability to project possible futures (or at least those we think are possible) and avoid them.

To me this seems to run roughshod over everything Searle says: Whether there was something (an agent?) intervening somewhere along the way doesn’t seem to strike at the core (spirit?) of the idea. And so I think any arguments about causality across the levels, whether below governs above, etc., will experience a similar fate vis-à-vis what it is Dennett and Eise point to. Simply put: One either does or does not agree that such avoidance abilities render determinism and free will compatible. It’s either “good enough for you” or it isn’t. And that is pretty much the end of it. 

1 The view which he encapsulates here as: “…you are determined by certain sorts of causes such as your desires instead of somebody putting a gun at your head”. https://www.youtube.com/watch?v=_rZfSTpjGl8

2  Perhaps I will later try to defend this position in some shape or form even if it is not Eise’s. 

 


7 hours ago, vexspits said:

For 2) to be the case, according to Searle, there would have to be a gap of sorts between antecedent events and the act of deciding; one wherein you/I/we could participate (I can already hear Eise crying: “Dualist trap!”).

You have very good ears!

This fits very well with Searle's later acceptance of QM as a possible source of free will. He does exactly what so many 'new-age thinkers' do: postulate free will in the 'causality gap'. Sadly, physics delivers nothing to support that view. There is not a single hint that 'X-causality' (where X is one of mind, agent, or soul) could work in that gap. On the other hand, randomness is exactly what we do not need for free will: I decide to do A, but because of randomness I do B. And that should be an example of free will? No way.

7 hours ago, vexspits said:

Eise seems to be saying the opposite: “…the 'determining relation' between us and our constituents is an emergence relationship, not a causal relationship” (my emphasis). Not only is the neurological level not “causally sufficient” to determine the decision, it is not even causally related to it! (And this sends many of us reeling!)2

That is not what I am saying. The neurological level is, of course, causally sufficient. But being causally closed (the term I prefer) means that for every event we can find its cause, so there is 'no causal determination left over' for emergence, e.g. no energy leaking away from the neurological level in 'causing' emergent phenomena. If there were, the neurological level could not be causally closed, which I think is absurd.

On the other hand, the emergent phenomena are completely determined by the behaviour of the lower levels. There is no room for 'free wiggles'. Just in case somebody missed it: I say we are determined. (So please, nobody attack my viewpoint with one of the myriad examples showing that we are determined! Maybe I should put this in my disclaimer...)

The point lies in our concept of free will: we simply assume that for free will our actions should be uncaused, i.e. not determined. But this is simply not given in our experience. It is already an interpretation of our experience. The farthest I can get phenomenologically is that, given a certain situation, what happens next depends on me. So, e.g., sitting in a restaurant, the waiter patiently waits to see what I will decide. And this clearly depends on me alone. That, in my opinion, is the experience of free will. If I do not choose, nothing will happen (and the waiter loses his patience...). Even being a determinist changes nothing about that fact. 'Universal causality' eats its way through my brain. And, as 'me' is a higher-level phenomenon, an emergent property of my brain, so is free will: that 'me' acts according to its wishes and beliefs.

8 hours ago, vexspits said:

Simply put: One either does or does not agree that such avoidance abilities render determinism and free will compatible. It’s either “good enough for you” or it isn’t. And that is pretty much the end of it. 

I would not say it is the end of it. The remaining question is whether this concept of compatibilist free will is enough for our practice of praising and blaming, for ethics, justice, etc. I do not want to start that discussion now, but let me only say that Dennett thinks it is enough. His older book about free will says everything in its subtitle: Elbow Room: The Varieties of Free Will Worth Wanting.

 


8 hours ago, vexspits said:

I think his [John Searle's position] to some degree reflects yours

No, no. I don't really have a position. I'm striving to find one.

I don't agree with John Searle as to AI, so I'm not sure I'd agree as to free will. I'm more with Daniel Dennett as to AI, so...

But Eise has shown me very clearly that I was (unwittingly) misrepresenting Dennett on free will, because I didn't completely understand his points, so I must take some time to try to understand them better. Once I do that, maybe I'll develop my own position. Or maybe I will keep drifting, who knows. 👇


6 hours ago, joigus said:

Or maybe I will keep drifting, who knows

Your "freedom" will surely carry you along whatever incoming currents determine... we are but twigs in the shoulders of a mighty stream.


9 hours ago, iNow said:

Your "freedom" will surely carry you along whatever incoming currents determine... we are but twigs in the shoulders of a mighty stream.

+1.

I act according to my wishes and beliefs. What happens when my wishes and beliefs point in different directions? What do I do then? Where does the stream go?


On 6/20/2020 at 2:22 AM, joigus said:

I act according to my wishes and beliefs. What happens when my wishes and beliefs point in different directions? What do I do then?

You know the idea that one cannot derive an 'ought' from an 'is'? By 'beliefs' I mean facts which a person holds to be true. (Ideally they are based on science, of course.) By 'wishes' I mean what a person would like to do (and hopefully thinks is possible based on his 'facts'). Or can you give an example of beliefs and wishes that point in different directions? What is the 'direction' of a belief about what is true?

On 6/19/2020 at 5:10 PM, iNow said:

we are but twigs in the shoulders of a mighty stream.

So there is no difference between a twig and e.g. a fish? For a twig it is inevitable that it flows with the current. But for a fish? Doesn't it have at least some 'elbow room' that the twig has not?

Before you answer: remember that I fully subscribe to the view that we are (sufficiently) determined. So arguing that the fish (or we) is determined, just like the twig, does not touch my position.


2 hours ago, Eise said:

So there is no difference between a twig and e.g. a fish?

Are there differences? Certainly. Are those differences relevant to the idea that determinism renders the concept of freedom in context of our will moot? No. 


3 hours ago, Eise said:

Or can you give an example of beliefs and wishes that point in different directions?

I'm not sure about the sufficiency of some of your definitions. For example: I would include within my belief system the absolute conviction that the life of another human is to be respected under all circumstances. That no human should kill another human. Is that something that I hold "as true" following your definition?

But then I may have an almost irresistible compulsion to kill somebody at some point in my life because they committed an extremely harmful, painful, unfair and gratuitous act against me or someone I love.

So there you are; there are conflicts, which you are always avoiding with your definitions. The ability to decide is constrained in manifold ways. It's not like an arrow pointing in one direction. As I said before, there's fear, doubt, reckoning of chances to win or lose, foreseeing of consequences, and probably hundreds more...

I don't think it's a simple matter of following the arrow.


21 hours ago, joigus said:

For example: I would include within my belief system the absolute conviction that the life of another human is to be respected under all circumstances. That no human should kill another human. Is that something that I hold "as true" following your definition?

Not according to my definition, but now we are only quarreling over words, I think. What I want to express with 'beliefs and wishes' is that there are two categories needed to be able to act: facts (what is the case) and values (what I want to get done). For an action, however, it is not necessary that somebody is right about both. The question of whether a person is coerced or not does not lie in the correctness of 'his facts' or 'his values': the question is whether he can act according to them. If I stop somebody from littering the road, I limit his free will. If I stop somebody from pushing a button because he thinks he would just turn off the light, when in fact he would turn off the electricity in the whole building, I limit his free will.

22 hours ago, iNow said:

Are there differences? Certainly. Are those differences relevant to the idea that determinism renders the concept of freedom in context of our will moot? No. 

I read your sentence 10 times, but I still could not understand for sure what you are saying. Can you repeat in simplified English? Me foreigner.


Twigs and fish are not identical. There are differences... in composition, in function, in source, across multiple metrics they are different. You asked me, "so there is no difference between a twig and e.g. a fish?" I said, No. Of course there are differences.

I then returned the statement to the original context in which I shared it... freedom and will. I said that we are but twigs in the shoulders of a mighty stream... suggesting that we are carried around by currents beyond any control... it's determined, as you agree. In that regard, the idea of freedom is moot.

In my current way of thinking, every input and variable to our behavior is a type of coercion. I find your threshold for is/is not coercion somewhat arbitrary, though I also acknowledge that my own view of "it's all a type of coercion" has limitations (since it's so broad and all-encompassing, it tends to lack utility).

Perhaps poorly, but I tried to summarize all of that in a pithy way by saying, "Are there differences? Certainly. Are those differences relevant to the idea that determinism renders the concept of freedom in context of our will moot? No."


16 hours ago, iNow said:

I then returned the statement to the original context in which I shared it... freedom and will. I said that we are but twigs in the shoulders of a mighty stream... suggesting that we are carried around by currents beyond any control... it's determined, as you agree. In that regard, the idea of freedom is moot.

Using an unworkable, non-naturalistic definition of free will, yes. But using my idea of free will, no.

17 hours ago, iNow said:

Twigs and fish are not identical. There are differences... in composition, in function, in source, across multiple metrics they are different. You asked me, "so there is no difference between a twig and e.g. a fish?" I said, No. Of course there are differences.

Say a twig in the stream is heading for a rock in the water. Given the external circumstances of the stream carrying the twig towards the rock, the collision is inevitable. Now take the fish: it can see that it is heading for the rock, but for it the collision is not inevitable. So even if everything is determined, a fish has an increased capability to avoid certain events, compared to the twig. One way of seeing this is that in the case of the fish, 'determinism' runs much more through the fish itself than in the case of the twig. In humans this is even stronger: we reach a kind of consciousness that makes us aware of our preferences, gives us a detailed picture of the world around us and our place in it, and allows us to evaluate the possible consequences of different potential actions, so we are capable of acting according to our preferences. If we can, we have free will. (No, not the ghostly, uncaused kind of free will, or free will caused by a soul, or whatever.)
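To make that asymmetry concrete, here is a deliberately crude toy sketch in Python (just an illustration of the point, not a model of anything real; all names and numbers are made up). Both 'agents' are fully deterministic, but only the fish anticipates and takes a corrective measure:

ROCK = (5, 0)          # (distance downstream, lane) where the rock sits
CURRENT = 1            # the stream carries everything 1 unit downstream per step

def step_twig(pos):
    x, lane = pos
    return (x + CURRENT, lane)            # carried along, no anticipation

def step_fish(pos):
    x, lane = pos
    predicted = (x + CURRENT, lane)       # project where the current leads
    if predicted == ROCK:                 # corrective measure: change lane
        return (x + CURRENT, lane + 1)
    return predicted

twig, fish = (0, 0), (0, 0)
twig_hit = fish_hit = False
for _ in range(10):
    twig = step_twig(twig)
    fish = step_fish(fish)
    twig_hit = twig_hit or twig == ROCK
    fish_hit = fish_hit or fish == ROCK

print("twig collided:", twig_hit)   # True: for the twig the collision is inevitable
print("fish collided:", fish_hit)   # False: avoidable, though fully determined

The same deterministic 'current' acts on both; the only difference is that in the fish's case it runs through a bit of machinery that projects the future and steers.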

I would say that your level of abstraction is too high: yes, everything is determined, and from this viewpoint there is no difference between whatever processes in nature I am looking at (yes, leaving out QM again, because it is not relevant). But that just does not help us see the obvious difference between a voluntary and an involuntary action. I could abstract away all the inner structure of anything by saying that we are all simply matter. Proof: if you throw a human from a roof, he falls according to Newton's laws. So there is no difference between a human and a stone.

If you refuse to look at the detail needed to distinguish between voluntary and involuntary actions, then of course you see no difference.


6 hours ago, Eise said:

The question of whether a person is coerced or not does not lie in the correctness of 'his facts' or 'his values': the question is whether he can act according to them. If I stop somebody from littering the road, I limit his free will. If I stop somebody from pushing a button because he thinks he would just turn off the light, when in fact he would turn off the electricity in the whole building, I limit his free will.

I think this is a non sequitur. I was talking about internal conflicts. The primitive drive to kill is dormant in all of us. What if somebody awakens it? It may take a very cruel, very sadistic action to awaken it, e.g., in any of us here talking. But the truth is it could be awakened if somebody "cared" enough to do it. I wouldn't call that coercion.

Coercion, by my definition, would be:

If you don't do A, I will do B.

My example above would be:

I've done A; what are you going to do? B, C, D, ...?

It would be our frontal cortex being inhibited and more basic circuitry in our brain taking over. Are you or aren't you, as a free agent, responsible for shutting off the frontal cortex and letting the limbic system, or more primitive circuitry, take over?

I think the concept of free will that you use, as well as Daniel Dennett's, is motivated for the best of reasons: to provide some working definition of responsibility that may work in a variety of contexts. But it's not fundamental, and it misses a wide spectrum of possibilities. Not least among them: What happens when an individual who is a responsible one, at some point, whether indefinitely or not, ceases to be one? PTSD and other similar syndromes could happen to anyone. They're not genetic; they're event-induced, and they can't be qualified as plain "coercion."

I do agree that enunciating "free will is an illusion" and leaving it there and spreading the message without further qualifications would be socially irresponsible. But the other solution doesn't satisfy me either.

Emergent properties are complicated. Suppose "free will is an illusion" is true in some reasonable sense. Then, precisely because it is an illusion, there are bound to be individuals who believe it as well as individuals who don't, whatever its truth value. Acting irresponsibly just because "free will is an illusion," even if it were true, would be incredibly stupid. The reason, if no other, is that you would eventually meet those individuals who don't believe it to be true (precisely because they act according to complex combinations of internal and external determinations and cannot choose any option other than to believe it's not true!) and you would end up paying the consequences no matter what you believe and, in fact, no matter what is ultimately true. Thereby a need to act responsibly would be self-maintained, so to speak, as a major conditioning on your actions, so that you don't get in trouble with society. Bootstrap mechanisms can appear when complexity is involved.

I would agree, though, that even if the useful concept of responsibility people know and love vanishes when you analyze it in more basic terms, that doesn't make it a good idea to broadcast this subtle scientific proposition to broad swathes of society and just get rid of the concept socially. That is a delicate question.

There are further questions, such as the value of the proposition "free will is an illusion" for individuals who are under strong compulsions. It could have a positive effect in terms of soothing feelings of remorse. I don't know; more study and discussion is needed. But that's another matter. The point I want to make here is that there are potentially positive uses of the proposition as well as potentially negative ones.


I will contribute to this thread, as opposed to the one in General Philosophy, which makes no sense.

Let's say I inject you, Eise, with a mind-control drug (like Sodium Pentothal in the movies).
This chemical is now part of you, just like your atoms, molecules, neurons, hormones and enzymes.
This drug severely restricts your choices, but because of who you are (including the drug) you are still able to choose freely from the few remaining choices available to you.
Is it, then, your contention that the drugged you still has free will?

What if instead of an administered drug, your choices are diminished by an enzyme imbalance?
Does the imbalanced you have free will as per your definition?


19 hours ago, joigus said:

Not least among them: What happens when an individual who is a responsible one, at some point, whether indefinitely or not, ceases to be one? PTSD and other similar syndromes could happen to anyone. They're not genetic; they're event-induced, and they can't be qualified as plain "coercion."

Then the person has changed character. Whether his acts are free or not depends on whether the capabilities that are the ground conditions for expressing free will are impaired. Some of these conditions are:

  • picture himself in his environment
  • see different possibilities for action
  • evaluate the results and consequences of these possible actions
  • decide, and act, based on the previously mentioned conditions

I would say that for most people these conditions are normally fulfilled. If these capabilities are impaired, then we have a case of reduced culpability, and therapy or other measures would be the right reaction. However, if they are not impaired, then it is a (tragic) case of character change, but the (new) person is culpable.

Of course, distinguishing these is easier said than done. I am 100% sure some of you can construct an example in which it is not so easy to distinguish the two. But 'clear concepts' does not always mean 'easy to apply'. Take a typical case: colours. While there is a clear difference between red and orange, it is possible to present border cases on which some people will never agree.

20 hours ago, joigus said:

I think the concept of free will that you use, as well as Daniel Dennett's, is motivated for the best of reasons: to provide some working definition of responsibility that may work in a variety of contexts. But it's not fundamental,

Why would we need something 'fundamental'? And I think this 'variety of contexts' you mention encompasses all normal uses of the concepts of free will and responsibility. Are you still thinking about 'genuine free will'? The kind of 'uncaused free will'? I see this as a mental block. As soon as you see the words 'free will', a trigger in your mind seems to fire that replaces 'being able to do what you want' with 'uncaused mind'.

The 'illusion of free will' is the illusion that it is uncaused. Not that it does not exist.

4 hours ago, MigL said:

This chemical is now part of you, just like your atoms, molecules, neurons, hormones and enzymes.

<ant-fucking mode> The body does everything to get rid of this stuff, so to say it is part of you is a bit of a stretch. Is sand in the machine part of the machine? </ant-fucking mode>

4 hours ago, MigL said:

This drug severely restricts your choices, but because of who you are (including the drug) you are still able to choose freely from the few remaining choices available to you.
Is it, then, your contention that the drugged you still has free will?

I think what I wrote above also applies to your example. If the ground conditions of free will are impaired, then it is contradictory to say that I can choose freely. Otherwise, my character has been changed.

5 hours ago, MigL said:

What if instead of an administered drug, your choices are diminished by an enzyme imbalance?
Does the imbalanced you have free will as per your definition?

I think that our present practice in judicial cases covers the situation: if an act had a biological cause, because one of the above ground conditions fails, then somebody is not (or is less) culpable. If it was only a temporary imbalance, then one could say the person who acted no longer exists.


4 hours ago, Eise said:

Why would we need something 'fundamental'? And I think this 'variety of contexts' you mention encompasses all normal uses of the concepts of free will and responsibility. Are you still thinking about 'genuine free will'?

No. The "fundamental concept of free will" is what I thought you were thinking. You said you weren't, but it keeps coming back. My apologies.

10 hours ago, MigL said:

I will contribute to this thread, as opposed to the one in General Philosophy, which makes no sense.

I think I finally dodged the bullet. Eise is more involved in that forum, from what I've seen. ;)


Thanks for your reply. I'm slow at making progress in this topic!

On 6/14/2020 at 10:34 AM, Eise said:

There are some practical problems with your scenario, e.g. both A.I.s should be fed with exactly the same input, but I think the gist of your argument is correct.

I agree. I guess the scenario requires, for instance, that any kind of randomness is either forbidden for the A.I. to use, or that the second A.I. must always get the same random result* as the first A.I. got.

While I'm thinking more about the topic, a question about definition:

On 6/10/2020 at 10:23 AM, Eise said:

Whereby I define 'free will' as 'being able to act according to your wishes and beliefs'. 

Does the ability to act require some level of complexity, or other properties? Naive example, just to illustrate the question: let's say an A.I. is programmed to act according to the set of wishes and beliefs the programmer has offered. The A.I. acts by sorting the available actions alphabetically and then picking the first action on the list. For us on the outside (the programmer), does that count as free will? And if no one told the A.I. that its free will is implemented in this way, would the A.I. consider itself to have free will?

I'm not questioning the validity of your definition; I'm trying to use it to find out / learn more about free will in relation to the A.I. I'm thinking about how various concepts look from the A.I.'s perspective vs. the programmer's perspective, and about any differences depending on the level of insight they have.
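To be explicit about what I have in mind, here is a minimal sketch in Python of such an agent (everything in it is made up just to illustrate the question):

wishes  = ["stay charged", "avoid obstacles"]            # offered by the programmer
beliefs = {"battery_low": True, "obstacle_ahead": False} # offered by the programmer

def choose_action(available_actions):
    # The agent's whole decision procedure: sort the options alphabetically
    # and take the first one. The wishes and beliefs sit in the program,
    # but the selection rule itself only looks at the alphabet.
    return sorted(available_actions)[0]

options = ["recharge", "explore", "idle"]
print(choose_action(options))   # always "explore", whatever the wishes and beliefs say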

 

*) Once the first A.I. has thrown its dice in a board game, we know what value to give to the second A.I. once it throws its dice. The second A.I. does not need to know that it is not getting true random numbers.
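As a sketch of that replay idea (again just an illustration, assuming the dice are ordinary pseudo-random numbers): record the first A.I.'s rolls and feed the identical sequence to the second, which has no way to tell the difference:

import random

def play_game(rolls, board_size=20):
    # a trivial stand-in for one A.I. playing: move around a board by each dice value
    position = 0
    for roll in rolls:
        position = (position + roll) % board_size
    return position

# First A.I.: its rolls come from a (pseudo-)random source, and we record them.
recorded_rolls = [random.randint(1, 6) for _ in range(10)]
first_result = play_game(recorded_rolls)

# Second A.I.: gets the recorded values in the same order; from its point of
# view it is still "throwing dice".
second_result = play_game(recorded_rolls)

print(first_result == second_result)   # True: identical inputs, identical game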


15 hours ago, Ghideon said:

Let's say an A.I. is programmed to act according to the set of wishes and beliefs the programmer has offered. The A.I. acts by sorting the available actions alphabetically and then picking the first action on the list. For us on the outside (the programmer), does that count as free will? And if no one told the A.I. that its free will is implemented in this way, would the A.I. consider itself to have free will? 

Hi Ghideon:

If you took ten real-world situations you faced this week, had a list of what you reported as wishes/beliefs and a list of possible actions, you would find, with a probability way outside of chance, some correspondence between the two lists. In the A.I. scenario you describe, it seems to me that there would be no such correspondence between the two beyond chance. It also seems to me that as soon as you give the A.I. awareness, this discrepancy would become apparent even to it. And without any possibility of revision (evolution) of the scheme the programmer (nature) provided, the correspondence between the two lists would forever fail to rise above chance! And if you provided the A.I. with the capacity to suffer—my God! What a horrifying scenario you (we) have imagined! LOL 
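To put a rough (and entirely invented) number on "way outside of chance", here is a toy comparison of how often each policy's chosen action matches the agent's top wish:

import random

actions = ["apologize", "eat", "flee", "help", "rest", "work"]

def wish_driven(top_wish, available):
    # crude stand-in for us: take the wished-for action if it is on offer
    return top_wish if top_wish in available else random.choice(available)

def alphabetical(_top_wish, available):
    # the A.I. described above: ignore the wish, take the alphabetically first option
    return sorted(available)[0]

def correspondence(policy, trials=10000):
    hits = 0
    for _ in range(trials):
        top_wish = random.choice(actions)
        available = random.sample(actions, 3)
        hits += policy(top_wish, available) == top_wish
    return hits / trials

print("wish-driven :", correspondence(wish_driven))    # well above chance (about 0.5 in this toy setup)
print("alphabetical:", correspondence(alphabetical))   # about 1/6, i.e. chance level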

