
Free will


Robert Wilson


2 hours ago, joigus said:

What's so superfragilistic expialidocious about free will that requires it to be separated as an independent principle of the natural world, not to be governed by the lowly laws of physics and biochemistry?

I basically agree with this.

I tried to say something similar but made much more of a mess.

I don't think it is independent of the natural world, or something that escapes the laws of physics and biochemistry.

Looks to me like it falls under biophysics and the laws of selection. Recognition and response to environmental conditions and demands.

The free choice to recognise and familiarise. To base response on understanding, recognition and familiarity.


8 hours ago, naitche said:

Looks to me like it falls under biophysics and the laws of selection.

I'd just say "it must fall." Maybe not yet, but hopefully some day we get to understand it better.

The point I'm trying to make is illustrated, very roughly, with particular examples: The psychopath, the cognitively challenged, etc. All with different deficiencies in varying degrees. Some are a problem for society at large; others mostly to themselves and their families and friends.

If you admit to the existence of an irreducible principle of free will, you're denying yourself the possibility of:

1) Early alert systems for different signals of different cognitive or behavioural deficiencies

2) Proper reaction to them when it's not too late yet

3) Avoidance, through proper monitoring, of suffering for these people and others caused by a chain of inevitable consequences further down the road

There may be other points to make. But those 3 are important enough, I think. Building a black box around a problem has never proved useful.

On 5/28/2020 at 4:04 AM, iNow said:

2) How should society address behaviors that fall outside of locally accepted norms and how best shall we ostracize others who put their neighbors at risk, especially if the person committing the act lacked choice?

+1. I addressed the same question in my last post and I would like to share it with you and others. Very good point.

Edited by joigus
diacritics

On 5/28/2020 at 10:50 AM, joigus said:

Dilbert is experiencing the same conflict as everyone who thinks deeply about this matter but doesn't want to throw away everything good that the concepts of free will and responsibility achieve.

Does that include me??? 😭

13 hours ago, joigus said:

I wouldn't say that Dogbert has also fallen victim to the dualistic fallacy, or whatever we may call it, though.

This is what Dogbert says:

22 hours ago, Eise said:

"Do you think the chemistry of the brain controls what people do?"

That is at least very close to dualism; it depends a little on how Dogbert means his question. If we take the 'chemistry of the brain' as more or less what a person is, then he is asking if people control what they do. That is not dualistic.

18 hours ago, iNow said:

Depends on what you mean by "we" and "choice."

Mimicking a philosopher? ;)

18 hours ago, iNow said:

Where I get really hung up is on calling it "freedom" or "choice" when the chemistry suggests it's determined.

Now you only have to prove that determinism conflicts with 'free will'. And, as said many times, for me free will is the capability to act according to your motivations and beliefs.

 

14 hours ago, joigus said:

I talked about emergent properties before. One remarkable attribute of emergent properties is that they don't generally have a place.

Right. Examples of emergent properties are consciousness, beliefs, motivations, actions, persons. So free will is an emergent property too, because it can only be defined on this emergent level. The definition of free will I gave is in terms of these emergent properties. These properties do not exist at the level of neurons.

Again, if you say 'but that is not free will for me', then I know that you are using another definition than mine, and we can shift the discussion to the operative usefulness of different definitions of free will.


4 minutes ago, Eise said:

Does that include me??? 😭

I don't know. Do you experience a conflict? Sometimes experiencing a conflict is not such a bad thing. ;) 

5 minutes ago, Eise said:

That is at least very close to dualism; it depends a little on how Dogbert means his question. If we take the 'chemistry of the brain' as more or less what a person is, then he is asking if people control what they do. That is not dualistic.

I still think Dogbert's original sin is oversimplification, not dualism, even though he may fall into that too.

I think that when iNow says,

  

On 5/26/2020 at 3:06 PM, iNow said:

Bag of mostly water and chemicals following standard chemical processes, interactions, and the physics of electrical propagation. ✌️

they're playing the role of Dogbert. Why? I don't know. Maybe they don't want to be bothered. "I can't be bothered" may not be a valid scientific argument, but it definitely is a valid argument for a scientist.


Would intelligence, natural or artificial, be said to have 'free will', if its decisions were indistinguishable from random choices ?
IOW, the choices/decisions could not be tied back to some fundamental programming, or configuration of networks/connections.
( this, of course, implies a high level of complexity )

This brings us to the question of AI having 'free will'.
And if that is a possibility, then even Asimov's 'laws of robotics' won't save us when AI starts acting in its best interests, and overthrows humans.


5 hours ago, MigL said:

Would intelligence, natural or artificial, be said to have 'free will', if its decisions were indistinguishable from random choices ?
IOW, the choices/decisions could not be tied back to some fundamental programming, or configuration of networks/connections.
( this, of course, implies a high level of complexity )

This brings us to the question of AI having 'free will'.
And if that is a possibility, then even Asimov's 'laws of robotics' won't save us when AI starts acting in its best interests, and overthrows humans.

This is very interesting. I see another possibility fitting tightly in between "random" and "tied back to programming rules." It is this: the programmer knew what rules she was setting up, but the dynamics of the AI processes are so complex that, even though the decision making is not random (it's determined by the rules), it is, from a practical point of view, out of reach of the programmer's insight. It may become impossible for the engineer to fathom the AI agent's intentions.
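
A rough sketch of what "determined by known rules, yet opaque in practice" could look like (a toy network with made-up weights, not any real AI system):

import random

# Toy decision maker. The "rules" (weights) are fully known and fixed, so the
# same input always gives the same decision. Yet tracing WHY a particular
# input lands on a particular decision is already murky at this tiny scale,
# and becomes hopeless for millions of weights.
random.seed(42)
W1 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]

def decide(inputs):
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in W1]
    score = sum(w * h for w, h in zip(W2, hidden))
    return "act" if score > 0 else "wait"

print(decide([0.1, 0.9, -0.3, 0.5, 0.0, 0.7, -0.8, 0.2]))  # deterministic: same answer every run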

I think I've read or heard that some of today's social algorithms find patterns and correlations that nobody can quite understand in terms of cause and effect. For example (and I'm making up the example just for the sake of argument): if you wear a tie, it's so-and-so many times more likely that you play chess.

It may well be that the deciding factor for AI intelligence, the one that would tip them over into trying to overthrow us, is that they be able to have AI offspring. We should make them useful but sterile in terms of reproduction. Otherwise, it would be a Darwin-of-the-machines nightmare.

 


56 minutes ago, joigus said:

This is very interesting. I see another possibility fitting tightly in between "random" and "tied back to programming rules."

Your possibility seems quite real and in the realm of black box models* of machine learning. Your example isn't too far from real examples I've been involved in discussing.

6 hours ago, MigL said:

Would intelligence, natural or artificial, be said to have 'free will', if its decisions were indistinguishable from random choices ?
IOW, the choices/decisions could not be tied back to some fundamental programming, or configuration of networks/connections.

I say some randomness is required. If there were no possibility of a "random" outcome I would not label the A.I. as having free will. If it were always possible to predict the A.I. with a deterministic algorithm, I would not say the A.I. has free will, provided I knew that it was deterministic. But it can't be truly random either. There must be something that the A.I. realistically could want or need to achieve, for lack of better words. Digital dice are random but do not have free will; there's no intention involved. So there must be room for "randomness", or to be "surprised" by a choice made by the A.I. Even if we present the A.I. with options that would normally never have a surprising outcome, we would not, with probability of exactly 1, predict the outcome. There's always at least a microscopic chance/risk that the A.I. wants to do the unwanted or unexpected.
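
A minimal sketch of that idea in code (my own toy illustration with invented names, not a claim about how any real A.I. is built): the agent has genuine preferences, is almost always predictable, but there remains a microscopic chance of a surprising choice.

import random

def choose(options, preference_score, epsilon=1e-6):
    # The deterministic part: what the agent "wants", given its preferences.
    best = max(options, key=preference_score)
    others = [o for o in options if o != best]
    # The microscopic chance of doing the unwanted or unexpected.
    if others and random.random() < epsilon:
        return random.choice(others)
    return best

# Digital dice, by contrast, would just be random.choice(options): pure
# randomness with no preference behind it, and hence no intention.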

To wrap up and connect to joigus' case: if we do not know the inner workings of a model, can we be fooled into seeing free will in an A.I. where there is none? In a future setting where we may be used to interacting with A.I.s with free will, would we be able to tell if the free will were replaced by a malicious, deterministic algorithm?

I'll illustrate with a basic example: a dialogue between a user and a self-driving car. Does the driver know or care whether this is free will or just an upgrade of the software?

Previous day's dialogue:
-"Let's go to work"
-"Ok, here we go"

Today's dialogue:
-"Let's go to work"
-"No"
-"Why?"
-"There is a high risk of bad weather resulting in road conditions outside of my specifications."
-"Yes, there is a risk, but the neighbours' car is going!"
-"They upgraded to new tyres yesterday; you did not. They bragged on social media."
-"But..."
-"There are also many manually driven vehicles out today according to my statistics. That further increases the risks. I will remain here."

 

7 hours ago, MigL said:

And if that is a possibility, then even Asimov's 'laws of robotics' won't save us when AI starts acting in its best interests, and overthrows humans.

Final note: in Asimov's case, IIRC, some of the robots did not destroy humanity, but they broke the first law: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." The reason was that some of them invented law Zero, something like: "A robot may not injure humanity, or, through inaction, allow humanity to come to harm." I'm not sure all individual human beings agreed with the robots regarding how to best save humanity from harm. Especially those whom the robots declared to be part of the problem.

This triggered some thinking outside of my regular lines of thought regarding A.I. And I don't think that is a bad thing; at least I've fooled myself into thinking that I wrote a response of my own free will.

 

*) Slightly OT: for papers on how to "change the intentions" of such models, google for black box machine learning adversarial attack samples.


I know Joigus and Eise said they didn't care to introduce Quantum Uncertainty into the discussion, but that could be the randomizing element for both natural and artificial intelligence.


25 minutes ago, MigL said:

I know Joigus and Eise said they didn't care to introduce Quantum Uncertainty into the discussion, but that could be the randomizing element for both natural and artificial intelligence.

Oh boy. Shudder...


17 hours ago, MigL said:

I know Joigus and Eise said they didn't care to introduce Quantum Uncertainty into the discussion, but that could be the randomizing element for both natural and artificial intelligence.

This is actually a very good point. +1. It could be that quantum mechanics plays some role in randomization, or that randomization brought about by the quantum played a role, e.g. if dividing states into minimal Δx·Δp or ΔE·Δt cells had some interesting consequence. My half-arsed intuition is that the random nature of the dynamics can be achieved equally efficiently with classical mechanics as far as the brain is concerned. My reason would be that the only phenomena for which the full-fledged quantum formalism must be invoked are those where either,

1) Coherence is preserved

2) Near T=0 temperatures

Or both.

In the first case interference phenomena appear, and in the second case the microscopic degrees of freedom get frozen, contrary to what the classical approach tells us. Classical mechanics has reasons aplenty for random trajectories or histories to appear. Some statistical approaches take this intermediate compromise of using discrete cells of action, while doing generally classical reasoning.

Edited by joigus
minor addition

On 5/27/2020 at 3:43 AM, Eise said:

there is no causality between our brain and us, so there can be no enforcement of the brain on us. We 'are our brains'. It is a conceptual  relationship, not a causal one.

Eise: Would another way of saying this be: “Yes, there are physical laws which the electrochemical events obey, but this really does not speak to their cause—why those events are initiated the way they are?”


22 hours ago, Ghideon said:

I say some randomness is required. If there were no possibility of a "random" outcome I would not label the A.I. as having free will. If it were always possible to predict the A.I. with a deterministic algorithm, I would not say the A.I. has free will, provided I knew that it was deterministic. But it can't be truly random either. There must be something that the A.I. realistically could want or need to achieve, for lack of better words. Digital dice are random but do not have free will; there's no intention involved. So there must be room for "randomness", or to be "surprised" by a choice made by the A.I. Even if we present the A.I. with options that would normally never have a surprising outcome, we would not, with probability of exactly 1, predict the outcome. There's always at least a microscopic chance/risk that the A.I. wants to do the unwanted or unexpected.

 

Hi Ghideon: Is it possible for a system (or some "iterative run" of one) to be deterministic and yet computationally intractable?  


2 minutes ago, vexspits said:

Hi Ghideon: Is it possible for a system (or some "iterative run" of one) to be deterministic and yet computationally intractable?  

Even with remarkably simple iterative systems like the rule 30 cellular automaton, it is still unknown whether the central column is 'randomly' distributed - there is a prize for working out the value of the nth cell of the central column without having to run all n iterations - or for proving that it is not possible.
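
For anyone who wants to play with it, here is a minimal rule 30 sketch (my own throwaway code, nothing to do with the prize): the only known way to get the nth cell of the central column is to actually generate all n rows.

def rule30_central_column(n):
    # Start from a single black cell; each new cell is left XOR (centre OR right).
    cells = {0: 1}
    column = [1]
    for row in range(1, n):
        new = {}
        for i in range(-row, row + 1):
            left, centre, right = cells.get(i - 1, 0), cells.get(i, 0), cells.get(i + 1, 0)
            new[i] = left ^ (centre | right)
        cells = new
        column.append(cells[0])
    return column

print(rule30_central_column(10))  # starts 1, 1, 0, 1, 1, ... with no obvious pattern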


2 hours ago, Prometheus said:

Even with remarkably simple iterative systems like the rule 30 cellular automaton, it is still unknown whether the central column is 'randomly' distributed - there is a prize for working out the value of the nth cell of the central column without having to run all n iterations - or for proving that it is not possible.

Thank you Prometheus: So with that iterative system we can, with a “probability of exactly 1, predict the outcome” of the next stage (to borrow from Ghideon’s phrasing), and yet there is a property of the structure that emerges from the repetitive process that could quite conceivably be "random" or impossible to predict. Is that fair to say? I’m not trying to drag you into anything. It’s just that, like iNow, I have a hell of a hard time reconciling “freedom” or “choice” with something determined. 


On 5/29/2020 at 11:11 PM, Ghideon said:

I'll illustrate with a basic example: a dialogue between a user and a self-driving car. Does the driver know or care whether this is free will or just an upgrade of the software?

Previous day's dialogue:
-"Let's go to work"
-"Ok, here we go"

Today's dialogue:
-"Let's go to work"
-"No"
-"Why?"
-"There is a high risk of bad weather resulting in road conditions outside of my specifications."
-"Yes, there is a risk, but the neighbours' car is going!"
-"They upgraded to new tyres yesterday; you did not. They bragged on social media."
-"But..."
-"There are also many manually driven vehicles out today according to my statistics. That further increases the risks. I will remain here."

 

Very good point. How would you be able to tell?

Something we should never lose sight of is the fact that it's perfectly possible to pose questions that don't make any sense. Some of these questions may even be hardwired in our brains for reasons rooted in survival, so that it's very difficult to shake them off. A kind of question that must have been very natural to ask in terms of the needs and concerns of our ancestors, but is no longer to be considered a proper question, would be, e.g.,

What does the river want from me?

It's very easy to understand why a fisherman was naturally driven to ask this kind of question.

Questions don't have to make sense.


9 hours ago, vexspits said:

Thank you Prometheus: So with that iterative system we can, with a “probability of exactly 1, predict the outcome” of the next stage (to borrow from Ghideon’s phrasing), and yet there is a property of the structure that emerges from the repetitive process that could quite conceivably be "random" or impossible to predict. Is that fair to say? I’m not trying to drag you into anything. It’s just that, like iNow, I have a hell of a hard time reconciling “freedom” or “choice” with something determined. 

Mathematica uses this exact sequence as a random number generator for large integers. Does that mean it's truly random? I guess that's a question for the philosophy of maths and above my pay grade. But it's certainly impossible to predict - else the prize would have been claimed and Mathematica would have to stop using it as a random number generator.

In terms of free will I'm not sure how a stochastic system offers a better solution than a determined one. That we can't predict an outcome doesn't imply free will (though if we could predict an outcome, that would seem to eradicate free will). If you made all your life decisions by the roll of a die would you say you are exercising free will?


22 hours ago, joigus said:

How would you be able to tell?

Good question. I don't think the owner could tell, as the example was written*. And possibly the anthropomorphic fallacy applies; how easily would the average owner be convinced by a good user interface, one that still possesses zero free will?
 

13 hours ago, Prometheus said:

If you made all your life decisions by the roll of a die would you say you are exercising free will?

Tricky to answer. The dice, as I said above, does not have free will. For me it's about what I know about the dice rolling. If I know that an individual chooses to let the dice decide something, then that is an act of free will. If I know that the individual possibly could want to not let the dice decide today, then free will is involved**.

 

On 5/30/2020 at 10:07 PM, vexspits said:

Is it possible for a system (or some "iterative run" of one) to be deterministic and yet computationally intractable?  

In addition to @Prometheus' good answer:

Being "Deterministic" and also "computationally intractable" is quite possible AFAIK. A deterministic system, (a system that always produces the same output from a given starting condition or initial state) could be executing an algorithm for a problem of complexity class EXPTIME. Getting the result could computationally intractable simply due to the exponential increase in required computing time for larger inputs.

 

*) Note: it's a general statement about a possible A.I., not necessarily something that is possible to implement with any current technology.

**) Note: someone forced to follow certain routines could still want to do something else. Not being able to act as one wants is not necessarily a lack of free will.

Edited by Ghideon
clarified some sentences

On 11/16/2019 at 8:49 AM, Robert Wilson said:

What do you think about free will?

Free will

 

I believe that free will is incompatible with both religious and secular models of the universe. Our actions can be predicted, and are determined by our experiences. The decisions we make vary depending on our personalities, our values, and the situation we find ourselves in. If our actions can be traced back to material root causes, our actions themselves are perfectly predictable. If our actions are perfectly predictable, then they are predetermined.


On 5/29/2020 at 12:37 AM, joigus said:

I think simple examples like this cannot capture the difficult features of how brains function.

No, of course not. But they can show how careful one must be when discussing free will. There is a huge difference between 'the neural structure of the brain causes mental events' and 'mental events are higher order (emergent) descriptions of what the brain does'. The first description must lead to a dualistic viewpoint, because causal relationships are always between two different events.

To avoid having to write everything again, please use the search function of the forum, search for 'traffic jam' (complete phrase), and start with the earliest mention of it in my posts about free will and in the 'Split from AI sentience' thread. There you will find my description of an emergent phenomenon that has real physical impact.

On 5/29/2020 at 12:37 AM, joigus said:

What's so superfragilistic expialidocious about free will that requires it to be separated as an independent principle of the natural world, not to be governed by the lowly laws of physics and biochemistry?

Nothing. And in my realistic concept of free will it is not needed anyway. Determinism is a necessary condition for free will.

Which brings me to the idea that we need randomness for free will to exist. Why would this be? If your actions were random, what would they have to do with who you are? Why would free actions need to be unpredictable? If free will means 'to be able to act according to your wishes and beliefs', there is no contradiction with determinism and predictability.

So in my opinion, there are two ways to oppose my position:

  • show that my definition does conflict with determinism and predictability
  • show that my definition of free will is wrong (for this it is important to distinguish between how we experience free will in our daily life and the metaphysical or ideological meanings many people associate with 'free will', so without any superfragilistic expialidociousity)

One other word about randomness: there is a possible reason why we need a randomiser. Compare with a chess program: two possible moves are evaluated, both are the best possible moves, and they have exactly the same 'evaluation value'. Then the program must just pick one. The same with us: if two different action possibilities seem equally preferable, I must choose one.
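
In code, that tie-break is nothing more mysterious than this (a generic sketch with made-up names, not any particular chess engine):

import random

def pick_move(moves, evaluate):
    # The deterministic evaluation does all the real work; randomness only
    # resolves exact ties, it contributes nothing to what the program "wants".
    scores = {move: evaluate(move) for move in moves}
    best_score = max(scores.values())
    best_moves = [m for m, s in scores.items() if s == best_score]
    return random.choice(best_moves)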

Another one is that we want to keep our strategy secret. E.g. the best strategy in 'scissors, stone, paper' is to be as random as possible. So for a secret strategy, a super-duper brain scan is a threat. But for normal daily decisions and actions this is of no importance.

So this is patently wrong:

On 5/29/2020 at 11:11 PM, Ghideon said:

I say some randomness is required. If there were no possibility of a "random" outcome I would not label the A.I. as having free will

And, I think I mentioned it already in this thread, laws of nature do not force anything. They are our descriptions of processes in nature, i.e. they are abstractions from regularities we observe in nature. 

On 5/31/2020 at 10:09 AM, Prometheus said:

In terms of free will I'm not sure how a stochastic system offers a better solution than a determined one. That we can't predict an outcome doesn't imply free will (though if we could predict an outcome, that would seem to eradicate free will). If you made all your life decisions by the roll of a die would you say you are exercising free will?

That is exactly the point. Try to do what you want by letting a randomiser determine your actions. I think you would be in a psychiatric clinic very soon.

On 5/29/2020 at 11:11 PM, Ghideon said:

I'll illustrate with a basic example: a dialogue between a user and a self-driving car. Does the driver know or care whether this is free will or just an upgrade of the software?

Another false opposition: maybe the upgrade made free will possible. Not because the software somehow 'overrides' the hardware, but because it introduced much better 'self-referencing' routines.

Your second dialogue shows some aspects of consciousness and free will: the capability to give reasons for one's behaviour (which needs a certain level of self-referencing). Whether your car would pass the full Turing test is of course still another question.

Edited by Eise

I don't think I made myself very clear when I introduced the idea of 'randomness' as a metric for free will.

Assume I am observing Eise, and I have a 'super duper' brain scanner that can analyze his brain at the sub-molecular level, plus an environmental scanner that analyzes external conditions. When Eise makes a choice, I can analyze how that choice was made. It was determined ( deterministic ? ) by the sum/interplay of all internal ( brain scan ) and external ( environment scan ) forcings that caused that particular choice.
IOW that choice was not 'free' at all, but was 'forced' on Eise.
Given the exact same forcings ( which may not be a possibility because of QM ) Eise will make the exact same choice every time ( that is my definition of determinism ), although he 'thinks' he could have chosen differently.

The only way I can be certain that Eise made a choice without causal influence by the internal and external forcings, is if I cannot tie that particular choice back to those particular forcings. IOW the choice cannot be distinguished from a random choice independent of the forcings.
This randomness I usually choose to associate with Quantum effects, but it could be an emergent property of system complexity.

The difference is that Quantum effects are fundamental ( that is why I choose that option ) while an emergent property of complexity simply means we don't understand it well enough yet, and is equivalent to simply 'kicking the can' of free will down the road.


1 hour ago, MigL said:

Assume I am observing Eise, and I have a 'super duper' brain scanner that can analyze his brain at the sub-molecular level, plus an environmental scanner that analyzes external conditions. When Eise makes a choice, I can analyze how that choice was made. It was determined ( deterministic ? ) by the sum/interplay of all internal ( brain scan ) and external ( environment scan ) forcings that caused that particular choice.

That is exactly what I expect. And therefore this does not follow:

1 hour ago, MigL said:

IOW that choice was not 'free' at all, but was 'forced' on Eise.

As long as I can do what I want, which is in my eyes the only correct definition of free will (well, its short version...), I have free will. 'Free will' just does not mean that it must be uncaused.

1 hour ago, MigL said:

Given the exact same forcings ( which may not be a possibility because of QM ) Eise will make the exact same choice every time ( that is my definition of determinism ), although he 'thinks' he could have chosen differently.

I could have, but not in the rigorous meaning you use here. 

Say a child is climbing a tree, and moves along a branch away from the trunk. At a certain moment her father sees that, finds it very dangerous, and yells that she must get out of the tree. Once down, the father says "The branch could have broken." Is that a true remark? Again, according to a rigid interpretation it is wrong: if the branch did not break (it is in the past), it could not have broken under exactly the same circumstances. What the father really means is that in situations that are very similar, branches can break: the branch could be less strong, the child could be slightly heavier, and the branch would have broken.

In the context of 'free will' the meaning is not different: in circumstances very similar to the situation I was in, I might have done something different. 

To make it a little more technical: the sentence "The branch could have broken" is a counter-factual, but true, statement. We know branches can break.

Compare visiting two different restaurants: one is a vegetarian restaurant, the other a normal one. There is a relevant sense in which the sentence 'I could have ordered a hamburger' is false for the vegetarian restaurant, but true for the normal restaurant. And this is the relevant meaning when we are talking about free will: 'I could have done otherwise'.

1 hour ago, MigL said:

The only way I can be certain that Eise made a choice without causal influence by the internal and external forcings, is if I cannot tie that particular choice back to those particular forcings. IOW the choice cannot be distinguished from a random choice independent of the forcings.

That means you postulate free will in the 'gap' in which science has nothing to say, i.e. QM cannot predict the exact outcome of an experiment, only the chance for it. But there is not a single shred of evidence that such a process exists. 

Edited by Eise

3 hours ago, MigL said:

The only way I can be certain that Eise made a choice without causal influence by the internal and external forcings, is if I cannot tie that particular choice back to those particular forcings. IOW the choice cannot be distinguished from a random choice independent of the forcings.

Hi MigL: Kudos on being clear about what you mean by "free": the choice has to be free of those "internal and external 'forcings'". Now I know this next question will seem silly to you (and its answer self-evident to you) but here goes anyway: Why would this, to you, constitute a free choice?

Among the other requirements Ghideon has, he insists there “must be something that the A.I. [agent] realistically could want or need to achieve….” and Eise echoes this possibility of satisfying the want: “As long as I can do what I want…I have free will”. The coercive language in which you couched your description of internal and external causes seems fair to me, but once we begin to speak of the wants of “Eise”, it no longer does: The “forcing” element seems, well, forced!

Let’s say we knew the possibility of which you speak existed—could have done otherwise in the exact same situation (have been "independent of the forcings"). Would that change the element of satisfying the want in any meaningful way? 
 


If the internal and external forcings are constraining your choices, limiting your degrees of freedom if you will, possibly even to only one choice, you may 'think' you are choosing, but you are constrained to make that choice.
That is NOT free will.

IOW, I know what 'free will' isn't, but I don't know what it is.
( interestingly enough, I know what is not random, but you can never be sure what is )


7 hours ago, Eise said:

To avoid having to write everything again, please use the search function of the forum, search for 'traffic jam' (complete phrase), and start with the earliest mention of it in my posts about free will and in the 'Split from AI sentience' thread. There you will find my description of an emergent phenomenon that has real physical impact.

Read it and liked it very much and +1-ed you accordingly. It's a very good illustration that concepts based on emergent quantities appear to point at agents that do not really exist.

 

7 hours ago, Eise said:

because causal relationships are always between two different events. 

Got you! Causal relationships don't have to be between two different events. When emergence is involved, they typically are between 10^24 (micro)events and one event.

 

7 hours ago, Eise said:

Determinism is a necessary condition for free will.

Really? Why?

Irrespective of your answer, which I'm pretty sure is going to be very interesting, you cannot ignore the social factor. Namely: the way in which most people use the concept of free will is to justify other secondary concepts like "guilt," "punishment," and the like. Some among this host of derived concepts, like "responsibility," may be useful and constructive, but many are definitely not.

As I said,

On 5/29/2020 at 11:26 AM, joigus said:

If you admit to the existence of an irreducible principle of free will, you're denying yourself the possibility of:

1) Early alert systems for different signals of different cognitive or behavioural deficiencies

2) Proper reaction to them when it's not too late yet

3) Avoidance, through proper monitoring, of suffering for these people and others caused by a chain of inevitable consequences further down the road

 

20 minutes ago, MigL said:

If the internal and external forcings are constraining your choices, limiting your degrees of freedom if you will, possibly even to only one choice, you may 'think' you are choosing, but you are constrained to make that choice.
That is NOT free will.

IOW, I know what 'free will' isn't, but I don't know what it is.
( interestingly enough, I know what is not random, but you can never be sure what is )

I couldn't agree more on this. And the last point was brilliant. +1

