# Why is Science Focused on Reductionism?

## Recommended Posts

I am curious as to why modern science has become so heavily reliant upon a mathematical, reductionist approach in explaining observable/natural phenomena (there are exceptions, of course, including game theory, sociology, and macroeconomics).

It seems that we have become addicted to explaining nature by disassembling it, rather than trying to understand how the parts influence each other in a higher-order, dynamic fashion. There is a place for reductionism in science, yet is there not also a place for emergence (in particular, strong emergence) and holism?

There is a level of disdain for explanations that speak of strong emergence or holism that borders on viewing them as pseudoscience.

A great deal of observable reality cannot be explained through a linear and/or reductionist approach, i.e. there are structures or properties that cannot be explained by their parts alone. This is because the products of the interacting parts "reach down" and affect the interactions of those same parts.

Perhaps the answer is that math is very difficult to apply to such systems, thus attempts to explain them come off as unscientific. Take for example a cycle in which neurons in a subsystem of the human brain (say, the midbrain) interact for a few hundred milliseconds and then send a signal to another subsystem (say, the cortex); the second subsystem's neurons interact for a period of time and then send a signal back to the first subsystem, affecting its future output. How could we explain such an interaction mathematically, even knowing the details of every single neuron during every millisecond? Each time the emergent signal is relayed in a full loop, not only do the rules of the system change, but the efficacy of individual synapses does as well.
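As a toy sketch of such a loop (entirely illustrative; nothing here is a model of real neurophysiology), consider two coupled subsystems whose connection weights, standing in for synaptic efficacy, are updated after every full exchange, so each pass through the loop is governed by a slightly different system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch, NOT a biophysical model: two small "subsystems" A and B,
# each a vector of unit activities, coupled by weight matrices that are
# themselves updated after every exchange (a crude stand-in for synaptic
# efficacy changing each time the loop completes).
n = 4
a = rng.random(n)                # state of subsystem "A" (e.g. midbrain)
b = rng.random(n)                # state of subsystem "B" (e.g. cortex)
W_ab = rng.random((n, n)) * 0.1  # coupling A -> B
W_ba = rng.random((n, n)) * 0.1  # coupling B -> A

def step(a, b, W_ab, W_ba, eta=0.01):
    """One full loop: A drives B, B drives A, then both couplings adapt."""
    b = np.tanh(W_ab @ a)        # signal relayed A -> B
    a = np.tanh(W_ba @ b)        # signal relayed back B -> A
    # Hebbian-like update: the rules (weights) themselves change,
    # so the next loop is governed by a slightly different system.
    W_ab = W_ab + eta * np.outer(b, a)
    W_ba = W_ba + eta * np.outer(a, b)
    return a, b, W_ab, W_ba

for _ in range(100):
    a, b, W_ab, W_ba = step(a, b, W_ab, W_ba)
```

Even in this tiny system, predicting the state after many loops requires replaying every loop, because the rules in effect at step k depend on the entire history of the exchange.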

I don't have the answer, yet it doesn't mean we should abandon attempts to explain such structures in nature as these systems are as important, if not more so, than easily reduced systems.

##### Share on other sites

Some emergence/holism is pseudoscience.

Not all scientific investigation is reductionist. But if you want to make an upper-level connection, you have to investigate it rigorously. It's the lack of rigor that is viewed with disdain, AFAICT.

##### Share on other sites

Humans are categorical creatures by nature, and as such we simply enjoy the organization of systems to their simplest form.

##### Share on other sites
> Some emergence/holism is pseudoscience.
>
> Not all scientific investigation is reductionist. But if you want to make an upper-level connection, you have to investigate it rigorously. It's the lack of rigor that is viewed with disdain, AFAICT.

Would you consider it even possible to explain holistic or emergent systems through mathematical language? Emergent systems supervene on their components, potentially changing the lower-order rules at each juncture, and in turn the altered rules affect how the system behaves at the emergent level.

There's no point one can "step in" and measure any quantity meaningfully. And without meaningful quantification, how can math describe these systems? To understand some of it, you must understand (nearly) all of it.

I know that it seems unscientific, but there may be certain phenomena that we can only understand by means that are minimally mathematical. Another way of putting it: it may take holistic intelligence to understand holistic systems, including intelligence itself.

Edited by Luminal

##### Share on other sites

It is an interesting question. When it comes to describing non-locality, the mathematical tools are not as well developed, or at least not as widely known, as the local tools are.

The mathematical tool of locality is the differential. By definition, a derivative is evaluated at exactly one point in time or space. $\nabla$ only looks at the fluxes around an infinitesimal volume in space, not the volume next to it, or the next one over, or the one after that.
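To make that locality explicit, the divergence can be written as the limit of a surface-flux integral over a shrinking volume:

$$\nabla \cdot \mathbf{F}(\mathbf{x}) = \lim_{|V| \to 0} \frac{1}{|V|} \oint_{\partial V} \mathbf{F} \cdot \mathrm{d}\mathbf{A}$$

Everything outside the infinitesimal volume around the point drops out of the definition.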

The tool of non-locality is the integral. And while many scientists and engineers are very comfortable with differential equations, far fewer are comfortable with integral or integro-differential equations.

But, there is a mathematical framework to describe these things. Many fields use non-local descriptions.

Turbulence is one I am familiar with. The energy cascade from large vortexes to small ones is a non-local phenomenon. Every vortex larger than the smallest possible one can feed into any vortex that is smaller. So, to account for all the energy entering a vortex of a particular size, you have to integrate over all the vortexes larger than the one you are looking at.
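Schematically (glossing over many details), the spectral energy budget takes the form of the Lin equation:

$$\frac{\partial E(k,t)}{\partial t} = T(k,t) - 2\nu k^2 E(k,t)$$

The viscous term is local in $k$, but the transfer term $T(k,t)$ involves an integral over all interacting wavenumbers, which is exactly the non-locality described above.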

Another is population dynamics. Consider a distribution of particles undergoing a breakage process (i.e. in a grinder or something). If you are looking at the number of particles of a particular size, you have to integrate over all the larger particles, because any of them may break and form particles of the size you are looking at. The reverse process, agglomeration of two small particles into a larger one (e.g. coalescence of oil droplets), is actually a double integration: you have to look at all the particles smaller than the size you are looking at, and then, for each of those, look at all the available particles it could join with to make a particle of the size you are looking at.
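For reference, the pure-breakage population balance has the one-integral form, and Smoluchowski-type aggregation the double-integral form (the $\tfrac{1}{2}$ avoids double-counting pairs):

$$\frac{\partial n(v,t)}{\partial t} = \int_v^{\infty} b(v \mid v')\, S(v')\, n(v',t)\, \mathrm{d}v' - S(v)\, n(v,t)$$

$$\frac{\partial n(v,t)}{\partial t} = \frac{1}{2}\int_0^{v} \beta(v-v',\, v')\, n(v-v',t)\, n(v',t)\, \mathrm{d}v' - n(v,t)\int_0^{\infty} \beta(v,v')\, n(v',t)\, \mathrm{d}v'$$

Here $n(v,t)$ is the number density of particles of size $v$, $S(v')$ the breakage rate, $b(v \mid v')$ the daughter-size distribution, and $\beta$ the aggregation kernel.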

In a different but related vein, some polymers exhibit a memory: the stress in the polymer at a given point in time depends on the stresses the polymer has undergone over some past period. Depending on the polymer, that memory can be as short as a few microseconds or as long as decades. When describing these materials, you have to integrate back in time over the memory of the polymer to determine its current state and what it will do in the next instant.
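In linear viscoelasticity, this memory is the Boltzmann superposition integral:

$$\sigma(t) = \int_{-\infty}^{t} G(t - t')\, \dot{\gamma}(t')\, \mathrm{d}t'$$

where $\sigma$ is the stress, $\dot{\gamma}$ the strain rate, and $G(t - t')$ the relaxation modulus; how quickly $G$ decays sets the length of the memory (microseconds to decades, as above).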

A. Cemal Eringen has written an entire book about non-local continuum theories, Nonlocal Continuum Field Theories, which describes how to mathematically treat things like the examples above.

I guess my main point is that the tools to describe these things do exist. But these non-local phenomena lead to significantly more complex models than local ones, and we certainly haven't solved all the local phenomena yet; the purely differential world is difficult enough without introducing integro-differential equations. (As an example, there are plenty of stiff, very interesting problems with Newtonian fluids (where stress is local and instantaneous) to try your hand at, without going into polymer fluid mechanics with its non-local stresses.)

But there are people working on them; there just aren't as many. These problems are generally more difficult, and many people prefer not to tackle the really hard problems until the easier ones are finished. I.e. people usually pluck the low-hanging fruit first.

##### Share on other sites

Interesting. What would be your take on the distinction between 'weak' and 'strong' emergence given a mathematical approach?

Depending on how you define these terms, there could be upwards of 3-4 different types of systems (if you have a sturdier definition, disregard these):

1 - Uniform: The system has the same properties as its parts. For example, the properties of elements stay generally the same throughout changes in quantity.

2 - Weak Emergence: The system's parts interact in such a way that the system has properties that a single part cannot have, due to nonlocality or chaos. For example, weather systems or feedforward neural networks.

3 - Strong Emergence: The system's parts interact in such a way that the system's behavior cannot be predicted by viewing either a single part or all the parts individually; the emergent properties create feedback to the components. For example, recurrent neural networks.

4 - 'Very Strong' Emergence: Same as above, except the feedback from the higher-order system additionally changes the basic rules of its components. For example, suppose the left and right hemispheres are communicating (with their individual neurons producing strong emergence), and one region stimulates the release of a certain neurochemical in the other, which temporarily alters the conditions under which its neurons fire. Little, if anything, can be predicted, seeing as the rules are updating constantly. A different mathematical model would be needed each time the rules renewed.

Sorry if that meandered a bit.

Edited by Luminal

##### Share on other sites
> Would you consider it even possible to explain holistic or emergent systems through mathematical language? Emergent systems supervene on their components, potentially changing the lower-order rules at each juncture, and in turn the altered rules affect how the system behaves at the emergent level.

I don't see how that's possible. The higher-order rules have to converge to the lower-order rules. You can't change the outcome of my experiment by writing down an equation after-the-fact unless you are willing to toss a few observed concepts, like causality.

> Interesting. What would be your take on the distinction between 'weak' and 'strong' emergence given a mathematical approach?
>
> Depending on how you define these terms, there could be upwards of 3-4 different types of systems (if you have a sturdier definition, disregard these):
>
> 1 - Uniform: The system has the same properties as its parts. For example, the properties of elements stay generally the same throughout changes in quantity.
>
> 2 - Weak Emergence: The system's parts interact in such a way that the system has properties that a single part cannot have, due to nonlocality or chaos. For example, weather systems or feedforward neural networks.
>
> 3 - Strong Emergence: The system's parts interact in such a way that the system's behavior cannot be predicted by viewing either a single part or all the parts individually; the emergent properties create feedback to the components. For example, recurrent neural networks.
>
> 4 - 'Very Strong' Emergence: Same as above, except the feedback from the higher-order system additionally changes the basic rules of its components. For example, suppose the left and right hemispheres are communicating (with their individual neurons producing strong emergence), and one region stimulates the release of a certain neurochemical in the other, which temporarily alters the conditions under which its neurons fire. Little, if anything, can be predicted, seeing as the rules are updating constantly. A different mathematical model would be needed each time the rules renewed.
>
> Sorry if that meandered a bit.

At a very basic level, chemistry and biology are emergent theories of physics. Physics itself, with e.g. its different classical and quantum-mechanical descriptions, is emergent. Are stochastic systems (e.g. thermodynamics) emergent?

##### Share on other sites
> I don't see how that's possible. The higher-order rules have to converge to the lower-order rules. You can't change the outcome of my experiment by writing down an equation after-the-fact unless you are willing to toss a few observed concepts, like causality.

Here's an example I found from a quick Google search, involving a cellular automaton: http://forum.wolframscience.com/archive/topic/788-1.html

My own example: in a program, the variables u, v, and w are given a value by the user each loop, while x, y, and z are determined within the program (initialized randomly on the first loop). Many higher-order functions in the program depend upon the interaction of these variables.

- The first loop: y = ux + wy. Elsewhere in the program, the left-hand variable (y in this case) sets the number of times a function is called. One of these functions determines the order of variables and the operations used in the previous step.

- The second loop: x = vyu * uw - y. With the fundamental rules now changed, including which variables are chosen in upcoming functions, the emergent properties change as well, possibly creating new ones or eliminating old ones.

It may be that this system would eventually converge on some repeating pattern, except that there is input from the user (analogous to sensory input in the brain). The rules keep changing in unpredictable ways, and nonlocal information prevents convergence.
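A runnable toy along these lines (the specific rewrite scheme is my own invention for illustration, not the program described above; only the variable names follow the post):

```python
import random

random.seed(1)

# Illustrative toy: u, v, w come from "user input" each loop; x, y, z are
# internal state. A "rule" is a tuple (target, var1, op, var2), e.g. y = u + x.
state = {"x": random.random(), "y": random.random(), "z": random.random()}
VARS = ["u", "v", "w", "x", "y", "z"]
OPS = {"+": lambda p, q: p + q,
       "-": lambda p, q: p - q,
       "*": lambda p, q: p * q}

rule = ("y", "u", "+", "x")

def rewrite_rule(rule):
    """The higher-level outcome feeds back and replaces the low-level rule."""
    return (random.choice(["x", "y", "z"]),
            random.choice(VARS), random.choice(list(OPS)), random.choice(VARS))

def loop_once(user_input, state, rule):
    env = {**state, **user_input}
    target, p, op, q = rule
    state[target] = OPS[op](env[p], env[q])
    # The magnitude of the result decides how many times the rule is
    # rewritten before the next loop (cf. "number of times a function
    # is called" in the post); capped at 5 to keep the toy tame.
    for _ in range(min(5, int(abs(state[target])) + 1)):
        rule = rewrite_rule(rule)
    return state, rule

for _ in range(10):
    user_input = {"u": 1.0, "v": 0.5, "w": -0.5}  # stands in for external input
    state, rule = loop_once(user_input, state, rule)
```

Because the rule in force at each loop depends on every previous loop plus the external input, there is no closed-form shortcut: you can only find the state at loop N by running all N loops.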

I think understanding, classifying, and reproducing emergence is one of the greatest frontiers left in science.

Edited by Luminal

##### Share on other sites
> Here's an example I found from a quick Google search, involving a cellular automaton: http://forum.wolframscience.com/archive/topic/788-1.html
>
> My own example: in a program, the variables u, v, and w are given a value by the user each loop, while x, y, and z are determined within the program (initialized randomly on the first loop). Many higher-order functions in the program depend upon the interaction of these variables.
>
> - The first loop: y = ux + wy. Elsewhere in the program, the left-hand variable (y in this case) sets the number of times a function is called. One of these functions determines the order of variables and the operations used in the previous step.
>
> - The second loop: x = vyu * uw - y. With the fundamental rules now changed, including which variables are chosen in upcoming functions, the emergent properties change as well, possibly creating new ones or eliminating old ones.
>
> It may be that this system would eventually converge on some repeating pattern, except that there is input from the user (analogous to sensory input in the brain). The rules keep changing in unpredictable ways, and nonlocal information prevents convergence.
>
> I think understanding, classifying, and reproducing emergence is one of the greatest frontiers left in science.

How are the rules rewritten? How does y = ux + wy get changed to x = vyu * uw - y?

##### Share on other sites
> How are the rules rewritten? How does y = ux + wy get changed to x = vyu * uw - y?

Well, there are several ways that I'm aware of, including neural networks.

For example, suppose a feedforward NN has 10 possible outputs (4 operators and 6 variables) and is run in a loop as many times as the value of the left-most variable (recursion). So, if u is the left-most variable and equals 4, the NN is run 4 times, with the first output going to the left of the = and the next 3 going to the right. The next time the NN is run, it will loop a number of times equal to the new left-most variable's value. By the way, when an operator is the output, it must fall between two variables; otherwise the NN is re-run until a variable is the output.
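A minimal Python sketch of that token-sampling scheme. The "network" here is just one random linear layer with a softmax over the 10 tokens, and the resampling rule is my reading of the description above, so treat both as assumptions rather than the actual program:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 possible outputs: 4 operators + 6 variables, as in the post.
TOKENS = ["+", "-", "*", "/", "u", "v", "w", "x", "y", "z"]
W = rng.normal(size=(10, 6))  # toy "network": maps 6 variable values to 10 scores

def sample_token(values, want_variable=False):
    """Run the 'network' once and sample a token; if an operator is illegal
    in this slot, re-run until a variable comes out (as the post describes)."""
    while True:
        logits = W @ values
        p = np.exp(logits - logits.max())          # softmax over the 10 tokens
        tok = rng.choice(TOKENS, p=p / p.sum())
        if not (want_variable and tok in "+-*/"):
            return tok

def build_rule(values):
    """Emit the left-hand variable, then as many right-hand tokens as the
    left-most variable's value, never starting/ending on an operator and
    never placing two operators in a row."""
    lhs = sample_token(values, want_variable=True)
    n = max(1, int(abs(values[TOKENS.index(lhs) - 4])))  # loop count = lhs value
    rhs = []
    for i in range(n):
        need_var = (i == 0) or (rhs[-1] in "+-*/") or (i == n - 1)
        rhs.append(sample_token(values, want_variable=need_var))
    return lhs, rhs

values = np.array([4.0, 1.0, 2.0, 0.5, 3.0, 1.5])  # current u, v, w, x, y, z
lhs, rhs = build_rule(values)
```

Note that adjacent variables (with no operator between them) are allowed, matching the juxtaposed forms like `vyu` and `uw` in the example rules above.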

Note: In C++, this would be accomplished through template functions. I'm not too sure about other languages, though.
