
Evidence - The relation between perspective and evidence


Villain


To a blind man the sun is something that offers warmth; to those who can see, it is something that offers both warmth and light. It exists for both of them, but in different ways.

 

Does evidence cause your perspective, or does perspective cause your evidence? (Notice there is no "empirical" in this question.)


I would think that evidence would drive our perspective on something up to the point at which we 'understood' it, and from then on the inverse would apply, i.e. we would look for evidence to support our perspective.

 

Evidence is always interpreted in terms of what is known at the time. However, data are data. Interpretation may require knowing the context under which the data were obtained, or more data.

 

Would you describe evidence as being 'opinionated' data, or would you use evidence and data in a more interchangeable sense?


Would you describe evidence as being 'opinionated' data, or would you use evidence and data in a more interchangeable sense?

 

Interchangeable, but I thought this was not a discussion on what empirical evidence is.


I think that often what is seen as evidence in one age will not be accepted as evidence in another age. To give an example of what I mean, consider homosexual acts between men. I am heterosexual, and so was happy to accept, for over half of my life (I was in the British military until the age of 40), that there must be a reason for this discrimination and why it should be a crime. I think one classic piece of "evidence" was the "fact" that it was a reason for the collapse of the Roman Empire. We didn't want our Empire to go the same way, did we? All shake your heads! I now believe that the collapse of the Roman Empire was a very complex matter and homosexuality had little or nothing to do with it. The human mind being what it is, the change in my inner feelings from abhorrence to complete acceptance that these feelings are normal for others has been a slow process. People should be careful, and as objective as possible, when they present or consider "evidence".

http://www.telegraph...osexuality.html


Interchangeable, but I thought this was not a discussion on what empirical evidence is.

 

I'm of the opinion that data is without meaning, but evidence implies meaning, because in order to call data evidence it would have to be evidence of something.

 

Does evidence not imply empiricism? Does anyone have an example of evidence which is not empirical? (I think spending some time defining aspects of the topic is in line with the topic and helps build a stronger conclusion; I didn't want this to be only about empirical evidence, though.)

 

 


I would think that evidence would drive our perspective on something up to the point at which we 'understood' it, and from then on the inverse would apply, i.e. we would look for evidence to support our perspective.

 

Would you describe evidence as being 'opinionated' data, or would you use evidence and data in a more interchangeable sense?

 

It is fundamental to scientific endeavor to let the data guide your assumptions, rather than to let your assumptions guide your data.

 

Hence the formation of the hypothesis-testing approach - you don't form a hypothesis and keep collecting and cherry-picking evidence until you find the specific evidence which supports your a priori position. If the evidence as a whole makes your hypothesis untenable, you reject the hypothesis.

 

Opinion only enters the equation at the interpretation level. For example, I could conceivably do a distance-based redundancy analysis on my data x and y and find a significant correlation between them, while my colleague could conduct a Mantel test and find no correlation between x and y. If we have differing opinions on the best way to analyze and interpret the data at hand, we may come to differing conclusions from the same data, based on those differing opinions of the best method of interpretation (until I go and do a simulation study and show my colleague's method has a high Type II error rate :P).
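
To make the simulation-study idea concrete, here is a minimal sketch in Python of estimating a test's Type II error rate (failing to detect a real effect). A plain Pearson correlation test stands in for the specialized methods mentioned above (dbRDA, the Mantel test), and the sample size, effect size, and alpha are illustrative assumptions, not anyone's actual study design.

```python
# Minimal sketch: estimate a correlation test's Type II error rate by
# simulation. Pearson's test is an illustrative stand-in; all
# parameters below are made up.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_reps, n_samples, true_r, alpha = 1000, 30, 0.3, 0.05

misses = 0
for _ in range(n_reps):
    x = rng.normal(size=n_samples)
    # Build y to correlate with x at roughly true_r.
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n_samples)
    _, p = pearsonr(x, y)
    if p >= alpha:  # the test failed to detect the real correlation
        misses += 1

print(f"Estimated Type II error rate: {misses / n_reps:.2f}")
```

Run against two different tests on the same simulated data, this kind of study is how one would show that one method misses real effects more often than another.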


I'm of the opinion that data is without meaning, but evidence implies meaning, because in order to call data evidence it would have to be evidence of something.

 

OK, that's reasonable.

 

Does evidence not imply empiricism? Does anyone have an example of evidence which is not empirical? (I think spending some time defining aspects of the topic is in line with the topic and helps build a stronger conclusion; I didn't want this to be only about empirical evidence, though.)

 

The strength of empirical evidence is that the evidence itself is not subjective. If I can't examine the evidence or compare it to nature, it has no scientific value.

 

To add to what Arete said, science doesn't just look at how data support an argument. If the data support multiple arguments, then it's weak support. Science also looks at what data isn't there, in cases where data should exist.


Villain,

 

I would like to throw in the phrases "it's evident to me", "it is becoming evident" and "evidently....such and such...is true".

 

The "data" that provides us with these determinations, is what we ourselves notice or are told about by people we trust.

 

It is not a matter of high science, or probability theory, or error rates to look at the broken lever and say "evidently it was not strong enough for the job".

 

We learn about the world as soon as we first sense it, learning about it through our senses and our movements through it, and building an accurate analogy of it which improves in scope and detail as we age. There is no hesitation in modifying it (the model) immediately when the real world that the model represents changes. That is the point. When we look at a tree and see a tree, it is evidence enough that the tree is there. Our model of the tree is real-time, and adjusts automatically as we get closer to the tree, or farther away, or a branch breaks off in the wind.

 

I am of the recently gained opinion that much of our "thinking" goes right along with the world we are thinking about. Sure, we make metaphors and analogies, and map stuff and take ratios, but it is always from, or about, or in reference to the actual world that we are part of. We take the "stuff" we are thinking about from the world.

 

But that should be evident.

 

Regards, TAR2


As far as discrepancies in data are concerned, what is considered acceptable as "human error", or something similar to it (perhaps someone could give an example of something considered similar, if such a thing exists)? Is there a set value in the general scientific method (e.g. a 10% deviation in results), or is it specific to the kind of experiment/science being done?


Almost all results will be reported with a statistical confidence interval and an associated p-value.

 

http://en.wikipedia....idence_interval

 

http://en.wikipedia.org/wiki/P-value

 

Additionally, experiments are usually replicated a number of times to ensure that a consistent result can be obtained in independent runs of the same experiment.
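
As a minimal sketch of what such a report might look like, assuming Python with numpy and scipy and using made-up replicate values: the mean of the replicates is reported together with a 95% confidence interval, and a one-sample t-test gives a p-value against a hypothetical reference value.

```python
# Minimal sketch: report a replicated measurement with a 95%
# confidence interval and a p-value. The replicate values and the
# reference value are made up for illustration.
import numpy as np
from scipy import stats

replicates = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])  # hypothetical runs
reference = 9.8                                             # hypothetical reference

mean = replicates.mean()
sem = stats.sem(replicates)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(replicates) - 1,
                                   loc=mean, scale=sem)
t_stat, p_value = stats.ttest_1samp(replicates, reference)

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f}), "
      f"p = {p_value:.3f}")
```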


Villain,

 

So what is the nature of the "error" that you are talking about?

The extent of the difference between the "real" item referred to and the "model" of the thing?

Or the accuracy with which the expression of the principles and rules that seem to be present in the model matches the facts?

 

For instance: if a thing in reality is taken to be 100% true, then any analogy of it that we make is not complete. Is this the error?

 

Or if our expression of the fact is initially considered 100% true, to what extent is it not? Is this the error?

 

Regards, TAR2


Or if our expression of the fact is initially considered 100% true, to what extent is it not? Is this the error?

 

Given that Villain started a thread in which they posted:

 

What I am saying is that faith is something that is not completely exclusive to religion, and that the trust humans place in results and theories which they have not personally verified is very much the same as faith in religion.

 

It would seem that they are exploring the possibility that, because scientists are human, with natural and inescapable biases, and because scientific endeavor is conducted by humans, the results of scientific inquiry are inherently biased and thus amount to the same thing as opinions and faith.

 

What Villain appears unaware of is that the scientific method and best practices take full account of potential bias - not just the personal biases of individual scientists, but sampling bias, ascertainment bias, autocorrelation, placebo effects, outlier events, and so on ad infinitum. There are entire fields of statistical methodology and experimental design devoted entirely to the reduction of bias.

 

Experimental practices such as replication, statistical tests, double-blind testing, experimental controls, random simulation, and the peer review process are all in place to counter and reduce biases. Extreme lengths are gone to in order to reduce potential bias in experimental results; however, no experiment or data collection method is perfect, which is why virtually every empirical result in science is reported with a measure of error.

 

So in summary: scientists are fully aware of the role that biases play in the production of erroneous results, and rather serious lengths are gone to in order to reduce and eliminate bias in scientific results. It is widely acknowledged, however, that no result is perfect. As such, results are reported with degrees of potential error in virtually all cases.


As far as discrepancies in data are concerned, what is considered acceptable as 'human error' or something similar to that (perhaps someone could give an example of something that is considered similar if such a thing exists)? Is there such a value in the general scientific method (10% deviation in result) or is it specific to the kind of experiment/science being done?

 

I don't think human error, as the term is usually used (i.e. to mean a mistake), would be tolerable in any scientific result. Human error, once identified, invalidates the experiment unless it can be quantified and compensated for. For example, if you misread a setting or a reading, you have made a human error. You either go back and use the right numbers, or you toss the data and do the experiment over.


Arete,

 

Although I agree with you that many methods have been developed to try to take subjectivity out of scientific investigations, it seems any investigation would be rather empty if it did not have a human concern at its roots, and if the arguments upon which the hypothesis under test was based were not grounded in human assumptions and human understanding of the world.

 

Regards, TAR2

 

Any fact that has no bearing on humans, and no way for a human to imagine it, is rather unknowable. I would hardly say that a claim about such a thing, which could not be understood by a human, would have any meaning at all.

 

That being the case, I would vote against there being any "other" objective reality, beyond the one described by human peer-reviewed science, that has any import to humans.

 

That is, a single human cannot possess an understanding that is MORE objective than the understanding arrived at when everybody puts their models of the world together, where what is accepted as objective reality is that portion common to any and all subjective human takes.


Given that Villain started a thread in which they posted:

 

 

 

It would seem that they are exploring the possibility that, because scientists are human, with natural and inescapable biases, and because scientific endeavor is conducted by humans, the results of scientific inquiry are inherently biased and thus amount to the same thing as opinions and faith.

 

What Villain appears unaware of is that the scientific method and best practices take full account of potential bias - not just the personal biases of individual scientists, but sampling bias, ascertainment bias, autocorrelation, placebo effects, outlier events, and so on ad infinitum. There are entire fields of statistical methodology and experimental design devoted entirely to the reduction of bias.

 

Experimental practices such as replication, statistical tests, double-blind testing, experimental controls, random simulation, and the peer review process are all in place to counter and reduce biases. Extreme lengths are gone to in order to reduce potential bias in experimental results; however, no experiment or data collection method is perfect, which is why virtually every empirical result in science is reported with a measure of error.

 

So in summary: scientists are fully aware of the role that biases play in the production of erroneous results, and rather serious lengths are gone to in order to reduce and eliminate bias in scientific results. It is widely acknowledged, however, that no result is perfect. As such, results are reported with degrees of potential error in virtually all cases.

 

Hi Tar

 

Thanks for the reply; the previous thread to which you refer was not meant to be linked to this one. I am aware that there will be errors, as you have pointed out, and was trying to understand what is considered acceptable. I am not going to suggest that data must be completely identical to support a valid conclusion. The thread which you linked was based on human trust, and how we all have to trust in some way, and therefore religious faith is not as abstract a concept as people make it out to be. The 'human error' in this thread is not related to trust but refers to acceptable reasons for, and the acceptable amount of, adverse data for a valid observation to hold. A basic example might be baking a cake with a certain recipe: 500/510 times it comes out 'perfectly' according to the recipe, and 10/510 times it's a flop, with those 10 times put down to 'human error'.

 

 


The outcome of baking a cake is not typically a scientific result. Nothing is quantified, so nothing merits an error estimation.

 

Please consider which of the following would be acceptable from a scientific perspective:

 

To quantify the result, I would make the cake as per the recipe I have given, and then: 1. would I say that this recipe, when followed a certain way, produces a cake of exactly x*y*z proportions (knowing from experimentation that there would be a 5% variation), or 2. would I say that it is likely to be within 5% of those proportions?

 

 


Please consider which of the following would be acceptable from a scientific perspective:

 

To quantify the result, I would make the cake as per the recipe I have given, and then: 1. would I say that this recipe, when followed a certain way, produces a cake of exactly x*y*z proportions (knowing from experimentation that there would be a 5% variation), or 2. would I say that it is likely to be within 5% of those proportions?

 

You might express that as a volume ± 5%, or each dimension ± the appropriate value, especially if the aspect ratio were important. What you would not do is include data if the cake were not prepared according to the recipe, owing to human error.
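
As a rough illustration of the volume-versus-dimensions choice (a sketch with made-up numbers, not part of the post above): for small independent errors, the relative error of a volume is approximately the sum of the relative errors of the three dimensions, so dimensions at about ±1.7% each correspond to a volume at about ±5%.

```python
# Minimal sketch: first-order error propagation from per-dimension
# tolerance to volume tolerance. Dimensions and tolerances are
# illustrative, not measured values.
per_dimension_error = 0.05 / 3     # so the volume lands near +/- 5%
x, y, z = 30.0, 20.0, 8.0          # hypothetical cake dimensions, cm

volume = x * y * z
volume_error = volume * 3 * per_dimension_error  # sum of relative errors

print(f"volume = {volume:.0f} cm^3 +/- {volume_error:.0f} cm^3 (~5%)")
```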


You might express that as a volume ± 5%, or each dimension ± the appropriate value, especially if the aspect ratio were important. What you would not do is include data if the cake were not prepared according to the recipe, owing to human error.

 

Does the scientific method not work under the assumption that, all things being equal, the same result will occur? The human error that I talk of would be that reproducing something with all things exactly equal is probably impossible, and therefore variance will occur when one is trying to reproduce the same result.

 

If the above is correct, then how do we account for the variance without discounting the original result? At what point does the claim of x*y*z become untrue if I follow the recipe and recreate it as described, but attain a different result?

 

 


Does the scientific method not work under the assumption that, all things being equal, the same result will occur?

 

No. In any test, multiple outcomes are anticipated. In the simplest model you'd have a test hypothesis and a null hypothesis. Sound experimental design doesn't rely on a particular a priori result from observation.

 

The human error that I talk of would be that reproducing something with all things exactly equal is probably impossible, and therefore variance will occur when one is trying to reproduce the same result.

 

Variation in any empirical test is virtually inevitable. Hence we replicate results and use statistical analysis to verify the probability that the observations support a hypothesis. Read the link on confidence intervals. Variance in empirical results is not "human error" per se; it's simply how reality operates.

 

 

If the above is correct, then how do we account for the variance without discounting the original result? At what point does the claim of x*y*z become untrue if I follow the recipe and recreate it as described, but attain a different result?

 

We use replication and statistical verification, then apply a confidence interval. At least in my field, marginal confidence is awarded to a result which supports the test hypothesis with >95% confidence, and significance is assumed when confidence is >99%. How firmly you can support a hypothesis is determined by the statistical robustness of the result, in association with the number of replicates and support from independent, multivariate data sources.
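
As a minimal sketch of that reporting convention (the thresholds mirror the post; the function name and sample p-values are hypothetical, and treating confidence as 1 - p is a simplification for illustration):

```python
# Minimal sketch of the described convention: marginal support above
# 95% confidence, significance assumed above 99%.
def support_level(p_value: float) -> str:
    confidence = 1.0 - p_value  # simplification for illustration
    if confidence > 0.99:
        return "significant support"
    if confidence > 0.95:
        return "marginal support"
    return "insufficient support"

print(support_level(0.003))  # -> "significant support"
print(support_level(0.04))   # -> "marginal support"
print(support_level(0.20))   # -> "insufficient support"
```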

 

No scientific result is ever reported as 100% proof; some level of potential error is always recorded as a possibility in any scientific result. However, if a theory - such as evolution through random mutation and natural selection - is supported by hundreds of thousands of experimental, genetic, morphological, fossil, observational, and behavioral studies, each with dozens to millions of replicates, I'm sure you can understand that the overall support for the result is rather overwhelmingly positive.


Does the scientific method not work under the assumption that, all things being equal, the same result will occur? The human error that I talk of would be that reproducing something with all things exactly equal is probably impossible, and therefore variance will occur when one is trying to reproduce the same result.

 

If the above is correct, then how do we account for the variance without discounting the original result? At what point does the claim of x*y*z become untrue if I follow the recipe and recreate it as described, but attain a different result?

 

One thing it would depend on is whether you have a stochastic process, in which case you'd present the possible outcomes as probabilities. Baking a cake probably is not stochastic. If you have an event like the cake falling, you might notice that it correlated with a loud disturbance of some sort, and you could systematically test to see whether that was causal, and how much physical disturbance you can tolerate and still have the cake turn out properly. You could similarly place tolerances on the ingredient measurements via systematic testing.
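
A minimal sketch of that kind of systematic tolerance test, in Python: vary one ingredient around its recipe value and record whether the outcome holds up. The pass/fail rule here (the cake fails outside an 8% flour window) is a hypothetical stand-in for real bench results.

```python
# Minimal sketch: systematic tolerance testing on one ingredient.
# The recipe value and the 8% success window are made up.
import numpy as np

recipe_flour_g = 250.0

def cake_rises(flour_g: float) -> bool:
    # Hypothetical rule: the cake fails outside an 8% window.
    return abs(flour_g - recipe_flour_g) / recipe_flour_g <= 0.08

for deviation in np.arange(-0.15, 0.16, 0.05):  # -15% to +15% in 5% steps
    flour = recipe_flour_g * (1 + deviation)
    print(f"flour {flour:6.1f} g ({deviation:+.0%}): "
          f"{'rises' if cake_rises(flour) else 'falls'}")
```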


Villain,

 

I am not exactly sure what happened with this thread, but "evidently" I had assumed I missed something and was responding to Arete's post, which brought in a "clarification" of what the thread question was. You attributed something Arete brought to the table to me...I think.

 

But that's OK; nothing was said that I would be worried about. And it does bring up a point that I was sort of alluding to.

 

The point is that part of the question of "error" is the question of truth, or "non-error", and there is an inherent difference between what one expects to be the case and what is actually the case. I would express the difference in this manner: what one expects to be the case is an approximation, a guess, a prediction, based on what has happened before, of what is going to happen next. The "truth", on the other hand, is what actually happens.

 

If your question is what amount of error is attributable to human expectations, then in light of the above expression of the issue, I would have to answer "all of it".

 

After all, the world cannot do anything "wrong". It either does it or it doesn't, and it cannot take anything back once done. And everything "fits" automatically with that which is in the area now, and eventually with things farther away, as the impulses from the thing spread to the rest of the universe, like ripples in a pond.

 

So science is a matter of constantly improving our model of the world so we can more and more accurately predict what will happen next, given certain starting situations. The starting situations have to be described, and the expectations of what will happen next have to be described. If we are "surprised" by the results, it is not the "fault" of the world; it is the fault of our expectation, which we then modify appropriately in determining what we expect to happen next.

 

The world itself is always in possession of the truth. We, on the other hand, are only in a position to internalize such truth a little at a time. We cannot make a model of the thing that is actually the thing.

 

My favorite example: if you wrote the complete formula for a peanut butter cup, every atom, every quark, with its exact position and momentum, and took a bite of the formula, it would not taste like a peanut butter cup.

 

Regards, TAR2


To a blind man the sun is something that offers warmth; to those who can see, it is something that offers both warmth and light. It exists for both of them, but in different ways. Does evidence cause your perspective, or does perspective cause your evidence? (Notice there is no "empirical" in this question.)

 

This is not a case of either/or; it's a case of and. We are born with a unique physical perspective with which we gather the (presently ambiguously defined) "evidence". The perspective described in the question is one of ideas, not just physical evidence, as implied by the blind man story.

 

What is accepted as evidence is different for each person; evidence is what makes a person believe they know something. What a person finds acceptable as convincing evidence depends on their prior perspective (experience).

 

Short of having a definition for evidence: we can change the way we see things, the angle from which we see things, and the way we think about things, but evidence does not change. Given this, evidence should cause perspective, though there is no imperative.

 

By definition there is no evidence beyond empirical evidence. Something touted as evidence that is not empirical is, at absolute best, a reasoned belief.

