The Greek philosopher Plato arrived at much of his philosophy through thinking about things real hard. He did this because he believed that only through rational means could one come to truth - the world of the senses, of empirical fact, was riddled with inaccuracies and error.
In his "Discourse", the renaissance philosopher Descartes performed many "mind experiments" to prove contentions about the existence of God. Following Plato, he thought error could be avoided in these "mind experiments" by accepting as fact only that which his mind could perceive "clearly and keenly" - another way of saying "thinking real hard".
For eons, "thinking about things real hard" stood as our science. This mindset led us to conclusions about our world that violently clashed with reality. Perhaps things came to a head with the works of Aristotle, who, ironically, is considered to be an empirical thinker. He once put forth the proposition that women had fewer teeth than men.
As the philosopher Bertrand Russell noted:
"Aristotle could have avoided the mistake of thinking that women have fewer teeth than men, by the simple device of asking Mrs. Aristotle to keep her mouth open while he counted." - Bertrand Russell (1872-1970), Unpopular Essays, "An Outline of Intellectual Rubbish" (1950).
What Russell is talking about here is the scientific method - the contention that the only real way to learn about the world is through our senses, even if that means we can never have metaphysical certainty. Philosophers like Francis Bacon and John Stuart Mill helped revitalize the empirical view of knowing the world. It's the most direct and elegant manner of knowing that there is.
The scientific method is the best manner known to uncover truth, for it is the only system of thought that purposely attempts to reduce the bias and self-delusion that crept into the philosophical discourses of these antiquated thinkers. While there is no single, actual "Scientific Method" used by all scientists - scientists do not follow a rigid procedure-list, and they employ a wide variety of methods - the general steps of the method were first formulated by Francis Bacon:
The "Steps" of the Scientific Method[]
The quotation marks around the word "steps" above indicate that there is no one actual set of steps. Ian Hacking explains it best:
- "Why should there be the method of science? There is not just one way to build a house, or even to grow tomatoes. We should not expect something as motley as the growth of knowledge to be strapped to one methodology." -Ian Hacking
This said, an attempt to provide the most likely set of steps does prove useful:
1) Observe some aspect of the universe, "free from bias."
2) Invent a hypothesis that is consistent with your empirically described observations.
3) Use the hypothesis to make falsifiable predictions.
4) Test those predictions by experiments or further observations.
5) Modify the hypothesis into a theory in the light of your results.
6) Publish your findings in a peer-reviewed journal - one whose reviewers have an understanding of both the scientific method and your particular field of research.
7) Consider the criticisms offered, and revise your theory.
8) Go to step 3.
There are questions and criticisms of nearly every step of this process (and we'll cover them). In fact, one of the best criticisms would be to question whether or not it is really possible to place the method into a sequential series of steps in the first place. However, for now, it is essential that we grasp that the scientific method is a systematic manner of observing "things as they are", forming hypotheses on these observations, testing the hypotheses with experimentation (where possible), and then using the acquired facts to form theories to describe, explain, predict and perhaps even control these phenomena. The scientific method works best in cases where controlled experiments can be used to support the existence of a causal relationship between an independent and dependent variable, by disproving the antithesis of the theoretical relationship, the null hypothesis. We know from logic that when two hypotheses contradict one another (your hypothesis H1 and the null hypothesis), at least one of them must be wrong. If you disprove the null hypothesis, then you support your hypothesis, provided that it fits all the facts.
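To make the null-hypothesis logic concrete, here is a minimal sketch in Python (using scipy); the measurements, group sizes and the 0.05 significance threshold are invented for illustration, not taken from any real study:

```python
# Minimal null-hypothesis test: does the independent variable (a treatment)
# shift the dependent variable (a score)?  Data below are made up.
from scipy import stats

control   = [71, 68, 75, 73, 69, 70, 72, 74]   # no treatment
treatment = [78, 80, 74, 82, 79, 77, 81, 76]   # treatment applied

# H0 (null hypothesis): both groups have the same mean.
# H1: the treatment changes the mean.
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < 0.05:   # conventional (and somewhat arbitrary) threshold
    print(f"p = {p_value:.4f}: reject the null hypothesis -- the data support H1")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")
```

Note that "rejecting the null" only supports H1; it does not prove it - exactly the asymmetry discussed in the logic section further down.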
We also do a validity and reliability check on our experiments.
A controlled experiment is said to be valid and reliable only when it can show that extraneous variables have been controlled for - reduced as much as possible. Differences in our research results that are due not to a real difference in reality, but to our biases and to chance, are known as error variance.
If we can eliminate as many intervening variables as possible, and (nearly) isolate the effects of the independent variable on the dependent variable, then the effects we observe in the dependent variable are said to be caused by the independent variable - real differences in our experiments are called systematic variance. When we have systematic variance, we can speak of causality! Yet, even in this case, the method requires independent replication of findings before a theory is formed - one experiment could be a fluke caused by random factors - replication reduces this error. And finally, even in the case of repeated replication, we never prove our theories absolutely true.
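The fluke problem can be shown with a short simulation (a sketch with arbitrary parameters): here the "treatment" has no real effect at all, yet a single experiment still comes up "significant" about one time in twenty, while requiring an independent replication makes that false alarm far rarer:

```python
# Simulation: how often does pure chance masquerade as systematic variance?
# Sample size, alpha and the number of runs are arbitrary choices.
import random
from scipy import stats

random.seed(1)

def one_experiment(n=20):
    """Both groups drawn from the SAME distribution: the true effect is zero."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return stats.ttest_ind(a, b).pvalue < 0.05   # "significant" by chance?

runs = 2000
single     = sum(one_experiment() for _ in range(runs)) / runs
replicated = sum(one_experiment() and one_experiment() for _ in range(runs)) / runs

print(f"false alarms, single experiment:    {single:.3f}")      # around 0.05
print(f"false alarms, with one replication: {replicated:.3f}")  # far lower
```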
Caveats
"Ask a scientist what he conceives the scientific method to be and he adopts an expression that is at once solemn and shifty-eyed: solemn, because he feels he ought to declare an opinion; shifty-eyed because he is wondering how to conceal the fact that he has no opinion to declare." - Sir Peter Medawar
Theorists are fond of 'stage models'. They are neat. Simple. Quick. You can quantify the steps and stages, and even place an entire field of phenomena into them.
But reality is a continuum, not a discrete series of categories. There is no way to really say that every step happens, separate from the others, in one specified order. There are many parts of science that cannot easily be forced into the mold of "hypothesis-experiment-conclusion." Astronomy is not an experimental science. Paleontologists don't perform paleontology experiments. And yet it would be an error to hold that a field that does not fit into the scientific method mold is not science. Science merely requires falsifiability. Both astronomy and paleontology make falsifiable claims. A new observation could falsify current theory.
Another reason to question the fitness of a stage model is that science is "anomaly driven." Isaac Asimov once noted that "the most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I found it!) but 'That's funny...' " This suggests that lots of important science comes NOT from proposing hypotheses or even from performing experiments, but instead comes from unguided observation and curiosity-driven exploration. Great discoveries often come about when scientists notice anomalies. They see something inexplicable in the course of other research, and that triggers some new research. Or sometimes they notice something weird out in Nature - something not covered by current theory.
Given this reality, it is probably better to examine the scientific method by looking at 'main principles' of the scientific method rather than specific steps.
What are the main principles of the Scientific Method?
The first principle of the scientific method is "unbiased" observation.
"Unbiased Observation" - or How to Observe, not Look
There is a difference between seeing and observing. The best example comes from Sherlock Holmes. There is a story wherein Watson asks Holmes the difference between seeing and observing, and Holmes asks Watson how many steps lead up to their office. Watson is unable to answer despite years of having used the steps. Holmes replies that this is because Watson has merely seen them: "Now, I know that there are seventeen steps, because I have both seen and observed."
Some critics of the scientific method point out that "all observation is biased" because theory formation begins even before we observe - and this is quite true. Others point out that observation itself is impossible without a theory of knowledge - i.e. a means of picking out the "pertinent" information from amongst the "extraneous." But the better truth is that this very knowledge about the limitations of science CAME from science! Science is its own best policeman. Better to work in a system that we know to be limited, and that is open to examination, than in a system of introspection that is closed to such self-knowledge and that provides no such mechanisms for evaluation!
When we learn how to observe, rather than just see, we recognize realities that obliterate our prejudiced presumptions about the facts, like Russell's example about Mrs. Aristotle's teeth. Using the above example, Watson may have previously thought there were either more or fewer steps leading up to their office, depending upon how difficult the climb appeared to him. This may have predisposed him to assume a certain number (the bias critics point to), but his actual trained observation of them would have obliterated this false presumption in the face of the facts.
So yes, observation is biased, but trained observation by an open-minded (yet experienced) scientist can lead to a closer approximation of whatever the "truth" is than any solely introspective method can. Better yet, as you will see in later sections, we do not have to rely on the observations of one observer - we tend to use only observations shared by many people, at many different times. And even more, we have methods of measuring inter-rater reliability. (Again, this will be discussed in a later section.)
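One standard measure of inter-rater reliability is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance alone. A minimal sketch, with ratings invented purely for illustration:

```python
# Cohen's kappa for two raters classifying the same ten items.
# The ratings are invented for illustration.
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no",  "yes", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "no", "yes", "yes", "yes", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # raw agreement

# Chance agreement: probability both raters independently pick the same category.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"raw agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```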
The other principles of the scientific method are vigorous challenging of scientific theories, and tentative acceptance of, rather than dogmatic adherence to, theories. These theories must make falsifiable statements - statements that can be shown to be false. If you ever hear a purported scientist talking in absolutes about "proof", you know you have a phony on your hands. If you ever hear someone talk about a theory being "bulletproof", then they are talking about dogma. (If you ever see me writing such a statement, then clearly it's a typo or taken out of context.)
What is the principle of falsifiability and why is it important to be able to be wrong?
The principle of falsifiability states that theories must have confirmable (not fluid) positions and make specific predictions that are open to being disproved - if they don't, if we don't know when they are wrong, then they have no predictive value. Citing a body of thought as having no falsifiability is a criticism, not a compliment. A predictive system that can never be wrong isn't science, it's religion.
This principle is what distinguishes science from religion. Since religious dogma is said to be always true, it cannot honestly predict anything, because any outcome will support the dogma. Everything that happens supports the theory, therefore there is never any way of knowing what will happen! In fact, with dogma, what we really have is "postdiction", where outcomes of "prophecy" are interpreted to fit the prediction after the event has already occurred. Support for biblical prophecies always points backwards to events that have already occurred - they are never used to predict future events. The scientific method rejects these postdictions as useless. The only valid way of evaluating the predictive power of a theory is for the theory in question to first make coherent, clearly defined predictions before the event in question. Check the historical record: you'll see scientific theories predicting events or outcomes often centuries in advance, while on the other hand, the "predictions" of soothsayers and religions are only "uncovered" after the fact.
Instead, if we create a theory that is falsifiable - one that can be wrong - we then know what would prove the theory false. And if we know what would prove a theory false, we can then know what will support its truth. Through this we can verify the utility of the theory as a predictive device and use it to meaningfully predict events. Since scientific theories are contingent and inter-related with factual outcomes, we can support those theories that work and reject those that don't. This allows us to error-check and correct theories.
Falsifiability and the Efficacy of Theory
The scientific method forfeits the pseudo-"certainty" of dogma for the efficacy, usefulness and predictive power of theory. It is humorous to note that, for this reason, nonfalsifiable claims are "not even wrong."
These principles guide scientific research and experimentation, and also form the philosophical basis of the method itself. The era of modern science is generally considered to have begun with the Renaissance, particularly with the works of Francis Bacon, but the rudiments of the scientific approach to knowledge can be observed throughout human history.
Scientific Tentativeness
Scientists are humans. Therefore, they fall into the trap of thinking in terms of certainty and absolutes. But science itself does not operate this way. Science assumes the existence of error. It even provides people with an estimate of the error it may be making. To do this, science uses something called "error bars" to denote the level of uncertainty about its statements.
What is an error bar? Well, have you ever seen a political poll where you are told the information given to you is correct, plus or minus 3 points, or some other percentage? This is the error bar of the poll - a measure of how far the reported figure is likely to stray from the true value. There are various methods for determining what the error is in any scientific statement. This is of course an estimate in and of itself - we're never sure - after all, in order to know the error percentage for certain, we would have to know the correct measure for certain!
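For the polling example, the familiar "plus or minus 3" is roughly a 95% confidence interval on a proportion, and it is easy to compute. A quick sketch (the sample size and poll result are made up):

```python
# Margin of error for a poll: an approximate 95% confidence interval
# for a proportion.  The numbers are invented for illustration.
import math

sample_size = 1000
in_favor    = 520
p_hat = in_favor / sample_size                    # observed proportion: 0.52

std_err = math.sqrt(p_hat * (1 - p_hat) / sample_size)   # standard error
margin  = 1.96 * std_err                                  # ~95% interval (z = 1.96)

print(f"estimate: {p_hat:.1%} plus or minus {margin:.1%}")   # roughly 52% +/- 3%
```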
There is no crying in baseball.
And "no certainty in science." (We reserve certainty for deductive reasoning and religion.)
The important point is that science not only admits to uncertainty, it tries to give you an idea of just how uncertain it is, and it refuses to make statements without humbly revealing its tentativeness.
Where are the church's error bars?
Inductive and Deductive Reasoning
The scientific method combines empiricism with two forms of reasoning, inductive and deductive reasoning.
The move from theory to data involves the logical process known as deduction: reasoning from general abstract statements towards the prediction of specific empirical events with a probability greater than chance. The predictions about specific events that are derived this way from a theory are called hypotheses. Hypotheses lead to the design of a study, which produces results that may support the predictions of the hypothesis. If the hypothesis is supported by a large body of research, confidence is high that the theory behind the hypothesis is sound.
In this case, you could say that inductive support for the theory increases when individual experiments confirm the theory. Induction is the logical process of reasoning from the specific experiment to the general theory.
The scientific method also involves the interplay of inductive reasoning (reasoning from specific observations and experiments to more general hypotheses and theories) and deductive reasoning (reasoning from theories to account for specific experimental results). This process is also known as the rational-empirical method. By such reasoning processes, science attempts to develop broad laws, such as Isaac Newton's law of gravitation, that become part of our understanding of the natural world.
Science has tremendous scope, however, and its many separate disciplines can differ greatly in terms of subject matter and the possible ways of studying that subject matter. No single path to discovery exists in science, and no one clear-cut description can be given that accounts for all the ways in which scientific truth is pursued. One of the early writers on scientific method, the English philosopher and statesman Francis Bacon, wrote in the early 17th century that a tabulation of a sufficiently large number of observations of nature would lead to theories accounting for those operations - the method of inductive reasoning. At about the same time, however, the French mathematician and philosopher René Descartes was attempting to account for observed phenomena on the basis of what he called "clear and distinct ideas" - the method of deductive reasoning.
A closer approach to the method commonly used by physical scientists today was that followed by Galileo in his study of falling bodies. Observing that heavy objects fall with increasing speed, he formulated the hypothesis that the speed attained is directly proportional to the distance traversed. Being unable to test this directly, he deduced from his hypothesis the conclusion that objects falling unequal distances require the same amount of elapsed time. This was a false conclusion, and hence, logically, the first hypothesis was false. Therefore Galileo framed a new hypothesis: that the speed attained is directly proportional to the time elapsed, not the distance traversed. From this he was able to infer that the distance traversed by a falling object is proportional to the square of the time elapsed, and this hypothesis he was able to verify experimentally by rolling balls down an inclined plane.
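Galileo's final hypothesis can be checked numerically: if the speed attained grows in proportion to the time elapsed (constant acceleration), then the distance traversed must grow with the square of the time. A small sketch, using an arbitrary acceleration:

```python
# If speed is proportional to elapsed time (v = a*t), then stepping the motion
# forward in time should give a distance proportional to t squared (d = a*t*t/2).
a  = 9.8          # arbitrary constant acceleration, m/s^2
dt = 0.001        # small time step for the numerical integration

distance = velocity = 0.0
for step in range(1, 4001):                 # simulate four seconds of falling
    velocity += a * dt
    distance += velocity * dt
    if step % 1000 == 0:                    # report once per simulated second
        t = step * dt
        print(f"t = {t:.0f} s   d = {distance:7.2f} m   d / t^2 = {distance / t**2:.3f}")
# d / t^2 stays (nearly) constant at about a/2, as the hypothesis predicts.
```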
Such agreement of a conclusion with an actual observation does not itself prove the correctness of the hypothesis from which the conclusion is derived. It simply renders the premise that much more plausible. The ultimate test of the validity of a scientific hypothesis is its consistency with the totality of other aspects of the scientific framework. This inner consistency constitutes the basis for the concept of causality in science, according to which every effect is assumed to be linked with a cause.
Common Questions from Neophytes
If science is never certain, can it ever prove anything?
On strictly logical grounds, it is impossible to prove a theory, while it is possible to disprove a theory. To do otherwise would violate the tenets of formal conditional logic by committing the formal fallacy of "affirming the consequent."
The logical fallacy of affirming the consequent:
p → q
q
∴ p
Example of the fallacious thought: If you administer this drug, then the subject will feel better. The subject feels better, therefore the drug worked.
Reason for the error: The person could feel better for other reasons.
The logically correct form of modus tollens:
p → q
~q
∴ ~p
Example: If you administer this drug, then the subject will feel better. The subject doesn't feel better, therefore the drug didn't work.
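The difference between the two forms can be checked mechanically by brute-forcing the truth table: a form is valid only if the conclusion is true in every case where all the premises are true. A short sketch:

```python
# Brute-force truth-table check of the two argument forms above.
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    """Valid means: no assignment makes every premise true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Affirming the consequent:  p -> q,  q,  therefore p
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False: invalid (fallacy)

# Modus tollens:  p -> q,  not q,  therefore not p
print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True: valid
```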
The distinction between affirming the consequent and modus tollens can be applied directly to theory testing. Hypotheses take on a conditional form: If X is true, then Y should occur. From this, we can see that if Y does not occur, then the theory is false. This is pure Modus Tollens at work.
However, imagine that Y does appear. Can we then say that the hypothesis is absolutely true? Unfortunately, no, because the logical fallacy of affirming the consequent informs us that there may always be other factors at work, even random factors, that are behind the phenomenon. We can disprove, but never prove.
Note: It is important to note that, realistically, single experiments are not really seen to disprove a theory. Other factors may have prevented the drug from working. Since error can occur, this error is corrected by the use of several experiments performed independently. Only then would we discard what otherwise seemed a good theory. But once we encounter multiple outliers, we must revise or abandon our theory, or risk mutating science into religion.
Even then, we record our results... future scientists may uncover our methodological errors.
For reasons of logic, theories produced by the scientific method are never seen to be "absolutely proven". Even oft-supported scientific "laws" may be repealed by a contradicting experiment. Instead, experiments can only support theories, and this occurs when they fail to disprove them. In science, unlike any other human endeavor, we seek to shoot down our own ideas. And the existence of one contradicting fact can tear down an entire theory. As William James observed, the existence of one white crow automatically disproves the statement "all crows are black." Even scientific "laws" are not safe. If tomorrow you drop something and it falls up, there goes the law of gravity.
When it comes to scientific theories, at best all we can say is that these theories represent a reasonable facsimile of reality. As Karl Popper suggested, all theories will eventually be replaced by better ones. This shows that the method is self-perpetuating, in that these new theories lead to further refinements and further questions on the nature of "things as they are."
For instance, there is little doubt that an object thrown into the air will come back down (ignoring spacecraft for the moment). One could make a scientific observation that "Things fall down." I am about to throw a rock into the air. I use my observation of past events to predict that the stone will come back down. And it does.
But next time I throw a rock, it might not come down. It might hover, or go shooting off upwards. So not even this simple fact has been really proved. But you would have to be very perverse to claim that the next thrown rock will not come back down. So for ordinary everyday use, we can say that the theory is true.
You can think of the reliability of theories (not just scientific ones, but ordinary everyday ones) along a scale of certainty (with very low alphas). Up at the top end we have facts like "things fall down." Down at the bottom we have "the Earth is flat." In the middle we have "I will die of heart disease." Some scientific theories are nearer the top than others, but none of them ever actually reach it. Skepticism is usually directed at claims that contradict facts and theories that are very near the top of the scale, and at claims that support ideas at the very bottom of the scale. Scientific debate usually occurs near the middle of the scale - an ever-moving window of concepts that are open to discussion, based on what we have learned most recently. Ideas at this range tend to be provocative, alluring, but not yet within our grasp.
Doesn't this mean that scientific claims are relativistic then? Or that everything science says is in doubt?
People tend to confuse uncertainty with confusion. To be uncertain about the laws of gravity only means that we do not hold to the scientific concept as a dogma. That is all. It does not mean that our claim is 'relativistic' or nihilistic, or that scientific claims are therefore 'unsupported'.
If scientific theories keep changing, where is the Truth?
In 1666 Isaac Newton proposed his theory of gravitation. This was one of the greatest intellectual feats of all time. The theory explained all the observed facts, and made predictions that were later tested and found to be correct within the accuracy of the instruments then being used. As far as anyone could see, Newton's theory was the Truth.
During the nineteenth century, more accurate instruments were used to test Newton's theory, and they revealed some slight discrepancies (for instance, the orbit of Mercury wasn't quite right). Albert Einstein proposed his theories of Relativity, which explained the newly observed facts and made more predictions. Those predictions have now been tested and found to be correct within the accuracy of the instruments being used. As far as anyone can see, Einstein's theory is the Truth. Er, except where his views clash with quantum theory...
So how can the Truth change? Well the answer is that it hasn't. The Universe is still the same as it ever was, and Newton's theory is as true as it ever was. If you take a course in physics today, you will be taught Newton's Laws. They can be used to make predictions, and those predictions are still correct. Only if you are dealing with things that move close to the speed of light do you need to use Einstein's theories. If you are working at ordinary speeds outside of very strong gravitational fields and use Einstein, you will get (almost) exactly the same answer as you would with Newton. In turn, Einstein's theory works very well until you get down to very small phenomena - things smaller than a molecule. Then quantum theory predicts better. But in both cases it's not as if the older theory was utterly wrong - it just broke down in certain situations.
One other note about truth: science does not make moral judgements. Anyone who tries to draw moral lessons from the laws of nature is on very dangerous ground. Evolution in particular seems to suffer from this. At one time or another it seems to have been used to justify Nazism, Communism, and every other -ism in between. These justifications are all completely bogus. Similarly, anyone who says "evolution theory is evil because it is used to support Communism" (or any other -ism) has also strayed from the path of Logic. Oddly enough, many of these same people defend gun ownership by stating "guns don't kill people, people kill people", so they are being either disingenuous or inconsistent.
David Hume's claim that "Extraordinary evidence is needed for an extraordinary claim"
An extraordinary claim is one that contradicts a fact that is close to the top of our certainty scale discussed above. So if you are trying to contradict such a fact, you had better have facts available that are even higher up the certainty scale. In other words, if you want to make a claim that violates a well established fact of nature, you better have some really good replicable evidence.
What is Occam's Razor?
Ockham's Razor ("Occam" is a Latinised variant) is the principle proposed by William of Ockham in the fourteenth century that "Pluralitas non est ponenda sine necessitate", which translates as "entities should not be multiplied unnecessarily". In more modern terms, if you have two theories which both explain the observed facts adequately, then you should use the simpler one until more evidence comes along. Realize, however, that this does not mean that a simpler theory is better solely because it is simpler - it must also explain the phenomenon just as well as the more complex theory it seeks to replace.
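In practice the razor shows up as model selection: prefer the simpler model unless the extra complexity buys a genuinely better account of the data. A minimal sketch using numpy, fitting a straight line and a degree-5 polynomial to made-up, roughly linear data and scoring both with AIC, which penalizes extra parameters (all numbers here are illustrative):

```python
# Occam's razor as model selection: a simple and a complex model that both
# fit the data; AIC penalizes the extra parameters of the complex one.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.size)   # "truth": a straight line plus noise

def aic(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = len(y), degree + 1
    return n * np.log(np.mean(residuals ** 2)) + 2 * k   # lower is better

simple, complicated = aic(1), aic(5)
print(f"AIC, linear fit  (2 parameters): {simple:.1f}")
print(f"AIC, quintic fit (6 parameters): {complicated:.1f}")
print("preferred model:", "linear" if simple < complicated else "quintic")
```

The quintic hugs the individual points slightly more closely, but usually not enough to pay for its extra parameters, so the simpler model tends to win - which is the razor in action.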
A related rule, which can be used to slice open conspiracy theories, is Hanlon's Razor: "Never attribute to malice that which can be adequately explained by stupidity."
Paradigm Shifts
My original description of the scientific method is a bit pollyanna-ish. There are several reasons why it's not likely that research can really begin with unbiased observation. First, there is the problem of Thomas Kuhn's paradigmatic quagmires. This refers to the fact that the scientific community often goes through extended periods of intellectual stagnation - periods of time where the only research being done is the slow extension of current scientific knowledge. This is akin to the proverbial "search under the streetlight, where the light is best" approach to discovery. When paradigms rule, they act to (unconsciously) limit how we view evidence.
Things change, however, when fresh thinkers enter the field. There comes a time when some, often young, theorist arises who points out outliers in the current data, without having to explain them away with the paradigms of the older generation. This leads to some young Einstein creating a sudden revolution - known as a paradigm shift.
What is the difference between a fact, a theory, a hypothesis and a law?
A fact is something difficult to define completely, and is in many ways somewhat of an axiomatic concept. We presume that a fact is something discoverable by empirical observation, or logical inference - and that it corresponds to reality in that it is not contradicted by observation or logic.
In popular usage, a theory is something less than a fact. This is an ignorant view that confuses theorizing with hypothesizing, or even mere guessing. A theory is actually a conceptual framework, designed to describe, explain, predict and help control some phenomenon, based on a preponderance of facts. The theory supports itself with non-vague, operationalizable predictions that are held to be accurate and in accordance with observed reality. Theories and facts are not antonyms - they are inter-related and interdependent. Facts are used to support theories, and theories explain existing facts and predict new ones. A great example would be Mendeleev's theory of the elements, better known as the "periodic table of elements." When Mendeleev proposed his theory, it was considered ludicrous. However, without any knowledge of atomic structure, the theory predicted the existence of then-undiscovered elements. When these predicted elements were discovered after the prediction and not before, the truth of the theory was supported. In other words, the table predicted reality, and was then supported by reality. Evolution is yet another theory that has overwhelming predictive power. That the term "theory", a concept which can only thrive in an ocean of facts, became seen as something less than a fact is a testament to our society's scientific ignorance.
One of the most galling statements that can be made in reference to theories is the oft-heard whine "That's only a theory", as if to claim that dogmatic "certainty" is superior to theoretical tentativeness. The truth is that dogma often exists in stark contradiction to fact, while theory can only exist on the basis of facts. Another glaring difference between theory and dogma is that theory offers a coherent explanation of a phenomenon. Dogma often offers nothing other than blanket authority statements concomitant with blatant threats of violence and harm to non-believers.
A hypothesis is a tentative theory that has not yet been tested. It's an early stage in theory creation. See the list of experimental steps at top. You will see hypothesizing occurring early, at stage 2, although it is a bidirectional process that is undertaken during stage one as well (see the caveats section below). Typically, a scientist devises a hypothesis from his experience and his prior knowledge, and then tests this hypothesis experimentally. If the hypothesis is not refuted by experiment, the scientist declares it to be a theory.
An important characteristic of a scientific hypothesis is that it be stated in a "falsifiable" manner. This means that there must be some experiment or possible discovery that could prove the theory untrue. For this reason, a hypothesis should be stated in the form "If X, then Y", meaning that if we do or see X, then Y must follow; otherwise, the hypothesis is false. An example of this is seen in Einstein's theory of Relativity. The theory made predictions about the results of experiments, such as "If a large mass moves near the path of a ray of light, the ray of light will bend" - "If X, then Y". We can watch large bodies in our solar system. The results of these observations could have contradicted Einstein, making the theory falsifiable.
We can see that even at the hypothesis stage, scientific thinking is superior to dogma, because religious dogma is not falsifiable. Everything religion says is held by its adherents to be true, no matter the outcome. In science, failed hypotheses are scrapped. In religion, "failed outcomes" are re-interpreted to fit the dogma. Therefore, it is of no use as a predictive tool, because it literally predicts anything and everything.
A law is an exact formulation of a principle (as in the law of the conservation of energy). Theories don't graduate into laws and laws are not former theories that are now somehow protected from disproof. This is another common misunderstanding. Both theories and laws add to our scientific understanding and one is not somehow superior to another. Laws, just like theories, can be refuted.
Basically, most people you meet won't have the slightest fucking clue as to what the terms theory, hypothesis, fact and scientific law really mean, but they won't let ignorance get in the way of their acting like experts anyway. If you grasp the concepts, you already rate higher in scientific literacy than the majority of America.
Doesn't the Scientific Method itself need to be Empirically Validated before we hold it to be "True"?
Science's demand for naturalism is not a descriptive statement, but a methodological statement. It is not a statement of a theory but of a method. Thus, it itself does not need to be subjected to falsification: it is the standard by which falsification is possible. A method is validated by utility - by results. The long-observed asymmetry between positive results for science and failure for magical or mythical methodologies is the most powerful evidence for the scientific method.
(Note: Inductive logic itself is founded upon a deductive theory: Bayes' theorem, so there is a deductive foundation for the logic science uses.)
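Bayes' theorem itself is easy to state and compute: the probability of a hypothesis given some evidence depends on how strongly the hypothesis predicts that evidence, weighted by the hypothesis's prior plausibility. A minimal sketch with invented numbers:

```python
# Bayes' theorem:  P(H | E) = P(E | H) * P(H) / P(E)
# All of the probabilities below are invented for illustration.

prior_h     = 0.01    # P(H): prior belief in the hypothesis
p_e_given_h = 0.95    # P(E | H): how strongly H predicts the evidence
p_e_given_n = 0.10    # P(E | not H): how often the evidence shows up anyway

p_e = p_e_given_h * prior_h + p_e_given_n * (1 - prior_h)   # total probability of E
posterior_h = p_e_given_h * prior_h / p_e

print(f"P(H) before the evidence: {prior_h:.3f}")
print(f"P(H) after the evidence:  {posterior_h:.3f}")   # belief increases, but is not certainty
```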
You say: No certainties in science.... well, I have one question for you: Are you certain?!
First: Stop listening to religious fundamentalists - for your own sanity!
Now:
The proscription against certainties in science is not a discovery of science, ergo there is no 'impossible contradiction' inherent in the declaration. (Anyway, you'd think churchmen would want to avoid bringing up beliefs in internal contradictions!) This statement is a statement about the scientific method itself. The fact that science cannot produce certainties is known deductively. Deductive logic provides us with certainties. (For example, we can be certain that 2 is less than 3, for they are defined that way.) This statement about the limits of the scientific method is not part of science itself, it is not a result of the method.
What about the Supernatural? Why do scientists seem so 'closed-minded' about the supernatural?
Science must be a naturalistic enterprise, not because 'naturalism is true' or 'only naturalism is true' (again, it is not true or false, but a statement about the most pragmatic methodology); rather, science must be naturalistic because only naturalistic causes can be studied through replicable evidence checkable by the senses. Supernatural claims involve claims about putative entities beyond nature, ergo no natural method can be used to study them. Science must remain agnostic on the question of the supernatural.
What about Victor Stenger's argument that religion makes falsifiable statements?
In these cases I agree with Stenger. However, if a person or a religion makes a claim about the supernatural that does not imply falsifiable claims, then the problem holds.
Other Caveats and Criticisms of the Principles of the Scientific Method
As noted above, while the scientific method is said to begin with objective observation of "things as they are", without falsifying or "interpreting" observations according to some preconceived world view (as with politics and religion), the philosopher Karl Popper has shown that this viewpoint is naive. He points out that there is no way to observe anything meaningfully without some preconceived notions about what we see - otherwise we would be reduced to the unending stream of consciousness that makes up an infant's worldview. So in effect, we are both 1) biased and 2) never without some form of hypothesis, even at the beginning of the 'steps' of the method.
Observation and hypothesizing is more likely to be a bidirectional process. We notice some odd things in our environment, form a hypothesis, and then begin to notice the odd things a hell of a lot more than we would if we did not hold the hypothesis. This bias exists. But again, it is only the scientific method itself that points out this bias. In fact, there are social-scientific experiments which gauge and measure the bias!
For this reason, I maintain that it is correct to say that the scientific method attempts to reduce biases as much as humanly possible, in favor of empirical descriptions of "things as they are." The scientific method also suggests the need to examine our philosophical assumptions and test them where possible. Lastly, while scientists are biased, the scientific method itself is as free of bias as humanly possible, and while some scientists may be biased or outright fraudulent, the community at large demands the verification and replication that eventually correct errors and uncover frauds - frauds won't be reproduced by legitimate experiment. As Carl Sagan notes, no other human institution includes the built-in error-detecting and error-correcting safeguards of the scientific method, save for democratic government - which, too, was born of Greek philosophy.