We have been introduced to the Scientific Method in a previous post. To recap, the process goes a bit like this:
1. Make an observation
2. Ask a question
3. Form a hypothesis
4. Test the hypothesis
5. Analyze the data
6. Form a conclusion
7. Form a new hypothesis and repeat steps 3 through 6 until the hypothesis is verified.
So how do we know when a hypothesis is sufficiently verified? How do we take the data and move from hypothesis to an emergent truth? We set up an argument. Arguments take one of two forms: deductive or inductive. We have seen these before in What is a Good Argument? Deductive arguments are arguments in which the truth of the conclusion (C) is guaranteed if the premises (P) are true. For example:
P1: All green grass is healthy grass
P2: The grass in my yard is green
C: The grass in my yard is healthy
This is an example of a deductive argument that is valid (i.e., the argument is structured in such a way that if the premises are true, then the conclusion must also be true) but not necessarily sound. A sound argument has the additional requirement that the premises actually are true. I am not a grass expert, but I can imagine that grass could become sick and yet remain green.
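Validity, being purely structural, can even be checked mechanically. Here is a minimal sketch (my own illustration, not a standard tool) that brute-forces every interpretation of the grass argument’s form, “All G are H; g is G; therefore g is H,” over a tiny domain and confirms that no interpretation makes the premises true and the conclusion false:

```python
from itertools import product

# The argument form: P1: all G are H;  P2: g is G;  C: g is H.
# Valid means: in EVERY interpretation where the premises hold, the
# conclusion holds too. We enumerate all interpretations over a
# 3-element domain: which elements are G, which are H, and which
# element g ("the grass in my yard") refers to.
domain = range(3)
valid = True
for G in product([False, True], repeat=3):      # G-membership of each element
    for H in product([False, True], repeat=3):  # H-membership of each element
        for g in domain:                        # referent of g
            p1 = all(H[x] for x in domain if G[x])  # all G are H
            p2 = G[g]                               # g is G
            c = H[g]                                # g is H
            if p1 and p2 and not c:
                valid = False  # premises true, conclusion false
print(valid)  # → True: the form is valid regardless of content
```

Soundness is the part a program cannot check for us: whether all green grass really is healthy is a question about the world, not about structure.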
Inductive arguments are arguments in which the observations (O) support but do not guarantee a given conclusion. An example of an inductive argument would be:
O1: Every time I have dropped a ball from a height, x, it falls to the floor.
O2: This has been true for all x near the surface of Earth.
C: Dropping a ball near the Earth will cause it to fall to the floor.
Simply stated, this inductive argument supports the conclusion that gravity works, as long as you aren’t dropping the ball in orbit or on another planet. However, the premises could both be true and the conclusion false. Perhaps, if the Moon were replaced with a black hole, a ball dropped from a height near the surface of the Earth would actually move up toward the black hole! Yet, we consider this argument to hold true most of the time.
Generally speaking, science primarily formulates arguments using inductive logic. Scientists make observations, analyze the data, then formulate an argument in which the data supports a conclusion. Here is another example of an inductive argument:
O1: My new anti-inflammatory drug has been effective on 95% of patients participating in a trial.
O2: Clinical trials have been good indicators of effectiveness for larger groups of people.
C: We conclude this drug to be effective for the majority of all people.
While clinical testing is more complex than this argument suggests, you might begin to wonder why we should trust this type of argument when it doesn’t present a hard truth, but a perceived truth based on probability. After all, a deductive argument has the benefit that if the premises are true and the argument is valid, the conclusion must be true. If science is as great as scientists claim it to be, shouldn’t science be using deductive arguments rather than inductive ones? This problem was first roughly stated by David Hume in his book “An Enquiry Concerning Human Understanding” and has become known as the “Problem of Induction”.
Problem of Induction
There once was a turkey. That turkey stumbled upon a patch of grass full of bird feed. The turkey ate its fill, and since there was some food left, it stayed close to the area. The next day, the turkey returned to find the bird feed had been replenished. The turkey remained for one hundred days, expecting food to be there each day, and found the food replenished on every one of them. Then, on the fourth Thursday of that November, the turkey arrived to find no food. Instead, a man appeared and butchered the turkey for Thanksgiving dinner.
The turkey fell victim to what Hume calls the “Principle of Uniformity of Nature”: the assumption that because events have always played out the same way, they will continue to do so in the future. Essentially, Hume is pointing out a fault in what is known as Determinism. Determinism states that for every effect, Y, there is a specific cause, X. We can generalize this as an “If-Then” statement: If X, then Y. The turkey from before thought that if it went to the location, it would find food. Some other examples might be: If I eat nutritional food, then I will not die of malnutrition. If I do not eat anything, then I will die of malnutrition. If I set an object in motion, it will continue in a straight line unless acted upon by another force. The last example is taken from Newton’s First Law of Motion. We consider Newton’s Laws of Motion to be fundamental to how the universe functions. They are consistently observed, and we can even use them to trace the history of an object’s motion (observing an effect, Y, and finding the cause, X). Yet, despite all the consistency in our observations so far, what guarantee do we have that Newton’s First Law of Motion will still be true in a million years?
This problem is the basis for the problem of induction. We can make as many observations as we want and formulate a conclusion based on those observations, but we can’t necessarily say that future observations will be consistent. If there is a chance that future observations will be different, the conclusion cannot be universally true. Conclusions that are only temporarily true seem to pose a problem for scientific advancement. If the basic conclusions scientists have made regarding the laws of electricity and magnetism were to suddenly change, that would pose a very big problem for modern society. Yet, we don’t concern ourselves with the idea that Newton’s Laws of Motion may suddenly cease to hold, or that Maxwell’s Equations in electricity and magnetism may suddenly change. Why? Because we have observational evidence that they have been consistent throughout history, and thus we form an inductive argument that there must be a solution to the problem of induction even if we aren’t quite sure what the solution is.
As we have now seen, science has run into a predicament with its reliance on inductive reasoning. How can we trust something that takes a limited set of data and makes a general conclusion? A philosopher named Karl Popper argued that we can trust science because good hypotheses are falsifiable. What Popper proposed is that instead of trying to validate a hypothesis, scientists either reject a hypothesis or fail to reject it, with rejection coming through a process called falsification. Consider the hypothesis, “All oranges are acidic.” In order to validate it, we would need to test every orange in the universe. However, the hypothesis is falsifiable by finding just one non-acidic orange. Popper suggests that along with searching for evidence that corroborates the original hypothesis, we also try to falsify it by testing and validating less broad hypotheses that would contradict it. These alternative hypotheses need to be verifiable from an achievable sample size. One such hypothesis is, “No oranges are acidic.” Verifying that even one orange is acidic allows us to construct a deductive argument:
P1: If no oranges are acidic then we would not find an acidic orange.
P2: We have found an acidic orange
C: ¬No oranges are acidic
The symbol ¬ denotes negation and can be read as, “It is not the case that...” By formulating hypotheses that can be proven or falsified with a single sample, we are able to create deductive arguments. This argument specifically follows the form of modus tollens:
P1: If P then Q
P2: Not Q
C: Therefore, not P
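The asymmetry this argument exploits, that a universal hypothesis can be refuted by a single counterexample but never verified by any finite number of confirmations, is easy to see in code. The pH numbers below are invented for illustration (pH below 7 counts as acidic):

```python
def falsify(hypothesis, samples):
    """Return the first counterexample to a universal hypothesis,
    or None if no sample contradicts it."""
    for s in samples:
        if not hypothesis(s):
            return s
    return None

is_acidic = lambda ph: ph < 7.0

# Hypothetical acidity measurements for a handful of oranges.
orange_ph = [3.2, 3.9, 4.1, 3.5, 8.0]

# One non-acidic orange refutes "all oranges are acidic" outright...
counterexample = falsify(is_acidic, orange_ph)
print(counterexample)  # → 8.0

# ...but finding no counterexample never verifies the hypothesis; it
# only means it has survived testing so far (untested oranges remain).
survived = falsify(is_acidic, orange_ph[:4]) is None
print(survived)  # → True
```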
Modus tollens is a logical form derived from modus ponens. The modus ponens form is the standard If-Then statement:
P1: If P then Q
P2: P
C: Therefore, Q
We are making a statement that if some proposition, P, is true, then some other proposition, Q, must be true. We verify the truth of P, which then implies the truth of Q. Here is an example of modus ponens written in statement form:
P1: If a dropped ball falls to the floor, then gravity is working.
P2: The dropped ball fell to the floor.
C: Therefore, gravity is working.
It is easy to see from the If-Then statement that verifying the ‘if’ part implies the ‘then’ part. Remember, we are looking at structure and not content. This falls under tests of validity rather than soundness. We could make the argument: If chickens walk, then pigs fly. Chickens walk. Therefore, pigs fly. This is a valid modus ponens, but it is clearly not sound.
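Because validity depends only on structure, we can confirm modus ponens by brute force over truth values: no assignment of true/false to P and Q makes both premises true and the conclusion false, whether P and Q are about balls and gravity or chickens and pigs. A quick sketch of my own:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "If P then Q" is false only when
    # P is true and Q is false.
    return (not p) or q

# Modus ponens: premises are (If P then Q) and P; conclusion is Q.
# The form is valid iff no truth assignment makes both premises
# true while the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([False, True], repeat=2)
    if implies(p, q) and p and not q
]
print(counterexamples)  # → [] : no counterexample, so the form is valid
```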
Modus tollens works by falsifying the conclusion of a modus ponens. Consider the original statement, “If a dropped ball falls to the floor, then gravity is working.” What if we know that gravity is not working?
P1: If a dropped ball falls to the floor, then gravity is working.
P2: Gravity is not working.
C: It cannot be the case that the ball will fall to the floor.
Now, the falsified conclusion of the modus ponens argument becomes a premise in the modus tollens argument. If the ‘then’ part of an If-Then statement is shown to be false, the ‘if’ part must also be false; otherwise, the result would be a contradiction.
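The same brute-force check shows why this works: among all truth assignments, the only one where “If P then Q” holds and Q is false is the one where P is also false; keeping P true would contradict the conditional. A sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

# Modus tollens premises: (If P then Q) and (not Q).
# Which (P, Q) assignments satisfy both premises?
rows = [
    (p, q) for p, q in product([False, True], repeat=2)
    if implies(p, q) and not q
]
print(rows)  # → [(False, False)] : P must be false in any surviving row
```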
The Popperian Method of science is to falsify, through deduction, hypotheses that compete with a hypothesis that can only be supported through induction. In this method we may never verify a hypothesis that contains a general truth such as, “All oranges are acidic,” because of the problem of induction, but we can use deduction to falsify competing hypotheses until we narrow down their number. This is why some scientists refer to science as finding ‘emergent truths’. We may never be able to definitively say something is objectively true, but we can use deduction to clear the path toward a truth.
The beauty of the scientific method and falsification is that the process has a built-in error-correcting mechanism. As technology improves, we can test broader hypotheses using deduction. For example, “This small lake contains no fish,” could in the past only be supported inductively, by trying to catch fish, or by diving into the lake to observe fish and finding none. Now, with sonar technology, we could survey the entire lake, find no fish, and deductively prove the hypothesis using a tautological argument (a tautology is an argument that proves itself; i.e., If A then A). Alternatively, the once inductively supported claim that the lake contains no fish may be falsified by finding a fish hiding in the lake, in which case we would consider the error corrected.
The fact that inductive arguments don’t guarantee their conclusions can make induction hard to trust. Yet we make inductive arguments every day, any time we do something and expect a specific outcome, which leads us to believe that we have very good reason to put faith in induction. Hume points out the flaw in this reasoning: we are trying to validate induction through the use of induction. This can cause us to question the validity of the scientific method, given science’s reliance on inductive reasoning. Popper presented just one of many attempts at resolving this problem by arguing for a process of falsification, which brings deductive logic into the scientific process. In this way, we have hypotheses that are not falsified and hypotheses that are falsified. Non-falsified hypotheses, so long as they have sufficient inductive evidence, are accepted, while falsified hypotheses are rejected. This process allows us to narrow in on an emergent truth about reality, even if we may never reach objective truth.