What's Logic got to do with it?
Some of the greatest flashes of scientific inspiration
were sparked by utterly illogical thinking. Marcus Chown celebrates three
triumphs of muddled reason
Popular belief has it that science is the preserve of logical
Mr Spocks. A great scientific discovery must surely spring from a series
of logical steps, each taken coolly and calmly, in the rational order. But
take some time to leaf through the pages of history and you will find the
surprising truth. Some of the greatest discoveries in science were only made
because logic fell by the wayside and some
mysterious intuition came into play.
Fortune has occasionally smiled on those who abandon all reason,
and what better year to celebrate them than 1996? For it is exactly 100 years
since the French chemist Henri Becquerel was led, by an unfounded belief that
certain rocks emit X-rays and some inexplicable experiments in his laboratory
in Paris, to one of the most monumental discoveries in history: that of
radioactivity. Like his father and grandfather before him, Becquerel had
an obsessive interest in minerals that glowed, or fluoresced, after exposure
to sunlight. He was trying to get to the bottom of this in January 1896 when
he heard the sensational news of the discovery of X-rays by the German physicist
Wilhelm Röntgen.
Becquerel was struck by the thought that the fluorescent minerals
he had been studying might react to sunlight not only by glowing with visible
light, but also by emitting invisible X-rays. He set out to test this by
wrapping a photographic plate in dark paper, so that light could not get
at it, and placing it on a sunlit windowsill. On top of the plate he arranged
various fluorescent minerals. He reasoned that if sunlight triggered a mineral
to produce X-rays, in addition to visible light, then the X-rays should easily
penetrate the paper and blacken the photographic plate.
Flash of genius
To Becquerel's disappointment, a whole series of fluorescent minerals failed
to blacken the wrapped plate. Nonetheless, he persisted for weeks with various
samples and got round to the uranium salt potassium uranyl disulphate. He
came up trumps. On 24 February 1896, he reported to the French Academy of
Sciences that this uranium mineral emitted rays that blackened a photographic
plate. Without firm evidence that the mystery rays were actually X-rays,
Becquerel set about investigating their properties. He began another windowsill
experiment in which he placed a small copper cross between the sample and
the wrapped photographic plate. If the rays travelled in straight lines,
as Röntgen's X-rays did, then the developed plate would show the shadowed
outline of the cross.
On 26 February, much to Becquerel's frustration, the Parisian
sky was completely overcast and he was unable to carry out his experiment.
Instead, he took the entire apparatus (uranium salt, wrapped photographic
plate and copper cross) and placed it in the drawer of a cabinet. There it
remained, in total darkness, for several days during which time the Sun made
only fleeting appearances in the wintry sky above the city. Eventually
Becquerel's impatience got the better of him. On 1 March he removed his apparatus
from the dark drawer and developed the photographic plate.
Why he did this is a fascinating question worthy of an article
in itself. Becquerel was studying an effect which he believed was triggered
by sunlight, yet he developed the plate knowing full well that it had languished
for days in complete darkness. Perhaps he had a hunch. Perhaps it was a sixth
sense, the flash of unpredictable genius that separates the few scientists
who make great discoveries from the many who do not.
Whatever his motivation, Becquerel developed the plate. And
what he saw left him open-mouthed in disbelief. Shining out in brilliant
white against the black background was the image of the copper cross. The
rays that he had reported to the Academy of Sciences barely a week before
were still emitted, in the dark, with undiminished intensity.
There was only one explanation. The rays coming from the uranium
mineral were not triggered by sunlight or by any other obvious external agent.
They had nothing to do with fluorescence. Instead, they were intrinsic to
the uranium salt. What Becquerel had discovered was an entirely new phenomenon,
one which Marie Curie would two years later christen "radioactivity".
Bottomless energy
The characteristic of radioactivity that Becquerel found most astonishing
was its persistence. Becquerel could detect no weakening in the "uranium
rays", as he called them. They poured out in an unending stream, week after
week, month after month, drawing on an apparently bottomless source of energy.
It was the first indication that inside ordinary matter is a mind-boggling
energy supply. For his epoch-making discovery, Becquerel shared the 1903
Nobel Prize for Physics with Marie and Pierre Curie.
Becquerel is not alone in being led to a major scientific discovery
by a faulty chain of logic. Take the case of William Harvey, the 17th-century
English physician who discovered the circulation of the blood. Harvey, who
treated James I and Charles I, saw the human body as a microcosm of the Universe.
He believed that the same "absolute ruler" governed both, and so he looked
to the heavens for insights into the workings of the body.
And so, bizarre as it may sound, the
orbits of the planets inspired Harvey's triumphant
discovery of the circulation of the blood. "I began to think whether there
might be a motion of the blood as if it were in a circle," wrote Harvey.
He then pondered the discovery made a century earlier by
Nicolaus Copernicus that the planets did
not circle the Earth but instead orbited the Sun, the life-giving source
of energy in the
Solar System. The energy source for
the circulation of the blood then seemed clear to Harvey: it must be a central
organ, most likely the heart. "The heart," he wrote, "is the Sun of the
microcosm."
Harvey went on to test his ideas on circulation by dissection
and experiment. He demonstrated, for instance, that blood flows through arteries,
veins and heart valves in one direction only. He showed that the heart is
a muscular pump that expels blood by contracting, and that blood returns
to the heart through the veins. Yet Harvey made his great discovery, and in
the process founded the science of modern physiology, on the basis of a
fallacious theory that there was an intimate connection between blood and
the planets.
In common with physiology, the modern theory of the origins
of the Universe, the big bang, had some rather dubious early days. The hot
big bang theory was first developed by the Russian-born American physicist
George Gamow. In the late 1930s, Gamow set out to explain where
the chemical elements had come from. What was the origin of the iron in our
blood, the calcium in our bones, and the oxygen that fills our lungs?
When Gamow began thinking about this, scientists had already
found an important clue. Astronomers had examined the spectra of countless
stars and from the patterns of missing colours they had deduced not only
which elements were absorbing the light but how common each element was.
They had concluded that everywhere in the Universe the elements existed in
roughly the same relative proportions.
To some this was an indication that a common process had built
up all the elements, starting perhaps from the simplest, hydrogen. Indeed,
there was a precedent for such an element-building process. In 1919, the
New Zealand physicist Ernest Rutherford had bombarded a light element (nitrogen)
with alpha particles and turned it into a heavier element (oxygen). Could
nature have done the same thing?
The obvious site for building elements was inside stars. In
the 1930s, the German physicist Carl Friedrich von Weizsäcker had
investigated plausible element-building nuclear reactions. He concluded
that synthesis of all the chemical elements from hydrogen would require a
furnace with a very wide range of densities and temperatures, increasing
to billions of degrees. However, at that time everyone thought, incorrectly,
that all stars were much the same as the Sun, which has a core temperature
of only 15 million °C.
It was against this backdrop that Gamow began looking for an
alternative site that could have forged the chemical elements. Where in the
Universe was there a "furnace" that could reach a temperature of billions
of degrees? Gamow realised the entire Universe must have been such a furnace
when it was very young.
Over the previous decade or so, it had become clear the Universe
was expanding. Run this expansion backwards, and the Universe would become
hotter as it became denser, just as air in a bicycle pump heats up when it
is compressed. This led Gamow to suggest that the Universe was born in a
"hot" big bang. He envisaged the early Universe as a searing hot mass of
protons, neutrons and electrons compressed into a tiny volume. Something
then triggered this mass to start expanding and cooling, and as it did so
nuclear reactions among the basic ingredients forged all the elements. This
must have happened in the first few minutes of the Universe's existence before
the fireball became too cool and rarefied for nuclear reactions to
continue.
But this theory didn't entirely fit the evidence. Although
Gamow found that it was possible to make helium and other light elements
in this way, it proved impossible to build the heavy elements - whatever
mixes of initial ingredients he chose. The early Universe simply did not
stay hot and dense long enough for a succession of nuclear reactions to build
up elements such as oxygen and calcium. Gamow's theory was a miserable failure.
Inside stars
By the 1950s, however, the way that stars generate energy was better understood.
Their interiors supported a far wider range of densities and temperatures
than anyone had dreamed was possible. In fact, the hot interiors of stars
have manufactured virtually every element heavier than helium.
Gamow's big bang theory had risen from the ashes of an idea
about the cores of stars that was entirely wrong. Nevertheless, his achievement
was immense. He was the first person to think seriously about the conditions
in the early Universe. He also laid the foundations of the modern view that
only particle physics can provide answers to the ultimate questions about
the first few minutes after the Universe was born. Gamow, Becquerel and Harvey
were just three of many scientists who were right for the wrong reason. Evidence,
if evidence were needed, that great scientific discoveries often come about
in the most unexpected of ways and that the progress of science is not as
logical as the textbooks would have us believe.
REASONING
When we think propositionally our sequence of thoughts is organized. Sometimes
our thoughts are organized by the structure of long-term memory. A thought
about calling your father, for example, leads to a memory of a recent
conversation you had with him in your house, which in turn leads to a thought
about fixing the house's attic. But memory associations are not the only
means we have of organizing thought. The kind of organization of interest
to us here manifests itself when we try to reason. In such cases, our sequence
of thoughts often takes the form of an argument, in which one proposition
corresponds to a claim, or conclusion, that we are trying to draw. The remaining
propositions are reasons for the claim, or premises for the conclusion.
DEDUCTIVE REASONING
LOGICAL RULES
According to logicians, the strongest arguments are deductively valid, which
means that it is impossible for the conclusion of the argument to be false
if its premises are true (Skyrms, 1986). An example of such an argument is
the following:
1. a. If it's raining, I'll take an umbrella.
   b. It's raining.
   c. Therefore, I'll take an umbrella.
How does the reasoning of ordinary people
line up with that of the logician? When asked to decide whether or not an
argument is deductively valid, people are quite accurate in their assessments
of simple arguments. How do we make such judgments? Some theories of deductive
reasoning assume that we operate like intuitive logicians and use logical
rules in trying to prove that the conclusion of an argument follows from
the premises. To illustrate, consider the following rule: If you have a
proposition of the form If p then q, and another proposition p, then you
can infer the proposition q. Presumably, adults know this rule (perhaps
unconsciously) and use it to decide that the previous argument is valid.
Specifically, they identify the first premise ("If it's raining, I'll take
an umbrella") with the If p then q part of the rule. They identify the second
premise ("It's raining") with the p part of the rule, and then they infer
the q part ("I'll take an umbrella"). Rule-following becomes more conscious
when we complicate the argument. Presumably, we apply our sample rule twice
when evaluating the following argument:
2. a. If it's raining, I'll take an umbrella.
   b. If I take an umbrella, I'll lose it.
   c. It's raining.
   d. Therefore, I'll lose my umbrella.
Applying our rule to Propositions a and c allows us to
infer "I'll take an umbrella"; applying our rule again to Proposition b and
the inferred proposition allows us to infer "I'll lose my umbrella," which
is the conclusion. One of the best pieces of evidence that people are using
rules like this is that the number of rules an argument requires is a good
predictor of the argument's difficulty. The more rules that are needed, the
more likely it is that people will make an error, and the longer it will
take them when they do make a correct decision (Rips, 1983, 1994).
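The idea that difficulty tracks the number of rule applications can be made concrete with a small sketch. This is illustrative code, not a model from the text; representing each "If p then q" rule as a (p, q) pair is an assumption made here for clarity.

```python
# Sketch of repeated modus ponens (forward chaining) over "If p then q" rules.
# Rules are (p, q) pairs; facts are the propositions known to be true.

def forward_chain(rules, facts):
    """Apply modus ponens repeatedly until no new propositions can be inferred.
    Returns the set of derived propositions and the number of rule applications."""
    derived = set(facts)
    steps = 0
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)  # from "If p then q" and p, infer q
                steps += 1
                changed = True
    return derived, steps

# Argument 2 from the text:
rules = [("raining", "take umbrella"),        # 2a
         ("take umbrella", "lose umbrella")]  # 2b
facts = {"raining"}                           # 2c

conclusions, n_steps = forward_chain(rules, facts)
print(conclusions)  # the conclusion "lose umbrella" is among the derived facts
print(n_steps)      # 2 rule applications, versus 1 for argument 1
```

The step count mirrors the experimental finding: argument 2 needs two applications of the rule where argument 1 needs one, and correspondingly takes people longer and produces more errors.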
EFFECTS OF CONTENT
Logical rules do not capture all aspects of deductive reasoning. Such rules
are triggered only by the logical form of propositions, yet our ability to
evaluate a deductive argument often depends on the content of the propositions
as well. We can illustrate this point by the following experimental problems.
Subjects are presented four cards. In one version of the problem, each card
has a letter on one side and a digit on the other (see the top half of Figure
9-3). The subject must decide which cards to turn over to determine whether
the following claim is correct: "If a card has a vowel on one side, then
it has an even number on the other side." While most subjects correctly choose
the "E" card, fewer than 10 percent of them also choose the "7" card, which
is the other correct choice. (To see that the "7" card is critical, note
that if it has a vowel on its other side, the claim is disconfirmed.) Performance
improves drastically, however, in another version of the above problem (see
the bottom half of Figure 9-3). Now the claim that subjects must evaluate
is "If a person is drinking beer, he or she must be over 19." Each card has
a person's age on one side, and what he or she is drinking on the other.
This version of the problem is logically equivalent to the preceding version
(in particular, "Beer" corresponds to "E," and "16" corresponds to "7");
but now most subjects make the correct choices (they turn over the "Beer"
and "16" cards). Thus, the content of the propositions affects our reasoning.
Results such as the above imply that we do not always use logical rules when
faced with deduction problems. Rather, sometimes we use rules that are less
abstract and more relevant to everyday problems, what are called pragmatic
rules. An example is the permission rule, which states that "If a particular
action is to be taken, often a precondition must be satisfied." Most people
know this rule, and activate it when presented the drinking problem in the
bottom half of Figure 9-3; that is, they would think about the problem in
terms of permission. Once activated, the rule would lead people to look for
failures to meet the relevant precondition (being under 19), which in turn
would lead them to choose the "16" card. In contrast, the permission rule
would not be triggered by the letter-number problem in the top half of Figure
9-3, so there is no reason for people to choose the "7" card. Thus, the content
of a problem affects whether or not a pragmatic rule is activated, which
in turn affects the correctness of reasoning (Cheng, Holyoak, Nisbett, &
Oliver, 1986). In addition to rules, subjects may sometimes solve the drinking
problem by setting up a concrete representation of the situation, what is
called a mental model. They may, for example, imagine two people, each with
a number on his back and a drink in his hand. They may then inspect this
mental model and see what happens, for example, if the drinker with "16"
on his back has a beer in his hand. According to this idea, we reason in
terms of mental models that are suggested by the content of the problem
(Johnson-Laird, 1989). The two procedures just described, applying pragmatic
rules and constructing mental models, have one thing in common. They are
determined by the content of the problem. This is in contrast to the application
of logical rules, which should not be affected by problem content. Hence,
our sensitivity to content often prevents us from operating as intuitive
logicians.
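The logic of the card-selection problem can be sketched in a few lines: a card must be turned over exactly when its hidden side could disconfirm the claim. This is illustrative code, not part of the original studies; the card set E, K, 4, 7 is the standard version of the task and is assumed here.

```python
# Which cards must be turned over to test the claim
# "If a card has a vowel on one side, it has an even number on the other side"?
# A card is critical exactly when its hidden side could falsify the claim.

def is_vowel(ch):
    return ch in "AEIOU"

def must_turn(visible):
    """True if the card's hidden side could disconfirm the claim."""
    if visible.isdigit():
        # An odd number is critical: a vowel on its back disconfirms the claim.
        # An even number is not: the claim says nothing about its back.
        return int(visible) % 2 == 1
    # A vowel is critical: an odd number on its back disconfirms the claim.
    # A consonant is not: the claim says nothing about it.
    return is_vowel(visible)

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['E', '7']
```

The same function solves the drinking version if "Beer" is mapped to a vowel and ages under 19 to odd numbers, which is exactly the logical equivalence the text points out; what differs between the two versions is not the logic but how readily people apply it.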
INDUCTIVE REASONING
LOGICAL RULES
Logicians have noted that an argument can be good even if it is not deductively
valid. Such arguments are inductively strong, which means that it is improbable
that the conclusion is false if the premises are true (Skyrms, 1986). An
example of an inductively strong argument is as follows:
3. a. Mitch majored in accounting in college.
   b. Mitch now works for an accounting firm.
   c. Therefore, Mitch is an accountant.
This argument is not deductively valid
(Mitch may have tired of accounting courses and taken a night-watchman's
job in the only place he had contacts). Inductive strength, then, is a matter
of probabilities, not certainties; and (according to logicians) inductive
logic should be based on the theory of probability. We make and evaluate
inductive arguments all the time. In doing this, do we rely on the rules
of probability theory as a logician or mathematician would?
Figure 9-3: Content Effects in Deductive Reasoning
The top row illustrates a version of the problem in which subjects had to
decide which two cards should be turned over to test the hypothesis, "If
a card has a vowel on one side, it has an even number on the other side."
The bottom row illustrates a version of the problem where subjects had to
decide which cards to turn over to test the hypothesis, "If a person is drinking
beer, he or she must be over 19." (After Griggs & Cox, 1982; Wason &
Johnson-Laird, 1972)
One probability rule that is relevant is the base-rate rule, which states
that the probability of something being a member of a class (such as Mitch
being a member of the class of accountants) is greater the more class members
there are (that is, the higher the base rate of the class). Thus, our sample
argument about Mitch being an accountant can be strengthened by adding the
premise that Mitch joined a club in which 90 percent of the members are
accountants. Another relevant probability rule is the conjunction rule: the
probability of a proposition cannot be less than the probability of that
proposition combined with another proposition. For example, the probability
that "Mitch is an accountant" cannot be less than the probability that "Mitch
is an accountant and makes more than $40,000 a year." The base-rate and
conjunction rules are rational guides to inductive reasoning (they are endorsed
by logic), and most people will defer to them when the rules are made explicit.
However, in the rough-and-tumble of everyday reasoning, people frequently
violate these rules, as we are about to see.
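Both rules can be written down directly. The sketch below uses invented numbers purely for illustration; the `posterior` function is ordinary Bayes' rule, which is the formal statement of how a base rate should combine with evidence.

```python
# Conjunction rule: P(A and B) can never exceed P(A).
p_accountant = 0.6            # invented: P("Mitch is an accountant")
p_accountant_and_rich = 0.25  # invented: P(accountant AND earns > $40,000)
assert p_accountant_and_rich <= p_accountant  # required by probability theory

# Base-rate rule via Bayes' rule: with the same evidence,
# a higher base rate yields a higher probability of class membership.
def posterior(base_rate, p_evidence_given_class, p_evidence_given_other):
    """P(class | evidence) from the base rate and the two likelihoods."""
    joint = base_rate * p_evidence_given_class
    return joint / (joint + (1 - base_rate) * p_evidence_given_other)

# Neutral evidence (equally likely either way), two different base rates:
low = posterior(0.30, 0.5, 0.5)   # 30% of the club are accountants
high = posterior(0.90, 0.5, 0.5)  # 90% are accountants (the added premise)
print(low, high)  # about 0.3 versus 0.9: the base rate alone decides the case
```

With a genuinely neutral description the likelihoods cancel and the rational answer is just the base rate itself, which is exactly what the subjects in the engineer-lawyer study ignored.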
HEURISTICS
In a series of ingenious experiments, Tversky and Kahneman (1973, 1983) have
shown that people violate some basic rules of probability theory when making
inductive judgments. Violations of the base rate rule are particularly common.
In one experiment, one group of subjects was told that a panel of psychologists
had interviewed 100 people (30 engineers and 70 lawyers) and had written
personality descriptions of them. These subjects were then given a few
descriptions and for each one were asked to indicate the probability that
the person described was an engineer. Some descriptions were prototypical
of an engineer (for example, "Jack shows no interest in political issues
and spends his free time on home carpentry"); other descriptions were neutral
(for example, "Dick is a man of high ability and promises to be quite
successful"). Not surprisingly, these subjects rated the prototypical description
as more likely to be an engineer than the neutral description. Another group
of subjects was given the identical instructions and descriptions, except
they were told that the 100 people were 70 engineers and 30 lawyers (the
reverse of the first group). The base rate of engineers therefore differed
greatly between the two groups. This difference had virtually no effect:
Subjects in the second group gave essentially the same ratings as those in the
first group. For example, subjects in both groups rated the neutral description
as having a 50-50 chance of being an engineer (whereas the rational move
would have been to rate the neutral description as more likely to be in the
profession with the higher base rate). Subjects completely ignored the
information about base rates (Tversky & Kahneman, 1973). People pay no
more heed to the conjunction rule. In one study, subjects were presented
the following description: Linda is 31 years
old, single, outspoken, and very bright. In college, she majored in philosophy...
and was deeply concerned with issues of discrimination. Subjects then estimated
the probabilities of the following two statements:
4. Linda is a bank teller.
5. Linda is a bank teller and is active in the feminist movement.
Statement No. 5 is the conjunction of Statement No. 4 and the proposition
"Linda is active in the feminist movement." In flagrant violation of the
conjunction rule, most subjects rated No. 5 more probable than No. 4. Note
that this is a fallacy because every feminist bank teller is a bank teller,
but some bank tellers are not feminists, and Linda could be one of them
(Tversky & Kahneman, 1983). Subjects in this
study based their judgments on the fact that Linda seems more similar to
a feminist bank teller than to a bank teller. Though they were asked to estimate
probability, subjects instead estimated the similarity of Linda to the prototype
of the concepts "bank teller" and "feminist bank teller." Thus, estimating
similarity is used as a heuristic for estimating probability, where a heuristic
is a short-cut procedure that is relatively easy to apply and can often yield
the correct answer, but not inevitably so. That is, people use the similarity
heuristic because similarity often relates to probability yet is easier to
calculate. Use of the similarity heuristic also explains why people ignore
base rates. In the engineer-lawyer study described earlier, subjects may
have considered only the similarity of the description to their prototypes
of "engineer" and "lawyer." Hence, given a description that matched the prototypes
of "engineer" and "lawyer" equally well, subjects judged that engineer and
lawyer are equally probable. Reliance on the similarity heuristic can lead
to errors even by experts. Reasoning by similarity shows up in another common
reasoning situation, that in which we know some members of a category have
a particular property and have to decide whether other category members have
the property as well. In one study, subjects had to judge which of the following
two arguments seemed stronger:
6. a. All robins have sesamoid bones.
   b. Therefore all sparrows have sesamoid bones.
versus
7. a. All robins have sesamoid bones.
   b. Therefore all ostriches have sesamoid bones.
Surprisingly, subjects judged the first argument stronger, presumably because
robins are more similar to sparrows than they are to ostriches. This use of
similarity appears rational
inasmuch as it fits with the idea that things that have many known properties
in common are likely to have unknown properties in common as well. But the
veneer of rationality fades when we consider subjects' judgments on another
pair of arguments:
7. a. All robins have sesamoid bones.
   b. Therefore all ostriches have sesamoid bones (same as the preceding argument).
versus
8. a. All robins have sesamoid bones.
   b. Therefore all birds have sesamoid bones.
Subjects judged the second argument stronger, presumably because robins are
more similar to the prototype of birds than they are to ostriches. But this
judgment is a fallacy: Based on the same evidence (that robins have sesamoid
bones), it cannot be more likely that all birds have some property than that
all ostriches do, because ostriches are in fact birds. Again, our
similarity-based intuitions can sometimes lead us astray (Osherson et al.,
1990). Similarity is not our only strong heuristic; another is the causality
heuristic. People estimate the probability of a situation by the strength
of the causal connections between the events in the situation. For example,
people judge Statement No. 10 to be more probable than Statement No. 9:
9. Sometime during the year 2000, there will be a massive flood in California,
   in which more than 1,000 people will drown.
10. Sometime during the year 2000, there will be an earthquake in California,
   causing a massive flood, in which more than 1,000 people will drown.
Judging No. 10 to be more probable
than No. 9 is another violation of the conjunction rule (and hence another
fallacy). This time, the violation arises because in Statement No. 10 the
flood has a strong causal connection to another event, the earthquake; whereas
in Statement No. 9, the flood alone is mentioned and hence has no causal
connections. So our reliance on heuristics often leads us to ignore some
basic rational rules, including the base-rate and conjunction rules. But
we should not be too pessimistic about our level of rationality. For one
thing, the similarity and causality heuristics probably lead to correct decisions
in most cases. Another point is that under the right circumstances, we can
appreciate the relevance of certain logical rules to particular problems
and use them appropriately (Nisbett et al., 1983). Thus, in reading and thinking
about this discussion, you were probably able to see the relevance of the
base-rate and conjunction rules to the problems at hand.
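Why the Linda judgment must be a fallacy can also be seen by simple counting: in any population whatsoever, the feminist bank tellers are a subset of the bank tellers, so the conjunction can never come out more probable. A minimal sketch with an invented population (the frequencies below are arbitrary assumptions, chosen only for illustration):

```python
# Counting demonstration of the conjunction rule: for ANY population,
# P(teller and feminist) <= P(teller), because one group is a subset of the other.
import random

random.seed(0)
# Invented toy population: each person is a pair (is_teller, is_feminist),
# drawn with arbitrary made-up frequencies.
people = [(random.random() < 0.1, random.random() < 0.4) for _ in range(10_000)]

p_teller = sum(t for t, f in people) / len(people)
p_teller_and_feminist = sum(t and f for t, f in people) / len(people)

print(p_teller, p_teller_and_feminist)
# The inequality holds no matter what frequencies are chosen,
# because every (teller and feminist) is counted among the tellers.
assert p_teller_and_feminist <= p_teller
```

Estimating by similarity to a prototype, as subjects did, ignores this subset relation; estimating by frequency, as the code does, cannot violate it.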