Do we need lessons in rational decision making? David Hardman and Clare Harries
In the areas covered by other articles in this special issue,
as well as in many other domains, a major concern is whether the thought
processes involved are suitable for producing the best outcomes. However,
a wealth of evidence indicates that people do not generally think in accordance
with the rational principles described by decision theory (for examples see
Baron, 1994b). For some researchers this implies that people require education
in decision-making techniques (e.g. Baron & Brown, 1991), but there are
also some who question the appropriateness of rational models or claim that
simpler processes can often be highly successful. In this article we review
some of the research that addresses the tensions between these two viewpoints.
Violations of rational principles
This 'value function' is part of the basis of prospect theory
(Kahneman & Tversky, 1979; see Connolly et al., 2000, for a simpler
description). Not only are outcomes treated as gains or losses from a subjective
reference point, but people are (a) cautious about obtaining gains, preferring
sure things over gambles, and (b) risk-seeking for losses: to avoid a certain
loss they will take a gamble that could lead to an even bigger loss.
The value function also predicts framing effects, such as
in the factory closure example described by Maule and Hodgkinson in this
issue (p.68). In that problem, objectively identical outcomes were described
in terms of the number of jobs lost or the number of jobs saved. Participants
switched preferences according to which of these descriptions was used,
supporting the idea that people are concerned with perceived gains or losses
in relation to a subjective reference point.
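The shape of the value function can be made concrete with a small numerical sketch. The functional form and parameter values below are the estimates commonly cited from Tversky and Kahneman's later (1992) work, used here purely for illustration:

```python
# Illustrative sketch of a prospect-theory-style value function.
# Parameters are the commonly cited Tversky & Kahneman (1992) estimates.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha           # concave for gains -> risk aversion
    return -lam * (-x) ** beta      # convex and steeper for losses -> risk seeking

# A sure gain of 50 is valued above a 50% chance of gaining 100:
assert value(50) > 0.5 * value(100)

# But a sure loss of 50 is valued below a 50% chance of losing 100,
# so the gamble looks preferable for losses:
assert value(-50) < 0.5 * value(-100)
```

With these numbers, a sure gain of 50 is preferred to a 50 per cent chance of 100, while for the equivalent losses the gamble is preferred, matching points (a) and (b) above.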
Prospect theory is successful in describing and predicting
a wide range of data, and it continues to be a leading theory of
decision making. However, there are also non-supportive
findings (e.g. Schneider, 1992; Wang, 1996), and there is no apparent rationale,
for example, for the value function described above. The value and subjective
probability functions are descriptive of behaviour but do not go beyond
this. Furthermore, the theory proposes that decisions are made on the basis
of a pre-processed, and possibly simplified, representation of the decision
situation, yet there is no clear specification of the 'editing' processes
that create the problem representation.
Other examples of the violation of rational principles are
attributed to the apparent failure to think through the consequences of uncertain
alternatives. For example, if you are awaiting the outcome of an examination
then your future planning requires you to imagine two possible futures in
which you have passed or failed the exam. In an experiment based on this
scenario students were told to imagine they had just taken a tough qualifying
examination. Most of those who were told the 'result' of the exam chose to
buy a cheap holiday to Hawaii in a one-day special offer, regardless of
whether they had passed or failed. However, students whose results were
not yet released preferred to pay a small deposit to defer the holiday decision
until after the exam results were obtained, which suggests that they had
not thought through the consequences of passing or failing - namely that
they might feel the need either to celebrate if they passed or take a break
anyway if they failed (Shafir et al., 1993). The behaviour of participants
in this study is a violation of Savage's 'sure-thing principle': If you prefer
A to B in all possible states of the world (most wanted to go whether they
had passed or failed), then you should prefer A to B in any particular state
of the world.
Other systematic violations of rationality ('biases') have
been attributed to specific shortcuts or 'heuristics' (e.g. Kahneman
et al., 1982). Consider the 'Linda problem' (see box). In a study by Tversky
and Kahneman (1983) 85 per cent of respondents indicated that Linda was
less likely to be a bank teller than both a bank teller and a feminist. However,
because the set of women bank tellers includes, and must be at least as large
as, the set of women who are both bank tellers and feminists, it is wrong
to suppose that Linda is more likely to be both. According to the
representativeness heuristic, people overlook the basic principles of probability
and make their judgments according to the perceived similarity between the
statement and the description of Linda.
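The conjunction rule that the Linda problem violates can be stated in a few lines of code. The probabilities below are invented for illustration; the point is that the inequality holds whatever numbers are used:

```python
# The conjunction rule: for any events A and B, P(A and B) <= P(A).
# These probabilities are hypothetical, chosen only to illustrate the rule.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.2    # P(feminist | bank teller)

# The conjunction can never exceed the probability of either event alone:
p_teller_and_feminist = p_teller * p_feminist_given_teller
assert p_teller_and_feminist <= p_teller
```

Judging the conjunction as more probable than "bank teller" alone, as most of Tversky and Kahneman's respondents did, reverses this inequality.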
The optimistic view of judgment and decision making
Over the years a number of decision strategies have been proposed whereby people avoid effortful trade-offs between the good and bad points of an option. These are known as non-compensatory strategies. Examples are Simon's (1957) 'satisficing' heuristic (see Maule & Hodgkinson, this issue) and 'elimination-by-aspects' (Tversky, 1972). To illustrate the latter, imagine that you are looking for a new car, and the most important feature is petrol consumption. You begin by comparing all cars on that criterion, and eliminate from consideration any models that fall short. Then you compare the remaining cars on the next most important feature, and so on.
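As a sketch, elimination-by-aspects for the car example might look like the following (the car models, aspect values and cutoffs are all hypothetical):

```python
# A minimal sketch of elimination-by-aspects with hypothetical car data.
# Aspects are checked in order of importance; any option that fails the
# current cutoff is eliminated before the next aspect is considered.
cars = {
    "Alpha": {"mpg": 55, "price": 14000},
    "Beta":  {"mpg": 40, "price": 12000},
    "Gamma": {"mpg": 52, "price": 18000},
}

# (aspect, cutoff test) pairs, most important aspect first
aspects = [
    ("mpg",   lambda v: v >= 50),      # petrol consumption first
    ("price", lambda v: v <= 15000),   # then price
]

remaining = dict(cars)
for name, passes in aspects:
    remaining = {car: f for car, f in remaining.items() if passes(f[name])}
    if len(remaining) <= 1:
        break  # stop as soon as one option (or none) survives

print(sorted(remaining))  # -> ['Alpha']
```

Note that no trade-offs are made: Beta's low price can never compensate for its poor petrol consumption, which is what makes the strategy non-compensatory.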
Payne et al. (1993) found that simple strategies such
as those described above may be used to reduce the choice set before applying
a more complex (trade-off) strategy to the remaining alternatives. Thus,
for example, having eliminated all cars that have poor petrol consumption
and all cars that are above your price range, you may compare three remaining
models simultaneously on the grounds of colour, shape, headroom and luxury
of the interior, assessing each in terms of these variables and giving them
different weights.
Although simple strategies may precede more complex ones in this way (indeed,
once most alternatives have been eliminated, the final choice may be made
on minor considerations such as colour), computer simulations have
indicated that the simple non-compensatory strategies can
actually be highly effective in terms of achieving desirable outcomes.
Performance using simple heuristics may depend on the nature
of the task. In the field of probability judgment Tversky and Kahneman
(1983) and others have found that conjunction errors (see
the Linda problem above) greatly decrease when the task instruction following
the personality sketch asks for frequencies rather than probabilities
(for example, how many of 100 people who fit the description are bank
tellers, and how many are bank tellers and active in the feminist movement).
Some researchers argue that people are evolutionarily adapted
to reason about naturally sampled frequencies (e.g. sequences of events)
rather than single events (e.g. Gigerenzer & Hoffrage, 1995; see also
Cosmides & Tooby, 1996). However, on some problems the size and source
of such format effects are disputed (e.g. Evans et al., 2000; Harries
& Harvey, 2000). Furthermore, people sometimes fail to apply statistical
knowledge that they possess. When forecasting how long it will take to complete
a project, people may fail to consider previous projects they have undertaken.
Rather, they take an 'inside view' of the current project, thinking only
about their plans and scenarios leading to successful completion (Kahneman
& Lovallo, 1993). This results in overly optimistic forecasts. (We see
this annually with final-year undergraduate projects!)
Following Brunswik (1952), another strand of research attempts
to model the environment, as well as the judgment or decision, in terms of
the available information. The predictability of a person's performance can
be seen in relation to the predictability of the environment. For example,
how well does a particular medical symptom predict the existence of a particular
disease? Brunswik emphasised that humans can learn about the probabilistic
relationships between information and a criterion and can learn to substitute
different pieces of information for each other. Research in this area has
examined the integration of multiple cues to produce a judgment or decision,
and has typically relied on regression analysis to determine which
factors are most predictive (Cooksey, 1996; Doherty & Kurz, 1996; see
also Hammond & Stewart, 2001).
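As an illustration of this regression approach, the sketch below fits a judge's ratings to two cues by ordinary least squares; all the data are invented, and the fitted weights simply describe how heavily each cue drives the judgment:

```python
# Sketch of Brunswikian cue integration: a judge's ratings modelled as a
# weighted sum of cues via least-squares regression (invented data).
import numpy as np

# Rows are cases; columns are two hypothetical cues (e.g. medical symptoms).
cues = np.array([[1.0, 0.0],
                 [1.0, 1.0],
                 [0.0, 1.0],
                 [0.0, 0.0]])
judgments = np.array([0.7, 0.9, 0.3, 0.1])  # the judge's ratings

# Append a constant column for the intercept, then solve for the weights.
design = np.column_stack([cues, np.ones(len(cues))])
weights, *_ = np.linalg.lstsq(design, judgments, rcond=None)

print(np.round(weights, 2))  # -> [0.6 0.2 0.1]: cue 1 dominates the judgment
```

In real applications the same regression is run on the environment itself (cues against the criterion, e.g. symptoms against disease), so that the judge's cue weights can be compared with the environment's.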
More recently Gigerenzer and colleagues (1999) have rejected
the notion that people are computing regression equations when making judgments,
in favour of the use of 'fast and frugal' heuristics. They argue that information
in the environment is structured such that a single cue can be good enough
(i.e. highly predictive) for us to make very accurate judgments or decisions.
In other words, good judgments and decisions can often be made on the basis
of one reason. For example, whether a German city has a football team in
the Bundesliga is a valid (but not infallible) cue to city size: a city that
does is likely to be larger than one that does not.
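A one-reason strategy of this kind can be sketched in a few lines; the cities and team membership below are illustrative, not a real model of the task:

```python
# One-reason decision making on the city-size task: compare two cities on
# a single cue and stop as soon as that cue discriminates between them.
# The set of Bundesliga cities here is illustrative only.
bundesliga = {"Munich", "Dortmund"}

def bigger_city(a, b, teams=bundesliga):
    """Guess which of two cities is larger using only the football cue."""
    ta, tb = a in teams, b in teams
    if ta and not tb:
        return a
    if tb and not ta:
        return b
    return None  # cue does not discriminate; fall back to another cue or guess

print(bigger_city("Munich", "Heidelberg"))  # -> Munich
```

Because the search stops at the first discriminating cue, the strategy is 'frugal': most of the available information is never consulted.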
But perhaps the simplest reason for making a decision is the
fact that you recognise something. This can be a very profitable strategy:
Borges et al. (1999) found that stock portfolios constructed on the
basis of company name recognition were more successful than those constructed
by business students on the basis of knowledge. Stock
portfolios involving foreign companies are also more likely to be based
on name recognition than those involving home companies, because there is
less knowledge available. Indeed, with both German and American students
the researchers found superior performance for foreign stock portfolios over
home portfolios.
Gigerenzer's work explicitly draws on Simon's notion of bounded
rationality, though he uses the term ecological rationality to refer
to the match between mind and environment that is emphasised in this work.
Gigerenzer and Goldstein (1996) have used computer
simulations to show that judgments based on single reasons such as this
are at least as accurate as judgments based on the integration of several
items of information. However, some authors are critical of the assumptions
underlying this approach (see e.g. the exchanges in Behavioral and Brain
Sciences, 23(5), 727-780).
The effectiveness of simple decision strategies is supported
by research from naturalistic decision making (NDM). Much of this research
studies important real-life decisions made by experts under conditions of
time pressure and stress. A typical finding is that decision makers rarely
consider more than one course of action at a time. According to the
'recognition-primed' theory of decision making (Klein, 1998), a decision
maker mentally simulates the consequences of following the same course of
action that worked on a previous occasion. Only if this simulation is not
acceptable does the decision maker consider some alternative course of action.
In essence, expert decision makers are using Simon's satisficing heuristic.
So should we teach rational decision making?
One approach is to give people outcome feedback on their judgments and
decisions. From a Brunswikian perspective this allows people to gain an
understanding of the underlying structure of the environment. However, it
relies upon receiving the sort of outcome feedback that is often unavailable
in real life. For example, jurors will usually never know whether they imprisoned
the wrong person. More typically, behaviour is changed using cognitive feedback
in the form of the ideal weighting of information given the environmental
structure.
Another approach is to change aspects of the environment to
fit people's inherent information-processing behaviour. As described above,
the specific wording of some probability problems has been shown to facilitate
reasoning performance. In another domain, Klayman and Brown (1993) found
that medical diagnosis was improved when training involved the presentation
of contrasted information about two diseases rather than learning about those
diseases separately. A decision aid may take doctors and patients through
a series of steps in which they discover underlying preferences, possible
options and which course of action is actually the best one.
It is important to consider a range of situations in which
rationality in both personal and public policy decisions can be undermined
by certain intuitions (see Baron, 1994a, 1998). For example, we generally
consider it important to avoid doing harm through our actions, but are sometimes
willing to risk harm through inaction. In one study many participants
voted against hypothetical social reforms that they agreed would be beneficial
overall, often on the grounds that some people would nonetheless be worse
off (Baron & Jurney, 1993).
This 'do no harm' intuition can reduce people's willingness to vaccinate
a child where the vaccine itself carries some smaller risk of harm.
This reluctance to make trade-offs suggests that education
in rational decision-making techniques may be of value, at least in situations
not seriously constrained by time pressure. For example, school students
who have had classes in rational decision making could put this into practice
in their choice of examination subjects, universities, and later important
life decisions.
Conclusions and implications
Despite the shift towards the optimistic camp in judgment and
decision making, there are still questions
to be addressed relating to contemporary decision environments in which our
evolved thought mechanisms may not be particularly helpful (Ayton
& Wright, 1994). For example, it is hard to see how heuristic thinking
by jurors could be of benefit to the legal process, and researchers are keen
to identify the kinds of context that will facilitate a more analytical mode
of thought (see Honess & Charman, this issue). In some situations decision
education and decision-analysis techniques may be
of use. However, the effectiveness of such methods, though widely assumed,
is largely untested (Clemen, 1999) and perhaps even doubted (Klein, 1998).
What is needed is an investigation of specific decision
domains, and whether factors that optimise performance in one domain
also help in another domain.
References
Ayton, P. & Wright, G. (1994). Subjective probability: What
should we believe? In G. Wright & P. Ayton (Eds.), Subjective
probability. Chichester: Wiley.