Chapter 12 - Descriptive Approaches to Decision Making

 

THIS CHAPTER WILL DISCUSS:

 

1.  The difference between "optimizing" and "satisficing" models of individual decision making.

 

2.  The effect of "decision heuristics" on individual and group decision making.

 

3.  The utilization of information during group discussion.

 

4.  The meaning of the term "groupthink."

 

INTRODUCTION

 

            In this chapter, we return to the issue of decision making, describing how individuals and groups actually make decisions.  Then, in Chapter 13, "Formal Procedures for Group Decision Making," we shall describe how some theorists think groups should make decisions.  Thus this chapter contains a description of how decisions are made, and Chapter 13 contains a prescription for how decisions perhaps should be made.

            This chapter is a necessary stepping-stone to Chapter 13.  Before scientists can create theories about how groups should make choices, they must have knowledge about how people tend to approach decisions.  In essence, they need to know what group members are capable of doing.  What decision-making capabilities do humans have?  As we shall see, there is much disagreement about the answer to this question.

 

THEORETICAL APPROACHES TO INDIVIDUAL DECISION MAKING

 

Optimizing Decision Theory

 

            We will begin with a discussion of two different types of theories about how individual people make decisions.  Some scientists have adopted one of these, the "optimizing" type of theory.  Optimizing theories make a number of assumptions about how people make decisions.  First, decision makers are believed to consider all possible decision options.  Second, decision makers are seen as assessing all of the available information when making their choice.  Third, decision makers are seen as choosing the option that provides them with the best possible outcome.

 

The Subjective Expected Utility Model

 

            To begin our discussion, we will examine the "subjective expected utility," or "SEU," model.  What is this model?  It is an equation that allows us to predict the decision that individual people will make when faced with a number of options.  Use of the SEU model implies that people act as if they had calculated the "expected utility" of each option.  When people do this, they choose the alternative that they believe has the highest expected utility.

 

Demonstration of the SEU model.  Let us go through a demonstration of how a person could use the SEU model from start to finish as he or she chooses a course of action.  Fischhoff, Goitein, and Shapira (1982) presented this example in full.  Let us say that you are the person making a decision.

            You are home, and you must decide how to get to class on a nice spring day.  The classroom is five miles away, and you have an hour to get there.  The SEU model requires that you take the following steps:

 

1.  You must list all feasible options.  It turns out that you can walk, take a bus, ride your bicycle, or drive.

 

2.  You next need to enumerate all the possible consequences of each action.  For this example, you can list two consequences.  One is getting to class on time and the other is getting some needed exercise in the bargain.

 

3.  Imagine that each option has occurred and assess the attractiveness or aversiveness of each of its consequences.  For instance, how attractive is getting to class on time by walking?  How attractive is getting exercise through walking?  You can use a scale of one to ten to rate the attractiveness, or "utility," of your consequences.  On this day, you decide that both consequences of walking are attractive.  Hence, for the option of walking, getting to class on time gets a utility rating of nine and getting sufficient exercise also receives a nine.

            Other similar evaluations can follow.  Imagine that you give almost every option the utility of nine for each consequence.  The only exception is the attractiveness of bicycling as an exercise.  It is lowered to six by the prospect of having to sit in class while sweating.

 

4.  Evaluate the probability that the consequences of each option will actually occur.  For instance, will walking to class really get you to class on time?  Will walking also truly lead to exercise?  Again, using a scale of one to ten, you can assess this probability.  You feel that walking is decent exercise, so it gets a probability rating of six for your desired consequence of getting exercise.  You will not get to class on time, however, so walking gets a probability rating of only one for this consequence.

            You can rate other options the same way.  The bus is very reliable transportation (probability = nine), but the only exercise you would get is walking to the bus stop (prob. = two).  Bicycling is good exercise (prob. = eight), and barring a flat tire, you will get to class on time (prob. = eight).  Driving is dependable if you can find a parking space.  If you cannot find a space, you will be late for class (prob. = five).  Also, unless you park far from class, you will get no exercise (prob. = one).

 

5.  You need to compute the expected utility, or "EU," of each option.  You do so by multiplying how much each consequence of the option is worth to you (its utility rating) by the probability that it will occur.  The product of this multiplication is the EU for the consequence.  The EU of the entire option is the sum of the EUs of all its possible consequences.  For example, consider the option of riding your bicycle.  To find out whether you should, you want to know the EU of that option.  You believe that riding your bicycle is associated with two consequences: being on time and getting exercise.  Each of these consequences has its own EU.  To find out just what should happen if you ride your bike, you need to examine the EUs of both consequences.  Table 12.1 shows these calculations.

 

6.  Finally, you choose the option with the greatest EU.  As you can see, you should ride your bicycle to class today.

 

            The SEU model is thus an optimizing decision model that is based on a person's own personal estimates of probability and value.  We can use it in circumstances in which it is difficult to obtain objective estimates of decision-making outcomes.  This is often true with decisions that people make in the "real world."  For example, how can a person place an objective value on a scenic view?  Yet millions of people decide every year to visit the Grand Canyon.

 

Table 12.1

           On Time            Exercise
Means      (Prob X Utility)   (Prob X Utility)    EU

Walk        (1 X 9)      +     (6 X 9)      =     63
Bus         (9 X 9)      +     (2 X 9)      =     99
Bicycle     (8 X 9)      +     (8 X 6)      =    120
Drive       (5 X 9)      +     (1 X 9)      =     54

 

Using the SEU model, we can assume that people make their best decision when they try to get the best results for themselves or for whomever the decision should benefit.  This idea fits in with optimizing decision theory.  Remember, though, that the judgments of probability and utility are made from each individual's standpoint.  Therefore, the option that is chosen as the "best" is likely to vary from person to person.
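            For readers who want to check the arithmetic, here is a minimal Python sketch of the calculation in Table 12.1.  The option names, the data layout, and the expected_utility helper are ours, introduced only for illustration.

    # A minimal sketch of the SEU calculation in Table 12.1.  Each option
    # maps each consequence to a (probability, utility) pair, both rated
    # on the chapter's one-to-ten scales.
    options = {
        "walk":    {"on_time": (1, 9), "exercise": (6, 9)},
        "bus":     {"on_time": (9, 9), "exercise": (2, 9)},
        "bicycle": {"on_time": (8, 9), "exercise": (8, 6)},
        "drive":   {"on_time": (5, 9), "exercise": (1, 9)},
    }

    def expected_utility(consequences):
        # EU of an option: the sum, over consequences, of probability X utility.
        return sum(prob * utility for prob, utility in consequences.values())

    for name, consequences in options.items():
        print(name, expected_utility(consequences))  # walk 63, bus 99, ...

    # The optimizing choice is the option with the greatest EU: bicycle (120).
    print(max(options, key=lambda name: expected_utility(options[name])))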

 

Criticisms of the SEU model.  The model is open to several criticisms.  First, it assumes that decision making is in some sense "compensatory."  This means that a good subjective estimate can counterbalance a bad subjective estimate.  In our example, bicycling received a bad utility rating because of the inconvenience of becoming sweaty.  However, it also received a good estimate for the probability of getting to class on time.  Thus, bicycling was the best choice.

            The problem is that some circumstances clearly cannot meet this compensatory assumption.  For instance, a situation can be "conjunctive."  When this happens, an option that fails on one criterion cannot make up for that failing.  All other criteria are immaterial.  Fischhoff et al. (1982) used the example of a couple planning a vacation to illustrate the idea of the conjunctive situation.  The couple wishes to travel to a place that is reasonably priced, available, sunny, and quiet.  They say they will stay at home if no place can meet all four criteria.  For instance, if they arrive at a place that is cheap, available, and sunny, their whole vacation will be ruined if their hotel is close to a noisy highway.

            Other situations may be "disjunctive."  This means a person will choose an option if it is adequate on any criterion.  Fischhoff et al. used an investment opportunity to illustrate this idea.  The investment is acceptable if it is a good speculation, a good tax shelter, or a good hedge against inflation.  The person will make the investment if it is any of these three things.  The point that Fischhoff et al. make is that different circumstances require different procedures for decision making.

            Second, scientists have criticized the model because they are not sure that it accurately reveals the steps that people take as they make decisions.  For example, assume that we have seen Janet bicycling to class.  We wish to discover how she made her decision to use her bicycle.  We ask Janet to tell us of the various alternatives she had available to her, as well as the possible consequences of each.  We further ask her to tell us the probability and utility of each consequence, in relation to every possible action.  We then compute the expected utility of each option.  The model predicts that Janet would have bicycled.  We conclude that Janet used the SEU model to make her decision.

            Our conclusion could easily be wrong.  It may be that Janet only considered the probabilities of getting sufficient exercise and of arriving at class on time.  To make her decision, she simply added the probabilities together.  A model for her decision is illustrated in Table 12.2.

 

Table 12.2

           Probabilities
Means      On Time        Exercise      EU

Walk        1        +     6       =     7
Bus         9        +     2       =    11
Bicycle     8        +     8       =    16
Drive       5        +     1       =     6

 

            As you can see, Janet made the same decision that the SEU model predicted she would make.  However, she did not consider the utility of each consequence.  Janet was only concerned with the probability of whether the consequence would occur.  It was true that Janet could report the utility of each consequence when we asked her.  Still, she did not use these utility ratings when she originally made the choice to bicycle to class.

            We can propose many other models that would make the same prediction.  Each would show that bicycling would be the best course of action for Janet, based on her situation.  Note, for example, the probability ratings for getting sufficient exercise.  These alone could lead to a prediction that bicycling was the best option for Janet.
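            As a sketch of this point, consider the ratings in Table 12.2 and several arbitrary decision rules (the rules and their names are ours, chosen only for illustration).  Every one of them picks bicycling, so observing the choice alone cannot tell us which process Janet actually used.

    # Janet's probability ratings from Table 12.2, with three illustrative
    # decision rules.  All three rules pick bicycling.
    ratings = {
        "walk":    {"on_time": 1, "exercise": 6},
        "bus":     {"on_time": 9, "exercise": 2},
        "bicycle": {"on_time": 8, "exercise": 8},
        "drive":   {"on_time": 5, "exercise": 1},
    }

    rules = {
        "sum of probabilities": lambda r: r["on_time"] + r["exercise"],
        "exercise only":        lambda r: r["exercise"],
        "worst consequence":    lambda r: min(r.values()),
    }

    for rule_name, score in rules.items():
        best = max(ratings, key=lambda name: score(ratings[name]))
        print(rule_name, "->", best)  # every rule prints "bicycle"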

            Thus, many models can predict decisions as well as the SEU model.  This means scientists must turn to other evidence to discover how people make decisions.  Researchers have done just that.  Some evidence has even cast doubt on the theory behind the SEU model.  These findings suggest that people may not naturally optimize when they make decisions, even when scientists can predict their decisions by using the SEU model.

 

Satisficing Decision Theory

 

            Simon (1955) was the first prominent theorist to doubt that people are able to calculate the optimal choice.  He believed that it is impossible for people to consider all the options, and all the information about those options, that the SEU and similar models assume.  Simon proposed his own model of decision making as an alternative to the optimizing approach.  He called his proposal the "satisficing" decision model.  It implies that people think of options one by one and choose the first course of action that meets or surpasses some minimum criterion that will satisfy them.

            Simon believed that decision makers establish a criterion (their "level of aspiration") that an alternative must meet in order to be acceptable.  People examine possible options in the order that they think of them.  Eventually, they accept the first option that meets their criterion.  To illustrate Simon's idea, we shall return to the example of choosing how to get to class.

 

Example

 

            Suppose four possible courses of action will help you get to class.  Each has a number that represents its subjective value.  One of the possibilities is walking, which has a value of 6. The others are taking the bus (10), bicycling (12), and driving (5).  Keeping these subjective values in mind, you begin the process of deciding on a course of action.

            First, you establish a level of aspiration.  You decide, for example, that an option must have the value of at least 8 before it will satisfy you.  Next, you evaluate your options. You first think of walking.  It has a value of 6.  This does not meet the level of aspiration. Therefore, you reject it as a possibility.  The second option that comes to your mind is taking the bus. It is worth 10.  This meets the level of aspiration, so you accept it.
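            Here is a minimal sketch of this satisficing process, assuming the subjective values above; the ordering of the options and the aspiration constant are ours, chosen to match the example.

    # A sketch of satisficing: examine options in the order they come to
    # mind and accept the first one that meets the level of aspiration.
    LEVEL_OF_ASPIRATION = 8

    # Options in the order you happen to think of them, with subjective values.
    options_in_order = [("walk", 6), ("bus", 10), ("bicycle", 12), ("drive", 5)]

    def satisfice(options, aspiration):
        for name, value in options:
            if value >= aspiration:
                return name  # accept the first satisfactory option
        return None  # nothing satisfied; the aspiration level may then drop

    print(satisfice(options_in_order, LEVEL_OF_ASPIRATION))  # "bus", not "bicycle"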

 

Satisfactory versus optimal.  You may wonder why our example above did not once again lead to the decision to bicycle to class.  We know that bicycling is the optimal decision, because it has a value of 12.  However, Simon believed that you would never consider bicycling.  The idea of taking the bus came into your head before you had a chance to think about bicycling. Once you found the satisfactory option of taking the bus, you never thought of any other possibilities.  Hence, you end up with a satisfactory option, but not the optimal one.

            Despite the example above, Simon believed that, in the long run, the satisficing process leads to the optimal decision more often than not.  He believed that a person's level of aspiration can rise and fall over time.  This fluctuation depends on the respective ease or difficulty of finding satisfactory options.  In our example, you were able to find a satisfactory option fairly easily.  Taking a bus was only the second alternative you considered.  Perhaps you will become more demanding the next time you wonder how to get to class.  Because you reached a decision so easily the first time, you may feel more confident that there is an even better option available to you.

            In this situation, you will probably raise your level of aspiration.  It is hoped that the criterion will continue to shift upward over time.  Ideally, it should reach the point where only the optimal choice will be satisfactory.  If this happens, the results of the satisficing model will approximate the outcome of an optimizing model.  People will make their best choice despite their inability to optimize.

 

Decision Heuristics

 

            Simon's satisficing model is an example of a "heuristic."  A heuristic is a simplified method by which people make judgments or decisions.  These methods approximate the results of more complex optimizing models, but they are easier for people to use.  Many studies have shown that people usually use heuristics when they make judgments and decisions.  This evidence continues to mount.

 

Tversky and Kahneman Heuristics

 

            In a classic article in 1974, Tversky and Kahneman proposed three heuristics that people seem to use when they estimate the probabilities of events.  As with Simon's satisficing model, these heuristics are far simpler than analogous optimizing methods.  They also usually lead to the optimal judgment, as Simon's methodology does.

            However, heuristics do have a negative side.  When they backfire, the errors that result are not random.  Thus, the results will not cancel each other.  Instead, when people follow a heuristic model, their errors will be biased in ways that are harmful to decision making.  This is an important aspect of the heuristics that we shall examine.

 

Representativeness heuristic.  The first heuristic that Tversky and Kahneman proposed was the representativeness heuristic.  This heuristic is relevant when people attempt to estimate the extent to which objects or events relate to one another.  The representativeness heuristic maintains that, when people do this, they note how much the objects or events resemble one another.  They then tend to use this resemblance as a basis for judgment when they make their estimates.

            As with other heuristics, the representativeness heuristic usually leads to correct judgments.  Nisbett and Ross (1980) provide an example of this.  Someone asks Peter to estimate how representative an all-male jury is of the United States population as a whole.  He will no doubt give the jury a low estimate, and he would be correct.  Clearly, the population of the United States is made up of both men and women.  Therefore, an all-male jury does not "look like" the general population.  Peter notes this and makes the correct, low estimate.

            However, in many circumstances basing judgments on resemblance leads to error.  For instance, people may have additional information that can help them find out the probability that the objects or events they consider are related.  In these situations, people are incorrect if they use resemblance as the sole basis for judgments.

            In one of Tversky and Kahneman's studies, the researchers gave participants a personality description of a fictional person.  The scientists had supposedly chosen the person at random from a group of 100 people.  They told participants that 70 people in the group were farmers and 30 were librarians.  They then asked the participants to guess whether the person was a farmer or a librarian.  The description of the fictional person was as follows:

 

Steve is very shy and withdrawn. He is invariably helpful, but he has little interest in people or in the world of reality.  A meek and tidy soul, he has a need for order and structure and a passion for detail.

 

Most people in the experiment guessed that Steve was a librarian.  They apparently felt that he resembled a stereotypical conception of librarians.  In so doing, the participants ignored other information at their disposal.  They knew that Steve was part of a sample in which 70 percent of the members were farmers.  Thus, the odds were that Steve was a farmer, despite his personality.  The participants should have taken these odds into account when they made their decision.

 

            Cause and result.  Individuals may also err when they judge whether an event is the result of a certain cause.  This might happen if they look for the extent to which the event resembles one of its possible causes.  If people use this resemblance, they may choose an incorrect cause.

            For example, imagine being shown various series of the letters "H" and "T."  You are told that each series came from tossing a coin.  One side of the coin was "H" ("Heads") and the other side was "T" ("Tails").  Many people think that a series such as HTHTTH was most likely produced by a random tossing of the coin.  This is because the series looks random to them.  In contrast, they do not think that a series such as HHHHHH or HHHTTT resulted from a random process.  They are wrong.  A random process can produce all of these series.

            Many people misunderstand random processes.  They think the result of a random cause should "look" random.  This is not necessarily true.  We can see how a random process would lead to results that look rather unrandom.  On the first toss of a coin, for example, there is a 50 percent chance of H and a 50 percent chance of T.  No matter what the result of this first flip is, the second toss will have the same odds.  There will again be a 50 percent chance of either H or T.  Thus there is a 25 percent chance of each of the following combinations: HH, HT, TH, and TT.  Continuing this logic, for six tosses there is a 1.5625 percent chance of HTHTTH, of HHHTTT, of HHHHHH, and of each of the other 61 possible six-flip series.  As you can see, all the different series have the same odds, and all have a random cause.
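            This arithmetic can be checked directly; a quick Python sketch (the variable names are ours):

    from itertools import product

    # Any specific sequence of six fair-coin flips has probability (1/2)**6.
    print(0.5 ** 6)  # 0.015625, i.e., 1.5625 percent

    # There are 2**6 = 64 equally likely sequences; HTHTTH, HHHTTT, and
    # HHHHHH are each exactly as probable as any of the other 61.
    sequences = ["".join(flips) for flips in product("HT", repeat=6)]
    print(len(sequences))  # 64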

            A similar error is the "gambler's fallacy."  This is the feeling that, for instance, after a series of HHHHHH, the next flip ought to be a T.  The "gambler" believes this because a T would "look" more random than another H would.  However, as long as the coin is fair, there is still a 50-50 chance that the next flip will be an H.

            Hence, the representativeness heuristic often leads to correct answers, but it can also cause people to err in their judgments.  Outcomes that resemble one another are not necessarily related.

 

Availability heuristic.  Tversky and Kahneman's second proposal was the availability heuristic.  This heuristic maintains that the ease with which examples of an object or an event come to mind is important.  People tend to estimate the probability that an event will occur, or that an object exists, based on how easily they can think of examples.

            As with the representativeness heuristic, this strategy usually leads to satisfactory decisions.  For example, someone may ask you whether more words in the English language begin with "r" or with "k."  You can think of words at random, tallying them up as they come into your mind.  You are then able to figure out the percentage of words that begin with each letter.  In this way, you could no doubt correctly decide which letter starts more words.  Similarly, availability helps the satisficing model work as well.  One reason satisficing usually results in the optimal choice is that the best option usually comes to mind quickly.

            However, as with the representativeness approach, the availability heuristic can easily lead people astray.  There are many factors that bring an object to our attention.  Some of these factors are not conducive to good judgment.

            One study revealed that how well known something is can cause people to make incorrect decisions.  In the experiment, participants heard a list of names of men and women.  The researchers then asked them to judge whether the list had more men's names or more women's names.  The list actually had an equal number of names from each gender.  However, some of the names were better known than others.  The well-known names were mainly from one gender, and the participants tended to choose that gender as the one that supposedly dominated the list.

            In another study, experimenters asked participants which English words were more common, those with "r" as their first letter or those with "r" as their third letter.  Most people said that words that begin with "r" are more numerous.  They probably did so because it is easy to think of relevant examples, such as "rat," "rabbit," "really," etc.  However, this was the wrong answer.  You need only look at any random piece of writing to see this.  In fact, you can look at the words in the sentence that described this experiment: "participants," "words," "were," "more," and "first."  However, in comparison with words that begin with "r," it is relatively difficult to think of examples in which "r" is the third letter in a word.  This is because we tend to use first letters to organize words in our minds.
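            As a toy illustration of this point, consider the sketch below; the sample sentence stands in for a real corpus, so the counts are only suggestive.

    # A toy check, using a stand-in sentence rather than a real corpus:
    # words with "r" as their third letter are common in running text.
    sentence = ("experimenters asked participants which English words "
                "were more common by first letter or by third letter")
    words = sentence.split()

    first_r = [w for w in words if w.startswith("r")]
    third_r = [w for w in words if len(w) >= 3 and w[2] == "r"]

    print(first_r)  # []
    print(third_r)  # ['participants', 'words', 'were', 'more', 'first']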

            Thus, the availability heuristic often leads to correct conclusions.  However, it can also create errors.  People may think quickly of well-known or vivid examples.  It may be, however, that the more well-known options are not the best decisions that people can make.

 

Conjunctive fallacy.  An implication of the representativeness and availability heuristics is the conjunctive fallacy.  The conjunctive fallacy is the tendency to believe that the conjunction, or combination, of two attributes or events (A and B) is more likely to occur than one of its parts (A).  The conjunctive fallacy occurs when the conjunction is either more representative of a stereotype or more available to our imagination.

            For example, imagine that the probability that Sue Blue is smart is 40 percent and the probability that Sue Blue wears glasses is 30 percent.  Given this information, what is the probability that Sue is both smart and wears glasses?  The most it can be is 30 percent, and that only if everyone who wears glasses is smart.  Normally, the probability of a conjunction will be less than either of its parts.  However, if we hold the stereotype that smart people wear glasses, or find the combination easy to imagine, we might judge the conjunction to be more probable than one of its parts, perhaps even higher than 40 percent.
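            The bound is easy to verify; a minimal sketch (variable names ours):

    # Upper bound on a conjunction: P(A and B) can never exceed min(P(A), P(B)).
    p_smart = 0.40    # P(Sue is smart)
    p_glasses = 0.30  # P(Sue wears glasses)

    print(min(p_smart, p_glasses))  # 0.3, the most the conjunction can be
    print(p_smart * p_glasses)      # 0.12, if the attributes were independent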

            Tversky and Kahneman (1983) found evidence for the conjunctive fallacy in a study of research participants' estimates of the attributes of imaginary people.  They gave participants descriptions such as:

Bill is 34 years old.  He is intelligent but unimaginative, compulsive, and generally lifeless.  In school, he was strong in mathematics but weak in social sciences and humanities.

They then asked their participants to judge the probability that Bill

A - is an accountant

B - plays jazz for a hobby

A & B - is an accountant who plays jazz for a hobby.

About 85 percent of the participants gave a higher probability to the A-B conjunction than to B alone.  One can guess that the description of Bill meets the stereotype of an accountant but not the stereotype of a jazz musician.  Nonetheless, given that participants thought it likely that Bill was an accountant, they must also have thought it reasonable that he might have an unusual hobby.

            Leddo, Abelson, and Cross (1984) found a similar effect when they told their participants phony facts such as "Jill decided to go to Dartmouth for college" and asked them to judge the probability that each of the following was a reason for the decision:

1 - Jill wanted to go to a prestigious college.

2 - Dartmouth offered a good course of study for her major.

3 - Jill liked the female/male ratio at Dartmouth.

4 - Jill wanted to go to a prestigious college and Dartmouth offered a good course of study for her major.

5 - Jill wanted to go to a prestigious college and Jill liked the female/male ratio at Dartmouth.

Seventy-six percent of the participants chose one of the conjunctive explanations (4 or 5) over any of the single ones.

 

Vividness criterion.  Nisbett and Ross (1980) argued that there is one significant reason that the representativeness and availability heuristics sometimes lead to incorrect decisions.  They proposed a "vividness criterion."  They believed that this criterion was the basis for much of the misuse of the two heuristics.  The criterion involves the idea that people recall information that is "vivid" far more often and far more easily than they recall "pallid" information.  Something is vivid when it gets and holds our attention.  Information can do this in different ways.  One is the extent to which the information is emotionally interesting and relevant to ourselves or to someone whom we value.  Another is the extent to which it is image-provoking or concrete.  Information is also vivid if it is temporally or spatially proximate, that is, close to us in time or distance.

            Judging by news media attention, people appear to have far greater interest in events that happen near to them than in events that take place far away.  For instance, they will have a large amount of interest in the murder of one person in their town.  This will be particularly true if the story is accompanied by vivid pictures of the victim.  In contrast, people will be only somewhat interested in the murder of thousands of people in some far-off land.  They will have even less interest if no pictures accompany the report.

            We can see how the idea of the vividness criterion was at work in some of the heuristic examples we have already discussed.  For instance, people placed a great deal of trust in the concrete description of "Steve."  The description evoked images of a shy and orderly man.  In contrast, the participants did not pay much attention to the pallid, statistical information that 70 percent of the sample were farmers.  Hence, the participants made incorrect decisions because they concentrated only on the vivid information.  Nisbett and Ross have shown this to be a normal tendency in other studies they have described.

 

Anchoring heuristic.   Tversky and Kahneman proposed a third heuristic called the anchoring heuristic.  This approach involves the manner by which people adjust their estimates.  When people make estimates, they often start at an initial value and then adjust that value as they go along.  Researchers have found that people tend to make adjustments that are insufficient.  In other words, people are too conservative in the weight that they give new information.  They tend to use their first estimate as an "anchor," and it is difficult for them to move away from it and create new estimates.  The anchoring heuristic describes this tendency.

            In one of their studies, Tversky and Kahneman asked participants to estimate the product of 8 X 7 X 6 X 5 X 4 X 3 X 2 X 1 and the product of 1 X 2 X 3 X 4 X 5 X 6 X 7 X 8.  As you can see, the two series are the same.  However, it appears that people place too much weight on the first few numbers in such a series.  The mean estimate that participants gave for the first product was 2,250.  This was far greater than the mean estimate for the second product, which was 512.  In fact, the adjustment was woefully inadequate for both series.  The participants were far off in their calculations.  The correct answer is 40,320.
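            A quick check of the correct answer (a sketch):

    import math

    # Both series are the same product; each equals 8! = 40,320.
    print(8 * 7 * 6 * 5 * 4 * 3 * 2 * 1)  # 40320
    print(math.factorial(8))              # 40320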

 

Framing.  More recently, Kahneman and Tversky (1984) have shown that the way in which someone describes a decision has a large effect on how people will make it.  Kahneman and Tversky gave all their participants the following description of a problem:

 

Imagine that the U.S. is preparing for the outbreak of an unusual disease.  Officials expect that the disease will kill 600 people.  They have proposed two alternative programs to combat the illness.  Assume that the exact scientific estimates of the odds for the various programs are as follows:

 

The researchers then gave half their study participants the following options and told them to choose one of them:

 

If the country adopts Program Alpha, 200 people will be saved.

 

If the country adopts Program Beta, there is a 1/3 probability that 600 people will be saved but a 2/3 probability that no people will be saved.

 

Through calculation we can see that both programs have an expected outcome of 400 deaths (200 lives saved).  Thus, to the optimizing theorist they are equivalent.  However, 72 percent of the experimental participants chose Program Alpha.  Apparently, they were reacting to the probable loss of 600 lives in Program Beta.
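            The equivalence is easy to confirm; a minimal sketch:

    # Expected deaths under each program, with 600 lives at risk.
    alpha_deaths = 600 - 200                    # 200 people saved for certain
    beta_deaths = (1 / 3) * 0 + (2 / 3) * 600   # gamble over how many are saved

    print(alpha_deaths, beta_deaths)  # 400 and 400.0: the programs are equivalent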

            The experimenters gave the other half of the participants the following options instead:

 

If the country adopts Program Theta, 400 people will die.

 

If the country adopts Program Omega, there is a 1/3 probability that nobody will die, but a 2/3 probability that 600 people will die.

 

As you can see, the wording of the two programs has changed.  Program Theta is exactly equivalent to Program Alpha.  Program Omega is the same as Program Beta.  The only difference is in the framing.  Theta and Omega (the "loss" frame) enumerate how many of the 600 will die, whereas Alpha and Beta (the "gain" frame) describe how many will live.

            The results for this half of the participant population contrasted with the outcome from the half that saw Program Alpha and Program Beta; 78 percent of this new half chose Program Omega.  The researchers believed that the participants reacted to the chance that nobody would die.  Clearly, the different descriptions had a huge effect on people's choices.  The experimenters simply described the same options in terms of dying, rather than in terms of people saved, and thereby changed which option their participants found most attractive.

 

Decision Heuristics in Group Decisions

 

            All of the studies we have just described show that heuristics can cause bias in decisions made by individual people.  Do the same effects occur in decision-making groups?  Arguments can be made on both sides of the issue.  One can claim that discussion provides the group with the opportunity to correct the errors in judgment made by the individual members.  However, Tindale (1993) made a good argument for the other side.  Suppose a group makes a decision based on majority rule.  Also suppose that there is a relevant decision heuristic that leads more than half of the group's members to make the wrong judgment.  In that case, the group is likely to make the wrong decision, because the incorrect majority will outvote the correct minority.

            Thus there are good reasons in favor of either side of the issue.  Given this, it should not be surprising to find that, according to research, groups are sometimes more and sometimes less susceptible to judgment bias than individuals.  In the following paragraphs, we will describe some of this research.

           

Representativeness Heuristic.  Argote, Devadas, and Melone (1990) performed a study similar to the Tversky and Kahneman research described earlier.  Five-member groups and individuals were told that, for example, 9 engineers and 21 physicians had applied for membership in a club.  Then participants were given brief descriptions of three of the applicants: Ben, who was described as a stereotypical engineer; Roger, who was described as a stereotypical physician; and Jonathan, whose description fit neither stereotype.  The participants were then asked to estimate the probability that each of the three was an engineer.  In addition, the individuals were asked to talk through the problem as they solved it so that the researchers could compare group and individual "discussion."

            Given that 30 percent of the applicants for the club were engineers, the participants would have made unbiased judgments if their estimates for all three were 30 percent.  Table 12.3 shows the average judgments the participants made.

 

Table 12.3

Argote et al. Study

Applicants     Individual Judgment     Group Judgment

Ben            63%                     77%
Roger          25%                     17%
Jonathan       39%                     30%

 

As you can see, both the individuals and the groups were biased in their judgments for the two stereotypical applicants.  Further, the groups were more biased than the individuals in these judgments.  In contrast, when there was no relevant stereotype, the individuals and, even more so, the groups, made unbiased judgments.

            A content analysis of the comments made by the participants during their decision making gives an indication of the role played by group process in these decisions.  For example, groups were more likely than individuals to say that the description of Ben was similar to an engineer and dissimilar to a physician, and that the description of Roger was similar to a physician and dissimilar to an engineer.  Individuals were more likely than groups to say that the descriptions of Ben and Roger were not relevant.  All of this is evidence that, for the two stereotypical applicants, groups were more likely to focus on the individual descriptions rather than on the relevant proportions.  This likely accounts for why groups tended to make more biased decisions than individuals in these cases.  In contrast, groups were more likely than individuals to refer to the relevant proportions when discussing Jonathan.  This may be why groups made less biased decisions about him.

 

Availability Heuristic.  Unfortunately, there does not appear to be a study directly comparing group and individual susceptibility to the availability heuristic.  There is, however, a study comparing group and individual performance relevant to the conjunctive fallacy, which can result from the availability of examples.  As described above, the conjunctive fallacy is the tendency for people to judge that the combination of two attributes or events is more likely to occur than one of the attributes or events alone.  Tversky and Kahneman (1983) and Leddo, Abelson, and Cross (1984) found evidence for the conjunctive fallacy in two different types of circumstances.  Tindale (1993) reported a study by Tindale, Sheffey, and Filkins in which groups and individuals were given problems of both types.  Overall, individuals committed the conjunctive fallacy about 66 percent of the time, and groups about 73 percent of the time.  Thus groups were more susceptible to this bias than individuals.

 

Anchoring Heuristic.  Davis, Tindale, Nagao, Hinsz, and Robertson (1984) performed a study showing that both groups and individuals are susceptible to anchoring effects, although the study does not easily tell us whether either is more susceptible.  Individuals and six-member mock juries were shown videos of a mock trial in which a defendant was charged with, in order of seriousness, reckless homicide, aggravated battery, and criminal damage to property.  Participants were then asked to deliberate either from the most serious charge to the least (reckless homicide first, criminal damage to property last) or from the least to the most (criminal damage to property first, reckless homicide last).  In all cases, participants discussed aggravated battery between the other two charges.  If anchoring were to occur, participants would be more likely to view the defendant harshly and find him guilty on the intermediate charge (aggravated battery) after discussing reckless homicide than after discussing criminal damage to property.  Further, they would be more likely to find the defendant guilty after finding him guilty on the first charge discussed than after finding him not guilty on the first charge.  Both anchoring effects occurred.

 

Framing Effects.  A study by Neale, Bazerman, Northcraft, and Alperson (1986) implies that groups may be less susceptible to framing effects than individuals.  Neale et al. asked participants to make decisions individually on problems similar to the Kahneman and Tversky (1984) "disease" problem discussed earlier.  The results of the individual decisions were consistent with the Kahneman and Tversky results; those given the "gain" frame tended to choose the first option, and those given the "loss" frame tended to choose the second option.  The researchers then asked their participants to make a second decision about the problems, this time in groups consisting of four members, all of whom had the same frame.  The group decisions were less susceptible to framing effects than the individual decisions.

 

General Conclusions

 

            As we have shown, there is overwhelming evidence that people generally do not use optimal methods for estimating the probabilities of objects and events.  The experiments we have discussed above often found that people did not carefully calculate their estimates.  It may be that the calculations for the SEU model and other optimal approaches are too difficult.  We should note here, however, that Nisbett et al. (1983) provided evidence that people can use optimal methods when they make the effort to do so.  Nevertheless, the truth is that people usually do not use optimal models.  Instead, they use heuristic methods.  Heuristics usually lead to reasonably accurate judgments, though they can also lead to errors.  Interestingly, researchers have been able to predict many of these errors.  Well-known and vivid data can cause errors, for example.  Incorrect estimates may also occur when a person's initial guess is far off the mark.

            Does group discussion help individuals overcome the errors caused by the use of decision heuristics?  Or does group discussion make these errors more likely?  The research we have just described does not allow us to reach a clear conclusion about this issue.  The answer seems to differ among the various heuristics Tversky and Kahneman identified.  It also seems to depend on the specific judgment that group members are trying to make.  Much more research is needed before we will be able to predict when groups are and are not more prone to judgmental errors than individuals.

 

Information Recall

 

            Another area in which groups do not optimize is the success they have in recalling information.  Suppose Evelyn and Gertrude were each shown ten items of information one day and asked to remember them the next day.  Working alone, each is able to remember five of the items.  If they worked together, would their memory be any better?

            It turns out that this problem can be thought of as a productivity task and treated just like the tasks of this type that we discussed back in Chapter 2.  To use terminology from Chapter 2, it is possible that remembering items of information during group discussion is either wholist (people working together inspire one another to better thinking than if each were working alone) or reductionist (members either "socially loaf" or get in one another's way when performing the task together), or that interaction has no effect on individual recall.  If information recall is unaffected by interaction, then the number of items of information recalled by group members should be accurately predicted by Lorge and Solomon's (1955) Model B.  Model B, described in Chapter 2, is relevant to the situation in which a group must make a set of independent judgments or decisions.  Recalling a set of informational items is an example of this situation.  Model B presumes that the odds of a group recalling each item are governed by Model A, and that the total number of items recalled by the group is those odds multiplied by the number of items the group has been given to remember.  So, for example, if the odds that a person remembers an item of information are .4, then the odds that a two-member group (a dyad) would recall it are .64 (as computed by Model A), and if the dyad were given 20 items of information, Model B predicts that they would remember .64 multiplied by 20, or 12.8 of them, on average.
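            A minimal sketch of the two models as described here (the function names are ours):

    # Model A: the probability that at least one of n members recalls an
    # item, when each member independently recalls it with probability p.
    def model_a(p, n=2):
        return 1 - (1 - p) ** n

    # Model B: the expected number of items recalled out of k independent items.
    def model_b(p, k, n=2):
        return model_a(p, n) * k

    print(model_a(0.4))      # 0.64 for a dyad
    print(model_b(0.4, 20))  # 12.8 of 20 items, matching the example above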

            There are two research studies showing that Model B does a good job of predicting the average number of items of information recalled by dyads.  Meudell, Hitch, and Kirby (1992) did a series of experiments that support the notion that memory is not facilitated by interaction.  They were as follows:

Experiment 1 - participants were given a list of 24 words to remember.  Three months later, they recalled them either alone or in a dyad.

Experiment 2 - participants were shown a 10-minute film clip and then, after a delay spent performing an irrelevant task, were asked to recall 20 items from the film either alone or in a dyad.

Experiment 3 - participants were shown the names and faces of 25 well-known people and 25 strangers.  Later, they were shown the faces and asked to recall the associated names.

Experiment 4 - a replication of the first study, except that recall was after a short delay rather than after three months.

Wegner, Erber, and Raymond (1991) asked participants to remember a list of 64 paired words either with their dating partner or with a stranger.  In addition, some participants were told that each member should "specialize" in remembering particular categories of words.

            The results of all these studies can be found in Table 12.4.

 

Table 12.4

Number of Recalled Items

Study                                            Individual   Model B Prediction   Dyad
                                                 Recall       for Dyads            Recall

Meudell et al.
     Study 1                                     3.9          7.2                  6.2
     Study 2                                     9.1          14.0                 11.4
     Study 3 - Familiar                          12.5         18.7                 16.5
             - Unfamiliar                        5.1          9.0                  8.6
     Study 4                                     11.5         17.5                 16.0

Wegner et al.
     Dating couples, assigned specialization     13.7         17.5                 16.0
     Dating couples, no assigned specialization  18.9         32.2                 31.4
     Stranger couples, assigned specialization   18.2         31.2                 30.1
     Stranger couples, no assigned specialization 16.3        28.5                 25.4

 

            Note that throughout these data, dyad performance was, if anything, worse than Model B predicted.  The experience of recalling information in dyads did not improve recall beyond what the statistical pooling of individual memories would produce.  The findings of the Wegner et al. study are particularly noteworthy.  Even dating couples whose members specialized in particular categories did no better at remembering the paired words than Model B predicted.

 

BRIDGING THE GAP

 

            We have just examined the heuristic-based, satisficing approach to decision making.  As we have shown, it offers a simplified picture of how people make judgments and choose their courses of action.  There is a gap between researchers who support the optimizing models and those who prefer the satisficing approach, and the debates between proponents of the two viewpoints can be heated.  However, this has not stopped other theorists from trying to bridge the gap.  We will now describe ways in which researchers have tried to combine the best elements of both approaches.

            In so doing, theorists have identified a circumstance in which group members satisfice regularly.  This has negative consequences.  We have come to call this type of situation "groupthink."

 

Arousal Theory

 

            Some scientists argue that researchers need to revise the whole theory behind the models that we have examined.  They believe that theorists should not search for a single model that always represents the decision-making process, and they argue that experimenters could spend their time better in another way: by discovering how various circumstances relate to different models.  Hence, they believe that an alternative, situation-based theory is in order.

            This new theoretical concept in some ways accepts the idea of optimizing.  It assumes that people are capable of optimizing under ideal circumstances.  However, the theory also maintains that as situations become less and less ideal individuals are less and less able to optimize.

            If this idea is true, different models are accurate at different times.  The trick is to find when each model is most applicable.  For instance, when you need to decide how to escape a burning building, you will probably follow a simplified model of decision making.  You need to make a decision quickly.  In contrast, when you sit down to plan your vacation, you may follow more complicated steps as you make your decisions.

            This view is consistent with a group of similar proposals that focus on how situations affect humans.  These proposals fall under one overall concept.  We call this concept "arousal theory" (see, for example, Berlyne, 1960).  The theory maintains that a sort of cognitive "energy" exists in all of us that drives our psychological operations.  Arousal takes place as this energy increases.  Different situations "produce" different amounts of arousal.  When situations become more "complex," arousal increases.

            Many variables can contribute to the complexity of a situation.  One variable is the amount of information that the person must process.  Others include the novelty of the information and the consistency between pieces of data.  Still other variables involve the extent to which the information changes over time, the clarity of the data, and the difficulty of understanding the information.

            Our ability to process information and make decisions based on that information is an inverted U-function of arousal.  In other words, a graph of how arousal affects the decision-making process would look like an upside-down U.  At the beginning of the graph, the situation is not complex and arousal is low.  We are not interested enough to make good decisions. In short, we are bored.  If the situation begins to be more complex, the graph line will start to move up.  We are now becoming more interested and excited, and we make better decisions.  However, as complexity increases past some point, it "overloads" us.  This is where the line of the graph levels off and begins to move down.  We start to panic and make poor choices.  The more complexity continues to increase, the more we panic, and the worse our decisions become.  Thus, there is an optimum amount of arousal.  At this amount, we are able to make our best decisions.  However, when the level of arousal increases or decreases from this optimum point, our decisions become less than best.

 

Conflict Theory

 

            Janis and Mann (1977) proposed a theory of decision making based on arousal theory. They claimed that choices that are personally important lead to complex situations.  The complex situations can, in turn, result in intrapersonal conflict.  This is particularly true if the possible options have potentially serious shortcomings.  Such a circumstance produces arousal.  The arousal tends to increase until the person makes a decision and then to decrease.

            For example, Abby must decide if she should leave her job and join a new, small company that promises her a management position.  This is a personal decision that is very important to her. Both options have shortcomings.  If she stays at her present job, she does not feel that there are opportunities for advancement.  If she joins the small firm, she will be in an insecure position because the company may fail.  This dilemma causes her great anxiety. Abby feels conflict within herself over which option to choose.  She will probably continue to feel anxiety until she makes a decision, one way or another.

            Janis and Mann emphasize that decisional conflict may be either good or bad for the person making the decision.  This is consistent with arousal theory.  Whether the conflict is good or bad depends on the amount of stress that a person feels.  Optimal amounts of arousal help people to use optimizing decision-making procedures.  In contrast, arousal levels that are either greater or lesser than optimal may cause people to use satisficing procedures instead.

            For instance, if Abby feels little internal conflict, she may not be very aroused concerning her decision.  She may just decide quickly on one of the options.  If she has the right amount of stress, Abby will be seriously concerned with her choice.  She may sit down and carefully figure out just what she should do.  In this case, she may follow the steps and calculations of an optimizing model.  Finally, if Abby feels too much stress, she may just want to make the decision as quickly as possible.  In that case, she might use a simplified satisficing method.

 

Questions and answers model.  The specific theory Janis and Mann created is based on a question-and-answer model.  Their model claims that a decision maker asks himself or herself a series of questions.  The answers that the person gives influence the amount of arousal that he or she feels.  This, in turn, influences what process of decision making the person will use.  Let us go through the questions that make up this model, assuming that the decision maker is faced with an unsatisfactory circumstance (a schematic sketch of the full sequence follows the list):

 

1.  Question 1 = "Are the risks serious if I take no action?"  If the answer is no, little arousal occurs, and the person will take no further action.  If the answer is yes, some arousal takes place, and decision making begins.  Usually the person will begin by thinking of the first available alternative to the status quo.  For example, Abby may answer this question by saying, "Yes, the risks are serious.  My present job will not lead to a management position."

 

2.  Question 2 = "Are the risks serious enough if I take the most available action?"  If no, the decision maker chooses the most available option besides the status quo.  For instance, Abby would simply join the small firm.  The person's arousal will then decrease.  This is a satisficing decision strategy, but it is sufficient for the circumstance.  If, however, the decision maker answers yes, arousal increases.  For instance, Abby may say, "Yes, the risks are great.  The new company is not very stable and could fail tomorrow."

 

3.  Question 3 = "Is finding a better alternative than the most available one a realistic possibility?"  If no, then "defensive avoidance" takes place.  The person will try to avoid finding a new alternative.  The exact nature of this technique depends on how the person answers two auxiliary questions.  He or she will ask these only if the answer to Question 3 is no:

 

a.  Auxiliary Question 3a = "Are the risks serious if I postpone the decision?"  If the answer is no, the individual avoids making a choice through procrastination.  If the answer is yes, he or she moves on to Auxiliary Question 3b.

 

b.  Auxiliary Question 3b = "Can I turn the decision over to someone else?"  If yes, the person does just that.  If the answer is no, the decision maker will choose the most available alternative.  This is again a satisficing strategy, but in this case it is not sufficient for the circumstance.  The person attempts to make the decision "feel" better.  He or she does this by psychologically exaggerating the decision's positive consequences and minimizing its negative consequences.  The person may also try to minimize the responsibility that he or she feels for the decision.

            No matter which technique the individual chooses, the person who answers no to Question 3 will eventually lower his or her arousal.  However, the person will probably have made a poor decision.  Neither Auxiliary Question 3a nor 3b will be a factor if the person answers yes to Question 3.  In that case, his or her arousal continues to increase.  Abby may say, "Yes, I think that perhaps I could find a better job than either of the two options."  She is now getting very concerned about what course of action she should take.

 

4.  Question 4 = "Is there sufficient time to make a careful search for data and to evaluate the information once I have it?"  If the answer is yes, arousal should be near the optimal level for the person.  This allows optimizing decision making and the potential for the best decision.  For instance, Abby says that yes, she has the time.  She can stay at her old job for a bit, and the new company says that it will wait for a time.  She investigates other job opportunities and finds out more about the stability of the new company.  Finally, she decides that the new company is on firm ground, so she joins it and gets her managerial position.  In contrast, a person may feel that there is no time and must answer no to Question 4.  In this case, arousal increases beyond the optimal level.  The decision maker panics.  He or she will then follow a quick, satisficing method and come to a poor decision.
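            Schematically, the question sequence can be rendered as a small decision function.  This is our illustrative sketch, not Janis and Mann's own notation, and the outcome labels merely paraphrase the text above.

    # An illustrative rendering of Janis and Mann's question sequence.
    # Each argument is True ("yes") or False ("no").
    def decision_process(risks_if_no_action, risks_if_available_action,
                         better_alternative_realistic, risks_if_postponed,
                         can_delegate, sufficient_time):
        if not risks_if_no_action:
            return "stick with the status quo (little arousal)"
        if not risks_if_available_action:
            return "take the most available option (satisficing, sufficient)"
        if not better_alternative_realistic:
            # Defensive avoidance; its form depends on the auxiliary questions.
            if not risks_if_postponed:
                return "procrastinate"
            if can_delegate:
                return "shift the decision to someone else"
            return "take the most available option and bolster it (insufficient)"
        if sufficient_time:
            return "search carefully (optimal arousal, optimizing possible)"
        return "panic (overarousal, quick satisficing, likely a poor choice)"

    # Abby's path in the example: yes, yes, yes, (moot), (moot), yes.
    print(decision_process(True, True, True, False, False, True))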

 

Optimizing process.  According to Janis and Mann, definite steps can lead a person to an optimal decision.  The process begins if the decision maker has little confidence in the status quo and little desire to pursue the most available course of action.  It continues if the person has high confidence that a better alternative exists.  The process continues further if the individual believes that he or she has enough time to find the best alternative.  All of these factors lead a decision maker to be optimally aroused.  In this psychological state, he or she is most likely to use an optimizing decision method, leading the person to make a good choice.

 

Satisficing process.  If the process does not follow these steps, the decision maker is likely to be either overaroused or underaroused.  In either psychological condition, the person will most likely use a satisficing decision strategy.  This may not matter if either the status quo or the most available option is sufficient.  However, if these two alternatives are not the best courses of action, there can be problems.  The chances are good that the individual will make a poor and potentially harmful decision.

            As we have shown, Janis and Mann have created a method to predict behavior during decision-making situations.  Their model predicts when people will optimize and when they will satisfice.

 

Groupthink

 

            In 1972, Janis labeled a view of group decision making as "groupthink," which he defined as a circumstance in which a group establishes a norm that consensus is the group's highest priority.  This means that agreement takes precedence over all other matters for the group.  Of course, we have seen how consensus is necessary for a group to reach a decision.  However, the desire for consensus should not preclude an open discussion.  Group members should closely examine all possible courses of action.  In a groupthink situation, members apparently do not do this.  Instead, they believe that the most important consideration is that they all stand together.  Janis later used conflict theory to reinterpret the idea of groupthink.

 

Example: Bay of Pigs

 

            On April 17, 1961, a small band of Cuban exiles landed on the southern coast of Cuba, at the Bay of Pigs, with the aim of overthrowing the government of Fidel Castro.  The United States Central Intelligence Agency (CIA) had trained and armed the exiles.  It was soon clear that this had been a quixotic, doomed adventure.  Three days after the landing, the survivors were forced to surrender to overwhelming Cuban forces.  Historians have come to consider the exiles victims of a poorly planned United States government operation.  They now regard the Bay of Pigs as "among the worst fiascoes ever perpetrated by a responsible government" (Janis, 1972, p. 14).

            It is true that the CIA originally proposed and planned the overthrow attempt during the Eisenhower administration.  However, the real responsibility for the Bay of Pigs failure must rest with the Kennedy administration.  This administration included President John Kennedy and aides such as Secretary of State Dean Rusk, Secretary of Defense Robert McNamara, Attorney General Robert Kennedy, Secretary of the Treasury Douglas Dillon, and foreign affairs advisor McGeorge Bundy.  These were the people responsible for the decision to go ahead with the Bay of Pigs operation.  With seeming unanimity, these men approved the ill-fated venture.

 

Example: Cuban Missile Crisis

 

            Soon after the Bay of Pigs fiasco, the Kennedy administration faced another problem.  In October 1962, the United States found evidence that the Soviet Union had agreed to supply Cuba with atomic missile installations.  In response to this evidence, the United States instituted a naval blockade of Cuba.  The United States also announced that it would search any ships that attempted to enter Cuban waters.  For a week, these developments brought the world to the brink of nuclear war.  Eventually, however, the Soviets backed down, and the Cuban Missile Crisis resulted in a general easing of Cold War tensions that lasted for some time afterward.

            It may have been that some of the cooler heads inside the Kremlin were responsible for the Soviet decision to back down.  However, the real responsibility for this tremendous strategic success must, overall, again rest with the Kennedy administration.  Once more, this group included President John Kennedy and aides such as Secretary of State Dean Rusk, Secretary of Defense Robert McNamara, Attorney General Robert Kennedy, Secretary of the Treasury Douglas Dillon, and foreign affairs advisor McGeorge Bundy.

            How is it that the same policy-making group could make two such different decisions?  In the disastrous decision of the Bay of Pigs, the group chose a covert, ill-planned mission.  In the handling of the Cuban Missile Crisis, the group made a series of well-reasoned decisions that proved successful.  Could the group have changed so much in little over a year?  No.  Instead, something else was at work.  We can assume that it was the decision-making method that changed drastically between the two instances, not the group itself.  Janis took this assumption and studied it.

            He analyzed historical documents that revealed the various decision-making procedures used by high-ranking United States government decision-making groups.  Janis looked at successful decisions, such as the planning and implementation of the Marshall Plan to rebuild Europe after World War II.  He also examined government failures, such as the inadequate protection of U.S. naval forces at Pearl Harbor before the Japanese attack, the decision to invade North Korea during the Korean War, and the escalation of the Vietnam War by Lyndon Johnson and his advisors.  In 1972, Janis concluded that differences in the groups' decision-making procedures led to either the successes or the failures.  He coined the term "groupthink" for the circumstance that, he believed, led to many of the government's most costly decision failures.

 

Refined Concept of Groupthink

 

            Janis (1983) proposed a refined conception of groupthink.  To begin, there are six conditions that make the occurrence of groupthink possible.  The first of these factors is high group cohesiveness.  Usually cohesiveness leads to the free expression of ideas; however, in groupthink circumstances, the opposite occurs.  Second, the members have an authoritarian-style leader who tends to argue for "pet" proposals.  Thus, we would not expect groupthink to occur in groups that have a tradition of democratic leadership.  Third, the group is often isolated from the "real world"; that is, the group is not forced to deal with what is happening "out there" beyond the group.

            Fourth, the group does not have a definite procedure, or method, for decision making.  In Chapter 13, we will discuss procedures for decision making that help protect against groupthink.  Fifth, the members of the group come from similar backgrounds and have similar viewpoints.  The sixth condition for groupthink follows from Janis and Mann's arousal theory of decision making.  The group is in a complex decision-making situation that causes a significant amount of arousal in each member, and the members feel that finding an alternative better than the leader's pet proposal is unrealistic.  As discussed earlier under the Questions and Answers model, "defensive avoidance" will occur, and the group will either procrastinate or, more likely, adopt the leader's pet proposal.  The presence of any one of these six conditions will not ensure that a cohesive group will suffer from groupthink.  The more of these conditions that exist, however, the more likely it is that groupthink will occur.
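            Because Janis held that the likelihood of groupthink grows with the number of antecedent conditions present, the relationship lends itself to a simple tally.  Janis proposed no numeric scoring rule, so the condition labels and thresholds in the sketch below are our assumption, meant only to make the "more conditions, more likely" claim concrete.

# Illustrative only: Janis proposed no formal scoring rule.  The tally
# below merely restates the idea that the more antecedent conditions
# are present, the more likely groupthink becomes.

ANTECEDENT_CONDITIONS = (
    "high group cohesiveness",
    "authoritarian-style leader pushing a pet proposal",
    "isolation from outside information",
    "no definite decision-making procedure",
    "homogeneous member backgrounds and viewpoints",
    "high arousal with little hope of beating the leader's proposal",
)

def groupthink_likelihood(conditions_present):
    """Count how many antecedent conditions hold; more means higher risk."""
    count = sum(1 for c in ANTECEDENT_CONDITIONS if c in conditions_present)
    if count <= 1:
        return count, "low likelihood"
    if count <= 3:
        return count, "moderate likelihood"
    return count, "high likelihood"

print(groupthink_likelihood({
    "high group cohesiveness",
    "authoritarian-style leader pushing a pet proposal",
    "no definite decision-making procedure",
}))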

            Eight "symptoms" accompany groupthink.  Two concern a tendency for the group to overestimate itself:

 

1.  The group members have the illusion of invulnerability.  They believe that their decisions cannot possibly result in failure and harm.  For example, during the Bay of Pigs planning sessions, the Kennedy group did not accept the possibility that the administration, rather than the Cuban exiles themselves, would be held responsible for the attack.  The Kennedy administration also did not expect that worldwide condemnation would be directed towards the United States as a result.

 

2.  The group has unquestioned belief in the morality of its position.  Johnson's administration felt that bombing raids on civilian targets and the spraying of napalm and Agent Orange were all acceptable tactics of combat in Vietnam.  This was because the group believed that its cause was just.

 

            Two of the symptoms concern the resulting close-mindedness of the group members:

 

3.  The group members construct rationalizations to discount warning signs of problems ahead.  This apparently occurred constantly in the Lyndon Johnson group.  The group rationalization buttressed the mistaken belief that continual bombing raids would eventually bring the North Vietnamese to their knees.

 

4.  The people in the group stereotype their opponents as evil, powerless, stupid, and the like. Kennedy's group believed that the Cuban army was too weak to defend itself against attack, even attack from a tiny force.  The Kennedy staff also believed that Castro was so unpopular among Cubans that they would flock to join the attacking force.  This was despite the fact that the group saw data that showed that Castro was quite popular.

 

            Four symptoms concern pressures toward uniformity in opinions among members of the group:

 

5.  The group exerts pressure on group members who question any of the group's arguments. Members of Johnson's group, including the president himself, verbally berated members who expressed uneasy feelings about the bombing of North Vietnam.

 

6.  Group members privately decide to keep their misgivings to themselves and keep quiet.  During the Bay of Pigs planning sessions, participant Arthur Schlesinger kept his doubts to himself.  He later publicly criticized himself for keeping quiet.

 

7.  The group has members whom Janis called "mindguards."  These are members who "protect" the group from hearing information that is contrary to the group's arguments.  These members take this responsibility on themselves.  We know that Robert Kennedy and Dean Rusk kept the Kennedy group from hearing information that may have forced it to change the Bay of Pigs decision.

 

8.  The group has the illusion of unanimity.  There may be an inaccurate belief that general group consensus favors the chosen course of action when, in fact, no true consensus exists.  This illusion follows from the "biased" communication that mindguarding, self-censorship, and direct pressure create.  In fact, after the Bay of Pigs fiasco, investigators discovered that members of Kennedy's group had widely differing ideas of what an attack on Cuba would involve.  The members did not know that they had differing opinions, however.  Each participant mistakenly believed that the group had agreed with his own individual ideas.

 

            The presence of groupthink and its accompanying symptoms leads to various outcomes.  Groupthink results in the following:

 

1.  The group limits the number of alternative courses that it considers.  Usually such a group examines only two options.

 

2.  The group fails to seriously discuss its goals and objectives.

 

3.  The group fails to critically examine the favored course of action.  The members do not criticize, even in the face of obvious problems.

 

4.  The members do not reach outside the immediate group for relevant information.

 

5.  The group has a selective bias in reactions to information that does come from outside.  The members pay close attention to facts and opinions that are consistent with their favored course of action and ignore facts and opinions that are inconsistent with their choice.

 

6.  After rejecting a possible course of action, the group never reconsiders the action's strengths and weaknesses.

 

7.  The group fails to consider contingency plans in case of problems with implementation of the course of action the members choose.

Lowering the Possibility of Groupthink

 

            Group members can take several steps to lower the possibility of groupthink.  During the Cuban Missile Crisis, President Kennedy took the following measures, which apparently worked to his advantage:

 

1.  The president assigned the role of "critical evaluator" to each member of his group.  Each critical evaluator was responsible for questioning all facts and assumptions that group members voiced, including the leader's own opinions.  Kennedy also assigned to his brother Robert the special role of "devil's advocate."  In this role, Robert Kennedy took the lead in questioning other group members' claims.

 

2.  The president refused to state which course of action he preferred until late in the decision-making process.

 

3.  He consulted with informed people outside the group.  He also invited them to meetings. The outside people added information and challenged the group's ideas.

 

4.  He divided the group into subgroups.  Each subgroup made preliminary decisions concerning the same issue.  The larger group would then reconvene to compare these preliminary decisions and hammer out the differences among the various options.

 

5.  Kennedy set aside time to rehash earlier decisions.  He wanted a chance to consider any new objections to the decisions that the group members might have.

 

6.  He had the group search for warning signs of problems with the chosen course of action after the administration had begun to implement the plan.  Thus, he could reconsider the course of action even after the group had made the decision to implement it.

 

Groupthink: Phenomenon or Theory?

 

            As we can see, Janis outlined various steps that can protect a group when groupthink may be a problem.  He also provided powerful examples from President Kennedy's group to show when the groupthink process influenced a decision and when it did not.  However, what Janis provided is more a description of a phenomenon than a theory of group decision making.  As a formal theory, the groupthink hypothesis falls far short.

            Longley and Pruitt (1980) pointed out some failings of the groupthink hypothesis.  As they explained, Janis did not provide an analysis of the causal linkages among the proposed input, process, and output variables.  Janis outlined the input variables, or precipitating conditions.  Further, he gave information about the process variables, such as the symptoms of groupthink.  He also identified the resulting decision defects as output variables.  However, he did not show how all these variables relate to one another.  Without the necessary linkages, it has been difficult to make a good experimental test of the hypothesis.  Some scientists have attempted to simulate groupthink in the laboratory, but most studies have been inadequate to the task.

            Nonetheless, some definite progress has been made in clarifying the groupthink hypothesis.  For example, early research showed mixed results for the effect of cohesiveness in relevant experiments.  However, the research review by Mullen, Anthony, Salas, and Driskell (1994) that we discussed in Chapter 3 helped to clear things up.  Unlike earlier reviews, Mullen et al. distinguished between task- and maintenance-based cohesiveness.  The higher a group's maintenance-based cohesiveness, the worse its decision quality tended to be.  It follows that it is maintenance-based, and not task-based, cohesiveness that increases the likelihood of groupthink.  In addition, Mullen et al. found that two of the other conditions Janis had proposed, the presence of authoritarian-style leadership and the absence of methodical decision procedures, also had strong negative effects on decision quality.

            This increased understanding has allowed for better experimental research concerning groupthink.  For example, Turner, Pratkanis, Probasco, and Leve (1992) asked sixty groups of three members each to make decisions about human relations problems under either high or low threat and either high or low cohesiveness conditions.  Under high threat conditions, groups were videotaped and told that the videos of poorly functioning groups would be shown in training sessions on campus and in corporations.  Low threat groups had no analogous experience.  The members of high-cohesiveness groups were given name tags bearing a group name and given five minutes before their decision-making session to explore similarities among themselves.  In contrast, low-cohesiveness group members received no tags and spent five minutes exploring their dissimilarities.  Judges rated the subsequent decisions as significantly poorer in two conditions: high threat combined with high cohesiveness (which approximates groupthink) and low threat combined with low cohesiveness (and presumably no motivation to make a good decision).  Decisions made under the high threat/low cohesiveness and low threat/high cohesiveness conditions were rated significantly better.

            Thus we are slowly coming to a better understanding of how groupthink can occur and damage group decision quality.  Of course, as Janis (1983) reminded his readers, groupthink is only one of several reasons that groups may make unsuccessful decisions.  Groups may strive to gather information from the outside world, only to receive misinformation in the process.  Group members may succumb to the types of individual decision-making errors that we have discussed throughout this chapter.  Further, a group may make a good decision, but the decision may fail anyway because of poor implementation by people outside the group, unpredictable accidents, or just plain bad luck.  Nonetheless, it is plausible that groupthink does lead to poor decisions in many circumstances.  Further, the recommendations that Janis provides for combating groupthink are very valuable.  Any decision-making group should practice his recommendations, whether or not groupthink actually exists.

            We would like to emphasize one of the recommendations from Janis's list.  Every group should have a general procedure for running its meetings that allows the group to make optimal decisions.  Scientists have proposed different procedures to help groups do this.  In the next chapter, we shall describe several of these.  We will also discuss the conditions under which a group may use each.  Further, we shall examine experimental evidence concerning the value of each procedure to a decision-making group.

 

SUMMARY

 

            The study of individual decision making has been dominated by two overall approaches.  Traditionally, decision-making theories have assumed that the ideal decision maker is capable of optimizing; in other words, choosing the best option after a thorough examination of all feasible options and all relevant information.  This best option can be predicted by multiplying each option's probability of occurrence by its "utility" for the decision maker.  However, Simon believed that this approach was unrealistic.  He predicted that people choose the first satisfactory option that comes into their minds.  This is called a satisficing approach.
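            For readers who want the optimizing arithmetic spelled out, the sketch below restates the subjective expected utility rule.  The option names, probabilities, and utilities are hypothetical numbers chosen only to make the calculation concrete; they echo Abby's job choice from earlier in the chapter but are not drawn from any source.

# A minimal sketch of the subjective expected utility (SEU) rule: for
# each option, multiply every consequence's probability by its utility,
# sum the products, and pick the option with the largest total.
# All figures below are hypothetical.

options = {
    # option: list of (probability of consequence, utility of consequence)
    "stay at current job":  [(0.90, 5.0)],               # secure, familiar
    "join the new company": [(0.70, 9.0), (0.30, 2.0)],  # firm thrives / fails
}

def seu(consequences):
    """Subjective expected utility: sum of probability * utility."""
    return sum(p * u for p, u in consequences)

scores = {option: seu(cons) for option, cons in options.items()}
best = max(scores, key=scores.get)
print(scores)  # {'stay at current job': 4.5, 'join the new company': 6.9}
print(best)    # join the new company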

            There is a great deal of evidence that people normally satisfice when making decisions.  For example, Tversky and Kahneman have proposed a number of decision heuristics, or simplified methods for making judgments about objects and events.  The representativeness heuristic is used when people use the resemblance between different objects or events to estimate their relatedness.  The availability heuristic is used when people estimate the likelihood of an event based on how easily it comes to mind.  The anchoring heuristic is used when people use an initial value as a basis for estimating a whole series of values.  Finally, framing effects occur when people's judgments are influenced by the way in which the relevant information is worded.  Decision heuristics usually lead to reasonably accurate judgments, but in some circumstances they can lead to judgmental biases.  Research comparing group and individual susceptibility to these biases has led to inconsistent conclusions.

            Despite this evidence for satisficing, it is most likely true that people can both "optimize" and "satisfice."  Some theorists claim that the style of decision making people follow depends on the amount of stress that they feel.  Stress causes people to become aroused.  Research has discovered that decision makers are at their best under intermediate amounts of arousal.  Too little arousal, and people are not vigilant enough.  Too much arousal, and they panic.

            Janis has proposed that cohesive groups can suffer from a problem he called "groupthink."  Groupthink is a condition that occurs when groups under stress establish the norm that displaying consensus is the group's number one priority.  The hypothesis of groupthink was originally too vague to undergo experimental analysis.  Nevertheless, certain historical events, in which groupthink seems to have occurred, support it.  Further, recent work has begun to clarify the idea.