Reasoning and Decision Making


criticism of Lichtenstein et al. (1978)

The bias here is in the environment (the over-reporting of rare events) rather than a consequence of a flawed estimation strategy. This kind of effect does not directly demonstrate availability-based judgment, because no assessment of ease-of-retrieval has been made. The tendency to over-/under-estimate rare and common events can be explained in other ways. In particular, it can be seen as an instance of the general central tendency of judgment, whereby estimates for extreme items are biased towards the mean of the set. This central tendency is widespread and can be seen as an optimizing strategy - when one is uncertain, guessing the mean of the distribution is sensible - without invoking availability.

Ayton and Fischer (2004) alternative account, based on ecological considerations, for hot hand and gambler's fallacy

1. Many physical processes involve sampling without replacement, which results in a diminishing probability for a given outcome the more times it has occurred. 2. Correspondingly, the gambler's fallacy reflects an over-generalization of this ecologically-sensible principle to other random, mechanical processes - e.g., roulette wheels and coin tosses - about which we have very limited experience. 3. By contrast, many aspects of intentional human performance really do show positive recency: if you practice a new game, your shooting success will increase. So the hot hand fallacy can be seen as an appropriate generalization of this principle to situations which also involve human performance, but where the outcome probabilities are in fact independent.

proposed cognitive process for estimating the engineer/lawyer probability, and why it produces base rate neglect?

1. People assess the extent to which the description of Jack is similar to (or representative of) each of the two categories - lawyers and engineers. 2. To the extent that Jack is more similar to the stereotypical engineer, he is more likely to be judged an engineer. 3. Because this assessment of similarity is independent of the prevalence of lawyers and engineers in the population, the resulting probability judgment is independent of the base rates for these two professions.

Other mechanisms that might contribute towards anchoring/assimilation effects include

1. Consideration of the anchor as a possible value for the estimated quantity activates relevant semantic knowledge; this activated knowledge then shapes or biases our final estimate (Chapman & Johnson, 1999). 2. The anchor value changes our perception of the magnitude of other candidate values (e.g., if we've just been thinking about a 12% probability, 50% seems quite large; if we've been considering 88%, 50% seems quite small; Frederick & Mochon, 2011). 3. Externally-presented anchors may be seen as a "hint" or suggestion, even if they are ostensibly uninformative. These possibilities are not mutually exclusive, and they do not all fit with the idea that anchoring stems from the application of quick-and-easy-but-biasing heuristics.

sources of anchors

1. own recent judgments 2. externally provided, and ostensibly irrelevant

"Availability" Heuristic, when rational when bias?

1. Rational: if an event happens a lot, it will be easy to think of many past instances. 2. Biased: when our experience of past events does not reflect their true frequencies, or when events are easy to recall for some reason other than their frequency of occurrence.

attribute substitution, what is it? what three types?

A phenomenon whereby individuals faced with a complex judgment substitute a simpler one, replacing the evaluation of a complex or difficult quantity with an easier-to-evaluate dimension; 1. Availability Heuristic 2. Representativeness Heuristic 3. Anchor-and-Adjust Heuristic

More direct evidence for the role of representativeness, Kahneman and Tversky (1973), results

Across the 9 subject areas, probability judgments were very highly correlated with representativeness judgments (r = .97) but negatively correlated with base-rate judgments (r = -.65). That is, predictions were based on how representative people perceived Tom W. to be of the various fields, and ignored the prior probability that a randomly-selected student would belong to those fields (base rate neglect).

Tversky and Kahneman (1974)

Anchors can be externally provided, and ostensibly irrelevant; they spun a wheel of fortune that landed on 10 (a low anchor) for one group of participants and on 65 (a high anchor) for another group. Participants were asked whether the percentage of African countries in the United Nations was more or less than the anchor, and then asked for their best estimate of the true value. The median estimate was 25 in the low anchor condition and 45 in the high anchor condition - that is, the participants' judgments were pulled towards the anchor values.

Chapman and Johnson (1999)

Anchors can be externally provided, and ostensibly irrelevant; participants wrote down the last 2 digits of their social security number and treated them as a probability (e.g., "14%"). They were asked whether the probability that a Republican would win the 1996 US Presidential Election was more or less than this value, prior to giving their best estimate of the true probability. The larger the anchor, the larger the best estimate, with a correlation of r = 0.45.

axiom-violation resulting from representativeness heuristic, why?

Base Rate Neglect; Similarity-based judgments are insensitive to prior probabilities, so judgments based on representativeness will be largely independent of base rates

availability heuristic is posited to produce judgments that deviate from the rules of probability theory, what?

Conjunction Fallacy

A different illustration of potential "ecological rationality"

the Gambler's Fallacy and the Hot Hand Fallacy, which concern how people judge the probability of streaks of a random outcome

Croson and Sundali (2005)

Gambler's Fallacy; examined roulette betting patterns from a Nevada casino. They focused on "even money" outcomes and looked at bets as a function of the number of times that an outcome had occurred in a row. As the run-length increased, people were increasingly likely to bet that the next outcome would be the opposite of the streak.

stronger demonstration of the use of the availability heuristic, Tversky and Kahneman (1973)

Participants listened to a list of 39 names. Condition 1: the names comprised 19 famous men and 20 less famous women; Condition 2: 19 famous women and 20 less famous men. After listening to the list, some participants had to write down as many names as they could recall; others were asked whether the list contained more names of men or of women. Results: participants retrieved more of the famous names (i.e., famous names were more available) and judged the gender category that contained more famous names to be more frequent. This suggests people made their proportion estimates by assessing the ease with which examples of each category come to mind: when one category was easier to retrieve (via the fame manipulation) it was judged more frequent, even when it was actually experienced less often.

Teigen (1983)

Probability theory does not "come naturally" to people - without training, it is easy to make mistakes; gave participants a murder story about a battered-to-death teacher. One scenario described five suspects (e.g., the middle-aged female cleaner who found the body). Participants indicated the probability of guilt for each suspect, from 0-100%. The mean total probability judgment was 159.1%, in defiance of formal probability, which requires the guilt probabilities for mutually exclusive suspects to sum to no more than 100%.
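
A minimal statement of the constraint being violated, on the assumption that there is a single murderer so that the suspects' guilt is mutually exclusive (with equality if the list of suspects is exhaustive):

```latex
\sum_{i=1}^{5} P(\text{suspect } i \text{ is guilty}) \le 1
```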

Hahn and Warren (2009) alternative account, based on ecological considerations, for hot hand and gambler's fallacy

With an infinitely long sequence of coin flips, all sub-sequences of a given length occur with equal probability. However, humans do not experience or remember infinitely-long sequences, and within shorter sequences the probabilities of encountering HHHT and HHHH are not equal.

Experiment demonstrating conjunction fallacy, Tversky and Kahneman (1983)

_ _ _ _ i n g vs. _ _ _ _ _ n _; participants estimated how many seven-letter words of each form would appear in a sample of text and gave higher estimates for the "_ing" words, even though every word ending in "ing" also has "n" as its penultimate letter, so the "_ing" words are a subset of the "_n_" words and cannot be more numerous. The fallacy is attributed to participants basing their judgments on the mental availability of relevant instances: it is easy to think of "ing" words (for example, by thinking of words that rhyme), but we are less accustomed to organizing/retrieving words based on their penultimate letter, so "_n_" words are harder to retrieve and thus seem rarer. If/when participants apply a more systematic mental search strategy, we would expect the conjunction fallacy to disappear.

Gilovich et al., 1985, explanation of hot hand fallacy

a run of one outcome doesn't seem representative of randomness, leading people to conclude that the underlying process is not random

gambler's fallacy.

the belief that a run of one outcome increases the probability of the other outcome (when the events are actually independent); often attributed to the representativeness heuristic: people expect a "local" sequence to be representative of the underlying process

why we should consider the ecological context in which judgments are made?

apparent biases may be rational responses given the informational and cognitive constraints

anchoring

the assimilation of a numeric estimate towards another value, the anchor

people sometimes simplify judgments by substituting an easier-to-evaluate entity for the target dimension, two examples

availability and representativeness heuristics

ecology and adaptation argument for overcoming probability theory violations?

We are better at probability tasks when the information is presented in a way that matches our supposed "evolved" cognitive capacities for handling this kind of information. In particular, it has been argued that humans evolved to process frequencies (counts) obtained by sampling the environment, rather than normalized probabilities.

evidence for cognitive strategies, what kinds?

biases and axiom violations

alternative explanations for why the natural frequency format makes the task easier

the natural frequency format clarifies the set relations between the various event categories, and any manipulation which achieves this will have the same beneficial effect

inverse fallacy, in Eddy (1982)

confuses the probability of a positive test result given the presence of cancer, p(positive|cancer), with the probability of cancer given the positive test result, p(cancer|positive); probabilities are not the same: the chances of cancer given a positive test depend on the base rate (prior probability) of cancer in the population; A positive test is more likely to indicate cancer when cancer is widespread than when it is very rare
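
The relationship between the two conditional probabilities (standard Bayes' rule), which makes the role of the base rate explicit:

```latex
P(\text{cancer}\mid\text{positive})
  = \frac{P(\text{positive}\mid\text{cancer})\,P(\text{cancer})}
         {P(\text{positive}\mid\text{cancer})\,P(\text{cancer})
          + P(\text{positive}\mid\text{no cancer})\,P(\text{no cancer})}
```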

what illuminates the judgment process?

deviations from normative prescriptions; human probability judgments do not always follow the laws of probability

gambler's fallacy, ecological explanation?

ecological experience with different types of generating process: intentional human performance vs. mechanical processes

alternative framework to the "two systems"/"heuristics and biases" approaches

ecology and adaptation, emphasizes the role of ecological conditions and informational constraints on people's judgments and decisions

Availability Heuristic

estimating the likelihood of events based on their availability in memory; if instances come readily to mind (perhaps because of their vividness), we presume such events are common

Lichtenstein et al. (1978)

evidence for the availability heuristic; people commonly overestimate the frequency or probability of rare events and underestimate common ones; participants estimated the number of US deaths per year due to 40 causes ranging from very rare to very common, and systematically over-estimated the rare causes and under-estimated the common ones. This is attributed to availability: rare events are often given disproportionate publicity and are correspondingly more mentally-available than their environmental frequency would merit.

In the "two systems"/"heuristics and biases" view, the problems caused by using availability or representativeness as the basis for probability judgments can be overcome by what? Evidence?

evoking "System 2" - i.e., by employing a more effortful processing strategy; alerting people to possible bias and/or telling them to put more effort into their judgment can help - e.g., people can discount a potentially-misleading but readily-accessible cue such as stimulus familiarity (Oppenheimer, 2003).

Gilovich et al. (1985), criticism

hard to establish that the outcomes of each basketball shot really are independent events

recurring theme

While humans are capable of sophisticated reasoning and abstract thought, they also show frequent and systematic departures from the dictates of formal logic.

Ayton and Fischer (2004)

the hot hand fallacy has been found in situations where the success of consecutive attempts really cannot be any guide to future performance; they had people play a roulette-style game and found that confidence in predictions for the next outcome was greater after a run of successful predictions - even though the probability of being right next time must be independent of past success because the roulette spins are random

We can also consider probability judgments in their ecological context, an example?

humans evolved to process frequencies, not normalized probabilities

contraindications to "anchor-and-adjust" heuristic

in the "wheel of fortune" task described above, warning people about anchoring effects and/or giving them an incentive to be accurate often has little effect on the extent to which people anchor on the provided value (e.g., Epley & Gilovich, 2005), which doesn't fit with the idea that the anchoring effect reflects a "lazy" or "intuitive" judgment system that can be over-ridden by effortful deliberation

Representativeness Heuristic

judging the likelihood of things in terms of how well they seem to represent, or match, particular prototypes; may lead us to ignore other relevant information

Gilovich et al. (1985)

opposite of Gambler's fallacy, Hot Hand Fallacy; basketball players' shooting accuracy was independent of their recent performance: the probability of scoring was the same after a run of "baskets" as after a run of "misses". However, basketball fans believed that a player's next shot was more likely to score after a run of successful shots than after a run of misses

Matthews and Stewart (2009)

own recent judgments as anchors; had people estimate the prices of shoes from Top Shop; judgments on trial n were positively correlated with judgments on trial n-1 for 26 out of 28 participants.
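
A minimal sketch of that kind of lag-1 analysis; the judgment values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical price judgments from one participant across successive trials
judgments = np.array([24.0, 30.0, 28.0, 35.0, 33.0, 40.0, 38.0, 29.0, 31.0, 36.0])

# Correlate the judgment on trial n with the judgment on trial n-1
r = np.corrcoef(judgments[1:], judgments[:-1])[0, 1]
print(f"lag-1 correlation: r = {r:.2f}")  # positive r = assimilation towards one's own previous judgment
```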

Heuristics and Biases framework

people are assumed to base their intuitive probability and frequency judgments on simple, experience-based strategies ("heuristics") which are right (or good enough) most of the time but sometimes lead to biased or illogical responses

One observation that is often taken as evidence for the availability heuristic

people commonly overestimate the frequency or probability of rare events and underestimate common ones; Lichtenstein et al. (1978)

probability (and other) judgments are sometimes "biased", why?

people use simplifying strategies that reduce effort but are prone to error

More direct evidence for the role of representativeness, Kahneman and Tversky (1973), conditions

personality sketch of "Tom W."; the prediction group was told that the sketch was prepared by a psychologist during Tom's final year in high school, and that Tom is now a graduate student. They were asked to rank the 9 academic subjects by the probability that Tom W. is specializing in that topic. The base-rate group was not shown the Tom W. sketch but "consider[ed] all first year graduate students in the US today" and indicated the percentage of students in each of the 9 subject areas - that is, they estimated the base rates for each subject area. The representativeness group ranked the 9 subject areas by the degree to which Tom W. "resembles a typical graduate student" in that subject area.

support for Ayton and Fischer (2004) alternative account, based on ecological considerations

presented sequences of outcomes with varying alternation rates; participants had to judge which of two processes generated each sequence (e.g., a series of basketball shots or a sequence of coin tosses). As the streak length increased, participants were more likely to attribute the sequence to intentional human performance like basketball than to a random mechanical process like coin-flipping.

Gigerenzer and colleagues

argue that probabilities only make sense when conceived as long-run frequencies; it does not make sense to talk about the probability of a one-off event (e.g., that a given person has a disease). Humans evolved to keep track of event frequencies, estimated over time by "natural sampling" (i.e., encountering different types of events and remembering the number of times they occur).

Robinson and Hastie (1985)

probability estimation error can be overcome by training; had people read short murder-mystery stories and estimate the probability that each of 5 suspects was guilty. Despite being told that the murderer was definitely one of these five people, and that he or she had acted completely alone, participants' total probability judgments averaged approximately 210%. However, a separate group received a small amount of training in probability and avoided this error.

criticism of the representativeness heuristic in explaining biases

it is problematic to use the same mechanism to "explain" two completely contradictory findings, i.e., the hot hand and gambler's fallacies

Gigerenzer and colleagues' argument: implications for the inverse fallacy in Eddy (1982)

if the diagnosis problem is re-expressed in terms of natural frequencies (numbers of events of each type) rather than normalized probabilities, people should find it much easier; the answer can easily be "seen" to be 8 out of 107 ≈ 7.5%. They found that only 8% of physicians answered correctly (gave a judgment within 5% of the true value) with the original wording of the task, but that this increased to 46% with the natural frequency format.
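
A minimal sketch of the two equivalent calculations. The parameter values are assumptions chosen to be consistent with the "8 out of 107" figure quoted above (roughly 1% prevalence, 80% sensitivity, 10% false-positive rate), not figures taken from this text:

```python
# Assumed illustrative parameters (consistent with the "8 out of 107" example above)
prevalence = 0.01        # p(cancer)
sensitivity = 0.80       # p(positive | cancer)
false_positive = 0.10    # p(positive | no cancer)

# Normalized-probability (Bayesian) version
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(f"Bayes: p(cancer | positive) = {p_cancer_given_positive:.3f}")  # ~0.075

# Natural-frequency version: imagine 1,000 women screened
n = 1000
with_cancer = prevalence * n                           # 10 women have cancer
true_positives = sensitivity * with_cancer             # 8 of them test positive
false_positives = false_positive * (n - with_cancer)   # 99 without cancer also test positive
print(f"Natural frequencies: {true_positives:.0f} out of "
      f"{true_positives + false_positives:.0f} positives have cancer")
```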

Hahn and Warren (2009) demonstration of their alternative account

for shorter sequences, the probabilities of encountering HHHT and HHHH are not equal; they simulated 10,000 sequences of 10 coin flips. The pattern HHHH appeared in only about 25% of the sequences, whereas HHHT occurred in about 35% of the simulated samples. In other words, if we had 10,000 people each of whom had experienced 10 flips of a fair coin, it would be perfectly reasonable for more of them to expect a run of HHH to end with a T than with another H.
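
A minimal sketch of that kind of simulation (the random seed and variable names are my own choices, not from the original study):

```python
import random

random.seed(1)  # fixed seed just so this illustrative run is reproducible

n_sequences = 10_000
seq_length = 10
contains_hhhh = contains_hhht = 0

for _ in range(n_sequences):
    flips = "".join(random.choice("HT") for _ in range(seq_length))
    if "HHHH" in flips:
        contains_hhhh += 1
    if "HHHT" in flips:
        contains_hhht += 1

# Expected pattern: roughly 25% of sequences contain HHHH, roughly 35% contain HHHT
print(f"HHHH appeared in {contains_hhhh / n_sequences:.1%} of sequences")
print(f"HHHT appeared in {contains_hhht / n_sequences:.1%} of sequences")
```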

implication of Hahn and Warren (2009) demonstration

supposed "fallacies" of human judgment and decision-making are often perfectly rational given the finite and imperfect information afforded by the environment and our limited mental capacities

evolutionary ideas about inverse fallacy

the task is difficult in the original version because the use of normalized probabilities (which necessitate the explicit incorporation of base rates/priors) deviates from how we "naturally" evaluate chance

Kahneman and Tversky (1973) demonstration of base-rate neglect

told participants that a panel of psychologists had interviewed a number of engineers and lawyers and produced character sketches of each person. They were told that 5 such descriptions had been randomly selected and that they should rate, from 0-100, the likelihood that each sketch described one of the engineers (rather than one of the lawyers); they were also told the base rates of engineers and lawyers in the sample. The personality description might provide some information about Jack's likely occupation, but this should be combined with information about the number of engineers and lawyers in the population from which his description was randomly drawn. However, people ignored these base probabilities.

"anchor-and-adjust" heuristic elaborate

we use the anchor as an initial estimate of the target value and adjust from that starting point in the right direction; because the adjustment is effortful, we often adjust insufficiently and so our judgment is biased towards the anchor value

representativeness heuristic, elaborated

when estimating a probability - for example, how likely it is that a person belongs to a particular category or the probability that an observed sample was drawn from a particular population - people assess the similarity between the outcome and the category (or between the sample and the population); involves "an assessment of the degree of correspondence between a sample and a population, an instance and a category, an act and an actor or, more generally, between an outcome and a model."

Anchor-and-Adjust heuristic

when having to make a judgment or estimate along a scale (e.g. having to come up with a price or probability), we often unknowingly use a reference point that has been suggested to us as a mental anchor, and then adjust our estimate from there.

Conjunction Fallacy

when people think that two events are more likely to occur together than either individual event
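
Formally, a conjunction can never be more probable than either of its constituent events:

```latex
P(A \cap B) \le \min\bigl(P(A),\, P(B)\bigr)
```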

