Chapter 13: Choice


The Rich Uncle Joe Experiment

Before we continue, we would like you to answer a few questions that we will revisit later in this chapter. It will not take you long to answer these questions. Answering them will give you greater insight into your own choices and the variables that affect them. If you are ready, grab a pen and read on.

Imagine that a trusted member of your family, your rich Uncle Joe, says he is going to give every member of your immediate family $1,000 in cash in 5 years. Until that time, the money will be in a safety deposit box, where it will earn no interest. At the end of 5 years, Uncle Joe's lawyer will give you the cash. Next imagine that a friend of the family, Camille, says that she would like to purchase one of Uncle Joe's $1,000 gifts. Camille will pay in cash right now and, in 5 years, Uncle Joe's lawyer will give her the $1,000 gift that she purchases. Because everyone in the family is at least a little interested in selling their gift to Camille, she suggests everyone privately write down their selling price. Camille will buy the gift from the family member who writes down the lowest price. Please write your answer here.

"I will sell Uncle Joe's gift ($1,000 in 5 years) if you give me $_______ in cash right now."

Now erase that scenario from your mind. This time, imagine that your rich Uncle Joe is going to put your $1,000 gift into his safety deposit box for just 1 month. After that, Uncle Joe will give you the cash. Now what is the smallest amount of cash that you would sell this gift for?

"I will sell Uncle Joe's gift ($1,000 in 1 month) if you give me $_______ in cash right now."

If you are paying attention and answering honestly, the amount you just wrote down will be more than the amount you sold the gift for when it was delayed by 5 years. We will do this two more times and then be done. This time, imagine that Uncle Joe is going to put the money into his safety deposit box for 1 year. Now what is the smallest amount of cash that you would sell the gift for?

"I will sell Uncle Joe's gift ($1,000 in 1 year) if you give me $_______ in cash right now."

And finally, imagine that Uncle Joe's gift will sit in the safety deposit box for 25 years. Uncle Joe's law firm has instructions to give you the money in 25 years, should Uncle Joe die before the time elapses. Now what is the smallest amount of cash that you would sell the gift for?

"I will sell Uncle Joe's gift ($1,000 in 25 years) if you give me $_______ in cash right now."

Choice

Decision-making (i.e., making choices) is one of the most significant and complicated things we do every day. Shall I get out of bed or snooze the alarm? How much shampoo should I use: a little or a lot? Should I walk, take the bus, or drive? Shall I stick with my healthy eating plan or splurge on junk food with friends? Do I scroll further through my social-media feed, or do I study instead? Should I study in the library or the coffee shop, where I know I will be distracted by music and nearby conversations? Do I succumb to peer pressure and have another drink/smoke?

Our verbal answers to these questions - "Yes, I'll get out of bed when my alarm goes off" - may differ from our behavioral answers: what we say we will do and what we actually choose to do (pressing the snooze button eight times before getting out of bed) can be very different. The choices we actually make, by behaving one way or the other, are much more important than what we say we will do. Our New Year's resolutions will not improve our lives; only by repeatedly choosing to adhere to them can we improve our health, our well-being, and the health of the environment.

What does behavior analysis have to say about choice? Can behavioral scientists accurately predict the choices people make? Have they identified functional variables that can be used to positively influence these decisions? Spoiler alert: We can predict some choices, but we are a long way from being able to predict all of them. Once again, this is where readers of this book are important. If you choose a career in the behavioral sciences, you could make discoveries that help to improve the accuracy of our predictions and, importantly, improve the efficacy of interventions designed to positively influence human decision-making. So, let's get started.

Predicting Preference Reversals

Figure 13.12 illustrates a unique prediction of hyperbolic discounting - that there are predictable times when, from one moment to the next, we will change our minds about what is important in life. In the graph from the preceding section (Figure 13.11), when dessert was immediately available, our decision-maker preferred cake (SSR) over the delayed benefits of dieting (LLR). But in Figure 13.12, neither reward is immediately available at T2, where the stick-figure decision-maker contemplates its options. From this location in time, both rewards are hyperbolically discounted in value and, importantly, these subjective values have changed places. Now the subjective value of the LLR (red dot) is greater than the discounted value of the SSR (green dot). At T2 the decision-maker can clearly see the benefits of dieting and resolves to stick with it. Unfortunately, when an immediately available dessert is encountered again, moving the decision-maker from T2 to T1 (in Figure 13.11), a "change of mind" will occur. The subjective values reverse positions again, dessert is consumed, and self-loathing ensues. Our stick-figure decision-maker notes its lack of "willpower" and resolves to buy a self-help book (How to Stick to Your Decisions).

Preference reversals have been extensively studied by behavioral scientists interested in human and nonhuman behavior; they predictably occur when the decision-maker is moved, in time, from T1 to T2 (or vice versa; Ainslie & Herrnstein, 1981; Green et al., 1994; Kirby & Herrnstein, 1995). One reason for the interest in preference reversals is that "changing your mind" from a self-control choice to an impulsive choice is maladaptive. Another reason is that it seems so irrational, and yet we all do it. For example, most of us set the alarm with good intentions to get up, but when the alarm goes off we snooze it at least once. This is a preference reversal.
The night before (at T2 in Figure 13.12), the subjective value of getting up early (the red dot at T2) was higher (more valuable) than the discounted value of a little more sleep tomorrow morning (the green dot at T2). However, while sleeping, the decision-maker moves from T2 to T1. When the alarm goes off at T1, extra sleep is an immediate reinforcer and it is more valuable than the discounted value of the delayed benefits of getting up on time. A very similar preference reversal occurs when individuals with a substance-use disorder commit to outpatient therapy with the best of intentions to quit (Bickel et al., 1998). These choices are often made at T2, in the therapist's office, when drugs are not immediately available. Here the subjective value of the delayed benefits of drug abstinence (the red dot at T2) is greater than the discounted value of drugs (the green dot at T2). A few days later, when drugs are available immediately (T1), the individual finds that they cannot resist the immediate temptation of drug use. The subjective values of these choice outcomes have reversed, as has the decision to abstain from drugs.
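The crossover described above can be illustrated numerically. A standard form of the hyperbolic discounting function is Mazur's (1987) equation, V = A / (1 + kD), where A is the reward amount, D is its delay, and k is a discounting-rate parameter. The sketch below uses hypothetical amounts, delays, and a hypothetical k (none of these numbers appear in the text); it simply shows that the SSR dominates at T1 while the LLR dominates at T2.

```python
def discounted_value(amount, delay, k=0.5):
    """Mazur's hyperbolic discounting equation: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

# Hypothetical rewards: the SSR is 10 units, available after some wait;
# the LLR is 30 units, available 10 time units after the SSR.
def values_at(time_to_ssr, k=0.5):
    ssr = discounted_value(10, time_to_ssr, k)
    llr = discounted_value(30, time_to_ssr + 10, k)
    return ssr, llr

# At T1, the SSR is immediately available: impulsive preference.
ssr_t1, llr_t1 = values_at(0)    # SSR = 10.0, LLR = 5.0
assert ssr_t1 > llr_t1

# At T2, both rewards are delayed: self-control preference.
ssr_t2, llr_t2 = values_at(20)   # SSR ~ 0.91, LLR ~ 1.88
assert llr_t2 > ssr_t2
```

Because the hyperbola falls steeply at short delays and shallowly at long ones, whichever reward is nearer dominates at T1, while the larger reward dominates when both are far away at T2.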

Herrnstein's Matching Equation

Herrnstein (1961) developed a simple equation that predicted how pigeons chose to allocate their behavior between pecking the left key (BL) and the right key (BR). He hypothesized that these choices would be influenced by the reinforcers obtained on the left (RL) and the right (RR) keys:

BL / (BL + BR) = RL / (RL + RR)

Before we plug any numbers into Herrnstein's equation, note that behavior (pecking BL or BR) appears on the left side of the equation and reinforcers (RL and RR) appear on the right. The equals sign in the middle is important. It indicates that the proportion of responses allocated to BL should match (or be equal to) the proportion of reinforcers obtained on the left (RL). If you are more familiar with percentages than proportions, then Herrnstein's equation says the percentage of responses allocated to BL should match the percentage of reinforcers obtained on the left. To convert a proportion to a percentage, we simply multiply the proportion by 100.

Let's plug some very simple numbers into the equation. In Herrnstein's (1961) experiment, the VI 90-second schedule arranged reinforcers to occur at a rate of 40 per hour (i.e., 1 reinforcer, on average, every minute and a half), whereas extinction arranged none. Thus,

RL = 40, RR = 0

Simple enough. Next, we plug these rates of reinforcement into the right side of the matching equation:

RL / (RL + RR) = 40 per hour / (40 per hour + 0 per hour) = 40/40 = 1

Multiplying this proportion by 100 tells us that 100% of the reinforcers are obtained on the left key. If the right side of the equation is 100%, then Herrnstein's equation predicts that the percentage of responses allocated to the left key should match this number; that is, 100% of the responses should be made on BL (exclusive choice). Figure 13.5 shows this prediction graphically.
On the x-axis is the percentage of reinforcers obtained on the left, and on the y-axis is the percentage of left-key pecks. The lone data point on the graph shows that when 100% of the reinforcers were arranged on the left, the pigeons should choose to allocate 100% of their pecks to that key. The pigeons conformed to this prediction, allocating over 99% of their behavior to BL.
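For readers who like to verify the arithmetic, the prediction can be computed in a few lines of code (a sketch; the function name is ours, and the reinforcer rates are those from Herrnstein's 1961 experiment described above):

```python
def matching_prediction(r_left, r_right):
    """Herrnstein's matching equation: the predicted percentage of
    responses on the left key equals the percentage of reinforcers
    obtained on the left key, 100 * RL / (RL + RR)."""
    return 100 * r_left / (r_left + r_right)

# VI 90-s schedule on the left (40 reinforcers/hour), extinction on
# the right (0 reinforcers/hour) -> exclusive choice of the left key.
print(matching_prediction(40, 0))  # 100.0
```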

RESEARCH SUPPORT FOR HERRNSTEIN'S EQUATION

Herrnstein's matching equation has been studied in rats, pigeons, chickens, coyotes, cows, and monkeys; it does a good job of predicting how behavior will be allocated when the outcomes of choice are uncertain (Dallery & Soto, 2013; Davison & McCarthy, 1988). If you take more advanced coursework in behavior analysis, you will learn that Herrnstein's matching equation does not always make perfect predictions. For example, cows tend to deviate from predictions of the matching equation more than the pigeons in Figure 13.8 (Matthews & Temple, 1979). In these more advanced classes, you will consider exceptions to the matching equation and will study theories proposed to account for these exceptions.

What about humans? Can the matching equation be used to predict actual choices made by humans? Early research supported the predictions of the matching equation: humans chose to allocate more of their behavior toward the richer of two VI schedules of reinforcement (e.g., Bradshaw et al., 1976, 1979). However, a series of important experiments conducted by Horne and Lowe (1993) revealed several problems with these early studies. For example, subtle cues informed participants about the frequency of reinforcers. When these cues were removed, very few participants behaved in accord with the matching equation (Horne & Lowe, 1993).

More recent research suggests humans conform to the predictions of the matching equation, but only when participants are paying attention. Much of the early research on human choice and the matching equation arranged button-pressing tasks resembling those used with rats and pigeons. While these tasks hold the attention of rats and pigeons, humans find them boring and mostly press the buttons without paying attention to the consequences of their behavior. Madden and Perone (1999) explored the role of participant attention by changing the task from one in which choices could be made without paying much attention to one that was more engaging.
As expected, when human attention was elsewhere, choice did not closely conform to the predictions of the matching equation. However, as shown in Figure 13.9, the matching equation accurately predicted human choice when participants paid attention to the engaging task. Other experiments have reported comparable findings with tasks that effectively engage human attention (Baum, 1975; Buskist & Miller, 1981; Critchfield et al., 2003; Magoon & Critchfield, 2008; Schroeder & Holland, 1969). We have prepared a reading quiz to give you practice calculating and graphing the predictions of Herrnstein's matching equation. For readers who wonder why all of these graphs and calculations matter, we invite you to read Extra Box 1. The matching equation makes strong, empirically supported predictions about what controls human decision-making and, as such, has important things to say about how to solve existential threats that have at their core disordered human choice.

COMMITMENT STRATEGIES

If we are more likely to choose the larger-later reward at T2 (Figure 13.12), when neither reward is immediately available, then an effective strategy for improving self-control is to make choices in advance, before we are tempted by an immediate reward. For example, making a healthy lunch right after finishing breakfast is a commitment strategy for eating a healthy lunch. After eating breakfast, we are full and, therefore, are not tempted by a beef quesarito. However, we know we will be tempted at lunch time, so it would be wise to use a commitment strategy now. By packing a healthy lunch, we commit to our diet, and the larger-later rewards it entails. Of course, we will be tempted to change our mind at T1, when lunch time approaches, but having the packed lunch can help in resisting the temptation to head to Taco Bell. Rachlin and Green (1972) first studied commitment strategies in pigeons. In their experiment, pigeons chose between pecking the orange and red keys at T2 as shown in Figure 13.14. At this point in time, no food reinforcers were immediately available, so hyperbolic discounting suggests the larger-later reward will be preferred. If the red key was pecked, a delay occurred until T1, at which time the pigeon chose between one unit of food now (smaller-sooner reward of pecking the green key) and three units of food after a delay (larger-later reward). Given these options, if the pigeons found themselves at T1, they nearly always made the impulsive choice. But note what happens when the pigeons chose the orange key at T2. Instead of waiting for another choice at T1, they were given no choice at all. By choosing the orange key, the pigeon committed itself to making the self-control choice at T1. Consistent with hyperbolic discounting of delayed reinforcers, Rachlin and Green's (1972) pigeons frequently committed themselves to this course of self-control. 
The Nobel Prize-winning behavioral economist Richard Thaler used a similar commitment strategy to help employees save money for retirement (Thaler & Benartzi, 2004). New employees are reluctant to sign up for a retirement savings program, in part, because money put into savings is money not used to purchase smaller-sooner rewards. Thaler's insight was to ask employees to commit to saving money at T2, when the money to be saved is not immediately available. This was accomplished in the Save More Tomorrow program by asking employees to commit some of their next pay raise, which would not be experienced for many months, to retirement savings. At T2, with no immediate temptations present, the number of employees who signed up for retirement savings more than tripled. Although employees were free to reverse their decision at T1, when the pay raise was provided, very few took the time to do so. The program has been widely adopted in business and is estimated to have generated at least $7.4 billion in new retirement savings (Benartzi & Thaler, 2013). This is a wonderful example of how behavioral science can positively influence human behavior.

WHAT SUBSTITUTES FOR DRUG REINFORCERS?

If you grew up in the US in the last 40 years, there is a good chance you were taught that drugs are the ultimate reinforcers; they are so powerful that they have no substitutes. If you start using drugs, you were told, no other reinforcer will be able to compete and your life will be ruined. Is this true? No, not really. In his 2015 TEDMED talk, Columbia University neuroscientist Dr. Carl Hart discusses the research that started the myth of drugs as the ultimate reinforcers. In these studies, rats were placed in an operant chamber where they could press a lever to self-administer a drug reinforcer, such as cocaine. Most of the rats pressed the lever and consumed high doses of drugs on a daily basis. When this finding was translated to humans, policy-makers concluded that drugs were the ultimate reinforcers; that a "war on drugs" was necessary, as were mandatory minimum jail sentences for those found in possession of drugs. What policy-makers didn't realize was that these experiments were conducted in impoverished environments - the researchers made only one reinforcer available and it was a drug reinforcer. Meanwhile, other researchers were collecting data that challenged the policy-makers' conclusions. Researchers like Dr. Bruce Alexander wondered if drugs would lose their "ultimate reinforcer" status if the environment was not impoverished; that is, if other reinforcers were available that might compete with (substitute for) drugs. To find out, Dr. Alexander and his colleagues (Hadaway et al., 1979) raised one group of rats in an impoverished environment (living alone in a cage with nothing to do); these rats used a lot of morphine when given the opportunity. The other group of rats was raised in an environment in which there were opportunities to engage in behaviors that produced natural reinforcers - they could play with each other, they could climb on objects, and they could build nests.
These animals raised in the "rat park" consumed far less morphine than their isolated counterparts. Was this because of alternative reinforcers, or was it because the rats living alone were stressed out? To find out, Dr. Marilyn Carroll housed monkeys in cages and periodically gave them the opportunity to press a lever to self-administer a drug (phencyclidine, or "angel dust"). In one phase of the experiment, only one lever was inserted into the cage - take it or leave it. In another condition, two levers were inserted and the monkeys could choose between phencyclidine and a few sips of a sweet drink. The results were clear - monkeys took drugs less often when given a choice between drugs and soda pop (Carroll, 1985; Carroll et al., 1989). Yes, soda pop substituted for angel dust. The finding that substitute reinforcers can dramatically decrease drug use has been replicated many times, with many drugs (Comer et al., 1994), with many different substitutes (Cosgrove et al., 2002; Venniro et al., 2018), and in other species (Anderson et al., 2002; Carroll et al., 1991), including humans (Hart et al., 2000, 2005). Importantly, this finding is consistent with the predictions of the matching law (Anderson et al., 2002). If the rate of reinforcement for non-drug-taking activities is very low (e.g., RNonDrug = 1 reinforcer per week), and the rate of reinforcement for taking drugs is twice as high (RDrug = 2 reinforcers per week), then the matching law predicts that, all else being equal, 67% of the individual's time should be spent in drug-related activities (BDrug):

BDrug / (BDrug + BNonDrug) = RDrug / (RDrug + RNonDrug) = 2 / (2 + 1) = 0.67, or 67%

In Dr. Carl Hart's TEDMED talk, he discusses the implications of this behavioral research for public policy. Drug use is not a moral failing, he argues; it is the result of a lack of substitute reinforcers. To prevent drug use, Dr. Hart suggests, we must increase the value of RNonDrug.
That is, if an individual can obtain contingent reinforcers for engaging in non-drug-taking activities, that individual will be much less likely to develop a substance-use disorder. Perhaps the most effective of these non-drug-taking activities is work. Being gainfully employed substantially reduces one's risk of substance abuse. When economic prospects are dim, as they are in the areas of the United States where the opioid crisis has killed thousands, the risk of problem drug use is increased. Showing compassion by increasing RNonDrug isn't a political stance; it is a scientific stance.
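The matching-law arithmetic in this box, and Dr. Hart's prescription for changing it, can be sketched in a few lines (the rates of 2 and 1 reinforcers per week are the hypothetical ones used above; the enriched rate of 8 per week is our own illustrative number):

```python
def pct_drug_behavior(r_drug, r_nondrug):
    """Matching-law prediction: % of time allocated to drug-related
    activities, given a reinforcement rate for each option."""
    return 100 * r_drug / (r_drug + r_nondrug)

# Rates from the text: 2 drug reinforcers/week vs. 1 non-drug/week.
print(round(pct_drug_behavior(2, 1)))  # 67

# Enriching the environment: raising R_NonDrug to a hypothetical
# 8 reinforcers/week cuts the predicted drug allocation sharply.
print(round(pct_drug_behavior(2, 8)))  # 20
```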

Four Variables Affecting Choice

Imagine there are two identical buttons in front of you. You can choose to press either one and you are free to switch between them any time you like. Furthermore, if you don't want to press either button, preferring instead to stand on one leg while sipping tea, you can do that too. Before you press the buttons, we will assume you have no preference between them - one is as good as the other. However, if the consequences of pressing the buttons are different, a preference will develop. The next four sections describe variables that will strongly influence choice. Indeed, when these variables alone are at work, they tend to produce exclusive choice for one alternative over the other:

1. Reinforcement vs. no consequence
2. Reinforcer size/quality
3. Effort
4. Reinforcer delay

Reinforcement vs. No Consequence

In Figure 13.1, when you press the right button (BR) nothing happens, ever. When you press the left button (BL), your press is immediately accompanied by a "beep" and a display on a computer screen that says "25 cents have been added to your earnings," which we will assume functions as a reinforcer. You press the left button 24 more times and more money is added to your account each time. Score! So, which of these buttons would you choose to press going forward? Obviously, you will exclusively press the button that produced the reinforcer (BL). The reason is clear - pressing the left button is reinforced and pressing the other one is not. Every species of nonhuman animal tested so far prefers reinforcement over non-reinforcement. For example, in an experiment conducted by Rybarczyk et al. (2001), cows preferred to press the left lever (BL) because it was followed by a food reinforcer. They chose not to press the right lever (BR) after learning that it never provided a reinforcer. In sum, individuals choose reinforcement over no consequence.

Substitutes in Herrnstein's Experiments

In Herrnstein's experiments and the hundreds of choice experiments that have replicated his findings, the animals and humans chose between two identical reinforcers. For rats, the choice was between food reinforcers earned by pressing the BL lever and the same food reinforcers available for pressing the BR lever; similarly, for humans it was a choice between money and money. However, most of the choices we make in our daily lives are between different reinforcers. For example, at snack time we might choose between having a slice of chocolate cake or chips and salsa. Can the matching law be used to predict and influence these choices as well? The answer is, "it depends"; it depends on whether the two reinforcers substitute for each other. Before we dive into the "it depends" answer, let's explain what we mean by substitute. When we say two reinforcers substitute for one another, it helps to think of ingredients in a recipe. If we are out of coriander, we might Google "What substitutes for coriander." The answer will be a spice that has a similar taste (equal parts cumin and oregano). If we cannot easily obtain one reinforcer (coriander), the reinforcer that will take its place (a cumin/oregano mix) is a substitute. The more technical definition of a substitute reinforcer is a reinforcer that is increasingly consumed when access to another reinforcer is constrained. For example, if the price of coffee increases substantially, our access to this reinforcer is constrained - we can't afford to drink as much coffee as we normally do. When coffee prices spike, we will substitute other caffeinated drinks like tea, caffeine shots, and energy drinks. Likewise, if your partner moves to another city, your access to intimate reinforcers is greatly constrained. Substitute reinforcers might include social reinforcers from friends, intimate reinforcers from a new partner, or drug reinforcers from a bottle. Now back to the "it depends" answer. 
The matching law can be used to predict and influence choice if the two different reinforcers substitute for one another. For example, clothes stocked by Moxx and Rass substitute for one another - if Rass raises their prices, we buy more clothes at Moxx - so the matching law applies. However, if the two reinforcers do not substitute for one another, then no, the matching law does not apply. For example, clothes and hamburgers do not substitute for one another. If the price of clothes goes up, you will not begin wearing hamburgers. The study of substitute reinforcers has revealed some important findings in the area of substance use disorders. This topic is explored further in Extra Box 2.

Impulsivity and Self-Control

In our everyday language, we often refer to people as "impulsive" or "self-controlled." Sometimes these words are used as descriptions of behavior ("I bought the dress on impulse"), and sometimes as explanations of behavior ("You know why she bought that dress - because she's so impulsive"). In the latter case, the cause of impulsive (or self-controlled) behavior is hypothesized to be inside the person. That is, we see an individual make several impulsive choices and then speculate that the cause of those choices is "impulsivity" or a lack of "willpower." Conversely, if we see someone resist a temptation, we point to "self-control" as the cause. As we have previously discussed, such explanations are circular (see Chapter 9). The only evidence for an internal "impulsivity" or "self-control" is the external pattern of choices made. When the only evidence for the cause (impulsivity) is the effect (impulsive choice), the explanation is circular. A more scientific approach to understanding impulsive and self-control choice begins by specifying behavioral definitions (what is an impulsive choice?) and then looking for functional variables that systematically influence these choices. We will use these definitions:

Impulsive choice: Choosing the smaller-sooner reward and foregoing the larger-later reward.

Self-control choice: Choosing the larger-later reward and foregoing the smaller-sooner reward.

Let's apply these definitions to some everyday impulsive choices. How about the choice between eating junk food (impulsive choice) and eating healthy food (self-control choice)? To put some meat on this example, imagine going to Taco Bell for lunch and eating a beef quesarito, cheesy fiesta potatoes, and a medium Mountain Dew. This meal gives you the immediate enjoyment that only a beef quesarito can provide. However, by choosing to eat this, you are consuming almost all the calories you are allowed under your diet.
Because you already had a mocha frappuccino for breakfast and you do not plan to skip dinner, you have chosen to forego your long-term weight loss goal. Is eating this meal an impulsive choice? Most of us would intuitively say yes, but does it meet the behavioral definition of an impulsive choice - is junk food a smaller-sooner reward that requires you to forego a larger-later reward? Relative to the long-term benefits of sticking to your diet (weight loss, improved health, reduced risk of disease, higher self-esteem, etc.), that beef quesarito is a pretty small reward. Is it sooner? Yes, the benefits of eating the quesarito meal are experienced right now. The benefits of sticking to your diet will not be felt for months. Therefore, eating junk food is impulsive - you are choosing the smaller-sooner reward and foregoing the larger-later reward. Conversely, choosing to stick to the diet is a self-control choice because you are preferring the larger-later reward and foregoing the smaller-sooner one. Before we provide more examples of impulsive and self-control choices, note that when we make an impulsive choice (like eating an unhealthy snack) we feel a strong sense of wanting. However, almost immediately after eating, we regret our decision. This is quite common with impulsive choices and we will have more to say about this later, in the section on "Preference Reversals." But for now, Table 13.3 provides more examples of impulsive and self-control choices. See if you would feel tempted by these impulsive choices, only to later regret having made them.

More Uncertainty about Herrnstein's Experiment

In the next part of Herrnstein's experiment, uncertainty was increased by programming a VI schedule on both keys. A VI 3-minute schedule was arranged on the left key (RL = 20 reinforcers per hour) and a different VI 3-minute schedule was arranged on the right (RR = 20 reinforcers per hour, too). In many ways, this resembles the reinforcement schedules operating when you choose between shopping at two comparable stores like Moxx and Rass. Reinforcers (bargain buys) are found intermittently; we cannot predict when the next one will be found, but our experience tells us that they are found about equally often at the two stores (RMoxx = RRass). If, in all other ways, the stores are the same (same distance from your apartment, same quality of clothes, etc.), then you should visit both stores equally often. This is what Herrnstein's equation predicts when a VI 3-minute schedule is programmed on the left and right keys:

RL / (RL + RR) = 20 per hour / (20 per hour + 20 per hour) = 20/40 = 0.5, or 50%

If 50% of the reinforcers are obtained by pecking the left key, then choice should match this percentage: half of the pecks should be directed toward that key. As shown in Figure 13.6, the pigeons behaved in accord with this prediction, allocating an average of 53% of their responses to BL. To make it easier for you to see deviations from predictions of the matching equation, we added a blue line to the figure. The matching equation predicts all of the data points should fall along this blue line.

In another phase, Herrnstein's equation was tested by arranging a VI 9-minute schedule on the left key (RL = 6.7 reinforcers per hour) and a VI 1.8-minute schedule on the right key (RR = 33.3 reinforcers per hour). Now, reinforcers are obtained more often on the right key, the key the pigeons previously dispreferred. Plugging these reinforcement rates into the matching equation,

RL / (RL + RR) = 6.7 / (6.7 + 33.3) = 6.7/40 = 0.1675, or 16.75%

we can see that about 17% of the reinforcers are obtained by pecking the left key. Therefore, the matching equation predicts the pigeons will choose to allocate 17% of their behavior to the left key (and the other 83% to the right key). As shown in Figure 13.7, in Herrnstein's experiment this prediction was pretty accurate.

Translating this to shopping, if we rarely find bargains at Rass but usually find them at Moxx, then Herrnstein's matching equation predicts we should spend most, but not all, of our time shopping at Moxx. Makes sense, but imagine that our local Rass store gets a new acquisitions manager and now shopping there is reinforced twice as often as it is at Moxx. The matching equation predicts that we will switch our allegiance and spend two-thirds of our time shopping at Rass. As shown in Figure 13.8, when Herrnstein tested this prediction, his pigeons did exactly this. The additional green data points in the figure show choice data from other conditions in which Herrnstein tested the predictions of the matching equation. As you can see, the predictions were quite accurate.
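The predictions walked through above can all be reproduced with the same one-line calculation (a sketch; the function name is ours, and the reinforcer rates are those from the text):

```python
def pct_left(r_left, r_right):
    """Matching prediction: % of responses allocated to the left option."""
    return 100 * r_left / (r_left + r_right)

# Equal VI 3-min schedules: 20 vs. 20 reinforcers/hour.
print(round(pct_left(20, 20)))     # 50

# VI 9-min vs. VI 1.8-min: 6.7 vs. 33.3 reinforcers/hour.
print(round(pct_left(6.7, 33.3)))  # 17

# Rass now reinforces shopping twice as often as Moxx, so two-thirds
# of shopping trips are predicted to go to Rass.
print(round(pct_left(2, 1)))       # 67
```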

Using the Matching Law to Positively Influence Behavior

It should be clear by now that Herrnstein's matching equation may be used to accurately predict choices between substitute reinforcers available in uncertain environments. What about the other goal of behavior analysis? Can the matching equation identify functional variables that may be used to positively influence behavior? If you read Extra Boxes 1 and 2, then you will know the answer is "yes." This section explicitly identifies those functional variables and specifies how to positively influence choice. In the matching equation, the functional variables appear on the right side of the equals sign, that is, R1 and R2. By changing these variables, we can increase socially desirable behavior and decrease undesirable behavior. Let B1 serve as the frequency of socially desirable behavior. Given Herrnstein's equation, there are two ways to increase the proportion of behavior allocated to B1. The first technique is to increase R1. That is, ensure that the socially desirable behavior is reinforced more often. This advice is entirely consistent with everything presented in the book thus far - if we want to see more of B1, reinforce it more often. The top portion of Table 13.1 provides an example of this prediction. The second technique for increasing B1 is to decrease R2. Said another way, if we want to increase socially desirable behavior, get rid of other substitute reinforcers. If we reduce R2 to zero, then we will have arranged a differential reinforcement procedure. That is, B1 will be reinforced, but B2 will not. The matching equation predicts differential reinforcement, if perfectly implemented, will yield exclusive choice for B1, the reinforced behavior. But short of reducing R2 to zero, the matching equation predicts a nonexclusive shift toward B1, the socially desirable behavior. This is demonstrated in the lower portion of Table 13.1. The matching equation also prescribes two techniques for decreasing an undesired behavior (B2). 
The first technique is to decrease R2. As shown in the upper part of Table 13.2, when we decreased R2 from 10 reinforcers per minute to 2, the percentage of behavior allocated to B2, the undesired behavior, decreased from 50% to 17%. The second technique for decreasing an undesired behavior (B2) is to increase R1. This prediction is shown in the lower portion of Table 13.2. When R1 is increased (from 10 reinforcers per minute to 50), the percentage of behavior allocated to inappropriate B2 behaviors decreases from 50% to 17%.
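Both techniques can be checked with a few lines of arithmetic. Below is a minimal Python sketch of Herrnstein's matching equation, B1/(B1 + B2) = R1/(R1 + R2); the function name and reinforcement rates are our own, chosen to reproduce the percentages in Tables 13.1 and 13.2.

```python
def b1_share(r1, r2):
    """Herrnstein's matching equation: the proportion of behavior
    allocated to B1 equals the proportion of reinforcers earned by B1."""
    return r1 / (r1 + r2)

# Baseline: equal reinforcement rates (10/min each) -> 50% to each behavior
print(round(b1_share(10, 10) * 100))        # 50

# Technique 1: decrease R2 from 10/min to 2/min -> B2's share falls to ~17%
print(round((1 - b1_share(10, 2)) * 100))   # 17

# Technique 2: increase R1 from 10/min to 50/min -> B2's share again ~17%
print(round((1 - b1_share(50, 10)) * 100))  # 17
```

Notice that the two techniques are interchangeable in their effect on relative behavior: what matters is the ratio of the two reinforcement rates, not either rate alone.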

Choosing between Uncertain Outcomes

Later in the chapter we will return to the choices you just made, but for now consider the choice you make when leaving home and deciding where to go shopping for clothes. There are many options, but for simplicity we will confine our choices to two stores: Moxx or Rass. Which one will you choose? You cannot be certain which store will have that great bargain you are looking for, so you will have to decide based on your past experiences. That is, you will have to decide which one has had more bargains in the past.

In the 1960s, Richard Herrnstein conducted some of the first operant learning experiments on choice. He studied choices made by rats and pigeons. As animals are uninterested in shopping for clothes, Herrnstein arranged food as the reinforcer. Rather than building two little grocery stores that the subjects could shop in, he arranged two response keys (pigeons) or two levers (rats) on the wall of the chamber; by pecking or pressing them, the animals could occasionally obtain food reinforcers. The individual animals participating in these experiments were free to do whatever they chose, whenever they wanted. They could peck one key for a while and then the other, they could choose to press one lever exclusively, or they could do something else, such as walk around, groom themselves, or take a nap.

In one experiment with pigeons, Herrnstein (1961) programmed a variable-interval (VI) 90-second schedule of reinforcement to control the availability of reinforcers on the left key (BL), whereas pecks on the right key (BR) were never reinforced. Note that the consequence of pecking BL is uncertain. Sure, pecking this key will be reinforced once, on average, every 90 seconds, but it is impossible to be certain which peck will be reinforced. Despite this uncertainty, the pigeons spent all of their time pecking the left key (another example of preference for reinforcement over no consequence).

Herrnstein's Matching Equation

THE MATCHING LAW, TERRORISM, AND WHITE NATIONALISM

Some of the most difficult-to-understand choices are those that we find reprehensible. For example, violent terrorist acts that take the lives of innocents are difficult for us to comprehend. What could possess someone to do something so evil? Likewise, why would anyone join a neo-Nazi group and march through the streets of Charlottesville while chanting, "Jews will not replace us!"? If we could predict and influence such choices, we could reduce violence and hate.

In her documentary films Jihad and White Right: Meeting the Enemy, the groundbreaking filmmaker Deeyah Khan explores the variables that lead young men to make these choices. In Jihad, Khan interviews young Muslim men who leave their homes in the West to join Jihadi movements in the Middle East. In most cases, these men are struggling to succeed in the West. They are failing to establish friendships and romantic relationships; they are not meeting the expectations of their parents or their culture. Most of their interactions with Westerners left them feeling ashamed and humiliated. In a word, they perceived themselves as failures.

Then came their first interactions with a Jihadi recruiter. While no one in the West was giving them any positive reinforcement, the recruiter spent hours online interacting with them, telling them how important they were, describing how they could be part of a life-changing movement, and how, if they died in the movement, they would be handsomely rewarded in the afterlife. The matching law makes clear predictions about the choices these young men are susceptible to making. First, we define BWest as the behavior of interacting with people in the Western world, and RWest as the reinforcers obtained in these pursuits.
If BWest is rarely reinforced (i.e., RWest approaches zero), then the matching law predicts interacting with the Jihadi recruiter (BJihad) will occur if this behavior is richly reinforced with the kind of attention the Jihadi recruiters regularly provide (RJihad):

BJihad / (BJihad + BWest) = RJihad / (RJihad + RWest)

These recruiters not only richly reinforce these online interactions, they shape increasingly hardline opinions when the recruits begin to mimic the recruiter's radical ideas. In extreme cases, the young men who are the most alienated (i.e., the least likely to obtain social reinforcers from anyone in the West) leave their homes and join the extremist movement. Consistent with this analysis, Deeyah Khan reported that these young men are not drawn to the movement by religion; they are more loyal to their recruiters than to their religion. These recruits are escaping alienation (i.e., a low value of RWest) in favor of a rich source of social reinforcement (RJihad).

In her film White Right: Meeting the Enemy, Khan found that young white men with a similar profile were drawn to the neo-Nazi movement. Those without friends, gainful employment, or supportive families were drawn to the reinforcers obtained by joining the gang of "brothers" who agree with one another, march with one another, and make headlines with one another. Not only does the matching law predict these outcomes, it tells us how to prevent them. According to the matching law, the answer is to identify those who are failing in life and offer them assistance, friendship, and a source of reinforcement for engaging in socially appropriate behavior. In other words, increase the value of RWest. Could this work? In her films, Khan documents several cases of white nationalists and Jihadis who come to reject their radical beliefs when they begin interacting with those they hate - when they are befriended by these professed enemies, and when they are accepted, even in the face of their hateful speech.
Likewise, organizations that work with incarcerated white nationalists (e.g., Life After Hate) report that such undeserved kindness from others can play an important role in turning these individuals away from hate. Of course, case studies fall short of experimental, scientific evidence, but these conversions from hate to love are predicted by the matching law. They suggest we should approach our enemies and shape their behavior with love, not revile them from afar.

Influencing Impulsive Choice

Steeply discounting the value of future consequences (solid curve in Figure 13.13) can lead to impulsive choices. When the subjective value of the smaller-sooner reward (height of the green bar) exceeds that of the larger-later reward (red dot at T1), the impulsive choice is made. By contrast, when delayed rewards are discounted more shallowly (dashed curve), the subjective value of the larger-later reward (blue dot at T1) exceeds that of the smaller-sooner reward, and the self-control choice is made. If, in Figure 13.10, your own Rich Uncle Joe discounting curve were steeper than ours, there is a good chance that you are young and not wealthy; younger and less well-off individuals tend to have steeper discounting curves (Reimers et al., 2009).

Consistent with the analysis shown in Figure 13.13, a substantial amount of evidence suggests steeply discounting the future is correlated with substance-use disorders such as cigarette smoking and the misuse of alcohol, cocaine, methamphetamine, heroin, and other opiates (Bickel et al., 1999; Heil et al., 2006; Madden et al., 1997). Likewise, steeply discounting delayed consequences is correlated with relapsing during drug-abuse treatment (Coughlin et al., 2020; Harvanko et al., 2019; MacKillop & Kahler, 2009). Similarly, pathological gambling and behaviors that risk significant health losses are correlated with steep delay discounting (Dixon et al., 2003; Herrmann et al., 2015; Kräplin et al., 2014; Odum et al., 2000). Perhaps you remember from Chapter 2 that correlation does not imply causation. No experiments have established that steeply discounting the value of future consequences plays a causal role in addictions. However, there are some findings that are suggestive.
For example, studies that have assessed delay discounting in human adolescents have reported that steep delay discounting precedes and predicts early drug use (Audrain-McGovern et al., 2009) and similar findings have been reported in rats (Perry et al., 2008). Such findings suggest reducing delay discounting early in life might improve decision-making and lessen human suffering. The following sections describe two methods for reducing impulsive choice. One method - using a commitment strategy - works by engineering the environment to improve the choices we make. The second method - delay-exposure training - works through learning.

Reinforcer Delay

The final variable that produces exclusive choice is reinforcer delay; that is, how long you have to wait for the reinforcer after having made your choice. For example, if two online retailers offer the headphones you want for $50, but one of them (BL) can ship them twice as quickly as the other one (BR), then you will choose BL because of the difference in reinforcer delay (see Figure 13.4). Once again, many consumers exclusively buy online products from Amazon.com because Amazon ships products to customers faster than other retailers. Delays to reinforcers strongly influence our daily decision-making. Consider that many iPhone users spent large sums of money on a new phone in 2017 when Apple slowed the processing speed of their older phones. Customers on Reddit accused Apple of using behavioral technology (reinforcer delays) to induce customers to buy a new iPhone. Apple denied the charge and offered to fix the cause of the spontaneous shut-downs (older batteries) at a reduced price.

Predicting Impulsive Choice

The first task for a behavioral science is, as always, to accurately predict behavior; in the present case, predicting impulsive choice. At the opening of this chapter, we discussed four variables affecting choice. You may have noticed that two of those variables - reinforcer size/quality and reinforcer delay - are relevant to impulsive and self-control choices. A larger-later reward is attractive because of its larger reinforcer size/quality, but it is a less effective reinforcer because it is delayed. The smaller-sooner reward has immediacy going for it, but it is a less effective reinforcer because of its small size or lower quality. These two functional variables pull behavior in opposite directions - we want more, but we also want it now. If the delayed benefits of adhering to our diet could be obtained right after refusing dessert ("No cake for me please." Poof! You lose 1 pound of unwanted fat), no one would ever eat dessert again. Unfortunately, these health benefits are delayed and, therefore, much of their value is lost on us.

This reduction in the subjective value of a delayed reinforcer is illustrated in Figure 13.10. Instead of showing the value of delayed health outcomes, we show the subjective value of the delayed monetary reward that Rich Uncle Joe promised us earlier in the chapter. The red data points in the figure plot how someone might subjectively value a $1,000 gift promised 1 month, 1 year, 5 years, and 25 years in the future. How did you subjectively value these delayed gifts (the subjective values are those amounts that you wrote in each blank of the Rich Uncle Joe survey)? Go back to the survey and add your own subjective values to Figure 13.10. If you said that you would sell the $1,000 in 5 years for $300 in cash right now (and not a penny less), then you should add a data point at $300 on the y-axis and 5 years on the x-axis.
The first thing to note about the data in Figure 13.10 is that the subjective value of a reinforcer does not decline linearly (i.e., according to a straight line). Instead, the line is curved in a way that reflects the fact that, for our hypothetical decision-maker whose data are shown, reinforcers lose most of their value at short delays (≤ 5 years in Figure 13.10). Another way to say this is that the value of the reinforcer was discounted steeply in this range of delays. At delays longer than 5 years, the curve declines less steeply. In this range, waiting longer has less impact on the discounting of the delayed gift. The shape of this delay discounting curve is called a hyperbola. Importantly, this hyperbolic shape is predicted by a version of the matching law whose quantitative details we will not explore - you've had enough math for one day. This predicted hyperbolic discounting of delayed reinforcers has been evaluated and confirmed in many experiments. Rats, pigeons, monkeys, and humans all hyperbolically discount the value of delayed events (Bickel & Marsch, 2001; Green et al., 2007; Laibson, 1997; Logue, 1988; Madden et al., 1999; Mazur, 1987; Rachlin et al., 1991).

Now that we know delayed reinforcers are discounted hyperbolically, we can predict when impulsive choices will be made. Figure 13.11 shows another hyperbolic discounting curve, but this time the x-axis is reversed to show time passing from left to right. If you are the stick-figure decision-maker in the figure, you must choose between a smaller-sooner reward (SSR; the short green bar) and a larger-later reward (LLR; the tall red bar). Because the smaller-sooner reward is available right now (e.g., that slice of cake), it maintains its full value, which is given by the height of the green bar. By contrast, the subjective value of the larger-later reward (the health benefits of eating a healthy diet) is discounted because this consequence is delayed.
The hyperbolic discounting curve sweeping down from the red bar shows the discounted value at a wide range of delays. At T1, where the stick-figure decision-maker is making its choice, the discounted value of the larger-later reward is shown as a red dot. The red dot is lower than the height of the green bar; therefore, the value of the smaller-sooner reward is greater than the subjective value of the larger-later reward. Said another way, the smaller-sooner reward feels like it is worth more than the discounted value of the larger-later reward. Because choice favors the reward with the higher subjective value, our stick-figure decision-maker makes the impulsive choice; that is, they choose the smaller-sooner reward and forego the larger-later reward.
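The comparison at T1 can be illustrated numerically. The sketch below uses Mazur's (1987) hyperbolic discounting equation, V = A/(1 + kD); the reward amounts, delay, and discounting-rate values k are hypothetical, chosen only to contrast a steep discounter (who makes the impulsive choice) with a shallow one (who makes the self-control choice).

```python
def subjective_value(amount, delay, k):
    """Mazur's hyperbolic discounting equation: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

SSR = 200    # smaller-sooner reward, available now (delay = 0), so it keeps full value
LLR = 1000   # larger-later reward, delayed 5 time units

# Steep discounter (large k): the delayed $1,000 feels worth less than the
# immediate $200, so the impulsive choice is made.
steep = subjective_value(LLR, 5, k=1.0)     # 1000 / 6, roughly 167 < 200

# Shallow discounter (small k): the delayed $1,000 still feels worth $800,
# so the self-control choice is made.
shallow = subjective_value(LLR, 5, k=0.05)  # 1000 / 1.25 = 800 > 200
```

The only difference between the two decision-makers is the single parameter k, which is why steep discounting curves are treated as a marker of susceptibility to impulsive choice.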

DELAY-EXPOSURE TRAINING

The second method for reducing delay discounting and impulsive choice takes a different approach. Instead of arranging an environment in which impulsive choice is less likely to occur, delay-exposure training teaches the individual by giving them a lot of experience with delayed reinforcers. The logic is simple: if we are used to waiting (we do it all the time), then waiting for a larger-later reward is nothing out of the ordinary; why not choose the larger-later reward? By contrast, if we are used to getting what we want when we want it, then delays are unusual, aversive, and may signal that we are not going to get what we want.

Delay-exposure training has been studied in several different species and several methodologies have been employed. For example, Mazur and Logue (1978) exposed pigeons to delayed reinforcers for several months and found that this experience produced long-lasting reductions in impulsive choice. Similar findings have been reported in laboratory studies with rats and humans (Binder et al., 2000; Eisenberger & Adornetto, 1986; Stein et al., 2015). A form of delay-exposure training has been used in preschools to improve self-control choice and reduce the problem behaviors that can occur when preschoolers don't get what they want when they want it (Luczynski & Fahmie, 2017).

Reinforcer Size/Quality

The second variable that strongly influences choice is the size or the quality of the reinforcer. Given a choice between a small ($0.05) and a large ($5.00) reinforcer, all else being equal, the larger one will be chosen exclusively (see Figure 13.2). To be sure that this outcome was influenced by the difference in reinforcer size ($0.05 vs. $5.00) and not by a preference for the button on the right, we could switch the contingencies so the smaller reinforcer is assigned to the right button (BR) and the $5 reinforcer to the left (BL). When behavior shifts toward BL, we would conclude that reinforcer size strongly influences choice.

Because reinforcer quality has the same effect as reinforcer size, we can use choice to answer the question: Which reinforcer is qualitatively better? For example, animal-welfare advocates can give farm animals choices between different living arrangements. Such behavioral research has shown that cows prefer quiet settings over noisy ones and a plant-based over a meat-based diet, and that chickens prefer smaller cages than humans might expect (Foster et al., 1996; Lagadic & Faure, 1988). If you are an animal lover, you should know that this work is underappreciated and represents an important area for future growth in behavior analysis (see Abramson & Kieson, 2016 for a discussion of this potential). But for now, just remember that individuals choose larger (higher-quality) reinforcers over smaller (lower-quality) reinforcers.

Effort

The third variable that produces exclusive choice is effort. When people and other animals are given a choice between working hard to get a reinforcer and working less hard to get the same reinforcer, they exclusively choose the less effortful option. Figure 13.3 shows the results of an experiment conducted by Herrnstein and Loveland (1975), in which pigeons exclusively chose less work (fixed-ratio [FR] 1) over more work (FR 10) in pursuit of the same reinforcer.

Translating this to your everyday life, if you need an elective course in plant science and you hear that one section is taught by an instructor who assigns a lot of busy work (BR) and another section is taught by a professor who doesn't assign any homework (BL), all else being equal, you will choose to enroll in the course taught by professor BL. The reason is that both professors offer the same reinforcer (elective credits in plant science) but one requires less effort than the other. Why waste your effort? Similarly, many consumers exclusively buy products online because of the reduced effort in ordering. For example, Amazon's "dash buttons" allow you to order products like laundry detergent with a single press of a button conveniently located in the laundry room. What could be easier? Compare this with making a shopping list, driving to the store, finding the laundry detergent, and then hauling it home. Simply put, all else being equal, individuals choose low-effort reinforcers over high-effort reinforcers.

The Matching Law and Attention

The world in which we live is filled with thousands, if not millions, of stimuli, yet we attend to very few of them. For example, when is the last time that you chose to attend to the space on the ceiling in the corner opposite from you right now? Have you ever looked at that spot? Why do you instead spend so much time attending to books, your computer, and your phone? In the 1980s, no one spent 4 hours a day looking at their phones. What changed?

If you have understood this chapter so far, you should be able to propose a falsifiable hypothesis about the allocation of your attention. The hypothesis forwarded by the matching law is that more attention will be allocated to stimuli predictive of higher rates of reinforcement (like your phone today), less attention will be allocated to stimuli predictive of lower rates of reinforcement (like phones in the 1980s), and no attention will be allocated to stimuli predictive of no reinforcers (like that spot on the ceiling).

The evidence for this matching law hypothesis of attention comes from several laboratory studies. Rats, pigeons, and humans reliably attend to stimuli signaling higher rates of reinforcement more than they attend to stimuli signaling lower rates, or no reinforcement, exactly as the matching law predicts (Della Libera & Chelazzi, 2006, 2009; Shahan & Podlesnik, 2008; for review see Shahan, 2013). Your everyday experience is consistent with these findings. Your attention is easily drawn to the sound of your name across a noisy room because this auditory stimulus has reliably signaled a momentary increase in the probability of a reinforcer. For example, when a friend calls your name, attending to them might provide an opportunity to watch a funny video, hear a juicy bit of gossip, or get invited to a party. Similarly, your attention is readily drawn to a $20 bill blowing across campus, but not at all drawn to a candy wrapper rolling in the wind.
The matching law's account of human attention may also help to explain why humans choose to allocate more of their news-gathering activities to biased sources, either progressive or conservative. These news outlets provide a higher rate of reinforcement; that is, they more frequently publish stories their audiences find reinforcing. For example, individuals who describe themselves as progressive choose to allocate more of their attention to newspapers and cable news channels that frequently publish stories and editorials consistent with their world view (e.g., MSNBC), and less attention to news outlets that frequently publish news appealing to a conservative audience (e.g., Breitbart, Fox News). Those with the opposite world view allocate their attention in the opposite direction, where they find the higher rate of reinforcement. If you let Google or Facebook choose what news you read, then you should know that their artificial intelligence algorithms have learned what you find reinforcing, and, to keep your attention directed at their platform, they feed you a steady diet of news stories predicted to be reinforcing (Zuboff, 2019). The downside of this is that different individuals have different sets of "facts," leading to the shared belief that the other tribe is "nuts."

As you can see, the predictions of the matching law have been subjected to a large number of empirical tests involving a wide variety of choices and species. The preponderance of the evidence is supportive of the matching law. More advanced courses in behavior analysis will outline even further the quantitative predictions of the matching law, but these are beyond the scope of this text. The next section explores one of these predictions without going into any further quantitative details. As you will see, the matching law helps us to understand some of the maladaptive and irrational choices that we all make every day.

Summary of Influencing Impulsive Choice

This chapter defined choice as a voluntary behavior occurring in a context in which alternative behaviors are possible. We reviewed four variables that tend to produce exclusive choice for one alternative over another. That is, we strongly prefer to obtain a reinforcer instead of nothing, to get a larger over a smaller reinforcer, to work less for the same reinforcer, and to receive reinforcers sooner rather than later. Choice is less exclusive when the outcomes of choice are uncertain. This describes the hunter-gatherer environment in which human behavior evolved. Whether hunting game or foraging for berries, our ancestors learned where food was more likely to be found, but there were no guarantees. In this uncertain environment, their behavior probably conformed to Herrnstein's matching equation, which holds that choice is influenced by relative rates of reinforcement. In modern times, the matching law can help us understand choices that are otherwise difficult to explain: terrorism, substance-use disorder, and white nationalism. In each case, the matching law suggests reducing these behaviors may be achieved by arranging alternative reinforcers that will substitute for the problematic reinforcer. These reinforcers will be more competitive for human behavior if they are more meaningful (and more immediate) than those reinforcers currently maintaining behaviors destructive to self and others. The chapter also explored the impulsive choices we all make but later regret. These preferences for a smaller-sooner over a larger-later reward can have a negative impact on our lives. We sleep longer than we planned, lapse from our diet, and may even cheat on our partner because the benefits of doing so are immediate and the benefits of doing otherwise are delayed and, therefore, discounted in value. Many studies conducted in many species have established that delayed rewards are discounted in value and that the shape of the delay-discounting function is a hyperbola. 
Hyperbolic discounting can explain why we repeatedly change our minds - preferring the larger-later reward most of the time, and defecting on these good intentions when the smaller-sooner reward may be obtained immediately. While steeply discounting the value of delayed rewards is correlated with substance-use disorders and other health-impacting behaviors, there are methods for reducing impulsive choice. One method is the commitment strategy - choosing the larger-later reward well in advance, when the smaller-sooner reward is not immediately available. To the extent that this commitment cannot be reversed, this greatly increases self-control choices. The other method, delay-exposure training, provides the individual lots of experience with delayed reinforcers. Delay-exposure training seeks to change the individual's learning history, thereby producing large and lasting reductions in impulsive choice.
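The preference reversal just described falls directly out of the hyperbolic equation. In the sketch below, the same decision-maker, with a single discounting rate k, prefers the larger-later reward when both rewards are far away and then defects once the smaller-sooner reward becomes immediate. The amounts, delays, and k are hypothetical, and the model assumed is Mazur's V = A/(1 + kD).

```python
def subjective_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

K = 1.0               # one discounting rate for both rewards
SSR, LLR = 200, 1000  # smaller-sooner and larger-later reward amounts

# Far in advance (SSR in 10 time units, LLR in 15), the LLR feels more
# valuable, so we commit to the larger-later reward:
assert subjective_value(LLR, 15, K) > subjective_value(SSR, 10, K)  # 62.5 vs ~18.2

# Once the SSR is immediately available (LLR now 5 units away), the
# preference reverses and we defect to the smaller-sooner reward:
assert subjective_value(SSR, 0, K) > subjective_value(LLR, 5, K)    # 200 vs ~166.7
```

No parameter changes between the two comparisons; the crossing of the two hyperbolic curves as time passes is what produces the change of mind.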

Summary of Four Variables Affecting Choice

This section has considered four variables that, all else being equal, will generate exclusive choice. When we say "all else being equal," we mean that the only difference between the two alternatives is that one, for example, produces the reinforcer faster than the other. If there are no other differences, then an exclusive choice will develop for the alternative that yields (1) a reinforcer (vs. one that does not), (2) a bigger/better reinforcer, (3) the same reinforcer less effortfully, and (4) the same reinforcer sooner. The remainder of this chapter will focus on nonexclusive choices. That is, situations in which we prefer BL, but we don't choose it all the time; sometimes we choose BR. For example, an angler may prefer to fish in lake BL but will sometimes fish in lake BR. Similarly, a shopper may prefer clothing store BL, but will sometimes shop in BR (to see if they have any bargains). In both of these situations, the consequences of choice are less certain than discussed earlier and, as a result, choice is nonexclusive; that is, we prefer one lake/store but search for reinforcers in both. Another form of nonexclusive choice is variously described as impulsivity, self-control, or delay of gratification. Sometimes we make choices that we describe as "impulsive" and then regret the decision. For example, we order a high-calorie dessert after eating a full meal, and after enjoying the immediate benefits of this choice, we regret our decision and berate ourselves for our lack of "self-control" or "willpower." Appealing to reifications like "impulsivity" and "willpower" reveals how little we understand the choices we make. As you will see later, nonexclusive choices between, for example, dieting and breaking from the diet, are predictable (one of the goals of behavior analysis). More importantly, some progress has been made in identifying functional variables that can be used to positively influence choice. 
As always, we hope readers of this book will further this progress and will help to reduce impulsive choices such as problem drug use, uncontrolled eating, and pathological gambling.

What Is Choice?

When a judge considers what punishment is appropriate to a crime, an important question is whether the individual who committed the crime chose to do it. If the criminal action was an involuntary response (e.g., an elicited startle reflex caused harm to another person), then the judge would dismiss the case. For punishment to be imposed, the individual needs to have chosen to commit the crime. Choice may be defined as voluntary behavior occurring in a context in which alternative behaviors are possible. Two parts of this definition are noteworthy.

First, choice is voluntary behavior, which is to say choice is not a phylogenetically determined reflex response or a Pavlovian response evoked by a conditioned stimulus (see Chapter 4). We don't choose when to begin digesting a meal, or whether to salivate, vomit, or startle; these are involuntary reflexes either elicited by unconditioned stimuli or evoked by conditioned stimuli. Readers should not equate "voluntary behavior" with "willed behavior," that is, a behavior that we feel like we initiate. Instead, by "voluntary" all we mean is nonreflexive.

Second, choice occurs in a context in which alternative behaviors are possible. After putting your money into the vending machine, several alternative behaviors are possible - you could press one button to get a bag of chips, another button to select a sleeve of cookies, and another button still to obtain the healthier granola bar. Similarly, when considering which colleges to apply to, deliberations occur in a choice context - one in which alternative behaviors are possible. When you think about it, almost all of our voluntary actions are choices. When walking, consider that we could be running, crawling, or balancing on one leg while sipping tea. Alternative behaviors are almost always possible. You began reading this chapter in a choice context - alternative behaviors were possible. Indeed, alternative behaviors are possible right now.
You can always stop reading and engage in literally thousands of other behaviors at any time. But here you are reading. Why? While there are many theories of choice in the psychological and social sciences, we will begin by focusing on four simple variables that strongly influence choice. These variables will be familiar to you, as we discussed them in previous chapters.

