L&B chapter 7


avoidance paradox

How can the nonoccurrence of an event (shock) serve as a reinforcer for the avoidance response?

Animals that are given certain pharmacological agents show _______ than control animals.

less learned helplessness

Most behavior-reduction techniques teach a person what not to do, but they do not teach the patient what to do. _______ remedies this deficiency, and it provides more acceptable behaviors to fill the "behavioral vacuum" that is created when one behavior is reduced.

differential reinforcement of alternative behavior (DRA)

Rolider and Van Houten (1985)

found that delayed punishment could be effective if they used a tape recorder to replay the sounds of a child's disruptive behavior from earlier in the day, and then delivered the punisher (physical restraint or scolding)

One-factor theorists favor the _____ approach, because they maintain that animals are sensitive to ________, such as an overall reduction in shock frequency that occurs when an avoidance response is made.

molar; long-term consequences

One-factor theorists favor the _____ approach. Two-factor theorists favor the _____ approach.

molar; molecular

Dinsmoor (2001) has suggested that bodily feedback from the act of making an avoidance response can serve as an immediate reinforcer because, even in procedures such as those of Herrnstein and Hineline (1966), responding leads to a relative degree of safety. This exemplifies the ____-factor theory and a _____ approach.

two; molecular

Two-factor theorists favor the ______ approach because they assume that the immediate consequences control avoidance responses.

molecular

A few studies have shown that it is possible to train animals to make an arbitrary operant response in an avoidance situation by somehow making the desired response...

more compatible with the SSDRs of that species.

Research on rats by Grau and his colleagues has shown learned helplessness can be observed at the level of the _______.

spinal cord

Since systematic desensitization (which never allows the patient to experience a high level of fear) is a more pleasant form of therapy, Morganstern argued that...

there is little justification for using flooding in therapy.

Morganstern (1973) reviewed a number of studies that compared the effectiveness of flooding and systematic desensitization, and he concluded that...

these procedures are about equally effective.

Both ____ and _____ have become increasingly popular with teachers and behavior therapists, because they are effective ways to reduce unwanted behaviors without presenting any aversive stimulus.

time-out; response cost

differential reinforcement of alternative behavior

-A classic study by Ayllon and Haughton (1964) offers a good illustration of how extinction of inappropriate behaviors can be combined with reinforcement of more appropriate behaviors--a procedure known as differential reinforcement of alternative behavior.
-Ayllon and Haughton worked with patients in a psychiatric hospital who engaged in psychotic or delusional speech. They found that this inappropriate speech was often reinforced by the psychiatric nurses through their attention, sympathy, and conversation.
-Ayllon and Haughton conducted a two-part study.
-In the first part, nurses were explicitly instructed to reinforce psychotic speech with attention and tangible items (gum, candy, etc.). Psychotic speech increased steadily during this part of the study.
-In the second phase, the nurses were told to ignore psychotic speech but to reinforce normal speech (e.g., conversations about the weather, ward activities, or other everyday topics). Psychotic speech then declined while normal speech increased.
-This study demonstrated both the power of attention as a reinforcer and how attention can be withheld from inappropriate behaviors and delivered for more desirable alternative behaviors.
-DRA is a common part of treatment packages for behavior reduction.
-It has been used effectively for such problems as food refusal, aggression, disruptive classroom behavior, and self-injurious behavior.
-The logic is that most behavior-reduction techniques teach a person what not to do, but they do not teach the patient what to do. DRA remedies this deficiency, and it provides more acceptable behaviors to fill the "behavioral vacuum" that is created when one behavior is reduced.

disadvantages of using punishment

-Although Azrin and Holz (1966) concluded that punishment can be a method of behavior change that is at least as effective as reinforcement, they warned that it can produce a number of undesirable side effects.
-First, they noted that punishment can elicit several emotional effects, such as fear and anger, that are generally disruptive of learning and performance.
-Second, punishment can sometimes lead to a general suppression of all behaviors, not only the behavior being punished.
-A third disadvantage is that in real-world situations the use of punishment demands the continual monitoring of the individual's behavior. In contrast, the use of reinforcement does not necessarily demand such monitoring, because it is in the individual's interest to point out instances of a behavior that is followed by a reinforcer.
-A practical problem with the use of punishment is that individuals may try to circumvent the rules or escape from the situation entirely (e.g., hide).
-Another problem with using punishment is that it can lead to aggression against either the punisher or whoever happens to be around.
-A final problem with using punishment (one not mentioned by Azrin and Holz) is that in institutional settings, the people who must actually implement a behavior modification program may be reluctant to use punishment.

Thorndike and Skinner on whether punishment is indeed the opposite of reinforcement

-Based on their own research, each concluded that the effects of punishment are not exactly opposite to those of reinforcement. However, their experiments are not very convincing.
-For example, Skinner (1938) placed two groups of rats on variable-interval (VI) schedules of lever pressing for three sessions; then each animal had two sessions of extinction. For one group, nothing unusual happened during extinction; responding gradually decreased over the two sessions. For the second group, however, each lever press during the first 10 minutes of extinction was punished: Whenever a rat pressed the lever, the lever "slapped" upward against the rat's paws. This mild punishment was enough to reduce the number of responses during these 10 minutes to a level well below that of the first group. However, when the punishment was removed, response rates increased, and by the end of the second session, the punished animals had made just about as many responses as the unpunished animals. From these results, Skinner concluded that the effects of punishment are not permanent and that punishment produces only a "temporary suppression" of responding.
-The problem with Skinner's conclusion is that although the effects of punishment were temporary in his experiment, so was the punishment itself. We know that the effects of positive reinforcement are also "temporary" in the sense that operant responses will extinguish after the reinforcer is removed.
-Since Skinner's early experiment, many studies have addressed the question of whether punishment is the opposite of reinforcement in its effects on behavior. We can attempt to answer this question by examining the two words used by Skinner: temporary and suppression.
-If, unlike in Skinner's experiment, the punishment contingency is permanent, is the decrease in behavior still temporary? Sometimes it can be: Some studies have shown that subjects may habituate to a relatively mild punisher. In an experiment by Azrin (1960), pigeons were responding steadily for food on a VI schedule, and then punishment was introduced--each response produced a mild shock. Response rates decreased immediately, but over the course of several sessions, they returned to their preshock levels. Despite such results, there is no doubt that suitably intense punishment can produce a long-term decrease or disappearance of the punished behavior. When Azrin used more intense shocks, there was little or no recovery in responding over the course of the experiment.
-Although Skinner did not define the term suppression, later writers took it to mean a general decrease in behavior that is not limited to the particular behavior that is being punished. Does the use of punishment lead to a general reduction in all behavior, or does only the punished behavior decrease? An experiment by Schuster and Rachlin (1968) investigated this question. Pigeons could sometimes peck at the left key in a Skinner box, and at other times they could peck at the right key. Both keys offered identical VI schedules of food reinforcement, but then different schedules of shock were introduced on the two keys. When the left key was lit (signaling that the VI schedule was available on this key), some of the pigeon's key pecks were followed by shock. However, when the right key was lit, shocks were presented regardless of whether the pigeon pecked at the key. Under these conditions, responding on the left key decreased markedly, but there was little change in response rate on the right key.
-Studies like this have firmly established the fact that punishment does more than simply cause a general decrease in activity. When a particular behavior is punished, that behavior will exhibit a large decrease in frequency while other, unpunished behaviors usually show no substantial change in frequency.
-To summarize, contrary to the conclusions of Thorndike and Skinner, research results suggest that the effects of punishment are directly opposite to those of reinforcement: Reinforcement produces an increase in whatever specific behavior is followed by the hedonically positive stimulus, and punishment produces a decrease in the specific behavior that is followed by the aversive stimulus. In both cases, we can expect these changes in behavior to persist as long as the reinforcement or punishment contingency remains in effect.

Seligman and learned helplessness

-If helpless dogs are guided across the barrier for enough trials, they will eventually start making the response on their own. In more general terms, Seligman suggests that the best treatment is to place the subject in a situation where it cannot fail, so that it gradually develops the expectation that its behavior has some control over the consequences that follow.
-More interesting are studies showing that learned helplessness can be prevented in the first place by what Seligman calls immunization. For example, a rat might first be exposed to a situation where some response (such as turning a wheel) provides escape from shock. Thus the rat's first exposure to shock occurs in a context where it can control the shock. Then, in a second situation, inescapable shocks are presented. Finally, the rat is tested in a third situation where a new response (say, switching compartments in a shuttle box) provides escape from shock. Studies have shown that this initial experience with escapable shock blocks the onset of learned helplessness.
-Seligman (1975) suggests that feelings of helplessness in a classroom environment may be prevented by making sure that a child's earliest classroom experiences are ones where the child succeeds (ones where the child demonstrates mastery over the task at hand). McKean (1994) has made similar suggestions for helping college students who exhibit signs of helplessness in academic settings. Such students tend to view course work as uncontrollable, aversive, and inescapable. They assume that they are going to do poorly and give up easily whenever they experience difficulty with course assignments or other setbacks. To assist such students, McKean suggests that professors should make their courses as predictable and controllable as possible.
-In his more recent work, Seligman (2006) has proposed that another method for combating learned helplessness and depression is to train people in learned optimism. The training involves a type of cognitive therapy in which people practice thinking about potentially bad situations in more positive ways. A person learns to recognize and dispute negative thoughts. Some writers have questioned the effectiveness of Seligman's techniques for teaching optimism, but results from a number of studies suggest that they can be beneficial.

stimulus satiation

-If it is not feasible to remove the reinforcer that is maintaining an undesired behavior, it is sometimes possible to present so much of the reinforcer that it loses its effectiveness due to stimulus satiation.
-Ayllon (1963) described a female psychiatric patient who hoarded towels in her room. A program of stimulus satiation was begun in which the nurses brought her many towels each day. At first, she seemed to enjoy touching, folding, and stacking them, but soon she started to complain that she had enough and that the towels were in her way. The nurses then stopped bringing her towels, and afterward, no further instances of hoarding were observed.
-A psychiatric patient who complained of hearing voices was given ample time to listen to these voices. For 85 half-hour sessions, the patient was instructed to sit in a quiet place and record when the voices were heard, what they said, and how demanding the tone of voice was. By the end of these sessions, the rate of these hallucinations was close to zero.
-This version of stimulus satiation has also been used to treat obsessive thoughts.

learned optimism

-In his more recent work, Seligman (2006) has proposed that another method for combating learned helplessness and depression is to train people in a type of cognitive therapy in which people practice thinking about potentially bad situations in more positive ways.
-A person learns to recognize and dispute negative thoughts.
-Some writers have questioned the effectiveness of Seligman's techniques to teach optimism, but results from a number of studies suggest that they can be beneficial.

Rachlin (1969) / Modaresi (1990)

-Rachlin trained pigeons to operate a "key" that protruded into the chamber in order to avoid the shock.
-Modaresi found that lever pressing in rats was much easier to train as an avoidance response if the lever was higher on the wall, and especially if lever presses not only avoided the shocks but also produced a "safe area" (a platform) on which the rats could stand. Through a careful series of experiments, Modaresi showed that these two features coincided with the rats' natural tendencies to stretch upward and to seek a safe area when facing a potentially painful stimulus.
-Both of these studies are consistent with Bolles's claim that the ease of learning an avoidance response depends on the similarity between that response and one of the animal's SSDRs.

learned helplessness

-Repeated exposure to aversive events that are unpredictable and out of the individual's control can have long-term debilitating effects.
-Seligman and his colleagues have proposed that in such circumstances, both animals and people may develop the expectation that their behavior has little effect on their environment, and this expectation may generalize to a wide range of situations.
-e.g., a dog exposed to inescapable shock will have trouble learning to avoid the shock later.

immunization

-Studies show that learned helplessness can be prevented in the first place.
-For example, a rat might first be exposed to a situation where some response (such as turning a wheel) provides escape from shock. Thus the rat's first exposure to shock occurs in a context where it can control the shock. Then, in a second situation, inescapable shocks are presented. Finally, the rat is tested in a third situation where a new response (say, switching compartments in a shuttle box) provides escape from shock.
-Studies have shown that this initial experience with escapable shock blocks the onset of learned helplessness.

two-factor theory

-The two factors, or processes, of this theory are classical conditioning and operant conditioning, and according to the theory, both are necessary for avoidance responses to occur.
-These two factors can be illustrated in the experiment of Solomon and Wynne. One unconditioned response to shock is fear, and fear plays a critical role in the theory. Through classical conditioning, this fear response is transferred from the unconditioned stimulus (US) (shock) to some conditioned stimulus (CS) (a stimulus that precedes the shock). In the Solomon and Wynne experiment, the CS was the 10 seconds of darkness that preceded each shock. After a few trials, a dog would presumably respond to the darkness with fear. This conditioning of a fear response to an initially neutral stimulus is the first process of the theory.
-The second factor, based on operant conditioning, is escape from a fear-provoking CS. In the Solomon and Wynne experiment, a dog could escape from a dark compartment to an illuminated compartment by jumping over the barrier. It is important to understand that in two-factor theory, what we have been calling "avoidance responses" is redefined as escape responses. The theory says that the reinforcer for jumping is not the avoidance of the shock, but rather the escape from a fear-eliciting CS. This theoretical maneuver is two-factor theory's solution to the avoidance paradox.
-In summary, according to two-factor theory, ending the signal for shock in avoidance behavior has the same status as ending the shock in escape behavior--both are actual stimulus changes that can serve as reinforcers (i.e., negative reinforcers, because they are removed when a response is made).

escape extinction

-This procedure can be used when an undesired behavior is maintained by escape from some situation the individual does not like.
-e.g., with food refusal, escape from the meal is prevented until the child swallows the food. Studies have shown that this method is very effective in reducing food refusal behaviors.

punishment

-a behavior is followed by an unpleasant stimulus
-positive punishment: a stimulus is presented
-negative punishment: a pleasant stimulus is removed or omitted if a behavior occurs (a.k.a. omission)

N. E. Miller (1948)

-attempted to turn a white compartment into an aversive stimulus by shocking rats while they were in that chamber
-From this point on, no further shocks were presented, but Miller found that a rat would learn a new response, turning a wheel, when this response opened a door and allowed the rat to escape from the white chamber.
-In the next phase, wheel turning was no longer an effective response; instead, a rat could escape from the white chamber by pressing a lever. Eventually, the rats learned this second novel response.
-shows that the termination of a CS for shock can serve as a reinforcer to teach an animal totally new responses

problems with two-factor theory

-avoidance without observable signs of fear
...If the theory is correct, we should be able to observe an increase in fear when the signal for shock is presented and a decrease in fear once the avoidance response is made. However, it has long been recognized that observable signs of fear disappear as subjects become more experienced in avoidance tasks. But according to two-factor theory, fear should be greatest when avoidance responses are the strongest, since fear is supposedly what motivates the avoidance response.
...To deal with this problem, other versions of two-factor theory have downplayed the role of fear in avoidance learning. For example, Dinsmoor (2001) has maintained that it is not necessary to assume that the CS in avoidance learning produces fear (as measured by heart rate or other physical signs). We only need to assume that the CS has become aversive (meaning that it has become a stimulus the animal will try to remove).
-extinction of avoidance behavior
...From the perspective of two-factor theory, each trial on which the shock is avoided is an extinction trial: The CS (darkness) is presented, but the US (shock) is not. According to the principles of classical conditioning, the CR of fear (or aversion, if we use Dinsmoor's approach) should gradually weaken on such extinction trials until it is no longer elicited by the CS. But if the darkness no longer elicits fear or aversion, the avoidance response should not occur either. Therefore, two-factor theory predicts that avoidance responding should gradually deteriorate after a series of trials without shock. However, once avoidance responses fail to occur, the dog will again receive darkness-shock pairings, and the aversion should be reconditioned. Then, as soon as avoidance responses again start to occur, the aversion to darkness should once again start to extinguish. In short, two-factor theory seems to predict that avoidance responses should repeatedly appear and disappear in a cyclical pattern.
...Unfortunately for two-factor theory, such cycles in avoidance responding have almost never been observed. Indeed, one of the most noteworthy features of avoidance behavior is its extreme resistance to extinction.

Lerman and Vorndran (2002)

-concluded that many questions about the effective use of punishment have never been adequately studied
-There is not much research on the long-term effects of punishment, on how punishing a behavior in one situation may produce generalization to other situations, or on how to minimize the disadvantages of punishment in applied settings.
-Additional research should help to find better ways to use punishment and avoid its undesirable side effects. For example, we have seen that punishment can be quite ineffective if it is delayed, but sometimes it is not possible for a teacher or caregiver to punish a bad behavior immediately. However, there may be ways to "bridge the gap" between an unwanted behavior and punishment and still maintain the punisher's effectiveness (e.g., Rolider and Van Houten's tape recorder).

Solomon and Wynne (1953)

-conducted an experiment that illustrates many of the properties of negative reinforcement
-subjects were dogs, and the apparatus was a shuttle box--a chamber with two rectangular compartments separated by a barrier several inches high
-A dog could move from one compartment to the other simply by jumping over the barrier. Each compartment had a metal floor that could deliver an unpleasant stimulus, a shock.
-There were two overhead lights, one for each compartment. Every few minutes, the light above the dog was turned off (but the light in the other compartment remained on). If the dog remained in the dark compartment, after 10 seconds the floor was electrified and the dog received a shock until it hopped over the barrier to the other compartment.
-Thus the dog could escape from the shock by jumping over the barrier. However, the dog could also avoid the shock completely by jumping over the barrier before the 10 seconds of darkness had elapsed.
-The next trial was the same, except that the dog had to jump back into the first compartment to escape or avoid the shock.
-For the first few trials a typical dog's responses were escape responses--the dog did not jump over the barrier until the shock had started. After a few trials, a dog would start making avoidance responses--it would jump over the barrier soon after the light went out, and if it jumped in less than 10 seconds it did not receive the shock.
-After a few dozen trials, a typical dog would almost always jump over the barrier just 2 or 3 seconds after the light went out. Many dogs never again received a shock after their first successful avoidance response.
-Results such as these led earlier theorists to ponder a question that is sometimes called the avoidance paradox: How can the nonoccurrence of an event (shock) serve as a reinforcer for the avoidance response?
-Reinforcement theorists had no problem explaining escape responses, because the response produced an obvious change in an important stimulus (e.g., shock changed to no shock).
-The problem for reinforcement theorists was with avoidance responses, because here there was no such change in the stimulus.
-It was this puzzle about avoidance responses that led to the development of an influential theory of avoidance called two-factor theory, or two-process theory.

cognitive theory of avoidance

-developed by Seligman and Johnston (1973)
-proposed that an animal's behavior can only change in an avoidance task if there is a discrepancy between expectancy and observation
-On the first trial of a signaled avoidance experiment, the animal can have no expectations about shock or how to avoid it. Consequently, the animal makes no avoidance response on the first trial. However, as the trials proceed, the animal gradually develops the expectations that (1) no shock will occur if it makes a certain response, and (2) shock will occur if it does not make the response. Because the animal prefers the first option over the second, it makes the response.
-Once these two expectations have been formed, Seligman and Johnston assumed that the animal's behavior will not change until one or both of the expectations are violated. This can explain the slow extinction of avoidance behavior. As long as the animal responds on each extinction trial, all it can observe is that a response is followed by no shock. This observation is consistent with the animal's expectation, so there is no change in its behavior. Presumably, extinction will only begin to occur if the animal eventually fails to make a response on some trial (perhaps by mistake, or because it is distracted, or for some such reason).
-Only on a trial without an avoidance response can the animal observe an outcome (no response leads to no shock) that is inconsistent with its expectations.

Herrnstein and Hineline (1966)

-experiment in which neither an external stimulus nor the passage of time could serve as a reliable signal that a shock was approaching
-By pressing a lever, a rat could switch from a schedule that delivered shocks at a rapid rate to one that delivered shocks at a slower rate. For example, in one condition there was a 30% chance of shock if the rat had not recently pressed the lever, but only a 10% chance if the rat had recently pressed the lever. Obviously, to reduce the number of shocks, the animal should remain on the 10% schedule as much as possible.
-However, the key feature of this procedure was that pressing the lever did not ensure any amount of shock-free time. Sometimes, just by chance, a rat would press the lever and get a shock from the 10% schedule almost immediately.
-found that 17 of their 18 rats eventually acquired the avoidance response
-concluded (1) that animals can learn an avoidance response when neither an external CS nor the passage of time is a reliable signal for shock, and (2) that to master this task, animals must be sensitive to the average shock frequencies when they respond and when they do not respond
-Herrnstein reasoned that if the rats were sensitive to these two shock frequencies in this procedure, then it is a needless complication to assume that fear or aversion to a CS controls the avoidance responses in the typical avoidance experiment: Why not simply assume that a reduction in shock frequency is the reinforcer for the avoidance response? For this reason, the one-factor theory of avoidance is sometimes called the shock-frequency reduction theory.

Rescorla and LoLordo (1965)

-first trained dogs on an avoidance task in a shuttle box, where jumping into the other compartment avoided a shock
-Then the dogs received conditioning trials in which a tone was paired with shock.
-Finally, the dogs were returned to the avoidance task, and occasionally the tone was presented (but no longer followed by shock). Whenever the tone came on, the dogs dramatically increased their rates of jumping over the barrier. This result shows that a stimulus that is specifically trained as a CS for fear can amplify ongoing avoidance behavior.

O'Leary, Kaufman, Kass, and Drabman (1970)

-found that the manner in which a reprimand is given is a major factor determining its effectiveness
-use "soft" or private reprimands whenever possible (minimizes the attention given)

response cost

-in a token system, the loss of tokens, money, or other conditioned reinforcers following the occurrence of undesirable behaviors
-Token systems that include a response-cost arrangement have been used with children, prison inmates, and patients in mental institutions.

response blocking

-involves presenting the signal that precedes shock but preventing the subject from making the avoidance response
-For example, Page and Hall (1953) conducted an avoidance experiment in which rats learned to avoid a shock by running from one compartment to another. After the response was learned, one group of rats received normal extinction trials. A second group had the extinction trials preceded by five trials in which a rat was retained in the first compartment for 15 seconds, with the door to the second compartment closed. Thus these rats were prevented from making the avoidance response, but unlike in the acquisition phase, they received no shocks in the first compartment. Page and Hall found that extinction proceeded much more rapidly in the response-blocking group.
-The other term for this procedure, flooding, connotes the fact that subjects are "flooded" with exposure to the stimulus that used to precede shock.
-There is considerable evidence that response blocking is an effective way to speed up the extinction of avoidance responses.
-Cognitive theory offers a simple explanation of why response blocking works. The subject is forced to observe a set of events--no response followed by no shock--that does not match the animal's expectation that no response will be followed by shock. A new expectation, that no shock will follow no response, is gradually formed; as a result, avoidance responses gradually disappear.
-Two-factor theory explains that the forced exposure to the CS produces extinction of the conditioned fear or aversion.
-An explanation based on one-factor theory might proceed as follows: Normal avoidance extinction is slow because there is nothing to signal the change from acquisition to extinction conditions. However, the procedure of response blocking introduces a drastic stimulus change: There is now a closed door that prevents the animal from entering the other compartment. This change in stimuli gives the subject a cue that things are now different from the preceding acquisition phase. It is not surprising that subsequent extinction proceeds more quickly.

Hiroto & Seligman (1975)

-learned helplessness experiment
-College students were first presented with a series of loud noises that they could not avoid. They were then asked to solve a series of anagrams. These students had much greater difficulty solving the problems than students who were not exposed to unavoidable noises.
-Early experience with uncontrollable aversive events produces a sense of helplessness that carries over into other situations, leading to learning and performance deficits.

factors influencing the effectiveness of punishment

-manner of introduction ...If one's goal is to obtain a large, permanent decrease in some behavior, then Azrin and Holz (1966) recommended that the punisher be immediately introduced at its full intensity. ....A given intensity of punishment may produce a complete cessation of behavior if introduced suddenly, but it may have little or no effect on behavior if it is gradually approached through a series of successive approximations -immediacy of punishment ...Just as the most effective reinforccer is one that is delivered immediately after the operant response, a punisher that immediately follows a response is most effective in decreasing the frequency of the response. ...The more immediate the punishment, the greater the decrease in responding (orderly relationship between punishment delay and response rate). -schedule of punishment ...Like positive reinforcers, punishers need not be delivered after every occurrence of a behavior. Azrin and Holz concluded, however, that the most effective way to eliminate a behavior is to punish every response rather than to use some intermittent schedule of punishment. ...smaller FR punishment schedule = greater decrease in responding ...The schedule of punishment can affect the patterning of responses over time as well as the overall response rate. When Azrin (1956) superimposed a fixed-interval (FI) 60-second schedule of punishment on a VI 3-minute schedule of food reinforcement, he found that pigeons' response rates declined toward zero as the end of each 60-second interval approached. In other words, the effect of the FI schedule of punishment (a decelerating pattern of responding) was the opposite of that typically found with FI schedules of reinforcement (an accelerating pattern of responding). -motivation to respond ...Azrin and Holz noted that the effectiveness of a punishment procedure is inversely related to the intensity of the subject's motivation to respond. 
...Attempt to discover what reinforcer is maintaining the behavior, and decrease the value of that reinforcer. -reinforcement of alternative behaviors ...Based on their research with animals, Azrin and Holz concluded that punishment is much more effective when the individual is provided with an alternative way to obtain the reinforcer. -punishment as a discriminative stimulus ...Besides having aversive properties, a punisher can sometimes also function as a discriminative stimulus; that is, a signal predicting the availability of other stimuli, either pleasant or unpleasant. ...Because self-injurious behaviors often bring the individual the reinforcers of sympathy and attention, the aversive aspects of this type of behavior (pain) may serve as discriminative stimuli signaling that reinforcement is imminent.

situations in which learned helplessness is observed

-patients suffering from long-term illnesses -women who have been victims of domestic violence -elderly people who have difficulty coping with their problems -the work efficiency and satisfaction of industrial employees

time-out

-probably the most common form of negative punishment -One or more desirable stimuli are temporarily removed if the individual performs some unwanted behavior. -In one case study, time-out was combined with reinforcement for alternative behaviors to eliminate the hoarding behavior of a patient in a psychiatric hospital. This case study illustrates what researchers call an ABAB design. Each "A" phase is a baseline phase in which the patient's behavior is recorded, but no treatment is given. Each "B" phase is a treatment phase. Stan was an adult with a brain injury, and he frequently hoarded such items as cigarette butts, pieces of dust and paper, food, and small stones by hiding them in his pockets, socks, or underwear. In the initial 5-day baseline phase, the researchers observed an average of about 10 hoarding episodes per day. This was followed by a treatment phase (days 6 through 15) in which Stan was rewarded for two alternative behaviors--collecting baseball cards and picking up trash and throwing it away properly. During this phase, any episodes of hoarding were punished with a time-out period in which Stan was taken to a quiet area for 10 seconds. The number of hoarding episodes decreased during this treatment phase. In the second baseline phase, the treatment was discontinued; during these 4 days, Stan's hoarding behavior increased. Finally, in the second treatment phase, the time-outs and reinforcement for alternative behaviors resumed, and Stan's hoarding gradually declined and eventually stopped completely.

Robert Bolles (1970)

-proposed that animals exhibit a type of preparedness in avoidance learning -In this case, the preparedness does not involve a stimulus-stimulus association, but rather a propensity to perform certain behaviors in a potentially dangerous situation. -He was highly critical of traditional theories of avoidance learning, especially two-factor theory. He suggested that a two-factor account of avoidance learning in the wild might go as follows: A small animal in the forest is attacked by a predator and is hurt but manages to escape. Later, when the animal is again in this part of the forest, it encounters a CS--a sight, sound, or smell that preceded the previous attack. This CS produces the response of fear, and the animal runs away to escape from the CS, and it is reinforced by a feeling of relief. According to the two-factor account, then, avoidance behavior occurs because animals learn about signals for danger (CSs) and then avoid those signals. He claimed, however, that this account is "utter nonsense.... Thus, no real-life predator is going to present cues just before it attacks. No owl hoots or whistles 5 seconds before pouncing on a mouse. And no owl terminates its hoots or whistles just as the mouse gets away so as to reinforce the avoidance response. Nor will the owl give the mouse enough trials for the necessary learning to occur. What keeps our little friends alive in the forest has nothing to do with avoidance learning as we ordinarily conceive of it or investigate it in the laboratory... What keeps animals alive in the wild is that they have very effective innate defensive reactions which occur when they encounter any kind of new or sudden stimulus." -Bolles called these innate behavior patterns species-specific defense reactions (SSDRs). As the name implies, SSDRs may differ across species, but Bolles suggested that they usually fall into one of three categories: freezing, fleeing, and fighting (adopting an aggressive posture and/or behaviors).
-Bolles proposed that in laboratory studies of avoidance, an avoidance response will be quickly learned if it is identical with or at least similar to one of the subject's SSDRs. If the required avoidance response is not similar to an SSDR, the response will be learned slowly or not at all. To support this hypothesis, Bolles noted that rats can learn to avoid a shock by jumping or running out of a compartment in one or only a few trials. The rapid acquisition presumably reflects the fact that for rats, fleeing is a highly probable response to danger. However, it is very difficult to train a rat to avoid shock by pressing a lever, presumably because this response is unlike any of the creature's typical responses to danger. -Fanselow (1997) has argued that the basic principle of negative reinforcement (which states that any response that helps to avoid an aversive event will be strengthened) is not especially useful when SSDRs take over: Even a simple response such as pressing a lever or pecking a key may be difficult for the animal to learn.

Lovibond (2006)

-proposed that individuals can learn more detailed expectations that include information about the three parts of the three-term contingency (discriminative stimulus, operant response, and consequence) -For instance, an individual might learn that in the presence of one warning signal, a specific response will avoid one type of aversive event, but if another warning signal occurs, a different avoidance response is required to avoid a different aversive event. -Research with college students by Declercq, De Houwer, and Baeyens (2008) has found that they can and do develop these more elaborate three-part expectations in avoidance tasks.

examples of negative punishment

-response cost -time-out

behavioral decelerator

-sometimes used to refer to any technique that can lead to a slowing, reduction, or elimination of unwanted behaviors -the two most obvious are punishment and omission (punishment of both "voluntary" and "involuntary" (reflexive) behaviors) -overcorrection ...In some cases, if an individual performs an undesired behavior, the parent, therapist, or teacher requires several repetitions of an alternate, more desirable behavior. ...often involves two elements: restitution (making up for the wrongdoing) and positive practice (practicing a better behavior) ...The corrective behavior is usually designed to require more time and effort than the original bad behavior. ...e.g. aggression against a sibling -> apology + sharing a toy ...meets the technical definition of a punishment procedure, but the learner is given repeated practice performing a more desirable behavior -extinction ...If an undesired behavior occurs because it is followed by some positive reinforcer, and if it is possible to remove that reinforcer, the behavior should eventually disappear through simple extinction. ...One of the most common reinforcers maintaining unwanted behaviors is attention. ...Ducharme and Van Houten (1994) noted that extinction is sometimes slow, especially if the unwanted behavior has been intermittently reinforced in the past. In addition, the unwanted behaviors sometimes increase rather than decrease at the beginning of the extinction process (e.g. tantrums). ...Spontaneous recovery may occur. Nevertheless, when used properly, extinction can be a very useful method of eliminating unwanted behaviors. ...One of the most effective ways to use extinction is to combine it with the reinforcement of other, more desirable behaviors. -response blocking ...For behaviors that are too dangerous or destructive to wait for extinction to occur, an alternative is physically restraining the individual to prevent the inappropriate behavior. ...can have both short-term and long-term benefits: 1.
By preventing the unwanted behavior, immediate damage or injury can be avoided. 2. As the individual learns that the behavior will be blocked, attempts to initiate this behavior usually decline. -differential reinforcement of alternative behavior ...A classic study by Ayllon and Haughton (1964) offers a good illustration of how extinction of inappropriate behaviors can be combined with reinforcement of more appropriate behaviors. ...Ayllon and Haughton worked with patients in a psychiatric hospital who engaged in psychotic or delusional speech. They found that this inappropriate speech was often reinforced by the psychiatric nurses through their attention, sympathy, and conversation. ...Ayllon and Haughton conducted a two-part study. ...In the first part, nurses were explicitly instructed to reinforce psychotic speech with attention and tangible items (gum, candy, etc.). Psychotic speech increased steadily during this part of the study. ...In the second phase, the nurses were told to ignore psychotic speech but to reinforce normal speech (e.g. conversations about the weather, ward activities, or other everyday topics). During this phase, psychotic speech declined and normal speech increased. ...This study demonstrated both the power of attention as a reinforcer and how attention can be withheld from inappropriate behaviors and delivered for more desirable alternative behaviors. -stimulus satiation ...If it is not feasible to remove the reinforcer that is maintaining an undesired behavior, it is sometimes possible to present so much of the reinforcer that it loses its effectiveness through stimulus satiation. ...Ayllon (1963) described a female psychiatric patient who hoarded towels in her room. A program of stimulus satiation was begun in which the nurses brought her many towels each day. At first, she seemed to enjoy touching, folding, and stacking them, but soon she started to complain that she had enough and that the towels were in her way.
The nurses then stopped bringing her towels, and afterward, no further instances of hoarding were observed.

one-factor theory

-states that the classical conditioning component of two-factor theory is not necessary -There is no need to assume that escape from a fear-eliciting CS is the reinforcer for an avoidance response, because, contrary to the assumptions of two-factor theory, avoidance of a shock can in itself serve as a reinforcer. An experiment by Murray Sidman illustrates this point. -Sidman (1953) developed an avoidance procedure that is now called either the Sidman avoidance task or simply free-operant avoidance. In this procedure, there is no signal preceding shock, but if the subject makes no responses, the shocks occur at perfectly regular intervals. For instance, in one condition of Sidman's experiment, a rat would receive a shock every 5 seconds throughout the session if it made no avoidance response. However, if the rat made an avoidance response (pressing a lever), the next shock did not occur until 30 seconds after the response. Each response postponed the next shock for 30 seconds. By responding regularly (say, once every 20 to 25 seconds), a rat could avoid all the shocks. In practice, Sidman's rats did not avoid all the shocks, but they did respond frequently enough to avoid many of them. -On the surface, these results seem to pose a problem for two-factor theory because there is no signal before a shock. If there is no CS to elicit fear or aversion, why should an avoidance response occur? Actually, two-factor theorists had a simple answer to this question, as Sidman himself realized. The answer is that although Sidman provided no external stimulus, the passage of time could serve as a stimulus because the shocks occurred at regular intervals. That is, once a rat was familiar with the procedure, its fear might increase more and more as time elapsed without a response. The rat could associate fear with the stimulus "a long time since the last response," and it could remove this stimulus (and the associated fear) by making a response. 
-To make a stronger case for one-factor theory, we need an experiment in which neither an external stimulus nor the passage of time could serve as a reliable signal that a shock was approaching: Herrnstein and Hineline's 1966 experiment met these requirements.
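The timing contingency in Sidman's free-operant avoidance procedure can be sketched as a short simulation. This is a hypothetical illustration (not Sidman's apparatus) using the parameters described above: a shock-shock interval of 5 seconds and a response-shock interval of 30 seconds; the response times in the examples are invented.

```python
# Minimal sketch of free-operant (Sidman) avoidance timing, assuming:
#   - with no responses, shocks occur every ss_interval seconds
#   - each response postpones the next shock to rs_interval seconds
#     after that response

def simulate_sidman(response_times, session_length, ss_interval=5, rs_interval=30):
    """Return the times (in seconds) at which shocks are delivered."""
    events = sorted(response_times) + [float("inf")]  # sentinel ends the list
    resp_index = 0
    next_shock = ss_interval          # first shock if the subject never responds
    shocks = []
    while next_shock <= session_length:
        if events[resp_index] < next_shock:
            # A response occurs before the scheduled shock: postpone it.
            next_shock = events[resp_index] + rs_interval
            resp_index += 1
        else:
            shocks.append(next_shock)
            next_shock += ss_interval  # shock-shock interval resumes
    return shocks

# Responding at least once every 30 s avoids all shocks:
print(simulate_sidman([2, 25, 50], session_length=60))   # → []
# No responses: a shock every 5 s:
print(simulate_sidman([], session_length=20))            # → [5, 10, 15, 20]
```

The simulation makes the key point concrete: there is no external warning signal anywhere in the schedule, yet regular responding (here, more often than once per 30 seconds) eliminates every shock.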

Reisel and Kopelman (1995)

-used 3 years' worth of data from all teams in the National Football League -results supported the theory of learned helplessness -They found that (1) if a team was badly beaten in one game, the team tended to perform worse than expected in the next game, and (2) this was especially true if the team faced a difficult opponent in the next game. (According to the theory, learned helplessness should indeed be most pronounced when the upcoming task appears insurmountable.)

prolonged exposure therapy

A patient with PTSD or OCD receives long-duration exposure to stimuli (either real or in their imaginations) that elicit their anxiety reactions.

flooding procedure

A therapist starts immediately with a highly feared stimulus and forces the patient to remain in the presence of this stimulus until the patient's external signs of fear subside.

A stimulus that is specifically trained as a __ for fear can amplify ongoing avoidance behavior.

CS

Contrary to the predictions of _____ and _____, research results suggest that the effects of punishment are ________ to those of reinforcement.

Thorndike Skinner directly opposite

one-factor theory's explanation for the slow extinction of avoidance responses

We have seen that once an avoidance response is acquired, the animal may avoid every scheduled shock by making the appropriate response. Now suppose that at some point the experimenter turns off the shock generator. From the animal's perspective, the subsequent trials will appear no different from the previous trials: The stimulus comes on, the subject responds, the stimulus goes off, no shock occurs. Since the animal can discriminate no change in the conditions, there is no change in behavior either, according to this reasoning.

Punishment has the opposite effect on behavior as positive reinforcement. Reinforcement produces ________, and punishment produces ______. Whether punishment is indeed the opposite of reinforcement is an empirical question, however, and such illustrious psychologists as _____ and _____ have concluded that it is not.

an increase in behavior a decrease in behavior Thorndike Skinner

Experiments supporting two-factor theory have shown that the signal for shock in a typical avoidance situation does indeed develop ______, and that animals can learn a new response that terminates the signal.

aversive properties

The category of negative reinforcement also includes instances of ____ in which a response prevents an unpleasant stimulus from occurring in the first place.

avoidance

Findings suggest that learned helplessness is a very ______ of ______, one that can be found in a wide range of species and even in the more primitive regions of the nervous system.

basic phenomenon associative learning

Seligman and Johnston (1973) developed a _______ that they suggested was superior to both two-factor and one-factor theories.

cognitive theory of avoidance

Many psychologists believe that learned helplessness can contribute to _______.

depression

Herrnstein and Hineline (1966) concluded: (1) that animals can learn an avoidance response when neither an ____ nor _____ is a reliable signal for shock, and (2) that to master this task, animals must be sensitive to the _____ when they respond and when they do not respond.

external CS the passage of time average shock frequencies

One of the most noteworthy features of avoidance behavior is its extreme resistance to _____.

extinction

The main difference between flooding and systematic desensitization is that, with flooding, the _______________ is eliminated.

hierarchy of fearful events or stimuli

As with positive punishment, omission procedures are most effective if the omission occurs...

immediately after the undesired behavior, every time the behavior occurs.

Molar theories assume that behavior is controlled by ___________, whereas molecular theories assume that ________ consequences are important.

long-term relationships between behavior and its consequences moment-to-moment

Because _______ is a means of reducing behavior without using an aversive stimulus, it has become a popular tool in behavior modification.

negative punishment

With ______, a behavior increases in frequency if some stimulus is removed after the behavior occurs.

negative reinforcement

The term _____ is often used instead of negative punishment.

omission

Extinction can be sped up by using a procedure called ______, or ______.

response blocking flooding

One-factor theory of avoidance is sometimes called the _____ theory.

shock-frequency reduction

A more recent cognitive theory proposed by Lovibond (2006) maintains that individuals can learn more detailed expectations that include information about...

the three parts of the three-term contingency (discriminative stimulus, operant response, and consequence).

The term flooding connotes the fact that subjects are "flooded" with exposure to the stimulus that ___________.

used to precede shock

