5023 Unit 4


5. Describe Premack's 1962 experiment. The devil is in the details, so write your response carefully and study the details!

1) Arrange the EO: Premack first deprived rats of water for 23 hours and then measured their behavior in a setting in which they could run on an activity wheel or drink water. The rats spent more time drinking than running.
2) Arrange contingency 1: Next, a contingency was arranged between running and drinking. The rats received a few seconds of access to the drinking tube when they ran on the wheel. Running on the wheel increased when it produced the opportunity to drink water, showing that drinking reinforced running.
3) Reverse the EO: Premack then gave the rats free access to water. When the rats were allowed to choose between drinking and running, they did little drinking and a lot of running. Premack reasoned that running would now reinforce drinking, because running occurred at a higher frequency than drinking.
4) Arrange contingency 2: The running wheel was then locked, and the brake was released if the rats licked the water tube for a few seconds. Under this contingency, Premack showed that drinking increased when it produced the opportunity to run.
Overall, this experiment shows that drinking reinforces running when rats are motivated to drink, and that running reinforces drinking when running is the preferred activity.
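
The core prediction can be made concrete with a small sketch (hypothetical numbers, not Premack's actual data): whichever behavior occupies more of the free-access baseline is predicted to reinforce the one that occupies less, and the relation reverses when the motivating operation changes.

```python
# Sketch of the Premack (1962) logic using hypothetical baseline data.
# Behaviors are ranked by the time they occupy under free access; the
# higher-probability behavior is predicted to reinforce the lower one.

def predicted_relation(baseline_seconds):
    """Return (reinforcer, reinforced) as predicted by the Premack principle."""
    ranked = sorted(baseline_seconds, key=baseline_seconds.get, reverse=True)
    return ranked[0], ranked[1]

# Water-deprived rat: drinking dominates the baseline.
deprived = {"drinking": 240, "running": 60}
# Free access to water: running now dominates.
sated = {"drinking": 30, "running": 300}

print(predicted_relation(deprived))  # ('drinking', 'running')
print(predicted_relation(sated))     # ('running', 'drinking')
```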

The Model Experiment described on pages 101-103 is useful for following many of the experiments that appear throughout this book and course. Take time to read it thoroughly and to understand how cumulative recorder data are analyzed.

1) Deprivation procedure: Give the organism free access to food and water and measure its free-feeding (baseline) weight. Then limit food until the organism reaches 80% of its free-feeding weight.
2) Magazine training: Pair a click and the sound of the hopper with the presentation of food from the magazine.
3) Baseline: Record key pecks before any food reinforcement to establish the operant level.
4) Shaping: Shape behavior so that the pigeon pecks a specific key to get food. The first instance of this final behavior is the first definable response. Reinforce it on a CRF schedule.
Cumulative recorder data show a low response rate when the pigeon is first placed in the chamber, because of the abrupt change from the home cage to the operant chamber. The response rate then increases and eventually plateaus as the pigeon becomes satiated with food. An extinction procedure shows a decline of the response toward operant level.
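
To make the cumulative-record analysis concrete, here is a minimal sketch (with made-up response times) that builds a cumulative record from timestamps. The slope of the record at any point is the local rate of response, so a steep segment means fast responding and a flat segment (zero slope) corresponds to satiation or extinction.

```python
# Build a cumulative record from response timestamps (hypothetical data).
# On a cumulative recorder each response steps the pen up one unit, so
# the local slope of the record is the local rate of response.

import matplotlib.pyplot as plt

# Hypothetical key-peck times (s): slow start after being placed in the
# chamber, steady responding, then a plateau approaching satiation.
times = [40, 70, 90, 100, 104, 108, 112, 116, 120, 124, 128, 132, 300, 560]
counts = range(1, len(times) + 1)  # cumulative response count

plt.step(times, counts, where="post")
plt.xlabel("Time (s)")
plt.ylabel("Cumulative responses")
plt.title("Cumulative record: steep slope = high response rate")
plt.show()
```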

11. Describe the procedure used to shape pigeon pecking behavior. What is the role of behavioral variability in shaping?

1) First, a bird is placed alone in a cage and given free access to food and water. The bird is weighed each day and its baseline weight is calculated. Next, the daily food ration is reduced until the bird reaches approximately 80% of its free-feeding weight. After this food deprivation procedure, the bird is placed in the operant chamber for magazine training.
2) At first the bird may show a variety of emotional responses, including wing flapping and defecating, because the novel features of the chamber initially function as aversive stimuli. These responses extinguish with repeated exposure to the chamber. Over time, the bird explores its environment and begins to eat from the food magazine. The sound of the feeder opening becomes a conditioned reinforcer. At this point, the bird is said to be magazine trained.
3) Next, a baseline (operant level) of key pecking is measured by recording pecks on the key before a peck-food contingency is established.
4) Finally, key pecking is trained by reinforcing successive approximations to the final performance. As each approximation occurs, it is reinforced with the sound of the feeder and the presentation of food; earlier approximations are no longer reinforced and decrease in frequency. Behavioral variability is important because it allows the bird to emit varied responses, some of which are closer to the final response. This process eventually results in the bird pecking the desired key.
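
The role of variability in shaping can be illustrated with a toy simulation (all parameters hypothetical): responses vary around the most recently reinforced value, and only responses meeting the current criterion are reinforced, so the variability around each reinforced approximation supplies the raw material for the next one.

```python
# Toy simulation of shaping by successive approximation. A response
# dimension (e.g., peck location on a scale of 0-100) varies around the
# last reinforced value; only responses at or beyond the current
# criterion are reinforced, and the criterion then steps toward the
# target. Without variability, no response would ever exceed the mean
# and shaping could not proceed.

import random

random.seed(1)

target = 100.0         # location of the key to be pecked
behavior_mean = 10.0   # where responding starts
criterion = 15.0       # first approximation to reinforce

for trial in range(500):
    response = random.gauss(behavior_mean, 8.0)  # behavioral variability
    if response >= criterion:                    # meets the approximation
        behavior_mean = response                 # reinforced value recurs
        criterion = min(target, criterion + 5)   # raise the criterion
    if behavior_mean >= target:
        print(f"target behavior shaped by trial {trial}")
        break
```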

18. Discuss operant spontaneous recovery. In what ways is spontaneous recovery really not spontaneous at all? Provide an example of your own of "spontaneous recovery".

After a period of extinction, an organism's rate of response may be close to operant level. After some time passes, the organism is again placed in the setting and extinction is continued. Responding initially recovers (goes above operant level), but over repeated sessions of extinction the amount of recovery decreases. Spontaneous recovery is not really spontaneous because stimuli that have accompanied reinforced responding are usually presented at the beginning of extinction sessions. The procedures and stimulation that arise when the organism is placed in a setting that previously signaled the availability of reinforcement set the occasion for responding. An example of my own "spontaneous recovery" occurred when my phone rang but all I heard was a dial tone when I answered. The next time my phone rang, I answered and again heard only a dial tone. This continued multiple times throughout the day until I finally stopped answering the phone. Recovery occurred the next day, when I answered my phone when it rang again "just to see" if someone was on the other end of the line.

16. After 50 to 80 reinforced responses, an organism is likely to exhibit resistance to extinction. The PRE will cement this into place. Describe a situation in the home that might produce this kind of difficult and inflexible responding. How does Nevin (1988) account for this effect?

An example in the home that might produce difficult and inflexible responding can be seen in a child who receives attention for self-injurious behavior on an intermittent schedule of reinforcement. There may be times when the behavior is ignored and other times when it is reinforced with attention. Nevin (1988) indicates that the partial reinforcement effect (PRE) is the result of reinforcement and discrimination. Organisms show less resistance to extinction for behaviors reinforced on a CRF schedule than for behaviors reinforced on an intermittent schedule, because an organism can discriminate the difference between a high and steady rate of reinforcement (CRF) and no reinforcement (extinction) more easily than the difference between a low, intermittent rate of reinforcement and no reinforcement.
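
Nevin's discrimination account can be given a simple worked form (illustrative numbers only): if training reinforces each response with probability p, a run of n unreinforced responses has probability (1 - p)^n, so the shift to extinction is detectable almost immediately after CRF but only after many responses on a lean intermittent schedule.

```python
# Worked example of the discrimination account of the PRE. Under a
# training schedule that reinforces each response with probability p,
# a run of n unreinforced responses has probability (1 - p)**n. Here
# extinction counts as "discriminable" once such a run would have been
# very unlikely (below 5%) under the training schedule.

from math import ceil, log

def runs_to_detect_extinction(p, alpha=0.05):
    """Smallest run of unreinforced responses improbable under training."""
    if p >= 1.0:
        return 1  # CRF: a single unreinforced response never occurred
    return ceil(log(alpha) / log(1 - p))

print(runs_to_detect_extinction(1.0))   # CRF            -> 1
print(runs_to_detect_extinction(0.2))   # ~1 in 5 (VR 5) -> 14
print(runs_to_detect_extinction(0.05))  # ~1 in 20       -> 59
```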

14. How might an investigator produce response differentiation in the lab? How might a teacher do so in the classroom?

An investigator in the lab might produce response differentiation by reinforcing an animal with food only when the animal applies a specific amount of pressure to a key; other amounts of pressure would not be reinforced. A teacher in the classroom might produce response differentiation by calling on students who raise their hands quietly and not on students who are waving their hands around and making noise.

20. Discuss extinction and forgetting behavior. Describe Skinner's 1938 experiment on extinction and forgetting.

Extinction is a procedure in which a previously reinforced response no longer produces reinforcement; the opportunity to emit the operant remains available during extinction. In contrast, forgetting is said to occur after the mere passage of time: an organism that has learned a response is tested for retention after some amount of time has passed, and there is no apparent opportunity to emit the behavior outside of the retention test. Skinner (1938) designed an experiment to assess the behavioral loss that occurs after the passage of time. Four rats (group 1) were trained to press a lever, and each received 100 reinforced responses. After 45 days of rest, each animal was placed in an operant chamber and responding was extinguished. The number of responses emitted during extinction was compared with the performance of four other rats (group 2) that received extinction 1 day after reinforced bar pressing. Results showed that group 1 made an average of 69 responses in one hour while group 2 made an average of 86 responses in one hour during the extinction phase. Both groups showed similar response rates during the first minutes of extinction, which shows that group 1 had not forgotten what to do to get food. Overall, the results suggest that the passage of time affects resistance to extinction (the shorter the interval, the stronger the resistance), but a well-established performance is not forgotten.

8. Kobayashi, Schultz, & Sakagami (2010) showed that individual neurons responded to operant conditioning procedures; this adds to the evidence that changes in environment produce changes in neurons and neural connections. Describe the experiment these investigators performed.

It is possible to investigate reinforcement at the level of the neuron using the method of in-vitro reinforcement (IVR). The idea is that calcium bursts, or firings of a neuron, are reinforced by dopamine binding to specialized receptors. The process of neuronal conditioning can be investigated "in vitro" using brain-slice preparations and drug injections that stimulate the dopamine receptor. In these IVR experiments, a small injector tube is aimed at cells of the brain slice. During operant conditioning, micro-pressure injections of dopamine are applied to the cell following bursts of activity: when the computer identifies a predefined burst of activity from the target neuron, the pressure injection pump delivers a minute droplet of dopamine to the cell.
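
The closed-loop contingency in the IVR procedure can be sketched as follows (the burst criterion, timings, and the deliver_dopamine stub are all hypothetical, not the investigators' actual parameters): the computer watches the spike stream, and when a predefined burst occurs, the pump is triggered.

```python
# Sketch of the closed-loop IVR contingency: when the recorded neuron
# emits a predefined burst (here, >= 3 spikes within 50 ms), the pump
# delivers a micro-droplet of dopamine. All values are hypothetical.

from collections import deque

BURST_SPIKES = 3      # spikes required to count as a burst
BURST_WINDOW = 0.050  # seconds

def deliver_dopamine():
    print("pressure injection: dopamine droplet delivered")

recent = deque()

def on_spike(t):
    """Called with the timestamp (in seconds) of each detected spike."""
    recent.append(t)
    while recent and t - recent[0] > BURST_WINDOW:
        recent.popleft()       # keep only spikes inside the window
    if len(recent) >= BURST_SPIKES:
        deliver_dopamine()     # reinforce the burst
        recent.clear()         # one injection per burst

for t in [0.010, 0.020, 0.030, 0.500, 0.900, 0.910, 0.915]:
    on_spike(t)  # two bursts -> two injections
```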

13. Under what conditions does operant variability increase? How would you produce operant variability in your work with kiddos with autism?

Operant variability increases during extinction and when the latency to reinforcement (the delay before reinforcement appears) is increased. One way to produce operant variability in work with children with autism would be to place a particular behavior on extinction, which induces variability in responding, and then reintroduce reinforcement.

6. I normally don't like doing dishes, but when I've got lots of assignments to work on, I don't get to perform dishwashing at my free-operant baseline level. At this time, the opportunity to wash dishes will serve as a reinforcer for other behavior. What behavioral principle does this reflect? How might you use this in your work? How might you teach parents to use this principle with children at home?

The behavioral principle reflected is the Premack principle: when access to a behavior is restricted below its free-operant baseline level, the opportunity to engage in that behavior will function as reinforcement for other behavior. In practice, access to a preferred (higher-frequency) activity can be made contingent on completion of a less preferred (lower-frequency) activity. Parents could be taught to apply the same principle at home, for example by allowing a child access to play only after homework or chores are finished.

19. What is reinstatement? Discuss reinstatement in the case of the recovering drug user.

Reinstatement is a kind of response recovery that involves the recovery of behavior when the reinforcer is presented alone, independent of responding, after a period of extinction. Reinstatement can be observed in the treatment of drug addiction. After becoming addicted (acquisition), the addict may seek help, and treatment may involve drug withdrawal (extinction) in a therapeutic setting. When the client is returned to his former neighborhood and drug culture (the original setting), drugs may be available on a response-independent basis (e.g., freely offered by dealers or friends to get him hooked again). Free hits of the drug would activate the setting events that have set the occasion for obtaining and using drugs in the past, reinstating drug use.

17. How do SDs that remain in an environment after extinction procedures are in effect, affect the rate of conditioning? (NOTE: A common error in student writing is to misuse the words effect and affect. Learn the uses of these words!) How might un-programmed stimuli influence the rate at which a response is extinguished?

Resistance to extinction is affected by discriminative stimuli that are conditioned during sessions of reinforcement. When an SD that accompanied reinforced responding remains in the environment during extinction, it continues to set the occasion for the response, so responding declines more slowly and resistance to extinction increases. Un-programmed stimuli that happen to be present during reinforcement sessions can acquire a similar discriminative function, and their presence during extinction likewise slows the rate at which the response is extinguished.

2. Define SD and SΔ precisely.

SD (discriminative stimulus): an event or stimulus that precedes an operant and sets the occasion for operant behavior; in its presence, responding produces reinforcement. SΔ (S-delta): when an operant does not produce reinforcement, the stimulus that precedes the operant is called an S-delta; in the presence of an SΔ, the probability of emitting the operant declines.

12. Discuss Schwartz's (1980, 1982a) findings regarding reinforcement and stereotypy. How did Page & Neuringer (1985) reveal weaknesses in Schwartz's experimental preparation? How do these investigators interpret their own and Schwartz's findings? Taken together, how might we interpret the role of reinforcement on flexible and inflexible problem-solving behavior?

Schwartz's experiments showed that reinforcement produced a set pattern of responding that occurred over and over again (response stereotypy). He concluded that reinforcement interfered with problem solving because it produced stereotyped response patterns (e.g., emitting only one response sequence even though multiple sequences could solve the problem). Page and Neuringer revealed that the contingencies of Schwartz's experiments, not reinforcement itself, produced the response stereotypy; stereotypy is not an inevitable outcome of reinforcement. In the Schwartz experiments, response patterns were constrained by the requirement to emit exactly 4 pecks on each key in any order. This constraint meant that only 70 patterns could result in reinforcement, and a 5th peck on either key produced a time-out from reinforcement. This time-out punished response variability. Page and Neuringer interpret their own results by concluding that variability is an operant dimension regulated by the contingency of reinforcement: variability increases when behavioral variation is reinforced. Reinforcement is important to the flexibility of problem-solving behavior because contingencies that reinforce variability (rather than stereotypy) may generate novel, creative sequences of behavior for problem solving.
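
Two details here can be made concrete in code (a sketch, not the investigators' actual apparatus): the number of reinforceable patterns under Schwartz's exactly-4-pecks-per-key constraint is C(8, 4) = 70, and the lag contingency Page and Neuringer used reinforces a sequence only if it differs from the previous N sequences.

```python
# Schwartz's constraint: exactly 4 left and 4 right pecks in a sequence
# of 8, so the number of reinforceable patterns is C(8, 4) = 70.
from math import comb
from collections import deque

print(comb(8, 4))  # 70

# Lag-N contingency (sketch): reinforce a sequence of pecks only if it
# differs from each of the last N sequences the bird emitted.
def make_lag_checker(n):
    history = deque(maxlen=n)
    def reinforce(sequence):
        eligible = sequence not in history
        history.append(sequence)
        return eligible
    return reinforce

lag1 = make_lag_checker(1)
print(lag1("LLRRLLRR"))  # True  (no prior sequence to repeat)
print(lag1("LLRRLLRR"))  # False (repeats the last sequence)
print(lag1("LRLRLRLR"))  # True  (differs from the last sequence)
```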

3. Discuss the case against using reinforcement (e.g., Deci, Koestner, & Ryan, 1999). How is this argument illogical? How have behavior analysts (i.e., Cameron and colleagues' multiple studies on the topic) responded to the argument?

Social psychologists and educators have been critical of the practice of using rewards in business, education, and behavior modification programs. The concern is that rewards (the terms "reward" and "reinforcement" are often used interchangeably in this literature) are experienced as controlling, thereby leading to a reduction in an individual's self-determination, intrinsic motivation, and creative performance. Thus, when a child who enjoys drawing is rewarded for drawing with praise or a tangible reward, the child's motivation to draw is said to decrease; from this perspective, the child will come to draw less and enjoy it less once the reward is discontinued. One common-sense reason for doubting this claim is that rewards are not given to criminals for illegal behavior in the hope of undermining their intrinsic motivation to commit crime: rewarding theft with additional monetary payments would not lead the thief to lose interest in criminal activities. Cameron and colleagues responded to this argument by conducting a meta-analysis of 145 experiments on rewards and intrinsic motivation. The findings indicated that rewards can be used effectively to enhance or maintain an individual's intrinsic interest in activities. Verbal rewards were found to increase people's performance and interest on tasks, and tangible rewards can be used to increase performance and interest for activities that are initially boring or uninteresting. For activities that people find inherently interesting, the meta-analysis points to the reward contingency as a major determinant of intrinsic motivation: rewards tied to high performance, achievement, and progressive mastery increase intrinsic motivation, perceived competence, and self-determination.

4. Using precise language, define the Premack principle. This principle describes a contingency between two sets of behaviors. How is this different from the three-term contingency with which you are more familiar?

The Premack principle states that a higher-frequency behavior will function as reinforcement for a lower-frequency behavior. This is different from the three-term contingency because the Premack principle proposes that reinforcement involves a contingency between operant behavior and reinforcing behavior, rather than between an operant and a stimulus. It is possible to describe reinforcing events as actions of the organism rather than discrete stimuli. Thus, reinforcement involves eating rather than the presentation of food, drinking rather than the provision of water, and reading rather than the effects of textual stimuli.

9. Describe the free operant method. Why is it that Discrete Trial Training cannot give us an indication of operant rate and response probability?

The free operant method is a method in which an organism may repeatedly respond over an extensive period of time. The organism is "free" to emit many responses or none at all, and these responses can be made without interference by the experimenter. Rate of response must be free to vary if it is to be used to index the future probability of operant behavior. Discrete Trial Training cannot give us an indication of operant rate and response probability because the teacher largely controls the rate of behavior: since the number of trials and response opportunities is set in DTT, changes in the rate of response cannot be directly observed and measured.
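
The contrast can be shown with simple arithmetic (hypothetical numbers): rate of response is count divided by time and is only meaningful when the organism can respond at any moment, whereas in DTT the count is capped by the number of trials the teacher presents.

```python
# Free operant: the organism may respond at any moment, so rate
# (responses per unit time) can vary and index response probability.
free_operant_pecks = 180
session_minutes = 30
print(free_operant_pecks / session_minutes, "responses/min")  # 6.0

# DTT: the teacher presents a fixed number of trials, so the count is
# capped at the trial count; we get a proportion correct, not a free
# rate of response (all numbers hypothetical).
dtt_trials = 20
dtt_correct = 18
print(dtt_correct / dtt_trials, "proportion correct, not a rate")
```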

10. What is the difference between operant level and CRF?

The operant level is the baseline rate of response, that is, the rate of response before any known conditioning. CRF (continuous reinforcement) is a schedule of reinforcement in which each response produces reinforcement.

1. An SD evokes a response, but there is no one-to-one relationship between SDs and responses. Explain.

The probability of emitting an operant in the presence of an SD may be very high, but these stimuli do not have a one-to-one relationship with the response that follows them. For example, a telephone ring increases the chances that you will emit the operant, answering the phone, but it does not force you to do so. This is in contrast to reflexes, which are tied to the physiology of an organism and, under appropriate conditions, always occur when the eliciting stimulus is presented.

7. Describe Thorndike's puzzle box experiments making sure to mention the unit of measurement Thorndike used. What was Skinner's objection to Thorndike's description of his findings as evidence of "trial-and-error learning"?

Thorndike placed cats, dogs, and chicks in situations in which they could obtain food by performing complex sequences of behavior. For example, hungry cats were confined to an apparatus that Thorndike called a puzzle box. Food was placed outside the box, and if the cat managed to pull out a bolt, step on a lever, or emit some other behavior, the door would open and the animal could eat the food. After some time in the box, the cat would accidentally pull the bolt or step on the lever and the door would open. Thorndike measured the time from closing the trap door until the cat managed to get it open. This measure, called latency, tended to decrease with repeated exposures to the box. Skinner's objection to Thorndike's description of his findings as evidence of "trial-and-error learning" was that simply measuring the time (latency) taken to complete a task misses changes that occur across several operant classes. Responses that resulted in escape and food were selected, while other behavior decreased in frequency. Eventually those operants that produced reinforcing consequences came to dominate the cat's behavior, allowing the animal to get out of the box in less and less time. Thus, latency was an indirect measure of change in the animal's operant behavior.

15. Why do we typically use reinforcement procedures alongside extinction in order to decelerate unwanted behavior?

We typically use reinforcement procedures alongside extinction when decelerating unwanted behavior in order to lessen the side effects of extinction (e.g., response bursting and aggression).

