PSY 3011: Chapter 6
Nevin's analogy to the momentum of a moving object
Behavioral Momentum
The subject must complete the requirement for two or more simple schedules in a fixed sequence, and each schedule is signaled by a different stimulus
Chained Schedules
The subject is presented with two or more response alternatives, each associated with its own reinforcement schedule
Concurrent Schedule
A written agreement that lists the duties (behaviors) required of each party and the privileges (reinforcers) that will result if the duties are performed
Contingency Contract
Each reinforcement schedule tends to produce its own characteristic pattern of behavior
Contingency-Shaped Behaviors
CRF
Continuous Reinforcement
A schedule in which a certain number of responses must occur within a fixed amount of time
Differential Reinforcement of High Rates Schedule
A response is reinforced if and only if a certain amount of time has elapsed since the previous response
Differential Reinforcement of Low Rates Schedule
Extinction proceeds more quickly the more easily the subject is able to discriminate the change in reinforcement contingencies
Discrimination Hypothesis
When a prompt is used, it is usually withdrawn gradually
Fading
According to Capaldi, there is a small generalization decrement when the schedule switches from CRF to extinction, because the subject has never experienced a situation in which its responses were not reinforced
False
Antecedent-based interventions focus on events that occur after the work is done, and they focus on such matters as providing appropriate worker training, clarifying tasks, and setting goals
False
As on FR schedules, there is a post-reinforcement pause on FI schedules, but after this pause, the subject usually starts by responding quite quickly
False
Behavior therapy as a method of treating autism has been shown to be ineffective
False
Behavioral momentum is independent of how frequently the behavior has been reinforced in the presence of a certain discriminative stimulus
False
Contingency-shaped behaviors are those in which the animal's behavior is gradually shaped into its final form as it gains more and more experience with different reinforcement schedules
False
DRL schedules produce very high rates of responding
False
Eventually, the post-reinforcement pause gives way to an abrupt decrease in responding
False
Extinction is slower after CRF than after a schedule of intermittent reinforcement
False
Games of chance all exhibit the two important characteristics of FR schedules: winning is disproportional to number of plays, and the number of responses required is uncertain
False
The IRT results clearly favor the molar approach, for they indicate that the animals were sensitive to the short-term consequences of their behavior, but not to the long-term consequences
False
If a VR schedule and a VI schedule deliver the same number of reinforcers per hour, subjects usually respond faster on the VI schedule
False
If the subject makes no response, a vertical line on the cumulative record is the result
False
In an FI 60-second schedule, immediately after one reinforcer has been delivered, any responses that are made during those 60 seconds will also be reinforced
False
It is unlikely that the differences between human and nonhuman behavior patterns that are sometimes found with the same reinforcement schedules are a product of different reinforcement histories
False
Mawhinney's experiment on the study habits of college students demonstrates that an instructor's selection of a schedule of quizzes or exams has virtually no effect on the study behavior of the students in the course
False
Molar theories discuss relationships measured over less than 1 minute
False
Prior experience with one reinforcement schedule can alter how animal, but not human, subjects later perform on another schedule
False
Ratio strain is sometimes used to describe the general weakening of responding that is found when small ratios are used
False
Segments of the cumulative record that have a fairly linear appearance correspond to periods in which the subject was responding at an accelerating rate
False
Skinner proposed that because people have language, contingency-shaped behavior can take precedence over rule-governed behavior
False
The characteristics of a VR schedule are not strong enough to offset the fact that in most forms of gambling, the odds are against the player, so that the more one plays, the more one can be expected to lose
False
The concept of behavioral momentum cannot explain why some undesirable behaviors may relapse when the individual returns to an environment where the behavior occurred in the past
False
The cumulative record pattern from the FR schedule is sometimes called a fixed-ratio scallop
False
The definition of a VR schedule is that the occasion of the next reinforcer is predictable, and in the long run, the more often the behavior occurs, the more rapidly will reinforcers be received
False
The finding that post-reinforcement pauses become larger as the size of the FR increases is consistent with the satiation hypothesis
False
The important feature of a reinforcer is its quality rather than its rate of presentation or its delay
False
There is no relationship between IRT size and the probability of reinforcement on a VR schedule because time is most relevant on a VR schedule
False
Typical psychiatric and institutional care is the only thing that produces improvement in autistic children
False
Under a DRL 10-second schedule, every response that occurs within 10 seconds is reinforced
False
Unlike VI schedules, there is significant selective strengthening of long pauses on VR schedules, and this in itself could explain the difference between VI and VR response rates
False
We cannot predict the size of the post-reinforcement pause by knowing how many responses the subject has produced in the preceding ratio, thus forcing us to accept the fatigue hypothesis
False
With FR schedules, the average size of the post-reinforcement pause increases as the size of the ratio decreases
False
The first response after a fixed amount of time has elapsed is reinforced
Fixed-Interval Schedule
A reinforcer is delivered after every n responses
Fixed-Ratio Schedule
The decreased responding one observes in a generalization test when the test stimuli become less and less similar to the training stimulus
Generalization Decrement Hypothesis
Why should a response that is only intermittently followed by a reinforcer be more resistant to extinction than a response that has been followed by a reinforcer every time?
Humphreys' Paradox
Theory that focuses on which interresponse times (the times between two consecutive responses) are followed by reinforcement
Interresponse Time Reinforcement Theory
Focuses on small-scale events: the moment-by-moment relationships between responses and reinforcers
Molecular Theory
Devoted to using the principles of behavioral psychology to improve human performance in the workplace
Organizational Behavior Management
Deals with large-scale measures of behavior and reinforcement
Molar Theory
The subject is presented with two or more different schedules, one at a time, and each schedule is signaled by a different discriminative stimulus
Multiple Schedule
Extinction is more rapid after CRF than after a schedule of intermittent reinforcement
Partial Reinforcement Effect
A pause in responding after each reinforcer, which eventually gives way to an abrupt continuation of responding
Postreinforcement Pause
Any stimulus that makes a desired response more likely
Prompt
If the subject makes no response, a horizontal line on the cumulative record is the result
True
Sometimes used to describe the general weakening of responding that is found when large ratios are used
Ratio Strain
A rule that states under what conditions a reinforcer will be delivered
Reinforcement Schedule
This theory emphasizes a relationship between responses and reinforcement of a much more global nature
Response-Reinforcer Correlation Theory
In an FR 20 schedule, every 20 responses will be followed by a reinforcer
True
Behavior controlled by a verbal description of a contingency; Skinner proposed this occurs because people have language
Rule-Governed Behavior
It seems unlikely that token systems will be used extensively in psychiatric institutions in the near future
True
If a VR schedule and a VI schedule deliver the same number of reinforcers per hour, subjects usually respond faster on the VR schedule
True
An object or symbol that is exchanged for goods or services
Token
According to Capaldi, there is a large generalization decrement when the schedule switches from CRF to extinction, because the subject has never experienced a situation in which its responses were not reinforced
True
According to the generalization decrement hypothesis, responding during extinction will be weak if the stimuli during extinction are different from those that were present during reinforcement, but strong if these stimuli are similar to those encountered during reinforcement
True
According to the satiation hypothesis, pauses should be longer on smaller FR schedules, not on larger FR schedules
True
Antecedent-based interventions focus on events that occur before the work is done, and they focus on such matters as providing appropriate worker training, clarifying tasks, and setting goals
True
As on FR schedules, there is a post-reinforcement pause on FI schedules, but after this pause, the subject usually starts by responding quite slowly
True
Behavior therapy is the only method of treating autism that has been shown to reduce or eliminate some of its main symptoms
True
Behavioral momentum depends on how frequently the behavior has been reinforced in the presence of a certain discriminative stimulus
True
DRH can be used to produce higher rates of responding than those obtained with any other reinforcement schedule
True
Despite considerable research, the causes of autism remain a mystery
True
Eventually, the post-reinforcement pause gives way to an abrupt continuation of responding
True
From the subject's perspective, DRL schedules do not produce optimal rates of responding
True
Mawhinney's experiment on the study habits of college students demonstrates that an instructor's selection of a schedule of quizzes or exams can have a large effect on the study behavior of the students in the course
True
Molar theories discuss relationships measured over at least several minutes
True
Quite a few studies have found that reinforcement programs can both reduce workplace accidents and save companies substantial amounts of money
True
Ratio strain is sometimes used to describe the general weakening of responding that is found when large ratios are used
True
Skinner suggested that the tendency to respond in bursts could lead to a selective strengthening of short IRTs on a VR schedule
True
Small downward deflections in a cumulative record generally indicate those times at which a reinforcer was delivered
True
The FI schedule does not have many close parallels outside the laboratory, because few real world reinforcers occur on such a regular temporal cycle
True
The concept of behavioral momentum can help to explain why some undesirable behaviors may relapse when the individual returns to an environment where the behavior occurred in the past
True
The cumulative record pattern from the FI schedule is sometimes called a fixed-interval scallop
True
The definition of a VR schedule is that the occasion of the next reinforcer is unpredictable, but in the long run, the more often the behavior occurs, the more rapidly will reinforcers be received
True
The delivery of mail approximates a VI schedule because it is unpredictable, only one response is required to collect it, and if the reinforcer has not yet been stored, no amount of responding will bring it forth
True
The pigeon's behavior during each link of the chain is usually characteristic of the schedule currently in effect in that link
True
Three important features of any reinforcer are its quality, its rate of presentation, and its delay
True
Token systems are difficult to implement
True
Token systems are now very commonly found in classrooms
True
Token systems require a long time to produce lasting changes in behavior
True
Under a DRL 10-second schedule, every response that occurs after a pause of at least 10 seconds is reinforced
True
Unlike VI schedules, there is no selective strengthening of long pauses on VR schedules, and this in itself could explain the difference between VI and VR response rates
True
We cannot predict the size of the post-reinforcement pause by knowing how many responses the subject has produced in the preceding ratio, thus forcing us to reject the fatigue hypothesis
True
What all token systems have in common is that each individual can earn tokens by performing any of a number of different desired behaviors and can later exchange these tokens for a variety of "backup" or primary reinforcers
True
With FR schedules, the average size of the post-reinforcement pause increases as the size of the ratio increases
True
The amount of time that must pass before a reinforcer is stored varies unpredictably from reinforcer to reinforcer
Variable-Interval Schedule
The number of required responses is not constant from reinforcer to reinforcer
Variable-Ratio Schedule