PY321 Exam 3


Factors Affecting Extinction -Usually requires many _________ --May experience ________ _________ -Number of trials needed depends on learning history --Harder to get extinction of "strong" behaviors --________ ________ _______

trials spontaneous recovery Partial Reinforcement Effect

Operant Learning: Principles Three-term contingency (ABCs) -____________ ____________ (S_D or S_delta) -Something present in the environment before a behavior occurs --Will either increase or decrease likelihood of engaging in a response -______________ R -_____________ (S_R or S_P) *Reinforcer or Punisher -Will then change likelihood of engaging in behavior in the future (In reality there is a four-term contingency) ____________ ___________ -__________ ___________: Increases likelihood of engaging in response because the value of the consequence has increased -___________ ____________: Opposite is true

Antecedent Event Behavior Consequence Motivating Operations Establishing Operations Abolishing Operations

Operant Learning Basic Components __________ _________________ -______ (discriminative stimulus) = stimulus in whose presence the target response is reinforced -There has to be an _____ and a ______ -______ = stimulus in whose presence the target response will not be reinforced --If you made some response it would be a punishment --Ex: Trying to sit down in an occupied seat (EO would be that you are exhausted and need a seat)

Antecedent Stimulus S_D EO, S_D S_delta

Types of Chaining Procedures __________ __________ -Last component of the chain is taught first and reinforced until mastered -Then teach next to last part of chain --Followed by last component -Requires individuals to always complete the chain -Training occurs until all components are taught and mastered -Usually used with learners with very _________ abilities

Backward Chaining limited

Complex Schedules __________ ________ -Reinforcement is delivered after completing the chain -Uses signals --Responding adapts to the current schedule --Ex: CHAIN FI10" VR5 _________ ____________ -No signal

Chain schedule Tandem schedule

_____________ -_________ ___________ --Series of related behaviors performed in sequence, resulting in reinforcement -Series of steps that have to occur in order to receive reinforcement Ex: Cooking

Chaining Behavioral Chain

Complex Schedule _________ ____________ -Two or more schedules ongoing at the same time --Option to perform on either schedule -Used to study choice behavior Button A: VR20 Button B: VR50 -Organism will start by pressing both buttons -Eventually they will mostly press _____

Concurrent Schedule A

___________ refers to the gap in time between a behavior and its reinforcing consequence. In general, the shorter this interval is, the faster learning occurs (Escobar & Bruner, 2007; Hunter, 1913; Okouchi, 2009; Schlinger & Blakely, 1994; Thorndike, 1911; see Figure 5-7).

Contiguity

Variables Affecting Reinforcement _______________ -Correlation between a behavior and its consequence _____________ -Time between the behavior and its consequence --In general, ___________ intervals are more effective ___________ _____________ -Size of the reinforcer --________ the better (usually) -Reinforcer preference ____________ _______________ -Response Effort --More ___________ tasks need "____________" reinforcers ______________ _______________ -__________ deprivation usually makes reinforcer more effective --Too much of a reinforcer decreases effectiveness -Not the case with __________ __________ ___________ ___________ -Are reinforcers simultaneously available for other behaviors?

Contingency Contiguity shorter Reinforcer Characteristics Larger Task Characteristics difficult, bigger Deprivation Level Larger secondary reinforcers Competing contingencies

Simple Schedules of Reinforcement ______________ ______________ (______ or _______) -Schedule in which each response is reinforced -_____________, steady rates of responding which decrease as subject gets satiated -________ schedule to use when first teaching/training a new behavior Ex: Potty training

Continuous Reinforcement CRF or FR1 Moderate Best

____________ ______________ are events that are provided by someone for the purpose of modifying behavior. For example, a parent may give a child a piece of cookie when the child says "Cook-ee" with the idea of getting her to try to say words. A boss may provide bonuses to highly productive workers to maintain their efforts. And a rehabilitation therapist might show a patient a graph depicting her progress to reinforce her persistence.

Contrived reinforcers

Procedures to Study Operant Learning _________-_________ Procedure -Preferred by ___________ -Behavior of interest can be repeated a number of times over a certain _________ Allows measurement of _________ change over ___________ Advantages 1) Less __________ 2) Effects of changes in reinforcement more easily identifiable 3) Recording of responses may be easier Ex: Rat pressing ________ to get food

Free-operant Skinner period response, time intrusive lever

Other Simple Schedules of Reinforcement __________ _________: Behavior must occur for the entire time period -Ex: Practice playing piano for specific amount of time before earning x __________ ____________: Behavior must occur for the entire average time period ___________ ________ (also known as non-contingent reinforcement): Reinforcer delivered after specific time regardless of behavior --Skinner's machine malfunctioned and a pigeon received reinforcement regardless of behavior; the pigeon began to engage in superstitious behavior. _____________ __________: Reinforcer delivered after an average time period regardless of behavior

Fixed Duration Variable Duration Fixed Time Variable Time

___________ ___________ (____) -Reinforcer is delivered with first response after a fixed period of time since last reinforcer -Defined by ___________ to availability of reinforcer --___________ = First response 20 sec. after delivery of last reinforcer is required -__________ ____________: low levels at beginning of interval, high levels at the end -___________ must still occur to receive reinforcer Ex: YouTube ad, registering for classes, Southwest check-in

Fixed Interval (FI) time FI20" Scalloped responding Response
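
A minimal Python sketch of the FI contingency described above (my own illustration for study purposes; the function name fi_schedule and the timer details are assumptions, not course material):

```python
import time

def fi_schedule(interval_s: float):
    """Hypothetical FI contingency: the first response emitted at least
    `interval_s` seconds after the last reinforcer is reinforced; earlier
    responses have no effect."""
    last_reinforcer = time.monotonic()

    def respond() -> bool:
        nonlocal last_reinforcer
        now = time.monotonic()
        if now - last_reinforcer >= interval_s:
            last_reinforcer = now  # reinforcer delivered; interval restarts
            return True
        return False  # too early: the response goes unreinforced

    return respond

respond = fi_schedule(20.0)  # FI20": at most one reinforcer per 20 s
```

Note that the clock alone never delivers the reinforcer: a response must still occur after the interval elapses, which is what distinguishes FI from a fixed time (FT) schedule.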

__________ __________ (Partial Reinforcement Effect) _________ (1958) -Intermittent reinforcement results in conflicting expectations of reinforcement and non-reinforcement -When faced with conflict, organisms prefer to respond (they may get a __________) -Responding becomes a reaction to non-reinforcement (____________) -If you've been on a schedule that was relatively lean (little reinforcement; FR200), frustration builds and then reinforcer is delivered -During extinction, frustration builds which is a cue to keep going

Frustration Hypothesis Amsel reinforcer frustration

__________ __________ (____) -A fixed number of responses is required to obtain the reinforcer -Defined by the number of responses required --_______: 15 responses required --A child earns a break after she completes 15 math problems from her homework _________, steady levels of responding to complete each ratio (________ _________) with ___________ _____________. -___________ __________ is a break to prepare for the next run. -The __________ the FR, the longer the pause -________ ________ is a sign that the required behavior is too demanding -____________ the ___________: Not going to start out requiring pigeon to peck 1000 times to receive food pellet. So start out with FR25 --> FR50 --> FR100 --> FR200... FR1000.

Fixed Ratio (FR) FR15 High ratio run postreinforcement pause Postreinforcement pause higher Ratio strain Stretching the ratio
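
A small Python sketch of the FR contingency and of "stretching the ratio" (illustrative only; fr_schedule is a hypothetical name, and the progression values are the ones from the card):

```python
def fr_schedule(ratio: int):
    """Hypothetical FR contingency: every `ratio`-th response is reinforced."""
    count = 0

    def respond() -> bool:
        nonlocal count
        count += 1
        if count == ratio:
            count = 0  # reinforcer delivered; the next ratio run begins
            return True
        return False

    return respond

# "Stretching the ratio": thin the requirement gradually instead of
# demanding FR1000 from the start (FR25 --> FR50 --> ... per the card)
for ratio in (25, 50, 100, 200, 1000):
    respond = fr_schedule(ratio)
    presses = 0
    while True:
        presses += 1
        if respond():
            break
    print(f"FR{ratio}: reinforcer after {presses} responses")
```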

__________ _________ To teach Kate how to tie her shoe 1. Grab one lace in each hand 2. Pull the shoelaces tight 3. Cross the shoelaces 4. Pull the front lace 5. etc.

Forward Chaining

Types of Chaining Procedures ___________ ____________ -Reinforces first link in chain until mastered -Then require organism to perform first link followed by the second link in the chain (then ___________) --This continues until all components are learned

Forward Chaining reinforcement

Side Effects of Extinction -___________ _________ --A behavior undergoing extinction initially ____________ in frequency/intensity before it decreases --May see __________ behaviors (increased variability) (trying something different) -_____________ --May see organism go through other behaviors that received reinforcement previously -Aggression/Emotional Behavior/Depression

Extinction Burst increases novel Resurgence

Shaping Guidelines 1. _________ the target behavior (operational definition) 2. Determine if shaping is most __________ procedure (Needs to be a new skill or one they could do previously) 3. ___________ starting behavior (needs to be something done easily) 4. Choose the ______ _______ (successive approximations) 5. Choose the _________ 6. __________ __________ successive approximations 7. Move through the shaping steps at a _______ ________ (don't stay on one step too long)

Define appropriate Identify shaping steps reinforcer Differentially reinforce proper pace

Factors Affecting Reinforcement Identifying and Strengthening Reinforcement __________ ___________ __________ -Naturalistic Observation -_________ ___________ --Forced Choice Assessment: Two different stimuli, allow organism to pick one in order to show preference __________ _________ _________ -Questionnaires -Limitations? --Less accurate --Individuals with limited abilities

Direct Assessment Methods Preference Assessment Indirect Assessment Method

Procedures to Study Operant Learning _________-________ Procedure -Trial is ended by the behavior of interest -__________ variable may be: --Time to perform task --Number of errors --Number of correct responses --Number of responses Ex: Math __________ __________ -Each response is 1 trial --DV could be number of correct responses Ex: Rats in a __________

Discrete-trial Dependent flash cards maze

__________ __________ (Partial Reinforcement Effect) ___________ and __________ (1945) -Organism may have difficulty detecting non-reinforcement in intermittent schedules Put rats on one of four schedules (pressing lever to receive food pellet) CRF = 128 times after extinction procedure FR2 = 188 times after extinction procedure FR3 = 215 times after extinction procedure FR4 = 272 times after extinction procedure Experiment (Jenkins): Phase 1: -Group 1: _____ schedule -Group 2: _____ schedule Phase 2: -Group 1: stays on same schedule -Group 2: ______ schedule Phase 3 (Extinction): -Group ___ ends up responding more, which disproves this hypothesis

Discrimination Hypothesis Mowrer and Jones CRF INT (VR5) CRF 2

Reinforcement Theories (Why do reinforcers strengthen behavior) _______-_______ __________ -Clark Hull (1951) -People behave because of "__________" --Reinforcers reduce the need for this "__________" __________-________ ___________ -Theory works well with primary reinforcers --Do secondary reinforcers address physiological needs? -Suggested that secondary reinforcers get strength from primary reinforcers --_________ largely accepted

Drive-Reduction Theory drive drive Drive-Reduction Theory Not

______ _______________'s (credited with initial research) Puzzle Box Cat placed in a box that can be opened from inside by pushing on latch Initially cat shows random behaviors Eventually cat will hit latch -Hitting latch leads to "_________ __________" --escape --food

E.L. Thorndike pleasant consequence

About the same time Pavlov was trying to solve the riddle of the psychic reflex, a young American graduate student named _________ _________ __________ was tackling another problem: animal intelligence. In the 19th century, most people believed that higher animals learned through reasoning. Anyone who owned a dog or cat could "see" the animal think through a problem and come to a logical conclusion, and stories of the incredible talents of animals abounded. Taken together, these stories painted a picture of animal abilities that made some pets little less than furry Albert Einsteins. Thorndike recognized the impossibility of estimating animal abilities from this sort of anecdotal evidence: "Such testimony is by no means on a par with testimony about the size of a fish or the migration of birds," he wrote, "for here one has to deal not merely with ignorant or inaccurate testimony, but also with prejudiced testimony. Human folk are as a matter of fact eager to find intelligence in animals"

Edward Lee Thorndike

Thorndike's __________ of ____________ -____________ changes because of its _____________ "The probability of a behavior occurring is a function of the ___________ that the behavior had in the past" _________ _____________ (satisfies) -Probability of behavior increases ___________ ___________ (annoys) -Probability of behavior decreases Thorndike typically measured ____________ when looking at how long it took the cat to exit the puzzle box

Law of Effect Behavior, consequences consequence Pleasant consequence Unpleasant consequence duration/latency

Other Simple Schedules of Reinforcement ___________ ___________: Used with interval schedule -Organism has to respond within a certain period of time -Reinforcer available for limited time ___________ __________: Behavior must occur at a specific _________ -________ _______ of ______-________ (DRL) --Also called IRT > t -__________ _________ of _______-________ (DRH) --IRT = _________-_______ __________ --t = specific amount of time

Limited hold Performance contingent rate Differential reinforcement of low-rate Differential reinforcement of high-rate Inter-response time

___________ ___________ (___________, 1961) -Relative rate of responding in one alternative equals relative rate of reinforcement for that alternative "__________ goes where __________ flows" -Doesn't always work as perfectly as described --Sensitivity depends on species, effort, etc. -_____________: Subject spends more time than predicted in richer schedule -_____________: Subject changes more between schedules than predicted--spends less time on the richer schedule Ex: Think of the lakes that are close vs. far away ________ ________ __________: occurs when organism has to wait to switch (lakes are far away) -There's a penalty for transferring ____________ ____________ -If different responses are required, animals might prefer one response to the other

Matching Law Herrnstein Behavior, reinforcement Overmatching Undermatching Change Over Delay Response Bias
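
In symbols, the matching law is usually stated as follows (standard notation from the literature, not spelled out on the card; the generalized form with bias b and sensitivity s is a common extension that maps onto the card's terms):

```latex
% Matching law (Herrnstein, 1961): relative rate of responding matches
% relative rate of reinforcement across two concurrent alternatives.
\[
  \frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
\]
% Generalized form: s < 1 corresponds to undermatching, s > 1 to
% overmatching, and b \neq 1 to response bias.
\[
  \frac{B_1}{B_2} = b\left(\frac{R_1}{R_2}\right)^{s}
\]
```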

Complex Schedules _________ __________ -Various schedules, a stimulus signals the current schedule --Responding adapts to the current schedule --Ex: MULT FI20" VR15 _________ __________ -No signal

Multiple Schedule Mixed Schedule

_____________ ____________ are events that follow spontaneously from a behavior: When you get up in the morning you brush your teeth and, as a result, your mouth no longer tastes like a garbage pail; you pedal your bike and the pedaling moves the bike forward; you climb some stairs and reach the floor where your class meets. Each reinforcing event is an automatic consequence of an action, so natural reinforcers are sometimes called ___________ ____________.

Natural reinforcers automatic reinforcers

Positive or Negative Reinforcement? Michelle hates driving to work in heavy traffic. She happens to leave home earlier than usual one morning and notices that the traffic is significantly less congested. As a result, Michelle started leaving for work earlier in the morning. ___________ Despite constantly working out, Jim had not been gaining as much muscle mass as he wanted to, so he decided to start taking steroids. He noticed an immediate improvement and as a result was more likely to take steroids any time he wanted to quickly build mass. __________

Negative Positive

Forms of Reinforcement ______________ ____________ -Occurrence of behavior -Followed by __________ of stimulus (consequence) -Results in __________ of behavior Ex: A politician who really irritates you is giving an interview on the TV station you are watching, so you change the channel. As a result, you are more likely to change the channel whenever the politician is on TV.

Negative Reinforcement removal strengthening

___________ ___________ -Withholding the consequence that previously reinforced the target behavior -Eventual effect is __________ in responding -You must identify what _________ is maintaining behavior -What does the extinction procedure look like in R+ vs R- --R- keep the ________ stimulus in place

Operant Extinction decrease consequence aversive

_____________ _________ ___________ (______) -Behavior that is intermittently reinforced is _________ to extinguish than continuously reinforced behavior

Partial Reinforcement Effect (PRE) harder

Four Basic Operant Procedures Stimulus is presented and behavior increases: Stimulus is presented and behavior decreases: Stimulus is removed and behavior increases: Stimulus is removed and behavior decreases:

Positive Reinforcement Positive Punishment Negative Reinforcement Negative Punishment

Forms of Reinforcement _____________ _____________ -Occurrence of behavior -Followed by ___________ of stimulus (consequence) -Results in ____________ of behavior (We strengthen behavior not people!) Ex: an employee receives a bonus for every 15 new customers they get to sign up for a store credit card. As a result, the employee is more likely to ask customers if they'd like to register for a new credit card.

Positive reinforcement addition strengthening

As a measure of the relative values of two activities, Premack suggested measuring the amount of time a participant engages in both activities, given a choice between them. According to Premack, reinforcement involves a relation, typically between two behaviors, one of which is reinforcing the other. This leads to the following generalization: Of any two responses, the more probable response will reinforce the less probable one. This generalization, known as the _____________ _________, is usually stated somewhat more simply: High-probability behavior reinforces low-probability behavior.

Premack principle

_______________ (unconditioned) ____________: -Not dependent on the association with other reinforcers -Naturally reinforcing -Water, food, sex, drugs -Usually associated with internal stimuli _____________ (conditioned) ___________: -Dependent on association with other reinforcers -Poker chip, words, money, grades

Primary Reinforcers Secondary Reinforcers

___________ ____________ are those that appear to be innately effective, what William Baum (2007) refers to as "phylogenetically significant events." This is typically true, but the defining feature of primary reinforcers is that they are not dependent on learning experiences. Since they are not the product of learning, they are often called ____________ __________. The most obvious primary reinforcers, and the ones most often used in research, are food, water, and sexual stimulation. Others that are readily recognized as innate are sleep, activity (i.e., the opportunity to move about), drugs that produce a high or relieve discomfort, electrical stimulation of certain areas of the brain (see neuromechanics, below), and relief from heat and cold.

Primary reinforcers unconditioned reinforcers

Shaping of ________ ____________ -Successive approximations of a behavior that is not desirable are reinforced (child throwing a temper tantrum)

Problem Behaviors

___________ ___________ stand out from other schedules in that the rules describing the contingencies change systematically (Stewart, 1975). In fact, this feature makes me wonder if they can be considered simple schedules. There are theoretically four different types of progressive schedule (see Figure 7-7), but I will focus on the most commonly studied, the ___________ ___________ ___________ (Hodos, 1961; Killeen et al., 2009). In a PR schedule, the requirement for reinforcement typically increases in a predetermined way, often immediately following each reinforcement. The progression in the ratio is either arithmetic or geometric. For example, a rat might receive food after pressing a lever twice. After this it might have to press the lever four times to receive food, then 6 times, 8 times, 10 times, and so on. In a geometric progression, the rat might have to press two times, then 4, then 8, 16, 32, 64, and so on. In some PR schedules what changes is not the number of responses required for reinforcement, but the reinforcer. The amount of food provided might get smaller and smaller, or its quality might diminish, or it might be delivered after a longer and longer delay. Whatever form the progression takes, it continues until the rate of the behavior falls off sharply or stops entirely. This is called the __________ ___________.

Progressive schedules progressive ratio (PR) schedule break point
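
A short Python sketch of the two PR progressions described in the passage (hypothetical helper; only the example values 2, 4, 6... and 2, 4, 8... come from the text):

```python
def pr_requirements(start: int, step: int = 0, factor: int = 1, n: int = 8):
    """Hypothetical PR generator: the response requirement grows after each
    reinforcer -- arithmetically (add `step`) or geometrically (times `factor`)."""
    req = start
    for _ in range(n):
        yield req
        req = req + step if factor == 1 else req * factor

print(list(pr_requirements(2, step=2)))    # arithmetic: [2, 4, 6, 8, 10, 12, 14, 16]
print(list(pr_requirements(2, factor=2)))  # geometric:  [2, 4, 8, 16, 32, 64, 128, 256]
# In practice the progression runs until responding collapses: the break point.
```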

It's a nice warm and sunny day, so you decide to go for a walk. While on the walk, you see a golden retriever puppy that is lost, so you stop to pick it up and see if you can find who she belongs to. Thankfully, her collar lets you know that she belongs to someone just down the street, so you return her to a very grateful owner. In this scenario, what is the: S_D: R: S_R: What could a possible S_delta be?

Puppy Stop and look for owner Thank you from owner Aggressive dog

Intermittent Reinforcement Reinforcer after a certain number of responses --> __________ (two kinds: ________ and ___________) Reinforcer after a certain period of time --> ________ (two kinds: __________ and ____________)

Ratio fixed and variable Interval fixed and variable

_________________ -Providing consequences for a behavior that increases the strength (e.g., frequency, magnitude, etc.) of that behavior ______________ -Consequence that strengthens the operant behavior -Anything can serve as a reinforcer --The only thing that matters is what happens to the behavior (dog eating poop)

Reinforcement Reinforcer

It is clear, said Premack, that in any given situation some kinds of behavior have a greater likelihood of occurrence than others. A rat is typically more likely to eat, given the opportunity to do so, than it is to press a lever. Thus, different kinds of behavior have different values, relative to one another, at any given moment. It is these relative values, said Premack, that determine the reinforcing properties of behavior. This theory, which may be called the __________ _________ ___________, makes no use of assumed physiological drives. Nor does it depend on the distinction between primary and secondary reinforcers. To determine whether a given activity will reinforce another, we need know only the relative values of the activities.

Relative Value Theory

Theories of Reinforcement _________ __________ __________ and _________ ____________ -David Premack (1959) -Reinforcement is based on the current ___________ of an activity -Measure amount of time spent engaging in each behavior during a baseline period --__________ ___________ __________ ___________: ________ probability behaviors will reinforce _____________ probability behaviors -Ex: Homework (child doesn't want to do homework, a low probability behavior, so make earning access to video games, a high probability behavior, contingent on completing homework) "_____________ before ___________" Looking at specific behaviors that individuals engage in -Value of engaging in specific behavior vs alternative behavior --Reinforcer is the ________ itself

Relative Value Theory value Concurrent Schedule Premack Principle High, low Work before play behavior

Theories of Reinforcement _____________ ______________ ___________ -Timberlake and Allison (1974) -Addresses the fact that ___________ probability behavior can reinforce ___________ probability behavior Certain behaviors are reinforcing when the organism is _________ from engaging in them at their normal rate Does have problems with certain __________ ___________ Behavior can serve as a reinforcer if: 1) _______ to that behavior is restricted 2) Its _____ falls below baseline levels Ex: Rats would run on wheel for 30 min. -Then prevented rats from running for 30 min, they could only run for 15 -They then had to engage in certain behavior to gain access

Response Deprivation Theory lower high prevented conditioned reinforcers Access frequency

___________ __________ __________ (Partial Reinforcement Effect) Proposed by __________ and __________ -Experimenters count responses, animals may count something different -Ex: FR20 --Experimenter = _____ responses --Animal = ____ response -According to this hypothesis, a constant number of _________ __________ are needed to extinguish behavior -Perhaps the individual counts responses differently than the researcher does CRF = 128 FR2 = 188/2 = 94 FR3 = 215/3 = 71.6 FR4 = 272/4 = 68 ---> _______ _______ ________ disappears

Response Unit Hypothesis Mowrer and Jones 20 1 response units Partial Reinforcement Effect
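
The card's arithmetic, worked out in a few lines of Python (the numbers are exactly those given above; the variable names are my own):

```python
# Extinction counts re-expressed as response *units*, where one unit is
# the full ratio requirement from training (Mowrer & Jones, 1945).
presses = {"CRF": 128, "FR2": 188, "FR3": 215, "FR4": 272}
unit_size = {"CRF": 1, "FR2": 2, "FR3": 3, "FR4": 4}

for schedule in presses:
    units = presses[schedule] / unit_size[schedule]
    print(f"{schedule}: {presses[schedule]} presses = {units:.1f} units")
# 128.0, 94.0, 71.7, 68.0 -- counted in units, the apparent PRE largely disappears.
```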

________________ of _____________ (Contingencies) -Program or rule that determines how and when a response will be reinforced -Produces ___________ responding

Schedules of Reinforcement characteristic

___________ __________ continue to be reinforcers only if they at least occasionally are paired with other reinforcers

Secondary Reinforcers

___________ ___________ are those that are not innate, but the result of learning experiences. Everyday examples include praise, recognition, smiles, and applause. Because secondary reinforcers normally acquire their reinforcing power by being paired with other reinforcers, including secondary reinforcers, they are also called ____________ ___________. A demonstration is provided by Donald Zimmerman (1957), who sounded a buzzer for two seconds before giving water to thirsty rats. After pairing the buzzer and water in this way several times, Zimmerman put a lever into the rat's chamber. Each time the rat pressed the lever, the buzzer sounded. The rat soon learned to press the lever, even though lever pressing never produced water. The buzzer had become a conditioned reinforcer.

Secondary reinforcers conditioned reinforcers

__________ __________ (Partial Reinforcement Effect) ____________ (1967) -Intermittent reinforcement results in some non-reinforced trials being followed by reinforced trials -Subjects learn to respond when the previous trial has been reinforced --Gambling --Dating --Sports -__________ process is not much different than what you've already been through

Sequential Theory Capaldi Extinction

_______________ Successive Approximations --> Desired target behavior

Shaping

____________ ___________ ___________: -Procedure in which a specific behavior is followed by a reinforcer but other behaviors are not -Involves the basic principles of ___________ and ___________ -Results in increase in the target behavior and decrease in other behaviors

Shaping Differential Reinforcement reinforcement extinction

____________ -To use reinforcement, the behavior must be occurring at least occasionally -_________ is used to develop a target behavior that an organism does not currently exhibit -_________: Differential reinforcement of successive approximations of a target behavior --Positive reinforcement --Successive means small step Works for teaching a ______ skill; or strengthening a response that existed __________

Shaping Shaping Shaping new, previously

Token Economies -__________: conditioned reinforcer that can be exchanged for desired items -_________ maintain behavior with very infrequent reinforcement

Token Tokens

Types of Chaining Procedures _______ ________ __________ -Chain is taught as a single unit -Used with simple chains and with learners who do not have limited mental capabilities

Total Task Chaining

Ratio vs. Interval Schedule _____ schedule results in greater rate of responding -Schedule informs organism about effectiveness of its behavior _________: number of reinforcers depends on number of responses _____________: number of reinforcers is independent of number of responses

VR Ratio Interval

____________ ___________ (____) -Reinforcer is delivered with first response after an average period of time since last reinforcer -Defined by __________ amount of time to reinforcer availability --______ = On average, first response after 6 min. since last reinforcement receives reinforcer -___________ levels of responses without regular pauses due to unpredictability -Ex: Pop quizzes, "We'll have an exam at some point this semester"

Variable Interval (VI) average VI6' Steady
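
A companion Python sketch for VI (illustrative; vi_schedule is a hypothetical name, and drawing the wait uniformly so its mean equals the programmed value is my assumption):

```python
import random
import time

def vi_schedule(mean_interval_s: float):
    """Hypothetical VI contingency: the first response after a randomly
    drawn delay (averaging `mean_interval_s` seconds) is reinforced."""
    last = time.monotonic()
    wait = random.uniform(0, 2 * mean_interval_s)  # mean == mean_interval_s

    def respond() -> bool:
        nonlocal last, wait
        if time.monotonic() - last >= wait:
            last = time.monotonic()
            wait = random.uniform(0, 2 * mean_interval_s)  # next interval
            return True
        return False

    return respond

respond = vi_schedule(360.0)  # VI6': availability recurs every ~6 min on average
```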

____________ __________ (____) -Number of responses required to obtain the reinforcer varies around an average value -Defined by the mean number of responses required --________ = 48 responses on average required --A telemarketer makes a sale about every 48th call -____________, steady levels of responding with few or no pauses -No ____________ ___________ --Organism doesn't know when reinforcement is coming next -Incredibly ___________ rates of responding

Variable Ratio (VR) VR48 High postreinforcement pause high
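
And the VR counterpart (again a hypothetical sketch; drawing the requirement uniformly so that it averages the programmed ratio is my assumption):

```python
import random

def vr_schedule(mean_ratio: int):
    """Hypothetical VR contingency: each reinforcer requires a randomly
    drawn number of responses whose mean is `mean_ratio`."""
    requirement = random.randint(1, 2 * mean_ratio - 1)  # mean == mean_ratio
    count = 0

    def respond() -> bool:
        nonlocal requirement, count
        count += 1
        if count >= requirement:
            count = 0
            requirement = random.randint(1, 2 * mean_ratio - 1)  # next run
            return True
        return False

    return respond

respond = vr_schedule(48)  # VR48: a payoff about every 48th response, unpredictably
```

Because the next requirement is unpredictable, there is no point at which pausing "pays", which is why VR schedules show no postreinforcement pause.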

Clark Hull (1943, 1951, 1952) believed that animals and people behave because of motivational states called __________. For him, all behavior is literally driven. An animal deprived of food, for example, is driven to obtain food. Other drives are associated with deprivation of water, sleep, oxygen, and sexual stimulation. A reinforcer, then, is an event that reduces one or more drives.

drives

Positive or Negative Reinforcement? Whenever AJ cleans his room the first time he is asked to do so, his dad takes him out for ice cream. ____________ Carrie has noticed that when she does the exercises her physical therapist taught her, the pain in her back decrease tremendously. As a result, Carrie is more likely to perform the exercises when her back is starting to get sore. _______________

We don't know what happens in the future so we can't say Negative

Reinforcement Things to remember: 1) Reinforcement is defined by the effect it has on the __________ 2) Reinforcement ____________ means an increase in the behavior 3) Positive and Negative do not describe the ____________ --Anything can serve as a reinforcer 4) Positive = ________ of stimulus 5) Negative = ________ of a stimulus

behavior always stimulus addition removal

Analyzing the Situation Ask these three questions when determining whether it's positive or negative reinforcement 1) What is the __________? (the targeted response) 2) What happened immediately after the behavior? (was a stimulus added or removed) 3) What happened to the behavior in the ___________? (Was it more likely to occur?)

behavior future

I have said that reinforcement is an increase in the strength of a behavior due to its consequences. What exactly does strength mean? Thorndike and Skinner equated strength with the frequency or probability of the behavior. The emphasis on frequency is understandable because one of the things we most want to know about behavior is how likely it is to occur. However, as John Nevin (1992; Nevin & Grace, 2000) of the University of New Hampshire points out, reinforcement has other strengthening effects besides rate increase: It increases the tendency of the behavior to persist after reinforcement is discontinued; the tendency to occur despite other, aversive consequences (e.g., punishment); the tendency to persist even when more effort is required; and the tendency to persist despite the availability of reinforcers for other behavior. Nevin, who studied engineering before focusing on behavior science, likens the effects of reinforcement to the physicist's concept of momentum. Just as a heavy ball rolling down a hill is less likely than a light ball to be stopped by an obstruction in its path, behavior that has been reinforced many times is more likely to persist when "obstructed" in some way, as (for example) when one confronts a series of failures. Nevin calls this __________ _____________ (Mace et al., 1997; Nevin, 1992; Nevin & Grace, 2000).

behavioral momentum

Where operant learning is concerned, the word _____________ refers to the degree of correlation between a behavior and its consequence. The stronger this correlation is, the more effective the reinforcer is likely to be. Put another way, the more reliably a reinforcer follows a behavior, the more it strengthens the behavior.

contingency

The __________ __________ says that extinction takes longer after intermittent reinforcement because it is harder to distinguish (or discriminate) between extinction and an intermittent schedule than between extinction and continuous reinforcement (Mowrer & Jones, 1945).

discrimination hypothesis

Advantages of Secondary Reinforcers -They do not _________ behavior as much as primary reinforcers -Individuals do not become as "__________" (i.e., tired of them) with secondary reinforcers -Easier to provide secondary reinforcers ________ -May be used in multiple situations --__________ reinforcers -Biggest __________ is that they have to be conditioned --Primary reinforcers must be available and reinforcing

disrupt satiated immediately Generalized disadvantage

Hull's ________-_________ ___________ works reasonably well with primary reinforcers such as food and water because these reinforcers alter a physiological state.

drive-reduction theory

What reinforces behavior in negative reinforcement is escaping from an aversive (unpleasant) situation. Once you have learned to do this, you often learn to avoid it entirely. For example, after escaping from the loud noise of your car radio, you may in the future turn down the volume control before you turn on the ignition. And instead of escaping from the screeching noise of your saxophone playing, you may "forget" to practice. For this reason, negative reinforcement is sometimes called _________ ____________ or __________-_________ ____________.

escape learning escape-avoidance learning

You will recall that in classical conditioning, extinction means the CS is never followed by the US. In operant learning, ____________ means that a previously reinforced behavior is never followed by reinforcers. Since no reinforcer is provided, extinction is not truly a reinforcement schedule; however, we might think of it as an FR schedule requiring an infinite number of responses for reinforcement. In an early study of extinction, Skinner (1938) trained rats to press a lever and then, after reinforcing about a hundred lever presses, disconnected the feeding mechanism. Everything was as it had been during training, except that now lever pressing no longer produced food. The result was a gradual decline in the rate of lever pressing (see Figure 7-3).

extinction

Although the overall effect of extinction is to reduce the frequency of the behavior, the immediate effect is often an abrupt increase. This is called an ____________ ____________. When extinction is used to treat practical behavior problems, the extinction burst gives the impression that the procedure has made the problem worse, rather than better. Tell a mother that she should ignore her child's demands for a treat, and the demands are likely to turn into screams and the parent will say, "I tried that; it didn't work." If extinction is continued, however, the extinction burst is typically followed by a fairly steady decline in the behavior.

extinction burst

Token Economies Function of the token: -Provide _________ --Behavior was done _________ -Tell the organism what to do --Get reinforcer; obtain more tokens -Bridge long periods of time without _________ __________

feedback correctly primary reinforcement

In a ___________ ___________ __________, reinforcement is contingent on the continuous performance of a behavior for some period of time. A typical example of an FD schedule is the child who is required to practice playing a piano for half an hour. At the end of the practice period, and provided the child has practiced the entire time, he receives a reinforcer. For example, a parent may provide milk and cookies or some other treat after a piano practice.

fixed duration (FD) schedule

Abram Amsel (1958, 1962) has proposed the __________ _____________ to explain the PRE. Amsel argues that nonreinforcement of previously reinforced behavior is frustrating. Frustration is an aversive emotional state, Amsel says, so anything that reduces frustration will be reinforcing. In continuous reinforcement, there is no frustration because there is no nonreinforcement, but when the behavior is placed on extinction, there is plenty of frustration. With each nonreinforced act, frustration builds. (Anyone who has repeatedly lost coins in a pay phone or a vending machine is familiar with the aversive state created by nonreinforcement of a behavior that is normally reinforced.) Any behavior that reduces an aversive state is likely to be negatively reinforced, so during extinction, frustration may be reduced by not performing the behavior. (In the same way, you will quickly abandon a pay phone that cheats you, thereby reducing your annoyance.)

frustration hypothesis

Conditioned reinforcers also have the advantage that they can be used in many different situations. Food and water are very effective reinforcers when the animal or person is hungry or thirsty but not so much at other times. A stimulus that has been paired with food, however, may be reinforcing even when the animal or person is not at all hungry. Reinforcers that have been paired with many different kinds of reinforcers can be used in a wide variety of situations. Such reinforcers are called ___________ ____________ (Skinner, 1953). The most obvious example of a generalized reinforcer may be money.

generalized reinforcers

Thorndike (1911) later called this relationship between behavior and its consequences the _______ of ________ and offered this definition: Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. (p. 244) Thorndike's law identifies four key elements: the ____________ (situation or context) in which a behavior occurs, the __________ that occurs, the change in the ____________ following the behavior, and the change in the behavior produced by this _______________. Another way of expressing the essence of the law in fewer words is: _________ is a function of its ____________.

law of effect environment behavior environment consequence behavior is a function of its consequence

Reinforcers are not all alike; some work better than others. One important characteristic is size or strength (sometimes referred to as ____________). Although small reinforcers given frequently usually produce faster learning than large reinforcers given infrequently (Schneider, 1973; Todorov et al., 1984), the size of the reinforcer does matter. Other things being equal, a large reinforcer is generally more effective than a small one (Christopher, 1988; Ludvig et al., 2007; Wolfe, 1936). If you happen to look down while walking along a sidewalk and see a dollar bill, the chances are you will continue looking in that area and may continue looking down even when you go on your way. But the reinforcing effect is apt to be much stronger if what you see is a $100 bill.

magnitude

People and many animals have a remarkable ability to distinguish between more and less reinforcing schedules. The tendency to work in proportion to the reinforcement available is so reliable that it is called the __________ ___________. In the case of a choice among ratio schedules, the matching law correctly predicts choosing the schedule with the highest reinforcement frequency. In the case of a choice among interval schedules, the matching law predicts working on each schedule in proportion to the amount of reinforcers available on each.

matching law

A ___________ ____________ is anything that changes the effectiveness of a consequence (Keller & Schoenfeld, 1950; Laraway et al., 2003; Michael, 1982, 1983, 1993). There are two kinds of motivating operations: those that increase the effectiveness of a consequence, and those that decrease its effectiveness. Today these two kinds of procedures are called _____________ ___________ and ____________ ___________, respectively (Iwata, Smith, & Michael, 2000; Laraway et al., 2003).

motivating operation establishing operations abolishing operations

In __________ ___________, a behavior is strengthened by the removal, or a decrease in the intensity, of a stimulus. This stimulus, called a _________ _________, is ordinarily something the individual tries to escape or avoid. If you get into your car and turn the ignition and you are suddenly blasted by loud music (because your boyfriend/girlfriend left the radio volume at maximum), you turn down the volume control. The reduction in sound reinforces the act of turning the volume dial. In the same way, if your efforts to play the saxophone produce only sounds as welcome to your ears as the scraping of fingernails on a blackboard, you are apt to put the instrument aside. Doing so ends your torture, and this reinforces the act of discontinuing your performance.

negative reinforcement negative reinforcer

It is possible, however, to create a schedule in which reinforcers are delivered independently of behavior (Lachter, Cole, & Schoenfeld, 1971; Zeiler, 1968). There are two kinds of such _____________ _________________ _______________. In a ___________ ____________ _____________, a reinforcer is delivered after a given period of time regardless of what behavior occurs. Fixed time schedules resemble fixed interval schedules except that in an FT schedule no behavior is required for reinforcement. In an FI 10" schedule, a pigeon may receive food after a ten-second interval but only if it pecks a disc; in an FT 10" schedule, the pigeon receives food every ten seconds whether it pecks the disc or not. In __________ __________ ____________, a reinforcer is delivered periodically at irregular intervals regardless of what behavior occurs. The only difference between VT schedules and FT schedules is that in VT schedules the reinforcer is delivered at intervals that vary about some average, whereas in FT schedules they arrive after a fixed period.

noncontingent reinforcement (NCR) schedules fixed time (FT) schedule variable time (VT) schedules

Skinner called experiences whereby behavior is strengthened or weakened by its consequences __________ ____________ because the behavior operates on the environment. The behavior is typically instrumental in producing the events that follow it, so this type of learning is also called __________ ____________. It goes by other names as well, including response learning, consequence learning, and R-S learning.

operant learning instrumental learning

One peculiar schedule effect is the tendency of behavior that has been maintained on an intermittent schedule to be more resistant to extinction than behavior that has been on continuous reinforcement. This phenomenon is known as the ____________ ___________ ___________. (It is also referred to as the partial reinforcement extinction effect [PREE].)

partial reinforcement effect (PRE)

There are, Skinner said, two kinds of reinforcement. In __________ __________, the consequence of a behavior is the appearance of, or an increase in the intensity of, a stimulus. This stimulus, called a _________ __________, is ordinarily something the individual seeks out. If you put money into a vending machine and you get the food you want, you are likely to put money into that machine in the future, given the opportunity. And if you play the saxophone and the sound produced is distinctly better than the last time you played it, you may continue playing even if, to other ears, the result is remarkably unmelodic.

positive reinforcement positive reinforcer

Classical Conditioning vs Operant Learning Pavlov -Classical conditioning of __________ -Subject ___________ exposed to the CS and US -No __________ over the response that happens --CS-->CR Can learning occur with nonreflexive behavior? -____________ ____________: a voluntary response that acts on the environment in a meaningful way --Instrumental in producing a change in the ______________ --____________ --> __________

reflexes passively control Instrumental Response environment Response --> Consequence

In learning, ______________ means an increase in the strength of behavior due to its consequence. Charles Catania (2006) maintains that an experience must have three characteristics to qualify as reinforcement: First, a behavior must have a consequence. Second, the behavior must increase in strength (e.g., occur more often). Third, the increase in strength must be the result of the consequence.

reinforcement

Instrumental Learning The modification of instrumental responses using ___________ and ____________ -__________ are "instrumental" in producing the ____________ Also referred to as _________ _________ -Behavior is "operating" on the _____________ If you don't engage in ___________, ___________ don't occur

reinforcers and punishers Responses consequences operant learning environment behavior, consequences

Mowrer and Jones (1945) offer another explanation for the PRE called the ___________ __________ _____________. This approach says that to understand the PRE we must think differently about the behavior on intermittent reinforcement. In lever pressing studies, for example, lever pressing is usually thought of as one depression of the lever sufficient to produce some measurable effect on the environment, such as activating a recording device. But, say Mowrer and Jones, lever pressing can also be defined in terms of what produces reinforcement.

response unit hypothesis

Because of the problems with Premack's relative value theory, William Timberlake and James Allison (1974; Timberlake, 1980) proposed a variation of it called ________-_________ ___________ (also sometimes called equilibrium theory or response-restriction theory). The central idea of this theory is that behavior becomes reinforcing when the individual is prevented from engaging in the behavior at its normal frequency.

response-deprivation theory

Another effect of extinction is the reappearance of previously reinforced behavior, a phenomenon called _____________ (Epstein, 1983, 1985; Mowrer, 1940). Suppose a pigeon is trained to peck at a disc, and then this behavior is extinguished. Now suppose some new behavior, such as wing flapping, is reinforced. When the bird flaps steadily, this behavior is put on extinction. What does the bird do? Wing flapping declines, as expected, but something unexpected also occurs: The bird begins to peck at the disc again. As the rate of wing flapping declines, the rate of disc pecking increases (see Figure 7-4). Animal trainer Karen Pryor (1991) describes an instance of resurgence in a porpoise. An animal named Hou received reinforcement for performing a behavior learned in the previous training session. If this behavior were not reinforced, Hou would then run through its entire repertoire of previously learned stunts: breaching, porpoising, beaching, and swimming upside down.

resurgence

Because the consequences involved in positive reinforcement are usually things most people consider rewarding (e.g., success, improved performance, praise, food, recognition, approval, money, special privileges), positive reinforcement is sometimes called __________ __________. Skinner (1987) objected to the term. "The strengthening effect is missed when reinforcers are called rewards," he wrote. "People are rewarded, but behavior is reinforced" (p. 19).

reward learning

Some of the most powerful primary reinforcers (food, water, and sex, in particular) no doubt played an extremely important role in our survival. Had your ancestors not done the things necessary to produce these reinforcers, it's likely you would not be here. Some primary reinforcers lose their effectiveness rather quickly, however, a phenomenon known as __________. If you have not eaten for some time, food can be a powerful reinforcer, but with each bite the reinforcing power of food is diminished until finally it is ineffective; that is the point of satiation.

satiation

E. J. Capaldi's (1966, 1967) ____________ ____________ attributes the PRE to differences in the sequence of cues during training. He notes that during training, each performance of a behavior is followed by one of two events, reinforcement or nonreinforcement. In continuous reinforcement, all lever presses are reinforced, which means that reinforcement is a signal for lever pressing. During extinction, no lever presses are reinforced, so an important cue for lever pressing (the presence of reinforcement) is absent. Therefore, extinction proceeds rapidly after continuous reinforcement because an important cue for performing is missing.

sequential hypothesis

BF Skinner and the Birth of the Operant -Skinner rejected the ______________ of the law of effect ____________: units of behavior defined in terms of their effect on the environment Doesn't concern itself with the form a behavior takes (_____________) -Cares about the likelihood of engaging in said behavior Open bag by pulling it apart or open bag by using scissors --> ________: open bag --Same _____________!! *Part of a __________ __________ (the end result)

subjectiveness Operant topography effect Operant response class

"Dogs get lost hundreds of times and no one ever notices it or sends an account of it to a scientific magazine," wrote Thorndike, "but let one find his way from Brooklyn to Yonkers and the fact immediately becomes a circulating anecdote. Thousands of cats on thousands of occasions sit helplessly yowling, and no one takes thought of it or writes to his friend, the professor; but let one cat claw at the knob of a door supposedly as a signal to be let out, and straightway this cat becomes the representative of the cat-mind in all the books. . . . In short, the anecdotes give really the . . . ____________ psychology of animals"

supernormal

Steps to Chaining Develop a ______ ________ -Identifying and listing each component in a behavior chain -How specific depends on what and who you are training

task analysis

In a _____________ ____________ ____________, the required period of performance varies around some average. In the case of a child practicing the piano, any given session might end after 30 minutes, 45 minutes, 20 minutes, or 10 minutes. On average, the student will practice for half an hour before receiving the milk and cookies, but there is no telling when the reinforcers will appear.

variable duration (VD) schedule

