Neuroscience Exam 2 Material

What are the two (2) separate components of the Mesotelencephalic Reinforcer System?

(1) The Tegmentostriatal Pathway and (2) the Nigrostriatal Pathway (Lecture 14, October 19th)

Explain how, in the backward chaining procedure, a stimulus can serve two (2) functions: (1) as a secondary reinforcer to reward the previous response, and (2) as a discriminative stimulus to inform the organism of the next response that is required.

(Chapter 5, p. 129)

Familiarize yourself with current problems in society that are directly related to, or can be explained by, Crespi's experimental findings.

(Lecture 12, October 7th)

How did the design of the Hutt (1954) study and subsequent results determine the importance of the quality versus quantity of a reinforcer in producing the greatest level of response strength?

(Lecture 12, October 7th)

Identify how the implications of Schlinger's findings are used widely in society.

(Lecture 12, October 7th)

In the Hutt (1954) study, response strength was represented as what?

(Lecture 12, October 7th)

In the study by Capaldi (1978), response strength was measured by what?

(Lecture 12, October 7th)

List some examples of how the implications of Capaldi's (1978) research study are utilized in the business community, school system, or any other aspect of society (i.e. the use of "signaled secondary reinforcers").

(Lecture 12, October 7th)

Thinking back to the findings from Capaldi (1978), how do you overcome deficits in performance produced by response-reinforcer delays?

(Lecture 12, October 7th)

Thinking back to the findings from Capaldi (1978), what is the role of the signaled secondary reinforcers in the process of overcoming deficits in performance produced by response-reinforcer delays?

(Lecture 12, October 7th)

Thinking back to the findings from Capaldi (1978), why does imposing a delay between response and the reinforcer decrease motivation to respond?

(Lecture 12, October 7th)

Thinking back to the findings from Crespi (1942), response strength was assessed by which behavioral measure?

(Lecture 12, October 7th)

Thinking back to the findings from Crespi (1942), what experimental procedures illustrated the changes in behavior that evolve when an organism's current level of reinforcement differs from its previous reinforcement history?

(Lecture 12, October 7th)

What does the concept of asymptote represent, and why were the two initial groups in Crespi's study (i.e. high versus low reinforcer) trained until they reached this point?

(Lecture 12, October 7th)

What two (2) hypotheses explain why previous reinforcement history influences response strength following a shift in reinforcement amount?

(Lecture 12, October 7th)

Use Capaldi's graph to demonstrate that the delay between the response and the reinforcer leads to groups 1 & 2 learning completely different associations between the stimulus (S), response (R), and the reinforcer (SR).

(Lecture 12, October 7th) -- SEE SLIDES

Using the Schlinger et al. (1994) graph, explain how "D" and "E" are represented in the methodological design and the findings that emerged from this study.

(Lecture 12, October 7th) -- SEE SLIDES

Using the diagram found on the last page of the lecture 13 review guide (with the rat and response chains), identify the secondary reinforcer and the discriminative stimulus (SD) functions at any step of a hypothetical chaining procedure (such as the one shown in the diagram).

(Lecture 13 Review Guide)

Define a variable-interval (VI) schedule.

A variable-interval (VI) schedule is a schedule where the amount of time that must pass before a reinforcer is stored varies unpredictably from reinforcer to reinforcer. An example of a VI schedule in real life would be checking for mail. (Chapter 6, p. 149)

What are the two (2) rules for reinforcement delivery on a DRH schedule?

1. The animal must make a certain number of responses 2. The animal must make this specified number of responses within a fixed amount of time (Chapter 6, p. 152)

What steps and rules are involved in the DRL schedule, and what impact does this schedule have on behavior?

1. The animal must wait for a certain amount of time to pass 2. If a response is made before the full amount of time has passed, the clock resets back to zero 3. Rates of responding are very low (Chapter 6, p. 152)
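
To keep the two rules straight, here is a minimal Python sketch of how each schedule decides whether the latest response earns a reinforcer. This is purely illustrative: the function names, the idea of logging response timestamps, and the sample numbers are assumptions, not anything from the textbook or lecture.

```python
# Minimal sketch of the DRH and DRL decision rules (illustrative only).
# response_times is a list of timestamps (in seconds) of responses so far.

def drh_reinforced(response_times, n_required, window):
    """DRH: the animal must make n_required responses within a fixed
    window of time for the latest response to be reinforced."""
    if len(response_times) < n_required:
        return False
    return response_times[-1] - response_times[-n_required] <= window

def drl_reinforced(response_times, min_wait):
    """DRL: a response is reinforced only if at least min_wait seconds
    have passed since the previous response; responding too early simply
    restarts the clock, since the early response becomes the new
    'previous response'."""
    if len(response_times) < 2:
        return False
    return response_times[-1] - response_times[-2] >= min_wait

taps = [0.0, 0.5, 1.1, 1.4, 1.9]
print(drh_reinforced(taps, n_required=5, window=2.0))  # True: 5 responses within 1.9 s
print(drl_reinforced(taps, min_wait=10.0))             # False: only 0.5 s since the last response
```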

What are the four (4) different schedules of reinforcement that we talked about in-class?

1. Fixed Ratio Schedule (FR) 2. Fixed Interval Schedule (FI) 3. Variable Interval Schedule (VI) 4. Variable Ratio Schedule (VR) (Lecture 13, October 14th)

Which (3) characteristics of a consequence (SR) influence response strength or vigor?

1. Quantity & Quality 2. Timing or Delay 3. Previous SR History (Lecture 12, October 7th)

What are the three (3) components of the Optimal Stimulation Theory that are used to describe the basic premise of what serves as reinforcer?

1. Reinforcers return an organism to an intermediate level of arousal, which is unique for each organism 2. Reinforcers are things that produce an increase or decrease in sensory stimulation 3. Reinforcers contribute to the brain's ingrained need for stimulation (Lecture 14, October 19th)

What is the purpose of the two (2) separate procedures involved in shaping?

1. Selective reinforcement (SR) of components of the target behavior(s) 2. Extinction where the reinforcement is gradually withheld for responses that were once reinforced (Lecture 11, October 5th)

List two (2) criticisms against the mentalism theory of behavior.

1. What about actions that have consequences? 2. Are mind states observable? (Lecture 10, September 30th)

Define the Differential Reinforcement of Low Rates (DRL) schedule.

A DRL schedule is when a response is reinforced if and only if a certain amount of time has passed since the previous response. A failure to wait the specified length of time results in a complete reset of the clock back to zero. This schedule produces very low rates of responding. (Chapter 6, p. 152)

What is a cognitive map?

A cognitive map is a general understanding of the spatial layout of a certain area. (Chapter 8, p. 202)

What is the difference between a conditioned reinforcer and a primary reinforcer?

A conditioned reinforcer is a previously neutral stimulus that has acquired the capacity to strengthen responses because that stimulus has been repeatedly paired with food or some other primary reinforcer. A primary reinforcer is a stimulus that naturally strengthens any response it follows. Primary reinforcers include food, water, sexual pleasure, and comfort. (Chapter 5, p. 120)

Define continuous reinforcement (CRF).

A continuous reinforcement (CRF) schedule is a schedule where every occurrence of the operant response is followed by a reinforcer. (Chapter 6, p. 142)

What constitutes a reinforcer according to the Need Reduction Theory (Hull)?

A reinforcer is any stimulus that reduces a biological need. (Lecture 14, October 19th)

Define a fixed-interval (FI) schedule.

A fixed-interval (FI) schedule is a schedule where the first response after a fixed amount of time has elapsed is reinforced. The cumulative record pattern from FI schedules is sometimes called a fixed-interval scallop. A real-life example of an FI schedule would be waiting for a bus. (Chapter 6, p. 147)

Define a fixed-ratio (FR) schedule.

A fixed-ratio (FR) schedule is a schedule wherein a reinforcer is delivered after every n responses, where n is the size of the ratio. As an example, in an FR 20 schedule, every 20 responses will be followed by a reinforcer. (Chapter 6, p. 144)

Explain how a functional analysis of reinforcers can be used to determine the causes of unusual or puzzling behaviors.

A functional analysis is a method that allows the therapist to determine what reinforcer is maintaining the unwanted behavior. By conducting a functional analysis of reinforcers, researchers can individually test out different types of automatic reinforcement (e.g. sensory stimulation from the behavior that may serve as its own reinforcer) to see if it explains why the behavior is occurring. (Chapter 8 Learning Objectives, p. 201)

What are the rules for reinforcers and what is the primary objective or expected outcome of performance on percentile schedules?

A percentile schedule is a schedule where a response is reinforced if it is better than a certain percentage of the last several responses that the learner had made. The primary objective on percentile schedules is that the learner will follow a gradually increasing schedule as measured by their own unique performance progress, as they are continually being measured against their own previous performance. In this way, percentile schedules are highly tailored to the individual learner, and have been successfully applied to the academic performance of children with developmental disabilities, as well as with increasing activity level in adults to improve their health. (Chapter 5, p. 123)
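
As a concrete illustration of the rule, here is a minimal Python sketch of a percentile schedule. The function name, the 10-response window, and the 60% cutoff are illustrative assumptions, not values from the textbook.

```python
# Minimal sketch of a percentile schedule: reinforce a response only if it
# beats a given fraction of the learner's own recent responses.

def percentile_reinforced(history, new_response, window=10, percentile=0.6):
    recent = history[-window:]                 # the learner's last few responses
    if not recent:
        return False
    beaten = sum(1 for r in recent if new_response > r)
    return beaten / len(recent) >= percentile

history = [4, 5, 5, 6, 7, 6, 8, 7, 9, 6]       # hypothetical response measures
print(percentile_reinforced(history, 8))       # True: 8 beats 80% of the last 10
```

Because the criterion tracks the learner's own history, the bar rises automatically as performance improves, which is why the schedule is self-tailoring.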

Define the post-reinforcement pause that sometimes occurs during an FR schedule.

A post-reinforcement pause is a pause in responding after each reinforcer during an FR schedule, that contributes to a "stop-and-go" pattern. An example of FR schedules in real life is the "piecework" method used to pay factory workers in some companies, where the workers are paid $10 for every 100 hinges made, for example. (Chapter 6, p. 144)

What types of procedures are used to understand the nature of the post-reinforcement pause?

A procedure called a multiple schedule is used to understand the nature of the post-reinforcement pause. In this schedule, there are two types of discriminative stimuli, which inform the organism of what is required of it. The discriminative stimuli used are blue and red lights, where the blue light corresponds to an FR 100 (i.e. the reinforcer will only be delivered after 100 responses), and the red light corresponds to an FR 10 (i.e. the reinforcer will be given after 10 responses). Interestingly, the pauses were larger after the shorter FR 10 and smaller after the longer FR 100. This is because the organism was anticipating the upcoming schedule in the time immediately after receiving the reinforcer: after completing the FR 100, the organism paused only briefly, because it knew that an FR 10 schedule was coming up, which required little effort on its part, whereas after completing an FR 10, the organism paused for a long time, because it was dreading the upcoming FR 100 schedule, which was quite labor intensive. (Lecture 13, October 14th)

Define reinforcement schedule.

A reinforcement schedule is simply a rule that states under what conditions a reinforcer will be delivered. (Chapter 6, p. 142)

What are adjunctive behaviors?

Adjunctive behaviors are a variety of stereotyped behaviors that occur when the next reinforcer is some time away and the animal must do something to "pass the time." (Chapter 5, p. 119)

How does Skinner's view of the "S-R-SR" relationship differ from that of Thorndike and Guthrie?

Both Thorndike and Guthrie embrace the idea of "backward causality," while Skinner is focused on "forward anticipation." (Lecture 11, October 5th)

Explain how the Discrimination and Generalization Decrement Hypothesis explains the changes in extinction-responding between organisms trained on a CRF versus the more extensive FR or VR schedules.

Both of these hypotheses have been used to explain Humphreys's paradox regarding CRF versus intermittent reinforcement schedules. The discrimination hypothesis says that in order for behavior to change once extinction begins, the individual must be able to discriminate the change in reinforcement contingencies. The generalization decrement hypothesis (from Capaldi) says that responding during extinction will be weak if the stimuli during extinction are different from those that were present during reinforcement, but it will be strong if these stimuli are similar to those encountered during reinforcement. Note that generalization decrement is a term for the decreased responding one observes in a generalization test when the test stimuli become less and less similar to the training stimulus. Therefore, these two hypotheses suggest that Humphreys's paradox can be explained in the following way(s): (1) behavior changes rapidly when the organism identifies that a change has occurred in predicted reinforcement delivery, or (2) behavior will be weak if the stimuli during extinction are different from the stimuli that were present during reinforcement. (Chapter 6, p. 151)

Looking at Figures 8.9 (p. 222) and 8.10 (p. 223) in chapter 8 of the textbook, explain what benefit or knowledge is gained by formulating "demand curves" and identifying the "peak output" of responding in these types of experiments.

By identifying the peak output of responding and formulating demand curves, researchers can predict behavior according to its cost (whether that is actual monetary cost for humans, or something like the number of lever presses for animals). (Chapter 8, p. 222-223)

What are some reasons why children with psychological problems may exhibit bizarre behaviors? How can a functional analysis determine the cause of such behaviors?

Children may show bizarre behaviors because they are receiving automatic reinforcement, where the sensory stimulation from the behavior itself may serve as its own reinforcer. A functional analysis can determine the cause of these behaviors by systematically testing different reinforcers to see if they explain the cause of the bizarre behavior. (Chapter 8 Review Questions, p. 227)

How does performance on a DRH schedule compare to performance on all other reinforcement schedules?

DRH schedules produce the highest rates of responding of any reinforcement schedule. (Chapter 6, p. 152)

Define the contra-freeloading effect.

The contra-freeloading effect is the finding, seen in most animals, that when an animal is offered a choice between free food and identical food that requires effort, the animal prefers the food that requires effort. The contra-freeloading effect was illustrated in an experiment by Davidson (1971), where rats learned to work (an FR 10 schedule of lever pressing) for a consequence (food). Then, the rats were given 8 days of free feeding. When they returned to the box, they were given a choice between free feeding or lever pressing on the FR 10 schedule. Surprisingly, the rats preferred to work on the FR 10 schedule for the consequence, rather than "freeloading" with free feeding. (Lecture 10, September 30th)

The fact that visual stimulation, exercise, and horror films can be reinforcers is a problem for ______ theory.

Drive Reduction (Chapter 8 Practice Quiz 2, p. 225)

Define the term elastic demand.

Elastic demand is a term used by economists when the amount of a commodity purchased decreases markedly when its price increases. For example, demand is usually elastic when close substitutes for the product are readily available (e.g. the demand for the specific Coca-Cola brand would drop dramatically if its price increased by 50% because people would just switch to other soda brands that tasted about the same). (Chapter 8, p. 222)
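
The degree of elasticity can be quantified with the standard economics ratio (general knowledge, not a formula given in the textbook): price elasticity of demand is the percentage change in quantity purchased divided by the percentage change in price, and demand counts as elastic when the magnitude of that ratio exceeds 1. A minimal sketch with hypothetical numbers:

```python
# Illustrative calculation of price elasticity of demand (hypothetical numbers).

def price_elasticity(q_old, q_new, p_old, p_new):
    pct_quantity = (q_new - q_old) / q_old   # % change in amount purchased
    pct_price = (p_new - p_old) / p_old      # % change in price
    return pct_quantity / pct_price

# A 50% price increase cuts purchases from 100 to 20 units: elastic demand.
print(price_elasticity(q_old=100, q_new=20, p_old=1.00, p_new=1.50))  # -1.6; magnitude > 1
```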

What shapes our behavior? Keep it simple; think broadly.

Expectations (Lecture 13, October 14th)

Describe the four (4) simple reinforcement schedules and the types of behavior they produce during reinforcement and extinction.

FR schedule: The fixed ratio schedule centers around the individual reaching a specific number of required responses in order to receive the reinforcer (the number of required responses doesn't change from trial to trial). It produces post-reinforcement pauses; graphs illustrate stair-stepped behavior (a stop-and-go pattern); and behavior, once resumed after a post-reinforcement pause, is constant and rapid until the next reinforcer is delivered.

FI schedule: The fixed interval schedule uses a specific interval of time for each trial, and the animal is only rewarded for the first response that occurs after this time interval has passed (the time intervals remain constant from trial to trial). It produces the lowest rate of responding; graphs illustrate scalloped behavior; the animal learns patience rather than perceived control; and behavior gradually builds as the delivery of the consequence (SR) nears.

VR schedule: The variable ratio schedule requires that a specific number of responses be performed before the delivery of the reinforcer (but the number of required responses varies unpredictably from trial to trial). It produces the highest rate of responding and a high amount of perceived control; uncertainty over reinforcer delivery plus perceived control yields high motivation to exhibit the behavior, as the more often the behavior occurs, the more rapidly reinforcers will be received even though their delivery is unpredictable.

VI schedule: The variable interval schedule requires that a specific interval of time pass before a response is reinforced (but the time interval varies unpredictably from trial to trial). It is the same as an FI schedule except that the amount of time that has to pass before a reinforcer is received varies unpredictably from reinforcer to reinforcer; behavior is moderate and steady (like checking the mail once a day).

(Chapter 6 Learning Objectives, p. 142)
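
To make the four decision rules concrete, here is a minimal Python simulation sketch. It is entirely illustrative: the function, its parameters, and the way variable requirements are redrawn after each reinforcer are assumptions, not anything from the course materials.

```python
import random

def run_schedule(kind, value, responses):
    """Count reinforcers earned by a list of response timestamps (seconds).
    `value` is the ratio size for FR/VR or the interval length for FI/VI;
    on VR/VI the requirement is redrawn after each reinforcer."""
    def draw():
        return value if kind in ("FR", "FI") else random.uniform(1, 2 * value)
    reinforcers, count, last_sr = 0, 0, 0.0
    req = draw()
    for t in responses:
        count += 1
        if kind in ("FR", "VR"):
            earned = count >= req          # enough responses since the last reinforcer
        else:
            earned = t - last_sr >= req    # first response after the interval has passed
        if earned:
            reinforcers += 1
            count, last_sr, req = 0, t, draw()
    return reinforcers

taps = [i * 0.5 for i in range(1, 41)]     # 40 responses spread over 20 seconds
print(run_schedule("FR", 10, taps))        # 4: every 10th response pays off
print(run_schedule("FI", 5.0, taps))       # 4: the first response after each 5 s interval
```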

Give examples of reinforcement schedules from everyday life.

FR: working on a schedule where you get paid $10 for every 100 products you make FI: studying (or rather, procrastinating) before exams VR: gambling on games of chance like slot machines VI: checking the mail (usually) once a day (Chapter 6 Learning Objectives, p. 142)

The fact that such things as sex and artificial sweeteners are reinforcers is a problem for ______ theory.

Need Reduction (Chapter 8 Practice Quiz 2, p. 225)

Give a concrete example of how shaping can be used in a behavior modification program with a human learner.

For example, the textbook talked about a little boy who needed to learn to use a mask that delivered vital medication for his serious respiratory condition. The therapists used shaping, first giving the boy a reinforcer whenever he wore the mask for just 5 seconds; the criterion for reinforcement was then gradually increased over a period of several weeks until he was using the mask for the full duration of 40 seconds that he needed. (Chapter 5 Review Questions, p. 139)

Explain B. F. Skinner's free-operant procedure, three-term contingency, and the basic principles of operant conditioning.

Free-operant procedure: A procedure where the operant response can occur at any time, and can occur repeatedly for as long as the subject remains in the experimental chamber (p. 125)

Three-term contingency: Skinner proposed that there are three components in the operant conditioning contingency: (1) the context or situation in which a response occurs, which we also call the stimuli, (2) the response itself, and (3) the stimulus that follows the response, also known as the reinforcer (p. 126)

Basic principles of operant conditioning:
1. Acquisition, the gradual process of acquiring an operant response, much like the acquisition of a CR (conditioned response) (p. 126)
2. Extinction, where the operant response is no longer followed by a reinforcer (p. 126)
3. Spontaneous recovery, where the subject will again exhibit the operant response when returned to the experimental chamber after extinction (p. 126)
4. Discrimination, a type of learning that can occur in classical conditioning as well, which occurs when the operant response will only occur in the presence of a specific stimulus (p. 126)
5. Generalization, a phenomenon that occurs when the operant response is applied in the presence of a similar stimulus, like a green light instead of a yellow light (p. 126)
6. Generalized reinforcers, Skinner's term for a special class of conditioned reinforcers that are associated with a large number of different primary reinforcers (p. 127)
7. Response chains, sequences of behaviors that must occur in a specific order, with the primary reinforcer being delivered only after the final response of the sequence (p. 128)

(Chapter 5 Learning Objectives, p. 113)

Describe how the performance of a specific group in the Tolman study on latent learning revealed the differential contribution of reinforcement in influencing learning versus performance.

Group 3 experienced both conditions of the study (no reinforcer, or food as a reinforcer), so that the researchers could see what happened after switching the reinforcement conditions on the rats in this group. During the first half of the trials, the rats did fairly poorly because they received no reinforcement and weren't motivated to display their learning. But once food was available in the second half of the trials, the rats did quite well, even better than Group 2 (rats that were consistently reinforced with food throughout the study). This showed the researchers that reinforcement acted as a motivator for the rats to display what they had learned previously (i.e. latent learning). (Chapter 8, p. 203)

What conclusion did Guthrie make regarding his findings with cats in a puzzle box?

Guthrie concluded that stimuli (S) acting at the time of a response (R) tend to evoke that response when they recur. (Lecture 10, September 30th)

How did Guthrie envision the role of reinforcers in learning and the "S-R-SR" relationship?

Guthrie thought that reinforcers (SR) played only a limited role in the learning process, and that their main purpose was to preserve or protect the stimulus-response (S-R) bonds that were formed during the learning process. (Lecture 10, September 30th)

Briefly explain Guthrie's transparent puzzle box experiment (1946).

Guthrie used cats (just like Thorndike did) for his transparent puzzle box experiment. The goal was for the cat to learn to escape the box by displacing a pole that opened the front door. A cat was placed into the back of a transparent puzzle box. During the first three (3) trials the front door remained open, and food was available after exiting via the front door. On the remaining trials, the cat could only escape by displacing the pole that opened the front door. Guthrie took snapshots of the cats to capture their exact responses. From his snapshots, Guthrie was surprised to see a tremendous amount of stereotypy displayed by each cat. He also noted that there was a "startling repetition of movement during the whole stay in the box." (Lecture 10, September 30th)

What are Guthrie's two (2) viewpoints on learning?

Guthrie's viewpoints on learning were that: 1. Not all behavior can be explained by its consequences, since several behaviors are not goal directed or purposeful. 2. Association by contiguity is the basis for most learning (S-R). Note that Guthrie's equation is S-R only; there is no reinforcer or consequence (SR) term. (Lecture 10, September 30th)

What is an everyday, real-life example where the quality of a consequence produces a greater level in response strength than that produced by the quantity?

- Having quality money (a single $100 bill) rather than a quantity of money (lots of $1 bills)
- Having a high-quality small meal at a fancy restaurant instead of a high quantity of mediocre buffet food
- Having a smaller bottle of "higher quality" (Fiji) water instead of a larger bottle filled with tap water
(Lecture 12, October 7th)

List five (5) theories about how we can predict what will serve as a reinforcer, and discuss their strengths and weaknesses.

Here are the five theories about what constitutes a reinforcer:
1. Need Reduction Theory (p. 210)
2. Drive Reduction Theory (p. 210)
3. Trans-situationality Theory (p. 211)
4. Premack's Principle (p. 212)
5. Response Deprivation Theory (p. 216)

Here are the strengths (pros) and weaknesses (cons) of each theory:

Need reduction pro: Reinforcers decrease biological needs.
Need reduction con: Doesn't include reinforcers for non-biological needs, like certificates, trophies, smiles, kind words, etc.

Drive reduction pro: Reinforcers decrease tension caused by drives or biological needs.
Drive reduction con: Doesn't take into account pleasurable tension, like thrill seeking or exercise.

Trans-situationality pro: Reinforcers can be motivating in more than one context or situation.
Trans-situationality con: Ignores cases where a reinforcer in one situation doesn't act as a reinforcer in another situation, like how an ice-cold refreshing glass of water on a hot summer day isn't so motivating in a freezing igloo.

Premack's Principle pro: Reinforcers can be behaviors instead of just stimuli, and these reinforcing behaviors are measured on a scale from low to high probability of occurring (preference); the more probable or favorable behaviors will actually reinforce the less probable behaviors.
Premack's Principle con: Doesn't explain why sometimes the opposite is true: in real life, low-probability behaviors can sometimes serve as reinforcers for high-probability behaviors (like the study where the low-probability behavior of running served as a reinforcer for the high-probability behavior of drinking, with rats drinking in order to earn the opportunity to run).

Response deprivation pro: Reinforcers change according to which behavior has been restricted, and this restricted behavior will then serve as a reinforcer regardless of whether it's a high- or low-probability behavior.
Response deprivation con: Few; the theory's predictions allow researchers to be correct nearly 100% of the time when predicting what will serve as a reinforcer.

(Chapter 8 Learning Objectives, p. 201)

Using the slides from class on October 19th, identify the location of each structure in the Tegmentostriatal Pathway, and describe their individual functions, connections, and type of neurotransmitters involved in their activation.

Here's how the tegmentostriatal pathway uses dopamine to produce pleasurable sensations in response to reinforcers:
1. The lateral hypothalamus detects reinforcement-related stimuli. Norepinephrine is released by the medial forebrain bundle (the MFB is the bundle formed by the axons of the lateral hypothalamus).
2. The ventral tegmental area (VTA) is activated by the release of norepinephrine.
3. The activation of the VTA triggers the nucleus accumbens to release dopamine, which induces hedonic sensations.
4. The septum releases endorphins, which also produce hedonic sensations.
5. All of this information is sent to the prefrontal cortex (PFC), which then makes decisions and plans future responses in relation to reinforcement-related stimuli.
(Lecture 14, October 19th)

Identify the actual VI schedule on a cumulative graph when given the time between reinforcer delivery and the total time of a training session.

Here's how to calculate a VI schedule: 1. Identify the total amount of time (cumulative) 2. Divide this cumulative duration by the number of reinforcers that were given. For example, let's say that the total amount of time was a cumulative 28 minutes, and that we also know that a total of 7 reinforcers were given. To calculate this VI schedule, we would divide 28 by 7, which gives us a schedule of VI 4. (Lecture 13, October 14th)
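
In code form the calculation is a single division (a trivial sketch; the function name is illustrative):

```python
# VI value = cumulative session time divided by the number of reinforcers given.
def vi_value(total_minutes, reinforcers_given):
    return total_minutes / reinforcers_given

print(f"VI {vi_value(28, 7):g}")  # prints "VI 4", matching the example above
```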

Who was Edwin Guthrie?

Historical Background: Guthrie was trained as a mathematician and philosopher at the University of Nebraska. After studying philosophy, Guthrie said that "the search for absolute knowledge and truth is fruitless, since this author spent 400 pages of tenuous knowledge to establish that 1 + 1 = 2." His goal was to make predictions about things he considered worth knowing, rather than following the pursuits of other psychologists. After all, Guthrie believed that a precise explanation of uninteresting things was not worth much at all! Viewpoint on Learning: Guthrie believed that not all behavior can be explained by its consequences, since several behaviors are not goal directed or purposeful. He also thought that association by contiguity is the basis for most learning (S-R). The two lines of evidence that he used to support his S-R position were (1) his experiment with the transparent puzzle box, and (2) the contra-freeloading effect. (Lecture 10, September 30th)

Who was Edward Thorndike?

Historical Background: Thorndike was the first psychologist to study how the consequences of an action influence learning and voluntary behavior. His initial interest (at Harvard) was mentalism in children (i.e. mind reading via facial expressions). His focus then changed (at Columbia University) to studying whether animals possessed insight, intelligence, or reasoning during the learning process. Summary of Viewpoint on Learning: Thorndike believed exactly the opposite of the philosopher Plato, that learning and problem-solving involve trial and error rather than insight. Thorndike's Law of Effect explains how consequences strengthen the relationships between stimuli and responses during learning. He thought that consequences selected responses and then connected them to stimuli. (Lecture 10, September 30th)

Summarize the different viewpoints on the impact of consequences (SR) on learning from Thorndike, Guthrie, and Skinner. Be able to identify the visual S-R-SR representation that corresponds with each psychologist.

How consequences (SR) impact learning, according to...
Thorndike: STRENGTHEN "bonds" or associations between S and R: (S - R) <---strengthen--- SR
Guthrie: PRESERVE S-R bonds that are formed as a result of the learning process: (S - R) ||||preserve|||| SR
Skinner: Strengthen RESPONSES that precede their delivery, resulting in the organism developing an expectation: (R) ---strengthen---> SR
(Lecture 11, October 5th)

Define a variable-ratio (VR) schedule.

In a variable-ratio (VR) schedule, the number of required responses is not constant from reinforcer to reinforcer. On average, a subject will receive one reinforcer for every n responses, but the exact number of responses required at any moment may vary widely. For example, many forms of gambling operate on VR schedules; games of chance like slot machines exhibit two important characteristics of VR schedules: (1) a person's chances of winning are directly proportional to the number of times the person plays, and (2) the number of responses required for the next reinforcer is uncertain. (Chapter 6, p. 146)

List the types of early experimental evidence that led to the development of the Dopamine Theory of Reinforcement.

In an experiment by Olds & Milner in 1954, the medial forebrain bundle (MFB) was accidentally discovered! Olds & Milner wanted to stimulate the reticular activating system (RAS) by implanting an electrode there. However, the implant bent upon impact, and unbeknownst to the researchers, they were actually stimulating the MFB instead. As the experiment went on, the researchers found that their rat subjects actually ended up dying because they wouldn't stop pressing the lever for stimulation (of the MFB); they chose this over their biological needs of drinking and eating. This study showed that stimulation of the MFB produced a pleasurable sensation of enjoyment. (Lecture 14, October 19th)

List some real-life examples of the Optimal Stimulation Theory and its relationship to arousal.

In class, some examples that the professor mentioned included:
- People putting in earbuds after class as they're walking outside to restore their optimal internal level of arousal
- People that need time away from others after having too much sensory stimulation
- People that enjoy thrill seeking due to a naturally higher intermediate level of arousal
(Lecture 14, October 19th)

Explain how an understanding of the function and significance of the medial forebrain bundle (MFB) in reinforcement and reward has been applied to resolve problems within our society.

In class, we talked about the nature of drug addiction, and how knowledge of the MFB's role in reinforcing drug addiction has led scientists to develop drugs to counteract this process. (Lecture 14, October 19th)

In a cumulative record, fast responding is indicated by a ______, and no responding is indicated by a ______.

steep line; horizontal line (Chapter 6 Practice Quiz 1, p. 155)

In the videotape by Harlow that we watched in class on October 19th (with the baby monkeys), how were the principles of the Optimal Stimulation Theory illustrated?

In the video, we saw that baby monkeys preferred something that affected their internal level of arousal over their biological needs when they chose the cloth mother instead of the familiar wire mother. (Lecture 14, October 19th)

Explain how the rationale underlying the experimental procedures of Madden, Smethells, Ewan, & Hursh (2007) was used to illustrate the concepts of elastic and inelastic demand.

In this experiment, the researchers increased the "price" of the food reinforcer for rats by increasing the number of lever presses that were required before reinforcement. In the first phase of the study, the reinforcer was food, but for the second phase of the study, the reinforcer was fat. The results indicated that the demand for fat was more elastic (i.e. product demand decreases when the price increases) than for food pellets. As the size of the FR schedules increased (increasing the number of responses required), the rats' consumption of fat decreased sharply. (Chapter 8, p. 222-223)

Explain how Tolman's experiment resolved the issue of "whether reinforcement is necessary for an organism to learn new forms of voluntary behavior."

In this study, rats were divided into three groups: Group 1 was never fed in the maze, Group 2 got food as a reinforcer once they reached the goal box, and Group 3 experienced two different conditions (in the first part of the trials there was no food, but about mid-way through the trials food became available). The results were as follows:
Group 1: These rats showed much poorer performance.
Group 2: This consistent reinforcement led to the rats displaying a typical learning curve.
Group 3: This group of rats showed two different types of behavior according to the experimental condition: during the first half of the study with no food, the rats performed identically to Group 1, but during the second half of the study when they got food, the rats improved dramatically and even made slightly fewer errors than Group 2.
This tells us that at first, when the Group 3 rats didn't get food, they weren't motivated to display what they had learned about the maze, but after food was available to them, the rats translated their learning into performance. Therefore, reinforcement is not necessary for the learning of a new response, but it is necessary for the performance of that response! (Chapter 8, p. 203)

What were the dependent and independent variable(s) in the Tolman and Honzik study on latent learning?

In this study, the thing being manipulated by the researchers, or the independent variable (IV), was the availability of food as a reinforcer for the rats. The dependent variable (DV), the measured behavior, was the average number of errors (i.e. wrong turns) made by each group of rats. (Chapter 8, p. 203)

Define the term inelastic demand.

Inelastic demand is a term used by economists when the changes in the price of a product have relatively little effect on the amount purchased. Products have inelastic demand when there aren't any close substitutes (e.g. the demand for gasoline is fairly inelastic because many people have no alternative to driving their car as a means of transportation). (Chapter 8, p. 222)

Define instinctive drift, and explain why some psychologists believed that it posed problems for the principle of reinforcement.

Instinctive drift, a phenomenon named by the Brelands, describes the biological constraint on operant learning that occurs when an animal learns a new response but has difficulty maintaining it over time. New behaviors may appear that are not reinforced, such as the animal reverting to its natural evolutionary behaviors (p. 132). Some psychologists believed that it posed problems for the principle of reinforcement, because animals were exhibiting behaviors that trainers didn't reinforce in place of behaviors the trainers had reinforced. (Chapter 5 Learning Objectives, p. 113)

What are the main differences between instrumental learning and operant learning?

Instrumental learning occurs when the organism's responses act on the environment (i.e. the organism's responses are instrumental). This type of learning was the area of study for Thorndike, Capaldi, Guthrie, and Crespi. With instrumental learning, learning is measured through the organism's latency (i.e. how fast a response is emitted). With this methodology, the organism learns through discrete trials, which adhere to the following sequence: (1) the organism makes a response, (2) the organism receives a reinforcer, (3) a new trial begins after this combination is achieved. Operant learning occurs due to the organism's ability to use its responses to operate on the environment. This type of learning was discovered by B. F. Skinner. With operant learning, learning is measured through the organism's rate of responding (i.e. how frequently responses are emitted). Here, the organism learns through the free-operant method, which allows the organism to make as many responses as it chooses (i.e. the organism, not the experimenter, controls its own rate of responding). This means that the organism can determine how many reinforcers it receives based on its behavior. (Lecture 13, October 14th)

What is the difference between the behavior patterns of interim behaviors and terminal behaviors?

Interim behaviors are behaviors occurring during the early part of a time interval. Terminal behaviors are behaviors occurring as the time of food delivery draws near. (Chapter 5, p. 119)

Define latent learning.

Latent learning is a form of learning that is not immediately expressed in an overt response. For example, with humans, latent learning refers to knowledge that only becomes clear when a person has an incentive to display it. (Chapter 8, p. 203)

Using the diagram in the textbook (p. 144) from chapter 6, explain the real reason why "pauses" in responding occur after reinforcement.

Looking at the FR schedule, we can see pauses that occur after each reinforcer is given (as marked by the diagonal lines). These post-reinforcement pauses occur in response to the upcoming FR schedule, where the animal is mentally preparing for the next task and the amount of work that is upcoming. (Lecture 13, October 14th)

Explain the model that was developed to explain the Drive Reduction Theory.

Maslow's Hierarchy of Needs organizes drives in order of how necessary they are to survival, with more basic needs (i.e. biological needs) at the base of the pyramid, and higher-level needs (i.e. self-fulfillment needs) at the very top. Organisms must fulfill their needs starting at the base of the pyramid, with the biological needs, before moving up to address psychological needs and self-fulfillment needs. (Lecture 14, October 19th)

Define mentalism.

Mentalism is an ideology that says that all thoughts, hopes, expectations, and the actions that follow are caused or preceded by mental processes. Another way of looking at this is that all actions occur because of mind states. (Lecture 10, September 30th)

What are need-reduction theory, drive-reduction theory, and the principle of trans-situationality? What are their weaknesses? How do Premack's principle and response-deprivation theory predict what will serve as a reinforcer?

Need reduction theory: This theory says that a reinforcer is something that reduces a biological need. Its weakness is that it doesn't account for reinforcers of non-biological needs.

Drive reduction theory: This theory says that a reinforcer is something that decreases biological needs and/or reduces drives/tension. Its weakness is that it ignores reinforcers that actually create tension (like exercise or thrill seeking).

Principle of trans-situationality: This principle says that a stimulus that serves as a reinforcer in one situation may also serve as a reinforcer in a different situation or context. Its weakness is that it ignores stimuli that do not serve as reinforcers across different contexts (like an ice-cold glass of water on a hot summer day versus in the middle of a blizzard).

Premack's principle: This principle says that behaviors serve as reinforcers for other behaviors, and that all behaviors can be ranked on a probability scale from low to high (a ranking unique to each organism based on its preferences). It might seem counterintuitive, but high-probability behaviors will actually serve as reinforcers for low-probability behaviors.

Response-deprivation theory: This theory says that a reinforcer is a behavior that has been restricted or withheld, regardless of whether it is a high- or low-probability behavior.

(Chapter 8 Review Questions, p. 227)
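
Here is a minimal Python sketch contrasting the last two prediction rules, in an illustrative formulation with hypothetical probabilities (the course materials do not present the theories this way):

```python
# Premack: the contingent (reward) behavior reinforces the instrumental
# behavior only if the contingent behavior is the more probable of the two.
def premack_predicts(p_contingent, p_instrumental):
    return p_contingent > p_instrumental

# Response deprivation: the contingent behavior reinforces the instrumental
# behavior whenever access to it is restricted below its free baseline.
def response_deprivation_predicts(baseline_rate, allowed_rate):
    return allowed_rate < baseline_rate

# Running is low-probability (0.2) and drinking is high-probability (0.6):
print(premack_predicts(p_contingent=0.2, p_instrumental=0.6))               # False
# ...yet restricting running below its baseline makes it a reinforcer anyway:
print(response_deprivation_predicts(baseline_rate=0.2, allowed_rate=0.05))  # True
```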

Responding on an FR schedule typically shows a ______ pattern, and responding on an FI schedule typically shows a ______ pattern.

stop-and-go (stair-stepped behavior); accelerating (scalloped behavior) (Chapter 6 Practice Quiz 1, p. 155)

Discuss explanations of why responding is faster on variable-ratio (VR) schedules than on variable-interval (VI) schedules.

On VR schedules, responding is faster because... 1. There is higher perceived control over the outcome 2. Uncertainty increases motivation to perform responses 3. The opportunity to receive a reinforcer is directly proportional to the number of times the behavior is performed (Chapter 6 Learning Objectives, p. 142)

Define the Differential Reinforcement of High Rates (DRH) schedule.

On a DRH schedule, a certain number of responses must occur within a fixed amount of time. This schedule produces the highest rate of responding of any reinforcement schedule. (Chapter 6, p. 152)

What are the rules for administering a reinforcer in a variable interval (VI) schedule?

On a VI schedule, the amount of time that must pass before the animal can receive a reinforcer is not fixed, but instead varies unpredictably. Usually, VI schedules produce a steady, moderate response rate (p. 149). The rules for this schedule say that the first response to occur after a reinforcer is stored collects that reinforcer, and the clock doesn't start again until the reinforcer is collected. In other words, the first response after the time interval has passed is rewarded. (Lecture 13, October 14th)

How does the organism's perception of voluntary control differ between the experiences of an FR schedule and an FI schedule?

On a fixed ratio (FR) schedule, animals learn that their behaviors have direct control over the environment, because once they reach a certain number of responses, they get a reward, and this leads them to believe that their responses have an impact on the outcome. With a fixed interval (FI) schedule, all animals learn is how to be patient and wait for a certain amount of time to pass. Their actions have no effect on their environment, and they learn that they have little to no control. (Lecture 13, October 14th)

What are the criteria for delivering a reinforcer on a variable ratio (VR) schedule, in comparison to a fixed ratio (FR) schedule?

On a variable ratio (VR) schedule, a reinforcer is given after a certain number of responses has been made, but this required number varies unpredictably. On a fixed ratio (FR) schedule, a reinforcer is given after a certain number of responses have been made, and this required number of responses remains fixed. (Lecture 13, October 14th)

List some common human examples of behaviors that correspond to Guthrie's viewpoint that "we do what we did the last time we were in that situation."

On the lecture slides, there was an example of procrastinating studying for an exam and attempting to cram in the material the night before. We resort to doing this behavior not because it is effective (it's not), but rather because it's what we did last time we were in this situation of having an exam in the morning. (Lecture 10, September 30th)

Describe one biofeedback procedure used to treat a medical problem. What type of feedback is given, how do subjects respond, and how effective is the treatment in the long run?

One biofeedback procedure used to treat tension headaches was to use EMG-translated audible clicks to allow patients to manually control their muscle tension, which would reduce the frequency of the clicks. It was very effective, and patients were able to control their tension without the EMG feedback even after the study ended. (Chapter 8 Review Questions, p. 227)

Give examples of how the principles of operant conditioning have been used in behavior modification with children and adults.

Operant conditioning has been used to teach language to kids with autism. This was accomplished by using food as a primary reinforcer; shaping to gradually increase the criterion for reinforcer delivery; other conditioned reinforcers (stimuli such as saying "Good job!" or giving the child a hug) so as not to rely only on food, which raises the issue of eventual satiation; prompts (any stimulus that makes a desired response more likely), like physical guidance to aid in mouth and lip movements; and fading (a procedure to gradually withdraw whatever prompt is being used). (Chapter 6 Learning Objectives, p. 142)

Explain which individual brain circuits were manipulated by scientists in order to take complete control of the RATBOT's behavior.

RATBOT's behavior was manipulated by scientists who used the rat's pleasure center (the MFB) to motivate it to make the correct left or right turns. This was accomplished through an electrode that stimulated the MFB. (Lecture 14, October 19th)

Define the term ratio strain.

Ratio strain is the term used to describe the general weakening of responding that is found when large ratios are used. For example, with very large ratios, the animal may start to exhibit long pauses at times other than right after reinforcement. (Chapter 6, p. 145)

Discuss whether performing a response and receiving a reinforcer are essential in the learning and in the performance of a new behavior.

Receiving a reinforcer is essential in the performance of a new behavior, but it is actually not essential in the learning of a new behavior (p. 203). This can be seen in the example that the textbook discussed: The famous latent learning experiment by Tolman and Honzik (1930). The results of this study indicated that rats that did not have the reinforcer of food were not motivated to display what they had learned, and were therefore unable to translate their learning into performance. (Chapter 8 Learning Objectives, p. 201)

What are the roles of reinforcers (SR's) in "selecting" and "connecting" during learning?

Reinforcers (SR) select responses and connect them to stimuli. (Lecture 10, September 30th)

What are the roles of reinforcers (SR's) in strengthening connections?

Reinforcers (SR) strengthen connections between responses (R) and stimuli (S) so that responses (R) that lead to satisfaction will be more firmly connected to a given event. When the event recurs, the response will be more likely to occur as well. (Lecture 10, September 30th)

How were the three (3) different groups of rats treated in Tolman and Honzik's classic experiment on latent learning? How did each of the three groups perform, and what did Tolman and Honzik conclude?

Reinforcers: Group 1 got no food, Group 2 consistently got food, and Group 3 got no food for the first half of the study and then switched to receiving food during the second half of the study. Performance: Group 1 did the worst, Group 2 showed a basic learning curve, and Group 3 did just as well as, if not marginally better than, Group 2. Conclusion: Reinforcement is necessary for the performance of a learned response, but not for learning itself. (Chapter 8 Review Questions, p. 227)

What experimental procedures did G. S. Reynolds (1975) include to ensure that pigeons on a VR or VI received the same frequency of reinforcers throughout the 6-minute training session?

Reynolds yoked the pigeons on the two reinforcement schedules so that reinforcer delivery for each pigeon was contingent upon the other pigeon's behavior; this kept the frequency of reinforcement the same for both birds, even though they were on different schedules. (Lecture 13, October 14th)

Describe the procedure of shaping and explain how the separate components of the definition explain the actual process involved in modifying behavior.

Shaping is composed of two parts: (1) Selective reinforcement of the target behavior through successive approximations, and (2) Extinction of undesired responses by gradually withholding reinforcement for behaviors that were previously reinforced. (Lecture 11, October 5th)

Define shaping.

Shaping is the process of gradually increasing the criterion for reinforcement delivery by selectively reinforcing successive approximations of a target behavior, and following up with extinction to gradually withhold reinforcement of behaviors that were once previously reinforced. (Lecture 11, October 5th)

Describe the procedure of shaping and explain how it can be used in behavior modification.

Shaping is the process of using successive approximations toward a target behavior or desired outcome (p. 121). For example, to use shaping to teach a dog to jump quite high in the air, you would first reward the dog for even lifting its head, then for raising its front legs off the ground, then for standing on its hind legs, then for a small jump, and so on until the desired height has been achieved. (Chapter 5 Learning Objectives, p. 113)

How were the basic procedures of shaping used to produce more accurately guided missiles during WW2?

Shaping was used to train pigeons to peck at a target on a screen with high accuracy, thereby guiding the missile to its destination. (Lecture 11, October 5th)

What procedure was used to produce "superstitious behaviors" in pigeons?

Skinner conducted the famous "superstitious experiment" with pigeons. This experiment provided evidence for "R-SR bonds." The experiment was set up so that every 15 seconds, food (SR) would drop into the chamber. There was no stimulus or specific response required for the delivery of the food (SR). Results showed that pigeons exhibited superstitious behaviors, things like counterclockwise turns, head pokes at the chamber, head tossing, and side to side swaying. It was concluded that the pigeons' superstitious behaviors were actually just whatever behaviors were occurring (R) before the food (SR) was delivered, and therefore these behaviors (R) were then strengthened by the delivery of the food (SR). (Lecture 11, October 5th)

How does Skinner explain the behaviors documented in graphs by Thorndike and Guthrie?

Skinner thought that organisms developed an expectation that a given response will be followed by a specific SR, and he explained the behaviors seen in the graphs by saying that these were superstitious behaviors. (Lecture 11, October 5th)

What were Staddon & Simmelhag's (1971) criticism of Skinner's "superstitious experiment?" Additionally, which innate responses represent the two (2) types of adjunctive behaviors proposed by this group? Explain when each of these types of adjunctive behaviors are more likely to be exhibited.

Staddon and Simmelhag questioned whether the responses observed were due to reinforcement, or whether they were just adjunctive behaviors. They also criticized the results by suggesting that the responses observed were actually interim behaviors and terminal behaviors. Interim and terminal behaviors are two types of adjunctive behaviors. Interim behaviors occur after the delivery of the reinforcer, and last until the time interval is complete. Terminal behaviors occur as the expected reinforcer delivery nears. (Lecture 11, October 5th)

What are the parallels between stamping in and "the four (4) steps involved in the process versus the product of learning?"

Stamping in corresponds to the time after the organism experiences new stimuli, when it begins to understand the relationships between stimuli and then forms associations based upon those relationships and assigns meaning. This leads to the organism stamping in the new knowledge it has gained, or using its newly modified old responses. (Lecture 10, September 30th)

How are stamping in and stamping out an integral part of learning or "trial and error learning?"

Stamping out refers to the early part of the learning process, when the organism must stamp out or remove its previous experience and associations with the presented stimuli that are not useful in obtaining the desired consequence (i.e. reward). Stamping in refers to the part of the learning process when the organism understands the relationships between stimuli (those that lead to the desired consequence), forms associations based upon those relationships, and assigns meaning (forms memories about the correct responses required to receive the desired consequence). (Lecture 10, September 30th)

How can economic concepts such as price, elasticity, and substitutability be applied to drug abuse? How do addictive drugs compare to other reinforcers? See Box 8.2 on p. 224-225.

Studies have explored the elasticity of different drugs using animal subjects. Using FR schedules of different sizes, researchers can determine how the "price" of a drug affects its consumption. Research with animals has also shown that substitutability can affect the demand for a drug. In one study that had baboons choose between food or heroin, the researchers found that when resources were plentiful (both food and heroin were available every 2 minutes) the baboons consumed food and heroin equally often, but when resources were scarce (food and heroin were only available every 12 minutes), the baboons almost always chose food. These results suggested to researchers that even drugs conform to standard economics, and that drug consumption will decrease if the cost gets high enough. Studies with humans on nicotine, caffeine, and alcohol addiction have shown that as the price of a drug increases, or as substitute reinforcers become available, drug consumption decreases. (Chapter 8 Review Questions, p. 227)

What are superstitious behaviors?

Superstitious behaviors are rituals or behaviors performed as a result of accidental reinforcement. (Chapter 5, p. 117)

Discuss how the principle of reinforcement can account for superstitious behaviors.

Superstitious behaviors often arise when an individual actually has no control over the events taking place. These behaviors emerge due to accidental reinforcement. Accidental reinforcement occurs when an organism believes that their behavior has an impact on the outcome of an event, but in reality, no behavior is required for reinforcement, and the association that the organism has made is therefore completely superstitious in nature. (Chapter 5 Learning Objectives, p. 113)

Describe the Dopamine Theory of Reinforcement.

The Dopamine Theory of Reinforcement (Olds & Milner) says that a reinforcer is something that triggers the release of dopamine and hedonic sensations in the brain. (Lecture 14, October 19th)

Describe the Drive Reduction Theory, and describe the problem with this theory.

The Drive Reduction Theory (Hull and his student Neal Miller) says that a reinforcer is any stimulus that fulfills a biological need and/or reduces drives. When needs go unmet, unpleasant tension (drive) builds, and this tension energizes and motivates the organism to respond. Reinforcers are stimuli or events that reduce this unpleasant tension by replenishing biological resources and satisfying drives. Drives include things like love, safety, achievement, acceptance, etc. We can think about drives as relating to Maslow's Hierarchy of Needs. A problem with this theory is that it only stresses the reduction of tension. However, this is not always the case in real life, where tension itself often serves as a reinforcer (e.g. exercise, loud music, thrill seeking, fast driving, etc.). (Lecture 14, October 19th)

How does Thorndike's Law of Effect explain learning performance in the puzzle boxes?

The Law of Effect says that: 1. Responses that lead to satisfaction will be strengthened and more firmly connected to a given event (so that when that event recurs, the response will be more likely to occur as well). 2. Responses preceding discomfort will have their connections with the event weakened, so that they will be less likely to be displayed when the event recurs. (Lecture 10, September 30th)

Describe Thorndike's Law of Effect and experiments on animals in the puzzle box.

The Law of Effect says that: 1. Responses that lead to satisfaction will be strengthened and more firmly connected to a given event (so that when that event recurs, the response will be more likely to occur as well). 2. Responses preceding discomfort will have their connections weakened with the event, so that they will be less likely to be displayed when the event recurs. Thorndike's puzzle box experiment with cats was set up in the following way: 1. Stray cats were captured from alleys and used as subjects for this experiment. 2. The cats were placed in a puzzle box where they had to learn to successfully pull a loop in order to escape and receive food. 3. During the earlier trials, the cats had to stamp out old behaviors associated with the stimuli that were no longer effective in obtaining the desired consequence (SR) of food. 4. During the remaining trials, cats engaged in the learning process by stamping in new behaviors (R) associated with the stimuli that were successful for obtaining food. 5. The results of this study supported Thorndike's view that consequences (SR) strengthened the bonds formed as a result of the associations between stimuli (S) and responses (R). (Chapter 5 Learning Objectives, p. 113)

Thorndike referred to the principle of strengthening a behavior by its consequences as ______; in modern terminology, this is called ______.

The Law of Effect; reinforcement (Chapter 5 Practice Quiz 1, p. 124)

What are the shortcomings of the Need Reduction Theory?

The Need Reduction Theory (Hull) is too limited and doesn't include reinforcers for non-biological needs. (Lecture 14, October 19th)

Describe the Need Reduction Theory and explain what the issue is with this theory.

The Need Reduction Theory (Hull) says that a reinforcer is any stimulus that reduces a biological need (e.g. hunger, thirst, waste removal, respiration, and temperature regulation). In this theory, stimuli (e.g. food, liquid, oxygen, warmth, and excretion) serve as reinforcers to biological needs. The issue with this theory is that it is too limited, and doesn't include reinforcers for non-biological needs — things like kind words, smiles, trophies, certificates, etc. (Lecture 14, October 19th)

Describe the Optimal Stimulation Theory.

The Optimal Stimulation Theory (Harlow) says that a reinforcer is any stimulus that returns the organism to an intermediate level of arousal, which is unique for each organism. Internal level of arousal is linked to the level of sensory stimulation from the environment. Therefore, Harlow thought that a reinforcer was something that produced any type of increase or decrease in sensory stimulation, when these changes restored the organism to an intermediate level of arousal. With this theory, organisms are seeking to fulfill the brain's ingrained need for stimulation, and reinforcers help to guide the organism to its optimal level of brain stimulation. (Lecture 14, October 19th)

How does the contrafreeloading effect support Guthrie's viewpoints?

The contrafreeloading effect supports Guthrie's viewpoints because it shows that the animal prefers to work (R) for food (SR): the last time it was in that situation of being placed in a Skinner box (S), it was on an FR 10 schedule and working produced food, so it does what it did the last time. (Lecture 10, September 30th)

What is a cumulative recorder?

The cumulative recorder is a simple mechanical device constructed by B. F. Skinner which records responses in a way that allows any observer to see at a glance the moment-to-moment patterns of a subject's behavior. See Figure 6.1 on p. 143 in chapter 6 for a detailed visual diagram of this device. (Chapter 6, p. 143)
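
As a rough illustration (hypothetical data, not from the textbook), a cumulative record can be reconstructed from a list of response times: the pen's height is the running count of responses, so steep segments mean rapid responding and flat segments mean pausing.

    # Rough sketch of building a cumulative record from response
    # timestamps; the data below are made up for illustration.
    import matplotlib.pyplot as plt

    response_times = [2, 3, 4, 5, 9, 10, 11, 20, 21, 22, 23]  # seconds into session
    counts = list(range(1, len(response_times) + 1))  # nth response -> height n

    plt.step(response_times, counts, where="post")
    plt.xlabel("Time in session (s)")
    plt.ylabel("Cumulative responses")
    plt.title("Cumulative record: steep = fast responding, flat = pause")
    plt.show()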

Define the Discrimination Hypothesis.

The discrimination hypothesis says that in order for behavior to change once extinction begins, the individual must be able to discriminate the change in reinforcement contingencies. (Chapter 6, p. 151)

Give examples of how the field of behavioral economics has been applied to animal and human behaviors.

The field of behavioral economics is the result of a cooperative effort between psychologists and economists. The goal of the field is to predict how people will allocate their limited resources to obtain scarce commodities. For example, behavioral ecologists study the behaviors of animals in their natural habitats and attempt to determine how the behavior patterns of different species are shaped by environmental factors and the pressures of survival. Using the theory of optimization (which says that resources will be distributed in whatever way gives the organism the most satisfaction), predictions can be made about hunting behavior based on the availability of prey in the area. If there is a lot of prey, smaller prey won't bring the predator as much satisfaction, but if prey is scarce, small prey will bring a lot of satisfaction. With humans, microeconomists study how people distribute their income among all the possible ways it can be spent, saved, or invested. (Chapter 8 Learning Objectives, p. 201)

Define the Generalization Decrement Hypothesis.

The generalization decrement hypothesis (from Capaldi) says that responding during extinction will be weak if the stimuli during extinction are different from those that were present during reinforcement, but it will be strong if these stimuli are similar to those encountered during reinforcement. Note that generalization decrement is a term for the decreased responding one observes in a generalization test when the test stimuli become less and less similar to the training stimulus. (Chapter 6, p. 151)

How does the IRT theory attempt to explain why the rate of responding differs on VR versus VI schedules?

An interresponse time (IRT) is the time between two consecutive responses. IRT reinforcement theory says that response rates are slower on VI schedules than on VR schedules because longer IRTs (long pauses between responses) are more frequently reinforced on VI schedules. Fun fact: this theory was first proposed by B. F. Skinner (p. 157). (Lecture 13, October 14th)

Does the revised theory (Drive Reduction Theory) address the problems found in the Need Reduction Theory? Explain why or why not.

Yes. The issue with the Need Reduction Theory was that it didn't address reinforcers for non-biological needs, and the Drive Reduction Theory adds these non-biological needs, or drives, which resolves that issue. (Lecture 14, October 19th)

What is the partial reinforcement effect?

The partial reinforcement effect is the finding that extinction is more rapid after CRF (continuous reinforcement) than after a schedule of intermittent reinforcement. This seemed paradoxical to early researchers, because it was counterintuitive that a response that was only intermittently reinforced would be stronger than a response that was continuously reinforced. The dilemma was named Humphreys' paradox after the psychologist who first demonstrated the partial reinforcement extinction effect. (Chapter 6, p. 151)

Explain the differences in the rate of responding observed between the pigeon on the VI versus the pigeon on the VR.

The pigeon on the VI schedule learned that reinforcers didn't come from its pecking, but rather from the passage of time (because it was waiting on the other, VR pigeon). On the other hand, the pigeon on the VR schedule learned that its pecking produced the delivery of reinforcers, which led to a high rate of responding. (Lecture 13, October 14th)

The procedure of using a series of test conditions to determine what is maintaining a person's maladaptive behavior is called ______.

functional analysis of reinforcers (Chapter 8 Practice Quiz 2, p. 225)

List the shortcomings of the Drive Reduction Theory as revealed through empirical findings and everyday observations.

The problem with the Drive Reduction Theory is that it doesn't account for things that increase instead of decrease tension that still serve as motivators (e.g. exercise, thrill seeking, driving fast, etc.). (Lecture 14, October 19th)

What is the stop-action principle?

The stop-action principle is a particular version of the Law of Effect, a term coined by Brown and Herrnstein (1975). This principle states that reinforcement strengthens the association between the situation (S) and the precise behaviors (R) that were occurring at the moment of reinforcement (SR); because of this strengthening process, the specific bodily position and muscle movements occurring at the moment of reinforcement will have a higher probability of occurring on the next trial. (Chapter 5, p. 116)

What are the implications of the "superstitious experiment" in distinguishing between the validity of Skinner's viewpoint on learning from that of Thorndike and Guthrie?

The superstitious experiment implied that organisms develop expectations about which of their behaviors (R) will be followed by a specific consequence (SR). This viewpoint does not include the role of stimuli (S) in the learning equation, but rather relies only on behaviors (R) and consequences (SR) to explain why individuals do what they do. (Lecture 11, October 5th)

Explain why the pattern of responding observed during the "time out" period on FI schedules are so much different than on VI schedules.

The time out period on these schedules refers to the time after a reinforcer is delivered, continuing until the time interval has ended, during which no number of responses will lead to a reinforcer. The time out period on FI schedules produces a pattern of scalloped behavior on graphs: a gradual slope beginning about mid-way through the time interval that represents when the animal begins checking to see if the interval has ended yet. On VI schedules, responding instead remains steady, due to the uncertainty of when the reinforcer will become available. (Lecture 13, October 14th)

Describe different theories about why there is a post-reinforcement pause on fixed-ratio (FR) schedules, and explain which theory is best.

There are two (2) main types of theories: 1. A molecular theory is focused on small-scale events, the moment-by-moment relationships between responses and reinforcers, usually events that have a time span of less than 1 minute (p. 157). The interresponse time (IRT) reinforcement theory is a molecular theory that states that response rates are slower on VI schedules than on VR schedules because long IRTs (long pauses between responses) are more frequently reinforced on VI schedules. 2. A molar theory deals with large-scale measures of behavior and reinforcement, usually events that can be measured over at least several minutes and often over the entire length of an experimental session (p. 157). The response-reinforcer correlation theory is a molar theory that focuses on the long-term relation between response rate and reinforcement rate on VI and VR schedules (p. 158). (Chapter 6 Learning Objectives, p. 142)

What are the ABC's of learning?

This equation helps us understand the emergence of voluntary behavior. The ABC's of learning are (1) antecedents, (2) behavior, and (3) consequences. Antecedents are stimuli (S), behavior is the response (R), and the consequences are the reinforcers (SR). So, in this way, the ABC equation correlates with the S-R-SR relationship. (Lecture 10, September 30th)

Explain the significance of the following quote by Guthrie: "We do what we did the last time we were in that situation."

This refers to the idea that the last response performed = the last response connected to a stimulus in that specific learning event. Therefore, this quote is the simplified version of Guthrie's conclusion (from the transparent puzzle box experiment) that stimuli acting at the time of a response tend to evoke that response. (Lecture 10, September 30th)

What procedures were used by Thorndike to determine if animals possess insight or intelligence?

Thorndike used mazes with baby chickens as well as puzzle boxes with stray cats captured from alleys. The experiment with puzzle boxes and stray cats serves as evidence to support his S-R-SR position. (Lecture 10, September 30th)

Briefly explain Thorndike's puzzle boxes experiment (1898).

Thorndike used stray cats captured from alleys to serve as the subjects in his experiment with puzzle boxes. The goal was to examine if animals possessed insight, intelligence, or reasoning during the learning process. The stimuli (S) were a loop and string inside the box. The behavior (R) was the correct response of pulling the loop. The consequence (SR) was escape and food. Thorndike expected that if animals did indeed possess insight or intelligence, then learning should be linear (as reflected on a graph). However, results indicated that the cats' learning was non-linear. This is because during the first part of the trials, the cats were "stamping out" responses that were not associated with the reward. Later during the remaining trials, the cats were "stamping in" responses that led to food and escape. Consequences (food and escape) that followed a correct response strengthened the cats' associations between the stimuli (the lever, latch, and loop) in the box and the responses that were instrumental in acting on those stimuli, leading to escape and food. (Lecture 10, September 30th)

Explain the differences in viewpoints between psychologists Edward Thorndike, Edwin Guthrie, and Burrhus F. Skinner on the impact of consequences on learning.

Thorndike: Believed consequences served to strengthen bonds, or associations, forming between stimuli and responses.
Guthrie: Thought that consequences played a limited role in learning, and that they may serve to preserve or protect (from interference) the stimulus-response bonds that form as a result of learning.
Skinner: Believed that consequences selectively strengthen the responses that precede their delivery.
(Lecture 10, September 30th)

How do you calculate the actual VR or VI schedule that a pigeon is being trained on if you are given the number of responses per second, session duration, and the number of reinforcers in the session?

To calculate a VR schedule: Divide the total number of responses by the number of reinforcers that were given. For example, if a pigeon received 5 total reinforcers (one after 3 responses, one after 7, one after 5, one after 9, and one after 11), we add up the total number of responses (3 + 7 + 5 + 9 + 11 = 35) and divide this cumulative total by the 5 reinforcers, so 35 / 5 gives a VR 7 schedule. (If you are instead given responses per second and session duration, multiply them to get the total number of responses first.) To calculate a VI schedule: Divide the total amount of cumulative session time by the number of reinforcers that were given (e.g. 28 minutes of cumulative time with 7 total reinforcers given would be a VI 4 schedule). (Lecture 13, October 14th)
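
As a quick sketch (hypothetical helper functions, not from the lecture), the same arithmetic in code:

    # Minimal sketch of the VR/VI arithmetic described above; the
    # function names are illustrative, not from the lecture.
    def vr_value(total_responses, num_reinforcers):
        # VR value = mean number of responses per reinforcer
        return total_responses / num_reinforcers

    def vi_value(session_time, num_reinforcers):
        # VI value = mean amount of time per reinforcer
        return session_time / num_reinforcers

    # Examples from the answer above:
    print(vr_value(3 + 7 + 5 + 9 + 11, 5))  # 7.0 -> VR 7
    print(vi_value(28, 7))                  # 4.0 -> VI 4

    # If given responses per second and session duration instead:
    responses_per_second, session_seconds, reinforcers = 0.5, 600, 10
    print(vr_value(responses_per_second * session_seconds, reinforcers))  # 30.0 -> VR 30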

Describe Thorndike's experiments with the puzzle box and how they demonstrated his Law of Effect. What did Guthrie and Horton find when they photographed cats in the puzzle box, and what does this tell us about the principle of reinforcement?

To review, the Law of Effect says that responses associated with a satisfactory consequence will be strengthened in the S-R-SR association. Likewise, responses that are associated with a negative consequence will be weakened.
Thorndike's experiment with cats: Cats were put in a puzzle box and had to learn to pull a loop in order to escape and get food; results indicated that learning isn't linear, and that instead the cats had to stamp out old behaviors associated with the stimuli and then stamp in new behaviors based on their newly-acquired knowledge of which responses led to the desired consequence.
Guthrie's experiment with cats: Cats were put into a transparent box and had to learn to knock over a pole to open the front door; results indicated that the stimuli that were present at the time of reinforcement were associated with the behavior displayed by the cat at that exact moment, leading the cats to just "do what they did the last time they were in that situation" over and over again.
(Chapter 5 Review Questions, p. 139)

Identify the type of reinforcement schedule that is illustrated in a series of real-life events and justify your answers by using the rules governing how a reinforcer will be delivered on a given schedule.

Variable interval (VI) schedule: Waiting for your Uber ride to arrive, where you must wait for a certain amount of time to pass, nothing you do will influence the outcome, and this amount of time varies unpredictably.
Variable ratio (VR) schedule: A salesperson selling a product at a booth, where their behavior does produce reinforcement (customers stopping to talk or to buy a product), but the number of responses the salesperson must make before a product is sold varies unpredictably.
Fixed interval (FI) schedule: Waiting for your favorite television show to come on, where you must wait for a certain amount of time (i.e. the show only comes on Wednesdays at 9pm), but this specified amount of time remains fixed (i.e. the TV show reliably comes on at the same time each week).
Fixed ratio (FR) schedule: Filling out a loyalty punch card, where you must make a certain number of responses (i.e. punches) in order for the reinforcer (i.e. a free sandwich) to be delivered, but this specified number remains fixed (i.e. each card reliably has 10 punches before a free sandwich).
(Chapter 6)

What is a visceral response?

Visceral responses are responses of the body's glands and organs that usually occur without the organism's awareness. (Chapter 8, p. 201)

Provide the actual definition of voluntary behavior, as discussed during the lecture.

Voluntary behavior is not self-willed, independent, initiated by free choice, or private; rather, it is controlled by consequences. (Lecture 12, October 7th)

List real-world examples of the Drive Reduction Theory principles that are evident in our society.

We can see examples of the Drive Reduction Theory in our society through things like dating apps (reducing the drive for love), social clubs and organizations (reducing the drive for acceptance), and even college degrees (reducing the drive for achievement). (Lecture 14, October 19th)

Identify specific points or patterns of behavior on a cumulative graph that are representative of the separate expectations produced by FR vs FI schedules.

We can see that on the stair-stepped graph showcasing a fixed ratio (FR) schedule, there are diagonal marks indicating that the organism has made enough responses to receive a reinforcer. The horizontal dotted lines on the graph represent the number of responses. The flat horizontal segments that appear after receiving the reinforcer represent the post-reinforcement pause. The reason for this pause is explained by the remaining-responses hypothesis, which states that the upcoming response requirement of the next FR run determines the size of the pause; this is the organism's way of preparing for the required responses that are coming up next. We can also see this in humans with procrastination.
Looking at the scalloped graph that shows a fixed interval (FI) schedule, there are vertical dotted lines that represent intervals of time. The diagonal marks show when the organism received a reinforcer. The gradual slope occurs because during the time out period (i.e. the period of time directly after a reinforcer is delivered until the time interval ends) no number of responses will lead to a reinforcer. Therefore, the organism only starts to produce responses about mid-way through the interval, when it is actually just checking to see if the time interval has passed yet. (Lecture 13, October 14th)

Which characteristic (quality or quantity) is more important when primary reinforcers are used or when secondary reinforcers are given?

When primary reinforcers are used: Quality.
When secondary reinforcers are used: Quantity.
(Lecture 12, October 7th)

Describe studies on how reinforcement can be used to control visceral responses, and explain how these techniques have been used in biofeedback.

When the body is temporarily paralyzed (i.e. to eliminate any possible influence of the body's skeletal muscles), reinforcement can exert direct control over some visceral responses (e.g. curarized rats could increase or decrease heart rate, dilate or constrict the blood vessels of the skin, increase or decrease the activity of the intestines, and even increase or decrease the rate of urine production by the kidneys). For example, researchers Miller and DiCara (1967) attempted to increase or decrease the heart rates of rats, using electrical stimulation of the brain (ESB) as reinforcement (p. 205). They used a shaping procedure to gradually increase heart rate (i.e. when the rats' heart rates naturally increased, even if only by a small amount, they would receive the ESB reinforcement). Their experiment was successful, with the average heart rate being over 500 bpm. Biofeedback refers to any procedure designed to supply the individual with amplified feedback about some bodily process, with the reasoning being that improved feedback may make better control possible. (Note that some psychologists have speculated that one reason we have so little control over many of our bodily functions is that feedback from our organs and glands is weak or nonexistent.) An example of biofeedback can be seen in the study by Budzynski, Stoyva, Adler, and Mullaney (1973), which sought to train adults who suffered from frequent muscle-contraction headaches. Using an electromyogram, or EMG (feedback from electrodes attached to the patient's forehead), that converted feedback about the level of muscle tension into a continuous train of clicks the patient could hear, the researchers found that patients were able to reduce the rate of the clicking (thereby decreasing muscle tension) almost immediately. Even after the study, patients were able to reduce muscle tension on their own, without the biofeedback equipment. (Chapter 8 Learning Objectives, p. 201)

What conditions are most appropriate for using the DRL schedule?

When you want to decrease the rate of responding (Chapter 6, p. 152)

How do expectations differ in response to how the procedures of delivering a reinforcement differ on a FR (fixed ratio) versus a FI (fixed interval) schedule regarding the response-reinforcement relationship?

With a fixed ratio (FR) schedule, the organism expects to receive a reinforcer after it has given a set number of responses. This results in a pattern of stair-step behavior, where the organism consistently produces the required number of responses, receives the reinforcer, and then resumes responding until it has reached the required number again. This schedule results in the organism gaining perceived control over its environment. With a fixed interval (FI) schedule, the organism expects to receive a reinforcer after a certain amount of time has passed. This results in a pattern of scalloped behavior, where there is a gradual slope (as seen on a graph) beginning about mid-way through the time interval, when the organism begins checking to see if the time interval is over yet. This schedule results in the organism learning patience in timing. (Lecture 13, October 14th)

Explain the difference between contingency-shaped behavior and rule-governed behavior.

With contingency-shaped behavior, a behavior is gradually shaped into its final form as the individual gains more and more experience with a particular reinforcement schedule (p. 154). With rule-governed behavior, humans can be given verbal instruction or rules to follow, and once a person receives or creates such a rule, the actual reinforcement contingencies may have little or no effect on their behavior. It's hypothesized that this is what makes human participants behave differently from animals during laboratory experiments (p. 154). (Chapter 6 Learning Objectives, p. 142)

In photographing cats in the puzzle box, Guthrie and Horton found that the behaviors of an individual cat were ______ from trial to trial, but they were ______ from cat to cat.

similar; different (Chapter 5 Practice Quiz 1, p. 124)

What is the difference between free-operant procedures and discrete trial procedures?

With free-operant procedures, the operant response can occur at any time, and can occur repeatedly; these procedures can include lever pressing, key pecking, or other similar responses. With discrete trial procedures, the operant response is measured across separate trials, each with a defined beginning and end; these procedures can include things like puzzle boxes (with the operant response being to escape the box) or mazes (where the operant response is to reach the goal box). (Chapter 5, p. 125)

Explain how the Dopamine Theory of Reinforcement has a more global application to many aspects of behavior.

The Dopamine Theory of Reinforcement has a much broader application, because under it a reinforcer is anything that produces hedonic sensations and dopamine release in the brain, whatever the stimulus happens to be. (Lecture 14, October 19th)

Explain how you could use shaping to teach a dog to jump over a tall hurdle.

You could use shaping, where successive approximations of the target behavior are rewarded: for example, first reinforce the dog for stepping over a bar on the ground, then for jumping a low hurdle, and then gradually raise the hurdle to its full height, reinforcing each successful jump. (Chapter 5 Review Questions, p. 139)

A behavior has a high ______ if it is not affected much by distractions or environmental changes.

behavioral momentum (Chapter 6 Practice Quiz 1, p. 155)

When using food to shape the behavior of a rat, the sound of the food dispenser is a ______, and the food itself is a ______.

conditioned reinforcer; primary reinforcer (Chapter 5 Practice Quiz 1, p. 124)

In behavioral marital therapy, a written agreement between spouses is called a ______.

contingency contract (Chapter 6 Practice Quiz 2, p. 166)

______ behavior is controlled by the schedule of reinforcement; ______ behavior is controlled by instructions subjects are given or form on their own.

contingency-shaped; rule-governed (Chapter 6 Practice Quiz 1, p. 155)

If the demand for a product decreases sharply when its price increases, demand for the product is called ______.

elastic demand (Chapter 8 Practice Quiz 2, p. 225)

According to Premack's Principle, ______ behaviors will reinforce ______ behaviors.

high-probability; low-probability (Chapter 8 Practice Quiz 2, p. 225)

Superstitious behaviors are more likely to occur when an individual has ______ of the reinforcer.

little or no control (Chapter 5 Practice Quiz 1, p. 124)

______ theories deal with long-term relationships between behavior and reinforcement, whereas ______ theories deal with moment-to-moment relationships between behavior and reinforcement.

molar; molecular (Chapter 6 Practice Quiz 2, p. 166)

A shaping procedure in which a behavior is reinforced if it is better than a certain percentage of the last few responses the individual has made is called a ______.

percentile schedule (Chapter 5 Practice Quiz 1, p. 124)

Physically guiding the movements of a learner is an example of a ______; gradually removing this physical guidance is called ______.

prompt; fading (Chapter 6 Practice Quiz 2, p. 166)

Research results favor the ______ theory of FR post-reinforcement pauses over the ______ and ______ theories.

remaining-responses; fatigue; satiation (Chapter 6 Practice Quiz 2, p. 166)

IRT reinforcement theory states that longer IRTs are more likely to be reinforced on ______ schedules, but bursts of responding are more likely to be reinforced on ______ schedules.

variable-interval (VI); variable-ratio (VR) (Chapter 6 Practice Quiz 2, p. 166)

Responding on ______ schedules is usually rapid and steady, and responding on ______ schedules is usually slower and steady.

variable-ratio (VR); variable-interval (VI) (Chapter 6 Practice Quiz 1, p. 155)

