PSYC303 Exam 2


(7) Which factors determine the degree of stimulus control? Be familiar with processes, terminology, examples provided in class, and with hypothetical examples which may be evident in everyday life.

1. The organism's sensory capacity: Chickens see colors and can learn to respond to one color while withholding response to other colors; rats have limited color vision. The olfactory and auditory sensitivity of dogs and rats is superior to that of humans.
2. Relative ease of conditioning various stimuli: the presence of other cues in the situation (and how salient they are / how easily they can be learned about). Overshadowing - competition among stimuli for access to the process of learning. When two stimuli are presented at the same time, the presence of the more easily trained stimulus may hinder learning about the other one. Examples: a strong taste and a weak smell predicting illness. Since children memorize pictures faster than words, pictures will overshadow words (the child will memorize a story by pictures rather than words).
3. Type of reinforcement: behavior-reinforcement "belongingness." Certain types of stimuli gain control over behavior in appetitive associations, and others in aversive associations. Pigeons that learned to lever-press to a compound (light + sound) to receive food or to avoid shock demonstrate differential stimulus control: certain types of stimuli are more likely to gain control over the instrumental response in an appetitive situation (sight), while others are more likely to gain control in an aversive situation (sound).
4. Stimulus elements vs. configural cues in compound stimuli: How does the organism treat a compound stimulus? Two competing perspectives, with no current consensus: a. The organism treats a compound stimulus as a stimulus comprised of separate elements, such as a tone and a light (stimulus-elements approach). b. The organism treats a compound stimulus as an integral unit (configural-cue approach). Analogous to hearing single instruments vs. an entire orchestra.
5. The organism's previous experience with the stimuli may determine the extent to which those stimuli come to control behavior. Pavlov believed that stimulus generalization occurs because learning about the CS is transferred to other stimuli that share physical similarity with the CS. Lashley & Wade (1946) suggested that stimulus generalization represents not learning but a lack of learning (a lack of experience). Thus, generalization depends on the organism's previous learning experience.

(5) What are the effects of the instrumental reinforcer on the efficacy of instrumental conditioning procedures (quantity and quality of the reinforcer, a shift in reinforcer quality or quantity)?

1. The quantity and quality of the reinforcer: In both discrete-trial procedures and free-operant responding, the subject will perform better for a bigger reinforcer. In a progressive-ratio schedule (a progressively increasing number of responses is required in order to receive the reinforcer), the subject will willingly increase its response count for a bigger reinforcer. Real-life application: token economies for drug abstinence show better results when the token rewards (vouchers, money, cigarettes) are bigger. 2. A shift in the reinforcer's quantity or quality; the effect of the subject's prior experience: Your rat will work hard for one food pellet. But if it has learned that its behavior produces two food pellets, its responding will decrease if it starts receiving only one pellet. In other words, a rat accustomed to two pellets will learn more slowly after a shift to one pellet. This is a case of negative behavioral contrast. Positive behavioral contrast is also possible.

Anna is at the casino. She is using two slot machines and is free to change back and forth between the two alternatives at any given time. Machine A is reinforced on a "fixed interval 10 minutes" schedule and machine B is reinforced on a "fixed interval 15 minutes" schedule. If the rate of reinforcement for both machines is calculated across 1 hour, then the rate of reinforcement for machine A is ______ and the rate of reinforcement for machine B is ______.

6/10; 4/10
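A worked version of this calculation (a minimal sketch; the schedule values and the 1-hour window are taken from the question above):

```python
# Rate of reinforcement over 1 hour for two fixed-interval slot machines.
# Machine A: FI 10 min -> at most one reinforcer every 10 minutes.
# Machine B: FI 15 min -> at most one reinforcer every 15 minutes.

minutes = 60
rate_a = minutes // 10          # 6 reinforcers per hour
rate_b = minutes // 15          # 4 reinforcers per hour

# Relative rate of reinforcement for each alternative:
rel_a = rate_a / (rate_a + rate_b)   # 6/10 = 0.6
rel_b = rate_b / (rate_a + rate_b)   # 4/10 = 0.4

print(rate_a, rate_b)   # 6 4
print(rel_a, rel_b)     # 0.6 0.4
```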

(5) What is the triadic design, used to investigate the learned-helplessness effect? What is the learned-helplessness hypothesis? What is the relevance of learned helplessness to human/non-human mental health?

The triadic design uses three groups: one group is exposed to escapable shock (it can terminate the shock with a response), a second, yoked group receives the same shocks but has no control over them, and a third group receives no shock; all groups are then tested on a new escape-avoidance task. A series of such experiments evaluated the effect of exposure to uncontrollable shock on subsequent escape-avoidance learning in dogs. Exposure to uncontrollable shock disrupted subsequent learning: the learned-helplessness effect. The learned-helplessness hypothesis suggests that via exposure to uncontrollable shocks, the animal learns that the shocks are independent of its behavior; it learns to expect future lack of control. Data collected concerning animal learned helplessness are useful for the explanation of many human conditions, including depression, PTSD, sexual abuse, and victimization. Learning that the results of behavior are not under one's control may facilitate similar future episodes. It also explains enhanced fear reactivity and slower extinction of fear reactions.

(5) What is positive reinforcement? Positive punishment? Negative reinforcement? Negative punishment/omission training? DRO? Be able to explain textbook examples and hypothetical scenarios using the appropriate terminology.

An event that results in an increase of the response rate - reinforcement. An event that results in a decrease of the response rate - punishment. The contingency (interdependence) between the behavior and the consequence determines whether the behavior leads to the delivery or removal of a stimulus: if the behavior results in the delivery of a stimulus, the contingency is positive; if the behavior results in the removal of a stimulus, it is negative.
EXAMPLES:
Positive punishment: getting a parking ticket or getting spanked.
Negative punishment: revoking a driver's license or a time-out.
Positive reinforcement: a pleasant stimulus after a response ("good job").
Negative reinforcement: removal of an unpleasant stimulus after a response ("scream until dad stops the car": for dad it is negative reinforcement, for the kid it is positive reinforcement).
If the consequence consists of something being presented, it is a positive contingency; if the consequence consists of something being withdrawn, it is a negative contingency. If it strengthens the behavior (increases its frequency), we are dealing with reinforcement; if it weakens the behavior (decreases its frequency), we are dealing with punishment.
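The two-step classification above (presented vs. withdrawn, then increase vs. decrease) can be sketched as a small decision procedure. This is an illustration only; the function name and string labels are hypothetical, while the four resulting terms and examples come from the answer above:

```python
def classify(consequence: str, effect_on_behavior: str) -> str:
    """Classify an operant contingency.

    consequence: 'presented' or 'withdrawn' (positive vs. negative contingency)
    effect_on_behavior: 'increase' or 'decrease' in response frequency
    """
    sign = "positive" if consequence == "presented" else "negative"
    kind = "reinforcement" if effect_on_behavior == "increase" else "punishment"
    return f"{sign} {kind}"

print(classify("presented", "increase"))   # positive reinforcement ("good job")
print(classify("presented", "decrease"))   # positive punishment (parking ticket)
print(classify("withdrawn", "increase"))   # negative reinforcement (dad stops the car)
print(classify("withdrawn", "decrease"))   # negative punishment (time-out)
```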

Thorndike determined that with extensive training, cats will open their mouths in order to escape a puzzle box, but they will not give a bona fide yawn. This is an example of:

Belongingness

____________ means that certain responses belong with the reinforcer because of the animal's evolutionary history.

Belongingness

In the "Conditioned Place Preference" paradigm ("dark-light box", Lab 2), the association between the lit box and the food allows the lit box to turn into to a _____, predicting food, which is a ______

CS; US

(6) What are chained schedules? Which types of chained schedules were covered in class? What are their principles?

Chained schedules consist of a sequence of two or more simple schedules, each with its own response requirement, the last of which results in the delivery of a reinforcer.
* The organism must work through a series of linked schedules to obtain the reinforcer.
* The behaviors must be completed in a particular order.
Examples:
* A pigeon may be reinforced for pecking a green key 3 times, followed by pecking a red key 4 times, which then leads to the reinforcer (food).
* A rat may be trained to go through an agility course.
FORWARD: An efficient way to endure a long chained schedule is to break it down into its components (task analysis), master each component, and then link them together. A to-do list is one such strategy.
BACKWARD: An efficient way to establish responding on a chained schedule is to train the final link first and the initial link last, a process known as backward chaining. Example (teaching a dog to catch a Frisbee):
1. Reinforcing the dog's behavior of taking the Frisbee from hand and returning it.
2. Raising the criterion by holding the Frisbee in the air to make the dog jump for it.
3. Tossing the Frisbee slightly so the dog jumps and catches it in midair.
4. Then, tossing it a couple of feet so the dog has to run after it to catch it.
5. Gradually throwing it farther and farther so the dog has to run farther to get it.

(5) What are the differences between classical conditioning and instrumental (operant) conditioning?

Classical conditioning reflects adjustment to events in the environment that the organism cannot directly control (but can predict). In instrumental/operant conditioning, the stimuli that the organism encounters are the results/consequences of its behavior. The learning is goal-directed / instrumental / operant: operations (responses) are necessary (instrumental) in order to produce a desired outcome (goal). Whereas classical conditioning involves reactive responses, instrumental conditioning involves active and proactive behaviors. The organism learns that its behavior has an impact on the environment.

(6) What are the differences between continuous schedules of reinforcement and intermittent schedules of reinforcement?

Continuous reinforcement: the desired response is reinforced every time it occurs. Intermittent reinforcement: responses are sometimes reinforced and sometimes not. In a continuous reinforcement schedule, each specified response is reinforced. Examples: Each time a rat presses the lever, it obtains a food pellet. Each time you turn the ignition in your car, the motor starts. Each time you insert a quarter, you get a bubble gum. Continuous reinforcement (CRF) schedules are very useful when a behavior is first being shaped or strengthened. If the number = 1 (every response results in the delivery of the reinforcer), this is a continuous reinforcement (CRF) schedule. Situations in which the response is reinforced only some of the time involve intermittent reinforcement.

(5) What are the theoretical and practical components of discrete trial procedures? Free-operant procedures? The components of response shaping? Successive approximations? What is the connection between shaping and the learning of new behaviors?

DISCRETE TRIAL PROCEDURES
From 15 puzzle boxes and a variety of manipulations to specific, standard tasks: Thorndike's procedure is an example of a discrete trial procedure. Start -> put the animal in the apparatus. Once the instrumental response is performed -> end (remove the animal from the apparatus). Mazes, runways, T-mazes, and agility courses are examples of discrete trial procedures. Behaviors in mazes are typically quantified by measuring running speed in the maze, time to complete the maze (latency to complete the task), latency to leave the start box, and ratio of success (number of successful task completions out of a pre-defined number of trials).
FREE-OPERANT METHODS
B. F. Skinner (1904-1990) developed the free-operant methods. These reflect the ideas that naturally occurring behaviors are continuous rather than discrete, and that different motor behaviors can be executed to achieve a single goal: to press a lever, a rat may sit on it, bite it, or nudge it. An operant behavior is defined in terms of its effects on the environment, not the motor responses of the subject: how the behavior operates on the environment. Preliminary steps: 1. Habituation. 2. Magazine training - classical conditioning (CS-US). 3. Response shaping - reinforcement for the performance of behaviors that approximate the desired result, and omission of reinforcement for behaviors that do not (operant conditioning).
RESPONSE SHAPING
The components of response shaping: 1. A clear assessment of the starting point. 2. A clear definition of the desired final response. 3. Divided, progressive training steps from start point to end point: successive approximations, shaping. The trick is to use two complementary tactics: (1) reinforcement of successive approximations to the final behavior; (2) withholding reinforcement for earlier response forms.

(7) What are stimulus discrimination and stimulus generalization? How are they expressed in classical conditioning? Operant conditioning? In the lab? In real life?

DISCRIMINATION If the organism demonstrates differential responding to two stimuli, it treats the stimuli as different from each other - stimulus discrimination. + In classical conditioning, stimuli that are different from the CS elicit different responses or do not elicit a CR (Little Albert, Pavlov's dogs). + In operant conditioning, different operant responses are performed to different stimuli, or the operant response is not performed in the presence of a stimulus that is not reinforced. REAL-LIFE EXAMPLES: 1. Teaching a child to cross the street when the light is green, but not when it is red. 2. Teaching a child to knock on strangers' doors and ask for candy one night a year, but not on other nights. 3. Teaching a child to draw on paper but not on walls. 4. Training a chicken to peck a yellow, but not a red, disk. 5. Training a dog/rat to approach one scent and ignore another, or to detect a specific condition and ignore others: explosive/drug dogs, seizure-alert dogs, medical detection dogs. GENERALIZATION Shown when the organism responds in a similar fashion to two or more stimuli; the opposite of stimulus discrimination or differential responding. + In classical conditioning, stimuli that are similar to a CS can also elicit a CR. + In operant conditioning, an operant response is emitted in the presence of a stimulus that is similar to the original stimulus. + In general, the more similar the stimuli are, the stronger the response.

(5) How can omission training/DRO be utilized as treatment for self-injurious behaviors? Other hypothetical behaviors? Examples provided in class?

DRO - differential reinforcement of other behaviors. Includes a combination of negative punishment and positive reinforcement: it can be utilized to omit the reinforcement previously provided for self-injurious behaviors and to reinforce only non-self-injurious behaviors. Negative punishment is preferred over positive punishment in humans/animals.

(5) Who are the important theoreticians in early and modern approaches to the study of instrumental conditioning? What are the perspectives, methods and findings representing each theoretician? Which instrumental learning methods were designed by each theoretician?

EARLY APPROACHES
THORNDIKE: The generation of lab-based studies and the theoretical analysis of instrumental conditioning are attributed to the work of E. L. Thorndike (1874-1949). Thorndike originally planned to study animal intelligence experimentally. He created a series of "puzzle boxes" for his animal subjects. Thorndike found that as trials progressed, the time required for the cat to open the door shortened. He interpreted the decrease in escape times over trials as the formation of an association between the stimulus (the puzzle box) and the escape response (S-R learning / S-R association). He suggested the law of effect: if a response (R) in the presence of a stimulus (S) is followed by a satisfying event, the S-R association is strengthened; if the response is followed by an annoying event, the S-R association is weakened. Thus, satisfying behaviors will be repeated (escape), while behaviors with unpleasant outcomes will not (behaviors that don't lead to escape). Once formed, the stimulus by itself will lead to the habitual response, independent of the consequences.
MODERN APPROACHES
B. F. Skinner (1904-1990) developed the free-operant methods.

While in Dr. F's 3-hour class you know that you will get a break every 50 min. While in Dr. G's 3-hour class you know that you will get 2 breaks, but you don't know when. The first is an example of a _____ schedule, and the second is an example of a _____ schedule.

FI; VI

(6) Fixed Ratio (FR)

FIXED-RATIO Reinforcement is contingent upon a fixed, predictable number of responses. In a CRF schedule each specified response is reinforced; thus, CRF = fixed ratio 1 = FR1. If a rat receives one food pellet for every 10 lever-press responses, if we have to buy 10 coffees to get the 11th for free, if we have to deliver 10 flyers to get 1 cent, or if we have to read 10 pages before we take a break - it's a fixed ratio 10, FR10. Produces high rates of responding once the behavior has started (the ratio run) but may include a pause (post-reinforcement pause / pre-ratio pause) before each burst of responses. In general, higher ratio requirements produce longer post-reinforcement pauses: *You are likely to take a longer break after finishing a long assignment than after a short one. *A rat will show longer pauses when lever-pressing on an FR30 schedule than on an FR10 schedule. If the requirement is increased too quickly (FR2 to FR20) or is raised to an unrealistically high bar (FR200), there may be a "breakdown" of the behavior (known as ratio strain / burnout) - a disruption in responding due to an overly demanding response requirement. The procedure of "stretching the ratio" involves a gradual switch from a low-ratio to a high-ratio requirement.

Lorely's pigeon pecks a key in order to get a food reward. Lorely is recording its behavior using a "cumulative recorder", which generates the graph below. If each arrow represents a reinforcer, then it can be assumed that the pigeon's behavior is reinforced using a _________ schedule of reinforcement.

Fixed ratio

Liz is teaching her dog, Akira, to jump. She gives Akira a treat only if she jumps after Liz claps her hands 4 times. The generalization gradient below describes the relationship between Akira's jumping behavior and the number of Liz's hand claps. It can be assumed that Akira demonstrates:

High discrimination and low generalization

According to the casino question and to the "matching" law, it can be assumed that Anna's rate of responding for machine A will be ________ her rate of responding for machine B.

Higher than

A salesman keeps knocking on every door on his route, never knowing when a customer is going to make a purchase or simply slam the door in his face. His behavior is reinforced using a(n) ________ schedule of reinforcement.

Intermittent

Maria's rat is trained to run in a maze. It receives a food pellet every time it takes the right turn and gets to the end of the arm of the maze. The rat's behavior across 6 sessions is summarized in the table below. It can be concluded that the rat:

Wooden T-Maze
Session | Latency to leave start box | Session duration
1       | 25 sec                     | 100 sec
2       | 20 sec                     | 120 sec
3       | 5 sec                      | 90 sec
4       | 1 sec                      | 60 sec
5       | 1 sec                      | 10 sec
6       | 1 sec                      | 9 sec

Learned the instrumental behavior of taking the appropriate turn in the maze

In a famous experiment (described in your textbook), pigeons were trained to lever-press in order to receive food, or to prevent a foot-shock, in the presence of a compound signal (a combined light/sound stimulus). It was found that most pigeons that learned to lever-press in order to receive food pressed the lever in the presence of the _______, but not the _______.

Light; sound

(Lab 2) Recognize and understand the method used to evaluate all types of learning procedures demonstrated in lab 2 ("base-line" & "test" evaluations).

Metal bowl training, clicker training, place preference training

In the "Conditioned Place Preference" paradigm ("dark-light box", Lab 2), it is predicted that after the lit compartment has been associated with food, a rat will spend ______ time in it, compared to the "base-line"/"before" evaluation

More

(7) What is stimulus discrimination training? Discrimination training that is focused on introspective cues? What are the effects of contextual cues on stimulus control of behavior? Be familiar with processes, terminology, examples provided in class, and with hypothetical examples which may be evident in everyday life.

Organisms can be trained to respond when two stimuli are presented together or when they are presented alone, providing evidence for the ability to detect both the stimulus elements and the stimulus's configural cues. In stimulus discrimination training there is a switch from generalization to discrimination. This procedure is considered the most powerful procedure for bringing behavior under stimulus control.
- Evident in classical conditioning: from blinking to a tone, to differential blinking to two different tones. In essence, with training, generalization is replaced by discrimination.
- Evident in instrumental conditioning: from checking the light while crossing the street, to crossing on green but not on red.
INTROSPECTIVE CUES
Organisms can be trained to discriminate different environmental stimuli (sights, sounds). They can also be trained to discriminate introspective cues (internal sensations, such as the physiological responses evoked by drugs). Pigeons and rodents are used intensively to study drug-induced introspective generalization and differentiation. Pecking and lever-press behaviors can indicate discrimination between: different drugs (anti-anxiety drugs, sedation, analgesia, drugs of abuse - methadone vs. heroin); the effects of different drug doses; the timeline of withdrawal syndromes.
CONTEXTUAL CUES
Discrete discriminative stimuli occur in the presence of background contextual cues - various features of the environment where the discriminative stimuli are presented (visual, auditory, olfactory, etc.). Contextual cues affect learning and behavior:
- Learning at the library vs. at home.
- Laughing at a birthday party rather than at a funeral.
- Preference for a compartment associated with a female quail / food (conditioned place preference).
- Preference for a compartment associated with a drug (drug-induced conditioned place preference).
- Preference for a compartment unassociated with foot-shock (conditioned place aversion).

(6) What are choice behaviors and concurrent schedules of reinforcement? How can we measure choice behaviors in the lab? How can we measure and calculate the distribution of the behavior between two response alternatives? (Be able to make very simple calculations to determine the rate of reinforcement and rate of responding for different choices).

Our everyday life is rich and complex: there is more than one possible response at a time, and the individual has to choose between different possible behavioral alternatives. Most of what we do involves choosing one activity over another. Each activity produces a different kind of reinforcer across different reinforcement schedules. Examples: choosing a small reinforcement now (FR1) or a big reinforcement later (FI 1 month)? Working in a job that pays $7.25/hour or one that pays $750/month? Cake now or being healthy in a few months?
Experimental evaluation of choice behaviors: Concurrent schedules allow for the measurement of choices across different reinforcement schedules because the organism is free to change back and forth between response alternatives at any given time.
+ Different slot machines operate on different reinforcement schedules and allow switching choices at any given time.
Measures of choice behavior: Rate of responding: The individual's choice on a concurrent schedule is reflected by the distribution of behavior between the two response alternatives. Relative rate of responding for each alternative: the rate of responding on the left lever is calculated as the rate of responding on the left divided by the total, where B(L) = behavior performed on the left lever: B(L) / [B(L)+B(R)]. If the pigeon presses levers 10 times, but only presses the left lever: 10 / [10 + 0] = 1. The relative rate of reinforcement for each alternative can also be calculated, where r(L) = rate of reinforcement for the left lever: r(L) / [r(L)+r(R)]. If lever L is reinforced on a VI 60-sec schedule and we measure the behavior for 20 minutes, then r(L) = 20. If both levers are reinforced on a VI 60-sec schedule, the relative rate of reinforcement for each lever alternative will be the same: 0.5.
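A minimal sketch of the two formulas above, reproducing the numbers from the answer (the function name is hypothetical; the formula B(L) / [B(L)+B(R)] is from the text):

```python
def relative_rate(left: float, right: float) -> float:
    """Relative rate for the left alternative: B(L) / [B(L) + B(R)].
    The same formula applies to reinforcement rates: r(L) / [r(L) + r(R)]."""
    return left / (left + right)

# Pigeon presses 10 times, all on the left lever:
print(relative_rate(10, 0))    # 1.0

# Both levers reinforced on VI 60-sec, measured for 20 minutes:
# each lever yields at most ~20 reinforcers, so r(L) = r(R) = 20.
print(relative_rate(20, 20))   # 0.5
```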

When two stimuli are presented at the same time, the presence of the more easily trained stimulus may hinder learning about the other one. This phenomenon is known as:

Overshadowing

(7) What is a stimulus generalization gradient? How is it generated? Which information is provided by different stimulus generalization gradients? (Be able to identify, understand and analyze different stimulus generalization gradients).

Pigeons were reinforced for pecking a key when a yellow light turned on (a specific wavelength). Responding to other colors was then measured: response rate was high for colors similar to yellow and low for colors different from yellow. This is an example of a stimulus generalization gradient. Stimulus generalization gradient: the strength of responding in the presence of stimuli that are similar to, or different from, the original stimulus. Gradients can vary in their degree of steepness.
- A steep gradient indicates that the rate of responding drops sharply as the stimuli become different from the original stimulus. In other words, a steep gradient indicates less generalization (and more discrimination).
- A relatively flat gradient indicates that the rate of responding drops gradually as the stimuli become increasingly different from the original stimulus. A flat gradient indicates more generalization (and less discrimination).
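A sketch of steep vs. flat gradients, assuming a simple Gaussian-shaped drop in responding around the trained stimulus (the wavelength, width, and peak values are illustrative, not data from the pigeon experiment):

```python
import math

def gradient(stimulus, trained=580.0, width=10.0, peak=100.0):
    """Responses to a test stimulus under a Gaussian gradient centered on the
    trained value. A smaller width means a steeper gradient: responding drops
    sharply away from the trained stimulus (more discrimination)."""
    return peak * math.exp(-((stimulus - trained) ** 2) / (2 * width ** 2))

# Compare a steep gradient (width=5) with a flat one (width=25)
# across test wavelengths around the trained 580 value:
for wavelength in (560, 570, 580, 590, 600):
    steep = gradient(wavelength, width=5)    # less generalization
    flat = gradient(wavelength, width=25)    # more generalization
    print(wavelength, round(steep, 1), round(flat, 1))
```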

If a service dog sits quietly during class time, he gets a treat. If he barks, he gets no attention. While the treat is an example of ________, the lack of attention is a _________.

Positive reinforcement; negative punishment

Your roommate is taking a self-paced course that requires the submission of three papers over the semester. He expected to finish all three papers immediately. However, after quickly finishing the first paper three weeks ago, he has done nothing. This behavioral pattern is an example of:

Post-reinforcement pause

Which of the following is an example of negative reinforcement? - A child receiving a time-out from playing outside due to his misbehavior - Turning up the volume when the radio plays a song that you like - Pretending to be ill in order to avoid school in the morning - All of the above

Pretending to be ill in order to avoid school in the morning

(6) How can we test and record the effects of different schedules of intermittent reinforcement in the lab (what is a cumulative recorder of behavior and how does it operate)? Be able to recognize hypothetical cumulative-recorder-generated graphs, as well as the behavioral patterns that are exemplified by each graph.

A cumulative recorder tracks responding over time: the paper moves at a constant speed while each response steps the pen upward, so the slope of the record reflects the response rate (steep = rapid responding, flat = no responding), and reinforcer deliveries are marked on the record. Comparing such graphs shows that ratio schedules produce more responses than interval schedules.

When different schedules of reinforcement are compared to one another, we find that _________ produce more _________.

Ratio schedules; responses than interval schedules

The same salesman from question 7 keeps knocking on every door on his route, never knowing when a customer is going to make a purchase or simply slam the door in his face. His behavior is reinforced using a ________ schedule of reinforcement.

VR

(6) Variable Ratio (VR)

Reinforcement is contingent upon a varying, unpredictable number of responses. If a rat is required to press a lever 5, 10, or 15 times, at random, the mean is 10; thus, it's a variable ratio 10, or VR10. If a student is required to attend an unpredictable number of classes to receive an extra-credit opportunity (a mean of 5), it's VR5. Produces a steady, high rate of responding, with fewer or shorter pauses. VR schedules partially account for the persistence with which some individuals display certain maladaptive behaviors:
+ Gambling (machines / lottery).
+ Abusive / inattentive relationships.

(6) Variable interval (VI)

Reinforcement is contingent upon the first response performed after a varying, unpredictable period of time. Examples: A rat receives food for the first lever press performed after 5 sec, 10 sec, or 15 sec - variable interval 10 sec, VI 10-sec. Checking for email/grades (VI ~3 hours?). Waiting on the service line to talk with a company representative. Like VR schedules, VI schedules produce a steady, stable rate of responding with minimal pauses. Additional examples:
- If each day you are waiting for a bus and have no idea when it will arrive, then looking down the street for the bus will be reinforced after a varying, unpredictable period of time. It might be 2 min the first day, 14 min the next day, 9 min the third day, and so on, with an average interval of 10 min (VI 10-min).
- A dog's behavior in public can be reinforced on a variable interval 2-minute (VI 2-min) schedule: good behavior (sitting nicely, being quiet) is reinforced at an average interval of 2 min. Thus, good behavior might be reinforced after 1, 2, or 3 min, as long as the mean is 2 min.

(6) Fixed interval (FI)

Reinforcement is contingent upon the first response that occurs after a fixed, predictable period of time. The amount of time that has to pass before the response is reinforced is constant from one trial to the next. Examples: If a rat receives a food pellet for a lever press performed 5 seconds after the previous one, it's a fixed interval 5 seconds, FI 5-sec. A paycheck = FI 2-week. The organism slows down right after the reinforcement is received and accelerates responding toward the end of the interval (known as the FI scallop). Produces a choppy stop-start pattern of responding.

(6) Which schedules of intermittent reinforcement were covered in class (FR, VR, FI, VI)? What are the terminology and processes that are relevant to each schedule? What are the effects of each schedule on the acquisition, pattern & maintenance of behavior? Be familiar with examples provided in class and be able to analyze other hypothetical examples.

Simple intermittent schedules are divided into: 1) Ratio schedules - determine the number of responses that must be performed for reinforcement to occur. 2) Interval schedules - determine the amount of time that must elapse before reinforcement can occur. These are subdivided into: 1) Fixed Ratio and Fixed Interval schedules - a set number of responses or amount of time is required. 2) Variable Ratio and Variable Interval schedules - an "unknown" number of responses / amount of time is required. IN GRAPHS: Fixed ratio schedules produce rapid responding with a post-reinforcement pause. Variable ratio schedules produce a high, steady rate with no pauses. Fixed interval schedules produce a long pause after reinforcement that yields a "scalloping" effect. Variable interval schedules produce a moderate, steady rate with no pauses.
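The four schedule rules can be summarized as a small decision sketch for whether a given response earns reinforcement. This is illustrative only; the function, parameter names, and example values are hypothetical, and a real VR/VI schedule draws its requirement once per reinforcer rather than per call as simplified here:

```python
import random

def reinforced(schedule, n_responses, elapsed, requirement):
    """Return True if the current response earns reinforcement.

    schedule: 'FR', 'VR', 'FI', or 'VI'
    n_responses: responses since the last reinforcer
    elapsed: seconds since the last reinforcer
    requirement: ratio size (FR/VR) or interval length in seconds (FI/VI)
    """
    if schedule == "FR":   # fixed, predictable number of responses
        return n_responses >= requirement
    if schedule == "VR":   # varying number of responses; mean = requirement
        return n_responses >= random.randint(1, 2 * requirement - 1)
    if schedule == "FI":   # first response after a fixed time
        return elapsed >= requirement
    if schedule == "VI":   # first response after a varying time; mean = requirement
        return elapsed >= random.uniform(0, 2 * requirement)
    raise ValueError(schedule)

print(reinforced("FR", 10, 3.0, 10))   # True: 10th response on FR10
print(reinforced("FI", 1, 4.0, 5))     # False: only 4 s into an FI 5-sec interval
```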

A compound stimulus, made of a white triangle inside a red circle, was used in a key-pecking experiment with pigeons (described in your textbook). In a follow-up assessment, one trained pigeon was found to respond more to the white triangle than to the red circle. Another trained pigeon responded more to the red circle than to the white triangle. This demonstrates that:

Stimulus control developed in both pigeons

(7) How do various stimuli control and affect our everyday life? How can we identify and measure stimulus control in the lab? What are the relevant experiments/examples covered in class?

Stimulus control is demonstrated when the experimental procedure yields variations in responding (differential responding) that are related to variations in the stimuli. If the organism responds one way in the presence of one stimulus and another way in the presence of another stimulus, the behavior is under the control of those stimuli. Stimulus control variability: different stimuli gain control over the behavior of different organisms.

You are teaching your little brother that it is ok to talk with people that he knows but forbidden to talk with strangers. In essence, you are teaching him:

Stimulus discrimination

The "stimulus-element" approach assumes that:

Stimulus elements maintain individual control of behavior even when in a compound stimulus

(7) How can stimulus generalization facilitate therapeutic outcomes? Which methodologies assist in the generalization of the treatment outcomes?

Stimulus generalization is critical for the success of behavioral therapy: learned responses cannot be performed only in the therapist's office but must generalize into everyday life. Methodologies that assist generalization of treatment outcomes: 1. Modification of the treatment situation to resemble natural situations (similar reinforcers and schedules of reinforcement). 2. Sequential modification of the environment - conducting the treatment sessions in additional, new environments (therapist's office, classroom, home). 3. Using many examples during training (different dogs for dog phobia, different elevators). 4. Using common stimuli such as language ("relax", "I can do it") to induce generalization across different situations.

(5) What are the effects of the instrumental response on the efficacy of instrumental conditioning procedures (behavioral variability vs. stereotypy, relevance/belongingness)?

The instrumental response: 1. May require repetition (stereotypy) but may alternatively require response variability. Example: the experimental group received reinforcement for a change in the drawn rectangle; the control group was yoked (received reinforcement whenever the experimental group did, but with no contingency on its own behavior). 2. Relevance, or "belongingness," in instrumental conditioning: certain responses belong with the reinforcer because of the animal's evolutionary history. Pressing a lever or pulling a string belongs with the attempt to escape confinement; yawning does not. If the required response conflicts with a naturally evolved instinctive response, instinctive drift will be seen - a shift toward repetitive, species-specific behavior.

(6) What is the matching law? Which examples of choice behavior were provided in class or are hypothetically possible in our everyday life?

The pigeon's relative response rate on each lever matches the relative rate of reinforcement on that alternative - the matching law: B(L) / [B(L)+B(R)] = r(L) / [r(L)+r(R)]. Thus, the relative rate of responding matches the relative rate of reinforcement. Real-life examples: 1. The chance of a successful shot from an area close to the basket in basketball differs from the chance of success from farther away, and the rate of reinforcement differs as well (2 vs. 3 points, respectively). Examining the ratio of attempts at each choice on the women's and men's basketball teams of a big university found that the relative choices were proportional to the relative rate of reinforcement for each shot. 2. The frequency of sexual activity, unprotected sexual activity, unwanted pregnancies, abortions, and STDs is high when the number of appealing alternatives is low - as predicted by the matching law! Efforts to change these behaviors should therefore include not only education but also the offering of alternative, reinforcing activities.
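Applying the matching law to the casino question above, a worked sketch using the reinforcement rates computed earlier (6 and 4 reinforcers per hour for machines A and B):

```python
# Matching law: B(A) / [B(A) + B(B)] = r(A) / [r(A) + r(B)]
r_a, r_b = 6, 4                          # reinforcers per hour, machines A and B

predicted_share_a = r_a / (r_a + r_b)    # 0.6
predicted_share_b = r_b / (r_a + r_b)    # 0.4

# Anna should allocate about 60% of her plays to machine A,
# i.e., her rate of responding on A will be higher than on B.
print(predicted_share_a, predicted_share_b)
```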

(5) What are the effects of the response-reinforcer relationship on the efficacy of instrumental conditioning (temporal contiguity? response-reinforcer contingency? a delay of reinforcement? controllability of reinforcers)? What is the relevance of these factors to superstitious behaviors?

The relationship between the response and the outcome: a. Response-reinforcer contingency (the necessity of the response in producing the reinforcer; interdependence). A lack of contingency hinders or prevents learning. b. Response-reinforcer temporal contiguity (proximity in time). An immediate reinforcer is more effective than a delayed reinforcer; in the classic example, no learning was evident with a 64-sec delay of reinforcement. Why is instrumental conditioning so sensitive to a delay of reinforcement? Since behavior is continuous, it is hard to indicate which response produced the reinforcement (the "credit-assignment problem"). The problem can be solved by clicker training, marking procedures, and verbal prompts. The controllability of the outcome: the effect of perceived control over the reinforcer/punisher. A series of experiments evaluated the effect of exposure to uncontrollable shock on subsequent escape-avoidance learning in dogs. Exposure to uncontrollable shock disrupted subsequent learning: the learned-helplessness effect. Temporal contiguity (proximity in time) vs. response-reinforcer contingency (interdependence): To support the idea that proximity in time is more important than the contingency between a response and a reinforcer, Skinner demonstrated superstitious behaviors. If reinforcement is provided on a random basis, an association is inferred even if no specific behavior is being reinforced: various behaviors result from accidental, adventitious (by chance), unintentional reinforcement.

(Lab 2) Be able to understand all types of learning procedures demonstrated in the lab, including the meaning of the data (be able to analyze hypothetical data presented in tables or in writing). Be able to use & apply appropriate terminology (CS, US, CR, UR).

a. Your rat will experience classical conditioning - the ability to use stimuli in the environment (a Lego, a clicker) to predict the availability of other stimuli (food, contact - appetitive stimuli). b. Your rat will experience conditioned place preference - one of the applications of classical conditioning to everyday life.

(Lab 3) Be able to understand all types of learning procedures demonstrated in the lab, including the meaning of the data (be able to analyze hypothetical data presented in tables or in writing). Be able to use and apply appropriate terminology (positive reinforcement, negative reinforcement, positive punishment, negative punishment).

Your rat will experience two discrete trial learning procedures ("maze navigation" and "hand climbing") and one free-operant learning procedure ("jumping through a hoop").

