learning final exam


Learning with Aplysia californica

Cellular modification theory: the view that learning permanently alters the functioning of specific neural systems. •Can reflect either -The enhanced functioning of existing neural circuits or -The establishment of new neural connections •Aplysia californica (sea snail) is a simple mollusk that has been used in learning studies. -Three external organs--gill, mantle, and siphon--retract when either the mantle or siphon is touched. -This is a defensive withdrawal response. •Responses in Aplysia satisfy the definition of habituation. -Repeatedly presenting a weak tactile stimulus to Aplysia decreases the strength of its withdrawal reaction. -The decrease in responding is stimulus specific. -That is, the decreased withdrawal reaction occurs only in response to a weak touch on a particular part of the animal's body. The Neuroscience of Learning in Aplysia californica •Habituation of the defensive response of Aplysia lowers the responsiveness of the synapses between sensory and motor neurons. -It reflects a decreased neurotransmitter release from the sensory neuron and decreased activity of the motor neuron. •If the tail is electrically shocked before the siphon is touched, the result is an exaggerated response because of sensitization. -The sensitizing effect is not stimulus specific. •Sensitization results from the increased responsiveness of the synapses between the sensory and motor neurons controlling the withdrawal reflex. -This reflects a greater neurotransmitter release from the sensory neuron and increased activity in the motor neuron.

generality of the law of learning

General laws of learning reveal themselves in any study of behavior, even if the behaviors are not exhibited in a natural setting. •Use of general laws may be advantageous, as past experiences do not interfere with the learning. •These studies of learning are conducted in humans and nonhuman animals. Pavlov believed the specific UCS used was arbitrary •Any event that can elicit a UCR can become associated with the environmental events that precede it. General laws of learning assume that learning is the primary determinant of the way an animal acts. Learning organizes reflexes and random responses so an animal can effectively interact with the environment. Learning allows an animal to adapt to the environment and, therefore, survive. However, some psychologists argue that learning serves to enhance existing organization rather than to create a new organization of behavior.

Hull's associative theory

Hull proposed that primary drives (e.g., hunger, thirst) are produced by states of deprivation •States of deprivation, when aroused, elicit internal arousal •Hull called this internal arousal "drive" •We have biological needs and corresponding psychological drives -For example, we have a biological need for food that is accompanied by a psychological drive called hunger Hull theorized that drive motivates behavior Drive reduction restores homeostasis •Excitatory potential reflects the likelihood that a specific event (S) will cause the occurrence of a specific behavior (R). •Excitatory potential is determined by: -Drive (D), the internal arousal state produced by deprivation, the presence of intense environmental events, or stimuli associated with deprivation or intense environmental events. -Incentive motivation (K), the internal arousal produced by reward or the stimuli associated with reward -Habit strength (H), the strength of the connection between the stimulus and response produced when the response reduces drive -Inhibition (I), the suppression of a response produced when that response fails to produce reward. •According to Hull, if one knows the value of each factor of the mathematical equation, an accurate prediction of behavior is possible Motivation and Associative Learning •Woodworth (1918) defined drive as an intense internal force that motivates behavior. •Hull suggested that drive (D), incentive motivation for reward (K), habit strength, or the strength of the S-R association (H), and the level of inhibition (I) together control the excitatory potential. •According to Hull, excitatory potential (sER) = drive (D) × incentive motivation (K) × habit strength (H) − inhibition (I).
•Unconditioned Sources of Drive -Events that threaten survival activate the internal drive states -Some events that do not threaten survival may also activate the drive state -Highly desirable stimuli, like saccharin, activate drive states -It tastes good but has no caloric value -Highly aversive stimuli, like mild footshock, activate drive states -Mild footshock is aversive, but it does not threaten survival •Acquired Drives -Hull suggested that through classical conditioning, environmental stimuli can acquire the ability to produce an internal drive state. -Acquired drive: an internal drive state produced when an environmental stimulus is paired with an unconditioned source of drive The Reinforcing Function of Drive Reduction •Drive motivates behavior, but each specific behavior depends on the environment -That is, environmental events direct behavior •When drive exists, the environmental cue present automatically elicits a specific response--the response with the strongest habit •Habit strength can be -Innate (sUR) or -Acquired through experience (sHR) -If a response reduces the drive state, the bond between the stimulus and the response is strengthened. •Thus, habit strength increases each time a response produces drive reduction. The Elimination of Unsuccessful Behavior •Unsuccessful behavior causes a drive to persist. •If drive persists, all behavior is temporarily inhibited. •Reactive inhibition: the temporary inhibition of behavior due to the persistence of a drive state after unsuccessful behavior •Conditioned inhibition: the permanent inhibition of a specific behavior as a result of the continued failure of that response to reduce the drive state. •Habit hierarchy: the varying levels of associative strength between a stimulus environment and the behaviors associated with that environment.
-The effective habit will become the dominant habit in the hierarchy -The animal will continue down the hierarchy until a successful response occurs. Incentive Motivation •Hull's original theory (1943) assumed that drive reduction or reward only influences the strength of the S-R bond: a more valuable reward produces greater drive reduction and, thus, establishes a stronger habit. -Once the habit is established, motivation depends on the drive level but not the reward value. •Incentive motivation (K): the idea that the level of motivation is affected by the magnitude of reward, such that the greater the reward magnitude, the higher the motivation to obtain the reward. •However, several studies indicated that the value of reward does influence motivation level. •Based on these findings, Hull revised his theory to include the magnitude of reward. •He theorized that the environmental stimuli associated with reward acquire incentive motivation properties and that cues present with a large reward will produce greater conditioned incentive motivation than stimuli associated with a small reward.
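Hull's multiplicative equation, sER = D × K × H − I, can be made concrete with a small numeric sketch. The values below are purely illustrative (not from Hull); the point is that because drive, incentive motivation, and habit strength multiply, a zero on any one of them prevents a positive excitatory potential.

```python
def excitatory_potential(drive, incentive, habit, inhibition):
    """Hull's equation: sER = D x K x H - I (all values illustrative)."""
    return drive * incentive * habit - inhibition

# A motivated, well-trained response: positive excitatory potential.
strong = excitatory_potential(drive=0.8, incentive=0.9, habit=0.7, inhibition=0.1)

# With zero drive, the multiplicative terms vanish and only
# inhibition remains, so the behavior is not predicted to occur.
no_drive = excitatory_potential(drive=0.0, incentive=0.9, habit=0.7, inhibition=0.1)
```

This captures Hull's claim that a strong habit alone is not enough: without drive (or incentive motivation), the excitatory potential cannot exceed zero.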

partial reinforcement and resistance to extinction

Two factors appear to contribute to the resistance to extinction of instrumental or operant responding •Reward magnitude •Consistency of reward Influence of Reward Magnitude •The influence of reward magnitude on resistance is dependent upon the amount of acquisition training •When the level of acquisition training is low, a large reward produces greater resistance to extinction than does a small reward. •However, with extended acquisition, a small reward in acquisition produces more resistance to extinction -Why does the influence of reward magnitude on resistance to extinction depend on the amount of acquisition training? -When a small reward magnitude is provided during acquisition, an anticipatory goal response (a conditioned anticipation of impending reward) develops very slowly -During extinction, substantial differences in the level of the anticipatory goal response will not occur •Thus, the frustration produced will be small -In contrast, the anticipatory goal response is conditioned rapidly when a large reward is provided during acquisition -Once the anticipatory goal response is strong enough to produce frustration, increases in the strength of the anticipatory goal response should lead to higher levels of frustration during extinction -These increases in frustration, produced as a result of extended training with a large reward, lead to a more rapid extinction of the appetitive response The Partial Reinforcement Effect •Extinction is slower following partial rather than continuous reinforcement •Partial reinforcement effect (PRE): the greater resistance to extinction of an instrumental or operant response following intermittent rather than continuous reinforcement during acquisition -One of the most reliable phenomena in psychology •In general, the lower the number of reinforced responses during acquisition, the greater the resistance to extinction
-However, if reinforcement is too low during acquisition, learning will be minimal and extinction rapid The Nature of the Partial Reinforcement Effect Capaldi (1971, 1994) presented a quite different view of the partial reinforcement effect. According to Capaldi, if reward follows a nonrewarded trial, the animal will then associate the memory of the nonrewarded experience (SN) with the instrumental or operant response. Capaldi suggested that the conditioning of the SN to the operant, or instrumental, behavior causes the increased resistance to extinction with partial reward. During extinction, the only memory present after the first nonrewarded experience is SN. Animals receiving continuous reward do not experience SN during acquisition; therefore, they do not associate SN with the instrumental or operant response. The presence of SN in continuously rewarded animals during extinction changes the stimulus context from that present during acquisition. This change produces a reduction in response strength because of a generalization decrement, a reduced intensity of a response when the stimulus present is dissimilar to the stimulus conditioned to the response. The loss of response strength because of generalization decrement, combined with the inhibition developed during extinction, produces a rapid extinction of the instrumental or operant response in animals receiving continuous reward in acquisition. A different process occurs in animals receiving partial reward in acquisition: These animals associate SN with the instrumental or operant response during acquisition; therefore, no generalization decrement occurs during extinction. The absence of the generalization decrement causes the strength of the instrumental or operant response at the beginning of extinction to remain at the level conditioned during acquisition.
Thus, the inhibition developed during extinction only slowly suppresses the instrumental or operant response, and extinction is slower with partial than with continuous reward. Contingency Management: The use of reinforcement and nonreinforcement to control people's behavior was initially called behavior modification •Behavior modification, however, refers to all types of behavioral treatment •Psychologists now use the term contingency management to indicate that contingent reinforcement is being used to increase the frequency of appropriate behaviors and to eliminate or reduce inappropriate responses •Three stages of contingency management implementation •The Assessment Stage -Calculates the frequency of appropriate and inappropriate behaviors and identifies the situations in which these behaviors occur. -The reinforcers maintaining the inappropriate responses and potential reinforcers for appropriate behavior are determined. •The Contingency Contracting Stage -The desired response is determined and the precise relationship between that response and reinforcement is specified -The contingency will indicate that the inappropriate response will no longer be reinforced -Individuals administering reinforcement are trained to identify and reinforce appropriate behavior •The Implementation of the Contingency Management Program -Token economy -Focus is on the use of positive reinforcement to increase desired behaviors. -Token economies have been used to teach toileting, personal grooming, and mealtime skills to children with intellectual disabilities. -Can also be used to reduce undesired behaviors and to promote desirable behaviors in classrooms, businesses, and psychiatric institutions

ethical use of punishment

There is tremendous societal concern about the appropriate use of punishment. The doctrine of "the least restrictive alternative" dictates that less severe methods of punishment must be tried before more severe treatments are used. It is important to balance the importance of behavior change against respect for the dignity and rights of the individual. In institutional settings, a review board or a human rights committee must evaluate whether the use of punishment is justified.

evaluation of the contiguity theory

There were few early studies to validate Guthrie's approach. •Recent studies have supported some of Guthrie's ideas. -Punishment can intensify an inappropriate behavior when it elicits a behavior that is compatible with the punished response. •However, intense punishment can suppress even compatible responses. -Contiguity between a response and reward is critical to prevent acquisition of competing associations. -Only a portion of the environmental stimuli are active at a given time. •Thus, only some of the potential conditioned stimuli can become associated with the response. -Guthrie's methods of breaking habits have been widely adopted. •However, Guthrie is rarely acknowledged as the source of these methods. Other aspects of Guthrie's theory were found to be inaccurate. •Numerous experiments have disproved his concept of reward. -There can be substantial stimulus changes in the environment after a response, but if those changes do not serve as rewards, the response will not be conditioned. •Reward predicts responses better than either frequency or recency, despite Guthrie's prediction to the contrary. •Though some studies have supported Guthrie's view of single-trial learning, many studies have failed to support it

shaping

(or successive approximation procedure): A technique for acquiring a desired behavior by first selecting a frequently occurring operant behavior, then slowly changing the contingency until the desired behavior is learned Training a Rat to Bar Press •Step 1: reinforce for eating out of the food dispenser •Step 2: reinforce for moving away from the food dispenser •Step 3: reinforce for moving in the direction of the bar •Step 4: reinforce for pressing the bar Shaping Food Consumption in Children With Autism •Many parents of children with autism struggle to increase the number and amount of healthy foods their children eat because of the significant feeding inflexibility exhibited by children with autism. •Hodges, Davis, Crandall, Phipps, and Weston (2017) found that shaping was an effective way to increase the amount and number of healthy foods (chicken, carrots, corn, and peanut butter crackers) eaten by children with autism. •After identifying a preferred reinforcing activity, a small amount of one of the foods was placed in a colored muffin tin and not in a white muffin tin. •The children received access to the reinforcing activity when they ate the food in the colored muffin tin. •The amount of the first food was slowly increased until they ate the final target amount. •At that point, a small amount of the second food was placed in the colored muffin tin and then the amount was slowly increased as the child picked up and ate the second food followed by reinforcement. •The shaping procedure was continued until the children consumed a varied healthy diet.
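The logic of successive approximation — reinforce the current criterion, then tighten it once it is met — can be sketched as a loop. This is a minimal illustration, not a real training protocol: the behavior labels and the fixed ordering of criteria are invented for the example, whereas a real trainer judges each approximation in the moment.

```python
def shape(behavior_samples, criteria):
    """Advance through successively stricter criteria.

    Each time the animal's current behavior meets the active criterion,
    that behavior is (notionally) reinforced and the next, stricter
    criterion becomes active. Returns how many steps were completed.
    """
    step = 0
    for behavior in behavior_samples:
        if step < len(criteria) and criteria[step](behavior):
            step += 1  # criterion met: tighten the requirement
    return step

# Hypothetical criteria mirroring the bar-press steps above:
criteria = [
    lambda b: b == "eats_from_dispenser",
    lambda b: b == "moves_from_dispenser",
    lambda b: b == "approaches_bar",
    lambda b: b == "presses_bar",
]
```

Note that a terminal behavior offered out of order (e.g., a bar press before dispenser training) does not advance the sequence, which mirrors why shaping starts from a behavior the animal already emits frequently.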

Long-delay learning

-Animals and humans are capable of associating a flavor stimulus (CS) with an illness experience (UCS) that occurs several hours later; this is known as flavor-aversion conditioning. -A flavor-aversion gradient does exist, with the strongest conditioning occurring with a 30-minute separation between flavor and illness (Garcia, Clark, & Hankins, 1973). The Influence of Intensity •Unconditioned Stimulus Intensity -Research on UCS intensity and CR strength indicates that your level of fear will depend upon the intensity of the event. -The strength of the CR increases with higher UCS intensity. -The effect of CS intensity on the intensity of the CR is inconsistent •In some cases, intense CSs produce strong CRs •In other cases, intense CSs produce weak CRs -An intense CS does not produce a more intense CR than a weak CS if the organism experiences only a single stimulus (either weak or intense). -However, if the organism experiences both a weak and an intense CS, the intense CS will produce a more intense CR

schedules of reinforcement (cont.)

-DRH schedules are extremely effective -If the DRH requirement is too high, responding cannot be maintained and will decrease Differential Reinforcement of Low Responding Schedule •Differential reinforcement of low responding schedule (DRL): A schedule of reinforcement in which a certain interval of time must elapse without a response, and then the first response at the end of the interval is reinforced. -If a response is made during the interval, the interval is reset and starts over. -DRL schedules limit the response rate -DRL schedules effectively control behavior Differential Reinforcement of Other Behaviors Schedule •Differential reinforcement of other behaviors schedule (DRO): A schedule of reinforcement in which the absence of a specific response within a specified time leads to reinforcement -Unique because it reinforces the failure to exhibit a specific behavior during a particular period of time -Widely used in behavior modification Compound Schedules •Compound schedule: A complex contingency where two or more schedules of reinforcement are combined
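The DRL timing rule described above can be sketched as a short simulation. This is a minimal illustration under stated assumptions: the function name is mine, timing is assumed to start at 0, and every response (reinforced or not) restarts the interval, as in the definition.

```python
def drl_reinforcements(response_times, interval):
    """Count reinforced responses under a DRL schedule.

    A response is reinforced only if at least `interval` seconds have
    elapsed since the previous response; any response made before the
    interval elapses resets the clock.
    """
    reinforced = 0
    last_response = 0.0  # assume timing starts at 0
    for t in response_times:
        if t - last_response >= interval:
            reinforced += 1
        last_response = t  # every response resets the interval
    return reinforced
```

For example, with a 6-second DRL, responses at 5, 12, 14, and 25 seconds earn reinforcement only at 12 and 25 — which shows how the schedule rewards low response rates: the 14-second response comes too soon and also resets the clock.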

neuroscience of predictiveness and surprise

-The corticomedial amygdala is activated during a surprising event, while a predictive stimulus signaling an event that is no longer surprising activates the basolateral amygdala (Boll et al., 2013). -Amygdalar circuits are thus activated by both predictive and surprising events. -Holland (2013) found results in humans consistent with those of animal studies, corroborating the role of these brain circuits in the regulation of predictiveness and surprise.

learning

A relatively permanent change in behavior that results from experience. •Learning reflects a change in the potential to behave in the future. We must be sufficiently motivated to translate learning into a functional behavior. •Behavioral changes that learning causes are not always permanent, but they can be resistant to change. -We can learn new, competing, and adaptive behaviors. -We can forget behaviors that we have already acquired. Not all behavioral change results from learning. Behavioral changes can also result from: •Motivational change •Maturation •Neurological damage •Fatigue •Illness

Occasion Setting

A stimulus can prepare an animal to respond to the CS. The Properties of a Pavlovian Occasion-Setting Stimulus •One stimulus may have the ability to enhance the response to another stimulus. •In the absence of the occasion-setting stimulus, the CS has no effect on behavior. •Called occasion setting because one stimulus "sets the occasion" for another stimulus to elicit a conditioned response -For example, the end of a meal may set the occasion for a cigarette to elicit a craving/smoking response. •Interoceptive (i.e., internal) stimuli can also act as occasion setters for conditioned stimuli •Rescorla (1986) argued that the facilitating effect of a stimulus is produced by lowering the threshold of reaction to the CS. •He suggested that this facilitation effect is the opposite of the effect of conditioned inhibition, which raises the reaction threshold. •An occasion setter facilitates the response to an excitatory CS only if that CS has previously served as a target stimulus for other occasion setters. Pavlovian Occasion-Setting Stimuli and Operant Behavior •An occasion setter can also facilitate an operant response. -For example, many people smoke at the end of a meal, and seeing a cigarette after a meal makes a smoker crave a cigarette. However, seeing a cigarette does not always make a smoker crave one. An occasion setter may facilitate a response to seeing others smoking. -Smoking is perceived as more pleasurable after a meal. If the smoker sees a cigarette after a meal, craves it, but does not have a cigarette, he or she may ask a friend for one. Discriminative Stimuli and Conditioned Responses •The elicitation of a CR by an excitatory CS is influenced not only by Pavlovian occasion setting but also by discriminative stimuli. •The presence of the SD can enhance the conditioned response to the CS just as an occasion setter does.
Context as Occasion-Setting Stimulus •Chaudhri, Sahuque, and Janak (2008) first paired an auditory stimulus (CS) with ethanol (UCS) in rats in one context. The CR was then extinguished by presenting the auditory CS without the ethanol UCS in a second context. Chaudhri, Sahuque, and Janak found that the auditory CS would not elicit the CR in the second context (negative occasion-setting stimulus), while the auditory CS elicited the CR in the first context (positive occasion-setting stimulus), a process referred to as resurgence of the extinguished CR. •Trask, Schepers, and Bouton observed that the suppression of responding following extinction does not mean that the original learning had been erased or unlearned, but only that it is inhibited by a negative or inhibitory occasion setter. •Gonzalez, Garcia-Burgos, and Hall (2012) reported evidence that context can serve as an occasion-setting stimulus for flavor preference learning. In their research, a flavor was paired with sucrose in context X (excitatory occasion-setting stimulus) but not in context Y (inhibitory occasion-setting stimulus). Gonzalez, Garcia-Burgos, and Hall found that their rats consumed the flavor in context X but not in context Y. The Neuroscience of Occasion-Setting Stimuli •Memory plays an important role in occasion setting. An animal must remember that a stimulus is associated with reinforcement in the presence of some stimuli (e.g., a specific context) and that the same stimulus is associated with nonreinforcement in the presence of other stimuli (e.g., a different context). •The cortical circuits that acquire and maintain this conditionally regulated behavior involve the orbitofrontal cortex (part of the prefrontal cortex) and the dorsal hippocampus. Shobe, Bakhurin, Claar, and Masmanidis (2017) found that occasion-setting stimuli activate the orbitofrontal circuit. •Meyer and Bucci (2016) reported that orbitofrontal lesions impair occasion setting.
•Yoon, Graham, and Kim (2011) observed that hippocampal lesions disrupted contextually controlled occasion setting.

concept learning

A symbol that represents a class of objects or events with common properties. For example, think of airplanes: •They have fixed wings •Are heavier than air •Are driven by a screw propeller or high-velocity rearward jet •Are supported by the dynamic reactions of the air against the wings These shared attributes allow us to easily identify these objects as airplanes. Concept learning significantly enhances our ability to effectively interact with the environment. Rather than separately labeling and categorizing each new event or object, we incorporate it into our existing concepts. Concepts have two main properties: •Attributes •Rules The Structure of a Concept •Attributes and Rules -Attribute: Any feature of an object or event that varies from one instance to another. •Can have a fixed value or continuous values -Rule: A rule defines the objects or events that are examples of a particular concept. -Types of Rules •Simple rules •An object/event needs only one attribute to be an example of the concept. •Complex rules •An object/event is defined by the presence or absence of two or more attributes •The Prototype of a Concept -Family resemblance: The degree to which a member of a concept exemplifies the concept. •The more attributes an object/event shares with other members of a concept, the more it exemplifies the concept. -Prototype: The object that has the greatest number of attributes characteristic of the concept and that is therefore the most typical member of that concept. •For example, a chair is often considered the prototype of the concept furniture. Studying Concept Learning •Concept learning involves identification of the properties that characterize a concept as well as those that do not. •Herrnstein et al. (1976) found that pigeons demonstrated concept learning. •They also later found that learning was slower when no concept defined the stimuli. •D'Amato and Van Sant (1988) found that monkeys could learn the concept of humans.
•Vonk and MacDonald (2002) found that a female gorilla could learn to discriminate between primate and non-primate animals. •Animals can also learn the concepts of same and different. Theories of Concept Learning •Associative Theory -Hull (1920) envisioned concept learning as a form of discrimination learning. -Said that concepts have both relevant and irrelevant attributes. -As a result of reinforcement, response strength increases to the attributes characteristic of the concept. -Hull's view that associative processes alone control concept learning was supported until the late '50s when further research showed cognitive processes are also involved in concept learning. •Cognitive Process in Concept Learning -Bruner, Goodnow, and Austin (1956) suggested that a concept is learned by testing hypotheses about the correct solution. •If the first hypothesis is correct, the individual has learned the concept. •If it is incorrect, another hypothesis will be generated and tested, and this will be repeated until the correct solution is discovered. •Research by Levine (1966) suggests that individuals do engage in hypothesis testing of concepts.
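The attribute-and-rule structure described above can be sketched in code. This is an illustrative toy, not a model from the literature: the attribute names, the two-attribute "airplane" rule, and the example objects are all invented for the sketch.

```python
def matches_simple(obj, attribute, value):
    """Simple rule: a single attribute decides concept membership."""
    return obj.get(attribute) == value

def matches_conjunctive(obj, required):
    """Complex (conjunctive) rule: every listed attribute must match."""
    return all(obj.get(a) == v for a, v in required.items())

# Hypothetical two-attribute rule for 'airplane':
airplane_rule = {"fixed_wings": True, "heavier_than_air": True}

glider = {"fixed_wings": True, "heavier_than_air": True}
balloon = {"fixed_wings": False, "heavier_than_air": False}
```

A glider satisfies this two-attribute rule even though it is not an airplane, which illustrates why rules with few attributes over-include and why family resemblance and prototypes matter for richer concepts.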

errorless discrimination learning

A training procedure in which the gradual introduction of the SΔ leads to responding to the SD with few or no errors to the SΔ. •Some discriminations are more difficult to acquire than others. -For example, pigeons can discriminate colors better than line tilt. Terrace's Training Procedure During the first phase of this procedure, the presentation of the SΔ, a dark key, lasted 5 seconds and then increased by 5 seconds with each trial until it reached 30 seconds. In the second phase of SΔ introduction, the duration of the SΔ was kept at 30 seconds, and the intensity of the SΔ was increased until it became a green light. During the final phase of the study, the duration of the SΔ was slowly increased from 30 seconds to 3 minutes. •By contrast, for pigeons receiving constant training, the SΔ was initially presented at full intensity and full duration. One progressive training group and one constant training group were given an early introduction to the SΔ; the SΔ was presented during the first session that a pigeon was placed in the training chamber. The other two groups (one progressive, one constant) received late SΔ introduction; the SΔ was introduced following 14 sessions of key pecking with the SD present. •Errorless discrimination learning has been consistently observed since Terrace's original studies. In addition to errorless discrimination learning reported in pigeons by Terrace, errorless discrimination learning has been found in rats (Ishii, Harada, & Watanabe, 1976), chickens (Robinson, Foster, & Bridges, 1976), sea lions (Schusterman, 1965), and primates (Leith & Haude, 1969). Recently, Arantes and Machado (2011) were able to demonstrate errorless learning of a conditional temporal discrimination in pigeons. Nonaversive SΔ •The behavioral characteristics found with standard discrimination training are not observed with errorless discrimination training. -The peak shift does not appear. -It produces the same level of responding to the SD as nondiscrimination training does.
-Subjects receiving errorless discrimination training do not respond to the SΔ or to stimuli other than the SD -Drugs that inhibit frustration-induced behaviors have no effect on responding to the SD. •These findings suggest that the SΔ is not aversive. •The SΔ does not develop inhibitory control; instead, subjects learn only to respond to the SD. Application: Errorless Discrimination Training in Humans Gollin and Savoy (1968) taught preschool children a shape discrimination using the fading technique; Moore and Goldiamond (1964) used this procedure to teach a pattern match; and Corey and Shamov (1972) used it to teach oral reading. In each of these studies, the children given the faded series of SΔ stimuli made few errors, while other subjects who received standard discrimination training made many incorrect responses to the SΔ. •Learning without errors also has been shown to facilitate discrimination learning in adults (Mueller, Palkovic, & Maynard, 2007). •Kessels and de Haan (2003) reported that young adults and older adults learned face-name associations more readily with errorless than with trial-and-error discrimination learning. •Komatsu, Mimura, Kato, Wakamatsu, and Kashima (2000) found that an errorless discrimination training procedure facilitated face-name associative learning in patients with alcoholic Korsakoff syndrome. •Clare, Wilson, Carter, Roth, and Hodges (2002) observed that face-name associations were more easily learned with an errorless discrimination procedure than with trial-and-error discrimination learning in individuals in the early stage of Alzheimer's disease. •Wilson and Manley (2003) observed that errorless discrimination learning enhanced self-care functioning in individuals with severe traumatic brain injury. •Kern, Green, Mintz, and Liberman (2003) reported that errorless discrimination learning improved social problem-solving skills in persons with schizophrenia.
•Benbassat and Abramson (2002) reported that novice pilots learned to respond to a flare discrimination cue more efficiently (smoother and safer landings) with an errorless discrimination than with a control procedure involving errors.

Tolman's Purposeful Behavior

An understanding or knowledge of the environment Psychologists studying cognition have focused on two areas of inquiry •The individual's understanding of the structure of the psychological environment and how this understanding (or expectation) controls behavior •The processes that enable an individual to acquire knowledge of the environment Learning Principles •Tolman believed that behavior has both purpose and direction -Behavior is "goal-oriented" because we are motivated to either approach a particular reward or avoid a specific aversive stimulus -We are capable of understanding the structure of our environment •There are paths leading to our goals and tools we can employ to reach the goals -Experience provides us with an expectation of how to use these paths and tools •The fact that behavior involves purpose and expectation does not mean that we are aware of either the purpose or the direction of our behavior •We also expect a specific outcome to result from our behavior •Certain events in the environment convey information about where our goals are located -We are able to reach our goals only after we have learned to recognize the signs leading to reward or punishment •Tolman suggested that we do not have to be reinforced to learn -However, our expectations will not be translated into behavior unless we are motivated •Motivation has two functions: -It produces a state of internal tension that creates a demand for the goal object -It determines the environmental features we will attend to •We do not respond in fixed or stereotyped ways -Behavior remains flexible enough to attain goals Place-Learning Studies •Tolman suggested that people expect reward in a certain place -They follow paths leading to that place -Hull, by contrast, held that environmental cues elicit specific motor responses that have led to reward in the past. -Tolman designed experiments to distinguish behavior based on S-R associations from behavior based on spatial expectations.
•T-Maze Experiments -Tolman conducted a series of elegant T-Maze experiments that supported his hypothesis about place learning -Tolman, Ritchie, and Kalish (1946) demonstrated that superior learning occurs when we can obtain reward in a certain place rather than by using a certain response -Tolman et al. (1946) showed that a rat will go to a place associated with reward even when an entirely new motor response is required to reach the place -This suggests that expectations, not S-R associations, controlled behavior -However, other studies have produced inconsistent results, with some studies suggesting that response learning is superior to place learning -The role of experience may explain some of the inconsistency in results •Well-learned behaviors are typically under the control of mechanistic, rather than cognitive, processes •Less familiar behaviors may require more cognitive processing -Another possible explanation addresses the presence or absence of spatial cues •Without spatial cues, the organism may be forced to rely on motor responses to obtain reward. •Alternative-Path Studies -Cognitive map: Spatial knowledge of the physical environment gained through experience -Results of research with cognitive maps indicate that knowledge of the environment, rather than "blind habit", often influences our behavior. •Attempts to replicate Tolman's research have not always produced consistent results -Thus, other factors may influence learning The Neuroscience of Cognitive Maps •Place and Grid Cells -Place cells: Neurons in the hippocampus that respond when an animal is in a particular location in the physical environment -Place fields: The specific regions of the environment in which a given place cell fires -Grid cells: Neurons in the entorhinal cortex that respond to specific locations in the physical environment -As animals move through the environment, grid cells provide input to place cells.
-Entorhinal cortex provides input to the place cells in the hippocampus (Pilly & Grossberg, 2012). -Hippocampal place cell activity guides goal-directed behavior. -The coordinated activity of grid cells and place cells appears to represent the cognitive map. Is Reward Necessary for Learning? •Although Thorndike and Hull suggested that the consequences were important for learning to occur, Tolman disagreed -His research strongly influenced the nature of the incentive motivation concept •Tolman suggested that simultaneous experience of two events is sufficient for learning to occur. •Latent-Learning Studies -Tolman believed that we can acquire a cognitive map or knowledge of the spatial characteristics of a specific environment merely by exploring the environment. •Reward is not necessary for us to develop a cognitive map. •Reward influences behavior only when we must use that information to obtain reward. -Tolman distinguished between learning and performance by asserting that reward motivates behavior but does not affect learning. -Latent learning: Knowledge of the environment gained through experience but not evident under current conditions -Latent learning appears to depend on •The relevance of the motivation •The point in the experiment when reward is introduced
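The place-cell idea above can be sketched as a toy model. This is purely illustrative: the `place_cell` helper, the circular field, and the all-or-none firing rule are assumptions for demonstration; real place fields are defined by graded firing-rate maps, not hard boundaries.

```python
import math

# Toy place-cell model: a "place cell" fires when the animal's position
# falls inside its place field (here idealized as a circular region).
# The function name, radius, and threshold rule are hypothetical.

def place_cell(field_center, radius=1.0):
    cx, cy = field_center
    def fires(x, y):
        # fires only when the animal is within the cell's place field
        return math.hypot(x - cx, y - cy) <= radius
    return fires

cell = place_cell((2.0, 3.0))
assert cell(2.2, 3.1)       # inside the field: the cell fires
assert not cell(5.0, 5.0)   # outside the field: silent
```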

Flavor aversion learning

Avoidance of a flavor that precedes an illness experience Long-delay learning: The association of a flavor with an illness that occurred even several hours after the flavor was consumed. The Selectivity of Flavor Aversion Learning •Garcia and Koelling (1966) demonstrated this phenomenon in a groundbreaking study with rats. •Seligman (1970): rats possess an evolutionary preparedness for flavor aversion learning. •Rats can also associate an environmental cue with illness. -Taste cues, however, are especially salient. •Birds acquire visual aversions more readily than taste aversions. -Rely more heavily on visual system for food. -Search for food during the day. •Rats are nocturnal and rely more heavily on gustatory information to find food. The Nature of Flavor Aversion Learning •Learned-Safety Theory: The recognition that a food can be safely consumed when illness does not follow food consumption -A mechanism unique to learning a flavor aversion evolved to allow animals to avoid potentially lethal foods. -Ingestional neophobia: The avoidance of novel foods •Prevents animals from consuming large quantities of a potentially poisonous food, so that if the food is poisonous, the animal will become sick but not die. •Concurrent Interference View -The prevention of learning when a stimulus intervenes between the conditioned and unconditioned stimuli or when a behavior occurs between the operant response and reinforcement -Long-delay learning occurs due to the absence of concurrent interference. •Occurs because the animal is unlikely to eat another food before the onset of illness. Flavor Aversion Learning in Humans •Many children in the early stages of cancer develop flavor aversions due to toxic chemotherapy. •Adult and child cancer patients receiving radiation therapy also develop flavor aversion. -Causes weight loss in these individuals. The Neuroscience of Flavor Aversion Learning •The lateral and central amygdala also play an important role in flavor aversion learning.
•Wig, Barnes, and Pinel (2002) found that stimulation of the lateral amygdala after rats consumed a novel flavor resulted in the development of an aversion to that flavor. •Tucci, Rada, and Hernandez (1998) observed an increase in glutamate activity in the amygdala when rats were exposed to a flavor that had previously been paired with illness. •Yamamoto and Ueji (2011) reported that the neural circuit mediating flavor aversions begins with detection of the aversive flavor by the gustatory cortex in the parietal lobe, proceeds to the amygdala and the thalamic paraventricular nuclei, and ends with the prefrontal cortex eliciting avoidance of the aversive flavor. •Agüera and Puerto (2015) observed that damage to the central nucleus of the amygdala impaired flavor aversion learning. •It would appear that the lateral and central amygdala play a central role in all aversive conditioning experiences--that is, experiences involving either pain or illness.

avoidance of aversive events

Avoidance response: a behavioral response that prevents an aversive event Types of Avoidance Behavior •There are two classes of avoidance behavior: -Active avoidance: an overt response to a feared stimulus that prevents an aversive event -Passive avoidance: a contingency in which the absence of responding leads to the prevention of an aversive event •Active Avoidance Learning -Organisms can learn to make responses that prevent the occurrence of aversive events •Among humans, many of these are learned in childhood •Passive Avoidance Learning -Another way to prevent exposure to an aversive event is to simply stay away from the event •Passive avoidance allows an organism to prevent exposure to aversive events by not making a response that will bring it into contact with the aversive stimulus How Readily Is Avoidance Behavior Learned? •There are two variables that appear to influence avoidance learning -Severity of the aversive event -Delay interval between CS and UCS •The Severity of the Aversive Event -In most cases, the greater the severity of the aversive event, the more readily the subject will learn the avoidance response •Greater severity also results in a higher final level of performance in most cases -However, the opposite is true in cases of two-way active avoidance tasks •Response is acquired more slowly with greater severity •Final level of performance is lower with greater severity •The Delay Interval Between the Conditioned Stimulus and the Unconditioned Stimulus -The interval between the CS and the UCS also affects the acquisition of an avoidance response. The literature (Hall, 1979) shows that the longer the CS-UCS interval, the slower the acquisition of the avoidance behavior.
-It seems reasonable to assume that the influence of the CS-UCS interval on fear conditioning is also responsible for its effect on avoidance learning: as the level of fear diminishes with longer CS-UCS intervals, motivation to escape the feared stimulus weakens, and the opportunity to learn to avoid it therefore lessens.

animal misbehavior

Breland and Breland (1961) initiated the use of operant procedures to teach exotic behaviors to animals. •Animals learned the behaviors but began showing instinctive behaviors. Instinctive drift: When operant behavior deteriorates despite continued reinforcement due to the elicitation of instinctive behaviors. Animal misbehavior: Operant behavior that deteriorates, rather than improves, with continued reinforcement •Due to strengthening of instinctive behaviors It is suggested that animal misbehavior may also be a result of Pavlovian conditioning; possibly both types of conditioning result in animal misbehavior. For misbehavior to develop, stimuli resembling natural cues must be consistently paired with the reinforcer and must reinforce naturally occurring species-typical behavior.

conditioning of an operant response

Conditioned drug tolerance may also involve the presence of interoceptive cues •Interoceptive cues: stimuli originating within the body that are related to the functioning of an internal organ or the receptors that the internal organ activates •Experimental evidence for a role of conditioning in drug tolerance -Exposure to the CS without the UCS, once the association has been established, results in extinction of the opponent CR (Siegel, 1977) •The elimination of the response to the CS produces a stronger reaction to the drug itself -An increased response to the drug can be induced by changing the stimulus context in which the drug is administered (Siegel et al., 1978) •A novel environment does not elicit the opponent CR •Thus, there is reduced drug tolerance •The Conditioning of an Opponent Response •Contextual cues associated with drug exposure have been shown to produce withdrawal symptoms when the contextual cues are experienced without the drug. The Conditioned Withdrawal Response •Conditioned withdrawal response: the conditioned craving and motivation to resume drug use produced when environmental cues associated with withdrawal are experienced.
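Siegel's compensatory-response account described above can be sketched as a toy calculation: the drug context comes to elicit an opponent (compensatory) CR that subtracts from the drug's unconditioned effect, so the observed effect shrinks in the familiar context but not in a novel one. All values, names, and the learning rate here are illustrative assumptions, not empirical estimates.

```python
# Toy sketch of conditioned (situational) drug tolerance.
# DRUG_EFFECT and the learning rate are hypothetical values.

DRUG_EFFECT = 1.0

def net_drug_effect(opponent_cr):
    """Observed response = unconditioned drug effect minus the opponent CR."""
    return DRUG_EFFECT - opponent_cr

# Repeated pairings of context and drug strengthen the opponent CR
opponent_cr = 0.0
for _ in range(10):
    opponent_cr += 0.2 * (DRUG_EFFECT - opponent_cr)

effect_familiar = net_drug_effect(opponent_cr)   # tolerance in the usual context
effect_novel = net_drug_effect(0.0)              # a novel context elicits no opponent CR

# Matches Siegel et al. (1978): the drug response is stronger in a novel environment
assert effect_novel > effect_familiar
```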

ethics of conducting research

Conducting Research With Human Subjects •A Human Subjects Ethics Committee must approve all research using human participants through an Institutional Review Board (IRB). •The committee weighs the benefits of the research against the risks. •The informed consent form must: -Inform the subject about the nature of the research -Assure confidentiality -Inform subjects that they are free to withdraw at any time •After the study is complete, all subjects must be debriefed. -Debriefing informs subjects about the true nature of the study -If deceit is involved, the researcher reveals it during debriefing •The Use of Nonhuman Animals in Research •Animal models are important for drawing causal inferences that cannot be ascertained with human subjects. •Use of animals is regulated by an Institutional Animal Care & Use Committee (IACUC) •They ensure that animals will experience minimal distress and will not be harmed needlessly.

extinction of an instrumental or operant responses

Continued failure of behavior to produce reinforcement causes the strength of the response to diminish until it is no longer performed. The Discontinuance of Reinforcement •Extinction: The elimination or suppression of a response caused by the discontinuation of reinforcement or the removal of the unconditioned stimulus •When reinforcement is first discontinued, the rate of responding remains high -Under some conditions, it even increases before it begins to decrease Spontaneous Recovery •Spontaneous recovery: The increase in responding to the CS (in the absence of the UCS) after an interval following extinction -If the CS is presented in the absence of the UCS, a temporary inhibition develops that suppresses all responding. -When the temporary inhibition dissipates, the CR reappears -This increase in responding after an interval following extinction is spontaneous recovery •A similar recovery occurs when an interval of time follows the extinction of an instrumental or operant behavior. •If the CS continues in the absence of the UCS, a conditioned inhibition develops specifically to the CS. -The conditioning of inhibition to the CS leads to a loss of spontaneous recovery. -Loss of spontaneous recovery also occurs if operant or instrumental behavior goes unrewarded.
•The Aversive Quality of Nonreward -Nonreward elicits an aversive internal frustration state -Escape from this aversive situation is rewarding Activation of an Instrumental or Operant Behavior •Nonreward sometimes increases, rather than decreases, the intensity of behavior •Nonreward will motivate appetitive responding when frustration cues have been conditioned to elicit appetitive instead of avoidance behavior •Sequential theory suggests that a subject remembers whether reward or nonreward occurred on the last trial •When reward follows a nonrewarded trial, the memory of the nonreward becomes associated with the appetitive response •The conditioning of the memory of nonreward to the appetitive response leads to faster responses •Thus, during extinction, the memory of nonreward elicits continued responding
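The two-inhibition account of extinction and spontaneous recovery described above can be sketched as a toy simulation: extinction builds a fast temporary inhibition that dissipates with rest (producing spontaneous recovery) and a slower conditioned inhibition specific to the CS that does not. The functions, rates, and trial counts are illustrative assumptions, not a standard model.

```python
# Toy model: response strength = acquisition strength minus two kinds of
# inhibition built during CS-alone (extinction) trials. All parameters
# are hypothetical values chosen for illustration.

def extinction_session(v, temp_inhib, cond_inhib, n_trials,
                       temp_rate=0.3, cond_rate=0.05):
    """Run n_trials CS-alone presentations; both inhibitions accumulate."""
    for _ in range(n_trials):
        temp_inhib += temp_rate * (v - temp_inhib)   # fast, temporary inhibition
        cond_inhib += cond_rate * (v - cond_inhib)   # slow, CS-specific inhibition
    return temp_inhib, cond_inhib

def rest_interval(temp_inhib, decay=0.9):
    """Temporary inhibition dissipates with time; conditioned inhibition persists."""
    return temp_inhib * (1 - decay)

v = 1.0                                   # associative strength after acquisition
temp, cond = extinction_session(v, 0.0, 0.0, n_trials=10)
cr_end_of_extinction = max(0.0, v - temp - cond)

temp = rest_interval(temp)                # delay after the extinction session
cr_after_rest = max(0.0, v - temp - cond)

# Spontaneous recovery: responding is higher after the rest interval
assert cr_after_rest > cr_end_of_extinction
```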

continuity versus discontinuity

Continuity theory of discrimination learning: The Hull-Spence model suggests that the development of discrimination is a continuous and gradual acquisition of excitation to SD and inhibition to SΔ. •According to Krechevsky and Lashley, the learning of discrimination is not a gradual, continuous process. -Instead, it is a process of hypothesis testing. Noncontinuity theory of discrimination learning: Suggests that discrimination is learned rapidly once an animal discovers the relevant dimension and attends to relevant stimuli. Research has supported both the Hull-Spence view and the Krechevsky-Lashley view. Continuity theory explains how the emotional components of discrimination are learned, and noncontinuity theory describes the attentional aspects of discrimination learning.
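The contrast between the two views can be sketched as two hypothetical learning curves: gradual, incremental growth (continuity) versus chance-level responding until the relevant dimension is discovered, then an abrupt jump (noncontinuity). All parameters below are assumptions chosen for illustration.

```python
# Illustrative learning curves for the two views of discrimination learning.
# Rates, trial counts, and the discovery trial are hypothetical values.

def continuity_curve(n_trials, rate=0.15):
    """Hull-Spence style: accuracy grows gradually toward mastery."""
    p, curve = 0.5, []
    for _ in range(n_trials):
        p += rate * (1.0 - p)          # incremental gain on every trial
        curve.append(p)
    return curve

def noncontinuity_curve(n_trials, discovery_trial=12):
    """Krechevsky-Lashley style: chance until the relevant dimension is found."""
    return [0.5 if t < discovery_trial else 0.95 for t in range(n_trials)]

c = continuity_curve(20)
n = noncontinuity_curve(20)

assert c[0] > 0.5 and c[-1] > 0.9          # smooth, monotonic improvement
assert n[10] == 0.5 and n[12] == 0.95      # abrupt jump at the discovery trial
```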

Attentional View

Dickinson et al. (1976) found that when a surprising event (one UCS rather than two) occurred in the second phase of their study, blocking was not observed, thus providing evidence that uncertainty plays an important role in Pavlovian conditioning (Le Pelley et al., 2016). •Holland et al. (2002) provided additional evidence in their two-phase study, observing faster conditioning of the light-food association in the surprise condition than in the consistent condition. •The lack of predictiveness in the first phase reduced the attention to and associability of both the light and tone. •Omitting the tone when food was presented was surprising and reinstated attention to and associability of the light, resulting in significantly faster conditioning to the light in the surprise condition. •Blocking of conditioning to CS2 can be negated, a process called unblocking, if the number of UCS experiences is increased (Bradford & McNally, 2008). •An increase in the number of UCS events is unexpected, which leads to renewed attention to and increased associability of CS2 and, consistent with the Pearce-Hall model, the association of CS2 with the UCS. The Neuroscience of Predictiveness and Uncertainty •Both predictiveness and surprise are necessary attentional components for associative learning •Surprise enables associations to be acquired, and predictiveness allows attention to stimuli that elicit CRs. •The corticomedial amygdala is activated by a surprising event, while the basolateral amygdala is activated by the presentation of a stimulus that is predictive of an event that is no longer surprising (Boll et al., 2013). •Thus, different amygdala circuits appear to be activated by predictive and surprising events.
•The Retrospective Processing View Retrospective Processing: the continual assessment of contingencies, which can lead to a reevaluation of prior conditioning of a CS with a UCS •Suggests that learning changes over time as an animal encounters new information about the degree of contingency between a CS and a UCS Backward blocking: reduced conditioned response to the second stimulus caused when two stimuli are paired with a UCS, followed by the presentation of only the first stimulus with the UCS Rescorla-Wagner associative model suggests that the availability of associative strength determines whether a CR develops to a CS paired with a UCS Comparator theory argues that performance of a CR involves a comparison of the response strength to the CS and to competing stimuli Mackintosh's attentional theory proposes that the relevance of and attention to a stimulus determine whether that stimulus will become associated with the UCS Baker's retrospective processing approach suggests that conditioning involves the continuous monitoring of contingencies between a CS and a UCS, with the recognition of a lack of predictiveness diminishing the value of the CS
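The Rescorla-Wagner model mentioned above has a simple formal core, ΔV = αβ(λ − ΣV), which makes blocking easy to simulate. The sketch below is illustrative: the parameter values and trial counts are assumptions, and the unblocking step follows the model's convention of treating an increased UCS as a larger λ (the attentional account of unblocking in the text is a different mechanism).

```python
# Minimal Rescorla-Wagner sketch of blocking and unblocking.
# alpha, beta, lambda, and trial counts are illustrative assumptions.

def rw_trial(strengths, present, lam, alpha=0.3, beta=1.0):
    """One conditioning trial: each present CS moves toward lambda by the
    shared prediction error (lambda - total associative strength)."""
    v_total = sum(strengths[cs] for cs in present)
    for cs in present:
        strengths[cs] += alpha * beta * (lam - v_total)

V = {"CS1": 0.0, "CS2": 0.0}

# Phase 1: CS1 alone is paired with the UCS until well conditioned
for _ in range(30):
    rw_trial(V, ["CS1"], lam=1.0)

# Phase 2: CS1 and CS2 are presented together with the same UCS
for _ in range(30):
    rw_trial(V, ["CS1", "CS2"], lam=1.0)

# Blocking: CS1 already predicts the UCS, so little strength accrues to CS2
assert V["CS1"] > 0.9 and V["CS2"] < 0.1

# Unblocking: a larger UCS (bigger lambda) restores the prediction error,
# so CS2 now gains associative strength
for _ in range(30):
    rw_trial(V, ["CS1", "CS2"], lam=2.0)
assert V["CS2"] > 0.3
```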

functionalism

Early school of thought in psychology that emphasized instinctive origins and adaptive function(s) of behavior. According to the theory of functionalism: •The function(s) of behavior is/are to promote survival •Survival is promoted through the development of adaptive behaviors The functionalists expressed many ideas about the mechanisms that controlled human behavior.

biology of reinforcement

Electrical Stimulation of the Brain (ESB) •ESB: direct electrical stimulation of a specific brain area through electrodes implanted in the brain -High levels of instrumental or operant behavior are exhibited when responding leads to activation of reinforcement areas of the brain. -Many species engage in this behavior. The Influence of the Medial Forebrain Bundle (MFB) •The MFB is part of the limbic system, which is part of the brain's reinforcement center •Stimulation of MFB has four characteristics: -Highly reinforcing -Motivates behavior -Its functioning is stimulated by the presence of reinforcers -Its reinforcing effects are enhanced by deprivation. •The reinforcing effects of stimulation of the MFB have been demonstrated in animals and humans. •The Reinforcing Effect of MFB Stimulation -Activation of the MFB reinforces behavior •Rats have engaged in ESB in preference to food, water, or grooming •Eventually starved themselves to death -Also has been shown to eliminate pain in cancer patients. -Produces a strong euphoria that lasts several hours.
•The Motivational Influence of MFB Stimulation -Leads to Stimulus-Bound Behavior: Behavior tied to the prevailing environmental conditions and elicited by stimulation of the brain's reinforcement center -MFB stimulation leads to performance of other behaviors related to the stimulus present when the feeling of pleasure is elicited -For example, brain stimulation leads to eating when food is available and to drinking when water is available •The Influence of Reinforcement on MFB Function -The presence of reinforcement enhances the functioning of the MFB -The effect of this enhanced functioning is an increased response to reinforcer •The Influence of Deprivation on the MFB -A physiological need increases the incentive value of reinforcers -Increased activity in the brain's reinforcement system is one probable mechanism behind this enhancement •Mesolimbic Reinforcement System -The MFB is only part of the brain's reinforcement center. •Mesolimbic reinforcement system: A central nervous system structure that mediates the influence of reinforcement on behavior -Contains two neural pathways •Tegmentostriatal pathway: A neural pathway that begins in the lateral hypothalamus, goes through the MFB and ventral tegmental area, terminates in the nucleus accumbens, and governs the motivational properties of reinforcers •Nigrostriatal pathway.
•Ventral tegmental area (VTA): A structure in the tegmentostriatal reinforcement system that projects to the nucleus accumbens •Nucleus accumbens (NA): A basal forebrain structure that plays a significant role in the influence of reinforcement on behavior •Nigrostriatal pathway: A neural pathway that begins in the substantia nigra and projects to the basal ganglia -Serves to facilitate reinforcement-induced enhancement of memory consolidation. •The Function of the Two Reinforcement Systems -It is likely that the two pathways regulate two different aspects of reinforcement. -The tegmentostriatal pathway detects whether sufficient motivation is present for voluntary behavior to occur. -Motivational variables do not influence the behavioral effects of stimulation to structures in the nigrostriatal pathway. -Structures in the nigrostriatal pathway play a role in the storage of a memory. •Dopaminergic Control of Reinforcement -The neurotransmitter dopamine (DA) plays a significant role in regulating the behavioral effects of reinforcement. -DA governs the activity of neurons that connect the VTA to the NA, septum, and prefrontal cortex. •May play an important role in mediating the effect of reinforcement on behavior. -One line of evidence of DA influence is the powerful reinforcing properties of amphetamine and cocaine, which increase the level of activity at dopaminergic receptors. -Animals quickly learn behaviors to self-administer amphetamine and cocaine. -Natural reinforcers (e.g., food and water) also trigger dopamine release, as does MFB stimulation. •Opiate Activation of the Tegmentostriatal Pathway -Animals learn to self-administer opiate drugs such as heroin and morphine. -Opiate drugs do not activate DA receptors. -Both DA and opiate receptors produce activation in the NA. 
-Tegmentostriatal pathways contain both DA and opiate receptors •Individual differences in the preference for various reinforcers (e.g., gambling behavior) may reflect differences in functioning of reinforcement systems. •Motivational and reinforcing effects of reinforcers are positively correlated with the level of activity in the mesolimbic reinforcement system. The Neuroscience of Addiction •DeSousa and Vaccarino (2001) suggested that the motivational and reinforcing effects of reinforcement are positively correlated with the level of activity in the mesolimbic reinforcement system. •Sills, Onalaja, and Crawley (1998) found greater dopamine release into the nucleus accumbens when high sucrose feeders (HSFs) were given access to sucrose than when low sucrose feeders (LSFs) were given access to sucrose. One interpretation of these results is that HSFs find sucrose more reinforcing than do LSFs (and thus consume more sucrose) because their mesolimbic reinforcement systems are more responsive to sucrose. •High dopamine activity in the mesolimbic reinforcement system leads to compulsive behaviors. In support of this view, dopaminergic agonists, or drugs that increase dopamine activity, have been shown to produce compulsive gambling in individuals being treated for Parkinson's disease (Fernandez & Gonzalez, 2009; Hinnell, Hulse, Martin, & Samuel, 2011) and restless legs syndrome (Kolla, Mansukhani, Barraza, & Bostwick, 2010). •Other compulsive behaviors produced by dopamine agonists in Parkinson's patients include hypersexuality (Fernandez & Gonzalez, 2009; Hinnell et al., 2011) and compulsive buying (O'Sullivan et al., 2011). •A similar increase in hypersexuality was found following dopaminergic medications for restless legs syndrome (Driver-Dunckley et al., 2007; Pinder, 2007). •It appears that reinforcements that produce high levels of dopamine activity in the nucleus accumbens are associated with compulsive behavior.
•High levels of dopamine activity in the nucleus accumbens are associated with the compulsive behaviors characteristic of addiction.

Evaluation of Lorenz

Evaluation of Lorenz-Tinbergen Model •It is important to distinguish the general theory of the instinctive approach from the specific aspects of the hypothetical energy system •Lorenz believed that conditioning enhanced a species' adaptation to its environment -He argued that the ability to learn is programmed into each species' genetic structure -However, he did not detail the mechanism responsible for translating learning into behavior •The instinctive approach has facilitated our understanding of a wide range of behaviors. -The energy system is a hypothetical construct to conceptualize the processes motivating instinctive behavior. -Energy does not appear to accumulate in brain systems nor does it flow from one system to another. -Brain systems do communicate and interact but not in the way that Lorenz and Tinbergen's model indicates.

a mental representation of events

Expectation: A mental representation of event contingencies An internal or mental representation of an experience develops when an animal or person experiences an event. This representation contains information about •The relations among previously experienced events •The relations between behavior and the consequences of this behavior Types of Mental Representations There are two types of expectations •The knowledge contained in each type differs •The first type is an associative-link expectation. -Associative-link expectation: A representation containing knowledge of the contiguity of two events -Establishes a link between the mental representations created by two events, which allows the experience of one event to excite or inhibit the mental representation of the other event. •The second type of expectation is a behavior-reinforcer belief -Behavior-reinforcer belief: A type of declarative knowledge of the consequences of a specific action -Represented in propositional form as "Action A causes Reinforcer B" •Associative-link expectations are associated with Pavlovian learning. •Behavior-reinforcer belief expectations are associated with instrumental learning. Associative-Link Expectations •Irrelevant incentive effect: The acquisition of an excitatory link expectation that a particular stimulus is associated with a specific reinforcer under an irrelevant drive state •According to Dickinson, an excitatory link association develops during training between the contextual cues and the irrelevant stimulus Behavior-Reinforcement Beliefs •Reinforcer devaluation effect: The association of a reinforcer with an aversive event, which reduces the behavioral control that the reinforcer exerts. •Animals develop a belief that specific behaviors yield specific reinforcers. •This expectation will control their response unless the reinforcer is no longer valued. 
The Neuroscience of Behavior-Reinforcement Beliefs •The basolateral region of the amygdala is related to the formation of behavior-outcome representations. •This area of the brain also appears to be important in the association of foods with environmental events. The Importance of Habits •Some theorists maintain that stimulus-response associations, not expectations, form as the result of experience and that concrete events, not subjective representations, motivate behavior. •Dickinson states that both habits and expectations control behavior -Suggested that with continued training, habits, rather than expectations, come to govern behavior Behavioral autonomy: A situation in which responding is controlled by habit rather than by expectation. The Neuroscience of Stimulus-Response Habits •Operant behavior is initially under the control of behavior-reinforcer beliefs. •With more training, S-R habits develop and behavior can be controlled by habits rather than expectations. •The nigrostriatal pathway plays a role in the memory storage of reinforcing events. •Memory of a reinforcing event is significantly enhanced when an animal is exposed to a positive reinforcer for a short time following training •Habit or Expectation? -Some behaviors may only be governed by habit, not expectation -If so, such behaviors may be immune to devaluation -One such behavior appears to be responding for alcohol •This may partially explain the difficulty of treating alcohol abuse in humans
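The reinforcer devaluation logic described above can be sketched in a few lines: behavior controlled by a behavior-reinforcer belief tracks the reinforcer's current value, while an S-R habit does not, which is why habitual behavior can appear immune to devaluation. The functions and numeric values are illustrative assumptions.

```python
# Sketch of devaluation sensitivity: belief-driven vs. habitual responding.
# Function names and values are hypothetical, chosen for illustration.

def respond_via_belief(reinforcer_value):
    """Expectation-based responding tracks the reinforcer's current value."""
    return reinforcer_value

def respond_via_habit(sr_strength, reinforcer_value):
    """Habitual responding is driven by S-R strength, insensitive to value."""
    return sr_strength

value_before, value_after = 1.0, 0.0   # reinforcer devalued (e.g., paired with illness)
habit_strength = 0.8

# Devaluation suppresses belief-driven behavior but spares the habit
assert respond_via_belief(value_after) < respond_via_belief(value_before)
assert respond_via_habit(habit_strength, value_after) == \
       respond_via_habit(habit_strength, value_before)
```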

John dewey

Father of functionalism •Suggested that reflexive behaviors of lower animals had been replaced in humans by the mind (i.e., consciousness that is controlled by the brain) -In humans, the mind had evolved as the primary mechanism for survival -The mind enables individual organisms to adapt to their environment •The main idea of Dewey's functionalism was that the manner of human survival differs from that of lower animals. Thus, the more complex the organism, the more complex the adaptive behavior that develops in response to the complexity of a given environment.

The conditioning of fear

Fear is conditioned when a novel stimulus (CS) is associated with an aversive event •The Motivational Properties of Conditioned Fear -Fear also motivates an escape response to aversive events. -Neal Miller (1948) demonstrated the motivational properties of fear using an escape task in which rats had received shock in a distinctively colored chamber. Miller found that half of the rats learned to search the environment, find the wheel, and turn it to escape the aversive chamber, while the other half did not. -The environment can become associated with an unconditioned aversive event (shock) and can thereby acquire motivational properties. -The basolateral amygdala plays an important role in both Pavlovian conditioned hunger and fear; the lateral and central amygdala, specifically, appear to regulate important fear-conditioned behavioral responses. -Duvarci, Popa, and Pare (2011) showed that increased activity in the central amygdala occurs during fear conditioning in rats. -Watabe et al. (2013) reported that changes in central amygdala activity occurred following fear conditioning in mice. -Dalzell et al. (2011) observed that neural changes in the lateral amygdala occurred following fear conditioning. -Lines bred for high versus low fear behavior show distinct differences in the number of neurons within the lateral amygdala (Coyner et al., 2014). -Damage to the lateral and central amygdala impairs fear conditioning when pairing an auditory stimulus with an aversive foot shock (Nader et al., 2001), and lesions to the lateral amygdala decreased the conditioned fear response to the context associated with a predator odor.
•Other Examples of Conditioned Responses -There are many examples of conditioned responses •Developing nausea when you see a food that previously caused illness •Becoming thirsty at a ball game where you usually have soft drinks -In most cases, several responses are conditioned during CS-UCS pairings

Temporal Relationships Between the CS and the UCS

Five different paradigms have been used in conditioning studies Conditioning paradigms represent various ways to pair a CS with a UCS All paradigms are not equally effective Delayed Conditioning •CS onset precedes UCS onset •Termination of the CS occurs with the onset of the UCS or during UCS presentation •That is, there is no gap between CS offset and UCS onset Trace Conditioning •CS is presented and terminated prior to UCS onset •The time between CS offset and UCS onset is called the trace interval Simultaneous Conditioning •The CS and UCS are presented at the same time •Produces only weak conditioning •Effectiveness is minimal because the predictive quality of the CS-UCS relationship is missing Backward Conditioning •UCS is presented and terminated prior to the CS •May produce inhibitory conditioning instead of excitatory conditioning Temporal Conditioning •No distinctive CS •UCS is presented at regular intervals •The passage of time serves as the CS •To determine whether conditioning has occurred, the UCS is omitted and the strength of the CR is assessed •The five paradigms are not equally effective -Delayed conditioning is the most effective -Its effectiveness depends on the length of the CS-UCS interval -Backward conditioning is the least effective -The others are intermediate
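The definitions above can be captured in a small helper that classifies a CS-UCS temporal arrangement into one of the five paradigms. The function, its rule ordering, and the arbitrary time units are an illustrative sketch, not a standard API.

```python
# Classify the CS-UCS temporal arrangement into the paradigms described
# above. Times are onsets/offsets in arbitrary units; cs_on=None models
# temporal conditioning, where there is no distinctive CS.

def classify_paradigm(cs_on, cs_off, ucs_on):
    if cs_on is None:
        return "temporal"        # no distinctive CS; the passage of time is the CS
    if ucs_on < cs_on:
        return "backward"        # UCS is presented and terminated before the CS
    if cs_on == ucs_on:
        return "simultaneous"    # CS and UCS begin together
    if cs_off < ucs_on:
        return "trace"           # a trace interval separates CS offset and UCS onset
    return "delayed"             # CS onset precedes UCS onset, with no gap

assert classify_paradigm(0, 5, 5) == "delayed"       # CS ends as the UCS begins
assert classify_paradigm(0, 3, 5) == "trace"         # 2-unit trace interval
assert classify_paradigm(0, 5, 0) == "simultaneous"
assert classify_paradigm(5, 8, 2) == "backward"
assert classify_paradigm(None, None, 0) == "temporal"
```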

Guthrie's Contiguity View

Guthrie proposed that contiguity, not reward, was sufficient to establish an S-R connection. •He believed that learning is a simple process governed entirely by contiguity The Impact of Reward •Guthrie did not believe that reward strengthened the S-R bond. •He proposed that many responses can become conditioned to a stimulus •The response exhibited just prior to reward will be associated with the stimulus •This is the response that will occur when the stimulus is experienced again. •Reward acts to change the stimulus context that was present prior to reward. •Thus, instead of strengthening the S-R bond, reward functions to prevent further conditioning. -The last response conditioned to the stimulus will be associated with the reward, and further conditioning is unnecessary. •Role of Contiguity -Guthrie also proposed that a reward should be presented immediately after the appropriate response. -If the reward is delayed, actions that occur between the appropriate (i.e., desired) response and the reward will be exhibited when the stimulus is encountered again. -Thus, poor contiguity can result in accidentally rewarding unwanted behavior. The Importance of Practice •Guthrie proposed that learning is not gradual but occurs in a single trial. -The strength of an S-R association is at maximum value after a single pairing of the stimulus and response. •He did acknowledge that behavior improves in strength and efficiency with experience. •According to Guthrie, performance gradually improves for three reasons: -Subjects attend to only some of the stimuli present during conditioning. Because many stimuli are present on any given trial and the stimuli attended to vary from trial to trial, behavior changes that occur from trial to trial reflect attentional processes, not learning. -Many stimuli can become conditioned to produce a particular response. 
As more stimuli begin to elicit a response, the strength of the response increases. The increase in strength is not due to stronger S-R connection but to the increased number of stimuli that can produce the response. -Complex behavior consists of many separate responses. For the behavior to be efficient, each response element must be conditioned to the stimulus. As each response element is conditioned to the stimulus, the efficiency of the behavior will improve. Breaking a Habit •Guthrie believed that old habits could not be "forgotten," only replaced by new habits. There are three ways to change old habits: -fatigue method: stimulus eliciting the response is presented so often that the person is too fatigued to perform the old habit. -threshold method: stimulus presented at an intensity below threshold, so the habit is not elicited. The intensity is increased so gradually that the habit does not appear. -incompatible method: person is placed in a situation where the old habit cannot occur.

overshadowing

In a compound conditioning situation, the prevention of conditioning to one stimulus due to the presence of a more salient or intense stimulus Overshadowing was originally observed by Pavlov Overshadowing does not always occur when two cues of different salience are paired with a UCS •In some circumstances the presence of a salient cue produces a stronger CR to the less salient cue than would have occurred had the less salient cue been paired alone with the UCS Potentiation: the enhancement of an aversion to a nonsalient stimulus when a salient stimulus is also paired with the UCS According to Rescorla, potentiation occurs because an animal perceives the compound stimuli as a single unitary event and then mistakes each individual element for the compound •Potentiation of a Conditioned Response Other research does not support Rescorla's explanation, so the cause of potentiation is unclear CS preexposure effect: the presentation of a CS prior to conditioning impairs the acquisition of the conditioned response once the CS is paired with the UCS The CS preexposure effect presents a problem for the Rescorla-Wagner model because the model assumes that the readiness of a stimulus to be associated with a UCS depends only on the intensity and salience of the CS •The parameter K represents these values in the model Neither the intensity nor the salience of the CS is changed as a result of CS preexposure To explain the CS preexposure effect, the Rescorla-Wagner model must be modified to allow for a change in the value of K as the result of experience Mackintosh argues that a stimulus is irrelevant if it does not predict a significant event •Stimulus irrelevance causes the animal to ignore that stimulus in the future
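Overshadowing falls directly out of the Rescorla-Wagner model, which can be sketched numerically. In the sketch below, K is split into the conventional alpha (CS salience) and beta (UCS learning rate); all parameter values are arbitrary illustrations. Because every CS in the compound shares one prediction error, the salient cue absorbs most of the associative strength:

```python
def rescorla_wagner(alphas, beta=0.2, lam=1.0, trials=50):
    """Train a compound of CSs; alphas holds each CS's salience.

    On each trial every CS shares the same prediction error
    (lam - total V), so a salient CS soaks up associative strength
    at the expense of a weak one.
    """
    V = [0.0] * len(alphas)
    for _ in range(trials):
        error = lam - sum(V)                 # shared prediction error
        for i, alpha in enumerate(alphas):
            V[i] += alpha * beta * error
    return V

salient, weak = rescorla_wagner([0.5, 0.1])  # compound training
(weak_alone,) = rescorla_wagner([0.1])       # weak cue trained by itself
# In compound, the weak cue ends with far less associative strength
# than when trained alone: overshadowing by the more salient cue.
```

Note what this sketch cannot do: because alpha is fixed, CS preexposure (CS-alone trials, where lam = 0 and V = 0, so nothing changes) leaves later learning untouched, which is exactly the problem with the model that the passage above describes.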

behavioral contrast

In a two-choice discrimination task, the increase in responding to the SD that occurs as responding to the SΔ declines Local contrast: a change in behavior that occurs following a change in the reinforcement contingency; the change in behavior fades with extended training. Sustained contrast: a long-lasting change in responding due to an anticipated change in the reinforcement contingency. •Also called anticipatory contrast According to Williams (2002), anticipatory contrast occurs only when its effect on behavior is stronger than the prevailing reinforcement contingencies. The behavioral contrast phenomenon points to one problem with discrimination learning: it can have negative consequences. A decrease in a negative behavior in one setting can lead to its increase in another setting.

Flavor Preference Learning

In addition to learning aversions for specific flavors, a preference for specific flavors can also be learned Learned flavor preferences are similar to flavor aversions in some ways For example, •Flavor preferences can be learned rapidly •Flavor preferences can be learned over a delay as long as 60 min The Nature of Flavor Preference Learning •Flavor preference: a conditioned preference for a flavor acquired by association with positive nutritional consequences and/or sweetness •Flavor-sweetness preference: a flavor preference acquired by association with sweetness •Flavor-nutrient preference: a flavor preference acquired by association with positive nutritional consequences •Flavor-sweetness associations can develop when a non-sweet flavor is associated with a sweet flavor •Flavor-nutrient associations develop when a flavor is associated with foods that are nutrient dense The Neuroscience of Flavor Preference Learning •Sclafani, Touzani, and Bodnar (2011): activity in the dopamine neurons of the nucleus accumbens is central to the conditioning of flavor-sweetness and flavor-nutrient preferences. -Sweet flavors and high-density nutrients are unconditioned stimuli able to activate dopamine neurons in the nucleus accumbens as an unconditioned response -Bitter tastes paired with sweet or high-density flavors arouse dopamine neurons in the nucleus accumbens as a conditioned response •This view is supported by the finding that administration of chemicals that suppress dopamine activity in the nucleus accumbens (dopamine receptor antagonists) blocked the conditioning of both flavor-sweetness and flavor-nutrient associations

imprinting

Infant Love •Imprinting: the development of a social attachment to stimuli experienced during a sensitive period of development. •Lorenz (1952) found that infant birds form attachments to the first moving object they encounter. -Birds may imprint to inanimate objects as well as to animals of another species. •Certain characteristics influence the likelihood of imprinting. •Ducklings imprinted more readily to -A moving object rather than a stationary object -An object that makes "lifelike" rather than "gliding" movements -An object that vocalizes rather than remains silent -An object that emits short, rhythmic sounds rather than long, high-pitched sounds -Objects that measure about 10 cm in diameter •Imprinting is found in several species -Harlow (1971) studied this phenomenon in nonhuman primates with surrogate cloth mothers. -Ainsworth (1982) studied the effect of imprinting on human infants. •Imprinting can still occur after the sensitive developmental period when sufficient experience is given. •The sensitive period for attachment differs among species. Nature of Imprinting •Moltz (1960, 1963) proposed that Pavlovian and operant conditioning are responsible for social imprinting. •Associative Learning View of Imprinting -The idea that attraction to the imprinting object develops because of its arousal-producing properties •Before the fear system develops, chicks orient to large, familiar and unfamiliar objects (e.g., the mother) with low levels of arousal. •When the chick's fear system does develop, unfamiliar objects elicit high arousal. •Familiar objects produce low arousal and elicit relief. •Thus, the mother's presence elicits relief and reduces fear. -Fear reduction may be associated with young human and nonhuman primate attachment to "mother." -Harlow (1971) used inanimate surrogate mothers of different forms (both wire and cloth) for nonhuman primate infants. •Primates clung to the cloth surrogate in the presence of a dangerous object, but fled when only the wire surrogate was present. 
-The research of Ainsworth and colleagues: •Secure relationship: the establishment of a strong bond between an infant and a mother who is sensitive and responsive to her infant •Anxious relationship: the relationship between a mother and her infant when the mother is indifferent to or rejects the infant •An Instinctive View of Imprinting -The view that imprinting is a genetically programmed form of learning -Kovach and Hess (1963) found that chicks still approached the imprinted object despite the administration of electric shock. -Punishment may not inhibit imprinting. -Primate infants clung to abusive "monster mothers" even though the infants were abused by them (Harlow, 1971). The Neuroscience of Social Attachments •Social attachments must inhibit fears as well as motivate attachment-related behaviors. •Tottenham, Shapiro, Telzer, and Humphreys (2012) observed that activation of the dorsal amygdala was associated with maternal approach behaviors in human children and adolescents. •Coria-Avila and colleagues (2014) reported that the neural circuit that begins in the dorsal amygdala and ends in the nucleus accumbens is able to motivate social attachment behaviors. These researchers also reported that maternal attachment behavior was associated with increased dopamine activity in the nucleus accumbens. •Strathearn (2011) reported that this dopamine pathway functions well in secure maternal relationships, while activity in this pathway is reduced in anxious maternal relationships.

sometimes opponent process

Introduced by Wagner as an extension of opponent-process theory to explain why the CR is sometimes similar to the UCR and sometimes different from it SOP theory proposes that the UCS elicits two responses (or two components of the same response) •A primary A1 response •A secondary A2 response The A1 component is elicited rapidly by the UCS and decays quickly when the UCS ends The Conditioning of the A2 Response •Both the onset and decay of the A2 component are very gradual •The nature of the A2 response is important -A2 can be the same as A1 or it can be different -A key aspect is that conditioning occurs only to the A2 component •That is, the CR is always the secondary A2 reaction •A2 can be observed in the example of hypoalgesia, or decreased sensitivity to pain •The CR and UCR will appear to be the same when A1 and A2 are the same •However, because the CR is always the secondary A2 component, when A1 and A2 are different, the CR and UCR will look different -They are not really different processes; they simply represent the primary and secondary components of the response to the UCS •Sometimes-Opponent-Process Theory •SOP theory accounts for conditioned emotional responding (CER) by assuming that A1 and A2 are different (opponent) processes -The initial response (A1) is agitated hyperactivity -The secondary response (A2) is "freezing," the CER response •Conditioned emotional response: the ability of a CS to elicit emotional reactions as a result of the association of the CS with a painful event •The conditioned emotional reaction is an example of the A1 (increased reactivity) and A2 (long-lasting hypoactivity) components being different Backward Conditioning of an Excitatory Conditioned Response •SOP theory also explains backward conditioning -Backward conditioning can yield an excitatory response if the CS is presented just prior to the peak of the A2 process Problems With SOP Theory •Divergent results have been obtained from different measures of 
conditioning •Inconsistency of results has been hard to explain Affective Extension of SOP (AESOP) •Developed by Wagner and Brandon to explain the inconsistencies that SOP could not explain •It is based on the idea that there are two distinct UCR sequences -A sensory sequence -An emotive sequence •The sensory and emotive attributes of an unconditioned stimulus activate separate response sequences •The latency of the sensory and emotive activity sequences can also differ •This leads to different optimal CS-UCS intervals for the emotive and sensory components •There are several important aspects of AESOP -A CS may activate a strong sensory CR but only a weak emotive CR (or vice versa) •This can explain the lack of correspondence between response measures of conditioning •A sensory A2 neural activity may elicit a discrete response while the emotive A2 neural activity may produce a diffuse reaction •Finally, two unconditioned stimuli might activate the same emotive A2 activity but different sensory A2 activities -This would lead to both similarities and differences in the responses that separate UCSs condition
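The relative time courses of the A1 and A2 components described above can be illustrated with a toy simulation. This is only a qualitative sketch with made-up rate constants, not Wagner's formal SOP model: A1 rises and decays rapidly with the UCS, while A2, fed by A1, rises and decays gradually.

```python
def sop_sketch(ucs_duration=1.0, dt=0.1, steps=200):
    """Toy A1/A2 dynamics: a fast primary process feeding a slow secondary one."""
    a1 = a2 = 0.0
    history = []
    for step in range(steps):
        t = step * dt
        ucs = 1.0 if t < ucs_duration else 0.0   # UCS on at the start
        a1 += dt * (5.0 * ucs - 5.0 * a1)        # fast rise, fast decay
        a2 += dt * (0.5 * a1 - 0.2 * a2)         # gradual rise, gradual decay
        history.append((t, a1, a2))
    return history

trace = sop_sketch()
t_a1_peak = max(trace, key=lambda p: p[1])[0]
t_a2_peak = max(trace, key=lambda p: p[2])[0]
# A2 peaks later than A1 and outlasts it, consistent with the gradual
# onset and decay of the secondary component.
```

On this picture, a backward-paired CS that happens to fall just before the A2 peak overlaps the still-rising secondary process, which is why SOP allows backward conditioning to be excitatory.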

transposition effect

Kohler's idea that animals learn relationships between stimuli and respond to new stimuli on the basis of the same relationship as the original training stimuli. •There is evidence to support both the Hull-Spence model and Kohler's model. -Animals often respond to relative, rather than absolute, qualities of the stimuli. -In other cases, they respond to the stimulus's absolute value. •Schwartz and Reisberg (1991) suggest both approaches are adaptive. Inhibitory generalization gradient: the gradient of inhibition that forms around SΔ. According to the Hull-Spence model, at points on the continuum far enough below the SD, the inhibitory generalization gradient no longer affects excitatory responding. •According to the Hull-Spence model, at this point on the gradient, the response to the SD should be greater than the response occurring at the lower-wavelength test stimuli. •The relational view suggests that the stimulus at the lower value will always produce a greater response than the SD does. Thus, some results support the Hull-Spence model while others support Kohler's relational view. The relational view is supported on a choice test (between two stimuli). The Hull-Spence approach is supported on a generalization test when subjects respond to one stimulus. What if stimuli that have multiple dimensions are tested?
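The Hull-Spence interaction of excitatory and inhibitory gradients can be sketched numerically. Subtracting a Gaussian inhibitory gradient centered on SΔ from an excitatory gradient centered on SD yields a net gradient whose peak is displaced away from SΔ (peak shift), so some test stimuli below the SD produce more responding than the SD itself. All numbers below (wavelengths, widths, weights) are arbitrary illustrations:

```python
import math

def net_response(x, sd=550, sdelta=570, exc_w=1.0, inh_w=0.6, width=20.0):
    """Excitatory gradient around SD minus inhibitory gradient around S-delta."""
    excite = exc_w * math.exp(-((x - sd) ** 2) / (2 * width ** 2))
    inhibit = inh_w * math.exp(-((x - sdelta) ** 2) / (2 * width ** 2))
    return excite - inhibit

# Scan test wavelengths; the peak of net responding lies below the SD
# (550), shifted away from the S-delta (570).
peak = max(range(450, 651), key=net_response)
```

This is the sense in which the Hull-Spence model can mimic relational responding without relational learning: simple summation of the two gradients makes a stimulus other than the SD the strongest elicitor.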

mackintosh attentional view

Learned Irrelevance and Pavlovian Conditioning •Mackintosh's attentional view: the idea that animals attend to stimuli that are predictive of biologically significant events (UCSs) and ignore stimuli that are irrelevant -Thus, conditioning depends not only on the physical characteristics of stimuli but also on the animal's recognition of the correlation (or lack of correlation) between events (CS and UCS) •Learned Irrelevance: the presentation of a stimulus without a UCS leads to the recognition that the stimulus is irrelevant, stops attention to that stimulus, and impairs conditioning when the stimulus is later paired with the UCS •Learned irrelevance is supported by studies that show that uncorrelated CS and UCS preexposure not only impairs excitatory conditioning but also impairs inhibitory conditioning Reinstatement of Attention and the Conditioned Stimulus Preexposure Effect •Hall et al. (1985, 1989) provided evidence for an attentional view of the CS preexposure effect. •Animals exposed to a novel stimulus exhibit an orienting response to the novel stimulus. •Hall and Channell (1985) showed that repeated exposure to light (CS) led to habituation of the orienting response to that stimulus. •They also found that later pairings of the light (CS) with milk (UCS) yielded a reduced CR compared with control animals who did not experience preexposure to the light CS.
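Mackintosh's core idea, that attention to an irrelevant CS wanes and then gates later learning, can be captured in a minimal sketch. The decay rule and every parameter value here are illustrative assumptions, not Mackintosh's formal model:

```python
def conditioned_strength(pre_trials, cond_trials=30, alpha=0.5,
                         beta=0.2, lam=1.0, decay=0.9):
    """Preexposing the CS alone shrinks alpha (attention); the smaller
    alpha then slows acquisition once the CS is paired with the UCS."""
    for _ in range(pre_trials):       # CS presented with no UCS
        alpha *= decay                # stimulus learned to be irrelevant
    V = 0.0
    for _ in range(cond_trials):      # CS-UCS pairings
        V += alpha * beta * (lam - V)
    return V

no_preexposure = conditioned_strength(0)
with_preexposure = conditioned_strength(20)
# Preexposure leaves a weaker CR after the same number of pairings:
# the CS preexposure (latent inhibition) effect.
```

Unlike the unmodified Rescorla-Wagner model, where CS-alone trials change nothing, letting attention itself be learned reproduces the preexposure deficit described above.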

instinctive basis of behavior

Lorenz and Tinbergen developed instinctive theory from years of observing animal behavior •They used animal models as examples of the adaptive nature of instinctive behavior •However, we should keep in mind that these principles apply to humans as well Lorenz-Tinbergen Model •Konrad Zacharias Lorenz (1903-1989) and Nikolaas "Niko" Tinbergen (1907-1988) earned the Nobel Prize in Physiology or Medicine in 1973 for their research on the instinctive basis of social behavior. Interaction of Internal Pressure and Environmental Release •Energy Model (Lorenz, 1950) -Action-specific energy: an internal force that motivates a specific action •Builds up "internal pressure" that motivates the animal to behave in a certain way. -Appetitive behavior: an instinctive or learned response motivated by action-specific energy. •Enables the individual to approach and contact a sign stimulus. -Sign stimulus: a distinctive environmental event that can activate the innate releasing mechanism (IRM) and release stored energy. •The presence of the sign stimulus releases the accumulated energy. -Fixed action pattern (FAP): an instinctive response that is triggered by the presence of an effective sign stimulus. -An internal block exists for each FAP, preventing it from occurring until the appropriate time. -The sign stimulus removes the block by stimulating an internal releasing mechanism. -Innate releasing mechanism (IRM): a hypothetical process by which a sign stimulus removes the block on the release of the fixed action pattern (Lorenz & Tinbergen, 1938) -In some situations, a chain of fixed action patterns occurs. •There is a block for each FAP, and the appropriate releasing mechanism must be activated for each behavior. 
•Environmental Release -Sign stimuli are environmental stimuli •Some are simple •Others are quite complex -The likelihood of eliciting a FAP depends on •The accumulated level of action-specific energy •The intensity of the sign stimulus -The relationship is inverse •The greater the level of accumulated energy, the weaker the sign stimulus that can release the FAP. -Sensitivity to sign stimuli also changes as a function of time •The more time that has elapsed since the previous FAP, the more sensitive the organism will be to the sign stimulus.

Evaluation of Drive Theory

Many of Hull's ideas do accurately reflect important aspects of human behavior: •Intense arousal can motivate behavior •Environmental stimuli can develop the ability to produce arousal, thereby motivating behavior •The value of reward influences the intensity of instrumental behavior Nevertheless, there are several problems with the theory •The concept of reward as drive reduction proved to be inaccurate -The rewarding properties of brain self-stimulation are inconsistent with drive-reduction interpretations -Sheffield argued for drive induction, rather than drive reduction -Hull did not specify a mechanism for drive reduction

Spence's Acquired Motive Approach

Motivation and the Anticipation of Reward •Spence used Hull's concept of K to explain behavior. -Rewards obtained in a goal environment elicit an unconditioned goal response (RG). -This produces a stimulus state (SG) that motivates a person to act. -SG is similar to Hull's drive state in that both represent an internal arousal that motivates behavior. -The reward value determines the intensity of the goal response--the greater the reward magnitude, the stronger the goal response. •During the first few experiences, the environmental cues present during reward become associated with reward. •They produce a conditioned, or anticipatory, goal response (rG). •The anticipatory goal response causes internal stimulus changes (sG) that motivate approach behavior. •Reward magnitude during conditioning determines the maximum level of response. •Since a large reward creates a more intense RG than a small reward, cues associated with a large reward produce a stronger rG than cues associated with a small reward. •Spence's ideas were consistent with basic classical conditioning principles: -The strength of a CR depends on the intensity of the UCS -The stronger the UCS, the greater the CR Motivation and the Avoidance of Frustrating Events •Hull suggested that by inhibiting habitual behavior, non-reward allows the strengthening of other behaviors. -This view did not completely explain the influence of non-reward •Amsel (1958) proposed that frustration both motivates avoidance behavior and suppresses approach behavior •Amsel suggested that the frustration state differs from the goal response. •The frustration response has motivational properties: -Its stimulus aftereffects (SF) energize escape behavior -Cues present during frustration become conditioned to produce an anticipatory frustration response (rF) -This produces internal stimuli (sF) that motivate the organism to avoid the frustrating situation. 
Nature of Anticipatory Behavior •Rescorla and Solomon suggested that RG should be thought of as a central, not peripheral, nervous system response •This CNS response is classically conditioned and its strength is determined by reward magnitude •Thus, conditioned activity in the central nervous system motivates behavior.

addictive process

Opponent process theory offers an explanation for the development of addiction. Addictive behavior is a coping response to an aversive opponent B state. Addictive behavior is an example of behavior motivated to terminate (or prevent) the unpleasant withdrawal state. Search for pleasure •Some people expose themselves to an aversive A state in order to experience the pleasant opponent B state.

Is the CR just the UCR elicited by the CS? Or is the CR a behavior that is distinctively different from the UCR?

Originally suggested by Pavlov Stimulus-substitution theory: States that the pairing of the CS and the UCS enables the CS to later elicit the UCR, but as the CR Assumptions of Stimulus-Substitution Theory •There is a direct connection between the brain center related to the UCS and the brain center that controls the UCR •Presentation of the UCS activates a related brain center that generates the UCR -An innate, direct connection exists between the UCS brain center and the brain center controlling the UCR -The neural connection allows the UCS to elicit the UCR •The CS also stimulates a distinct area of the brain •When the UCS follows the CS, the brain centers associated with the CS and UCS are active at the same time •The simultaneous activity in two neural centers leads to a new functional neural pathway between the active centers

negative consequences of punishment

Pain-Induced Aggression •Punishment often inflicts pain, which leads to aggressive behavior •Aggressive behavior is not motivated by expectation of avoiding punishment -It reflects an impulsive act energized by emotional arousal characteristic of anger •The prefrontal cortex provides executive control of the amygdala, the area of the brain eliciting anger and aggression. •Punishment does not elicit aggression if individuals have -Been reinforced for nonaggressive reactions to aversion -Been punished for aggressive reactions to painful events -Both The Modeling of Aggression •Inflicting punishment often models aggressive behavior, which children and other recipients of punishment may later imitate •Modeling: behaviors are learned not by receiving explicit reinforcement but by observing another person's actions •Classic modeling experiment: Bandura et al.'s (1963) Bobo doll experiment •Two sources of evidence suggest that children who are physically punished model aggressive behavior -Physically punished children use the same method of punishment when trying to control the behavior of other children -Correlational studies report a strong relationship between parental use of punishment and the level of aggression in the child The Aversive Quality of a Punisher •The recipient of punishment may come to fear the punisher as well as (or instead of) the punishment •This effect can be reduced if the "punisher" also uses reinforcement for appropriate behavior Additional Negative Consequences of Punishment •There are two additional negative consequences of punishment -The suppressive effects of punishment may generalize to similar behaviors, and the inhibition of these behaviors may be undesirable -Recipient of punishment may 
not recognize the contingency between punishment and the undesirable behavior. The aversive events may be perceived as independent of behavior •Given the potential negative consequences of punishment, why do we continue to use it? •We may continue to use punishment because we see punishment modeled as a means of behavior change -Although modeling can be a negative consequence of punishment, it can also show punishment being effective •There are situations in which differential reinforcement cannot be easily applied but punishment can be readily employed

cognitive view of phobic behavior

Phobic fears are unrealistic fears that interfere with an individual's life. Phobic responses may result from unpleasant experiences, stimulus generalization, or higher order conditioning. Phobias and Expectations •Outcome expectations: The perceived consequences of either a behavior or an event •Stimulus-outcome expectation: The belief that a particular consequence or outcome will follow a specific stimulus •Response-outcome expectation: The belief that a particular response leads to a specific consequence or outcome •Efficacy expectation: A feeling that one can or cannot execute a particular behavior Self-Efficacy and Phobic Behavior •Bandura and Adams (1977) -Snake phobic clients were given systematic desensitization therapy. -Even when patients no longer became emotionally disturbed by an imagined aversive event, differences existed in their ability to approach a snake. -The greater the perceived self-efficacy, the more phobic behavior was inhibited. The Importance of Our Experiences •We use several types of information to establish an efficacy expectation. •Successful experiences increase our expectations of mastery. •Failures decrease our sense of self-efficacy. •Our sense of self-efficacy is also influenced by the successes and failures of other people whom we perceive as similar to ourselves. •Emotional arousal also influences our sense of competence. -We feel less able to cope with an aversive event when we are agitated or tense. Application: A Modeling Treatment for Phobias •Modeling: The acquisition of behavior as a result of observing the experience of others. •In a modeling treatment for phobias, clients see the model move closer and closer to the feared object until the model encounters it. •The success of modeling therapy must be attributed to the vicarious modifications of a client's expectations. •The effectiveness of modeling appears to be long-lasting. 
-Some reports indicate that 90 percent of patients maintained the improved behavior for up to four years. An Alternative View •Some psychologists have continued to advocate a drive-based view of avoidance behavior. -They maintain that cognitions are elicited by anxiety, but they do not affect the occurrence of avoidance responses. •Some psychologists suggest that there are three types of anxiety: -Cognitive: refers to the effect of anxiety on self-efficacy -Physiological: affects the physiological state -Behavioral: directly influences behavior •The relative contribution of each type of anxiety depends on the person's learning history and type of situation.

types of reinforcement

Primary reinforcer: An activity whose reinforcing properties are innate; a biologically relevant reinforcer Secondary reinforcer: An event that has developed its reinforcing properties through its association with primary reinforcers Several variables affect the strength of secondary reinforcers •The magnitude of the primary reinforcer •The greater the number of primary-secondary pairings, the stronger the reinforcing power of the secondary reinforcer •The time elapsed between the presentation of the secondary reinforcer and the primary reinforcer affects the strength of the secondary reinforcer Positive reinforcer: Event added to the environment that increases the frequency of the behavior that produces it Negative reinforcer: The termination of an aversive event, which reinforces the behavior that terminated the aversive event Negative reinforcement versus punishment •Negative reinforcement is not the same thing as punishment. •Negative reinforcement occurs when a specific behavior removes an unpleasant event. -The termination of an aversive event reinforces the behavior that preceded it. •Punishment is the application of an unpleasant event contingent on the occurrence of a specific behavior. -The application of an unpleasant event suppresses the behavior

Legacy of BF Skinner

Principles of Appetitive Conditioning •Skinner's contribution •Contingency: The specified relationship between behavior and reinforcement -The environment determines the contingencies -Reinforcer: An event (or termination of an event) that increases the probability of the behavior that it follows. •Defined merely by its effect on future behavior Distinguishing Between Instrumental and Operant Conditioning •Instrumental conditioning: A conditioning procedure in which the environment constrains the opportunity for reward •Operant conditioning: When a specific response produces reinforcement, and the frequency of the response determines the amount of reinforcement obtained •In operant conditioning, there is no constraint on the amount of reinforcement the subject can obtain -Evidence of learning is the frequency and consistency of responding •Operant chamber -An enclosed environment with a bar on the side wall as well as a food or liquid dispenser, used for the study of operant behavior within it

stimulus generalization process

Responding in the same manner to similar stimuli. Discrimination Learning: Responding in different ways to different stimuli; responding to some stimuli but not others Stimulus generalization occurs frequently in the real world Sometimes it is undesirable (e.g., racial, ethnic, religious prejudice). However, generalization is often adaptive. •For example, if a parent reads a book to a child and the child likes it, a positive emotional experience is conditioned to the book. Generalization causes us to respond in basically the same way to stimuli similar to the stimulus in past experience. Generalization Gradients •A visual representation of the response strength produced by stimuli of varying degrees of similarity to the training stimulus. •Generalization gradients can depict either -Generalization of excitatory conditioning (S+): S+ is presented with test stimuli ranging from similar to dissimilar to S+ -Generalization of inhibitory conditioning (S-): S- is presented with test stimuli ranging from similar to dissimilar to S- •Excitatory Generalization Gradients -Most studies of generalization gradients have investigated the generalization of excitatory conditioning. -Excitatory generalization gradients: A graph showing the level of generalization from an excitatory conditioned stimulus (S+) to other stimuli. -Many studies of these gradients employ pigeons, which have excellent color vision. -Guttman and Kalish (1956) trained pigeons to peck illuminated keys for food. •Used different colored disks; pigeons responded to the disk color associated with food in training phase. 
•Gradient was similar regardless of the training stimulus -Degree of generalization can be determined from the shape of the gradient •Flat gradient indicates similar responding to all stimuli •Steep gradient indicates different responding to different stimuli -Although in many circumstances the individual will respond only to stimuli similar to the conditioning stimulus, in other situations, animals or people may generalize to stimuli both very similar and very different from the conditioning stimulus -As is true of excitatory generalization, in certain circumstances, inhibition generalizes to stimuli quite different from the training stimulus. •Inhibitory Generalization Gradients -Weisman and Palmer (1969) illustrate the inhibitory-conditioning generalization gradient. •Pigeons were trained to peck at a green disk (S+) to receive reinforcement on a VI-1 minute schedule. •When a white vertical line (S-) was presented, pigeons were not reinforced for disk pecking •When presented with a series of white lines at orientations ranging from 180° to 290°, the lines inhibited responding, with the amount of generalization depending on the degree of similarity to the S- •The greater the deviation from the S-, the less inhibition occurred The Nature of the Stimulus Generalization Process •Lashley-Wade theory of generalization: Suggests that animals and people respond to stimuli that differ from the training stimulus because they are unable to distinguish between the generalization test stimulus and the conditioning stimulus. -Thus, an inability to discriminate between training and test stimuli is responsible for stimulus generalization, and the ability to discriminate prevents generalization. •Evidence supporting the Lashley-Wade view includes: -Generalization to stimuli dissimilar to the training stimulus occurs when nondifferential reinforcement training is used -Discrimination training results in generalization only to stimuli very similar to the conditioning stimulus •The Stimulus Generalization Process -Generalization occurs when an animal cannot differentiate between the training stimulus and generalization test stimuli -Perceptual experience influences the amount of generalization
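The flat-versus-steep distinction above can be made concrete with a small numerical sketch. This is illustrative Python, not from any study; the decay function, wavelength values, and response numbers are invented:

```python
def gradient(test_stimuli, s_plus, decay):
    # Response strength declines with distance from the training
    # stimulus (S+); a larger decay value gives a steeper gradient
    return {s: round(100 * (1 - decay) ** abs(s - s_plus), 1)
            for s in test_stimuli}

wavelengths = [530, 540, 550, 560, 570]   # hypothetical nm values, S+ = 550
steep = gradient(wavelengths, 550, 0.15)  # discrimination-training-like
flat = gradient(wavelengths, 550, 0.02)   # nondifferential-training-like

print(steep[550], steep[530])
print(flat[550], flat[530])
```

Both gradients peak at S+, but responding to a dissimilar 530 nm stimulus survives only under the flat gradient, which is the pattern the Lashley-Wade account ties to nondifferential reinforcement training.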

applications of aversive conditioning

Response Prevention, or Flooding •A behavior therapy in which a phobia is eliminated by forced exposure to the feared stimulus without an aversive consequence •The human or animal is exposed to the conditioned fear stimulus without an aversive consequence •It differs from typical extinction procedures because the stimulus cannot be escaped -Otherwise the two procedures are identical Effectiveness of Flooding •Flooding appears to be: -Effective •Effectiveness increases with longer exposure to the stimulus •Long-lasting -Decreases in the anxiety response to the stimulus are still observed after 6+ months •Although effective, many people do not want to participate in flooding because the initial anxiety is so great The Neuroscience of Flooding •Exposure to the feared stimulus is not only aversive but arouses the sympathetic nervous system, leading to increased heart rate and respiration and the release of epinephrine from the adrenal medulla. •The feared stimulus also causes the release of cortisol from the adrenal cortex. •Individuals respond differently to a feared stimulus; the physiological response can vary from intense to mild (Klein & Thorne, 2007). •Siegmund and colleagues (2011) reported that the effectiveness of flooding was related to cortisol response; the greater the cortisol response to the feared stimulus, the more successful the flooding treatment.
Punishment •The use of punishment is widespread in our society •Examples include: -Spankings -Jail time -Military court-martials -Termination of employment for employee infractions •Positive Punishment -Positive punishment: •The use of a physically or psychologically painful event as the punisher •Punishment will be more effective if it is -Severe -Immediate -Consistent •Although positive punishment is generally effective, there is sometimes a problem with generalizing suppression of unwanted behaviors from the therapy situation to the real world -Spanking is the most widely used form of positive punishment •However, recognition of the negative effects of corporal punishment has led to a decrease in its use -For example, many states ban spankings in school Response Cost •Response cost: A negative punishment technique in which an undesired response results in either the withdrawal of or failure to obtain reinforcement •Response cost is a form of negative punishment -It refers to a penalty or fine contingent upon the occurrence of an undesired behavior -Response cost has also been found to increase desired behaviors while reducing undesired ones -Response cost has been used to successfully treat a wide range of behaviors including: •Self-mutilation •Smoking •Overeating •Tardiness •Aggressiveness •Time-Out from Reinforcement -Time-out from reinforcement •A negative punishment technique in which an inappropriate behavior leads to a period of time during which reinforcement is unavailable -If a time-out area is used, it must not be reinforcing -Time-out is effective in the laboratory and in real-life situations -Hierarchy of behavior change procedures •Alberto and Troutman (2006) describe a hierarchy of procedures used to suppress or eliminate undesired behaviors -Hierarchy ranges from most socially acceptable (Level I) to least socially acceptable (Level IV) •Level I: differential reinforcement
procedures •Level II: nonreinforcement procedures (e.g., extinction) •Level III: negative punishment procedures (e.g., response cost, time-out) •Level IV: positive punishment procedures (e.g., spanking)

discrimination learning

SD: A stimulus that indicates the availability of reinforcement contingent upon the occurrence of an appropriate operant response. SΔ: A stimulus that indicates that reinforcement is unavailable and that the operant response will be ineffective. Discriminative stimulus: A stimulus that signals the availability or unavailability of reinforcement. Discriminative operant: An operant behavior that is under the control of a discriminative stimulus. To interact effectively with our environment, we must learn to discriminate the conditions that indicate reinforcement availability from the conditions that do not. Discriminative control of behavior involves learning to respond to the SD and not to the SΔ •Involves activity of the prefrontal cortex and hippocampus •Discrimination learning involves discovering not only when reinforcement is available or unavailable, but also when aversive events may or may not occur. •SD and SΔ in discrimination learning are comparable to S+ and S- in generalization. -We are simply referring to different properties of the stimulus. The Neuroscience of Discrimination Learning •Discrimination learning involves learning to respond to the SD and not to the SΔ. Two areas of the brain, the prefrontal cortex and the hippocampus, play a significant role in discrimination learning. •Kosaki and Watanabe (2012) reported that positional discrimination learning (one lever associated with reinforcement and the other two were not) was impaired in rats following damage to the medial prefrontal cortex or the hippocampus. Impaired discrimination learning occurred because the animals with medial prefrontal cortical or hippocampal damage perseverated on a previously correct lever. •Sometimes, discrimination learning involves one stimulus as the SD and another as the SΔ. At other times, the same stimulus can be the SD in one situation and the SΔ in another. Two-Choice Discrimination Tasks •Two-choice discrimination learning: A task in which the SD and SΔ are on the same stimulus dimension.
-Responding to the SD produces reinforcement or punishment, and choosing the SΔ leads to neither reinforcement nor punishment. -Sometimes the two are presented simultaneously and sometimes sequentially. -When presented simultaneously, the subject must choose which stimulus to respond to. •Research shows that initially, subjects will respond to the SD and SΔ equally. •With continued training, responding to the SD increases and responding to the SΔ declines. Conditioned Discrimination Task •Conditioned discrimination: A situation in which the availability of reinforcement in the presence of a particular stimulus depends upon the presence of a second stimulus. -In some circumstances, a particular cue indicates that reinforcement is contingent on the occurrence of an appropriate response, whereas under other conditions, the cue does not signal reinforcement availability.

schedules of reinforcement

Schedules of reinforcement: A contingency that specifies how often or when one must make a required response to receive reinforcement. Ratio schedule of reinforcement: A contingency that specifies that a certain number of responses is necessary to produce reinforcement. Interval schedule of reinforcement: A contingency in which reinforcement becomes available a certain period of time after the last reinforcement, with the first response after the interval has elapsed being reinforced. Both ratio and interval schedules may be either fixed or variable •Fixed schedules have the same response requirement from trial to trial •Variable schedules have response requirements that change from trial to trial Thus, there are four basic (i.e., simple) reinforcement schedules •Fixed-ratio •Variable-ratio •Fixed-interval •Variable-interval Fixed-Ratio Schedules •Fixed-ratio schedule: A specific number of responses is needed to produce reinforcement -Produces a consistent response rate •Post-reinforcement pause: A pause in behavior following reinforcement on a ratio schedule, which is followed by resumption of responding at the intensity characteristic of that ratio schedule -The higher the number of responses needed to obtain reinforcement, the more likely a post-reinforcement pause -The higher the ratio schedule, the longer the pause -The greater the satiation, the longer the pause Variable-Ratio Schedules •Variable-ratio schedule: The number of responses required to produce reinforcement varies from trial to trial -The schedule designation reflects the average number of responses required for reinforcement over a block of trials •For example, if a block of five trials requires 5, 3, 7, 2, and 3 responses, the schedule is designated VR-4 -Note: None of the trials requires four responses, but the average is 4 -Produces a high and steady rate of responding -Post-reinforcement pauses occur only occasionally on variable-ratio schedules •Thus, the rate of responding is higher on VR than
FR schedules Fixed-Interval Schedules •Fixed-interval schedule: Reinforcement is available only after a specified period of time has passed and a response has been emitted -The first response after the interval has elapsed is reinforced •Scallop effect: The pattern of behavior characteristic of fixed-interval schedules -Responding stops after reinforcement and then slowly increases as the time approaches when reinforcement will be available •The length of the pause on an FI schedule is affected by: -Experience--the ability to withhold the response until close to the end of the interval increases with experience -The length of the interval--the pause is longer with longer FI schedules Variable-Interval Schedules •Variable-interval schedule: There is an average interval of time between available reinforcers, but the interval varies from one reinforcement to the next -Characterized by steady rates of responding -The longer the average interval, the lower the response rate -The scallop effect does not occur on VI schedules -There is no pause following reinforcement on VI schedules •Differential Reinforcement Schedules -Differential reinforcement schedule: A schedule of reinforcement in which a specific number of behaviors must occur within a specified time in order for reinforcement to occur -So, reinforcement depends on both time and number of responses Differential Reinforcement of High Responding Schedules •Differential reinforcement of high rates of responding schedule (DRH): A schedule of reinforcement in which a specified high number of responses must occur within a specified time in order for reinforcement to occur
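The basic schedules above can be summarized as decision rules for when reinforcement is delivered. The sketch below is illustrative Python, not from the text; the function names and the uniform draw used for the variable-ratio requirement are my own assumptions:

```python
import random

def fr_reinforced(responses_since_reward, ratio):
    # Fixed ratio: reinforce once the required response count is reached
    return responses_since_reward >= ratio

def vr_required(mean_ratio):
    # Variable ratio: a new requirement is drawn each trial; the
    # requirements average out to mean_ratio over many trials
    return random.randint(1, 2 * mean_ratio - 1)

def fi_reinforced(seconds_since_reward, interval, responded):
    # Fixed interval: the first response AFTER the interval
    # has elapsed is the one that is reinforced
    return seconds_since_reward >= interval and responded

# A VR-4 block such as 5, 3, 7, 2, 3 averages to 4 responses
block = [5, 3, 7, 2, 3]
print(sum(block) / len(block))  # 4.0
```

Note how `fi_reinforced` requires both that the interval has elapsed and that a response occurs: time alone never delivers the reinforcer on an interval schedule.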

schedule-induced behavior

Skinner (1948) found that reinforcing pigeons on an FI-15 second schedule resulted in ritualistic, stereotyped behavior. Superstitious behavior: A "ritualistic" stereotyped pattern of behavior exhibited during the interval between reinforcements. •Animals may have associated the superstitious behavior with reinforcement. Terminal behavior: The behavior that precedes reinforcement when an animal is reinforced on an interval schedule of reinforcement •It is reinforcer oriented. Interim behavior: The behavior following reinforcement when an animal is reinforced on an interval schedule of reinforcement Schedule-induced behavior: The high levels of interim behavior that occur following reinforcement on an interval schedule Schedule-Induced Polydipsia •The high levels of water consumption following food reinforcement on an interval schedule -Some aspect of providing food on an interval schedule can produce excessive drinking. -Has been observed in rats, pigeons, and nonhuman primates. Other Schedule-Induced Behaviors •Interval schedules of reinforcement can also induce high rates of wheel running when water and food are used as reinforcers. -Wheel running usually occurs in the time immediately following reinforcement and decreases as the time for the next reinforcement nears. The Nature of Schedule-Induced Behavior •Riley and Wetherington (1989) proposed that schedule-induced behavior is instinctive behavior produced by periodic reinforcement. •The relative insensitivity of schedule-induced polydipsia to taste aversion is compelling evidence for this view. Schedule-Induced Polydipsia and Human Alcoholism •Excessive levels of appetitive behaviors occur in humans. •Gilbert (1974) suggested that interval schedules could be responsible for some people's excessive alcohol consumption. •Rats on interval schedules show excessive consumption of cocaine solution. •Primates' self-administration of cocaine is higher on fixed-interval schedules.
•Schedule-induced polydipsia in animals is a good model for excessive alcohol consumption in humans. •Smoking and eating are other types of schedule-induced behaviors seen in humans. -Evidence is weaker in humans, and natural schedule-induced behavior develops more slowly in humans than in animals. •There are significant individual differences in schedule-induced behaviors in animals on an interval schedule. Individual differences in schedule-induced behaviors in humans may account for the fact that only 10% of the population drinks half of the alcohol consumed in the United States each year and is considered to be alcoholic (National Institute on Alcohol Abuse and Alcoholism, 2015). Individual differences in schedule-induced behaviors may also explain the apparent differences between animals and humans.

avoidance of aversive events

Species-Specific Defense Reactions (SSDR) •An instinctive reaction, elicited by signals of danger, that allows the avoidance of an aversive event. •Animals either possess an instinctive means of keeping out of trouble, or they perish. •An animal's evolutionary history determines which behaviors will become SSDRs. •Animals easily learn to avoid an aversive event when they can use SSDRs. -But they have difficulty in learning to avoid an aversive event if it requires performing a behavior other than the SSDR. Predispositions and Avoidance Learning •Bolles (1978) suggests that aversive events elicit instinctive species-specific defensive responses. -The environment in which aversive events occur can cue instinctive defensive reactions as conditioned responses. -Suggests that Pavlovian, rather than operant, conditioning is responsible for avoidance learning. -Bolles and Riley (1973) found that reinforcement is not responsible for rapid acquisition of avoidance behavior. -Some animals could avoid being shocked by freezing. -Neither reinforcement nor punishment affected freezing responses. Species-Specific Defense Reactions In Humans •Fredrickson and Branigan (2005) found that the subjects experiencing a positive emotion of joy or contentment were able to imagine significantly more responses to that emotion than were the subjects experiencing the neutral emotion, while subjects experiencing the negative emotion of fear or anger imagined fewer responses than did the subjects experiencing the neutral emotion. •According to Fredrickson and Branigan (2005), these results indicated that positive emotions broaden a person's thought--action repertoire, whereas a negative emotion, elicited by threat or danger, limits the ways in which a person can respond to threat or danger. 
•Other studies by Fredrickson and her colleagues demonstrate that positive emotions allow humans to come up with many new solutions to threat or danger, while negative emotions limit the ways in which threat or danger is met.

Sutherland and Mackintosh attentional theory

Suggests that attention to the relevant dimension is strengthened in the first stage, and association of a particular response to the relevant stimulus occurs in the second stage of discrimination learning. Certain "analyzers" are aroused, which determines which dimension of the stimulus the subject responds to. Initially, the level of arousal of a particular analyzer is related to the intensity and salience of the stimulus dimension; the greater the strength and salience of a particular dimension, the more likely that dimension will activate the analyzer sufficiently to arouse attention. The predictive value of a particular stimulus dimension influences the amount of attention the analyzer of that stimulus dimension arouses. The analyzer will arouse more attention if the stimulus predicts important events. In the second phase of discrimination learning, the activity of the analyzer is attached to a particular response, and the response strengthens as a result of reinforcement. The relative predictiveness of an SD determines its ability to control response. The Recognition of the Relevant Dimension •According to Sutherland and Mackintosh (1971), each stimulus dimension can activate an analyzer. The analyzer detects the presence of the salient or relevant aspect of a stimulus, and the arousal of a particular analyzer causes an animal to attend to that dimension. Thus, the presentation of a compound stimulus arouses the analyzer of the relevant dimension but not the analyzers of the other stimulus dimensions. •According to Sutherland and Mackintosh (1971), the predictive value of a particular stimulus dimension influences the amount of attention the analyzer of that stimulus dimension arouses. The analyzer will arouse more attention if the stimulus dimension predicts important events. However, an analyzer will arouse less attention if the stimulus dimension for that analyzer is not predictive of future events. 
Association of the Analyzer With a Response •In the second phase of discrimination learning, the activity of the analyzer attaches to a particular response. •The connection between the analyzer and the response strengthens as the result of reinforcement. •Sutherland and Mackintosh (1971) viewed reinforcement as increasing both the attention to a particular dimension and the ability of a particular stimulus to elicit the response. Predictive Value of Discriminative Stimuli •Wagner, Logan, Haberlandt, and Price (1968) investigated the influence of an SD's predictiveness on its control of the operant response. •Wagner and his colleagues (1968) were interested in the degree of control the light cue would gain. •Wagner and colleagues reported that the light better controlled responding in the first group than in the second group. These results indicate that it is the relative predictiveness of an SD that determines its ability to control an operant bar-press response.

Hull-Spence theory of discrimination learning

Suggests that conditioned excitation first develops to the SD, followed by the conditioning of inhibition to the SΔ. Development of Conditioned Excitation and Inhibition •According to the Hull-Spence view, discrimination learning develops in three stages. -First, conditioned excitation develops to the SD as the result of reinforcement. -Second, nonreinforcement in the presence of the SΔ results in the development of conditioned inhibition to the SΔ. -Finally, the excitation and inhibition generalize to other stimuli. •The combined influence of excitation and inhibition determines the level of response to each stimulus. •The Hull-Spence model predicts a steeper generalization gradient with discrimination training than with nondiscrimination training. •The maximum response occurs not to the SD, but rather to a stimulus other than the SD, in the stimulus direction opposite that of the SΔ. -This is known as the peak shift phenomenon. The Peak Shift Phenomenon •Hanson (1959) reported three important differences between the discrimination and nondiscrimination generalization gradients. -A steeper generalization gradient appeared with discrimination than with nondiscrimination training. •This is predicted by the Hull-Spence model. -The greatest response for discrimination-training subjects was not to the SD, but to a stimulus away from the SD in the direction opposite the SΔ. •This is also predicted by the Hull-Spence model. -The overall level of response was higher with discrimination training than with nondiscrimination training. •This was not predicted by the Hull-Spence model. •The peak shift phenomenon is not always present with human subjects. -This finding has been interpreted as evidence that, in some cases, humans' responses are based on associative processes early in testing and cognitive processes later in testing.
•Peak shift: The shift in the maximum response, which occurs to a stimulus other than SD and in the stimulus direction opposite that of the SΔ •In contrast, pigeons receiving nondiscrimination training responded maximally to the SD •The overall level of response was higher with discrimination training than with nondiscrimination training, which the Hull-Spence model did not predict. The Aversive Character of SΔ •Terrace (1964) suggested that behavioral contrast is responsible for the heightened response seen with discrimination training. •Argued that exposure to the SΔ is an aversive event and that the frustration produced during SΔ periods increased the intensity of the response to other stimuli. •The effect of drugs that eliminate frustration-induced behavior supports Terrace's view. -Administration of these drugs disrupts the performance on a discrimination task.
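The Hull-Spence account of peak shift can be worked through numerically: if excitation generalizes around the SD and inhibition generalizes around the SΔ, the maximum of the net (excitation minus inhibition) gradient falls on the far side of the SD, away from the SΔ. The Gaussian shapes and all parameter values below are illustrative assumptions, not Hanson's (1959) data:

```python
import math

def net_response(stimulus, s_plus=550, s_minus=560,
                 excit=100, inhib=60, width=15):
    # Excitation generalizes around the SD (s_plus); inhibition
    # generalizes around the S-delta (s_minus); net responding
    # reflects their difference at each test stimulus
    e = excit * math.exp(-((stimulus - s_plus) / width) ** 2)
    i = inhib * math.exp(-((stimulus - s_minus) / width) ** 2)
    return e - i

stimuli = range(500, 601)
peak = max(stimuli, key=net_response)
print(peak)
```

With the SD at 550 nm and the SΔ at 560 nm, the net gradient peaks a few nanometers below 550, i.e., shifted away from the SΔ, which is the peak shift pattern described above.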

Principles of Pavlovian Conditioning

The Conditioning Process •Basic Components: Four basic components make up the conditioning paradigm: -Unconditioned stimulus (UCS): A biologically significant environmental event that can elicit an instinctive reaction without any experience -Unconditioned response (UCR): An innate reaction to an unconditioned stimulus -Conditioned stimulus (CS): A neutral stimulus that becomes able to elicit a learned response as a result of being paired with an unconditioned stimulus -Conditioned response (CR): A learned reaction to a conditioned stimulus •Prior to conditioning, the UCS elicits the UCR, but the CS cannot elicit the CR •During conditioning, the CS is paired with the UCS •Following conditioning, the CS can elicit the CR •The strength of the CR increases gradually during acquisition until a maximum level is reached -Asymptotic level: the maximum level of conditioning •The UCS-UCR complex is the unconditioned reflex •The CS-CR complex is the conditioned reflex

habituation and sensitization

The Habituation and Sensitization Process •Habituation: a decrease in responsiveness to a specific stimulus as a result of repeated experience with it. •Sensitization: an increased reactivity to all environmental events following exposure to an intense stimulus. •One example of habituation is when animals gradually eat more of a novel food after initial ingestional neophobia. •Ingestional neophobia: the avoidance of novel foods. •Habituation of the neophobic response can lead to increased consumption of novel foods. •Animals can also show an increased neophobic response. -If an animal is ill when it ingests a novel food, it may avoid the food in the future. -The greater neophobic response when animals are ill is due to the sensitization process. •Experience can also influence the properties of a reward. •Homeostasis has traditionally been considered responsible for either increased or decreased effectiveness of a reward. -Decreased effectiveness due to satiation -Increased effectiveness due to deprivation •However, it appears that habituation and sensitization can also influence the effectiveness of a reward. -Habituation leads to decreased effectiveness. -Sensitization leads to increased effectiveness. •Habituation and sensitization can explain changes in reward effectiveness that cannot be explained by homeostasis. Determinants of the Strength of Habituation and Sensitization •Several variables affect habituation and sensitization. •Stimulus intensity -Rate of habituation and sensitization is determined by stimulus intensity. -More intense stimuli produce stronger sensitization than weaker ones. -Weaker stimuli produce rapid habituation. -Habituation may not occur with very intense stimuli. •Frequency of presentation -Habituation increases with more frequent stimulus presentations. -Greater sensitization occurs when a strong stimulus is presented frequently.
•Stimulus characteristics -Habituation to a stimulus appears to depend on the specific characteristics of the stimulus. -A change in any characteristic of the stimulus will result in an absence of habituation. -Sensitization is much less stimulus specific. -A change in the properties of the stimulus typically does not affect sensitization. •Time course -Both habituation and sensitization can be relatively transient phenomena. -Habituation may be either short-term or long-term. -Sensitization is a temporary effect.
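The determinants above can be sketched as two separate multipliers on response strength: a stimulus-specific habituation decrement that grows with repeated presentations, and a state-dependent sensitization boost that applies to any stimulus. The equations and parameter values are invented for illustration:

```python
def habituate(initial_strength, presentations, decrement=0.8):
    # Repeated exposure weakens responding to that specific stimulus;
    # each presentation multiplies response strength by `decrement`
    return initial_strength * decrement ** presentations

def with_sensitization(response, arousal):
    # Sensitization operates on the overall arousal state, boosting
    # the response to ANY stimulus (it is not stimulus specific)
    return response * (1 + arousal)

# Withdrawal response to a repeated weak touch declines across trials
weak_touch = [round(habituate(10, n), 2) for n in range(5)]
print(weak_touch)

# After an intense stimulus raises arousal, even a habituated
# response is exaggerated
print(with_sensitization(habituate(10, 3), arousal=0.5))
```

The design choice mirrors dual-process theory: the two factors are independent, so a sensitizing event can temporarily mask, without erasing, an accumulated habituation decrement.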

how readily is an instrumental or operant response learned?

The Importance of Contiguity •Reward can lead to the acquisition of an instrumental response if it immediately follows the behavior -Learning is impaired if reward is delayed -The longer the delay, the less conditioning occurs •The Effect of Delay -Delay impairs learning -The presence of a secondary reward can bridge the interval and reduce the impact of delay •When these cues are not present, even short intervals produce little conditioning •Delay of Reward and Conditioning in Humans -Longer delays between the instrumental behavior and the reward result in poorer conditioning. -The immediacy of reinforcement contributes to the effectiveness of reinforcement that motivates gambling behavior. The Impact of Reward Magnitude •The Acquisition of an Instrumental or Operant Response -The greater the magnitude of the reward, the faster the task is learned -The differences in performance may reflect motivational differences •The Performance of an Instrumental or Operant Response -Crespi's 1942 study also evaluated the level of instrumental performance as a function of reward magnitude. -Crespi discovered that the greater the magnitude, the faster the rats ran down the alley to obtain the reward, an observation that indicates that the magnitude of reward influences the level of instrumental behavior. -Other studies also show that reward magnitude determines the level of performance in the runway situation (Mellgren, 1972) and in the operant chamber (Gutman, Sutterer, & Brush, 1975). 
-Many studies (Crespi, 1942; Zeaman, 1949) reported a rapid change in behavior when reward magnitude is shifted, which suggests that motivational differences are responsible for the performance differences associated with differences in reward magnitude •The Importance of Past Experience -Depression effect: The effect in which a shift from high to low reward magnitude produces a lower level of response than if the reward magnitude had always been low •Also called negative contrast -Elation effect: The effect in which a shift from low to high reward magnitude produces a greater level of responding than if the reward magnitude had always been high •Also called positive contrast -The contrast effect lasts for only a short time -Frustration seems to play a role in the negative contrast effect •The negative contrast effect can be reduced by anxiety-reducing drugs •The negative contrast effect can also be eliminated by lesions to the medial amygdala, a region of the brain that plays a role in producing frustration -The emotional response of elation may explain the positive contrast effect The Neuroscience of Behavioral Contrast •The medial amygdala plays a significant role in producing the emotion of frustration when a goal is blocked (Amaral, Price, Pitkanen, & Carmichael, 1992). •Lesions in the medial amygdala have been found to eliminate the frustration experienced by rats following the omission of reward in a goal box and to produce a decreased running speed to that goal box (Henke & Maxwell, 1973). •Becker, Jarvis, Wagner, and Flaherty (1984) reported that localized lesions of the medial amygdala eliminated the negative contrast effect. •Kawasaki, Annicchiarico, Glueck, Morón, and Papini (2017) reported a similar finding following lateral amygdala lesions. •Liao and Chuang (2003) also found that the anxiolytic drug Valium (diazepam) disrupted the negative contrast effect when administered directly into the amygdala but not into the hippocampus.
•Brewer and colleagues (2017) observed that suppressing amygdala activity reduces the aversive qualities of a shift to a smaller reward magnitude. The Influence of Reward Magnitude in Humans •Research with young children (see Hall, 1979) has shown that the magnitude of reward does affect the development of an instrumental or operant response. •Siegel and Andrews (1962) found that 4- and 5-year-old children responded correctly more often on a task when given a large reward (e.g., a small prize) rather than a small reward (e.g., a button). •Studies have also shown that reward magnitude influences instrumental or operant behavior in adults. •Atkinson's (1958) research found that the amount of money paid for successful performance influences the level of achievement behavior.

habituation and sensitization

The Nature of Habituation and Sensitization •Dual Process Theory -Habituation reflects a decreased responsiveness of innate reflexes. -Sensitization reflects a readiness to react to all stimuli. •The process responsible for the habituation effect operates at the level of the stimulus and response. -It reflects a decreased responsiveness of innate reflexes. -A stimulus becomes less able to elicit a response resulting from repeated exposure to that stimulus. •The process responsible for the sensitization effect is influenced by the individual's state of arousal. -It operates at the level of the central nervous system. -Can be affected by factors like drugs, emotional distress, or fatigue. -For example, anxiety increases responsiveness, whereas depression decreases responsiveness. •Evolutionary Theory -Survival of an animal depends on its ability to recognize biologically significant stimuli. •Animals need to set sensory thresholds to maximize the probability of detecting potentially significant external events but not detect irrelevant ones. •Habituation and sensitization evolved as nonassociative forms of learning to modify sensory thresholds so that only significant external events will be detected. -Habituation is the process that filters out external stimuli of little relevance by raising the sensory threshold to those stimuli. -Sensitization decreases sensory thresholds to potentially relevant external events. -Thus, habituation and sensitization are homeostatic processes that optimize an animal's likelihood of detecting significant external events.

learned helplessness

The belief that events are independent of an individual's behavior and are uncontrollable, which results in behavioral deficits characteristic of depression. An Expectancy Theory of Learned Helplessness •Original Animal Research -Seligman's original learned helplessness theory was developed from his animal studies. •The original studies used dogs as subjects. -One group received escapable shock. -A second group received inescapable shock. •The group that was exposed to inescapable shock did not later learn to avoid shock. -The learned helplessness effect has been demonstrated in a wide range of species, including cats, rats, and humans. -Helplessness in Humans •The results of human studies indicate that uncontrollable experiences produce similar negative effects on learning in both humans and animals. •Similarities of Helplessness and Depression -The importance of learned helplessness lies in its potential relation to clinical depression. -Studies indicate that non-depressed individuals who are exposed to inescapable noise behave similarly to depressed individuals. •Criticism of the Learned Helplessness Approach -The learned helplessness model has been criticized because: •It is too simplistic. •It did not precisely reflect the process that produces depression. -One problem is that some learned helplessness studies have not produced performance deficits. •In some cases, performance actually improved on subsequent tasks after exposure to insoluble problems. -One explanation for these inconsistencies lies in attribution. •Depressed individuals attribute their successes to external factors of luck and task ease. •But, depressed individuals attribute their failures to internal factors of lack of effort and ability. •Non-depressed individuals attribute successes to internal factors and failures to external factors. -Learned helplessness models cannot explain such observations.
•Causal attributions can be made on three dimensions -Personal-universal (i.e., internal-external) -Global-specific -Stable-unstable •Internal attribution: The assumption that personal factors lead to a particular outcome •External attribution: The view that environmental forces determine success (reward) or failure (aversive event) •Stable attribution: The belief that the perceived causes of past success or failure will determine future outcomes •Unstable attribution: The belief that other factors may affect outcomes in the future •Specific attribution: The belief that a particular outcome is limited to a specific situation •Global attribution: The assumption a specific outcome will be repeated in many situations •The combination of these three dimensions produces eight possible attributional outcomes. •The specific attribution will determine whether: -Depression occurs -Depression generalizes to other situations -The depression is temporary or ongoing •Personal Versus Universal Helplessness -Personal helplessness: Occurs when an internal factor is perceived to be the cause of an undesired outcome •For example, a student may believe that they failed a test because they are incompetent. -Universal helplessness: Occurs when the environment is structured so that no one can control future events. •For example, a student may believe that a test is so difficult that no one can pass it. -The nature of the helplessness determines whether loss of esteem occurs. •Those who attribute their failure to external forces, that is, universal helplessness, experience no loss of self-esteem. •They do not consider themselves responsible for their failure. •Those who attribute their failure to internal forces, that is, personal helplessness, do have a loss of self-esteem. •They believe their incompetence causes failure.
-Both personal and universal helplessness produce the expectation of an inability to control future events and a lack of an ability to initiate voluntary behavior, which are characteristic of depression. •Individuals who are personally depressed make internal attributions for failure. •Individuals who are universally depressed make external attributions for failure. •Global Versus Specific Causal Attributions -Individuals who make a specific attribution for failure may not become depressed. •They make a specific attribution for their failure and do not experience helplessness in other situations. -Those who make a global attribution for failure after an uncontrollable event are more likely to become depressed. •They make a global attribution for their failure and assume they will fail in other, or even most, situations as well. •Stable Versus Unstable Causal Attributions -Ability is considered a stable factor. -Effort is considered an unstable factor. -The idea that the stability or instability of the perceived cause influences helplessness explains why depression is sometimes temporary and sometimes enduring, depending on the situation. •Severity of Depression -Severe depression typically appears when a person attributes their failure to internal, global, and stable factors. -The depression is intense because they perceive themselves as incompetent (internal) in many situations (global) and believe the incompetence is unlikely to change (stable).

functionalism

The concept of instincts was strongly criticized: •Anthropologists noted that differences in values, beliefs, and behaviors among cultures are inconsistent with the idea of universal human instincts. •In addition, widespread and uncritical use of the "instinct concept" did nothing to advance the understanding of human behavior. These criticisms led to the Behavioral Revolution. By the 1920s, psychologists had moved away from the "instinct concept" explanation and began to emphasize the learning process through a scientific method. •Psychologists who viewed experience as the major determinant of human actions were called Behaviorists. Today, most behavioral scientists agree that behavior results from an interaction of influences from both instinctive (i.e., internal) and experiential (i.e., external) processes. •This is called the nature-nurture interaction.

nature of pavlovian conditioning

The ease with which the CS develops depends on: •The predictive quality of the CS •The predictive quality of factors related directly to the CS •Rescorla-Wagner Associative Model expresses four main ideas: -There is a maximum associative strength that can develop between a CS and a UCS •Different UCSs support different maximum levels of conditioning and therefore have different asymptotic values. -Although the associative strength increases with each training trial, the amount of associative strength gained on a particular training trial depends on the level of prior training. •Because the typical learning curve negatively accelerates, more associative strength will accrue on early trials than on later ones. -The rate of conditioning varies depending on the CS and UCS used. •Associative strength accrues quickly to some stimuli but slowly to others. -The level of conditioning on a particular trial is influenced not only by the amount of prior conditioning to the stimulus but also by the level of previous conditioning to other stimuli also paired with the UCS. 
•When two stimuli are presented together, they must share the associative strength supported by the UCS •Rescorla and Wagner developed a mathematical equation based on the four ideas: -ΔVA = K(λ - VAX), where •VA = the associative strength between CSA and the UCS •ΔVA = the change in associative strength that develops on a specific trial when CSA and the UCS are paired •K = the rate of conditioning, determined by the nature of CSA and the intensity of the UCS -K can be separated into α (alpha), which refers to the salience of CSA, and β (beta), which reflects the intensity of the UCS -λ = the maximum level of conditioning the UCS supports -VAX = the level of conditioning that has already accrued to the conditioned stimulus (A) as well as to any other stimuli (X) presented during conditioning -Thus, VAX = VA + VX •The formula has been successful in explaining the blocking phenomenon
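The equation's explanation of blocking can be sketched numerically. The following illustrative simulation (the learning-rate and asymptote values are hypothetical, not from the text) conditions CS A alone to near asymptote, then pairs the compound AX with the same UCS; because VAX is already near λ, almost no associative strength remains for X to gain.

```python
# Illustrative simulation of the Rescorla-Wagner model and blocking.
# The values of K and lambda are hypothetical choices for illustration.

def rw_delta(v_total, k, lam):
    # Delta V = K * (lambda - V_AX), where V_AX is the combined strength
    # of all stimuli present on the trial.
    return k * (lam - v_total)

k, lam = 0.3, 1.0
v_a = 0.0  # associative strength of CS A
v_x = 0.0  # associative strength of CS X

# Phase 1: CS A alone is paired with the UCS until V_A nears the asymptote.
for _ in range(20):
    v_a += rw_delta(v_a, k, lam)

# Phase 2: the compound AX is paired with the same UCS. Little associative
# strength remains to be shared, so X gains almost none.
for _ in range(20):
    delta = rw_delta(v_a + v_x, k, lam)
    v_a += delta
    v_x += delta

print(round(v_a, 3), round(v_x, 3))  # V_X stays near zero: blocking
```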

Hopelessness theory of depression

The hopelessness theory suggests that attributional style and environmental circumstances together contribute to hopelessness and hopelessness depression. Negative Life Events and Hopelessness •Hopelessness: The belief that negative events are inevitable and that the person is deficient or unworthy. -Occurs when a negative life event is attributed to stable and global factors -Hopelessness Theory of Depression: The view that whether a person becomes hopeless and depressed depends upon a person making a stable and global attribution for negative life events and the severity of those negative life events A Negative Explanatory Style and Vulnerability to Depression A Positive Explanatory Style and Hopefulness •Positive attributional style: A belief system that negative life events are controllable, which leads to hopefulness and a resilience to depression •Hopefulness: The belief that negative life events can be controlled and are not inevitable. •"Perceived control is basic to human functioning" (Langer, 1983). •Optimists tend to have a positive attributional style whereas pessimists tend to have a negative attributional style. •A change in attributional style may influence the tendency to become depressed. Repeated exposure to uncontrollable events may cause biochemical, as well as behavioral, changes in depressed persons. •These include decreases in norepinephrine in the locus coeruleus. Glutamate appears to play a role in learned helplessness and in clinical depression.

behavior systems approach

The idea that learning evolved as a modifier of innate behavior systems and functions to change the integration, tuning, instigation, or linkages within a particular system •According to Timberlake, an animal possesses instinctive behaviors such as feeding, mating, social bonding, care of young, and defense. •Learning evolved as a modifier of existing behavior systems. Learning can also improve simple and complex motor tasks through repetition and contingent delivery of reinforcement. Activation of a mode can also be conditioned to cues that signal the receipt of reinforcers. Conditioning of a specific mode produces a general motivational state that sensitizes all of the perceptual-motor modules in that mode. Different stimuli can be conditioned to different modes. Variations of learning can occur between species. Predisposition: Instances where learning occurs more rapidly or in a different form than expected. •Timberlake (2001) suggests that these occur when environmental circumstances easily modify the instinctive behavior system of the animal. Constraint: When learning occurs less rapidly or less completely than expected. •Occurs when environmental circumstances are not suited to the animal's instinctive behavior system.

Conditions Affecting the Acquisition of a Conditioned Response

The pairing of a CS and UCS does not automatically ensure that conditioning will occur. Several factors influence whether a CR will develop following a CS-UCS pairing •Contiguity: temporal pairing of two events •CS-UCS interval: the interval between the termination of the CS and the onset of the UCS -Also called the interstimulus interval (ISI) •In general, conditioning occurs best when the CS-UCS interval is very short -However, the optimal CS-UCS interval is different for different responses •The differences appear to depend on the nature of the responses -For example, eyeblink conditioning requires an extremely short CS-UCS interval •In nature, for an eyeblink to protect the eye, it must occur very quickly -Flavor aversion occurs best after a relatively long CS-UCS interval •In nature, it takes time for an ingested substance to make the animal ill •The Optimal Conditioned Stimulus-Unconditioned Stimulus Interval -The optimal CS-UCS interval or interstimulus interval (ISI) is thought to reflect the latency to respond in a particular reflex system (Hilgard & Marquis, 1940). They suggested that the different optimal CS-UCS intervals occur because the response latency of the heart rate response is longer than that of the eyeblink closure reflex. -Wagner and Brandon (1989) suggested that response latency can affect the optimal ISI. -The attenuation of conditioning produced by a temporal gap between the CS and UCS can be reduced if a second stimulus is presented between the CS and the UCS -The intermediate stimulus acts as a catalyst, enhancing the association of the CS and the UCS •Thus, with appropriate procedures, a high level of conditioning can occur even with a significant delay between CS and UCS

nature of reinforcement

There has been much speculation about the characteristics that make a reinforcer reinforcing Skinner defined a reinforcer as an event whose occurrence will increase the frequency of any behavior that produces it. Others have been interested in specifying the conditions that determine whether an event is reinforcing Probability-Differential Theory: the idea that an activity will have reinforcing properties when its probability of occurrence is greater than that of the reinforced activity •Thus, a reinforcer is any activity whose probability of occurring is greater than that of the reinforced activity Note that in Premack's view it is the eating response to food, not the food per se, that is the reinforcer for a hungry rat in an operant chamber •Since eating is a more probable behavior than bar pressing, eating can reinforce a rat's bar pressing behavior

the uses of activity reinforcement

There have been several studies that have documented the successful use of activities as reinforcers •Opportunities to engage in high-probability activities have been used to increase desired behaviors in diverse settings: -Educational settings •Preschool settings •Intellectually challenged students -Business settings •Retail stores •Fast food restaurants •Technology and engineering departments Using activities as reinforcers, psychologists also discovered that not only can the use of an activity increase the performance of the target behavior but it can also decrease the level of the activity used as the reinforcer •Response Deprivation Theory Response deprivation theory: the idea that when a contingency restricts access to an activity, it causes that activity to become a reinforcer Principles of response deprivation theory have been demonstrated with human and animal subjects in a variety of settings. Often an operant contingency requires more responses than an animal is willing to make Behavioral economics examines the "cost-benefit" relationship between the reinforcer and the responses required to obtain the reinforcer Behavioral Allocation •Blisspoint: the free operant level of two responses; also called paired basepoint •When an operant contingency results in disruption of blisspoint, the animal allocates the number of responses that brings it as close to blisspoint as possible •Behavioral allocation view: the idea that an animal emits the minimum number of contingent responses in order to obtain the maximum number of reinforcing activities -Thus, we can think of a contingent activity as a cost and a reinforcing activity as a gain •This approach assumes that the economic principles that apply to purchasing products tell us about the level of response in an operant conditioning setting Choice Behavior •In some circumstances, there is not a simple operant contingency -Rather, sometimes the animal must choose from two or more contingencies •Matching Law -Matching 
law: when an animal has free access to two different schedules of reinforcement, its responding is proportional to the level of reinforcement available on each schedule -The matching law is a simple economic principle that predicts behavior in many choice situations -Other behavioral economic principles apply to more complex choice situations •Maximizing Law -Maximizing law: the goal of behavior in a choice task is to obtain as many reinforcements as possible -Momentary maximization theory: the view that behavior in a choice task is determined by which alternative is perceived as best at that moment in time -Delay reduction theory: the behavioral economic theory that states that overall behavior in a choice task is based on the matching law, while individual choices are determined by which choice produces the shortest delay in the next reinforcer •The Neuroscience of Choice Behavior -Behavioral neuroscientists have identified neurons that respond to different probabilities of success and to the reinforcement value of each choice, and have found that the rate of activity of those neurons corresponds to choice behavior as predicted by the matching law (Katahira, Okanoya, & Okada, 2012; Sakai & Fukai, 2008). -Barraclough, Conroy, and Lee (2004) and Montague and Berns (2002) found that different neurons in the prefrontal cortex encode the probability of success and the value of success of each choice and respond according to the predictions of the matching law. -Kubanek and Snyder (2015) reported that different neurons in the parietal cortex responded based on the desirability of different choices as predicted by the matching law. -The anticipation of a specific future reinforcement selectively activates dopamine neurons in the nucleus accumbens to produce reinforcement-seeking behavior (Knutson, Adams, Fong, & Hommer, 2001). 
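The matching law's prediction can be computed directly. In this minimal numeric illustration, the reinforcement rates (reinforcers per hour on each schedule) are hypothetical values chosen only for the example.

```python
# A minimal numeric illustration of the matching law. The reinforcement
# rates used below are hypothetical.

def matching_proportion(r1, r2):
    # Predicted proportion of responses allocated to schedule 1:
    # B1 / (B1 + B2) = R1 / (R1 + R2)
    return r1 / (r1 + r2)

# If schedule 1 delivers 40 reinforcers/hour and schedule 2 delivers 20,
# the matching law predicts two thirds of responses go to schedule 1.
p1 = matching_proportion(40, 20)
print(round(p1, 3))  # 0.667
```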

attentional view

They also found that later pairings of the light (CS) with milk (UCS) yielded a reduced CR compared with control animals who did not experience preexposure to the light CS. •These results suggest that habituation of an orienting response to a stimulus is associated with the later failure of conditioning to that stimulus. •Reinstatement: An orienting response indicates that an animal is attending to the stimulus, and attention allows the stimulus to be associated with the UCS. •These observations provide further support for an attentional view of the CS preexposure effect. •Jones and Haselgrove (2013) provided support for the view that the overshadowed stimulus loses future associability as a result of being paired with a more salient stimulus. •The overshadowing experience caused stimulus A to lose associability and, thereby, not become associated with food reinforcement during operant conditioning, relative to auditory stimulus Y, which had not been overshadowed during the initial stage of the study. The Role of Uncertainty in Pavlovian Conditioning •An animal attends to a stimulus that predicts a biologically significant event and ignores stimuli it has learned are irrelevant. •Pearce and Hall (1980) proposed an alternative to Mackintosh's attentional theory: rather than the predictiveness principle determining attention, the Pearce-Hall model assumes that attention is determined by uncertainty; that is, the uncertainty principle assumes that attention is focused on stimuli whose predictiveness is uncertain. •Dickinson (1980) argued that it makes the most sense to focus attention mostly on stimuli whose predictive value is uncertain rather than on stimuli whose predictive value is known.

nature of punishment

Thorndike's Negative Law of Effect •The negative law of effect states that punishment weakens the strength of an S-R bond •The recovery of responding shortly after exposure to a mild punishment contradicts this view because the weakened bond should be permanent, not temporary Guthrie's Competing Response View •Guthrie suggested that punishment will suppress a behavior if a response incompatible with the behavior has been conditioned •In order to be effective, punishment must elicit a behavior incompatible with the punished response •Other research indicates that response competition alone is insufficient to make punishment effective Two-Factor Theory of Punishment •Mowrer's view: fear is classically conditioned to the environmental events present during punishment in the first stage. Any behavior that terminates the feared stimulus will be acquired through instrumental conditioning in the second stage. The reinforcement of the escape response causes the animal or person to exhibit the escape response rather than the punished response in the punishment situation. -The suppressive effect of punishment results from elicitation of a behavior other than the punished behavior. •Mowrer's view of avoidance learning and his view of punishment are describing two aspects of the same process: -Fear motivates an avoidance behavior which enables an animal or person to prevent punishment -The occurrence of the avoidance behavior causes an animal or person not to exhibit the punished response, which is thus responsible for the suppressive effect of punishment •Criticism of Mowrer's view -The overt behavior that is motivated by fear and that prevents an animal from exhibiting the punished response is often difficult to identify. 
Estes's Motivational View of Punishment •Estes's view explains why an overt response is not essential •When a behavior is reinforced, the motivational system present prior to reward and the response become associated •When this motivational system is activated again, the response is elicited •The primary mechanism underlying the influence of punishment on behavior is the competition of motives •If a stimulus precedes punishment, it develops the ability to inhibit the motive associated with the punished behavior •Thus, punishment works because an animal or person is no longer motivated, and therefore the punished response is no longer elicited •Punishment suppressed behavior by inhibiting the motivational system responsible for eliciting the punished response

Tolman's Purposive Behavior

Tolman's theories were very influential in the development of the cognitive theories. When Tolman proposed his cognitive theory in the 1930s and 1940s, most psychologists preferred Hull's mechanistic theory. By the 1950s, the cognitive view started to gain supporters. The Flexibility of Behavior •Tolman argued that behavior has both direction and purpose. -This doesn't mean we are necessarily aware of either the purpose or direction. •He believed that behavior is goal oriented. -We are motivated either to achieve a desired condition or to avoid an aversive situation. •Specific events in the environment convey information about where our goals are located. -We can reach our goals only after we learn to read the signs leading to reward or punishment. Motivation and Expectancy Theory •There are two types of motivation: -Deprivation: Produces an internal drive state that increases demand for the goal object. -Cathexis: Environmental events can acquire motivational properties through association with either a primary drive or a reward. •The ability of deprivation states to motivate behavior transfers to the stimuli present during the deprivation state. •Positive cathexis: leads the organism to approach a stimulus •Negative cathexis: leads the organism to avoid a stimulus •The concept of cathexis is similar to Hull's concept of acquired drive. •Tolman's equivalence belief principle is the idea that the reaction to a secondary reward is the same as the reaction to the original goal. •Comparable to Spence's anticipatory goal concept. Is Reward Necessary for Learning? •Tolman suggested that reward is not really necessary for learning. -He argued that simultaneous experience of two events is sufficient for learning. -Reward affects performance, but not learning. •The understanding of when events will occur can develop without a reward. 
-Presence of reward will motivate the organism to exhibit previously learned behavior. •An Evaluation of Purposive Behaviorism Tolman's work caused Hull to make changes in Drive Theory. Once those ideas were incorporated into Drive Theory, Tolman's work did not have a big impact on learning theory. When Drive Theory developed problems in the 1960s and 1970s, the cognitive approach gained wider approval.

traditional learning theories

Two major theoretical approaches to explain the learning process •S-R (stimulus-response) associative theories state that learning occurs through the association of environmental stimuli -Originally neutral environmental stimuli develop the ability to elicit specific responses -Cognitive theories state that learning involves recognizing when important events are likely to occur and understanding of how to obtain these events •Compare/Contrast S-R & Cognitive Theories -S-R theorists propose an inflexible view of behavior -S-R theorists assume that the stimulus environment controls behavior -Cognitive theorists propose a flexible view of behavior -Cognitive theorists assume that mental processes control behavior •S-R Theories -Mechanistic -An originally neutral environmental stimulus develops the ability to elicit a specific response •Cognitive Theories -Involve recognition of when important events, such as reward and punishment, are likely to occur. -Involve an understanding of how to obtain reward and avoid punishment •There are two types of S-R theories -One proposes that reward is necessary to learn an S-R association -The other proposes that the only necessity is for the response to occur in the stimulus context

two-factor theory of avoidance learning

Two-Factor Theory of Avoidance Learning •Mowrer proposed that we learn to avoid aversive events in two stages -First, fear is conditioned to the environmental conditions that precede an aversive event -Second, we learn an instrumental or operant behavior that successfully terminates the feared stimulus -Although it appears that we are avoiding painful events, we are actually escaping a feared stimulus -Initial research evaluating the theory was positive •However, some problems with Mowrer's view became evident Criticisms of Two-Factor Theory •Several problems exist with the two-factor theory: -First, although exposure to the conditioned stimulus without the unconditioned stimulus should eliminate avoidance behavior, avoidance behavior is often extremely resistant to extinction •If fear is acquired through classical conditioning and is responsible for motivating the avoidance behavior, then the presentation of the CS during extinction should cause a reduction in fear and the avoidance behavior should cease -Second, there is an apparent absence of fear in a well-established avoidance response •Strong fear is not necessary to motivate a habitual avoidance response -Third, the mechanism of the Sidman avoidance task is inconsistent with the two-factor theory of avoidance learning •There is no external warning stimulus in the Sidman avoidance task -Sidman avoidance task: A procedure in which an animal experiences periodic aversive events unless it responds to prevent them, with the occurrence of the avoidance response delaying the occurrence of the aversive event for a specific period of time -Fourth, the results of an experiment by Kamin did not match the predictions of the two-factor theory •This study clearly demonstrated that two factors--termination of the CS and avoidance of the UCS--play an important role in avoidance learning D'Amato's View of Avoidance Learning •Anticipatory pain response: Stimuli associated with painful events produce a fear response, which 
motivates escape from the painful environment •Anticipatory relief response: Stimuli associated with the termination of an aversive event produce relief, which motivates approach behavior •D'Amato asserts that we are motivated to approach situations associated with relief as well as to escape events paired with aversive events -The relief experienced following avoidance behavior rewards the response •The amount of relief depends upon the length of time between aversive events -The longer the period of relief after an aversive event, the more readily the animal learns to avoid the aversive event •D'Amato's theory suggests that trace conditioning does not produce an avoidance response because there is no distinctive cue associated with the absence of the UCS -Mowrer's two-factor theory cannot account for the failure of trace conditioning to produce avoidance learning

conditioned stimulus-alone presentations

UCS-Alone Presentations •Hartman and Grant (1960), using light (CS) and air-puff (UCS) pairings, showed that conditioning was strengthened as the percentage of trials that paired the CS with the UCS increased. •CR acquisition is impaired if the UCS is presented frequently without the CS -CS-Alone Presentations: •CR acquisition is also impaired if the CS has been presented frequently without the UCS The Redundancy of the CS •For a cue to elicit a CR, it must -Predict the occurrence of the UCS and -Provide information not signaled by other environmental cues •Blocking: the prevention of the acquisition of a CR to a second stimulus (CS2) when two stimuli are paired with a UCS and conditioning has already occurred to the first stimulus (CS1) •The Importance of Surprise -Leon Kamin suggested that surprise (novelty) is necessary for Pavlovian conditioning to occur. -The occurrence of the UCS must be surprising for the conditioned and unconditioned stimuli to be associated.

Principles of Aversive Conditioning

We usually respond to aversive events by either •making a response that will let us escape from the aversive situation or •making a response that will let us avoid the aversive situation •Escape response: a behavioral response to an aversive event that is reinforced by the termination of the aversive event -Several factors play a role in determining whether an organism learns to escape aversive events -Intensity of the aversive stimulus -Absence of reinforcement -Impact of delayed reinforcement •There are also many factors that affect the efficiency of the escape response •Intensity of the Aversive Event -The more intense the situation, the greater the motivation to escape it -Researchers have identified examples of situations that increase the intensity of the aversive event -Higher cost of helping another individual -Higher likelihood of failure on a task -Higher level of pain (i.e., electric shock) or sensory impact (i.e., noise or light) •The Magnitude of Negative Reinforcement -The likelihood of escape behavior depends on the amount of negative reinforcement •This may be defined as the degree of decrease in the severity of an aversive event -Campbell and Kraeling (1953) exposed rats to a 400-V electric shock in the start box of an alley. Upon reaching the goal box, the shock was reduced to 0, 100, 200, or 300 V. -Campbell and Kraeling reported that the greater the reduction in shock intensity, the faster the rats escaped from the 400-V electric shock. -Other experiments have found that the level of escape behavior directly relates to the level of shock reduction the escape response induces (Bower, Fowler, & Trapold, 1959). -The magnitude of the negative reinforcement's influence on the asymptotic level of escape performance also is evident with cold water as the aversive stimulus (Woods, Davidson, & Peters, 1964). 
•The Impact of Delayed Reinforcement -The longer reinforcement is delayed after an escape response, •The slower the acquisition of the escape behavior and •The lower the final level of escape performance •In some studies, a delay of even 3 s eliminated escape conditioning The Elimination of an Escape Response •An escape response can be terminated by -No longer presenting the aversive event -No longer terminating the aversive stimulus following the escape response •In other words, by removal of negative reinforcement •The Removal of Negative Reinforcement -An escape response is eliminated when an aversive event continues despite the escape response •However, the response will continue for some time until the organism learns that the escape response no longer terminates the aversive event -The strength of the escape response during acquisition affects resistance to extinction •The greater the acquisition training, the slower the extinction of the escape behavior •The Absence of Aversive Events -Elimination of an escape response also occurs when the aversive event is no longer experienced -Nevertheless, the organism exhibits a number of responses even when the aversive event no longer occurs •This may occur because the cues that predict the aversive event are still present •Vicious circle behavior: an escape response that continues despite punishment -It occurs because the organism fails to recognize that withholding the escape response would not lead to punishment

behaviorism

a school of thought that emphasizes the role of experience in governing behavior •Behaviorists believed that the important processes governing behavior are learned •Through our interactions with the environment, we learn both the motives that initiate behavior and the specific behaviors that occur in response to those motives. •A major goal of behaviorism was to determine the laws governing learning. The Importance of Associations •Some behaviorists' ideas can be traced as far back as Aristotle. -Aristotle's concept of associationism (i.e., the association of ideas) provided an important foundation for behaviorist theories. -Association: a connection that develops between two events such that thinking of one automatically brings the other to mind. •Aristotle proposed that in order for associations to develop, the two events must be: -Contiguous (i.e., paired together in time [temporally]) -Similar to each other or opposite each other

conditioned inhibition

a stimulus (CS-) may develop the ability to suppress the response to another stimulus (CS+) when the CS+ is paired with a UCS and the CS- is presented without the UCS -Conditioned inhibition is believed to reflect the ability of the CS- to activate an inhibitory state, which can suppress the CR •External inhibition: the presentation of a novel stimulus during conditioning suppresses the response to the conditioned stimulus -This is a temporary inhibitory state •Inhibition of delay: the suppression of the CR until the end of the CS-UCS interval -The ability to inhibit the response until the end of the CS-UCS interval improves with increased exposure to CS-UCS pairings •Disinhibition: when the conditioned stimulus elicits a conditioned response after a novel stimulus is presented during extinction •A Conditioned Response Without Conditioned Stimulus-Unconditioned Stimulus Pairings -Many stimuli develop the ability to elicit a CR indirectly •That is, a stimulus that is never directly paired with a UCS comes to elicit the CR -There are three common ways for this to happen •Higher-order conditioning •Sensory preconditioning •Vicarious conditioning

flavor aversion learning

avoidance of a flavor or food that precedes an illness experience -Flavor aversion develops rapidly •Often after one pairing of flavor and illness •May occur even if there is a long time interval between the flavor and the illness -The classic flavor aversion experiments were conducted by Garcia et al. (1957) •Although rats generally like saccharin very much, they found that rats would not consume saccharin if illness followed its consumption •The illness did not result from the saccharin but rather from either high doses of irradiation or lithium chloride -Research indicates that almost half of college students surveyed have experienced a taste aversion.

Dickinson, Hall, Mackintosh

conducted a two-phase study. In Phase 1, a CS1 was presented prior to two UCSs, and strong conditioning occurred to CS1. In Phase 2, both CS1 and CS2 were presented prior to either one or two UCSs. Blocking was found when two UCSs followed the compound in Phase 2: because two UCSs had also followed CS1 in Phase 1, the outcome was not surprising. When only one UCS followed the compound in Phase 2, however, the change was surprising; CS2 became associated with the UCS, and blocking was therefore not observed.

Watson

demonstrated the importance of early learning principles to human behavior. •He was influenced by Vladimir Bechterev as well as Pavlov and Thorndike. -Whereas Pavlov used pleasant or positive UCSs to study conditioning processes (i.e., reinforcement-based techniques), Bechterev used aversive or unpleasant stimuli such as shock (i.e., punishment-based techniques). •He believed that abnormal as well as normal behavior is learned -In particular, he believed that fear is acquired through classical conditioning. •Little Albert Study -To study the effect of learning on the development of phobias, Watson and Rayner conditioned a nine-month-old baby boy named Albert to fear a white rat. •UCS: loud noise •CS: white rat -Initially, Albert was not afraid of the rat. Then, each time he reached out for the rat, Watson and Rayner sounded a loud gong behind his back, causing a fear response -Eventually, through this learned aversive association, Albert showed a fear response to the white rat alone -Thus, Watson and Rayner demonstrated that fear-induced emotions, or phobias, could be learned. Despite its ethical problems, this case study was important: it showed that a baby with little life experience could acquire a complex emotion, such as a fear-induced phobia, through associative learning (classical conditioning). •Some behaviorists assumed that the Little Albert study demonstrated that fear could be learned, and learned in a single experience. •However, the study had several limitations -Single subject -Accounts of behavior can be very subjective -No reliable measurement of Albert's responses was recorded -Attempts to replicate the study have been unsuccessful and would be considered unethical today -Therefore, Watson and Rayner's findings should be interpreted with caution.

vicarious conditioning

the development of the CR to a stimulus after observation of the pairing of the CS and UCS. •In other words, if an organism observes a CS-UCS pairing lead to a CR in another individual, the organism can learn the association without having direct personal experience. •Research on Vicarious Conditioning -A stress response to a task can be conditioned vicariously by observing others fail at the task -Mineka and colleagues found that monkeys learn to fear snakes by observing another monkey's fear reaction to a snake •The Importance of Arousal -For vicarious conditioning to occur, we must respond emotionally to the scene •Applications of Pavlovian Conditioning -Two important applications of classical conditioning in therapeutic settings are •Systematic desensitization •Treatments to extinguish drug cravings in substance abusers

sensory preconditioning

the initial pairing of two stimuli, which will enable one of the stimuli (CS2) to elicit the CR without ever being paired with a UCS, provided the other stimulus (CS1) is paired with the UCS •The Sensory Preconditioning Process -Phase 1: two neutral stimuli (CS1 and CS2) are paired -Phase 2: CS1 is paired with the UCS -Phase 3: CS2 can produce the CR even though it was never directly paired with the UCS •Research on Sensory Preconditioning -Research has identified factors that strengthen sensory preconditioning •The CS2 precedes the CS1 by several seconds •Only a few CS2-CS1 pairings should be used to prevent the development of learned irrelevance •The Neuroscience of Higher-Order Conditioning and Sensory Preconditioning -The hippocampus appears to play an important role in the acquisition of a CR through higher-order conditioning and sensory preconditioning. -Hoang et al. (2014) found that hippocampal lesions impaired appetitive higher-order conditioning, while Yu et al. (2014) reported that the hippocampus becomes active during sensory preconditioning. -As the hippocampus is involved in the storage and retrieval of events, it is not surprising that it is crucial to Pavlovian higher-order conditioning and sensory preconditioning.

higher-order conditioning

the phenomenon in which a stimulus (CS2) can elicit a CR even without being paired with the UCS if the CS2 is paired with another conditioned stimulus (CS1) •In the natural environment, many associations are made by higher-order conditioning •The strength of a CR acquired through higher-order conditioning is weaker than that developed through first-order conditioning •Some researchers have had difficulty producing higher-order conditioning in the lab •This may occur because the pairing of CS1 and CS2 without the UCS also represents a conditioned inhibition paradigm •Rescorla and his colleagues found that conditioned excitation develops faster than conditioned inhibition -Thus, with only a few pairings, a CS2 will elicit a CR -As conditioned inhibition develops, CR strength declines until the CS2 can no longer elicit the CR

Dishabituation

the recovery of a habituated response as the result of the presentation of a sensitizing stimulus. •The Dishabituation Process -Dual-process theory states that the arousing effect of the sensitizing stimulus causes the habituated response to return. -In the absence of the sensitizing stimulus, habituation remains. •The Nature of Dishabituation -Pavlov suggested that dishabituation is caused by a reversal of habituation. -In contrast, Grether proposed that dishabituation results from a process similar to sensitization being superimposed on habituation. •Recent research (Steiner & Barry, 2011) suggests that dishabituation is not due to sensitization processes but instead reflects a disruption of the habituation process, favoring Pavlov's view over Grether's. -Habituation and sensitization are nonetheless independent processes. •The independence of habituation and sensitization has adaptive value. -Habituation allows us to ignore unimportant stimuli. -Sensitization keeps us responsive to potentially important stimuli. -Dishabituation reinstates the response in cases where habituated stimuli have become relevant again.

opponent-process theory

the theory that an event produces an initial instinctive affective response, which is followed by an opposite affective response. •The Initial Reaction -All experiences, both biological and psychological, produce an initial affective reaction. •A state: the initial affective reaction to an environmental stimulus in opponent-process theory. •The strength of the A state is influenced by the intensity of the experience. •The more intense the experience, the stronger the A state •The A state arouses a second affective reaction. -B state: the opposite affective response that is elicited by the initial affective reaction in opponent-process theory. -The B state is the opposite of the A state. •Initially, the B state is less intense than the A state. •The B state also intensifies more slowly than the A state. •After an event has terminated, the B state diminishes more slowly than the A state. •The opponent affective response will be experienced only when the event ends. The Intensification of the Opponent B State •Repeated experience with a certain event often increases the strength of the opponent B state. •This reduces the magnitude of the affective reaction experienced during the event. -Thus, the strengthening of the opponent B state may well be responsible for the development of tolerance. •Tolerance: reduced reactivity to an event with repeated experience. •When the event ends, an intense opponent affective response is experienced. -Withdrawal: an increase in the intensity of the affective opponent B state following the termination of an event.
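The time course described here (an A state that tracks the event directly, and a B state that lags at onset and persists after offset) can be illustrated with a toy simulation. This is a sketch under assumed parameters (the update rate and step counts are illustrative, not from the source):

```python
# Toy discrete-time sketch of opponent-process dynamics.
# The A state follows the stimulus immediately; the opponent B state
# slowly chases the A state, so it rises late and decays late.
# Net affect experienced = A - B.

def simulate(on_steps=50, off_steps=50, rate_b=0.05):
    affect = []
    a, b = 0.0, 0.0
    for t in range(on_steps + off_steps):
        stimulus = 1.0 if t < on_steps else 0.0
        a = stimulus              # A state tracks the event directly
        b += rate_b * (a - b)     # B state lags behind the A state
        affect.append(a - b)
    return affect

net = simulate()
print(net[0] > 0)    # onset: A dominates, strong initial affective reaction
print(net[50] < 0)   # offset: only the B state remains (withdrawal-like affect)
```

Both lines print `True`: at onset the net affect is positive because the B state has not yet grown, and at offset the affect swings to the opposite sign because the slowly decaying B state is experienced alone, matching the theory's account of withdrawal.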

punishment

use of an aversive event contingent on the occurrence of an inappropriate behavior •The intent of punishment is to suppress an undesired behavior •If punishment is effective, the frequency, intensity, or both of the inappropriate behavior will decline Punishment is the response-contingent presentation of an aversive event Positive punishment: addition of an aversive event (e.g., spanking) to reduce an undesirable behavior Negative punishment: removal of an appetitive event (e.g., watching TV) to reduce an unwanted behavior Negative punishment is also called omission training With omission training: •Reinforcement is provided when the unwanted response does not occur •The unwanted behavior leads to no reinforcement There are two categories of negative punishment •Response cost: an undesired response results in either the withdrawal of or failure to obtain reinforcement •Time-out from reinforcement (time-out): a period of time during which reinforcement is unavailable The Effectiveness of Punishment •Punishment appears to suppress unwanted behaviors •However, the suppression is often temporary •In some cases, however, punishment permanently suppresses unwanted behaviors The Severity of Punishment •The more severe the punishment, the more likely it is to suppress unwanted behavior •If mild punishment does lead to behavior suppression, it is usually short-lived •The more intense the punishment, the more complete the suppression of behavior The Consistency of Punishment •Punishment must be consistent to suppress behavior •Punishment should be administered with each and every occurrence of the unwanted behavior •This is very difficult to do in real life Delay of Punishment •Punishment must be immediate to suppress behavior •Immediate administration of punishment is often difficult or impossible in real life

Ivan Pavlov

was trained as a physiologist and was studying digestion, using the dog as an animal research model. -He noticed that the dogs started to secrete stomach juices before food was placed into their mouths. -He concluded that the dogs had learned a new behavior. •He suggested that humans and animals have innate or unconditioned reflexes (i.e., reflexes that require no prior training or experience to elicit the behavior). -Unconditioned stimulus: An environmental event that can elicit an instinctive reaction without any experience -Unconditioned response: An innate reaction to an unconditioned stimulus •A conditioned reflex develops when a neutral environmental event occurs along with the unconditioned stimulus. •The neutral stimulus becomes the conditioned stimulus •Conditioned stimulus: A stimulus that comes to elicit a learned response as a result of being paired with an unconditioned stimulus •Conditioned response: A learned reaction to a conditioned stimulus •Demonstrating a learned reflex in animals was important because -It illustrated an animal's ability to learn -It illustrated the mechanism responsible for the learned behavior •Pavlovian conditioning: Pavlov studied the conditioning process extensively and identified procedures that influence learned behaviors. •Generalization: Responding in the same manner to similar stimuli •Extinction: The elimination or suppression of a response that occurs when the conditioned stimulus is repeatedly presented without the unconditioned stimulus.

Thorndike

•A scientist, unlike philosophers such as Locke and Hume •Observed cats trying to escape from an apparatus called a puzzle box -Asserted that cats were not conscious of the behaviors and associations but rather exhibited habits (pairings of specific stimuli with specific responses). -The connection developed because the cat received a reward •His observations led him to develop laws of behavior -The law of effect stated that a response made in the presence of a stimulus, when it leads to a satisfying result, will strengthen the bond between the stimulus and the response -The law of readiness stated that the organism must be motivated to develop an association or to exhibit a previously established habit. -It is noteworthy that, in Thorndike's formulation, the consequence or reward was merely a facilitator that strengthened the stimulus-response relation. •Thorndike left it to future behaviorists to examine the critical role of motivation in the S-R connection. •Thorndike also proposed a second mechanism by which learning can occur. -Associative shifting: when a stimulus that has elicited a response becomes associated with a second stimulus, the second stimulus gradually comes to elicit the response as well. •For example, the chime of your alarm clock (Stimulus 1) prompts you to wake up (response). A new song that you like has just been released, so now you play the radio along with the alarm chime. Soon you wake up to the sound of the radio, even if there is no alarm chime. -Very similar to classical conditioning

William James

•Argued that the major difference between humans and lower animals is in the character of their inborn or instinctive motives -Humans possess a greater range of instincts that guide behavior than do lower animals •These include "social" instincts, which directly enhance our interaction with the environment and our survival •Instincts are both purposeful and directional •James differed from Dewey in that James believed that instincts motivate the behavior of both humans and lower animals Many psychologists opposed the mentalist concept •Mentalist concept: the idea that instincts have purpose and direction Opponents of the mentalist concept argued that internal biochemical forces motivate behavior in all species (today these would be considered neurotransmitters within the brain). •This mechanistic approach was based on developments in physics and chemistry during the second half of the 19th century •The mechanistic view can be summed up: ". . . the living organism is a dynamic system to which the laws of chemistry and physics apply . . ." Ernst Brücke (1874) •Today the mechanistic view would be considered a branch of the behavioral neuroscience perspective

Historical origins of behaviorism

•Behavioral scientists don't always agree on the causes of a specific behavior. -Some argue that instinct determines behavior. -Others argue that experience determines behavior. •Behavioral theory emphasizes the central role of experience in determining behavior (i.e., experience-dependent learning). •However, several early schools of thought contributed to the development of behavioral theory and the school of thought called behaviorism.

systematic desensitization

•Developed by Joseph Wolpe •Used to inhibit fear and suppress phobic behavior •A phobia is an unrealistic fear of an object or situation •SD uses counterconditioning; Wolpe based it on three lines of evidence •The Contribution of Mary Cover Jones -Mary Cover Jones (1924) developed an effective technique to eliminate fears (Peter and the rabbit). She taught Peter to become acclimated to a white rabbit after he was initially afraid of it. -This gradual desensitization procedure, conditioning a positive emotional response to the rabbit in place of the original negative emotional response of fear, is called counterconditioning (i.e., an opponent process). -Jones is credited with first establishing the basis for an effective treatment of human phobic behavior. Original Animal Studies •Sherrington's (1906) statement that an animal can experience only one emotion at a time -Wolpe called this reciprocal inhibition •Mary Cover Jones's report that she eliminated a young boy's fear of rabbits by presenting the rabbit to the boy while the boy was eating •Wolpe's earlier research, which demonstrated that fear inhibits eating behavior in cats -He reasoned that eating, if sufficiently intense, could therefore suppress fear •Applications of Pavlovian Conditioning: Clinical Treatment •Systematic desensitization involves performing deep muscle relaxation techniques while first imagining, then experiencing, anxiety-inducing scenes -Relaxation involves cue-controlled relaxation, a conditioned relaxation response that enables a word cue (e.g., "calm") to elicit relaxation promptly -SD consists of four separate stages: •Construction of the anxiety hierarchy •Relaxation training •Counterconditioning: the pairing of relaxation with the feared stimulus •Assessment of whether the patient can successfully interact with the phobic object -Hierarchies may be either •Thematic: scenes all related to a basic theme •Spatial-temporal: based on phobic behavior in which the intensity of the fear is determined by distance, either physical or temporal -After the hierarchy is constructed, the patient learns to relax in the presence of the imagined stimuli -Counterconditioning continues until the patient can imagine the most aversive scene without becoming anxious Clinical Effectiveness •To assess the effectiveness of SD, the patient must encounter the feared object or situation •SD is very effective and also produces rapid extinction of phobias •A wide range of phobias has been extinguished using SD Extinction of Drug Craving •Conditioned withdrawal response: environmental cues associated with withdrawal produce a conditioned craving and motivation to resume drug use •Conditioned withdrawal reactions can be elicited after months of abstinence •To ensure a permanent cure, the addict must stop "cold turkey" and also extinguish all of the conditioned cues •To increase sustained abstinence, some therapists have used a technique that involves exposing the addict to as many drug-related cues as possible during extinction •Withdrawal responses and drug cravings decreased as a result of exposure to drug-related cues

stimulus narrowing

Stimulus narrowing: the restriction of a response to a limited number of situations Extinction of the Conditioned Response •Extinction Procedure -Extinction of a conditioned response: when the conditioned stimulus no longer elicits the conditioned response because the unconditioned stimulus no longer follows the conditioned stimulus How Rapidly Does a Conditioned Response Extinguish? •Several factors influence the rate of extinction •The Strength of the CR -Hull considered the extinction process to be a mirror image of acquisition •Thus, the stronger the CS-CR bond, the more difficult it is to extinguish the CR -Recent research shows that there is not a perfect correspondence •One reason is that extinction alters the motivation level via omission of the UCS •The Influence of Predictiveness -The more predictive the CS during acquisition, the more rapid the extinction •Thus, there is an inverse relationship between predictiveness and resistance to extinction •The effect is not the result of the number of CS-UCS pairings; it is the predictiveness of the CS •Duration of CS Exposure -As the duration of CS-alone exposure increases, the strength of the CR weakens •This effect is determined by the total duration of CS-alone exposure, not the number of extinction trials

instinctive basis of behavior

•Hierarchical System -Central instinctive system controls the occurrence of a number of potential behaviors. -Energy accumulates in a specific brain center for each major instinct. •Many systems can contribute energy for each instinct. -Once an effective sign stimulus releases energy, the energy flows to lower centers. •Several FAPs might be released but sign stimulus determines the specific FAP that will be exhibited. The Importance of Experience •Conflicting Motives -When two incompatible sign stimuli are encountered, the response may be different from the FAP from either acting alone. •A third instinct system, different from either of the two conflicting systems, is activated. •In some circumstances, experience can modify instinctive systems. -A conditioning experience can alter •Instinctive behavior •Releasing mechanism for the instinctive behavior •Both •The change can be either an increase or decrease in sensitivity to the sign stimulus. •Only the consummatory response at the end of the behavior chain is resistant to modification. •Experience with the stimulus can establish new behaviors or can establish a new releasing stimulus.

spontaneous recovery

•Inhibition: the presentation of the CS without the UCS suppresses CR -Occurs because of the activation of a central inhibitory state that occurs when the CS is presented without the UCS •The initial inhibition is only temporary -As the strength of the inhibitory state diminishes, the ability of the CS to elicit the CR returns •Spontaneous Recovery: the return of the CR when an interval intervenes between extinction and testing without additional CS-UCS pairings -Thus, the return of the CR following extinction is spontaneous recovery Inhibition of a CR can also become permanent •Pavlov called this process conditioned inhibition There are also other types of inhibition •External inhibition •Latent inhibition •Inhibition of delay

British Empiricism

•John Locke, a 17th-century British empiricist philosopher, expanded on Aristotle's ideas -Claimed that there are no innate ideas •All ideas result from experience •Tabula rasa (i.e., the mind begins as a blank slate, written on by life experience) •Distinguished between simple ideas and complex ideas -Simple ideas: ideas based on sensory input -Complex ideas: ideas based on combinations or associations of several simple ideas David Hume •Proposed that three principles of association connect simple ideas to make complex ideas: -Resemblance: How similar the ideas are to each other -Contiguity: How close the ideas are to each other in time and/or space -Cause and effect: The order in which events occur; that is, when one event precedes another •Regarding cause and effect, when two events reliably occur together, with one always preceding the other, we come to believe that A causes B. Note that A may not actually cause B; we infer cause and effect from the contiguity and order of events.

salience of the CS

•Preparedness: an evolutionary predisposition to associate a specific conditioned stimulus and a specific unconditioned stimulus •Contrapreparedness: an inability to associate a specific conditioned stimulus and a specific unconditioned stimulus despite repeated conditioning experiences •Preparedness makes a stimulus more salient -Salient stimuli rapidly become associated with a particular UCS -Salience: the property of a specific stimulus that allows it to become readily associated with a particular UCS •Salience is species dependent The Predictiveness of the CS •The CS must be a reliable predictor of the UCS -When there are multiple CSs, the most reliable predictor of the UCS will be able to elicit the CR •Cue Predictiveness: the consistency with which the CS is experienced with the UCS, which influences the strength of conditioning

the new neural connection causes:

•Stimulus-Substitution Theory •The new neural connection causes: -Exposure to the CS to activate the neural center that processes the CS -Activation of the CS neural center to arouse the UCS neural center -The UCS neural center to activate the response center for the UCR -Which allows the CS to elicit the CR •Because the response was generated by environmental exposure to the CS (rather than the UCS), we refer to it as the CR •The CS becomes a substitute for the UCS and elicits the same response as the UCS In many cases, the CR and UCR are not the same •In fact, in some cases, the CR and UCR seem opposite to each other Conditioning of an Opponent Response •The result is that the CS and UCS elicit opposite, or "opponent," responses •An example of this can be found in studies in which the UCS is morphine •The UCR to morphine is analgesia, a reduced sensitivity to pain •The CR to neutral stimuli paired with morphine administration is hyperalgesia, an increased sensitivity to pain

cue deflation effect

•The Cue Deflation Effect Cue deflation effect: when the extinction of a response to one cue leads to an increased reaction to the other conditioned stimulus Rescorla-Wagner model cannot explain a change in the reaction to the less salient cue when response to the more salient cue is extinguished Within-Compound Association: the association of two stimuli, both paired with a UCS, which leads both to elicit the CR •Results in a single level of conditioning to both stimuli •Any change in the value of one stimulus will have a similar impact on the other stimulus Comparator Theory: the theory that the ability of a particular stimulus to elicit a CR depends on a comparison of the level of conditioning to that stimulus and to other stimuli paired with the UCS Extinguishing the response to one conditioned stimulus (the deflated stimulus) changes the value of K to a second conditioned stimulus, which serves to increase the associative strength of the second conditioned stimulus Nicholas Mackintosh (1975) suggested that animals seek information from the environment that predicts the occurrence of biologically significant events (UCSs). Once an animal has identified a cue that reliably predicts a specific event, it ignores other stimuli that also provide information about the event. Animals attend to stimuli that are predictive and ignore stimuli that are irrelevant. Thus, conditioning depends not only on the physical characteristics of stimuli but also on the animal's recognition of the correlation (or lack of correlation) between events (CS and UCS).

conditioning of hunger

•The Motivational Properties of Conditioned Hunger -UCS: taste and smell of food -UCR: internal physiological changes that prepare us to digest and metabolize food •For example, secretion of saliva, gastric juices, insulin •Important: insulin lowers blood sugar, which stimulates hunger, which motivates eating -CS: Kitchen, refrigerator, sight of food -CR: hunger as a conditioned response -Note that fast food restaurants (and other chain restaurants) have uniform appearance and uniform menu items. -This is ideal for specific food contexts to become conditioned stimuli •The Neuroscience of Conditioned Hunger -Basolateral region of the amygdala seems responsible for eliciting feeding behavior via conditioned stimuli •In humans, amygdala is activated in satiated subjects while viewing names of preferred foods •Damage to basolateral amygdala prevents conditioning of feeding in satiated rats when damage occurs prior to conditioning •Damage to basolateral amygdala abolishes conditioning of feeding in satiated rats when damage occurs after conditioning

evaluation of rescorla-wagner model

Evaluation of the Rescorla-Wagner Model •Support for the model has been inconsistent •The UCS preexposure effect supports the model •Potentiation effects, CS preexposure effects, and cue deflation effects do not support the model •The Unconditioned Stimulus Preexposure Effect -UCS preexposure effect: exposure to the UCS prior to conditioning impairs later conditioning when a CS is paired with that UCS •The Rescorla-Wagner model explains this: -The presentation of the UCS without the CS occurs in a specific environment or context, which results in the development of associative strength to the context •Since the UCS can only support a limited amount of associative strength, conditioning of associative strength to the context reduces the level of possible conditioning to the CS •Thus, the presence of the stimulus context will block the acquisition of a CR to the CS when the CS is presented with the UCS in that context •The UCS preexposure effect is attenuated when the preexposure context is different from the conditioning context •Context blocking: the idea that conditioning to the context can prevent acquisition of a conditioned response to a stimulus paired with an unconditioned stimulus in that context •The Rescorla-Wagner model also predicts that when a salient and a nonsalient cue are presented together with the UCS, the salient cue will accrue more associative strength than the nonsalient cue

Conditioning Techniques

•There are several techniques used to investigate the conditioning process •They include -Eyeblink conditioning -Fear conditioning -Flavor aversion •Eyeblink Conditioning -Procedure: •CS: tone •UCS: puff of air •CR: blink •UCR: blink -Eyeblink conditioning is conducted with rabbits because they rarely blink •They have a third eyelid called a nictitating membrane •This membrane responds to air movement, which causes the eye to close. -Eyeblink conditioning is used to investigate: •Classical conditioning parameters •Brain mechanisms -Noteworthy characteristics of eyeblink conditioning: •UCR and CR differ -UCS elicits a rapid eyeblink response -CS produces a slow, gradual closure of the eye -Eyeblink conditioning is slow •It may take up to 100 CS-UCS pairings to produce responding on 50% of trials •Fear Conditioning -Fear can be measured in several ways •Escape from the stimulus •Avoidance of the stimulus •Conditioned emotional response: the ability of a CS to elicit emotional reactions as a result of the association of the CS with a painful event •The emotional reaction may take the form of freezing which would suppress ongoing operant behavior -Conditioned emotional response •Animal must first learn to bar press for reward •Following operant training, a neutral stimulus (CS) is paired with an aversive stimulus (UCS) •The CS is presented during operant responding •If fear has been conditioned, animal will reduce or stop responding when CS is present. -The level of fear is assessed with a suppression ratio •Suppression ratio: a measure of the amount of fear produced by a specific conditioned stimulus •Suppression ratio = CS responding/(CS responding + Pre-CS responding) -Interpretation of suppression ratio •Values can range from 0 to 0.5 •0.5 means that fear conditioning has not occurred •0 indicates total conditioning •Most scores will fall somewhere in between
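The suppression ratio defined above is simple to compute directly. The response counts in this sketch are hypothetical values chosen to show the two ends of the scale:

```python
def suppression_ratio(cs_responses, pre_cs_responses):
    """Suppression ratio = CS responding / (CS responding + pre-CS responding).

    0.0 -> complete suppression of responding (strong fear conditioning)
    0.5 -> responding unchanged by the CS (no fear conditioning)
    """
    return cs_responses / (cs_responses + pre_cs_responses)

# Hypothetical bar-press counts during the CS vs. the pre-CS period:
print(suppression_ratio(5, 45))   # strong fear: 5 / 50 = 0.1
print(suppression_ratio(25, 25))  # no fear: 25 / 50 = 0.5
```

Note that a lower ratio means more fear: the animal stops bar pressing while the CS is on, so CS responding shrinks relative to the pre-CS baseline.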

