Psychology Chapter 5: Learning

LO 12 Name the schedules of reinforcement and give examples of each. (p. 182)

In a fixed-ratio schedule, reinforcement follows a pre-set number of desired responses or behaviors. In a variable-ratio schedule, reinforcement follows a certain number of desired responses or behaviors, but the number changes across trials (fluctuating around a precalculated average). In a fixed-interval schedule, reinforcement comes after a preestablished interval of time; the response or behavior is only reinforced after the given interval passes. In a variable-interval schedule, reinforcement comes after an interval of time passes, but the length of the interval changes from trial to trial.
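Because each schedule is just a rule for deciding when a reinforcer is delivered, the ratio schedules can be sketched as short functions. The Python sketch below is not from the textbook; the function names and the pigeon-style numbers are illustrative assumptions, and it models only the two ratio schedules (the interval schedules would track elapsed time instead of response counts).

```python
import random

def fixed_ratio_schedule(n_responses=20, ratio=5):
    """Reinforce every `ratio`-th response (e.g., one pellet per 5 pecks)."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

def variable_ratio_schedule(n_responses=20, average_ratio=5):
    """Reinforce after a response count that fluctuates around `average_ratio`."""
    outcomes = []
    needed = random.randint(1, 2 * average_ratio - 1)  # responses required this round
    count = 0
    for _ in range(n_responses):
        count += 1
        if count >= needed:
            outcomes.append(True)                       # reinforcer delivered
            count = 0
            needed = random.randint(1, 2 * average_ratio - 1)
        else:
            outcomes.append(False)                      # no reinforcer yet
    return outcomes

print(fixed_ratio_schedule())     # True on exactly every 5th response
print(variable_ratio_schedule())  # True at unpredictable points, about every 5th on average
```

Running the sketch makes the unpredictability of the variable-ratio rule visible, which is consistent with the text's point that variable-ratio schedules produce behaviors that are difficult to extinguish.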

Office Pranks: Classical Conditioning

Jim plays a classical conditioning trick on his coworker Dwight. Every time Jim's computer makes the reboot sound, Jim hands Dwight an Altoids breath mint. After several pairings of the reboot sound and the mint, Dwight automatically reaches out his hand in anticipation of the mint. "What are you doing?" asks Jim. Looking confused, Dwight replies, "My mouth tastes so bad all of the sudden" (Williams & Whittingham, 2007, 0:53-1:03).

shaping

Process by which a person observes the behaviors of another organism, providing reinforcers if the organism performs at a required level.

punishment

The application of a consequence that decreases the likelihood of a behavior recurring.

models

The individual or character whose behavior is being imitated.

negative reinforcement

The removal of an unpleasant stimulus following a target behavior, which increases the likelihood of it occurring again.

Reinforcement

In operant conditioning, any event that strengthens the behavior it follows; the process of increasing the frequency of behaviors through consequences.

Russian physiologist Ivan Pavlov spent the 1890s studying the digestive system of dogs at Russia's Imperial Institute of Experimental Medicine, where about 100 people worked in his laboratory (Todes, 2014; Watson, 1968). Many of his early experiments involved measuring how much dogs salivate in response to food. At first, the dogs salivated as expected, but as the experiment progressed, they began salivating at the mere sight or sound of the lab assistant arriving to feed them. Pavlov realized that the dogs' "psyche," or personality, and their "thoughts about food" were interfering with the collection of objective data on their digestion (Todes, 2014, p. 158). In other words, some unobservable activities were affecting their physiology, making it difficult for the researchers to study digestion as an isolated phenomenon. The dog was associating the sound of footsteps or the sight of a bowl with the arrival of food; it had linked certain sights and sounds with eating. Intrigued by his discovery, Pavlov shifted the focus of his research, investigating the dogs' salivation in these types of scenarios.

A dog naturally begins to salivate when exposed to the smell of food, even before tasting it. This is an involuntary response of the autonomic nervous system, which we explored in Chapter 2. Dogs do not normally salivate at the sound of footsteps, however. This response is a learned behavior, as the dog salivates without tasting or smelling food.

conditioned taste aversion

A form of classical conditioning that occurs when an organism learns to associate the taste of a particular food or drink with illness.

cognitive maps

A mental representation of physical space.

successive approximations

A method that uses reinforcers to condition a series of small steps that gradually approach the target behavior.

conditioned stimulus (CS)

A previously neutral stimulus that an organism learns to associate with an unconditioned stimulus.

unconditioned response (UR)

A reflexive, involuntary response to an unconditioned stimulus.

primary reinforcer

A reinforcer that satisfies a biological need; an innate reinforcer.

variable-interval schedule

A schedule in which a behavior is reinforced after an interval of time, but the length of the interval changes from trial to trial.

variable-ratio schedule

A schedule in which the number of desired behaviors that must occur before a reinforcer is given changes across trials and is based on an average number of behaviors to be reinforced.

fixed-interval schedule

A schedule in which the reinforcer comes after a preestablished interval of time; the behavior is only reinforced after the given interval is over.

fixed-ratio schedule

A schedule in which the subject must exhibit a predetermined number of desired behaviors before a reinforcer is given.

partial reinforcement

A schedule of reinforcement in which target behaviors are reinforced intermittently, not continuously.

unconditioned stimulus (US)

A stimulus that automatically triggers an involuntary response without any learning needed.

Secondary reinforcers do not satisfy biological needs, but often derive their power from their connection with primary reinforcers.

Although money is not a primary reinforcer, we know from experience that it gives us access to primary reinforcers, such as food and a safe place to live. Thus, money is a secondary reinforcer. Good grades might also be considered secondary reinforcers, because doing well in school leads to job opportunities, which provide money to pay for food and other necessities.

Conditioned Emotional Response (CER)

An emotional reaction acquired through classical conditioning; process by which an emotional reaction becomes associated with a previously neutral stimulus.

LO 5 Summarize how classical conditioning is dependent on the biology of the organism. (p. 171)

Animals and people show biological preparedness, meaning they are predisposed to learn associations that have adaptive value. For example, a conditioned taste aversion is a form of classical conditioning that occurs when an organism learns to associate the taste of a particular food or drink with illness. Avoiding foods that induce sickness increases the odds the organism will survive and reproduce, passing its genes along to the next generation.

Have you ever experienced food poisoning? After falling ill from something, whether it was uncooked chicken or unrefrigerated mayonnaise, you probably steered clear of that particular food for a while. This is an example of conditioned taste aversion, a powerful form of classical conditioning that occurs when an organism learns to associate a specific food or drink with illness. Often, it only takes a single pairing between a food and a bad feeling—that is, one-trial learning—for an organism to change its behavior. Imagine a grizzly bear that avoids poisonous berries after vomiting from eating them. In this case, the unconditioned stimulus (US) is the poison in the berries; the unconditioned response (UR) is the vomiting caused by the poison. After acquisition, the conditioned stimulus (CS) would be the sight or smell of the berries, and the conditioned response (CR) would be a nauseous feeling. The bear would likely steer clear of the berries in the future.

Avoiding foods that induce sickness has adaptive value, meaning it helps organisms survive, upping the odds they will reproduce and pass their genes along to the next generation. According to the evolutionary perspective, humans and other animals have a powerful drive to ensure that they and their offspring reach reproductive age, so it's critical to steer clear of tastes that have been associated with illness.

LO 10 Explain shaping and the method of successive approximations. (p. 179)

Building on Thorndike's law of effect and Watson's approach to research, Skinner used shaping through successive approximations (small steps leading to a desired behavior) with pigeons and other animals. With shaping, a person observes the behaviors of animals, providing reinforcers when they perform at a required level. Animal behavior can be shaped by forces in the environment, but instinct may interfere with the process. This instinctive drift is the tendency for instinct to undermine conditioned behaviors.

What would happen if your friend heard a similar-sounding notification coming from someone else's phone? If he salivates, he is displaying stimulus generalization, which is the tendency for stimuli similar to the conditioned stimulus (CS) to elicit the conditioned response (CR). Once an association is forged between a conditioned stimulus (CS) and a conditioned response (CR), the learner often responds to similar stimuli as if they were the original CS. When Pavlov's dogs learned to salivate in response to a metronome ticking at 90 beats per minute, they also salivated when the metronome ticked a little faster (100 beats per minute) or slower (80 beats per minute; Hothersall, 2004). Their response was generalized to metronome speeds ranging from 80 to 100 beats per minute.

If a new stimulus is significantly different from the conditioned stimulus (CS), stimulus generalization may not occur. Suppose Pavlov's dogs learned to salivate in response to a high-pitched sound. If these dogs were exposed to lower-pitched sounds, they may not salivate. If so, they would be demonstrating stimulus discrimination, the ability to distinguish between a particular conditioned stimulus (CS) and other stimuli sufficiently different from it. Getting back to your cell-phone prank, if your friend does not salivate in response to a notification sound from someone else's phone, he is displaying stimulus discrimination.

LO 3 Identify the differences between the US, UR, CS, and CR. (p. 166)

In classical conditioning, a neutral stimulus (NS) is something in the environment that does not normally cause a relevant automatic response. This neutral stimulus (NS) is repeatedly paired with an unconditioned stimulus (US) that triggers an unconditioned response (UR). During this process of acquisition, the neutral stimulus (NS) becomes a conditioned stimulus (CS) that elicits a conditioned response (CR). In Pavlov's experiment with dogs, the neutral stimulus (NS) might have been the sound of a buzzer; the unconditioned stimulus (US) was the meat; and the unconditioned response (UR) was the dog's salivation. After repeated pairings with the meat, the buzzer (originally a neutral stimulus) became a conditioned stimulus (CS), eliciting the conditioned response (CR) of salivation, a learned behavior.

LO 13 Explain how punishment differs from reinforcement. (p. 188)

In contrast to reinforcement, which makes a behavior more likely to recur, the goal of punishment is to decrease a behavior. Punishment decreases a behavior by instilling an association between a behavior and some unwanted consequence (for example, between stealing and going to jail, or between misbehaving and loss of screen time). Negative reinforcement strengthens a behavior that it follows by removing something aversive or disagreeable.

positive reinforcement

Increasing behaviors by presenting positive stimuli, such as food. A positive reinforcer is any stimulus that, when presented after a response, strengthens the response. The process by which reinforcers are added or presented following a target behavior, increasing the likelihood of it occurring again.

LO 15 Describe latent learning and explain how cognition is involved in learning. (p. 194)

Latent learning occurs without awareness and regardless of reinforcement. Edward Tolman showed that rats could learn to navigate mazes even when given no reinforcement. Their learning only became apparent when it was needed. Latent learning is evident in our ability to form cognitive maps, or mental representations of our physical surroundings. Studies on latent learning and cognitive maps focus on the cognitive processes underlying behavior.

LO 1 Define learning. (p. 163)

Learning is a relatively enduring change in behavior or thinking that results from experiences. Organisms as simple as fruit flies and as complex as humans have the ability to learn. Learning is about creating associations. Sometimes we associate two different stimuli (classical conditioning). Other times we make connections between our behaviors and their consequences (operant conditioning). We can also learn by watching and imitating others (observational learning), creating a link between our behavior and the behavior of others.

classical conditioning

Learning process in which two stimuli become associated with each other; when an originally neutral stimulus is conditioned to elicit an involuntary response.

operant conditioning

Learning that occurs when voluntary actions become associated with their consequences.

latent learning

Learning that occurs without awareness and regardless of reinforcement, and is not evident until needed.

Pavlov's experimental setup

Many of Pavlov's studies had the same basic format (INFOGRAPHIC 5.1). Prior to the experiment, the dog had a tube surgically inserted into its cheek so researchers could determine exactly how much saliva it was producing. Once the dog had recovered from the surgery, it was placed alone in a soundproof room and outfitted with equipment to keep it from moving around. Because Pavlov was interested in exploring the link between a stimulus and the dog's response, he had to pick a stimulus that was more controlled than the sound of someone walking into a room. Pavlov used a variety of stimuli, including flashing lights and sounds produced by metronomes and buzzers, which normally have nothing to do with food or salivation. In other words, they are neutral stimuli in relation to feeding and responses to food. On numerous occasions during an experimental trial, Pavlov and his assistants presented a dog with a stimulus—the sound of a buzzer, for instance—and then moments later gave the dog a piece of meat. Each time the buzzer was sounded, the assistant would wait a couple of seconds and then give the dog meat. All the while, the dog's saliva was being measured, drop by drop. After repeated pairings, the dog began to link the buzzer with the meat. It would salivate in response to the sound alone, with no meat present, evidence that learning had occurred. The dog had learned to associate the buzzer with food.

The classic case study of "Little Albert," conducted by John B. Watson (1878-1958) and Rosalie Rayner (1898-1935), provides a famous illustration of conditioned emotional response (Watson & Rayner, 1920). Little Albert was around 9 months old when first tested by Watson and Rayner (Griggs, 2015d; Powell, Digdon, Harris, & Smithson, 2014). Initially, he had no fear of rats; in fact, he was rather intrigued by the white critters and sometimes reached out to touch them. But all this changed when Albert was about 11 months old; that's when the researchers began banging a hammer against a steel bar whenever he reached for the rat (Harris, 1979). Each time the researchers paired the loud noise (an unconditioned stimulus) and the appearance of the rat (a neutral stimulus), Albert responded in fear (the unconditioned response). After only seven pairings, he began to fear rats and generalized this fear to other furry objects, including a sealskin coat and a rabbit (Harris, 1979). The sight of the rat went from being a neutral stimulus (NS) to a conditioned stimulus (CS), and Albert's fear of the rat was a conditioned response (CR).

Nobody knows exactly what happened to Little Albert after he participated in Watson and Rayner's research. Some psychologists believe Little Albert's true identity is still unknown (Powell, 2010; Reese, 2010). Others have proposed Little Albert was Douglas Merritte, who had a neurological condition called hydrocephalus and died at age 6 (Beck & Irons, 2011; Beck, Levinson, & Irons, 2009; Fridlund, Beck, Goldie, & Irons, 2012). Still others suggest Little Albert was a healthy baby named William Albert Barger, who lived until 2007 (Bartlett, 2014; Digdon, Powell, & Harris, 2014; Powell et al., 2014). We may never know the true identity of Little Albert or the long-term effects of his conditioning through this unethical study. Watson and Rayner (1920) discussed how they might have reduced Little Albert's fear of rats (for example, giving him candy while presenting the rat), but they were never able to provide him with such treatment (Griggs, 2014a). The Little Albert study would never happen today. Contemporary psychologists conduct research according to stringent ethical guidelines, and instilling terror in a baby would not be accepted or allowed at research institutions.

positive punishment

The addition of something unpleasant following an unwanted behavior, with the intention of decreasing that behavior.

At the start of a trial, before a dog is conditioned or has learned anything about the neutral stimulus (NS), it salivates when it smells or receives food, in this case meat. The meat is considered an unconditioned stimulus (US) because it triggers an automatic response. Salivating in response to food is an unconditioned response (UR) because it doesn't require any learning; the dog just does it involuntarily. To reiterate, the smell or taste of meat (US) elicits the automatic response of salivation (the UR). After conditioning has occurred, the dog responds to the buzzer almost as if it were food. The buzzer, previously a neutral stimulus (NS), has now become a conditioned stimulus (CS) because it prompts the dog to salivate. When salivation occurs in response to the buzzer, it is a learned behavior; we call it a conditioned response (CR). When trying to figure out the proper label for the response, ask yourself what caused it: Was it the food or the buzzer? Knowing this will help you determine whether it is conditioned (learned) or unconditioned (not learned).

Now that you have a general understanding of classical conditioning, let's have some fun with it. Imagine you wanted to play a Pavlovian prank on an unsuspecting friend. All you would need is a phone and some Sour Patch Kids (sour candy). Personalize your alert tone with a unique sound (twinkle). Then, every time you get a notification ("twinkle, twinkle!"), hand your friend a Sour Patch Kid. The twinkle sound is initially a neutral stimulus (NS), and the candy is an unconditioned stimulus (US), because it causes salivation (an unconditioned response). With repeated pairings of the "twinkle, twinkle!" and the candy, your friend will begin to associate the neutral stimulus (twinkle sound) and the unconditioned stimulus (candy). After this conditioning has occurred, the twinkle sound becomes a conditioned stimulus (CS) that has the power to produce a conditioned response (salivation)—and your friend may wonder why your phone is making his mouth water!

LO 14 Summarize what Bandura's classic Bobo doll study teaches us about learning. (p. 191)

Observational learning can occur when we watch a model demonstrate a behavior. Albert Bandura's classic Bobo doll experiment showed that children readily imitate aggression when they see it modeled by adults. Studies suggest that children may be inclined to mimic aggressive behaviors seen in TV shows, movies, video games, and on the Internet. Observation of prosocial behaviors, on the other hand, can encourage kindness, generosity, and other positive behaviors.

LO 4 Recognize and give examples of stimulus generalization and stimulus discrimination. (p. 167)

Once conditioning has occurred, and the conditioned stimulus (CS) elicits the conditioned response (CR), the learner may respond to similar stimuli as if they were the original CS. This is called stimulus generalization. For example, someone who has been bitten by a small dog and reacts with fear to all dogs, big and small, demonstrates stimulus generalization. Stimulus discrimination is the ability to differentiate between a particular conditioned stimulus (CS) and other stimuli sufficiently different from it. Someone who was bitten by a small dog may be afraid of small dogs, but not large dogs, thus demonstrating stimulus discrimination.

LO 7 Describe Thorndike's law of effect: Thorndike and His Cats

One of the first scientists to objectively study how consequences affect behavior was American psychologist Edward Thorndike (1874-1949). Thorndike's early research focused on chicks and other animals, which he sometimes kept in his apartment (Hothersall, 2004). The research with chicks was only a starting point, as Thorndike's most famous studies involved cats. One of his experimental setups involved putting a cat in a latched cage called a "puzzle box" and planting enticing pieces of fish outside the door. When first placed in the box, the cat would scratch and paw around randomly, but after a while, just by chance, it would pop the latch, causing the door to release. The cat would then escape the cage to devour the fish (Figure 5.2). The next time the cat was put in the box, it would repeat this random activity, scratching and pawing with no particular direction. And again, just by chance, the cat would pop the door latch and escape to eat the fish. Each time the cat was returned to the box, the number of random activities decreased until eventually it was able to break free almost immediately (Thorndike, 1898).

The cat's behavior, Thorndike reasoned, could be explained by the law of effect, which says that a behavior (opening the latch) is more likely to happen again when followed by a pleasurable outcome (delicious fish). Behaviors that lead to pleasurable results will be repeated, while behaviors that don't lead to pleasurable results (or are followed by something unpleasant) will not be repeated. The law of effect applies broadly, not just to cats. When was the last time your behavior changed as a result of a pleasurable outcome?

Most contemporary psychologists would call the pieces of fish in Thorndike's experiments reinforcers, because these treats increased the likelihood that the preceding behavior (escaping the cage) would occur again. Reinforcers are events or stimuli that follow behaviors, and they increase the chances of those behaviors being repeated. Examples of reinforcers that might impact human behavior include praise, hugs, good grades, enjoyable food, and attention. Through the process of reinforcement, target behaviors become more frequent. A dog given a treat for sitting is more likely to obey the "sit" command in the future. An Instagram user who is reinforced with a lot of "likes" is more inclined to continue posting photos and videos.

LO 8 Explain how positive and negative reinforcement differ. (p. 177)

Positive reinforcement occurs when target behaviors are followed by rewards and other reinforcers. The addition of reinforcers (typically pleasant stimuli) increases the likelihood of the behavior recurring. Behaviors can also increase in response to negative reinforcement, or the removal of something unpleasant immediately following the behavior. Both positive and negative reinforcement increase desired behaviors.

LO 9 Distinguish between primary and secondary reinforcers. (p. 179)

Primary reinforcers satisfy biological needs. Food, water, and physical contact are considered primary reinforcers. Secondary reinforcers do not satisfy biological needs, but often derive their power from their connection with primary reinforcers. Money is an example of a secondary reinforcer; we know from experience that it gives us access to primary reinforcers, such as food.

LO 11 Describe continuous reinforcement and partial reinforcement. (p. 181)

Reinforcers can be delivered on a constant basis (continuous reinforcement) or intermittently (partial reinforcement). Continuous reinforcement is generally more effective for establishing a behavior. However, behaviors learned through partial reinforcement are generally more resistant to extinction (the partial reinforcement effect).

secondary reinforcer

Reinforcers that do not satisfy biological needs but often gain power through their association with primary reinforcers.

Skinner identified various ways to administer partial reinforcement, or partial reinforcement schedules.

Skinner used four schedules of partial reinforcement: fixed-ratio, variable-ratio, fixed-interval, and variable-interval.

Once the dogs associate the buzzer sound with meat, can they ever listen to the sound without salivating?

The answer is yes—if they are repeatedly exposed to the buzzer without the meat to follow. Present the conditioned stimulus (CS) in the absence of the unconditioned stimulus (US), over and over, and the conditioned response (CR) decreases and eventually disappears in a process called extinction. In general, if dogs are repeatedly exposed to a conditioned stimulus (for example, a metronome or buzzer) without any tasty treats to follow, they produce progressively less saliva in response to the stimulus and, eventually, none at all (Watson, 1968).

LO 6 Describe the Little Albert study and explain how fear can be learned. (p. 173)

The case study of Little Albert illustrates a conditioned emotional response (fear, in Little Albert's case) acquired via classical conditioning. When Little Albert heard a loud bang (an unconditioned stimulus), he responded in fear (an unconditioned response). Through conditioning, the sight of a rat became paired with the loud noise and went from being a neutral stimulus to a conditioned stimulus (CS). Little Albert's fear of the rat was a conditioned response (CR).

adaptive value

The degree to which a trait or behavior helps an organism survive.

LO 2 Explain what Pavlov's studies teach us about classical conditioning. (p. 165)

The dogs in Pavlov's studies learned to link food to various stimuli, such as flashing lights and buzzer sounds, that normally have nothing to do with food or salivation. Once such a link was formed, the dogs would salivate in response to the stimulus alone, even with no food present. An originally neutral stimulus (a buzzer sound, for example) triggered an involuntary response (salivation). We call this type of learning classical conditioning.

acquisition

The initial learning phase in both classical and operant conditioning.

negative punishment

The removal of something desirable following an unwanted behavior, with the intention of decreasing that behavior.

biological preparedness

The tendency for animals to be predisposed or inclined to form certain kinds of associations through classical conditioning.

partial reinforcement effect

The tendency for behaviors acquired through intermittent reinforcement to be more resistant to extinction than those acquired through continuous reinforcement.

stimulus generalization

The tendency for stimuli similar to the conditioned stimulus to elicit the conditioned response.

In Chapter 3, we discussed sensory adaptation, which is the tendency to become less aware of constant stimuli. Becoming habituated to sensory input keeps us alert to changes in the environment.

This chapter focuses on three major types of learning: classical conditioning, operant conditioning, and observational learning. As you make your way through each section, you will begin to realize that learning is very much about creating associations. Through classical conditioning, we associate two different stimuli: for example, the sound of a buzzer and the arrival of food. In operant conditioning, we make connections between our behaviors and their consequences: for example, through rewards and punishments. With observational learning, we learn by watching and imitating other people, establishing a link between our behavior and the behavior of others. Learning allows us to grow and change, and it is critical for achieving our goals.

LO 7 Describe Thorndike's law of effect. (p. 175)

Thorndike's law of effect states that a behavior is more likely to reoccur when followed by a pleasurable outcome. For the cats in Thorndike's puzzle boxes, the behavior was breaking free and the pleasurable outcome was a piece of fish waiting outside the door. Over time, the cats escaped faster and faster, until eventually they were breaking free almost immediately. The pieces of fish in Thorndike's experiments served as reinforcers, because they increased the frequency of the preceding behavior (escaping the box). Reinforcers are a key component of operant conditioning, a type of learning wherein people or animals come to associate their voluntary actions with consequences.

higher-order conditioning

With repeated pairings of a conditioned stimulus and a second neutral stimulus, that second neutral stimulus becomes a conditioned stimulus as well.

Law of Effect (Thorndike)

A behavior followed by a reward or satisfying consequence is strengthened and more likely to be repeated; Thorndike's principle stating that behaviors are more likely to be repeated when followed by pleasurable outcomes, and less likely to be repeated when followed by unpleasant outcomes.

neutral stimulus (NS)

A stimulus that elicits no response before conditioning. A neutral stimulus doesn't trigger any particular response at first, but when used together with an unconditioned stimulus, it can effectively stimulate learning. A good example of a neutral stimulus is a sound or a song. When it is initially presented, the neutral stimulus has no effect on behavior.

operant conditioning

A type of learning whereby people or animals come to associate their voluntary actions with consequences. B. F. Skinner coined the term operant conditioning, and its meaning is fairly simple. The term operant "emphasizes the fact that the behavior operates on the environment to generate consequences" and in "operant conditioning we 'strengthen' an operant in the sense of making a response more probable . . . or more frequent."

One of the most basic forms of learning is called habituation. Habituation occurs when

an organism reduces its response to a recurring stimulus (an event or object that generally leads to a change in behavior). Initially, an animal might respond to a stimulus, but that response may diminish with repeated exposures (assuming the stimulus is not threatening). Essentially, an organism learns about a stimulus and becomes less responsive to it. This type of learning is apparent in a wide range of living beings, from humans to sea slugs to baby chickens. When 3-day-old chicks are exposed to a loud sound, they automatically freeze. But if the loud sound is repeated, even just 5 times, the newborn chicks become habituated to it and carry on with what they were doing.

Habituation

An organism's decreasing response to a stimulus with repeated exposure to it; a basic form of learning evident when an organism does not respond as strongly or as often to an event following multiple exposures to it.

Stimulus

Any event or situation that evokes a response; an event or object that generally leads to a response.

Reinforcers

are events or stimuli that follow behaviors, and they increase the chances of those behaviors being repeated. Examples of reinforcers that might impact human behavior include praise, hugs, good grades, enjoyable food, and attention.

Any time a new, nonuniversal link between a stimulus and a response is established (footsteps and salivation), a type of learning called ___________ has occurred.

conditioning

According to the evolutionary perspective

humans and other animals have a powerful drive to ensure that they and their offspring reach reproductive age, so it's critical to steer clear of tastes that have been associated with illness.

observational learning

Learning by observing others; learning that occurs as a result of watching the behavior of others.

Before the experiment begins, the sound of the buzzer is a

neutral stimulus (NS)—something in the environment that does not normally cause a relevant automatic response.

Reinforcers are a key component of

operant conditioning

Continuous reinforcement is ideal for establishing new behaviors during the acquisition phase, but delivering reinforcers intermittently generally works better for maintaining behaviors. We call this approach

partial reinforcement.

prosocial behavior

Positive, constructive, helpful behavior; actions that are kind, generous, and beneficial to others.

continuous reinforcement

Reinforcing the desired response every time it occurs.

The reappearance of the conditioned response (CR) following its extinction is called

spontaneous recovery.

stimulus discrimination

The ability to distinguish between a particular conditioned stimulus (CS) and other stimuli sufficiently different from it.

Extinction

The diminishing of a conditioned response; occurs in classical conditioning when an unconditioned stimulus (US) does not follow a conditioned stimulus (CS); occurs in operant conditioning when a response is no longer reinforced. In classical conditioning, the process by which the conditioned response decreases after repeated exposure to the conditioned stimulus in the absence of the unconditioned stimulus; in operant conditioning, the disappearance of a learned behavior through the removal of its reinforcer.

Learning

The process of acquiring new and relatively enduring information or behaviors; a relatively enduring change in behavior or thinking that results from experiences.

If reinforcement is delivered in a fixed-ratio schedule,

the subject must exhibit a preset number of desired responses or behaviors before a reinforcer is given. A pigeon in a Skinner box, for example, must peck a spot five times in order to score a delicious pellet. Generally, the fixed-ratio schedule produces a high response rate, but with a characteristic dip immediately following the reinforcement. (The pigeons rest briefly after being reinforced.) Some instructors use the fixed-ratio schedule to reinforce attendance. For example, treats are provided when all students show up on time for three classes in a row.

In a variable-ratio schedule,

the subject must exhibit a specific number of desired responses or behaviors before a reinforcer is given, but the number changes across trials (fluctuating around a precalculated average). If the goal is to train a pigeon to peck a spot on a target, a variable-ratio schedule could be used as follows: Trial 1, the pigeon gets a pellet after pecking the spot twice; Trial 2, the pigeon gets a pellet after pecking the spot once; Trial 3, the pigeon gets a pellet after pecking the spot three times; and so on. Here's another example: To encourage on-time attendance, an instructor provides treats after several classes in a row, but students don't know if it will be on the third class, the second class, or the fifth class. This variable-ratio schedule tends to produce a high response rate. And because of its unpredictability, this schedule tends to establish behaviors that are difficult to extinguish.

instinctive drift

The tendency for instinct to undermine conditioned behaviors; the tendency for animals to revert to instinctual behaviors after a behavior pattern has been learned.

