Chapter 4, Part 2

Pavlov is to classical conditioning as _____ is to operant conditioning. A) Skinner. B) Freud. C) Watson. D) Bandura.

A Pavlov is associated with classical conditioning; B.F. Skinner, who coined the term, is associated with operant conditioning. Freud is most famous for his psychoanalysis, and Bandura is well known for his social learning theory (pg. 117).

In initial training for the acquisition of a response, the most efficient schedule of reinforcement is _____ reinforcement. A) partial. B) fixed-interval. C) variable-ratio. D) continuous.

D In initial training, continuous reinforcement is most efficient, but once trained, an animal will continue to perform for partial reinforcement. The different schedules produce different response rates in ways that make sense if we assume that the person or animal is striving to maximize the number of reinforcers and minimize the number of unreinforced responses (pg. 119-120).

Which statement does NOT distinguish an animal's play from serious, non-play behavior? A) Young animals play most at those skills that they most need to learn. B) Play involves a great deal of repetition, whereas non-play behavior does not. C) In play, the drive state appropriate to the corresponding non-play behavior is absent. D) Animals do not learn as much from play as they do from non-play behavior.

D Much of what we know about play in animals makes sense in light of Groos's theory and thereby provides evidence supporting that theory. Here are five categories of such evidence: young animals play more than adults of their species, species of animals that have the most to learn play the most, young animals play most at those skills they most need to learn, play involves much repetition, and play is challenging. Because play exists precisely to promote learning, option D's claim that animals learn less from play does not distinguish play from non-play behavior (pg. 128).

The phenomenon in which a person who initially performs a task for no reward (except the enjoyment of the task) becomes less likely to perform that task for no reward after a period during which this person has been rewarded for performing it, is a result of: A) latent learning. B) negative contrast effect. C) positive contrast effect. D) overjustification effect.

D This decline is called the overjustification effect because the reward presumably provides an unneeded extra justification for engaging in the behavior. The result is that people come to regard the task as something that they do for an external reward rather than for its own sake--that is, as work rather than play. Latent learning refers to learning that is not immediately demonstrated. The authors do not discuss a negative or positive contrast effect in this chapter (pg. 124).

A dog wanders around town looking for food. The first day, it walks at random and finds little food in people's garbage. The next day, it finds restaurants in the neighborhood and finds more food in the garbage. The third day, the dog walks down one particular street and finds that one restaurant has even more food, and by the fourth day, the dog goes directly to that particular restaurant. The dog's behavior has been modified by: A) discriminative stimulus. B) chaining. C) partial reinforcement. D) shaping.

D This is an example of shaping because the behavior of going to one particular restaurant was shaped over time by closer and closer approximations to the final behavior. It is not chaining because it is not a complex chain of behaviors. The answer is not partial reinforcement because we are not looking at a reinforcement schedule. Last, a discriminative stimulus is a stimulus that serves as a signal that a particular response will produce a particular reinforcer (pg. 118).

A variable-ratio schedule would require: A) an unpredictable number of responses before reinforcement. B) a fixed number of responses before reinforcement. C) an unpredictable amount of time before a response would be reinforced. D) continuous reinforcement.

A A variable-ratio schedule is like a fixed-ratio schedule except that the number of responses required before reinforcement varies unpredictably around some average. In a fixed-interval schedule, a fixed period of time must elapse between one reinforced response and the next, whereas in continuous reinforcement every response is reinforced (pg. 119-120).
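Because each of these schedules is just a rule for deciding whether a given response earns a reinforcer, the contrast is easy to see in code. Here is a minimal Python sketch, not from the text; the function names and the mean ratio of 10 are illustrative assumptions. It tallies reinforcers over 1,000 simulated responses under each rule:

```python
import random

MEAN_RATIO = 10  # illustrative average number of responses per reinforcer

def continuous(_count):
    # Continuous reinforcement: every response is reinforced.
    return True

def fixed_ratio(count, ratio=MEAN_RATIO):
    # Fixed ratio: exactly every `ratio`-th response is reinforced.
    return count % ratio == 0

def variable_ratio(_count, mean_ratio=MEAN_RATIO):
    # Variable ratio: each response is reinforced with probability
    # 1/mean_ratio, so the number of responses required varies
    # unpredictably around the same average as the fixed schedule.
    return random.random() < 1 / mean_ratio

# Tally reinforcers earned over 1,000 simulated responses per schedule.
for schedule in (continuous, fixed_ratio, variable_ratio):
    rewards = sum(schedule(i) for i in range(1, 1001))
    print(f"{schedule.__name__:<15} {rewards:4d} reinforcers / 1000 responses")
```

The two interval schedules would instead gate reinforcement on the time elapsed since the last reinforced response, which is why they cannot be written as response-counting rules like those above.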

A puppy watches an older dog claw its way into a large bag of potato chips and starts eating them with enthusiasm. The puppy is more attracted to eating potato chips than he previously was, which illustrates _____ as a type of _____. A) goal enhancement; observational learning. B) goal enhancement; operant conditioning. C) stimulus enhancement; observational learning. D) stimulus enhancement; operant conditioning

A Goal enhancement refers to an increased drive to obtain rewards similar to what the observed individual is receiving. Observational learning is a type of learning done simply by watching others. On the other hand, stimulus enhancement would be incorrect because it refers to an increase in the salience or attractiveness of the object that the observed individual is acting upon. (pg. 131 & 133).

The type of operant conditioning that will lead to a decrease in the likelihood that a response will be repeated can be identified as: A) punishment. B) negative reinforcement. C) positive reinforcement. D) fixed-ratio schedule of reinforcement.

A In Skinner's terminology, punishment is the opposite of reinforcement. It is the process through which the consequence of a response decreases the likelihood that the response will recur. As with reinforcement, punishment can be positive or negative—this question refers to the overall process, not the type of punishment. Reinforcement, whether positive or negative, increases the likelihood of a response, so those options are incorrect, and a fixed-ratio schedule merely delivers a reinforcer after a set number of responses (pg. 121).

A puppy that sees its mother licking a water bottle to get water may become attracted to the bottle and motivated to drink the water. According to research on observational learning, the puppy's attraction to the bottle is known as: A) stimulus enhancement. B) goal enhancement. C) imitation. D) latent learning.

A The answer is stimulus enhancement. Stimulus enhancement refers to an increase in the salience or attractiveness of the object that the observed individual is acting upon. Goal enhancement, on the other hand, refers to an increased drive to obtain rewards similar to what the observed individual is receiving. Imitation is not the answer, because even if the puppy appears to imitate its mother, the attraction to the bottle itself is not imitation. Latent learning refers to learning that is not immediately demonstrated, which is not the case here (pg. 133).

In Hefferline's experiment, people listened to music overlaid occasionally by static and were conditioned to produce a tiny thumb twitch to turn the static off. In that experiment the static turning off served as a(n) _____ for the thumb twitch. A) negative reinforcer. B) positive reinforcer. C) unconditioned stimulus. D) conditioned response

A The removal of the static served as a negative reinforcer, as opposed to a positive one, since the static was an aversive stimulus and the behavior removed it. It is not an unconditioned stimulus, since the static does not automatically elicit the thumb twitch, and it is not a conditioned response, because the offset of the static is a stimulus, not a response (pg. 118).

When a group of goslings hatch, a researcher and an adult female goose are both there to welcome them into the world. The goslings will respond by: A) imprinting on whomever they see first. B) imprinting on the goose. C) imprinting on the researcher. D) running from both the goose and the researcher.

B Although early studies suggested that young birds could be imprinted on humans or other moving objects as easily as on their mothers, later studies proved otherwise. Given a choice between a female of their species and some other object, newly hatched birds invariably choose to follow the former, so the other answer options are incorrect (pg. 141).

What type of reinforcement schedule produces the greatest resistance to extinction? A) continuous reinforcement. B) partial reinforcement on a variable schedule. C) partial reinforcement on a fixed schedule. D) continuous reinforcement on a variable schedule.

B Partial reinforcement on a variable schedule produces the greatest resistance to extinction. If a rat is trained to press a lever only on continuous reinforcement and then is shifted to extinction conditions, the rat will typically make a few bursts of lever-press responses and then quit. But if the rat has been shifted gradually from continuous to an ever-stingier variable schedule and then finally to extinction, it will often make hundreds of unreinforced responses before quitting (pg. 119-120).
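A hedged way to see why the variable schedule resists extinction: during training it already produces long runs of unreinforced responses, so the start of extinction looks no different from training. The toy Python sketch below is my illustration, not the authors'; the VR-20 schedule and the 1,000-response training session are made-up values. It compares the longest unreinforced run each schedule produces during training:

```python
import random

random.seed(4)  # fixed seed so the illustration is reproducible

def longest_dry_run(reinforced):
    """Length of the longest run of consecutive unreinforced responses."""
    longest = current = 0
    for r in reinforced:
        current = 0 if r else current + 1
        longest = max(longest, current)
    return longest

# One reinforcement outcome per training response (values illustrative).
continuous_training = [True] * 1000
variable_ratio_20 = [random.random() < 1 / 20 for _ in range(1000)]

print("continuous:", longest_dry_run(continuous_training))  # always 0
print("VR-20:", longest_dry_run(variable_ratio_20))  # typically dozens
```

If we assume the animal quits once a dry run exceeds anything it experienced in training, the continuously reinforced animal quits almost immediately when reinforcement stops, while the variable-schedule animal keeps responding through runs far longer than the continuous schedule ever produced.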

What are the three main methods designed for learning in nature? A) observation, schooling, trial and error. B) play, observation, exploration. C) imitation, trying, instinctual. D) play, exploration, instinctual.

B Play, exploration, and observation are species-typical behavioral tendencies, or drives, that came about through natural selection precisely because they promote learning. In nature, animals (especially mammals) are active learners. The text does not describe schooling, trial and error, instinctual, or imitation as methods for learning in nature (pg. 127).

What is the difference between punishment and reinforcement? A) Punishment increases and reinforcement decreases the likelihood that a response will recur. B) Punishment decreases and reinforcement increases the likelihood that a response will recur. C) Punishment occurs after the response in question and reinforcement occurs prior to the response. D) Punishment always involves unpleasant stimuli and reinforcement always depends on pleasant stimuli.

B In Skinner's terminology, punishment is the opposite of reinforcement: it is the process through which the consequence of a response decreases the likelihood that the response will recur, whereas reinforcement increases that likelihood. Option C is incorrect because both are consequences that occur after the response, and option D is incorrect because the definitions depend on the effect on behavior, not on whether the stimuli involved are pleasant or unpleasant (pg. 121).

B. F. Skinner's laboratory procedures were most closely related to: A) Pavlov's salivation-measurement technique. B) Thorndike's puzzle boxes. C) Watson's procedures to condition a fearful response. D) Tolman's maze-learning experiments.

B Skinner's laboratory procedures were most like Thorndike's—both were concerned with the consequences following a behavior. Unlike Thorndike and Skinner, both Watson and Pavlov studied classical conditioning. Tolman, on the other hand, studied latent learning (pg. 116).

Raymond does not like to clean his room, even when his parents continuously tell him to do so. For every day that he doesn't clean his room, his parents take 25 cents from his weekly allowance. Raymond doesn't like that his parents are taking away money, so he stops his messy behavior. This is an example of: A) positive punishment. B) negative punishment. C) positive reinforcement. D) negative reinforcement

B The answer is negative punishment. In negative punishment, the removal of a stimulus decreases the likelihood that the response will occur again. Positive punishment involves the arrival or presentation of a stimulus, which decreases the likelihood that the response will occur again. Reinforcement—both negative and positive—is incorrect because that process increases the likelihood of a response. Because Raymond's behavior stops, we know his behavior is punished, not reinforced.

Jamal has confessed to going a little overboard to try to control his eating. He has bought a gadget that will dispense two cookies when he presses a button. But as soon as the cookies have been dispensed, it will lock itself until 8 hours have passed, forcing Jamal to wait to get more cookies. The gadget is reinforcing Jamal's button press on a _____ schedule of reinforcement. A) fixed-ratio. B) fixed-interval. C) variable-ratio. D) variable-interval.

B The gadget is reinforcing Jamal's button press on a fixed-interval schedule of reinforcement. In a fixed-interval schedule, a fixed period of time must elapse between one reinforced response and the next. For a variable-interval schedule to be used in this example, a variable amount of time would have to pass before a button press is reinforced—that is not the case. Because this example involves an amount of time that must elapse and not a number of responses that must occur, the ratio answer options can be eliminated (pg. 120).
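As a sanity check on the classification, the gadget's rule can be written out directly. The toy Python model below is hypothetical; the class name, method, and return values simply restate the question's description. Only the first press after the fixed 8-hour interval is reinforced, and presses during the lockout earn nothing:

```python
import time

LOCKOUT_SECONDS = 8 * 60 * 60  # the gadget's fixed 8-hour interval

class CookieDispenser:
    """Toy model of the gadget described in the question (hypothetical)."""

    def __init__(self):
        self.last_dispense = float("-inf")  # nothing dispensed yet

    def press_button(self, now=None):
        now = time.time() if now is None else now
        # Fixed interval: a press is reinforced only once the full
        # lockout has elapsed since the last reinforced press.
        if now - self.last_dispense >= LOCKOUT_SECONDS:
            self.last_dispense = now
            return 2  # two cookies dispensed
        return 0  # the press goes unreinforced during the lockout

gadget = CookieDispenser()
print(gadget.press_button(now=0))  # 2: the first press is reinforced
print(gadget.press_button(now=3600))  # 0: one hour in, still locked
print(gadget.press_button(now=8 * 3600))  # 2: the interval has elapsed
```

Because the rule keys on elapsed time rather than on how many presses have occurred, it is an interval schedule, and because the interval never changes, it is fixed.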

A mother gives her son two dollars for every day that his room is clean. After several weeks, she decides that her son has learned the value of cleaning up and withdraws the daily reward. He stops cleaning his room. To which of the following is this cessation of cleaning probably attributable? A) punishment. B) generalization. C) extinction. D) negative reinforcement.

C An operantly conditioned response declines in rate and eventually disappears if it no longer results in a reinforcer. The absence of reinforcement of the response and the consequent decline in response rate are both referred to as extinction. Punishment is the process through which the consequence of a response decreases the likelihood that the response will recur. Punishment can be distinguished from extinction because during the process of extinction, the previously reinforced response no longer produces any effect. Generalization occurs when the subject responds not only to the original conditioned stimulus, but also to other new and similar stimuli—that's not what the question is describing. Last, negative reinforcement involves the removal of some stimulus, making a response more likely to recur—in this example, the son's cleaning behavior stops (pg. 119).

In B.F. Skinner's operant-conditioning chamber, he studied operant behavior in rats by having the rat press a lever, which produces what effect? A) opening of a door. B) delivery of an electrical shock. C) delivery of a food pellet. D) stopping the experiment.

C In Skinner's chamber, pressing the lever produced the delivery of a food pellet. The full procedure of his experiment is described in the text, and the other answer options do not match that description (pg. 117).

Which statement concerning conditioning of fear responses is TRUE? A) Fear is the only emotion that can be conditioned through classical conditioning. B) Any stimulus that elicits fear has come to do so through conditioning. C) Some stimuli can become classically conditioned fear stimuli more easily than others can. D) Attempts to condition fear have repeatedly failed, suggesting that fear is purely a mental phenomenon.

C Martin Seligman suggested that people are biologically predisposed to acquire fears of situations and objects, such as rats and snakes, that posed a threat to our evolutionary ancestors, and are less disposed to acquire fears of other situations and objects. The other options are incorrect because 1) fear is not the only emotion that can be conditioned through classical conditioning, 2) researchers such as Seligman argue that not all stimuli that elicit fear have come to do so through conditioning, and 3) decades of research show that fear is not purely mental (pg. 140).

An operant response will be most resistant to extinction if it is: A) continuously reinforced. B) partially reinforced on a fixed schedule. C) partially reinforced on a variable schedule. D) never punished.

C Research suggests that an operant response will be most resistant to extinction if it has been partially reinforced on a variable schedule. For example, if a rat is trained to press a lever only on continuous reinforcement and then is shifted to extinction conditions, the rat will make a few bursts of lever-press responses and then quit. But if the rat has been shifted gradually from continuous reinforcement to partial reinforcement and then to extinction, it will make hundreds of unreinforced responses before quitting (pg. 120).

A reinforcement schedule in operant conditioning that would be best to use if someone wants to produce behavior that is very resistant to extinction is a _____ schedule. A) continuous. B) fixed-ratio. C) variable-ratio. D) fixed-interval.

C Research suggests that an operant response will be most resistant to extinction if it has been reinforced on a variable schedule. For example, if a rat is trained to press a lever only on continuous reinforcement and then is shifted to extinction conditions, the rat will make a few bursts of lever-press responses and then quit. But if the rat has been shifted gradually from continuous reinforcement to partial reinforcement and then to extinction, it will make hundreds of unreinforced responses before quitting (pg. 120).

A consequence of a response that makes the response more likely to occur again is called a(n): A) operant. B) discriminative stimulus. C) reinforcer. D) shaper.

C That consequence is referred to as a reinforcer. It is used in operant conditioning, but is not an operant itself. It is not a discriminative stimulus, because a discriminative stimulus precedes the response and signals that the response will produce a reinforcer, whereas a reinforcer is the consequence that follows the response. Your text does not use the term shaper—even if it did, the question is not referring to shaping (pg. 117).

Suzy is making fun of her little brother because she has a cookie and he has none. Her mother sees Suzy making fun of her little brother and waving her cookie in his face and takes away her cookie. Suzy stops making fun of her little brother. What kind of punishment is this? A) positive. B) partial. C) negative. D) reinforcing.

C The answer is negative punishment. In negative punishment, the removal of a stimulus decreases the likelihood that the response will occur again. Positive punishment involves the arrival or presentation of a stimulus, which decreases the likelihood that the response will occur again. Partial is not a type of punishment, so that option is incorrect. In addition, there is no such thing as reinforcing punishment (pg. 121).

At first a coach praises a basketball player for behaviors that are only remotely like those that will sink a basket. Gradually, the coach restricts praise to behaviors that are closer and closer to the desired behavior. The coach is using a training strategy called: A) discrimination training. B) generalization. C) shaping. D) fixed-ratio reinforcement.

C The coach is using a technique called shaping. Shaping occurs when successively closer approximations to the desired response are reinforced. Here is why the other answer options are incorrect. Discrimination training is when the subject learns to respond only in the presence of a particular stimulus (pecking a red circle, not a blue one). Generalization is when the subject responds not only to the original conditioned stimulus, but also to similar stimuli. Last, "fixed-ratio reinforcement" as a response option refers to a type of reinforcement schedule (pg. 119).

What is the effect of partial reinforcement on the target behavior? A) Compared to continuous reinforcement, partial reinforcement causes the behavior to be produced more irregularly. B) Partial reinforcement is generally effective only if training began with it rather than continuous reinforcement. C) Partial reinforcement produces greater resistance to extinction than continuous reinforcement. D) Partial reinforcement is more effective than continuous reinforcement for initial training.

C The effect is that partial reinforcement produces greater resistance to extinction than continuous reinforcement. In initial training, continuous reinforcement is most efficient, but once trained, an animal will continue to perform for partial reinforcement. Behavior that has been reinforced on a variable-ratio or variable-interval schedule is often very difficult to extinguish (pg. 119-120).

Most contemporary researchers who study play and exploration believe that play evolved as a mechanism for _____ learning, whereas exploration evolved as a mechanism for _____ learning. A) information; skill. B) instinctual; novel. C) skill; information. D) novel; instinctual.

C Groos considered exploration to be a category of play, but most contemporary researchers of play and exploration now consider the two to be distinct. Learning can be divided at least roughly into two broad categories--learning to do (skill learning) and learning about (information learning). Play evolved to serve the former, and exploration evolved to serve the latter (pg. 129).

