500 Questions - Operant Conditioning and Cognitive Learning

151. What is one major difference between operant conditioning and classical conditioning? (A) Operant conditioning takes place as a result of some voluntary action, while classical conditioning takes place without choice. (B) Operant conditioning takes place before the response, while classical conditioning takes place after the response. (C) Operant conditioning is learned by association, while classical conditioning is learned by reinforcement. (D) Classical conditioning is part of social cognitive learning, while operant conditioning is not. (E) Classical conditioning has a stimulus but no response, while operant conditioning has both a stimulus and a response.

151. (A) Operant conditioning is a kind of learning in which a behavior is performed, followed by a consequence. Learning takes place as a result of some voluntary action by the learner. In classical conditioning, learning takes place without choice. The stimulus causes the response. Choice (B) is incorrect because it is actually the opposite. Operant conditioning takes place after the response, while classical conditioning takes place before the response. Choice (C) is also the opposite. Classical conditioning is learning by association, and operant conditioning is learning by reinforcement. Choices (D) and (E) are completely incorrect. Classical conditioning is not part of social cognitive learning.

152. Suspending a basketball player for committing a flagrant foul is an example of: (A) Negative reinforcement (B) Positive reinforcement (C) Punishment (D) Primary reinforcement (E) Secondary reinforcement

152. (C) Students very often confuse negative reinforcement with punishment. Negative reinforcement occurs when something unpleasant is taken away after the subject performs a desired behavior; it strengthens that behavior. Punishment is not the same as negative reinforcement. It is an attempt to weaken a response, or a behavior, by following it with something unpleasant. Because the suspension follows the flagrant foul and is intended to stop that behavior, it is a punishment.

153. A defendant is harassed and tortured until he confesses. This is an example of: (A) Positive reinforcement (B) Negative reinforcement (C) Punishment (D) Positive punishment (E) Negative punishment

153. (B) In this scenario the defendant is harassed until he confesses. The harassment is something unpleasant, and it will be taken away once the confession is given, making this negative reinforcement rather than punishment.

154. Punishment can best be defined as: (A) The reinforcement of a behavior every time it occurs (B) Taking away something unpleasant when the subject performs the correct behavior (C) An attempt to weaken a response by following it with something unpleasant (D) Adding something unwanted when the subject is not doing the correct behavior and then stopping it when he or she displays the correct behavior (E) Anything that comes to represent a primary reinforcer

154. (C) Remember, punishment is an attempt to stop an unwanted behavior. It is not contingent upon a person doing the correct behavior.

155. Which of the following statements best explains E. L. Thorndike's law of effect? (A) Behaviors that are negatively reinforced are more likely to discontinue than behaviors that are punished. (B) Receiving reinforcement every time a person performs a good deed, continuous reinforcement, will increase the likelihood that the person will continue that behavior. (C) The stimuli of food, water, and sex are innately satisfying and require no learning. (D) Behaviors are strengthened by positive consequences and weakened by negative ones. (E) Behaviors are reinforced through primary reinforcers

155. (D) The law of effect states that if a random act is followed by a pleasurable consequence, that act is strengthened and is likely to occur again. Choice (D) is the definition of the law of effect.

156. B. F. Skinner used his "Skinner Box" to work on a procedure in which the experimenter successively reinforces behaviors that lead up to the desired behavior. This procedure is known as: (A) Reinforcement (B) Chaining (C) Primary reinforcers (D) Secondary reinforcers (E) Shaping

156. (E) Shaping is a procedure in which the experimenter successively reinforces behaviors that lead up to the desired behavior. Many students get confused between shaping and chaining. Chaining is an instructional procedure that involves reinforcing responses in a sequence to form a more complex behavior. In terms of the Skinner box, B. F. Skinner used shaping to condition his rats to press the lever.

157. Schedules of reinforcement have a direct effect on maintaining your behavior. Which of the following schedules of reinforcement is identified in this example: Calling a friend and getting a busy signal because he or she is frequently on the phone? (A) Fixed interval (B) Variable interval (C) Fixed ratio (D) Variable ratio (E) Fixed variable

157. (B) On a variable-interval schedule, reinforcement becomes available after an unpredictable amount of time has passed. Because reinforcement here depends on how much time passes before your friend's line is free, not on how many times you dial, the correct answer is variable interval and not variable ratio. Ratio refers to the number of desired acts required before reinforcement will occur.

158. Which of the following is the best example of a negative reinforcement? (A) A child getting spanked for bad behavior (B) A kindergarten student being put in "time-out" (C) A teenager not being allowed to go to her friend's party (D) A mother taking an aspirin to eliminate her headache (E) A father getting a speeding ticket

158. (D) Once the mother takes an aspirin, the unpleasantness of the headache will go away. Choices (A), (B), (C), and (E) are all examples of punishment.

159. Which of the following best describes the basic principle behind operant conditioning? (A) The consequences one receives are directly based on his or her behavior. (B) The conditioned stimulus one responds to is called a conditioned response. (C) Continuous reinforcement is the best way to reinforce positive behavior. (D) To decrease undesired behaviors one must use negative punishment. (E) Negative reinforcement and punishment both equally help to rid unwanted behavior.

159. (A) Because operant conditioning is learning by reinforcement, which takes place after the response, choice (A) has to be the correct answer. None of the other choices has anything to do with the principles of operant conditioning.

160. What is the goal of both positive and negative reinforcement? (A) To decrease the likelihood that a negative reinforcer will follow a behavior (B) To increase the likelihood that the preceding behavior will be repeated (C) To decrease the likelihood that the preceding behavior will be repeated (D) To ensure there are no negative consequences following the behavior (E) To add a primary reinforcer after someone does a proper behavior

160. (B) Positive reinforcement occurs when something the subject wants is added to encourage the desired behavior to continue. Negative reinforcement occurs when something unpleasant is taken away after the desired behavior is performed. Both have the same goal: to increase the likelihood that the behavior will be repeated.

161. Latent learning can best be described as: (A) Learning that depends on the mental process (B) Learning that is not immediately reflected in a behavior change (C) A learning technique that provides precise information about one's inner bodily functions (D) Learning that is based on rewards and punishments (E) A type of learning that occurs after the behavior has already been done

161. (B) Choice (B) is the definition of latent learning. Often humans and animals need motivation or good reason to show their behavior, which does not mean they have not learned the behavior. Choice (A) can apply to almost any form of learning. Choice (D) defines operant conditioning. Choice (E) can sound similar, but latent learning does not say the actual learning occurs after the behavior, just the demonstration of the learning.

162. Thorndike's law of effect neglects the inner drives or motives that make learners pursue the "satisfying state," allowing learners to reach their goals. Which of the following psychologists would have agreed with that statement? (A) Kohler (B) Pavlov (C) Tolman (D) Skinner (E) Watson

162. (C) Edward Tolman's theory of latent learning held that the concept of response must include the purposive behaviors that allow learners to reach their goals. Tolman believed that learning often occurs before the goal is reached.

163. Which of the following scenarios is the best example of a cognitive map? (A) A dog sits by the window an hour before her owner should return home. (B) A little girl remembers to get her jacket before leaving for school. (C) A boy follows his big sister home on his bicycle. (D) When asked for directions to his job, a man recites them in great detail. (E) A teacher remembers all the names of her students.

163. (D) A cognitive map is a learned mental image of a spatial environment. This image is usually learned without the learner realizing he or she has learned it. Choice (D) is the only answer that suggests this.

164. Wolfgang Kohler conducted a series of experiments in which he placed a chimpanzee in a cage containing a stick, with a banana on the ground outside the cage just out of reach. After a period of inaction, the chimp suddenly grabbed the stick, poked it through the bars, and dragged the banana within reach. This type of learning is called: (A) Insight (B) Latent (C) Cognitive (D) Operant (E) Observational

164. (A) Insight is learning that occurs rapidly based on understanding all the elements of a problem. In this case, the chimp learned how to obtain the banana shortly after figuring out its environment.

165. Harry Harlow's goal was to get his monkeys to figure out that in any set of six trials, the food was always under the same box. Initially the monkeys chose the boxes randomly, sometimes finding food and sometimes not. However, after a while their behavior changed: after two consistent trials of finding the correct box, they continually went back to the same box. Harlow concluded that the monkeys had "learned how to learn." According to Harlow the monkeys established: (A) Cognitive maps (B) Reinforcers (C) Cognitive sets (D) Learned maps (E) Learning sets

165. (E) A learning set is the ability to become increasingly more effective at solving problems the more practice you have. In this case the monkeys learned to choose the correct box by refining their problem-solving strategy with each trial. Based on this idea, a learning set really means learning how to learn.

166. Which of the following statements best exemplifies the idea behind social cognitive learning? (A) Learning occurs when we see someone else being punished for a behavior. (B) Learning is likely to happen whether we see someone else punished or rewarded for behavior. (C) Learning occurs when we see someone else being rewarded for a behavior. (D) Learning is simply based on observation. (E) Learning is based on external rewards and behaviors.

166. (B) Social cognitive learning emphasizes the ability to learn by observation without firsthand experience. It does not specify that a person must observe rewarded behavior. Choice (D) can be confused for the correct answer, but it is too vague when the question is asking which statement best exemplifies social cognitive learning.

167. In Albert Bandura's "Bobo" doll experiment, which group of children spontaneously acted aggressively toward the doll rather quickly? (A) Model-reward condition (B) Model-punished condition (C) No-consequences condition (D) Reward and punishment condition (E) No condition

167. (A) In Albert Bandura's "Bobo" doll experiment, the children who watched the video in which a person was rewarded for acting violently toward the doll were the first to act aggressively. Children from all of the groups did demonstrate aggressive behavior once the experimenter offered them candy, but the spontaneous imitation came initially from the model-reward condition.

168. Devyn watches a violent television show and then pretends to shoot her brother Tyler with a toy pistol. A psychologist would say that Devyn has learned this behavior through: (A) Operant conditioning (B) Classical conditioning (C) Vicarious learning (D) Latent learning (E) Learning set

168. (C) Vicarious learning, or observational learning, is simply learning by observing other people, as Devyn did in this scenario.

169. Which of the following psychologists would argue that learning can take place when someone is watching another person and performs that behavior even when not reinforced? (A) Edward Tolman (B) Wolfgang Kohler (C) B. F. Skinner (D) John Watson (E) Albert Bandura

169. (E) Albert Bandura is the most prominent proponent of social cognitive learning, which emphasizes learning through observation. Tolman studied latent learning. Kohler studied insight learning. B. F. Skinner studied operant conditioning. Watson studied classical conditioning.

170. Which of the following responses is not learned through operant conditioning? (A) Shelly gets $50 after getting a 90 percent in her math class. (B) A pigeon learns to peck a disc to get food pellets. (C) A dog learns to turn in circles for a reward. (D) A baby takes his first steps. (E) A horse jumps over a fence to avoid an electric shock.

170. (D) Choice (D) is the only behavior that is innate. Although toddlers do get positive reinforcement when they begin to walk, it would happen with or without the reinforcement.

171. Joey is refusing to complete his homework on time. After learning about Joey's love of trains, Mrs. Anderson promises to reward Joey with a Thomas and Friends video upon completion of his next two homework assignments. This is an example of: (A) Positive reinforcement (B) Generalization (C) Insight (D) Latent learning (E) The Premack Principle

171. (E) The Premack Principle states that a more probable (preferred) behavior can be used to reinforce a less probable behavior. In this case, watching a Thomas and Friends video is the activity Joey prefers, so making it contingent on completing his homework should, according to this principle, increase homework completion.

172. While taking his math placement exam, Spencer became stuck on one problem. With only five minutes left, he suddenly arrived at the answer. This is an example of: (A) Latent learning (B) Insight (C) Learning set (D) Abstract learning (E) Operant conditioning

172. (B) Insight learning occurs rapidly as a result of understanding all of the elements of a problem. In this case, Spencer suddenly arrived at the answer after working out the elements of the math problem. Choice (A) refers to learning that is not immediately reflected in the behavior. Choice (C) is simply learning how to learn. Choice (D) is vague and incorrect. Choice (E) is learning based on reward and punishment.

173. After several attempts at escape with no success, the electrically shocked dogs give up. At that moment the gates open and the dogs could simply walk out, but they don't; instead they just sit there. This could most likely be explained by the concept of: (A) Latent learning (B) Spontaneous recovery (C) Vicarious learning (D) Learned helplessness (E) Intrinsic motivation

173. (D) Learned helplessness is defined as a failure to take steps to avoid or escape from an aversive stimulus that occurs as a result of previous exposure to unavoidable painful stimuli. Having found no way out during the earlier trials, the dogs gave up, even when a viable escape later became available.

174. After overcoming her fear of the dentist, Jada finds out she needs a root canal. On her way to the dentist's office, her old fears and anxieties return and she begins to panic. This is an example of: (A) Generalization (B) Spontaneous recovery (C) Discrimination (D) Insight (E) Classical conditioning

174. (B) Spontaneous recovery is the reappearance of an extinguished response after some time has passed. In this case, Jada's previously extinguished fear of the dentist reappeared when she had to go back for a root canal. Generalization would have occurred if Jada had come to fear all doctors as a result of her fear of the dentist.

175. Salina receives a one-thousand-dollar bonus at her job after she sold the most cars this month. The one-thousand-dollar bonus is an example of a: (A) Primary reinforcer (B) Secondary reinforcer (C) Partial reinforcer (D) Continual reinforcer (E) Total reinforcer

175. (B) Money is a classic example of a secondary reinforcer: it has no reinforcing value of its own but acquires it through association with primary reinforcers such as food and shelter. Primary reinforcers satisfy biological needs and require no learning, so the one-thousand-dollar bonus is not one.

