Psy. Ch 6


reflexes

are motor or neural reactions to a specific stimulus in the environment. They tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla). Reflexes do not have to be learned and help an organism adapt to its environment.

general processes in classical conditioning

In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.

Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here's how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you've developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you've been conditioned to be averse to a food after a single, negative experience.

Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger's behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger's food again, Tiger would remember the association between the can opener and her food: she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov's dogs and Tiger illustrates a concept Pavlov called *spontaneous recovery*: the return of a previously extinguished conditioned response following a rest period.

Of course, these processes also apply in humans. For example, let's say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck's music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck's musical jingle, even before you bite into the ice cream bar.

Then one day you head down the street. You hear the truck's music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don't stop to get an ice cream bar because you're running late for class. You begin to salivate less and less when you hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This illustrates extinction: the conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth). Then the weekend comes. You don't have to go to class, so you don't pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why? After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.

Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes, stimulus discrimination and stimulus generalization, are involved in distinguishing which stimuli will trigger the learned association. When an organism learns to respond differently to various stimuli that are similar, it is called *stimulus discrimination*. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov's dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).

Classical conditioning can sometimes lead to *habituation*, which occurs when we learn not to respond to a stimulus that is presented repeatedly without change. As the stimulus occurs over and over, we learn not to focus our attention on it. For example, imagine that your neighbor or roommate constantly has the television blaring. This background noise is distracting and makes it difficult for you to focus when you're studying. However, over time, you become accustomed to the stimulus of the television noise, and eventually you hardly notice it any longer.
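The acquisition and extinction dynamics described above can be sketched with a simple associative-strength update. This is a minimal, illustrative model (a Rescorla-Wagner-style update rule); the learning rate and trial counts are assumptions, not values from the text:

```python
# Minimal sketch of acquisition and extinction as changes in the
# associative strength V between a conditioned stimulus (truck music)
# and an unconditioned stimulus (ice cream). The learning rate `alpha`
# and the trial counts are illustrative assumptions.

def update(v, ucs_present, alpha=0.3):
    """Move V toward 1.0 when the UCS follows the CS (acquisition)
    and toward 0.0 when the CS is presented alone (extinction)."""
    target = 1.0 if ucs_present else 0.0
    return v + alpha * (target - v)

v = 0.0  # no association before conditioning
for _ in range(10):  # acquisition: CS repeatedly paired with UCS
    v = update(v, ucs_present=True)
strength_after_acquisition = v

for _ in range(10):  # extinction: CS presented alone
    v = update(v, ucs_present=False)
strength_after_extinction = v

print(round(strength_after_acquisition, 2))  # near 1.0: strong CS-CR link
print(round(strength_after_extinction, 2))   # near 0.0: response extinguished
```

Note that spontaneous recovery is not captured by this simple rule: after a rest period, real organisms show a partial return of the conditioned response even though the modeled strength has decayed to near zero.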

classical conditioning

Pavlov (1849-1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning (Figure 6.3). As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events. Ivan Pavlov's research on the digestive system of dogs unexpectedly led to his discovery of the learning process now known as classical conditioning.

Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov's area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov surgically implanted tubes inside dogs' cheeks to collect saliva. He then measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants' footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don't naturally salivate at the sight of an empty bowl or the sound of footsteps.

These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs' "psychic secretions" (Pavlov, 1927). To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg.
Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses. The dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs' salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs' stimulus and response like this: Meat powder (UCS) → Salivation (UCR)

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure 6.4). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs. Quite simply, this pairing means: Tone (NS) + Meat powder (UCS) → Salivation (UCR)

When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants' footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR): Tone (CS) → Salivation (CR)

application of classical conditioning

What if the cabinet holding Tiger's food becomes squeaky? In that case, Tiger hears "squeak" (the cabinet), "zzhzhz" (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the "squeak" of the cabinet. Pairing a new neutral stimulus ("squeak") with the conditioned stimulus ("zzhzhz") is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet. - pg. 190

instincts

are innate behaviors that are triggered by a broader range of events, such as aging and the change of seasons. They are more complex patterns of behavior than reflexes, involve movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers. Instincts do not have to be learned and help an organism adapt to its environment.

cognition and latent learning

Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. Tolman, had a different opinion. Tolman's experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement. This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.

In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze. After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as *latent learning*: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.

Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi's dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he's never driven there himself, so he has not had a chance to demonstrate that he's learned the way. One morning Ravi's dad has to leave early for a meeting, so he can't drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning: Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.

reinforcement

The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior. In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go "beep, beep, beep" until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure, by pulling the reins or squeezing their legs, and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove (the pressure is thus removed in order to increase a behavior from the horse, in this case).

shaping

In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:

1. Reinforce any response that resembles the desired behavior.
2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
4. Continue to reinforce closer and closer approximations of the desired behavior.
5. Finally, only reinforce the desired behavior.

Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall that Pavlov trained his dogs to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.

For example, consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.
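The shaping steps above can be sketched as a reinforcement criterion that tightens over time. This is only an illustration; the toy counts mirror the room-cleaning example, and the function name is my own:

```python
# Sketch of shaping: each step reinforces a closer approximation of the
# target behavior (a fully cleaned room), and a response that met an
# earlier, looser criterion is no longer reinforced. Toy counts are
# illustrative.

criteria = [1, 5, 10, 20]  # successive approximations toward the target

def reinforced(toys_picked_up, criterion):
    """A response earns reinforcement only if it meets the current criterion."""
    return toys_picked_up >= criterion

step1_response = 1  # the child cleans up one toy
print(reinforced(step1_response, criteria[0]))  # True: meets the first step
print(reinforced(step1_response, criteria[1]))  # False: step 2 demands more
```

The second print illustrates the key rule in step 2 above: once the criterion tightens, the previously reinforced response no longer earns a reward.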

observational learning (modeling)

In observational learning, we learn by watching others and then imitating, or modeling, what they do or say. The individuals performing the imitated behavior are called models. Research suggests that this imitative learning involves a specific type of neuron, called a mirror neuron.

Like Tolman, whose experiments with rats suggested a cognitive component to learning, psychologist Albert Bandura's ideas about learning were different from those of strict behaviorists. Bandura and other researchers proposed a brand of behaviorism called social learning theory, which took cognitive processes into account. According to Bandura, pure behaviorism could not explain why learning can take place in the absence of external reinforcement. He felt that internal mental states must also have a role in learning and that observational learning involves much more than imitation. In imitation, a person simply copies what the model does. Observational learning is much more complex. According to Lefrançois (2012), there are several ways that observational learning can occur:

1. You learn a new response. After watching your coworker get chewed out by your boss for coming in late, you start leaving home 10 minutes earlier so that you won't be late.
2. You choose whether or not to imitate the model depending on what you saw happen to the model. Remember Julian and his father? When learning to surf, Julian might watch how his father pops up successfully on his surfboard and then attempt to do the same thing. On the other hand, Julian might learn not to touch a hot stove after watching his father get burned on a stove.
3. You learn a general rule that you can apply to other situations.

Bandura identified three kinds of models: live, verbal, and symbolic. A live model demonstrates a behavior in person, as when Ben stood up on his surfboard so that Julian could see how he did it. A verbal instructional model does not perform the behavior, but instead explains or describes the behavior, as when a soccer coach tells his young players to kick the ball with the side of the foot, not with the toe. A symbolic model can be a fictional character or a real person who demonstrates behaviors in books, movies, television shows, video games, or Internet sources.

operant conditioning

In operant conditioning, organisms learn to associate a behavior and its consequence (Table 6.1). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle; the consequence is that she gets a fish. The stimulus (either reinforcement or punishment) occurs soon after the response.

Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn't account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated.

Working with Thorndike's law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a "Skinner box" (Figure 6.10). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors, and a recorder counts the number of responses made by the animal. Skinner developed operant conditioning for the systematic study of how behaviors are strengthened or weakened according to their consequences.

In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. - pg. 198
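The add/remove and increase/decrease distinctions above can be summarized in a short sketch. The function name and the example comments are my own illustrations; only the four category labels come from the text:

```python
# Sketch of the operant-conditioning grid: "positive" means a stimulus is
# added, "negative" means a stimulus is removed; "reinforcement" increases
# a behavior, "punishment" decreases it. The function name is illustrative.

def classify(stimulus_added: bool, behavior_increases: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    kind = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {kind}"

print(classify(True, True))    # positive reinforcement (e.g., fish for a flip)
print(classify(False, True))   # negative reinforcement (e.g., seatbelt beep stops)
print(classify(True, False))   # positive punishment (e.g., scolding)
print(classify(False, False))  # negative punishment (e.g., toy taken away)
```

The two inputs are independent, which is the point of the table: whether a stimulus is added or removed says nothing by itself about whether the behavior goes up or down.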

behaviorism

John B. Watson is considered the founder of behaviorism, a school of thought that arose during the first part of the 20th century, incorporates elements of Pavlov's classical conditioning, and applies the principles of classical conditioning to the study of human emotion. In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes, because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov's work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919).

Thus began Watson's work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned. Initially, Little Albert was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion: fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert's head, each time Little Albert touched the rat.
Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. - Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask (Figure 6.9). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson's intention to produce a phobia—a persistent, excessive fear of a specific object or situation— through conditioning alone, thus countering Freud's view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. Little Albert's mother moved away, ending the experiment, and Little Albert himself died a few years later of unrelated causes. While Watson's research provided new insight into conditioning, it would be considered unethical by today's standards.

primary and secondary reinforcers

Let's go back to Skinner's rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer. Primary reinforcers are reinforcers that have *innate reinforcing qualities*. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing, and the cool lake would be innately reinforcing: the water would cool the person off (a physical need), as well as provide pleasure.

A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out "Great shot!" every time Joaquin made a goal. Another example, money, is only worth something when you can use it to buy other things, either things that satisfy basic needs (food, water, and shelter, all primary reinforcers) or other secondary reinforcers. (Tokens are also considered secondary reinforcers.)

Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand (Figure 6.12). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn't throw blocks.

punishment

Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, *punishment always decreases a behavior*. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease a behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior. While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today's psychologists and parenting experts favor reinforcement over punishment: *they recommend that you catch your child doing something good and reward her for it*.

steps in the modeling process

Of course, we don't learn a behavior simply by observing a model. Bandura described specific steps in the process of modeling that must be followed if learning is to be successful: attention, retention, reproduction, and motivation. First, you must be focused on what the model is doing: you have to pay attention. Next, you must be able to retain, or remember, what you observed; this is retention. Then, you must be able to perform the behavior that you observed and committed to memory; this is reproduction. Finally, you must have motivation. You need to want to copy the behavior, and whether or not you are motivated depends on what happened to the model. If you saw that the model was reinforced for her behavior, you will be more motivated to copy her; this is known as *vicarious reinforcement*. On the other hand, if you observed the model being punished, you would be less motivated to copy her; this is called *vicarious punishment*.

What are the implications of this research? Bandura concluded that we watch and learn, and that this learning can have both prosocial and antisocial effects. Prosocial (positive) models can be used to encourage socially acceptable behavior. Parents in particular should take note of this finding. If you want your children to read, then read to them. Let them see you reading. Keep books in your home. Talk about your favorite books. If you want your children to be healthy, then let them see you eat right and exercise, and spend time engaging in physical fitness activities together. The same holds true for qualities like kindness, courtesy, and honesty. The main idea is that children observe and learn from their parents, even their parents' morals, so be consistent and toss out the old adage "Do as I say, not as I do," because children tend to copy what you do instead of what you say.

reinforcement schedules

Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior.

Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule: partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules (Table 6.3). These schedules are described as either fixed or variable, and as either interval or ratio:

- fixed = the number of responses or the amount of time between reinforcements is set and unchanging
- variable = the number of responses or the amount of time between reinforcements varies or changes
- interval = the schedule is based on the time between reinforcements
- ratio = the schedule is based on the number of responses between reinforcements

- pg. 202-203

fixed interval reinforcement schedule: behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.

variable interval reinforcement schedule: the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. For example, a restaurant manager whose company conducts quality checks at unpredictable times keeps his productivity steady regarding prompt service and keeping a clean restaurant, because he wants his crew to earn the bonus. [pg. 203]

fixed ratio reinforcement schedule: there is a set number of responses that must occur before the behavior is rewarded. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.

variable ratio reinforcement schedule: the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. *Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish.* - pg. 204
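The four partial-reinforcement schedules can be sketched as simple reward rules. The parameter values here (every fifth response, a 60-minute interval, and so on) are illustrative assumptions, not values from the text:

```python
import random

# Sketches of the four partial-reinforcement schedules. Each function
# decides whether the current response is reinforced; all parameter
# values are illustrative.

def fixed_ratio(n_responses, ratio=5):
    """Reward after a set number of responses (e.g., every 5th lever press)."""
    return n_responses > 0 and n_responses % ratio == 0

def variable_ratio(rng, mean_ratio=5):
    """Reward after a varying, unpredictable number of responses
    (e.g., a slot machine that pays off on average once per 5 pulls)."""
    return rng.random() < 1 / mean_ratio

def fixed_interval(minutes_since_last_reward, interval=60):
    """Reward the first response after a set amount of time
    (e.g., June's one painkiller dose per hour)."""
    return minutes_since_last_reward >= interval

def variable_interval(minutes_since_last_reward, rng, mean_interval=60):
    """Reward the first response after an unpredictable amount of time
    (e.g., quality checks that arrive at random)."""
    return minutes_since_last_reward >= rng.uniform(0, 2 * mean_interval)

print(fixed_ratio(10))     # True: the 10th response is a multiple of 5
print(fixed_interval(30))  # False: only 30 of the required 60 minutes elapsed
```

The extinction ordering stated above follows from these rules: on a variable-ratio schedule the gambler can never tell that reinforcement has stopped, while on a fixed-interval schedule it quickly becomes obvious that off-schedule responses are never rewarded.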

learning

is a relatively permanent change in behavior or knowledge that results from experience. In contrast to the innate behaviors discussed above, learning involves acquiring knowledge and skills through experience, and it involves a complex interaction of conscious and unconscious processes. Learning has traditionally been studied in terms of its simplest components: the associations our minds automatically make between events. Our minds have a natural tendency to connect events that occur closely together or in sequence.

associative learning: occurs when an organism makes connections between stimuli or events that occur together in the environment. Associative learning is central to all three basic learning processes discussed in this chapter: classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious.

classical conditioning: aka Pavlovian conditioning. Organisms learn to associate events, or stimuli, that repeatedly happen together. We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured: behaviors.

operant conditioning: organisms learn, again, to associate events, in this case a behavior and its consequence (reinforcement or punishment). A pleasant consequence encourages more of that behavior in the future, whereas a punishment deters the behavior. In operant conditioning, a response is associated with a consequence.
This dog has learned that certain behaviors result in receiving a treat. observational learning - extends the effective range of both classical and operant conditioning. In contrast to classical and operant conditioning, in which learning occurs only through direct experience, observational learning is the process of watching others and then imitating what they do. *All of the approaches covered in this chapter are part of a particular tradition in psychology, called behaviorism*

