Unit 6
innate behavior
Instinctive behavior that naturally occurs as a result of automatic inborn processes.
Once a conditioned behavior is extinguished, it can no longer appear again. F
A neutral stimulus causes no response. T
learning
A relatively permanent change or modification in behavior that occurs as a result of experiential processes.
conditioned response
A response that is elicited from a conditioned stimulus.
unconditioned response
A response to an unconditioned stimulus that occurs naturally and does not have to be learned.
operant chamber
A small enclosure designed for an animal to make responses as specific consequences are administered; also known as a Skinner box.
neutral stimulus
A stimulus for which there is no conditioned response.
discriminative stimulus
A stimulus that increases or decreases the likelihood of a particular response.
primary reinforcer
A stimulus that is naturally reinforcing because it satisfies innate needs.
secondary reinforcer
A stimulus that is reinforcing because of its association with primary reinforcers.
unconditioned stimulus
A stimulus that naturally provokes a behavior or response.
operant conditioning
A type of learning in which an organism learns to associate a behavior with a specific consequence.
classical conditioning
A type of learning that occurs when an organism learns to associate two or more stimuli with a specific event.
Factors influencing the process of classical conditioning include acquisition, generalization, discrimination, extinction, and spontaneous recovery. Let's look at how they're involved in the conditioning process. Acquisition is the initial stage of learning. It occurs when the neutral stimulus evokes the conditioned response. So in the first trial, a researcher will pair the neutral stimulus with the unconditioned stimulus, and the unconditioned response occurs. By the 30th trial, if the initially neutral stimulus evokes a conditioned response, then acquisition has occurred and the neutral stimulus has become the conditioned stimulus. The smell of cookies is an unconditioned stimulus, which naturally elicits a craving for cookies, which would be the unconditioned response. When grandma's house, which is a neutral stimulus, evokes a craving for cookies each time you visit, even if no cookies are baking when you arrive, acquisition has occurred and grandma's house has become the conditioned stimulus, evoking the conditioned response of a craving for cookies. Generalization is a learning process that occurs when a conditioned response is elicited from stimuli that are similar to the conditioned stimulus. For example, if you've been conditioned to fear grasshoppers because your brother used to throw them on you and tell you that they'd eat you when you were little, you may feel fear when encountering anything that resembles a grasshopper. You've generalized your fear to other bugs such as crickets, cockroaches, beetles, and cicadas. John Watson was one researcher who demonstrated generalization through his Little Albert experiment. He conditioned a young infant to fear white rats. In his experiment, the unconditioned stimulus was a loud noise. The unconditioned response was agitation or fear by Little Albert. The neutral stimulus was the white rat because at first it elicited no response. Watson paired the loud noise with the white rat to get the unconditioned response of agitation or fear.
Which process is responsible for the gradual diminishment of the association between a conditioned stimulus and a conditioned response? extinction
Which learning process occurs when a connection between a stimulus and a response is strengthened as learning begins? A. acquisition
A vervet monkey is in a tree eating bugs that are crawling along the branches. The vervet monkey sees a hawk circling overhead. The hawk is a predator of the monkeys. The vervet monkey cries out, warning the other monkeys to descend to the ground below the trees. In this scenario, what is the unconditioned stimulus for the vervet monkey's behavior? NOT the vervet monkey crying out
After a while, the neutral stimulus of the white rat became a conditioned stimulus because it elicited a conditioned response of fear. Albert generalized the same fear to other items that resembled white rats, like Watson's hair, or a fur coat, or a white rabbit, or a white beard. Discrimination is the ability to distinguish between the conditioned stimulus and other similar stimuli. So in our grasshopper example from earlier, discrimination would occur if you were afraid of a grasshopper but you weren't afraid of beetles or crickets. Here's another example: say you learned to fear dogs because of an attack by a pit bull. Anytime you're around pit bulls you get afraid. Discrimination occurs when you don't have the same response to other dogs like Golden Retrievers or Labradors. Extinction is a learning process that occurs when a conditioned stimulus no longer elicits a conditioned response. So after time, the conditioned stimulus is no longer associated with the unconditioned stimulus. It becomes a neutral stimulus again because it no longer elicits a response. But how long does it take for extinction to occur? The length of time it takes to extinguish a conditioned response depends on the strength of the bond between the conditioned response and stimulus. So extinction is basically the process of disassociating the stimuli. After a while without reinforcement from the unconditioned stimulus, in this case the food, the ringing of a bell stops being a useful predictor of food to the dog and will no longer elicit a response. Spontaneous recovery is a process that occurs when a conditioned response reappears after it has been extinguished. Ivan Pavlov suggested that spontaneous recovery occurs because the conditioned response is simply repressed rather than extinguished. Many individuals who smoke often report craving cigarettes upon smelling cigarette smoke. So the unconditioned stimulus is the smell of smoke. The unconditioned response is a craving for cigarettes. If there have been enough instances where one smells smoke when they're stressed, then eventually they begin to crave cigarettes when they're stressed, even without the smell of cigarette smoke. Conditioning has occurred when stress causes a craving for cigarettes. As individuals work to stop smoking, they will find that stress no longer causes cravings. Extinction has occurred because the conditioned response no longer happens. However, there may be a time when you're stressed and you crave a cigarette again because spontaneous recovery has occurred. The fact that some conditioned behaviors reappear so easily may be why Pavlov believed that a conditioned behavior that spontaneously reappears after being extinguished may only be repressed rather than having completely disappeared.
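To picture acquisition, extinction, and spontaneous recovery concretely, here is a minimal Python sketch added for illustration; it is not a model from the lesson, and the update rule and rates are assumptions chosen only to show the shape of the process. It tracks an association strength that grows with each bell-food pairing and fades when the bell is presented alone.

    # Toy illustration of acquisition and extinction (assumed, illustrative update rule).
    strength = 0.0           # how strongly the bell predicts food (0 = no association)
    LEARN, DECAY = 0.2, 0.3  # arbitrary illustrative rates

    def pair_bell_with_food():   # acquisition trial: bell paired with food
        global strength
        strength += LEARN * (1.0 - strength)

    def present_bell_alone():    # extinction trial: bell without food
        global strength
        strength -= DECAY * strength

    for _ in range(10):          # repeated pairings -> the association grows
        pair_bell_with_food()
    print(round(strength, 2))    # roughly 0.89: the bell alone now evokes salivation

    for _ in range(10):          # bell presented without food -> the association fades
        present_bell_alone()
    print(round(strength, 2))    # roughly 0.03: extinction has occurred

In this sketch the strength never drops all the way to zero, which loosely mirrors Pavlov's suggestion that an extinguished response may be repressed rather than erased and so can reappear later.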
Methods to enhance learning are demonstrated differently depending on one's culture. One way is collaborative learning. This is the type of learning that utilizes the unified intellectual efforts of both students and teachers. With collaborative learning, the student works with the teacher and their peers to increase their knowledge and subject understanding. This usually involves students working on projects together. They establish a unified effort to understand the meaning of concepts, to solve problems, or to create a product. Collaborative learning focuses on the benefits of working in a group. Group members guide other members in problem solving and provide assistance on difficult tasks. Collaborative learning approaches include cooperative learning, problem-centered instruction, and peer teaching. Cooperative learning involves organizing students into small groups so that students work together to increase understanding and learning. It was established from research on the benefits of social interdependence and interaction. Cooperative learning attempts to increase student achievement and enhance the development of interpersonal skills. It involves assigning roles to students and encouraging teamwork, and it includes group processing, or periods to reflect on learning. These are sometimes incorporated intermittently into learning in order to increase students' ability to analyze and process information. Possible positive effects of cooperative learning are that it increases student retention and enhances student learning satisfaction. It also assists in improving social skills. By relating concepts or verbalizing problems out loud, students learn to communicate more effectively, as they have to listen and collaborate together. Cooperative learning also assists in improving student self-esteem. Students feel more capable of achieving tasks if they know assistance from others is expected and allowed. Problem-centered instruction involves working on complex problems students must analyze and solve together. The approach is generally used in professional education, and grants students experience in solving real-world problems, through things like guided design. This is when students work in groups to practice decision making and sequence tasks. Or through case studies. A case study is a real-life story that sets up a problem or unresolved issue that students must solve. Or through simulations. These are complex role-playing situations that mimic or simulate real-life experiences. Possible positive effects of problem-centered instruction are that it assists in developing problem-solving abilities. It helps the student to understand complex concept relationships, and teaches the student how to make decisions in uncertain situations. This can help students develop specific rules that help in guiding them in solving the problem. Peer teaching involves students guiding and assisting other students in analyzing and solving problems. The more experienced student often guides less knowledgeable students in the area of study. Possible positive effects of peer teaching are that it provides experience in working with a team, and deepens the knowledge of the subject. The student who is teaching another student must understand what they're teaching and elaborate on the subject in order for the other student to comprehend what they're being taught. This elaboration deepens the student teacher's own knowledge. Peer teaching also improves interpersonal skills.
It increases student comfort and decreases feelings of inferiority. The student often feels more free to ask questions, express ideas, or question the content.
All of the following are factors that influence one's performance when self-regulating behavior except the __________. self-reaction to goal outcome
Behavior modification is a systematic method of conditioning strategies used to change one's behavior. It's used to improve one's self-control and can occur through therapeutic behavior modification or self-modification. Advocates for behavior modification believe that behavior is a result of learning or conditioning. They claim that individuals can be reconditioned to demonstrate effective behaviors. They point out that therapeutic behavior modification strategies have been used effectively in hospitals, schools, homes, and mental health centers, and that behavior modification techniques have been used effectively for improving control of one's behavior. Critics of behavior modification believe that individuals may become too reliant on external rewards. They claim that the new behavior may stop when reinforcers are no longer provided. Critics also raise ethical concerns involving the right to control another's behavior. They claim that the authorities in charge of reinforcing behavior can deprive others of what they desire. One type of therapeutic behavior modification technique is the use of the token economy. This is an operant conditioning behavior modification technique in which individuals earn tokens for demonstrating a desired behavior, such as doing their chores. They can exchange these tokens for privileges or prizes like candy or more hours watching TV. This is often used in institutional settings such as schools, hospitals, and mental health facilities. Self-modification is a self-implemented behavior modification program that assists in changing one's behavior. It consists of five steps. First, one must specify the target behavior to change. Then, one should gather data about the target behavior and design a program. Next, one should implement and evaluate the program. And finally, bring the program to an end. Specify the target behavior. The target behavior should be clearly defined. This is accomplished by evaluating past unwanted behavior and identifying wanted behavior. For example, a desire to stop complaining or yelling is more clearly defined than a desire to be less irritable. Gather data about the target behavior. The target behavior should be evaluated for a specific period of time in order to establish an effective program strategy. This involves monitoring various factors that influence the target behavior. You need to identify how often the target behavior occurs, note possible events that precede or trigger the target behavior, and identify consequences that follow the target behavior. Next, design a program. A program of intervention should be designed to increase or decrease the target behavior. This involves establishing goals that are challenging, yet realistic. You should try to answer the question: what will success or failure look like? If the program is not challenging enough, it will probably not lead to an improvement in the behavior. But if it's too challenging, it may lead to discouragement. Designing a program involves determining goals that signal an end of the program, for example, a specific body weight is reached, or a habit has been stopped for a specific period of time.
When you design a program to increase a desired behavior, this can be accomplished by establishing a reinforcer and conditions of reinforcement. The reward must be personally effective, and the conditions that must be met before the reward is given must be clearly defined. For example, you can spend $3,000 on a new wardrobe upon losing 100 pounds. Decreasing unwanted behavior can be accomplished in a variety of ways, for example, establishing a reinforcer that is administered when unwanted behavior is decreased. You might reward yourself with a new pair of shoes for eating less than 1,600 calories a day. You should avoid exposure to triggers. If you're trying to reduce or change your eating behaviors, you may want to avoid specific restaurants. And you should establish a form of punishment that is administered when unwanted behavior is demonstrated. This can be difficult to follow through with, though, because it is challenging to carry out the punishment every time. Next, implement and evaluate the program. Implementing a program involves testing one's established self-modification strategies and evaluating one's progress. This involves determining whether the program is successful or needs adjustments. If the plan works, maintain the plan until your final goal is reached. Use this assessed strategy to change other behaviors if needed. If your plan fails, identify the reason for failure and modify your existing plan. So if the plan doesn't work as planned, you'll want to determine more things you can do to help yourself meet your goals. Finally, bring the program to an end. The program should be phased out gradually. Reinforce behavior intermittently until the new habit completely forms without the need for reinforcement.
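To make the five self-modification steps concrete, here is a small hypothetical Python sketch of the kind of record one might keep; the field names, target behavior, and goal are illustrative assumptions and not part of the lesson.

    # Hypothetical record of a self-modification program (names and values are made up).
    program = {
        "target_behavior": "complaining",         # step 1: a clearly specified behavior
        "baseline_log": [],                       # step 2: gathered data on frequency, triggers, consequences
        "goal": "no complaints for 14 days",      # step 3: a challenging but realistic end point
        "reinforcer": "movie night",              # reward given only when the conditions are met
    }

    def log_occurrence(trigger, consequence):
        # Step 2: note each occurrence, what preceded it, and what followed it.
        program["baseline_log"].append({"trigger": trigger, "consequence": consequence})

    def evaluate(days_without_behavior):
        # Steps 4 and 5: evaluate progress, then phase the program out.
        if days_without_behavior >= 14:
            return "goal reached: reinforce intermittently, then end the program"
        return "adjust the plan: revisit triggers, reinforcers, or the goal"

    log_occurrence(trigger="stuck in traffic", consequence="argument at home")
    print(evaluate(days_without_behavior=5))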
All of the following represent methods of problem-centered instruction except ___________. independent exploration
self-regulation - The process of controlling thoughts, emotions, and behaviors in order to achieve specific goals.
self-efficacy - The belief in one's personal ability to accomplish a specific goal.
behavior modification - Systematic method of conditioning strategies used to change one's behavior.
token economy - An operant conditioning behavior modification technique in which individuals earn "tokens" for demonstrating desired behavior.
Individuals are capable of regulating their own behavior and learning strategies. Taking charge of one's behavior and learning involves self-regulation. This is the process of controlling thoughts, emotions, and behaviors in order to achieve specific goals. Self-regulated individuals tend to possess various skills that increase opportunities for effective learning and change. For example, they demonstrate high effort and persistence. They work hard and are persistent in the face of obstacles. They create optimal learning environments. They manage their productivity. And this allows them to have easy access to things that are necessary for achieving their goal. They plan and manage their time effectively, so they are not overwhelmed with their workload. They possess positive beliefs about their abilities. These individuals tend to be optimistic about their abilities and understand their strengths. They use social resources appropriately. They understand how to ask for help or seek advice. Finally, self-regulated individuals engage in self-instruction and self-reinforcement. There are three phases of self-regulation that help an individual effectively regulate their behavior when pursuing goals or desires. These include forethought, performance, and self-reflection. Self-regulation forethought involves setting a goal and determining strategies that are needed in order to implement the goal. Various factors are considered when one establishes a strategy for accomplishing the goal, such as self-efficacy, which is the belief in one's personal ability to accomplish a specific goal, or outcome expectations, which involve what the person feels will happen as a result of pursuing the goal and the perceived value associated with goal accomplishment. The performance phase of self-regulation involves actively pursuing a goal. The performance demonstrated in order to accomplish a goal is influenced by various factors, such as the ability to self-monitor. One must monitor their own behavior and adjust it in relation to how close they are to accomplishing their goal. Or level of motivation. Motivation determines if one will persist in the pursuit of their goals. Or level of attention, how well one focuses on the goal. Or the type of strategy selected to pursue the goal. One must utilize and engage in a strategy to pursue a goal. Or level of support from yourself and others. If you're encouraged by others to pursue your goal, it will influence your effort and motivation in working on your goal. Self-reflection is the last phase of self-regulation and involves evaluating progress and adjusting behavior. There are three processes that are included in self-reflection. They are self-observation, self-judgment, and self-reaction. Self-observation involves focusing on, or paying attention to, one's own behavior. This means observing strengths and weaknesses in one's performance and understanding where one stands in relation to a desired goal.
You might have to ask questions like, am I where I want to be in relation to my goal? Does the goal seem close or far away? Self-judgment involves evaluating one's performance against a specific standard, so determining whether the strategy applied is effective or ineffective and evaluating whether or not one has adequate abilities to accomplish a goal that is not yet accomplished. You may have to break your larger goals into smaller goals if you're feeling overwhelmed. You might have to ask yourself, what aspects of my behavior work and what do not when it comes to accomplishing my goal? If one believes they're working effectively enough to reach a goal, they're more likely to continue to pursue the goal. Self-reaction involves responding to evaluations about one's performance. This means adjusting your regulation strategies when needed. For example, if studying two hours a night is not improving your science grade, your strategy may have to change.
All of the following steps are included in a self-modification program except __________. focusing on one goal at a time
Decreasing target behavior in a self-modification program can be done in all of the following ways except ___________. establishing a reinforcer that is given when unwanted behavior is increased
Conditioning occurs when two events that aren't naturally paired together become associated with each other. For example, many racehorses are conditioned to run at the sound of the starting bell in a race. They learn to associate this sound with a sign to begin the race. An organism can be anything that learns. So classical conditioning is a type of learning that occurs when an organism learns to associate two or more stimuli with a specific event. This learned behavior can occur when researchers pair one stimulus with another stimulus that naturally evokes a specific response. Here's an example of classical conditioning. Individuals generally feel a little anxious going to the dentist and sitting in the chair, even if it's just for a teeth cleaning. If the dentist had to use a drill on one visit, you'd probably become more anxious when you heard the drill the next time. After several visits and unpleasant experiences, the sound of the drill alone would be enough to make you cringe, even if you weren't in the dental chair. You learn to associate the sound of the drill with the experience at the dentist's office. Ivan Pavlov was a Russian physiologist who discovered and studied the principles of classical conditioning. He found that a stimulus could become associated with a response. Pavlov was studying the digestive process of a dog. He noticed that the dog salivated when the food dish was nearby, even without food in it. Pavlov decided to investigate. So the initial conditions of Pavlov's experiment included food, which resulted in salivation from the dog, and a bell, which elicited no reaction at first. Pavlov rang a bell when serving food to the dog. When the dog saw the food, the dog salivated. And this process was repeated several times. In pairing the food with the bell, Pavlov got the dog to salivate. Pavlov's dogs then learned to associate the sound of the bell with the food. So when the dog heard the bell without the food, salivation occurred. Pavlov studied the principles and processes of classical conditioning for the rest of his career. He won many honors for his work, including the 1904 Nobel Prize. This encouraged the growth of physiology as a study. Pavlov demonstrated that classical conditioning is one way organisms learn to adapt to their environment. He demonstrated that learning can be studied as an objective science, which influenced behaviorism. Classical conditioning was also researched by John B. Watson. He was born in 1878 and died in 1958. He was influenced by Pavlov's work, and founded behaviorism. Watson believed all behaviors, although biologically influenced, are a result of conditioning. He conducted the Little Albert experiment, where he conditioned a fear response in a small child and conditioned an emotional response to rats. To him, this provided support for the claim that many learned behaviors were conditioned.
A stimulus that naturally provokes a behavior or response is known as a(n) __________. A. unconditioned stimulus
All of the following terms are used to describe the process of classical conditioning. They can help explain why we develop certain automatic reactions to things in our environment. The terms include the unconditioned stimulus, the unconditioned response, the neutral stimulus, the conditioned stimulus, and the conditioned response. The unconditioned stimulus is a stimulus that naturally provokes a behavior or response. Delicious food, for example, naturally provokes a reflex or reaction. The unconditioned response is a response to an unconditioned stimulus that occurs naturally and does not have to be learned. So salivating to delicious food is the response. The unconditioned stimulus and unconditioned response occur together naturally, so it's natural for your hand to recoil when it touches a hot stove. It's natural for you to put your hands out to catch yourself if you trip and fall. Or to have the urge to run if a wild animal charges at you. The neutral stimulus is a stimulus for which there is no conditioned response. Researchers pair the neutral stimulus with the unconditioned stimulus in hopes that the subject will link or associate the two. So initially, if you ring a bell near a dog, you'll get no response. The conditioned stimulus is an originally neutral stimulus that stimulates a response due to its association with an unconditioned stimulus. So if you ring a bell and give a dog food, the dog will salivate. Eventually the neutral stimulus becomes the conditioned stimulus, and you can ring the bell and the dog will salivate just at the sound of the bell. The conditioned response is a response that's elicited from a conditioned stimulus. Generally the unconditioned response and conditioned response are the same. In this case, it's just that the dog is now salivating in response to something learned, which is different from the naturally occurring stimulus. The difference between the conditioned stimulus and response and the unconditioned stimulus and response is that something that's unconditioned was never learned; it occurs naturally. The conditioned response and conditioned stimulus are a result of learning and associations. The unconditioned stimulus causes the unconditioned response. The neutral stimulus causes no response. The unconditioned stimulus and the neutral stimulus are paired, as the unconditioned stimulus causes the unconditioned response. The neutral stimulus becomes associated with the unconditioned stimulus, and eventually the neutral stimulus can elicit the response alone. The neutral stimulus becomes the conditioned stimulus. And the unconditioned response becomes the conditioned response. The conditioned stimulus causes the conditioned response. Freedom Avenue is a residential street that your best friend lives on. You initially have no anxiety when driving on it every day. One day, you get a traffic ticket for speeding on it, which causes you anxiety. The traffic ticket is the unconditioned stimulus. The anxiety is your unconditioned response. The neutral stimulus at first is Freedom Avenue, because it causes you no anxiety. After several times of being stopped and given a ticket on Freedom Avenue, you find that as soon as you turn onto the street, your heart starts racing and your palms sweat. The neutral stimulus has now become the conditioned stimulus, causing anxiety even without a police officer or a ticket.
So the traffic ticket is paired with Freedom Avenue, which causes anxiety, and the initially neutral stimulus of Freedom Avenue becomes the conditioned stimulus, which causes the conditioned response of anxiety.
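As a compact way to keep the terms straight, here is the Freedom Avenue example laid out as a simple Python mapping. The code is added for illustration only; the labels follow the lesson's own analysis of the scenario.

    # The Freedom Avenue scenario mapped onto the classical conditioning terms.
    freedom_avenue = {
        "unconditioned_stimulus": "traffic ticket",                # naturally provokes a response
        "unconditioned_response": "anxiety",                       # unlearned reaction to the ticket
        "neutral_stimulus": "Freedom Avenue, at first",            # initially causes no anxiety
        "conditioned_stimulus": "Freedom Avenue, after repeated pairings with tickets",
        "conditioned_response": "anxiety at the street alone, with no officer or ticket present",
    }

    for term, example in freedom_avenue.items():
        print(f"{term}: {example}")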
conditioned stimulus
An originally neutral stimulus that stimulates a response due to its association with an unconditioned stimulus.
learned behavior
Behavior that has been changed or modified as a result of a subject's experience.
Watson's "Little Albert" experiment demonstrated which of the following pairs of classical conditioning processes? NOT acquisition and discrimination
Classical conditioning occurs when the unconditioned stimulus evokes a response from a neutral stimulus. F
How does advertising use classical conditioning to help sell products? A. It trains people to associate the product with positive emotions.
In classical conditioning, the __________ stimulus causes an unconditioned response. unconditioned
Conditioning occurs when certain events begin to become associated with each other. There are two types of conditioning, classical and operant. Both types are ways in which animals learn new behavior, but they take different approaches. Classical conditioning is based on involuntary behavior. It uses a stimulus-response model, where the stimulus causes a response. Operant conditioning is based on voluntary behavior. It uses a response-consequence model, where the consequence shapes the response. Classical and operant conditioning are similar in that they deal with reinforcement of behavior, but they use different models to do so. Both classical and operant conditioning use the same terms, like acquisition, discrimination, extinction, and generalization, but they're applied differently, depending on the model. Classical conditioning looks at the link between a stimulus and response, whereas operant conditioning looks at the link between a behavioral response and the consequences or stimuli following this behavior. Operant conditioning is a type of learning in which an organism learns to associate a behavior with a specific consequence. In this form of conditioning, the behavior occurs first. Then the consequence occurs, which helps shape future behavior. The consequence of the voluntary behavior is what determines whether the behavior will strengthen or weaken. The consequences can be both positive and negative. Here's an example of operant conditioning. Say a small child tells a joke to his parents. They humor him and laugh, even though it may not be funny, which encourages the child. And eventually, after several instances of this, the child may tell unfunny jokes all the time. Let's look at how operant conditioning can discourage behavior. Say a child is prone to throwing temper tantrums to get attention. His parents want to extinguish this behavior, so they put him in time out for every tantrum. He learns to associate this unpleasant experience with tantrums and becomes less likely to throw temper tantrums. Two figures influential in operant conditioning were Edward Thorndike, with his law of effect, and B. F. Skinner, who developed the operant chamber, or Skinner box. Let's talk about both of them. Edward Thorndike was born in 1874 and died in 1949. He studied animal learning and observed problem-solving abilities in cats. Thorndike placed the cats in puzzle boxes which could be rigged to open once the cat inside correctly triggered a series of events necessary for escape. Thorndike noticed that as the cat was put in this situation over and over again, the time necessary for it to correctly guess the sequence was reduced if the behavior was rewarded. Based on this, Thorndike established the law of effect. It was a principle of learning stating that rewarded behavior is strengthened and likely to be repeated. This law became the starting point of B. F. Skinner's research. Burrhus Frederic Skinner was a well-known American psychologist. He was born in 1904 and died in 1990. He was interested in studying behavior. Skinner was only interested in studying a subject's behavior because it represented how learning could be measured. Skinner studied Pavlov's and Watson's discoveries and used their research as a basis for his own theory. Skinner believed that behavior was something that could be quantified, or scientifically tested, so he didn't give much credit to how thoughts influence behavior, because their influence couldn't be measured.
Skinner established the theory of operant conditioning. He believed behaviors that are followed by favorable consequences tend to be repeated, and he created many scientific experiments to test his theory. The Skinner box was a device used to condition behavior. This was also called an operant chamber. It was a small enclosure designed for an animal to make specific responses as specific consequences were administered. The Skinner box included an electrified grid, a response lever, a food dispenser, a loudspeaker, and lights. The chamber was designed to allow a variety of responses, and these responses were recorded by a measuring device. In the archetypical Skinner box experiment, Skinner presented a subject with a lever. Once the lever was pushed, food, or a reward, was dispensed to the subject. After the subject ate the food, it began to push the lever more and more, as long as food continued to be the reward.
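Here is a small, purely illustrative Python sketch of the law of effect at work in an operant-chamber setting: each rewarded lever press makes the next press a bit more likely. This is not Skinner's actual procedure or data; the starting probability and the increment are assumptions chosen for the example.

    import random

    # Illustrative only: probability that the animal presses the lever at a given opportunity.
    press_probability = 0.05
    STEP = 0.1   # assumed amount by which each food reward strengthens the behavior

    random.seed(0)
    presses = 0
    for opportunity in range(200):
        if random.random() < press_probability:   # the animal happens to press the lever
            presses += 1
            # Food is dispensed, so the rewarded behavior is strengthened (law of effect).
            press_probability = min(1.0, press_probability + STEP)

    print(f"lever presses: {presses}, final press probability: {press_probability:.2f}")

Pressing starts out rare and accidental, but because each press is rewarded, it quickly becomes the dominant response, which is the pattern Thorndike and Skinner described.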
In this section we'll discuss methods used to strengthen or weaken behavior. We'll talk about the learning processes that influence operant conditioning as well as the types of reinforcers used to change behavior. As we've learned in this lesson so far, the use of stimuli is what operant theories rely on to change behavior. When one is being conditioned, the stimulus or consequence that follows a behavior may either increase or decrease that behavior. Consequences or stimuli following a behavior have the potential to change behavior. If you wish to strengthen behavior, you use reinforcement. This is the addition or removal of a stimulus to increase or encourage a particular behavior. So when we say that a behavior is reinforced, we're saying that we want the behavior to be strengthened. If you'd like to weaken a behavior, you would use punishment. This is the addition or removal of a stimulus to decrease or discourage a particular behavior. Either method can be used in shaping and changing behavior. There are different types of reinforcers that can be useful in strengthening or weakening behaviors. A primary reinforcer is a stimulus that is naturally reinforcing because it satisfies innate needs, such as food and water or the need for affection. A secondary reinforcer is a stimulus that is reinforcing because of its association with primary reinforcers, for example, applause or money. The ability of a reinforcer to shape behavior is often dependent on the situational context. In other words, the situation influences how powerful a reinforcer is. What is a powerful reinforcer for one individual may not be for another. For example, if you're not hungry, the promise of food as a reward wouldn't be effective in getting you to help your friend move. The operant conditioning learning process includes acquisition, shaping, extinction, stimulus generalization, and stimulus discrimination. The initial stage of learning is called acquisition. When an individual has made an association between a behavior and the stimulus or consequence that follows it, acquisition has occurred. This can be acquired through shaping. For example, once you understand that every time you bring home A's on your report card your parents are more likely to let you stay out late, acquisition has occurred. Shaping is the gradual acquisition of a behavior that occurs by reinforcing closer and closer approximations of the target behavior. For example, if you want your dog to go to a ball and pick it up, you might first reward your dog with a treat after it just goes to the ball, so the dog becomes more likely to go to the ball. After a while, you would reward your dog with a treat only when it picked up the ball, which would mean the dog would be more likely to go to the ball and pick it up. Shaping can help form incredibly complex behaviors. B. F. Skinner taught his pigeons to walk in a figure eight and play ping-pong. Some dogs have been taught to skateboard. And guide dogs can be trained to stop at intersections and help people navigate. Say a researcher wants a pigeon to walk in a straight line, then turn left, then hop in the air. This method involves applying consequences to each behavior in a sequence of behaviors that will eventually lead to the target behavior. In this situation, it would be hard to do all of these at once. A pigeon is most likely not going to walk in a straight line, turn left, and hop in the air all at random. So you have to approach it in stages.
First, when the pigeon walks in a straight line, the pigeon will get food, which means the pigeon will walk in a line more frequently. Next, if the pigeon walks in a line and turns left, the pigeon will get food, which means the pigeon turns left after walking in a line more frequently. Finally, if the pigeon hops after turning left, the pigeon will get food, which will mean the pigeon will hop after turning left more frequently. A discriminative stimulus is a stimulus that increases or decreases the likelihood of a particular response. It's a cue that predicts the probability of a consequence occurring. For example, if before you leave for school, your parent asks you to mow the yard even though it's not one of your normal chores, this may be a cue for a reward because you were rewarded in the past for doing extra chores. The request to mow the yard is a discriminative stimulus because it may predict a future reward. Here's another example: if the owner yells, "paper," and the dog fetches the paper, the dog gets a treat. The dog is more likely to fetch when the owner yells, "paper." The yell of "paper" is the discriminative stimulus. When it's present, the dog is more likely to get rewarded, so it's more likely to engage in the behavior of collecting the paper. Stimulus generalization occurs when an organism responds in the same way to stimuli that are similar to the discriminative stimulus. For example, if the owner yells, "fetch," and the dog fetches the ball, the dog gets a treat. So the dog goes and grabs the ball when the owner yells, "Awesome play," in the same tone of voice while watching a football game as he would normally yell, "fetch." Stimulus generalization has occurred because the dog goes and fetches the ball when the owner says, "Awesome play," even though the owner didn't say, "fetch." Stimulus discrimination occurs when an organism learns to distinguish between discriminative stimuli and other similar stimuli. For example, if the owner yells, "fetch," and the dog fetches the ball, the dog gets a treat. Stimulus discrimination occurs when the dog learns to distinguish between the owner yelling, "fetch," and the owner yelling, "awesome play," during a football game. Extinction occurs as behavior gradually diminishes due to lack of reinforcement. For example, when a pigeon walks in a straight line and gets food, the pigeon will walk in a line more frequently. Extinction occurs when, after no reinforcement, the pigeon walks in a line less frequently. In extinction, you remove the consequence that's reinforcing the response, which makes the behavior less likely to occur until it is diminished completely. Resistance to extinction occurs when an organism continues the behavior after the reinforcement has been eliminated. This depends on the strength of the connection between the reinforcer and the behavior. Stronger connections may cause a behavior to continue longer. So if the behavior is deeply ingrained and has been reinforced for a long time, it may be more difficult to diminish. This is why some individuals have problems breaking bad habits. Behavior may also continue if it is being reinforced by other unobserved reinforcers or stimuli. For example, a dog may continue to fetch a ball after the treats stop. If there's no reinforcement and the dog continues to fetch the ball without a reward, this is resistance to extinction. The behavior continues even without a reinforcer.
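The pigeon-shaping sequence amounts to a list of successively stricter criteria, each reinforced until it is reliable. The short Python sketch below is an added illustration of that idea; the stage descriptions simply restate the lesson's example, and the reinforce function is hypothetical.

    # Successive approximations toward the target behavior (stage wording is illustrative).
    shaping_stages = [
        "walks in a straight line",
        "walks in a straight line, then turns left",
        "walks in a straight line, turns left, then hops",   # the full target behavior
    ]

    def reinforce(stage):
        # Delivering food after the approximation makes that approximation more frequent.
        print(f"pigeon {stage} -> food delivered, so this behavior becomes more frequent")

    for stage in shaping_stages:
        reinforce(stage)   # move to the next, closer approximation only once this one is reliable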
generalization
Learning process that occurs when a conditioned response is elicited from stimuli that are similar to the conditioned stimulus.
extinction
Learning process that occurs when a conditioned stimulus no longer elicits a conditioned response.
Classical and operant conditioning can't account for how individuals learn all behavior. For example, how one learns to drive a car or how one learns a specific dance. People can also learn by observing the behavior of others. Observation is an important component in learning and replicating many human behaviors. Observational or social learning may be influenced by mirror neurons. These are frontal lobe neurons that activate when performing or witnessing behavior. They enable imitation and empathy when observing others. Albert Bandura was born in 1925. He established social learning theory, which states that learning happens when people observe and imitate the behavior of others. Bandura claimed that observation influences conditioning processes. Here are some examples of behaviors that are influenced through social observation: driving a car, playing sports, aggression, dependence, and attitudes. What is observed can be learned from models in the environment. Models are individuals who are being observed. This could be anyone within your environment that you're watching do something, such as parents, other family members, peers, teachers, or media. Behavior can be learned through vicarious reinforcement. This is behavioral reinforcement that occurs as a result of observing the consequences of another individual's actions. It doesn't matter if modeling is intentional or accidental. If the stimulus-response-consequence model can be observed, modeling is occurring. So vicarious reinforcement occurs when behavior is observed and the witness's behavior is reinforced by the consequences experienced by the model. The reinforcement is not experienced directly by the witness. They create a mental model that contains the reinforcement. If a father is difficult with the sales staff when buying a new car and as a result he gets a great deal on the car, the son might observe his father's behavior and its consequence, and then the son will become more likely to be difficult with salespeople in turn, having seen the consequence of his father's behavior: that he's gotten a great deal on a car. Bandura conducted an experiment called the Bobo doll experiment to test the effects of observed behavior and vicarious reinforcement. In the first experiment he conducted, the children in group one witnessed an aggressive adult modeling behavior. In group two, the children witnessed a nonaggressive adult modeling behavior. And in group three, the children did not witness any adult modeling behavior. The outcome was that children who witnessed aggression were much more likely to act aggressively than those in the nonaggressive modeling group. In repeated experiments with the Bobo the Clown doll, Bandura added consequences to an adult's behavior to see the impact that it had on children. So if the adult displayed aggressive behavior towards the doll and was not punished for the behavior, the child observed this aggressive behavior and its consequences and became more likely to act aggressively towards the doll. This is the vicarious reinforcement: the adult was not punished for their behavior. By contrast, the children who witnessed aggressive behavior being punished were less likely to behave aggressively. The children who witnessed aggression being rewarded were more likely to repeat the aggressive behavior. Bandura agreed with behaviorist Edward Tolman in his claim that latent learning, or learning without demonstration, can occur.
Some other assumptions about social learning are that performance is not required for learning to occur. Bandura claimed that reinforcement influenced whether or not behavior was demonstrated. The demonstration of behavior is dependent upon situational factors, and the demonstration of behavior does not determine if the behavior was learned. Bandura elaborated on his social learning theory to establish the social cognitive theory. This is a learning theory that states that knowledge acquired through observation is influenced by the interaction of behavior, personal factors, and the environment. So one's behavior is influenced by the situation, just as the situation can be impacted by one's behavior. For example, if you saw your brother get grounded for breaking house rules on one occasion and not another, you'd take into account your own perception of his behavior as well as your own feelings and note the situation and how it influenced his consequence the first time around before deciding what your behavior would be.
mores - The behavioral norms and values of a given culture.
stereotype threat - Threat that occurs as a result of internalizing negative societal stereotypes.
disidentification - The process of separating one's identity from a particular aspect of performance or group.
collaborative learning - Type of learning that utilizes the unified intellectual efforts of both students and teachers.
Culture can __________ the context for certain behaviors and __________ behaviors with consequences. create . . . shape
Many behavioral psychologists would disagree with the social cognitive theory because most believe that behavior is __________. not affected by cognition
Albert Bandura claims that there are four processes that influence social learning: attention, retention, reproduction, and motivation. Attention. A person must pay attention to the behaviors and consequences of others in order to learn the observed behavior. Paying attention is vital to observational learning. You can't learn something you don't pay attention to. You can also pay attention to irrelevant factors, as well. Think about observing someone famous. You might replicate behavior of theirs that you find glamorous and ignore the hard work it took them to become a success. Retention. A person must retain the observed behavior by creating a mental representation of the behavior. This mental representation must be kept in memory, and you cannot replicate forgotten behavior. Reproduction. A person must be able to reproduce, or replicate, the observed behavior. Some behaviors might be too complex to replicate. For example, someone may not be able to replicate a behavior that's beyond their ability, like dunking a basketball. Motivation. A person must be motivated to perform the observed behavior. The observer must want to perform the behavior and see an advantage in replicating it. If there's no reward for one's behavior, a person is less likely to demonstrate this learned behavior. So what factors influence whether another will mimic the behavior of this person jumping a dangerous fence? If you're a kid looking at this situation, are you likely to model the behavior? First, attention. You must be paying attention, like understanding how someone got over the fence. Once you've paid attention, you must retain the information. If the retained actions of the fence jumper are in your mind, you'll be able to replicate the behavior. After retention is reproduction. Could you reproduce the behavior? If the behavior's not too complex, you could replicate the behavior. Finally, motivation. You have to be motivated in order to replicate the behavior and copy the other person. This might depend on the circumstances, like, is there punishment or consequences involved, or what were the reactions of others?
In order to __________ a certain behavior at a later time, individuals must retain the observed behavior by creating a mental picture of the behavior. demonstrate
We know that social learning exists, and that modeling and observation can lead to the transmission and learning of behavior. But what implications does that have for our society? Specifically, there are people who can transmit their thoughts, plays, and shows through media. So can media affect behavior? Media are forms of communication through which information can be transmitted to a large number of people very efficiently. Media include television and movies, the internet, video games, radio and books, newspapers, magazines, and advertising. In this section, we're also going to discuss the implications of social learning theory in regards to media violence in our society. We'll talk about media exposure, media content, and media effect. In the United States, the average American watches four hours of television per day. That's an average of 28 hours per week, and this includes children. The average American youth watches 1,500 hours of television per year, whereas they only spend 900 hours in school per year.
This amount of observation may even surpass the number of hours someone spends observing the real people in their lives, like their parents. Much of American television includes violence and aggression. 61% of television programs contain violence. 75% of violent actions on television are not punished, and 51% of violent acts on television are featured with no apparent pain. Aggressive behavior displayed through the media includes any behavior intended to harm another person. This may be physical or verbal. Research on the effects of media violence examines many kinds of outcomes on young people. Violent behavior is a form of aggressive behavior that can involve assault or murder. Some studies have focused on how media violence affects aggressive thinking, while others have focused on its effects on emotions. So what effect does media have on aggression and violence? The answer isn't clear because there's no definitive, conclusive evidence to say that watching violent media necessarily leads to violent behavior. Some studies show a correlation between viewing aggressive acts and aggressive behavior, while others show little effect on aggressive behavior, and still others demonstrate a positive social impact. Some studies show a correlation between viewing aggressive acts and aggressive behavior. Viewing violent behavior can decrease sensitivity to the effects of violence. Exposure to TV, video game, and movie violence may increase physical and verbal aggression. Some researchers claim that video game violence may be more influential on aggressive behavior because of the high level of attention and action involved in playing video games. Viewing violent behavior may increase aggressive thoughts or emotions, and violent media can reinforce negative stereotypes and harmful habits. But it's difficult to show causation between actual violent behaviors and media portrayals of violence because the causes of violence are often numerous and complicated, and because aggressive people may be drawn to aggressive media. Some studies show media can have a positive effect. Some viewers experience a release of aggression or tension by watching aggressive media, which helps to reduce aggression. But the long-term effects of whether or not this is beneficial are unclear. Educational programming can increase the learning level of younger viewers. Educational programs like Sesame Street and Blue's Clues can increase a young child's learning level and help them to develop healthy ways of coping with aggression and anger.
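The viewing-time figures quoted above hang together arithmetically. A quick check in Python, added here rather than taken from the lesson:

    # Quick arithmetic check on the television-exposure figures.
    hours_per_day = 4
    print(hours_per_day * 7)     # 28 hours per week for the average viewer
    print(hours_per_day * 365)   # 1,460 hours per year at that rate, in line with the roughly 1,500 cited for youth
    print(1500 - 900)            # youth spend about 600 more hours per year watching TV than sitting in school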
Culture is a system of behaviors and beliefs shared by a group of people. This could be the American culture, Californian culture, or youth culture. We learn about emotions, ideas, and more from the culture we live in. Culture establishes cultural expectations, values, and beliefs. In relationship to learning and behavior, culture has two effects. It creates a context for behaviors and shapes behaviors with consequences. Modes of appropriate behavior are established by culture, and these develop into mores. Look at the pictures and see the different ways people greet each other depending on their culture. At the top is the traditional way the Maori people of New Zealand greet each other. At the bottom is a traditional way people in the United States greet each other, with a handshake. Each custom is established by one's culture. Mores are the behavioral norms and values of a given culture. They establish cultural expectations, beliefs, and values. Diverse mores can influence group relationships. They can lead to conflict between cultural groups. But they can also lead to increased understanding and tolerance of other cultural groups. Mores are generally internalized as individuals begin to interact with various levels of a culture. In a diverse society, like the United States, it's possible for a person to interact with many people who have diverse mores. Look at this man's clothing. In what places is this outfit acceptable and in what places is this outfit unacceptable? It would be appropriate in an office, at a wedding, or at a funeral. But it would be inappropriate at the beach, in a pizza restaurant, or at the gym. So how do you know that your answers are reasonable? Where did you learn that a business suit is appropriate or inappropriate in certain situations? Are you sure that everyone will react the same way? The reason you know, or think you know, where this outfit is appropriate is because culture creates a series of expectancies about what is appropriate and what is not. Think about being at school on the playground versus in a classroom. How does culture encourage behavior in each context? You might behave differently on a playground or in a classroom because of societal expectations of the two places. For example, in a classroom you'd be respectful and studious. But you would be playful on the playground. You've learned these expectations whether you realize it or not. Now think about a speaker in front of an audience. What learning is going on? The audience is reinforcing the speaker's actions with their applause and attention. The speaker is reinforcing the audience's attention by talking to them and giving them attention. So culture can influence behavior by using reinforcements and punishments. Reinforcements could include peer recognition, fame, love, a promotion, good grades, or recommendations. Punishments could include time out, jail time, discipline, shunning, a lowered status, or labels. Social learning makes it possible for group members to learn indirectly. Reinforcements and punishments can affect behavior without being applied to every member of a group. Individuals within the same culture may choose to adopt or reject cultural mores. Similar cultural reinforcements or punishments impact individual behavior differently. This depends on the interpretation of experience. For example, you may or may not agree with your own culture's norms. As an individual, you can choose to adopt them or establish a different way of living and behaving.
Many environmental and cultural factors impact one's level of learning and academic performance. These could include educational opportunities, authority expectations, and stereotype threat. Maximizing learning opportunities leads to improved reasoning and problem-solving abilities. It can also lead to increased IQ scores and improved memory. Lack of learning opportunities has been linked to diminished expectations. Individuals not granted learning opportunities may be unaware of the opportunities that exist and come to expect that they can't change their circumstances. Lack of learning opportunities has also been linked to lower academic achievement, poverty, and behavioral problems, such as delinquency and problems with drugs and alcohol. Learning and performance are separate concepts. Performance is a demonstration of the learning. One can learn something fully, but not demonstrate knowledge effectively through performance. This can influence level of perceived achievement and may be the result of environmental and cultural factors. For example, you know that sometimes a bad day can influence test results no matter how hard you've studied and how well you know the information. Your behavior can be fully learned and understood, but poorly performed. If your learning is never accurately demonstrated, however, another's perception of your level of achievement can be distorted as well. The fact that sometimes people don't effectively demonstrate what they've learned may be influenced by environmental and cultural factors like authority expectations. Once an authority expectation is revealed, it can influence self-efficacy and self-esteem, which influences motivation and achievement. For example, many girls who haven't been encouraged by their parents to learn math and have been told they don't have the ability to learn it feel they aren't capable of doing so even when their scores in math reflect the opposite. Parental expectations are a strong predictor of academic performance and future achievement. High parental expectations can lead students to perform better in school, but they must rest on an accurate assessment of student ability. Low parental expectations could limit learning and achievement. Teacher expectations highly impact one's level of learning and academic achievement. Students learn of teacher expectations through the type of work assigned. Whether the student is assigned easy or difficult work is a reflection of the teacher's expectations for that student. They also learn of teacher expectations through teacher feedback and nonverbal behavior such as tone of voice and facial expression. If a student perceives that the teacher has low expectations, reduced motivation and effort often result. For example, if a teacher expects students to fail, they're less likely to succeed academically. If expectations are high, the reverse usually happens, as long as the expectations are within reason and the authority has an accurate perception of one's ability. Culture influences achievement. What is considered a success or accomplishment is often defined by culture. Achievement is often demonstrated by performance and awarded in ways established by culture. This could be reflected in academics, in one's career, or in one's personal life. Cultural attitudes towards groups are learned. These can lead to biases and stereotypes. An expression of these learned attitudes can lead to stereotype threat, where you feel threatened or even insecure about achieving.
This is especially true when the stereotype is about your lack of ability or skills. So a stereotype threat is a threat that occurs as a result of internalizing negative societal stereotypes. This generally has a negative impact on performance and self-efficacy. Stereotype threat was the main focus of Claude Steele's research. Claude Steele was a social psychologist from Stanford. Once a stereotype threat is initiated, influences on self-efficacy and self-esteem start to surface. This affects your motivation and performance. So an individual could experience conflict and anxiety and then wouldn't be able to focus enough to maintain motivation or perform well. It may also motivate someone to try even harder to disprove the stereotype. Individuals affected by stereotype threat often demonstrate low achievement. Some students influenced by stereotype threat eventually drop out of school. Self-efficacy and self-esteem usually decrease as well. Individuals may attempt to preserve their self-esteem by reducing their effort, devaluing the subject or area, or disidentifying with education in general or with the stereotyped group. Disidentification is the process of separating one's identity from a particular aspect of performance or group. This assists in preserving self-esteem. An individual may disidentify by ceasing to think of themselves as a math person or a soccer person, for example. All people can be affected by stereotype threat regardless of gender, ethnicity, religion, or economic status. Not all people who care about success worry about being stereotyped, though. Researchers found that those who identify with the stereotyped group and have high achievement abilities are most likely to be affected. Those who care about succeeding are often negatively impacted by stereotype threat. Negative cultural and environmental influences can lead an individual to thwart or limit their educational learning opportunities, as demonstrated by this cycle: a lack of opportunity, low authority expectations, and stereotype threat can reduce self-efficacy, motivation, and effort, which can cause one to devalue education. This may lead to decreases in academic achievement, which will limit learning opportunities, which will start the cycle over again.
negative punishment - Type of punishment used in operant conditioning in which the removal of a rewarding or pleasant stimulus decreases the tendency of a particular response.
Negative punishment is a type of punishment used in operant conditioning in which the __________ of a rewarding stimulus decreases the tendency of a particular response. removal
Learning is a relatively permanent change or modification of behavior that occurs as a result of experiential processes. For example, learning how to pull oneself up and stand on two feet. We know things like facts, such as names, locations, and times, and how to do things, like how to walk, write, and speak. The question is, how do you know things? For part of the answer, we're going to talk about learning. People learn through association, the linking of two concepts or stimuli that are often repeated together. Most learning takes place through association. So, for example, volleyball players have learned that when one player sets the ball, they will have a chance to spike it. So when they see a player getting ready to set the ball, they prepare to spike it. Seeing another player act causes them to alter their behavior and act in a specific way, so it's a learned behavior. We've learned to associate the walk sign, the white stick figure, with safety in our environment. So when we see the sign, we walk. Variations in experience can lead to variations in learning. Our background knowledge can lead to such variation. For example, a child may know how to fish because it's a pastime of his family members, and that prior knowledge can influence how much information he retains in school when he's reading a book about fishing. Consequences and rewards can influence learning as well. If someone is punished for a certain behavior as a child, they may be more likely to refrain from that behavior as an adult than someone who never received punishment for that particular behavior. An environment set up to promote critical evaluation and learning in children often leads to better reasoning and problem-solving skills than an environment that doesn't promote learning. And a lack of opportunities can limit one's understanding of ideas and concepts in life. Some behaviors automatically occur without conscious thought, like learned fears or aversions to particular foods. You weren't born hating a particular food, but a bad experience might have led you to dislike it automatically in the future. Learning must be demonstrated to be assessed. You cannot assess knowledge without demonstration, so you'll need something like a performance assessment. You may devise a way to test or assess whether learning has occurred. So have you learned your multiplication tables? We might have you take a multiplication test to figure that out. Learning can be invisible. Learning can still occur even if it's not shown. So, for example, you can't tell whether these men know their math facts or the capital of Iowa just by looking at them. A demonstration is necessary for them to show their knowledge. Learning can be both positive and negative. Individuals can learn healthy or unhealthy behaviors. Some positive behaviors that could be learned are being kind to others or taking responsibility for one's behavior. Negative behaviors that can be learned include unhealthy coping strategies, such as using alcohol or drugs to cope with problems to the point of interfering with healthy behavior. Prejudice and chronic lateness can also be learned.
Observable behavior can be classified into two categories: behavior that we learn from experience and from observing others, like how to ride a bike, and behavior that happens automatically, like our ability to yawn. In terms of observable behavior, you have innate, or instinctive, behaviors and learned behaviors. An innate behavior is a behavior that naturally occurs as a result of an automatic, inborn process. Learning, repetition, and practice are not necessary to perform innate behaviors. So even if a squirrel was born in a lab and had never seen a nut before, the first time it encountered a nut, it would know how to open it. A rhesus monkey who sees a threatening picture of another rhesus monkey will show a submissive gesture. Humans will crawl once their bones and ligaments are strong enough. Sea turtles will automatically move toward the ocean once they are hatched. And humans are born with the ability to smile. A reflex is a rapid, simple, and involuntary response to a stimulus. Babies have a grasping reflex: they'll grasp whatever comes into their hands. It's an automatic function. Reflexes occur without thought. Other reflexive behaviors include sneezing, drawing your hand away from a hot stove, and flinching. A learned behavior is a behavior that has been changed or modified as a result of a subject's experience, like learning how to ride a bike. Learning shapes many of our behaviors: our personal habits, such as biting our nails or lip; our personality traits, like being anxious or shy; our personal preferences, such as our likes or dislikes in movies and music; and our emotional responses, like the amount of frustration we feel. Many behaviors are complex, and they have both instinctive and learned components. For example, older locusts use less energy to fly than younger locusts. This fact suggests that although flying is instinctual, the locust has also learned a more efficient way to fly. Sleeping is an instinct that is shaped by environmental factors. So while sleeping is instinctive, how much and how well one sleeps is shaped by what's going on in the environment. Which of the following is an example of a learned behavior? human toddlers walking All of the following are shaped by learned behavior except __________. NOT shyness Both learned and innate behavior can occur as a result of automatic processes. T The fact that older locusts use less energy to fly than younger locusts suggests that __________. the older locusts have learned a more efficient way to fly All of the following are shaped by learned behavior except __________. NOT personal habits The fact that sea turtles automatically move toward the ocean once hatched is an example of a(n) __________ behavior. innate Innate behavior occurs as a result of practice and repetition. F All behaviors occur only after conscious thought. F Behavior that happens automatically without conscious input is called __________ behavior. innate A reflex is a type of innate behavior. T
law of effect
Principle of learning stating that rewarded behavior is strengthened and likely to be repeated.
Learning is a relatively permanent change or modification in behavior that occurs as a result of experiential processes. Cognition is mental activity that assists in acquiring knowledge and increasing understanding. So let's look at how cognition affects learning. Conditioned behaviors were first studied in psychology in order to assess one's learning ability. Conditioning occurs when certain events begin to become associated with each other. Behaviorists wanted to look only at observable behavior. They argued that cognition, or thinking and reasoning, is unobservable and therefore not subject to scientific inquiry. Behaviorists argue that learning must be demonstrated in order to be properly assessed, so you must demonstrate, or show, that you know how to ride a horse. Classical and operant conditioning do not account for thinking in behaviorist models. For example, individuals may react differently to the same stimuli. Some people react more strongly to rewards and punishments than others, which is an outcome not predicted by traditional models of classical and operant conditioning. By using cognitive theory, we can explain this outcome. Each person had a unique understanding of the stimuli and, therefore, a different belief about how things would turn out. So each one responded differently to the same stimuli. Cognitive influences on conditioning are now accepted in psychology, and the influence of cognition on learning has been demonstrated. Conditioning looks only at the causes and outcomes of behavior; cognition is not considered. With classical conditioning, the assumption is that a specific stimulus can be paired with an unconditioned stimulus to elicit a response. With operant conditioning, someone's behavior is conditioned to occur because of the consequences that reinforce that particular behavior. Many proponents of behaviorism are only concerned with the outcomes of learning. They don't see value in describing what happens between the stimulus and the response. Without accounting for this, however, we can't accurately predict all outcomes of the conditioned event. Cognition is happening between the stimulus and the response, or between the response and the consequence. After the stimulus, response, or consequence, the mind is taking information into account. We respond to the stimulus very quickly, so we don't notice a lag in processing. Since we can't observe cognition directly, we have to look for evidence of it, or signs that it changes how we behave. Let's look at how cognition plays a role in a conditioned behavior. Say you just bought a new car, and you want to take it out to see how it handles at high speeds. As a result of your speeding, or initial behavior, you get a ticket as your consequence. The ticket is a punishment that will work to reshape your behavior and, hopefully, get you to slow down. So different reasoning, such as "Do I have a good traffic record so far?", "Can I afford the ticket?", and "Is it worth the thrill?", will occur during or after receiving a ticket. Cognition helps determine if a consequence will work in controlling behavior. So cognition will determine whether or not receiving a ticket will prevent you from speeding again. You can use a computer as a metaphor for human thought. Cognitive processes are comparable to what a computer does. The computer responds to stimuli in the environment. The stimuli are inputs from the user, much like the information we collect through our perceptions. On a computer, this would come from the keyboard or mouse.
The computer compares the inputs to things in memory, or the RAM and hard drive of the computer. Then it decides on the behavioral response, or the programmed output. The output is then fed back to the user.
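To make this computer metaphor concrete, here's a minimal Python sketch (the function name and the stored associations are made-up examples, not part of the lesson): a stimulus comes in as input, it's compared against what's stored in memory, and a response is produced as output.

```python
# Illustrative sketch of the "mind as computer" metaphor described above.
# The stored associations are hypothetical examples only.

memory = {
    "walk sign lit": "cross the street",   # learned association
    "speeding ticket": "slow down",        # consequence stored from experience
}

def respond(stimulus: str) -> str:
    """Compare the input (stimulus) to stored memory and produce an output (response)."""
    # Input: the stimulus collected through perception (the "keyboard/mouse").
    # Processing: compare it to memory (the "RAM and hard drive").
    # Output: the behavioral response, fed back to the environment.
    return memory.get(stimulus, "explore / gather more information")

print(respond("walk sign lit"))        # -> cross the street
print(respond("unfamiliar stimulus"))  # -> explore / gather more information
```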
Research indicates that learning can occur even without direct behavioral demonstration. Even though original models of conditioning didn't take cognition into account, there's plenty of evidence to suggest that learning can be affected by the mental representations that an organism has. In other words, learning may appear invisible. It can occur through reading, watching others, or listening. Two researchers who helped demonstrate the influence of cognition on learning were Edward Tolman, whose research on rats led to the discovery of latent learning and the use of cognitive maps, and Robert Rescorla, who showed the influence of the predictive value of stimuli. Edward Tolman was born in 1886 and died in 1959. He was a behaviorist who conducted experiments with rats in mazes. He discovered the use of latent learning and cognitive maps. Tolman had three groups of rats in his experiment. The group A rats were reinforced every time they reached the end of the maze. The group B rats received no reinforcement for completing the maze. And the group C rats received no reinforcement until the 11th day. As expected, the group A rats became used to the maze and ran it faster and faster every time. The group B rats were slower in completing the maze. And the group C rats became the best at running the maze shortly after being reinforced. Tolman theorized the rats were learning the maze without reinforcement. He believed the rats did not demonstrate their learning because they weren't reinforced or rewarded for their behavior. Tolman concluded that the rats were forming cognitive maps of the maze, as well as experiencing latent learning. A cognitive map is an organism's mental representation or diagram of information. Latent learning is a type of learning that occurs without reinforcement and isn't obvious until demonstrated. So the rats in group C had been learning about the maze all along; they were just never motivated to show their learning. Tolman's conclusions were significant because at the time, most believed that learning occurred because of conditioning, and that no behavior could be learned without reinforcement. Tolman was one of the first to open the door to the influence of the mind on learning and behavior. Robert Rescorla was born in 1940. He was a behavioral neuroscientist who tested rats' responses to stimuli. He wanted to determine if variations in stimuli influenced conditioned behavior. Rescorla assigned a predictive value to the stimuli used in the experiment. Predictive value is the value of a conditioned stimulus that determines if a consequence will occur. Rescorla made some stimuli more likely to predict outcomes for certain rats than for others. Let's talk about predictive value. Say you're trying to cross a busy street. If the walk sign is lit, are you sure it's safe to walk? The lit sign is a stimulus that predicts safety, so usually when you see it, it's safe to walk. So when they see the sign, most people cross the street. The sign has a high predictive value for making people walk. Sometimes, a car will ignore the red light and try to race through the intersection. People who have seen this often are more likely to rely on looking both ways to cross the street and not use the sign. These people have learned that sometimes the sign means it's safe, and sometimes it does not. In this case, the walk sign has a low predictive value for those people. So a high predictive value means the outcome will usually occur after a stimulus.
A low predictive value means the outcome has no clear link to the stimulus. Rescorla demonstrated predictive value in an experiment where he tested two groups of rats. He paired a tone with a shock for both groups. In group A, the shock and the tone were paired 20 out of 20 times, or 100% of the time. In group B, the shock and tone were paired 10 out of 20 times, or 50% of the time; the other 50% of the time, the shock came unannounced. The first group of rats, group A, showed more fear of the tone, because the tone preceded the shock 100% of the time. In the second group, group B, the rats showed less fear of the tone because it was not as good a predictor for them, since the shock followed it only half of the time. It had a lower predictive value. The tone was a good indicator only for the first group; it had a high predictive value for when the shock would follow. Rescorla felt the rats in the second group were cognitively evaluating and understanding that the shock didn't always follow the tone. In the same way, you learn to repeat things and react to things in the environment that provide reliable predictors of when certain outcomes will occur.
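As a rough illustration of predictive value, the sketch below just restates the arithmetic from the experiment described above (20 of 20 pairings for group A, 10 of 20 for group B). The function name is invented for the example; this is not Rescorla's actual procedure, only the proportion behind it.

```python
# Schematic illustration of predictive value using the pairing rates from the text.

def predictive_value(paired_trials: int, total_tone_trials: int) -> float:
    """Proportion of tone presentations that were followed by a shock."""
    return paired_trials / total_tone_trials

group_a = predictive_value(20, 20)  # tone followed by shock 100% of the time
group_b = predictive_value(10, 20)  # tone followed by shock only 50% of the time

print(f"Group A: shock followed the tone {group_a:.0%} of the time -> high predictive value")
print(f"Group B: shock followed the tone {group_b:.0%} of the time -> low predictive value")
```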
discrimination
The ability to distinguish between a conditioned stimulus and other similar stimuli.
Operant conditioning uses reinforcement in order to increase behavior. Reinforcement is the addition or removal of a stimulus to increase or encourage a particular behavior. There are two types of reinforcement used to increase behavior: positive reinforcement and negative reinforcement. Positive reinforcement is a type of reinforcement used in operant conditioning in which the addition of a rewarding or pleasant stimulus increases the tendency of a particular response. Positive reinforcement has been identified by researchers as the most powerful reinforcement technique in conditioning. Here's an example of positive reinforcement: a man goes to work all week and receives a paycheck. Due to the positive consequence of receiving a paycheck, the man keeps going back to work each week. Negative reinforcement is a type of reinforcement used in operant conditioning in which the removal of an unwanted or unpleasant stimulus increases the tendency of a particular response. So positive and negative reinforcement both serve to increase behavior. The difference is whether a stimulus is added or removed to increase the behavior. Here's an example of negative reinforcement. Say you light a fire in the fireplace. Cold air is removed, which makes you more likely to light the fireplace again to warm up the house. The fact that an unwanted stimulus, the cold air, is removed is what makes this an example of negative reinforcement. Both positive and negative reinforcement strengthen or increase the behavior when properly applied. So, say you earn straight A's on a report card. You're either taken to an ice cream shop or you no longer have to do the dishes. Both have the same result: they strengthen the behavior. Being taken to an ice cream shop is an example of positive reinforcement because something is being added to the situation, you're getting ice cream. No longer having to do the dishes is an example of negative reinforcement, because something is being taken away, namely having to do the dishes. Some psychologists claim that the distinction between positive and negative reinforcement is basically meaningless, because there's no way to really differentiate between the two in many cases. A sweaty person jumps into the swimming pool on a hot day. The person becomes cool, which could be called positive reinforcement, and this makes them more likely to jump in the pool the next time they're hot. Or you could say the same sweaty person jumps into a swimming pool on a hot day and the heat is removed, which is negative reinforcement, because something is being removed. Either way, the result is the same: the person is more likely to jump in the pool the next time they're hot. The main point is that the action of jumping into the pool is reinforced.
The addition or removal of a stimulus is used to increase or encourage a particular behavior when reinforcement is used. In reality, individuals are not reinforced for behaviors all the time. Variations in reinforcement can influence the strength and impact that reinforcers have on behavior. For example, if you received $200 every time you made an A on an exam in school, rather than only at the end of the year, your motivation to study may be influenced even more than if the behavior were reinforced less often. Researchers use different schedules of reinforcement to determine the influence of variations in reinforcement on behavior. Testing the effects of varying schedules of reinforcement can also determine how often a behavior should be reinforced to effectively initiate a specific behavior. There are two schedule categories: continuous reinforcement and intermittent reinforcement. Continuous reinforcement is reinforcement that occurs when the behavior or response is reinforced after every demonstration. For example, every time you touch a key on the keyboard, it creates input on the screen. Intermittent reinforcement is reinforcement that occurs when a behavior or response is reinforced only some of the time. For example, you may get verbal praise after you surprise your parents by cleaning your room. Intermittent reinforcement is connected to greater resistance to extinction. Intermittent schedules of reinforcement are divided into two broad categories and four subcategories. Ratio scheduling includes fixed-ratio and variable-ratio schedules of reinforcement. Interval scheduling includes fixed-interval and variable-interval schedules of reinforcement. Ratio scheduling is a schedule of reinforcement that's based on the number of demonstrations. Interval scheduling is a schedule of reinforcement that's based on the amount of time. Fixed scheduling is when the reinforcement is consistently administered. Variable scheduling is when the reinforcement is inconsistently administered. A ratio schedule occurs when reinforcement is based on a number of demonstrations of target responses. A fixed-ratio schedule of reinforcement is a schedule in which reinforcement occurs after the behavior is demonstrated a specific number of times. A variable-ratio schedule of reinforcement is a schedule in which reinforcement occurs after a random number of behaviors are demonstrated. Here's an example of a fixed-ratio schedule of reinforcement. Remember, after a certain number of repetitions of a targeted behavior, the reinforcement is provided, and the number of responses required doesn't change on a fixed-ratio schedule. So if a parent wants to encourage their child to do homework, they may strike a deal: for every three homework assignments the child completes, the child gets a slice of pizza. A slot machine is a good example of a variable-ratio schedule of reinforcement. The reinforcement, or money, comes after a varying number of plays; the number of plays between payoffs changes each time. Interval scheduling occurs when reinforcement is based on the amount of time that passes between responses or behaviors. A fixed-interval schedule of reinforcement is a schedule in which reinforcement occurs after a specific amount of time has passed between responses. A variable-interval schedule of reinforcement is a schedule in which reinforcement occurs after a random amount of time has passed between responses.
Here's an example of a fixed-interval schedule of reinforcement. The system of worker compensation seen at most workplaces means that everyone is paid every two weeks. This happens as long as the employee keeps coming to work, so it's a fixed-interval schedule of reinforcement. Here's an example of a variable-interval schedule of reinforcement. Say someone hangs up the laundry to dry and checks it occasionally afterward. The reinforcement comes when they find a piece of dry clothing. Different materials dry at different times, so it may take 10 minutes to get one dry shirt, five minutes to get another, or 15 minutes to get a third. It's random, so it's a variable-interval schedule of reinforcement. Let's take a look at how each schedule influences one's behavior. A fixed-ratio schedule of reinforcement has a rapid response rate; the behavior is demonstrated faster. For example, with a fixed-ratio schedule, an organism will respond quickly once it learns that the reinforcement is coming every five times. A fixed-ratio schedule is easier to extinguish; the behavior is not repeated very long after the reinforcement stops. For example, if after the specific number of responses you still see no reward, you're likely to believe the reinforcement is no longer occurring. A variable-ratio schedule of reinforcement has a rapid response rate but is harder to extinguish, so the behavior continues to be demonstrated after the reinforcement stops. This is because the number of responses required for reinforcement varied to begin with, so the organism may be unsure if the reinforcement has stopped completely. A fixed-interval schedule of reinforcement has a slower response rate, meaning the behavior takes longer to be demonstrated until around the time of reinforcement. For example, people may hold off spending money until the night before payday because they know the reinforcement is about to come again. A fixed-interval schedule is easier to extinguish. A variable-interval schedule of reinforcement has a slower response rate, so the behavior takes longer to be demonstrated, and it's harder to extinguish, so the behavior continues after the reinforcement has stopped. This is because the time between reinforcements varied, so one is unsure whether the reinforcement has stopped completely.
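The four intermittent schedules boil down to a simple decision rule: count responses (ratio) or elapsed time (interval), with the threshold either fixed or varying. The sketch below is a minimal, hypothetical illustration of those rules; the function names and the threshold values (5 responses, 60 seconds) are invented for the example.

```python
import random

# Minimal illustration of the four intermittent schedules of reinforcement
# described above. Names and numbers are invented for the example.

def fixed_ratio(responses: int, n: int = 5) -> bool:
    """Reinforce after every n-th response (e.g., every 5th lever press)."""
    return responses % n == 0

def variable_ratio(average_n: int = 5) -> bool:
    """Reinforce after a random number of responses averaging n (like a slot machine)."""
    return random.random() < 1 / average_n

def fixed_interval(seconds_since_last: float, interval: float = 60.0) -> bool:
    """Reinforce the first response after a fixed amount of time has passed."""
    return seconds_since_last >= interval

def variable_interval(seconds_since_last: float, average_interval: float = 60.0) -> bool:
    """Reinforce the first response after a random amount of time, varying around an average."""
    return seconds_since_last >= random.uniform(0, 2 * average_interval)

# Example: a fixed-ratio schedule reinforces the 5th and 10th responses.
for response_count in range(1, 11):
    if fixed_ratio(response_count):
        print(f"Response {response_count}: reinforced")
```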
punishment
The addition or removal of a stimulus to decrease or discourage a particular behavior.
reinforcement
The addition or removal of a stimulus to increase or encourage a particular behavior.
shaping
The gradual acquisition of a behavior that occurs by reinforcing closer and closer approximations of the target behavior.
acquisition
The initial stage of learning.
Edward Thorndike's law of effect was a precursor to the theory of __________. operant conditioning
The psychologist who believed that behaviors that are followed by favorable consequences tend to be repeated was __________. B. F. Skinner
Behaviors never come back once they have been extinguished. F
When a response becomes generalized, then someone will react to things that remind them of the first stimuli that caused a response. T
negative reinforcement - Type of reinforcement used in operant conditioning in which the removal of an unwanted or unpleasant stimulus increases the tendency of a particular response.
continuous reinforcement - Reinforcement that occurs when a behavior or response is reinforced after every demonstration.
variable-ratio schedule of reinforcement - Schedule of reinforcement in which reinforcement occurs after a random number of behaviors are demonstrated.
fixed-interval schedule of reinforcement - Schedule of reinforcement in which reinforcement occurs after a specific amount of time has passed between responses.
intermittent reinforcement - Reinforcement that occurs when a behavior or response is reinforced only some of the time.
fixed-ratio schedule of reinforcement - Schedule of reinforcement in which reinforcement occurs after behavior is demonstrated a specific number of times.
Operant conditioning uses punishment in order to decrease behavior. Punishment is the addition or removal of a stimulus to decrease or discourage a particular behavior. There are two types of punishment used to decrease behavior: positive punishment and negative punishment. Positive punishment is a type of punishment used in operant conditioning in which the addition of an unwanted or unpleasant stimulus decreases the tendency of a particular response. So having to do extra chores for staying out late is an example of positive punishment. Extra chores have been added to your workload with the aim of decreasing the unwanted behavior. In positive punishment, we add adverse consequences for a behavior, which generally decreases afterward. Sometimes this happens automatically, like when you touch a light socket and receive an electric shock. Negative punishment is a type of punishment used in operant conditioning in which the removal of a rewarding or pleasant stimulus decreases the tendency of a particular response. So a pleasant stimulus is removed in order to decrease behavior. In this example, a child who's gotten into trouble has had her favorite teddy bear removed. Her parents are taking away something pleasant, her teddy bear, in order to decrease her bad behavior. It's important to realize that negative reinforcement is not the same as punishment. The first point is that negative reinforcement reinforces or increases behavior, so it makes the behavior more likely to occur, while punishment seeks to extinguish or decrease behavior. Negative reinforcement also uses a different mechanism than either kind of punishment: it removes an undesirable stimulus. For example, when you take a shower, you no longer smell bad, so you're more likely to take another shower. Punishment is the addition of an undesirable stimulus or the removal of a desirable stimulus, for example, being put in time-out or getting your car taken away. Research involving the effects of physical punishment links physical punishment to poor-quality parent-child relationships. The use of physical punishment can have many negative consequences. For example, it can lead to an increase in aggression, it may lead to an increase in the risk for delinquency and other behavioral problems, and it may lead to an increase in the risk for child abuse. However, critics argue that research conclusions concerning the effects of physical punishment are vague. For example, research fails to differentiate between mild and harsh punishment, so the results may not apply to mild forms of physical punishment. Also, it's difficult to establish a cause-and-effect relationship between physical punishment and negative consequences. It's unclear if physical punishment leads to negative issues or if negative issues provoke physical punishment. Research indicates that using punishment to shape behavior may have unwanted side effects. In order to use all forms of positive or negative punishment effectively, the following guidelines should be taken into consideration. Punishment should be applied immediately; a delay in punishment seems to decrease its effectiveness. Punishment should only be severe enough to be effective; more severe punishment tends to decrease unwanted behavior, but it also produces more negative side effects. Punishment should be consistent; responses punished every time they occur generally decrease more than responses not punished every time. Punishment should be explained and accompanied with reasoning.
And punishment should be non-physical, although research has shown that this is not as effective in decreasing behavior as other forms of punishment.
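Putting the reinforcement and punishment material together, the four consequence types come down to two questions: is a stimulus added or removed, and does the behavior increase or decrease as a result? The helper below is a hypothetical sketch of that 2x2 classification, using the examples from the text; it is not a standard psychology API, just a summary of the logic.

```python
# Hypothetical helper summarizing the 2x2 layout of operant consequences:
# (stimulus added or removed) x (behavior increases or decreases).

def classify_consequence(stimulus_added: bool, behavior_increases: bool) -> str:
    if behavior_increases:
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    return "positive punishment" if stimulus_added else "negative punishment"

print(classify_consequence(stimulus_added=True,  behavior_increases=True))   # paycheck for working
print(classify_consequence(stimulus_added=False, behavior_increases=True))   # cold air removed by lighting a fire
print(classify_consequence(stimulus_added=True,  behavior_increases=False))  # extra chores for staying out late
print(classify_consequence(stimulus_added=False, behavior_increases=False))  # teddy bear taken away
```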
latent learning - Type of learning that occurs without reinforcement and isn't obvious until demonstrated.
cognitive map - An organism's mental representation or diagram of information.
predictive value - The value of a conditioned stimulus that determines if a consequence will occur.
taste aversion - An effect of classical conditioning that occurs when an organism learns to associate a certain food or drink with an unpleasant or even dangerous reaction.
phobia - An extreme automatic fear response that is triggered by a specific stimulus.
preparedness - The predisposition of certain organisms to be conditioned in certain ways.
superstition - Learned behavior intended to effect an outcome upon which it has no apparent influence.
learned helplessness - A general sense of inability to affect one's position.
cognition - Mental activity that assists in acquiring knowledge and increasing understanding.
Learning that occurs without any demonstration is known as __________ learning. latent
Classical and operant conditioning were the first systems of learning to be studied in psychology that focused on __________ behavior. describing
variable-interval schedule of reinforcement - Schedule of reinforcement in which reinforcement occurs after a random amount of time has passed between responses.
positive punishment - Type of punishment used in operant conditioning in which the addition of an unwanted or unpleasant stimulus decreases the tendency of a particular response.
In operant conditioning, acquisition is the __________. point at which a behavior is associated with a specific consequence
positive reinforcement - Type of reinforcement used in operant conditioning in which the addition of a rewarding or pleasant stimulus increases the tendency of a particular response.
Conditioning states that if a behavior is reinforced enough, it should occur again. Psychologists have discovered that not all learned or demonstrated behaviors follow the established rules of conditioning. Some limitations or exceptions occur in the conditioning process, such as taste aversions, phobias, superstitions, and learned helplessness. Say you go to a restaurant and you eat chicken for dinner. Six hours later you feel nauseous and get sick. The next day, the smell of chicken cooking in the school cafeteria makes you feel nauseous. This is a taste aversion. A taste aversion is an effect of classical conditioning that occurs when an organism learns to associate a certain food or drink with an unpleasant or even dangerous reaction. The theory is that taste aversions happen because the body is biologically predisposed to rid itself of dangerous toxins, and taste aversion helps it do so. It makes you averse to a food if it's associated with something dangerous, like poison. Becoming nauseous when you again encounter a specific food or drink as a result of taste aversion breaks the rules of the conditioning process. For example, too much time has elapsed between the behavioral response and the consequence; the rule for conditioning states that a behavior must be immediately reinforced for an association to be made between the stimulus and response. And it only took one trial for conditioning to occur; the rule for conditioning states that one trial of association is not sufficient to condition a response. John Garcia researched taste aversion in rats. He was able to show that some associations are easier to create than others and concluded that natural associations occurring between toxins in food or drink and nausea help organisms survive. A natural aversion that develops will lead one to avoid toxic substances. Many people have a fear of heights. You might be a bundle of nerves when you see a drop like the one in this picture. But some fears are more easily conditioned, while others are unlikely to develop. A phobia is an extreme automatic fear response that is triggered by a specific stimulus. Common phobias involve things that have threatened us for a long time, such as snakes, spiders, heights, closed spaces, darkness, or strangers. Uncommon phobias involve modern dangers like knives, guns, hot stoves, cars, computers, and bridges. It's not that we're unafraid of these items; we're just not phobic about them, because there's no biological history of something like guns until very recently in the process of evolution. Martin Seligman suggested the reason developing phobias breaks the rules of conditioning is that we're predisposed to be conditioned to certain things. He called it preparedness, or the predisposition of certain organisms to be conditioned in certain ways. For example, it would be easier to condition a mouse to be afraid of snakes than it would be to make a mouse fear social situations. When you watch baseball, you might notice that some of the players go through strange rituals before they come up to bat. These are superstitions, or learned behaviors intended to affect an outcome upon which they have no apparent influence. The players perform them because they believe the superstitions may help influence their performance. But really, the superstitions do not affect the intended outcome of doing well at baseball. So why do people practice superstitions, and why do they keep doing them even when they appear to fail? B. F.
Skinner observed that pigeons who underwent intermittent reinforcement would behave in strange ways. For example, if they nodded their heads before the first time they were reinforced for pushing a lever, they'd continue to nod their heads before pushing the lever, especially if the reinforcement for doing so was only provided occasionally. Skinner concluded that superstitions are the result of intermittent reinforcement of accidental behaviors. So the pigeons created a cognitive model of their world in which they believed that each of their superstitions had a direct and tangible effect on the outcome of their experience. This is a cognitive explanation: mental models may inaccurately associate meaningless behaviors with consequences. So the reason superstitions break the rules of conditioning is that, for a behavior to continue, reinforcement needs to follow it, but in superstitions people associate a reward or punishment with the wrong behavior. Martin Seligman experimented on dogs to test the effect of having no control over environmental situations. In stage one, dogs were harnessed together and given identical shocks. Dog One was given a pedal that stopped the shocks for both dogs. Dog Two was given an ineffective pedal. By the end of stage one, the first dog had quickly learned to push the pedal, but the second dog had learned that it had no control over the shocks and soon stopped trying to do anything to avoid them. In stage two, the dogs were individually taken to a room that was split by a small wall. Shocks were given on only one side of the wall, and the dogs could freely move to the shock-free side. What did the dogs do? Dog One easily escaped to the non-electrified side, but Dog Two sat and cowered on the electrified side because it had learned to be helpless. Learned helplessness is a general sense of inability to affect one's position. So the reason learned helplessness breaks the rules of conditioning is that it demonstrates the influential role that cognitive processes and beliefs play in conditioning an organism.
vicarious reinforcement - Behavioral reinforcement that occurs as a result of observing the consequences of another individual's actions.
social cognitive theory - Learning theory that states that knowledge acquired through observation is influenced by the interaction of behavior, personal factors, and environment.