AP Psych Chapter 6

Extending Pavlov's understanding

Behaviorists dismissed anything mental as hogwash. They might just have gone too far though, and underestimated cognitive processes (thinking, perceptions, and expectations) and biological constraints.

associative learning.

learning that certain events occur together. The events may be two stimuli (as in classical conditioning) or a response and its consequences (as in operant conditioning).

Albert Bandura

pioneer in observational learning (AKA social learning); stated that people profit from the mistakes/successes of others. Studies: Bobo dolls. Adults demonstrated "appropriate" play with the dolls; children mimicked the play.

Variable ratio

reinforcer is given after a random number of behaviors. Think of pulling a slot machine handle, you never know which pull will win.

Variable interval

reinforcer is given after a random time period. Think of watching a bobbing cork and waiting for a fish to bite.

Fixed ratio

reinforcer is given after a set number of behaviors. Think of being paid for every 10 units you make on an assembly line.

Fixed interval

reinforcer is given after a set time period. Think of being paid every Friday.

Edward Tolman

researched rats' use of "cognitive maps"

B. F. Skinner

Skinner is well known for creating the Skinner box, which helped him in his operant conditioning experiments. These experiments involved teaching rats to repeat behaviors by giving them positive or negative reinforcers. Skinner strongly believed that external influences shape behavior. His experiments made clear the idea of operant conditioning and elaborated on the law of effect. Skinner designed the operant chamber and developed behavioral technology.

additional form of learning

One additional form of learning is through language. In this way, we can learn without experiencing something or watching someone else experience it.

There are two other types of reinforcers...

Primary reinforcers and conditioned (secondary) reinforcers

John Garcia

Garcia is known for testing whether all associations can be learned equally well. He did this by giving rats distinctively flavored water that would later make them nauseous; the rats then tended to avoid that taste. This led to the finding that some associations we are biologically prepared for, whereas others must be learned over time. Organisms are predisposed to learn associations that help them adapt.

Conditioned or secondary reinforcers

get their power by attaching to a primary reinforcer. This "attaching" must be learned.

John B. Watson

Watson is most famous for his Little Albert experiment, in which he conditioned an infant to fear a white rat by making a loud noise whenever the child was exposed to the rat. His experiment was based on the idea that human emotions and behaviors, though biologically influenced, are mainly a bundle of conditioned responses. This also helped demonstrate the ideas of generalization and discrimination.

partial reinforcement

when responses are sometimes reinforced and sometimes not, e.g., a salesman making a sale sometimes but not always

There are two main types of reinforcers

- anything that INCREASES a response: positive reinforcement and negative reinforcement

Classical conditioning is very broad

- many responses can be associated to many stimuli in many organisms.

CR (conditioned response)

- this is the response (which is the same as the UCR) - salivation.

The key to classical conditioning is that it's a natural thing, there is no decision involved. Usually it's a biological process over which the person/animal has no control.

A person could be classically conditioned using the pucker response to a lemon, or cringe response to fingernails on a chalkboard, or dilated eyes to the change from light to dark.

Television takes up a lot of our time and can therefore be a powerful tool in observational learning.

A person who lives to 75 years old will spend 9 years watching TV. 9 out of 10 teens watch TV daily. Kids see lots of violence on TV - 8,000 murders and 100,000 violent acts before getting out of grade school.

The "violence-viewing effect" occurs when...

A violent act goes unpunished (happens 74% of the time). The victim's pain is not shown (happens 58% of the time). The violence is somehow "justified" (happens about 50% of the time). The violent person is shown as attractive (happens about 50% of the time)

Applications of observational learning Antisocial effects of observational learning...

Abusive parents are more likely to turn out kids who turn into abusive parents. Men who beat their wives are more likely to turn out sons who beat their wives. Are these findings more due to nature (due to genetics) or nurture (due to upbringing)? A study of monkeys leans toward saying the cause is nurture.

five main conditioning processes...

Acquisition, Extinction, Generalization, Spontaneous recovery, Discrimination

There were definite ethical problems in this demonstration. Specifically, the APA's standard of "informed consent" wasn't met. Though Albert's mother gave the okay, he certainly didn't.

After resigning in a scandal (where he eventually married his assistant), Watson went on to work for Maxwell House and start the "coffee break".

Bandura's experiments

Albert Bandura is the top name in observational learning. He is most famous for the Bobo doll experiment.

People and animals have biological predispositions, meaning we're naturally good at some things and bad at others.

Animals easily learn to associate things that help them to survive. And, animals don't easily learn things that don't help them survive. For example, in one experiment, pigs were being taught to pick up wooden "dollars" then put them in a piggy bank as fast as possible. The natural urge of the pigs to root with their noses slowed down their time.

Introduction

Animals tend to live by an instinctive genetic code. For example, a salmon's life is pretty much hard-wired into its genes - birth, youth in the stream, adulthood in the ocean, then return to the stream to spawn and die. Humans also have an instinctive genetic code, but we can adapt to our environment, learn new behaviors, and make changes. Learning is defined as a relatively permanent behavior change due to experience. This brings up the question, "How do we learn?"

Cognitive processes

Animals, to a behaviorist, are simply very complex machines. Animals go beyond just robotic, mechanical reactions though. They get into prediction, a mental process. In one experiment for example, when accompanied with an unpleasant electrical shock, rats can distinguish light and a tone. They recognized that the tone was a better indicator of a coming shock. Thus, they predicted the likelihood of a shock based on the stimulus (light or tone).

Animals, and people, who are trapped or just feel that they're trapped become depressed and passive. This situation is called learned helplessness. This was seen in an experiment by Martin Seligman with caged dogs who were given shocks. They eventually cowered in fear. Even after the trap was "unlocked" and they could've escaped, they didn't try. A dog who had not learned the helplessness would quickly escape.

People certainly respond to their environment but the message seems to be that, with people especially, what we think about a situation matters as well. What goes on outside of us stimulates our behavior. What goes on in our heads also affects our behavior.

Skinner's legacy

B. F. Skinner was a controversial figure in psychology, mainly due to his belief that we were nothing more than biological reactors to stimuli. People were machines or robots. Still, he said that operant conditioning can improve our lives in fields such as work, home, school, sports, and self-improvement.

A typical Skinner Box was set up with a (1) food dispenser, (2) water dispenser, (3) a light bulb, (4) a speaker, and (5) a lever the animal could pull. An (6) electrical shock might be added as well.

Being a behaviorist, Skinner had to measure behaviors. So, each pull of the lever was tallied. Skinner typically placed pigeons or lab rats in his boxes. He used shaping to "teach" them to do things like walk in figure-8s or "play" ping pong. Shaping simply rewards desired behavior and directs the animal toward a desired behavior. At first, the animal is rewarded for getting close to the behavior, then rewarded for going a bit further, and finally rewarded for the actual behavior.

A discriminative stimulus is a stimulus that an animal can distinguish from another. A pigeon might respond to a speaker making a "ding dong" sound in its box, but might not respond to a "whoop whoop" sound. These discriminative stimuli enable us to determine things like, "Can a dog see colors?" (will they respond to one color but discriminate another).

Thorndike built "puzzle boxes" and put cats in them. The cat had to do a series of things to escape.

Being a behaviorist, Thorndike had to objectively measure the "learning" the cats made. So, he measured the time it took for the cats to escape after successive tries. The resulting graph showed a clear and typical "learning curve" - the cats learned quickly, then not quite as much, and then their learning leveled off.

Operant conditioning differs from classical conditioning in the following ways:

Classical - two outside stimuli are associated with one another. Operant - your actions are associated with consequences.

Pavlov's experiments

Classical conditioning falls under the psychological approach called behaviorism. Behaviorism is only concerned with observable behavior - things an animal or person does that can be seen and counted (measured). Behaviorists shunned the "mentalist" approaches as hogwash. They're unconcerned with anything that goes on in your head; they're only concerned with what you do, your behavior.

Two major lessons come out of Pavlov's work...

Classical conditioning is very broad - many responses can be associated to many stimuli in many organisms. Classical conditioning showed how something as abstract as "learning" can be studied objectively (with opinions stripped away).

To a psychologist, "learning" is more specific than what we think of learning in school. To psychologists, there are three main types of learning...

Classical conditioning occurs when we associate two stimuli and thus expect a result. Operant conditioning occurs when we learn to associate our own behavior (or our response) and its consequence. We therefore repeat behaviors with good results, we cut down on behaviors with bad results. Observational learning occurs by watching others' experiences. One additional form of learning is through language. In this way, we can learn without experiencing something or watching someone else experience it.

Latent learning part 2

Edward Tolman did an experiment with rats in a maze that showed latent learning. There were two groups of rats - one was given a reward at each correct decision. They got to the end quickly. Another group was given no reward until they finished the maze. Needless to say, they floundered around and it took them a long time. After each group finally learned the maze, however, the second group was able to run the maze even quicker than the first group. They'd developed a "mental maze." This learning didn't become apparent until later. Similarly, children learn things from parents and adults that they may not use until much later in life, perhaps when they become parents themselves.

Partial or intermittent reinforcement types

Fixed ratio, variable ratio, fixed interval, variable interval

Applications came out of Pavlov's work as well, such as in the health and well-being of people.

For example, a drug addict may re-enter the world and situations that were associated with getting high, then get the urge to take the drug again. Therefore, addicts are encouraged not to go back to those situations. Or, in an experiment, when a taste was associated with a drug that boosts the immune system, it got to the point where the taste alone was able to boost the immune system.

Conditioned or secondary reinforcers example

For example, in a Skinner Box, rats learned that pulling the lever (conditioned reinforcer) gave some food (primary reinforcer).

B. F. Skinner built on Thorndike's work and is likely the biggest name in operant conditioning.

He built "Skinner Boxes", which were contraptions in which an animal could manipulate things and/or respond to stimuli. The responses were measured. A typical Skinner Box was set up with a (1) food dispenser, (2) water dispenser, (3) a light bulb, (4) a speaker, and (5) a lever the animal could pull. An (6) electrical shock might be added as well.

Bandura felt that we... (Bobo doll experiment)

Imitate based on reinforcements and punishments that we see others get (or don't get). Will imitate people like us, who we see as successful, or who we see as admirable.

Applications of observational learning

In business, observational learning has been applied usefully to train workers. It's better to watch someone model the appropriate behavior than to just study it. Prosocial effects of observational learning... Antisocial effects of observational learning...

acquisition

In classical conditioning, the initial stage, when one links a neutral stimulus and an unconditioned stimulus so that the neutral stimulus begins triggering the conditioned response. In operant conditioning, the strengthening of a reinforced response.

People can learn without actually experiencing something. We can learn by watching others through what's called observational learning. Observational learning is learning by observing others, that is, learning without direct experience of our own.

In modeling, we learn by watching and mimicking others. This is simply like the old adage, "monkey see, monkey do."

Animals, to a behaviorist, are simply very complex machines. Animals go beyond just robotic, mechanical reactions though. They get into prediction, a mental process.

In one experiment for example, when accompanied with an unpleasant electrical shock, rats can distinguish light and a tone. They recognized that the tone was a better indicator of a coming shock. Thus, they predicted the likelihood of a shock based on the stimulus (light or tone).

Bobo doll experiment.

In this experiment, a child watched an adult beat up an inflatable clown. The adult yelled things like, "Take that!" in the process. The children were then placed into a "play room" and mimicked the adult by beating up the Bobo doll with almost the exact same actions and words as the adult model. Children who had not observed the adult were less aggressive to the doll.

Skinner's learning

Latent learning, insight learning, intrinsic motivation

But, there seems to be evidence that disagrees with Skinner's anti-cognitive beliefs...

Latent learning is learning that doesn't become apparent until later when it's needed. Until then, it remains latent (hidden). Edward Tolman did an experiment with rats in a maze that showed latent learning. There were two groups of rats - one was given a reward at each correct decision. They got to the end quickly. Another group was given no reward until they finished the maze. Needless to say, they floundered around and it took them a long time. After each group finally learned the maze, however, the second group was able to run the maze even quicker than the first group. They'd developed a "mental maze." This learning didn't become apparent until later. Similarly, children learn things from parents and adults that they may not use until much later in life, perhaps when they become parents themselves.

Insight learning is learning that comes all-at-once. You may be stumped on something, but then, all-of-a-sudden, the problem is solved in a flash.

Intrinsic motivation is the desire to perform a behavior for its own sake. This would be like reading a book just for the joy of reading it. It appears that offering rewards for something that's intrinsically motivated can actually decrease the motivation. It's as if the thinking becomes, "If they have to bribe me to do this, it must not really be worth doing." Extrinsic motivation is the desire to perform a behavior in order to get some type of reward. This would be like reading a book in order to get an "A" in class or to win a prize.

Biological predispositions

Like it or not, animals and people are hard-wired by their biology. We naturally tend to like certain things, dislike others, and we have limitations on what we can do. The early behaviorists (Pavlov, Watson) thought all animals were the same. To them, we're simply machines responding to stimuli (our environment). However, there are counterexamples to this idea...

We have mirror neurons that "fire" in the brain when we watch someone else doing an action. It's as though we're actually doing it, but we're just observing it.

Mirror neurons improve our empathy for others. It helps us to feel others' pain. More concrete examples are how we imitate others when they yawn or how it's difficult to smile when looking at a frown, or vice versa.

reinforcers

Negative reinforcement encourages a behavior by removing something bad. Punishment discourages a behavior by adding something bad.

Skinner's legacy: work

Operant conditioning can boost productivity. For this to happen, the reinforcement needs to be specific and immediate. General reinforcers, like "merit", don't cut it.

Skinner's experiments

Operant conditioning differs from classical conditioning in the following ways: E. L. Thorndike was the first big name in operant conditioning. B. F. Skinner built on Thorndike's work and is likely the biggest name in operant conditioning.

Skinner's legacy: home

Parents can notice good behavior in children, reward it, then watch the behavior increase. Yelling at the child doesn't seem to help. Pointing out what's wrong is okay, but the ideal is again, to notice good behavior in children, reward it, then watch the behavior increase.

He noticed dogs salivated at the sight of food. This is a natural reaction. He wondered if he could associate something unnatural to salivation.

Pavlov rang a bell, then fed the dog. The bell meant nothing to the dog. He repeated this over and over and over until, the bell did mean something - the bell meant food was coming! Eventually, the bell alone could cause the dog to salivate. He rigged tubes to the dog's neck to measure the salivation (and thus the response).

Ivan Pavlov is the godfather of behaviorism.

Pavlov was a Russian doctor who used dogs as his subjects. He noticed dogs salivated at the sight of food. This is a natural reaction. He wondered if he could associate something unnatural to salivation. Using this dog experiment, we can see the "parts" of classical conditioning... The key to classical conditioning is that it's a natural thing, there is no decision involved. Usually it's a biological process over which the person/animal has no control.

Mirrors in the brain

People can learn without actually experiencing something. We can learn by watching others through what's called observational learning. Observational learning is learning by observing others, that is, learning without direct experience of our own. We have mirror neurons that "fire" in the brain when we watch someone else doing an action. It's as though we're actually doing it, but we're just observing it.

Two key results seem to occur due to the violence-viewing effect...

People imitate the behavior they see. People become desensitized to violence - we're not as shocked if we see graphic violence.

Punishment is often confused with negative reinforcement. Punishment DISCOURAGES a behavior.

Punishment can be very effective if it's sure and swift. Physical punishment, like spanking, is looked down upon by some groups. Their belief is that:

However, there are counterexamples to this idea... biological predispositions

Rats associate best by using the sense of taste (rather than sight or sound). This may help them survive by distinguishing "okay" and "not okay" food to eat. Humans similarly associate very well by taste. Anyone who's ever gotten food poisoning will likely have a hard time going back to that food again. Men seem predisposed to find the color red attractive in females. The idea is that red is naturally sexy.

Applications of observational learning Prosocial effects of observational learning...

Role models can have a very real positive impact on young people. Observational learning of morality starts at a very young age and is real. Parents who live by the "Do as I say, not as I do" mentality tend to raise kids that wind up doing what they do. Then they in turn tell their kids, "Do as I say, not as I do." Hypocrites beget hypocrites. This shows, in a not-so-good way, the power of parental role models.

Negative reinforcement

STRENGTHENS a behavior by removing something unpleasant. Simply, you do something to make something bad go away. For example, you hit the snooze button to make the annoying sound stop. This increases the likelihood you'll hit it again. It's important to remember, negative reinforcement is not punishment.

Positive reinforcement

STRENGTHENS a behavior with a pleasurable stimulus after a response. Simply, if you do what's wanted, you get a doggie treat! For example, a dog sits and you give him a piece of a hot dog. This increases the likelihood he'll sit.

Thorndike came up with his "Law of Effect" which said a "rewarded behavior is likely to recur."

Simply put, if you do something then get a reward, you'll likely do it again.

Skinner's legacy: school

Skinner felt we'd eventually use computers alone to teach. The program would give instant feedback, right or wrong, then direct the learner to the appropriate next-step.

Operant - your actions are associated with consequences.

The animal or person makes a choice or decision about what it does. Realize that a "consequence" can be either bad or good.

difference between classical and operant conditioning

The differences between classical and operant conditioning can be summarized as...

Classical conditioning: links two stimuli together through association. Involves a natural, biological response. There is no decision made - Pavlov's dogs salivated naturally, biologically, with no decision of their own.

Operant conditioning: links a behavior to its results. There is a decision made here to do or not do a behavior. Behavior that gets reinforced is more likely to be repeated.

associations.

The phenomenon in learning that states we are better able to remember information if it is paired with something we are familiar with or otherwise stands out.

Physical punishment, like spanking, is looked down upon by some groups. Their belief is that:

The punishment isn't forgotten, but suppressed. It teaches that it's okay to do the behavior sometimes, but not others. It teaches fear of the one doling out the punishment. It might increase aggressiveness by modeling aggression itself.

Classical - two outside stimuli are associated with one another.

This is natural, automatic, and biological. Pavlov's dogs didn't choose to salivate. It was natural, automatic, biological.

Animals, and people, who are trapped or just feel that they're trapped become depressed and passive. This situation is called learned helplessness.

This was seen in an experiment by Martin Seligman with caged dogs who were given shocks. They eventually cowered in fear. Even after the trap was "unlocked" and they could've escaped, they didn't try. A dog who had not learned the helplessness would quickly escape.

E. L. Thorndike was the first big name in operant conditioning.

Thorndike came up with his "Law of Effect" which said a "rewarded behavior is likely to recur." Simply put, if you do something then get a reward, you'll likely do it again. Thorndike built "puzzle boxes" and put cats in them. The cat had to do a series of things to escape. Being a behaviorist, Thorndike had to objectively measure the "learning" the cats made. So, he measured the time it took for the cats to escape after successive tries. The resulting graph showed a clear and typical "learning curve" - the cats learned quickly, then not quite as much, and then their learning leveled off.

law of effect

Thorndike's principle that behaviors followed by favorable consequences become more likely, and that behaviors followed by unfavorable consequences become less likely. In his puzzle-box studies with cats, he timed them to see how long it took them to get out, with a fish being the reward.

Skinner's legacy: self-improvement

To better yourself, first state your goal in measurable terms then announce it to someone. Secondly, monitor how you spend your time and avoid wasting it. Reinforce the desired behavior with a reward. Gradually reduce rewards. This will move the motivation away from the extrinsic and toward the intrinsic.

Pavlov's legacy

Two major lessons come out of Pavlov's work... Applications came out of Pavlov's work as well such as the health and well-being of people. Building on Pavlov's work, John B. Watson became the second well-known classical conditioning behaviorist.

Using this dog experiment, we can see the "parts" of classical conditioning...

UCS (unconditioned stimulus) - this is the natural stimulus - the food. UCR (unconditioned response) - this is the natural response - salivation. CS (conditioned stimulus) - this is what's associated to the UCS - the bell. CR (conditioned response) - this is the response (which is the same as the UCR) - salivation.

Finally, the white rat and banging sound were associated. Merely the sight of the rat caused Albert to cry.

UCS = banging sound, UCR = crying CS = white rat, CR = crying

Extending Skinner's understanding

Up until his death, Skinner shunned anything cognitive - anything having to do with thinking or the mental. To him, even things going on inside your head are just more behaviors in response to stimuli. We're robots. But, there seems to be evidence that disagrees with Skinner's anti-cognitive beliefs...

Building on Pavlov's work, John B. Watson became the second well-known classical conditioning behaviorist.

Watson worked with a baby known as Little Albert, an average baby. Watson knew that babies/people have a natural fear of sudden, loud sounds. Also, babies do not have a fear of white rats. Watson associated the two.

Watson placed a white rat next to Albert. Albert wanted to touch the rat. As he reached out, Watson banged a hammer on metal just behind Albert. Albert was scared and cried. This was repeated over and over. Finally, the white rat and banging sound were associated. Merely the sight of the rat caused Albert to cry. UCS = banging sound, UCR = crying; CS = white rat, CR = crying.

There were definite ethical problems in this demonstration. Specifically, the APA's standard of "informed consent" wasn't met. Though Albert's mother gave the okay, he certainly didn't. After resigning in a scandal (where he eventually married his assistant), Watson went on to work for Maxwell House and start the "coffee break".

This brings up the question, "How do we learn?"

We learn by making associations. This is connecting events that occur one after another. These events can be good, like connecting the birthday song to eating cake, or bad like seeing a flash of lightning then hearing loud thunder. If a stimulus occurs normally in an environment, an animal's natural response may dwindle. This lessening of a response is called habituation. Think of the stimulus as becoming habit, so why respond to it? The examples above illustrate associative learning.

People certainly respond to their environment but the message seems to be that, with people especially, what we think about a situation matters as well.

What goes on outside of us stimulates our behavior. What goes on in our heads also affects our behavior.

Correlational studies have linked TV violence and real violence.

When TV came to America in the mid-late 1950s, homicide rates rose dramatically. The same tendency was seen in other countries that got TV later. Their homicide rates spiked too in sync with TV.

Skinner's legacy: sports

When learning a skill, we can start small, master that skill, then move to the next skill. Through shaping, we can gradually build toward the desired skill. Notably, athletes are well-known to be superstitious. If they do something just before hitting a home run, a ball player might start to think of that action as having something to do with the home run. He or she may likely repeat that behavior.

operant chamber

a box designed by B. F. Skinner that has a bar or key that an animal presses or pecks to release a reward of food or water, and a device that records these responses

intrinsic motivation

a desire to perform a behavior effectively for its own sake

extrinsic motivation

a desire to perform a behavior to receive promised rewards or avoid threatened punishment

cognitive map

a mental representation of a place; a person in a new place could make a cognitive map of the city

higher-order conditioning

a procedure in which a new neutral stimulus becomes a new conditioned stimulus (often weaker). All that is required for this to happen is for it to become associated with a previously conditioned stimulus. Ex: an animal that has learned that a tone predicts food might then learn that a light predicts the tone and begin responding to the light alone.

shaping

a procedure in which reinforcers gradually guide an animal's actions toward a desired behavior. Ex: if you were training a hungry rat to press a bar, you would give it food every time it approaches the bar; once the rat does this regularly, you would require it to move closer still, and finally you would require it to touch the bar before you gave it food.

Modeling

a process by which we observe and imitate specific behaviors

learning

a relatively permanent behavior change due to one's experience

respondent behavior

actions that are automatic responses to stimuli, e.g., salivation in response to meat powder and, later, in response to a tone

positive punishment

adding something bad (an aversive stimulus) to decrease a behavior

positive reinforcement

adding something good (a pleasant stimulus) to increase a behavior

Albert Bandura

After watching an adult pound, kick, and throw around a Bobo doll, a child is much more likely to do the same when frustrated than children who did not watch an adult do this. Apparently, observing the aggressive outburst lowered their inhibitions. This idea introduced us to observational learning, which can have both prosocial and antisocial effects on those learning. His experiment involving Bobo the inflatable doll led us to understand the concept of observational learning.

Primary reinforcers

are natural, they are unlearned, such as food or getting rid of pain.

prosocial behavior

behavior that is positive, constructive, and helpful. Encouraging children to read, reading to them, and surrounding them with books and people who read would be prosocial behavior.

operant behavior

behavior that operates on the environment, producing consequences

habituation

decreasing responsiveness with repeated stimulation. As infants gain familiarity with repeated exposure to a visual stimulus, their interest wanes and they look away sooner.

Spontaneous recovery

emerges even after extinction. This is when, after a time lapse, the association between the UCS and the CS reappears. The association is not as strong as before, and it will wear off again if the CS is not re-paired with the UCS. See the graph at the bottom of this page.

ivan pavlov

he paired neutral events (stimuli the dog did not associate with food but could hear) with food in the dog's mouth. Pavlov presented a neutral stimulus just before an unconditioned stimulus (food in the mouth); the neutral stimulus then became a conditioned stimulus, producing a conditioned response. Pavlov is famous for developing the idea of classical conditioning; he helped us understand this basic form of learning and taught us how to study it objectively.

Classical conditioning showed

how something as abstract as "learning" can be studied objectively (with opinions stripped away).

extinction

in classical conditioning, the diminishing responding that occurs when the CS no longer signals an impending US; in operant conditioning, when a response is no longer reinforced

us example

in Pavlov's experiment, the food in the mouth is the unconditioned stimulus

cr

in Pavlov's experiment, it is the salivation at the sound of the tone

ur example

in Pavlov's experiment, it is the salivation when the food is in the mouth

BF Skinner

a leading figure of behaviorism

Discrimination

is drawing the line between responding to some stimuli, but not others. For example, Pavlov's dogs might respond to a bell or a buzzer, but discriminate against a police siren. They're essentially saying, "The buzzer is like the bell, but the siren is not."

Insight learning

is learning that comes all-at-once. You may be stumped on something, but then, all-of-a-sudden, the problem is solved in a flash.

Latent learning

is learning that doesn't become apparent until later when it's needed. Until then, it remains latent (hidden).

Punishment

is often confused with negative reinforcement. Punishment DISCOURAGES a behavior (whereas negative reinforcement encourages a behavior by removing something unpleasant).

Intrinsic motivation

is the desire to perform a behavior for its own sake. This would be like reading a book just for the joy of reading it. It appears that offering rewards for something that's intrinsically motivated can actually decrease the motivation. It's as if the thinking becomes, "If they have to bribe me to do this, it must not really be worth doing."

Extrinsic motivation

is the desire to perform a behavior in order to get some type of reward. This would be like reading a book in order to get an "A" in class or to win a prize.

Extinction

is the diminished association between the UCS (food) and the CS (bell) after the UCS is removed. In other words, if you stop pairing the food and the bell, the link wears off - the bell goes back to meaning nothing to the dog.

Acquisition

is the initial learning of a stimulus-response relationship. This is where the dogs learned to associate the bell and food.

Generalization

is the tendency to respond to a similar CS. For instance, Pavlov's dogs might feel that a buzzer is close enough to a bell and they might salivate to a buzzer. Or, if they're conditioned to respond to a white light, they might also respond to a red light.

operant conditioning

learning in which behaviors followed by reinforcers increase and those followed by punishers decrease

Observational learning

occurs by watching others' experiences.

Continuous reinforcement

occurs when the reinforcement is given every time the behavior is done.

Partial or intermittent reinforcement definition

occurs when the reinforcement is not given after every behavior.

Classical conditioning

occurs when we associate two stimuli and thus expect a result.

Operant conditioning

occurs when we learn to associate our own behavior (or our response) with its consequence. We therefore repeat behaviors with good results and cut down on behaviors with bad results.

negative punishment

take away a good thing

negative reinforcement

take away a bad thing

learned helplessness

the hopelessness and passive resignation an animal or human learns when unable to avoid repeated aversive events

discrimination

the learned ability to distinguish between a conditioned stimulus (which predicts the US) and other, irrelevant stimuli

mirror neurons

neurons whose activity provides a neural basis for imitation and observational learning; when a monkey grasps, holds, or tears something, these neurons fire, and they likewise fire when the monkey observes another doing so

spontaneous recovery

the reappearance, after a pause, of an extinguished conditioned response

generalization

the tendency to respond to stimuli similar to the CS

UCR (unconditioned response)

this is the natural response - salivation.

UCS (unconditioned stimulus)

this is the natural stimulus - the food

CS (conditioned stimulus)

this is what's associated to the UCS - the bell.
