SOC 142 final Reading Qs


What are three different ways in which deprivation can occur?

- 1. Natural deprivation:
  o While engaging in long chains of activities that are reinforced by one type of reinforcer, people often do not notice that they are being deprived of other reinforcers. But when the long sequence of behavior terminates, they may notice some of their other states of deprivation.
  o For example, while playing basketball all afternoon, the players may not detect the increasing food deprivation, but when it comes to an end, some may say, "I feel starved!" Because active and busy people are usually very intent on the activities of the moment, they often do not notice levels of deprivation that others might consider aversive.
- 2. Deliberate deprivation:
  o People often deliberately abstain from eating before a big Thanksgiving meal because it makes the food seem more delicious and allows them to eat more of it.
  o People sometimes deliberately stay up until they are very tired before going to bed, because the deprivation makes them sleep more deeply, and they find this more pleasurable than the light, fitful sleep that follows minimal sleep deprivation.
- 3. Compulsory deprivation:
  o Sometimes couples have to live in different cities because of the location of their schools or jobs. When they do get a chance to see each other, they often find that the days of deprivation have enhanced the degree to which touching and having sex are positive reinforcers.

What are predictive stimuli in Pavlovian conditioning? Where do they come from? What do they produce?

- A CS (conditioned stimulus) contains important information: it functions as a predictive stimulus that signals that a biological reflex may soon be elicited.
  o In the case of bees, for example, the CS serves as a warning stimulus that signals danger by eliciting fear before a child carelessly touches one.
  o On a more positive note, the sight and odor of a beautifully cooked dinner are CSs that can elicit salivation and pleasurable anticipation of delicious food, even before it is served. In this case, the CSs are predictive stimuli for pleasurable feelings.
  o Our brains are sensitive to correlations between reflexes and the cues (CSs) that precede them. The cues that most reliably precede and predict the onset of a reflex are the most likely to become CSs.

Explain how counterconditioning takes place. Give examples.

- After a bee sting causes us to view nearby bees as CSs for fear, that conditioning can be changed by counterconditioning.
  o Some people learn how to work with bees using protective netting and clothing that prevent them from being stung, and gradually the fear goes away due to extinction.
  o Something else can also happen: they come to love bees and feel proud of what they do. All of these positive experiences can produce counterconditioning, and bees become positive stimuli that elicit happy emotions.
- Counterconditioning counteracts the effects of the original conditioning by combining extinction and new conditioning.
  o Counterconditioning occurs commonly in everyday life and is also a central procedure in cognitive behavior therapy.
- In everyday life, people's responses are often conditioned one way at one age, then conditioned another way at another age.
- Counterconditioning can work in the other direction, too, turning a once positive stimulus into something negative.
  o Some people are socialized to respond to powerful cars and fast driving as CSs for thrills and excitement, but these positive associations with speed can be counterconditioned if speed becomes paired with several aversive events, such as a car accident or a speeding ticket.
- The rate of counterconditioning is influenced by the number of positive and negative stimuli present during counterconditioning.

What might cause the spontaneous recovery of the child's fears after extinction?

- Although extinction weakens conditioned reflexes, conditioned reflexes regain some of their strength during periods between extinction experiences, due to a process that Pavlov called spontaneous recovery.
  o Whereas conditioning takes place when a CS is paired with a US, and extinction takes place when a CS appears without its US, spontaneous recovery occurs during periods after extinction—when the CS is not present at all. In the pause, when a CS is not on extinction, it can spontaneously recover some of the power it had before—and become a stronger CS than it was at the end of extinction.
- Spontaneous recovery requires no additional conditioning, only a pause since the end of the last extinction.
- Spontaneous recovery also occurs with positive emotional reflexes.
  o Infants who breastfeed learn to respond to their mother's breasts as CSs for pleasurable anticipation of nursing. When a mother ceases breastfeeding, her child's positive conditioned responses to the breasts are put on extinction. Each time the child reaches and is not allowed to nurse, the positive conditioned responses become weaker.
  o If the mother leaves for a brief period, there will be a spontaneous recovery of positive conditioned responses when she returns, and the child will make stronger positive responses to the breasts—looking and reaching more—than it did before the mother left.

How is the acquisition phase influenced by reinforcers for vigilance and attention?

- An observer's degree of attentiveness to a model can lie anywhere on a continuum from not paying any attention to focusing very close attention on the model's activities.
  o Clearly, there can be no acquisition when there is no attention; thus, increasing vigilance increases the likelihood that the observer will acquire information about the model's behavior.
  o Attention and vigilance are responses that can be modified by differential reinforcement, observational learning, prompts, and rules.

How does Pavlovian conditioning help explain how primary reinforcers can become more rewarding? How does Pavlovian conditioning help explain how primary reinforcers become punishers?

- Any US that is a primary reinforcer or punisher can be shifted to a new location on the continuum from strong reinforcers to strong punishers.
  o This modification occurs through Pavlovian conditioning when the US is predictive of, and frequently followed by, other reinforcers or punishers. For example, for a moderately hungry dog, food is normally a US that elicits salivation and reinforces relevant operants; however, if food is regularly presented before a powerful punisher, such as electric shock, food also becomes a CS that elicits aversive emotional responses. Food can thus become so powerful a punisher that it suppresses eating.

What is fading? Why do people fade out prompts?

- As learning proceeds, prompts can be gradually faded out until the target behavior occurs without prompts.
  o Fading is the gradual removal of a prompt as the prompted response comes under natural stimulus control; it is a natural after-effect of successful prompting. For example, as an actor learns the lines, the director phases out the prompts because prompting is no longer needed.
- As prompted behavior comes under the control of natural reinforcers, the director can also fade out any special social reinforcers that were used after the prompted behavior appeared.
  o New behavior is learned most efficiently if fading is done neither too rapidly nor too slowly, but is systematically paced according to the learner's rate of progress.

Are primary punishers influenced by satiation and deprivation effects? Why? Give the reasons and examples.

- Deprivation does not affect primary punishers, nor does a person cease to find them aversive due to satiation.
  o For example, after being cut several times in 30 minutes while learning to operate a new machine, people do not "satiate" on cuts and cease to find them aversive.
- The fact that punishers are not influenced by deprivation and satiation can be understood in terms of biological survival.
  o It is adaptive for individuals to cease finding food, water, or sex reinforcing when they have had enough of them.
  o However, it is not adaptive for us to cease responding to primary punishers after several exposures to them, because they pose a threat to health and/or survival whenever they appear.

What is deprivation? How does it affect primary reinforcers?

- Deprivation is the opposite of satiation: the longer a person has been deprived of a given primary reinforcer, the more power that US acquires for reinforcing behavior.
  o For example, when a person has not had access to food for many hours, food becomes increasingly reinforcing.
  o In everyday life, deprivation can result from a variety of causes: natural, deliberate, and compulsory.

How can having "practice dates" help a person learn better social skills?

- Differential reinforcement can be used as a means of behavior modification to help people acquire valuable social skills.
  o For example, Keith had difficulties meeting women and seldom got a date. To help Keith learn better skills for interacting with women, the therapist arranged for Keith to have a series of practice dates: three times a week a woman would go to lunch with Keith, listen to his conversation, and provide differential reinforcement for the good and bad variations in his social behavior.
    • She was instructed to give him feedback by making normal conversation when Keith interacted in any manner that a woman might enjoy, and by raising her hand and saying "boring" when he began rambling about uninteresting topics. As Keith continued with his practice lunch dates, he was less boring and received less negative feedback. He began to ask more meaningful questions that helped him learn even more about women his age and ways to interact with them in a mutually rewarding manner.
    • His behavioral changes led him to new levels of awareness, which is the goal of cognitive behavioral therapy (CBT). Differential reinforcement had successfully changed his interaction style.
    • An early response pattern underwent response differentiation as Keith learned to avoid being boring and to focus on topics of mutual interest.
    • Both he and his companions found the new style more rewarding.
  o It also increased his empathy with and understanding of others.

What is differential reinforcement? Give examples.

- Differential reinforcement has been used in behavior modification and cognitive-behavior therapy (CBT).
  o Sports psychologists use differential reinforcement to help athletes improve the quality of their performance to higher levels of sophistication.
  o People who want to run further, hike higher, bike longer, and so on benefit greatly from the use of differential reinforcement.
- Differential reinforcement means that a behavior is rewarded differently when it is done well than when it is done poorly.
  o The difference affects the way the behavior will be done in the future.
  o After sensing the differences in outcomes, people begin to do the behavior better in the future—since that is where the reinforcers are.
- After repeated differential reinforcement, the frequencies of the various responses are modified.
  o The responses that are reinforced become more frequent, whereas the responses followed by nonreinforcement become less frequent due to extinction.
  o Due to the different consequences, the child learns to attain successful results most of the time. In this type of learning, behavior gradually shifts from the early, undifferentiated pattern to the differentiated one. The operant skill of lace tying, for example, comes under such strong SD control that older children and adults can perform the task without paying attention.
- Response differentiation:
  o Differential reinforcement produces changes in behavior that are called response differentiation. An early, undifferentiated (unspecialized) response pattern becomes differentiated into two separate sets of behavior.
  o The process of response differentiation can be seen in many everyday situations. For example, when children are first learning to play baseball, outfielders are often less than outstanding at throwing the ball from the outfield to home plate.
    • Some throws reach home plate, but others fall short, and this leads to differential consequences and response differentiation.
  o Poor throws are not reinforced, and they may even be punished by criticism from the coach or other players. The good throws are reinforced by holding a runner at third base, producing an out at home plate, or maybe facilitating a spectacular double play.
  o During the early days after giving birth, new mothers hold their babies in a variety of different positions. However, most mothers—both left-handed and right-handed—eventually learn to hold their babies on their left side more than their right. This is due to the different consequences for holding the baby on the left or the right, and it explains the tendency to hold babies on the left.
    • Because a calm infant provides more reinforcers and fewer punishers for the mother, there is differential reinforcement for the mother to hold the infant on her left.
    • Any undifferentiated early responses of holding the baby on either side become differentiated into left-holding due to the differential reinforcement.
  o In many social situations, a person's opening lines can make or break a social interaction. A good first approach is reinforced; a poor one is punished. People who are frequently thrown into social situations will have their behavior modified by differential reinforcement.
    • For example, a door-to-door saleswoman may depend heavily on her introductory few sentences to get her foot in the door.

Define the word prompt.

- Prompts are often given to hasten the learning process.
  o For example, a teacher's uplifted hand prompts us to speak louder in class so that others can hear better.
- As learning progresses and prompts are no longer needed, most people naturally fade out the prompts they give to others.
  o Although prompts are less important than other components of the learning process, they do play valuable roles.
- Prompts are special stimuli that are introduced to control a desired behavior during early learning, although they are not needed once the behavior is learned.
  o Many prompts can be given almost effortlessly once we have learned how to give them.

How can vicarious emotions function as vicarious reinforcers or punishers that modify an observer's operant behavior? Give an example.

- Empathy for the feelings of others is based largely on emotional contagion and vicarious emotional responses.
  o For example, when we see a child wide-eyed with excitement while opening presents, the child's bright eyes, smiles, and laughter are likely to be CSs that elicit pleasant feelings in us. Since we cannot share exactly the same emotional responses that the child is having, our vicarious emotional response is pleasurable and somewhat similar to the child's emotions.
  o Naturally, the observer does not feel the exact same emotional response that the model feels; however, there is often a similarity in emotional responses if the observer has had enough emotional conditioning in situations similar to the one the model is experiencing.
  o The more similar the past social learning experiences of a model and an observer, the more likely it is that the observer can empathize with the model.

How can extinction be used in therapy? Give examples.

- Extinction can be used in cognitive behavioral therapy, especially when people have persistent fears that have bothered them for years and strong avoidance responses prevent extinction.
  o Therapeutic extinction involves having a person confront a fear-inducing CS in a safe environment that is free from all types of aversive stimuli.
  o As the person experiences the feared CS in the absence of other aversive stimuli, the CS loses its power to elicit fear.
- Cognitive behavioral therapy can consist of repeated therapeutic extinction experiences in which an individual stops avoiding the fear-inducing CSs: as people practice their most feared behavior in safe therapeutic settings, the conditioned fear begins to extinguish.
- The modern technology of virtual reality (VR) allows therapists to create computer simulations of feared events so people with phobias can face their worst fears in the safety of a computer studio.
  o After several sessions in which there is no US for pain, most feel decreased fear of the particular activities, and some even report having stopped avoiding them.

What is extinction? Give examples of extinction after positive reinforcement. Give examples of extinction after negative reinforcement.

- Extinction consists of the discontinuation of any reinforcement that had once maintained a given behavior.
  o When reinforcement is withdrawn, the frequency of the response declines.
  o For example, when an electric drill burns out and stops producing rewarding results, we learn to stop reaching for it until it gets fixed.
- Extinction can take place because no reinforcement is associated with a certain behavior, or because less reinforcement is associated with that behavior than with some superior alternative.
  o For example, we stop shopping at grocery store Y when we find it less rewarding to shop there than at store X.
  o We stop using the old route back and forth to home when a new expressway is completed that shortens the trip by 15 minutes.
- Antecedent stimuli that are regularly associated with nonreinforcement—or with less reinforcement than a superior alternative—become S-Deltas for not responding during extinction.
  o For example, before our car broke down, climbing in was an SD for turning the key and having the engine come to life.
  o But afterward, it becomes an S-Delta, and we learn not to turn the key anymore.
- When Gabriel first went to college, she talked about all the things that she used to love talking about with her high school friends in her hometown.
  o Some of the topics went over nicely, but others were of no interest to her college friends; thus, talking about the topics that did not interest her new friends was put on extinction, and she gradually stopped mentioning them while at college.
  o Her college friends became S-Deltas for not talking about certain topics; however, seeing her old high school friends provided SDs for bringing up those topics.
- Extinction after positive reinforcement:
  o When positive reinforcement for a given operant is terminated, the frequency of the operant usually declines. For example, Dr. Ryann had been earning $71,000 a year teaching at a nearby college, and he liked his work; but due to serious budgetary cutbacks, his college informed him that it could no longer pay his salary, although he could continue teaching if he wanted. Behavior maintained by positive reinforcement usually becomes less frequent when the reinforcement is removed.
  o Extinction can be a powerful therapeutic tool for dealing with behavior once maintained by positive reinforcement. Some children learn to whine, pout, or throw tantrums because these behaviors bring them attention from others.
    • Once that reinforcement comes to an end, the whining and shouting are extinguished.
- Extinction after negative reinforcement:
  o Negative reinforcement often leads to escape or avoidance responses. When negative reinforcement is terminated, the frequency of escape and avoidance usually declines.
  o The extinction of negatively reinforced behavior occurs as the second phase of a two-phase cycle. First, a person learns some activities that help escape or avoid an aversive stimulus. Second, the aversive stimulus is no longer present.
    • This two-phase cycle often corresponds to the coming and going of problems in life.
  o For example, in elementary school, a bully can cause many classmates to stay vigilant and avoid the bully if possible. Thus, in phase 1, the avoidance is learned via negative reinforcement because it helps avert painful experiences.
  o If the teacher succeeds in helping the bully learn to be nicer to his classmates, the former bully ceases to be aversive to others and avoidance of the bully may decline. This is phase 2: once it is clear the danger is gone, the avoidance is extinguished and declines.
  o Daily problems and temporary hardships can produce cycles of negative reinforcement and extinction. For example, if you happen to get into a bitter argument with a friend, the aversive experience creates negative reinforcement for avoiding your friend and/or for avoiding any potentially touchy topics in subsequent conversations. The mere mention of a touchy topic becomes an S-Delta for avoiding further words on that subject.

How does avoidance retard extinction? Give examples.

- Extinction occurs whenever a CS is present but is not followed by its US.
  o If a person avoids contact with a CS, extinction cannot take place; hence, avoidance retards extinction.
- For example, by receiving instruction and learning to ski without falling, M finds that the fears of skiing extinguish.
  o Good instruction hastens extinction by pairing the CSs for fears of skiing with the neutral stimuli of no falls and no pain. This instruction assures M that the CSs of skiing are no longer paired with the USs of painful falls; as a result, M's conditioned fears extinguish, and gradually the CS of being on skis loses its capacity to elicit fear.
- Without cognitive-behavioral interventions, some people go to great lengths to avoid CSs that elicit fear, and their avoidance patterns can persist for years, because they never have the extinction experience needed to neutralize the CSs.
  o For example, hearing about horrible airplane crashes causes some people to fear flying in planes, and they often suffer considerable inconvenience in making long-distance trips by other means of transportation. If they avoid air travel, their fears of flying cannot extinguish.
- Conditioned fears and anxieties are less likely to extinguish naturally than are conditioned pleasures.
  o The reason is simple: CSs that elicit fear motivate avoidance and hence retard extinction, whereas CSs that elicit pleasure motivate approach, which allows extinction if the CS happens to no longer bring pleasure. For example, the person who fears flying is likely to avoid air travel, which prevents the fear of flying from extinguishing. In contrast, CSs for pleasurable emotions are not avoided, allowing extinction to occur once these CSs cease to be paired with pleasurable experiences.

Explain how extinction takes place in Pavlovian Conditioning. What might you do to extinguish a child's fear of high places?

- Extinction occurs whenever a CS is present but is not paired with its US.
  o A child may have had enough falls from high places to condition views from heights into CSs for anxiety. However, once the child learned the skills for climbing mountains without having any painful falls, they had many exposures to the CS of views from high places without the US of falling. Gradually, they ceased to fear heights because the CS of views from heights was no longer paired with the US of painful falls. Thus, when a CS is no longer paired with a US, it gradually loses its ability to elicit conditioned responses, and the conditioned reflex (CS-CR) becomes weaker.
- Extinction can occur naturally at all ages of life.
  o Children often learn a fear of heights after several big falls in their early years; however, later, as they gain skills, they have fewer falls and their fears of falling decline due to extinction.
- During the early years of life, children are relatively small, weak, and vulnerable to painful experiences.
  o Depending on each one's unique experiences, they may learn to be afraid of various CSs—such as strangers, hypodermic needles, big dogs, etc.
  o After being conditioned to their peak, these CSs can elicit fear and crying.
- However, as the years pass and the child becomes familiar with many loud noises that are not predictive of pain, the childhood fears of CSs associated with loud noise, for example, gradually extinguish.
  o As long as a person has no direct bad experiences with lightning, for example, it is not predictive of USs for painful experiences and ceases being a CS due to extinction.
  o Other stimuli that elicit childhood fears may also lose their power to elicit fear as the years go by; thus, it is the process of extinction that allows us to "outgrow" childhood fears.
- Extinction is not limited to fears and other negative emotional responses.
  o Positive conditioned reflexes can also extinguish if they are no longer paired with their original USs. When a baby breastfeeds, the breasts become CSs that elicit pleasurable feelings for the baby—due to their being regularly paired with the USs of milk and gentle skin contact. When a mother ceases breastfeeding, the extinction process begins: the breasts are no longer associated with the USs of milk, and over the next months they gradually cease to elicit pleasurable CERs due to extinction.
- Similar cycles of conditioning and extinction occur at other times in life.
  o When a couple's home is destroyed by a fire that started because they left the stove on, they become conditioned to fear making mistakes that can cause fire. Leaving the house or going to bed without checking the stove becomes a CS that elicits fear.
  o After this conditioning, they may be very careful with the stove and other appliances for a long time; however, as time passes and if they have no more accidents, the extinction process begins to reduce the strength of the CSs for fearing fire, because the CSs are no longer paired with accidents.
- Extinction helps us understand why there is truth to the saying "time heals all wounds."
  o One day we make a serious error, and the CSs related to that error elicit strong CERs (conditioned emotional responses).
  o A month or two later, the CSs may still elicit strong feelings of embarrassment or guilt, but extinction takes place every day that our thinking about those CSs is no longer closely linked with making similar errors again. Gradually, the CSs lose their ability to elicit emotions.

Can you list at least five primary reinforcers?

- Food
- Water
- Comfortable temperature
- Rest
- Caresses/sex
- Primary reinforcers have strong or weak effects at different points in time.
  o They often take on especially strong powers to reinforce when we have not had access to them for a while, then lose their reinforcement power after we have had considerable recent exposure to them.

Can you explain how models create inhibitory effects, giving an example? Can you explain how models create disinhibitory effects, giving an example?

- For example, when a high school graduate arrives at college and sees that all the people she likes wear faded jeans, there is a disinhibitory effect on her wearing her old jeans frequently.
  o An inhibitory effect would look like this: if she saw her new college friends belittle a student who pursues activities she used to enjoy in high school, the observation has inhibitory effects, leading her to avoid these activities even though no one has ever criticized her for doing them. Numerous conformity effects—both good and bad—are mediated by this type of observational learning.

How do the contingency or noncontingency of reinforcers and punishers affect behavior? Why do noncontingent reinforcers and punishers have little effect on behavior, even if they are immediate?

- Generally, operant learning is most likely to occur when reinforcers and punishers follow immediately after an operant.
  o The longer the time delay between behavior X and its consequences, the less effect the consequences have on behavior X.
  o Any other responses Y and Z that may have occurred between behavior X and the consequences may be modified instead of behavior X.
- There are two qualifications about the extra power of immediate reinforcement.
  o Close time links between a behavior and a reinforcer or punisher do not always lead to operant learning, and long delays do not always prevent it. Both depend on the contingency of reinforcement—how clearly the consequence is causally related to the behavior.
- Operant behavior is modified by its consequences: reinforcers and punishers that only follow behavior by accident usually produce little operant learning, even if they occur immediately after a behavior.
  o When reinforcers and punishers are the actual consequences of a behavior, they are called contingent reinforcers and punishers—to indicate that the consequences resulted from the behavior.
  o The behavior produced the consequences. Contingent punishment suppresses behavior.
  o Any reinforcers and punishers that only follow behavior by accident are called noncontingent reinforcers and punishers—because they are not actually related to the behavior. Noncontingent reinforcers and punishers have little ability to produce operant learning, although under some conditions they can lead to unusual effects, as when lightning produces more joking about the rain gods.
- For example, if Tricia is working as a salesperson, she may learn to greet customers in a friendly manner.
  o If customer A smiles in return, the customer's smile provides contingent reinforcement for Tricia's behavior.
  o If Tricia gives a friendly greeting to a second customer, B, then slips and sprains her ankle, the painful sprain is not likely to suppress Tricia's friendly greetings, because the sprained ankle is not a consequence of giving a friendly greeting. Each behavior is modified most powerfully by its own consequences—not by the consequences of other behaviors. Thus, close timing of a behavior and a reinforcer or punisher is unlikely to produce operant learning if the reinforcer or punisher is not contingent on—but only accidentally follows—the behavior in question.

How does Pavlovian conditioning help explain how primary punishers can become rewarding? How does Pavlovian conditioning help explain how primary punishers become extra aversive?

- If a dog is made hungry by food deprivation and shock precedes the presentation of food, shock can be conditioned into a positive reinforcer.
  o After repeated pairings, the shock becomes a CS that elicits smacking of the lips and salivation, and it also functions as a positive reinforcer.

How does firsthand experience help a rule follower polish behavior first learned from rules? What kind of learning is going on during firsthand experience?

- If a rule is simple and the instructed operant is not difficult, a listener may perform the operant flawlessly the first time, after hearing the rule only once.
  o However, when rules are complex and require operant performances beyond a person's present level of skills, the rule user may need extra firsthand experience (differential reinforcement, shaping, observational learning, or prompts) in order to learn the behavior.
- As a person gains firsthand experience with any new activity, early rule-governed behavior undergoes important changes.
  o As people repeat the rule-governed behavior several times and gain firsthand experience with it, the clumsy and mechanical early performances usually are smoothed out under the influence of differential reinforcement, shaping, observational learning, and prompting.
  o As people follow a rule, they gain firsthand experience that leads to more "natural," coordinated, and subtle behavior. The early clumsiness and mechanicalness disappear as smoother and more polished behavior is learned.
- For example, a woman taking tennis lessons hears the rules for how to swing the racket, how to move on the court, and so on.
  o At first, her rule-governed behavior looks much more rigid and mechanical than her teacher's, but if the tennis student sticks with the game, her ability to apply and follow the rules will improve with practice and firsthand experience.

How can an art student learn to do self-shaping? Exactly what does the student do to alter his or her own behavior via shaping?

- If you are an art student, you have probably heard your art teacher comment on your drawings of faces: "The eyes are sensitively done in this picture."
  o Such comments can teach you how shaping is done, because when the teacher helps shape your skills in a series of positive steps, the teacher is also serving as a role model whose use of positive reinforcement you can imitate. You can thus learn to evaluate and reinforce your future artwork according to similar criteria.
  o If the teacher keeps raising the criteria for reinforcement as your skills improve, you may also learn to impose higher criteria for self-reinforcement with each step of progress. As you acquire these self-shaping skills via observational learning, you gain increased power to shape your subsequent improvements in drawing without total reliance on a teacher.
  o A series of adjustments in the reinforcement criteria creates natural steps of successive approximation that shape the continued development of artistic skills.

How can counterconditioning be used in therapy? Give examples. What would you have to do to countercondition a child's fear of dogs into a love of dogs? Explain the two types of therapeutic counterconditioning. Give examples.

- In cognitive behavior therapy, counterconditioning is used to reverse people's conditioning to CSs that elicit unwanted emotional responses.
  o There are two forms of therapeutic counterconditioning: systematic desensitization and aversive counterconditioning.
- In systematic desensitization, people overcome fears and anxieties by first pairing the CSs that elicit mild anxiety with stimuli that elicit relaxation and other pleasurable feelings.
  o After they feel comfortable at this first level, they move up a step at a time to CSs that had in the past elicited higher and higher levels of fear and anxiety. The process is called systematic desensitization to reflect the systematic, step-by-step nature of working through feared CSs—from the mildest to the strongest—and reducing the person's sensitivity to them.
  o Although this process can be done by imagining the fear-inducing CSs while relaxed, counterconditioning is more effective when people have real-life exposures to the feared CSs in completely safe situations.
- The systematic desensitization of a fear of public speaking, for example, involves taking a carefully planned series of small steps that can countercondition the fear.
- Systematic desensitization exercises are done in a series of gradual steps, and people may spend weeks or months overcoming their fears of one level of once-feared CSs before going on to the next step.
  o Along the way, they are also learning skills for avoiding bloopers, embarrassing topics, or words they cannot pronounce (in the case of public speaking).
- In aversive counterconditioning, a CS that elicits problematic positive emotions is paired with aversive stimuli; the CS gradually loses its attractiveness and becomes either neutral or aversive.
  o This type of cognitive behavior therapy is used with people who have strong attractions to activities that are self-defeating, dangerous, or socially unacceptable—such as child sexual abuse and compulsive or addictive behavior.
- Aversive counterconditioning is usually considered only a stop-gap method that must be coupled with other types of cognitive behavior change to have lasting success.
  o By pairing alcohol with aversive experiences, such as a nausea-inducing drug, we can help a problem drinker learn to find alcohol distasteful during therapy. In order to help former alcoholics avoid becoming reconditioned to love alcohol, therapy must go beyond aversive counterconditioning. For example, getting the person into an athletic group, community activity, service club, or engrossing hobby could help fill the hours that used to spiral downhill from boredom to booze to stupor.

What is haphazard shaping? Why is it common in everyday life? What problems can it create?

- In everyday life, people's behavior is often modified by haphazard shaping.
  o Individuals who have no skills for systematic shaping often reinforce our behavior without setting specific goals, without paying careful attention to which behavior they are reinforcing, and without using orderly steps of successive approximation. Although haphazard shaping can produce considerable behavior change, the change is often sporadic, chaotic, fraught with failures, and sub-optimally rewarding.
    • The use of slow steps reduces the risk that the student experiences the failures that can motivate avoidance of further steps of shaping.
- Haphazard shaping is often not conducted in small, slow steps.
  o Often an inappropriately large step is introduced, or a person is rushed to the next step before mastering prior steps.
  o Both rushing and taking big steps can be aversive and increase the risk of failure. For example, when someone with advanced skills at any activity introduces a friend to the activity, the skilled person may be eager to have the friend progress rapidly to high levels of skill—so that both of them can share the activity at the same advanced level.
    • As a result, the skilled person may rush the friend up the steps too soon or encourage the friend to take a big step before the friend has all the needed skills.
- Haphazard shaping with big steps may produce such punishing results that the student finds the activity aversive and avoids further shaping.
  o For example, parents who push a child to be good at baseball or other sports may rush the child through steps that are too big.
- During social interactions, people frequently shape each other's behavior quite unintentionally, and often haphazardly.
  o All three components needed for shaping may be present: response variability, differential reinforcement, and changing criteria of reinforcement. But they may not produce smooth improvements in behavior if they are not put together in the right order.
- Luckily, shaping can occur without people planning it.
  o Sometimes we do things smoothly, then bungle the next few actions.
  o Other people may provide differential reinforcement for these behavioral variations—responding differently to good and bad variations by showing enthusiasm or boredom, friendliness or hostility.
- The patterns of differential reinforcement can change in a stepwise manner—as happens when we leave our hometown to attend college, and then move to a city after graduating.
  o Haphazard shaping may well occur without any plan or design. Many of us undergo significant behavioral changes when we move to new social environments.
- When an individual's behavior is being shaped by social feedback from two or more people with dissimilar values and goals, the individual's behavior may be shaped in multiple—and sometimes conflicting—directions.
  o Having one's behavior shaped in two different directions can be very stressful and aversive if the two behavioral repertoires contain incompatible responses.

How does technology develop as a result of shaping?

- In one sense, much of science and technology has been shaped by successes and failures in dealing with nature.
  o When people started building kites, gliders, and motorized aircraft, some designs were more successful than others. Successes and failures provided the differential reinforcement for improving the aircraft, and through response generalization, even better forms were created.
  o As new scientific principles were discovered that further improved designs, there were reinforcers for making each new step in advancing scientific research. Naturally, other kinds of learning (such as observational learning, prompting, and rule use) are involved in any complex technological development; but in the final analysis, success and failure at dealing with the natural environment shape the course of technological development.

In what sense are secondary reinforcers and punishers informative? When are they "good news" or "bad news"?

- Information is a key determinant of the power of secondary reinforcers and punishers.
  o For example, seeing a gift box with your name on it is informative; it is a secondary reinforcer because the information predicts that trying to open the box may lead to reinforcement. Seeing wasps in the house, in contrast, is informative in another way; it is a secondary punisher because the information predicts that approaching them could lead to painful stings.
- The amount of information a person obtains from a stimulus depends on two things:
  o How accurately the stimulus predicts reinforcement or punishment.
  o How well the person has learned to recognize and respond to this predictive stimulus.
- First, if a stimulus always precedes reinforcement or punishment, it is a much better predictor than stimuli that only occasionally precede and predict reinforcement or punishment.
  o The more reliably a stimulus is predictive of subsequent reinforcers or punishers, the more informative it can be, and the more power it can have as a secondary reinforcer or punisher.
- However, the potential information available from a highly predictive stimulus is useless to people who have little or no experience with that stimulus and the reinforcement or punishment it predicts.
  o Thus, learning is the second determinant of the amount of information a person can find in a stimulus: a person must have had adequate experience with predictive stimuli and related consequences to learn to find them informative as secondary reinforcers or punishers. For example, a young child who has had no experience with gift boxes or wasps does not respond to the sight of them as predictive and informative stimuli the way more experienced children do.
- Various elements of the stimulus collage can carry information predictive of subsequent reinforcement, adding to the power of secondary reinforcers for those people who have learned to respond to the predictive information.
  o A nicely wrapped box will not be a secondary reinforcer for you if it is not your birthday and someone else's name is on the box. If it is not your birthday, but your first name is clearly on the gift box, there is more predictive information. Finally, if it is your birthday and your name is on the box, the box will be a strong secondary reinforcer for you, eliciting only good thoughts and feelings, with no apprehension or doubts. Various elements in the stimulus collage—the box, the name tag, the day of the year, and context cues—contribute to the predictive information that determines how strong a secondary reinforcer can be.
- Likewise, various elements of the stimulus collage can carry information predictive of subsequent punishment, adding to the power of secondary punishers for people who have learned to respond to the predictive information.
  o If you are driving 10 miles an hour above the speed limit, the sight of a police car sitting in a side street just ahead might be a predictive stimulus that signals the danger of receiving a speeding ticket. However, the strength of the secondary punisher depends on the total information contained in the stimulus collage.
    • If you live in a city where everyone comments on the way the police arrest every violator they find, the sight of the police car is a strong predictor that speeding will be punished.
    • Hence it is a secondary punisher that punishes speeding, elicits noticeable emotional responses, and sets the occasion for reducing speed.
  o However, if the police in your town only stop out-of-state cars and rarely bother local drivers, the sight of the police car signals quite different information, and the car is less likely to serve as a secondary punisher.
- In essence, secondary reinforcers convey information that is "good news."
  o For example, a gift-wrapped box conveys the good news that a nice present is inside; the more new information is present, the more rewarding the box is.
  o We can also be misled: maybe the small box contains tickets for a 2-week, all-expense-paid vacation in Paris.
- The information conveyed by secondary punishers serves as "bad news."
  o Seeing that a police car has pulled out and started to follow your car is bad news, and you may have a noticeable emotional response and slow down even if the police car does not turn on its flashing lights to stop you. The more information there is that serious punishment could follow, the worse the bad news is and the greater the impact it has on your behavior and emotions.

Why do people know more than they can tell? Why does this tell us that few people learn from rules alone?

- It is clear that naturals know when their behavior feels right or wrong; their feelings result from the SDs, S-Deltas, and CSs that were conditioned while the natural learned from models, prompts, and differential reinforcement.
  o Tacit knowledge is clearly demonstrated when naturals are producing the behavior in question, but this knowledge is hard for them to describe; thus, they know more than they can tell.
- Even rule users are likely to know more than they can tell, especially as firsthand experience polishes their behavior beyond the early phases of mechanical rule use.
  o As additional firsthand experience adds extra polish and complexity to the rule users' behavior, these people also develop tacit knowledge that goes beyond the original rules, and eventually they too know more than they can tell.
- Because people cannot tell all they know about complex behavior, communications concerning many areas of life can be rather sketchy.
  o When naturals are finished explaining their feel for tennis, you may not be sure you know exactly what they meant.
- According to behavioral analysis, much of everyday behavior is based on tacit knowledge gained through firsthand experience, without much assistance from norms and rules.
  o Thus, it is no wonder that people have little explicit verbal awareness of the exact procedures they use to negotiate their way through much of their lives.

Why can contingent consequences have major effects on behavior, even if they are delayed?

- Long-delayed consequences can produce operant learning if a person can detect a contingent, causal relationship between a behavior and its consequences.
  o For example, Tricia greets customer C in a cheerful manner before starting her regular sales routine, and 15 minutes later the customer makes a purchase.
  o Even though the delay between the cheerful greeting and the increased sales is relatively long, there is a contingent relationship—a causal linkage—between the behavior and its consequences, and this increases the likelihood that the delayed consequences will reinforce cheerful greetings.
- Some of the ability to respond to delayed consequences can be traced to our human species having large brains with many areas devoted to learning.
  o It also depends on our ability to verbally reconstruct the events of the past hours, days, and weeks, and to identify possible causal linkages between our behavior and delayed consequences.
- The more frequently and vividly we recall a behavior and its delayed but contingent consequences, the more likely the behavior is to be modified.
  o Reflecting on the numerous good things that happened after taking an impromptu 3-day trip last month can reinforce planning to make a similar trip in the future, if only by helping us create a rule to take more impromptu trips.
  o Even though there may be a long delay between the original behavior and its contingent consequences, verbally reconstructing the behavior and its consequences in close connection allows us to link memories of the behavior and its consequences with almost zero delay.
- Thus, cognitive reflection on past events allows behavior to be associated with contingent reinforcement or punishment via verbal and logical skills, once we identify causal linkages.
  o Although the earliest formulations of operant principles stated that operant learning is based on immediate reinforcement and punishment, there is more to the story.
  o First, even immediate reinforcers and punishers must be contingent on behavior—causally related to the behavior—to produce much effect.
  o Second, if a contingent relationship is detected between behavior and the delayed reinforcers and punishers that result from it, even delayed consequences can modify operant behavior.

What facts about reinforcers and punishers should people know if they want to create long-lasting friendships, love relationships, or family bonds?

- Long-term friendships, love relationships, and family bonds depend on our being able to create social interaction chains that are rewarding year after year—without too many destructive fights and without drifting apart because there is not enough reinforcement to keep people together.
  o When two people create exchanges that are rewarding to both, the social rewards lead to liking or loving; exchanges of painful experiences usually lead to disliking or hating.
  o Therefore, it follows that people who wish to build long-term bonds of friendship or love need to bring as many reinforcers—primary and secondary—to their social interaction chains as possible, while minimizing the punishers. Keeping the pain and fights to a minimum while building very rewarding exchanges and a growing relationship helps keep relationships free of ugly emotions.
- Both good friendships and love relationships become especially rewarding when people give and receive as many different positive things as possible.
  o So each pair of people needs to discuss the things they find most rewarding if they seek to create social exchanges that bring abundant rewards to both of them.
  o In their early phases, love relationships often provide novelty and excitement that make them seem especially rewarding. As the novelty wears off, love relationships may seem less novel and exciting, but successful couples find so many other rewarding experiences to share that the novelty of first meeting pales in comparison to the rewards of a dynamic and growing relationship. People can keep love strong for a lifetime if they bring generous rewards to their exchanges and continue to explore novel experiences.
- Most couples benefit from creating a list of shared activities—such as going to restaurants, dances, sporting events, movies, and picnics—that allow both to enjoy rewarding experiences.
  o Each couple needs to create their own list of pleasurable activities that best enhance the pleasures they exchange and augment their mutual feelings of love.
  o This often requires many democratic discussions of all the behaviors two people could share, in search of those activities that would build the relationship by being rewarding for both. If both people are careful to assure that they have equal roles in discussing their favorite activities, both are likely to find their interactions quite rewarding.

What benefits and reinforcers can a student obtain by learning the skills for self-shaping?

- Most people learn at least some skills for shaping their own behavior toward desired goals.
  o Whenever our parents, teachers, and friends shape our behavior, they serve as role models that we can imitate for shaping our own behavior.
- Self-shaping can produce rapid effects.
  o Often the person who knows best when a behavior was done well is the person who did the behavior.
  o The person who did a polished piece of work knows right away that it deserves to be rewarded. Self-shaping allows for immediate reinforcement, which is more effective than delayed reinforcement.
- There are several sources of reinforcement for learning how self-shaping is done.
  o If a student has observed people who are effective shapers and has imitated their methods, the student's self-shaping will be rewarded by 1) faster learning, because self-reinforcement is immediate and efficient; 2) positive reinforcers from people who are impressed by the student's rapid progress; 3) escape from the aversive consequences of errors and the criticisms that come from making mistakes; and 4) the positive consequences of having greater independence and autonomy in guiding one's own development—as compared with learning only from teachers. However, self-improvement by self-shaping is not a global skill that automatically generalizes to assure success in modifying all aspects of a person's behavior.

What are the three main ways in which the behavior of models affects observers?

- Observational learning involves the learning of new behavior.
  o When an observer sees a model do a behavior that the observer has never done, the observer may learn how to do the behavior merely by watching.
- Inhibitory and disinhibitory effects occur when observing a model changes the probabilities of an observer's already learned operants.
  o No new behavior is learned; instead, the probability of an already existing behavior is merely increased or decreased.
  o For example, when a high school graduate arrives at college and sees that all the people she likes wear faded jeans, there is a disinhibitory effect on her wearing her old jeans frequently. Numerous conformity effects—both good and bad—are mediated by this type of observational learning.
- Response facilitation occurs when a model's behavior serves as an SD for an observer to do a similar response.
  o For example, Josh lights up a cigarette; shortly after, Delia does, too. The model's behavior facilitates the observer's doing the same thing.
  o Facilitative effects do not involve learning, nor do they produce lasting effects that increase or decrease the frequency of future performances of old behavior.
  o Social facilitation occurs only because a model's behavior has provided SDs that help set the occasion for the observer's making a similar response.

Describe at least one type of reflex in each of these categories: voluntary muscles, circulation, digestion, respiration, sexual responses, emotional responses.

- Reflexes are among the simplest of all human activities and are easily neglected because they are supposedly quite primitive when compared with actions involving higher cognitive processes.
  o However, reflexes play central roles in many aspects of everyday life and should not be neglected. Numerous reflexes can be conditioned through Pavlovian conditioning.
- It is important that reflexes can become conditioned to predictive stimuli because:
  o First, most animals evolved to have numerous reflexes because reflexes are important to basic biological functioning, survival, and reproduction. Humans have more reflexes than simple, invertebrate animals.
    • We have reflexes of jerking away from sharp objects to help avoid injury, reflexes underlying sexual arousal that are essential for reproduction, and other reflexes that have obvious survival value.
  o Second, most animals have evolved to be capable of Pavlovian conditioning, and humans are no exception. Pavlovian conditioning allows us to respond not only to USs (unconditioned stimuli), but also to the numerous CSs (conditioned stimuli) that become associated with our reflexes. Through Pavlovian conditioning, each of us can learn to respond to the specific predictive stimuli we have encountered during our unique personal experiences.
Obvious reflexes:
- Reflexes tend to be most conspicuous in babies.
  o Infant behavior is based largely on unconditioned reflexes, whereas adult behavior is based on reflexes plus countless other activities learned from years of personal and social experience. Nevertheless, reflexes play an important part in our bio-survival mechanisms all through life, and the trained eye can see their importance at all ages in various biological systems.
Voluntary muscles:
- Babies are born with a variety of muscular reflexes of the skeletal muscular system that help ensure their early survival.
  o For example, babies almost instantly jerk their arms and legs away from unconditioned stimuli (USs) such as pokes with sharp, hot, or cold stimuli.
  o All these aversive USs can also elicit crying, which is likely to attract a caregiver to come to the baby's aid. These responses can make the difference between surviving and not; hence they function as the infant's survival kit, allowing it to respond adaptively in the period before it has a chance to learn more complex behavior.
  o Pavlovian conditioning allows infants to learn to respond with fear to CSs associated with aversive USs.
  o After aversive stimuli have elicited an infant's reflexes of agitation and crying, how do we calm the disturbed infant? The stimuli of gentle caresses, soft touches, contact comfort, and calm rocking are the USs that elicit the infant's muscle relaxation, tranquility, and decreased crying.
    • Most caregivers learn how to provide these comforting USs that elicit calmness.
  o As a result, the infant learns, through Pavlovian conditioning, to respond to its caregivers as CSs associated with pleasurable feelings: by the time an infant is several months old, merely seeing mother or father nearby is a CS that elicits the conditioned emotions of comfort. All through life, most of us can be calmed by the US of gentle touches, and merely being in the presence of people who have given us calming feelings in the past provides the CSs that can elicit tranquility and reduced anxiety.
  o Many other reflexes help babies get started in life, functioning as their survival kit, before they have had a chance to learn other types of responses. From day 1, newborns reflexively suck on objects that touch their lips, which functions to bring them milk when the US is the mother's nipple.
    • After several weeks, sucking responses can be elicited by CSs that regularly precede milk, such as being moved into a nursing position or held in certain ways.
  o Although some of the infant's reflexes—such as the sucking response—disappear in childhood, many continue to function all throughout life.
Circulation:
- The muscular responses of the circulatory system are involved in numerous reflexes.
  o For example, physical exertion increases heart rate and blood flow to the entire body.
  o Sexual stimulation also causes increased heart rate, but blood is shunted especially to the genital areas, causing vaginal lubrication and penile erection.
- Circulatory system reflexes can be conditioned independently of other response systems.
  o USs such as sudden pain or startle can elicit a strong, pounding heartbeat, and numerous CSs can become conditioned to that reflex during a person's life.
  o A person who has associated precarious heights with painful falls may experience a CR merely by looking down from a high place into a canyon (CS). Blushing is a circulatory reflex in which blood vessels in the outer layer of the skin open and allow blood to flow to the surface.
    • Different individuals learn to blush about different topics and are often shocked when they suddenly find themselves blushing uncontrollably.
    • This helps us realize that our reflexes are under the control of USs and CSs more than of our conscious attempts to turn them on or off.
Digestion:
- Several reflexes have to do with the digestive system, including the salivation reflex.
  o Food in the mouth is the US that elicits the unconditioned reflex of salivation.
  o Through Pavlovian conditioning, we learn to salivate when exposed to cues that are predictive stimuli associated with food.
- Extreme stress, shocks, and pain are USs capable of eliciting other digestive and excretory reflexes, such as butterflies in the stomach, vomiting, and nausea.
  o If taking tests has been paired with stressful or painful experiences in the past, walking into a very important exam is a predictive stimulus that can elicit similar conditioned responses.
- Although people are sensitive to correlations between predictive cues and reflexes, they do not always make the correct associations.
  o For example, cancer patients who receive chemotherapy sometimes learn to respond to the foods they have eaten before going to therapy as CSs that elicit nausea. Although evolutionary processes have prepared us to associate food tastes with nausea, in this case the brain has made an incorrect association. In fact, even though the wrong stimuli sometimes become CSs for reflexive responses, totally erroneous associations are not too common.
Respiration:
- Reflexes of the respiratory system include coughing, sneezing, hiccups, and asthma attacks.
  o Some psychosomatic illnesses result from Pavlovian conditioning. Conditioned asthmatic responses have been found to be elicited by a broad range of CSs, including perfume, the sight of dust, the national anthem, elevators, etc.

How is observational learning used in behavior modification? Give an example.

- Observational learning is used in behavior modification when models are used to help others learn perceived self-efficacy, assertiveness, and prosocial ways to deal with frustration o Models are also important in teaching language skills, industrial and management skills, better health practices, ways to overcome phobias, and much more - For example, observational learning can take place before a 5-year-old Japanese child—while playing with English-speaking children—has learned enough English to talk as rapidly in English as her playmates do o Sometimes observational learning creates successful behavior modifications within a few hours, helping people overcome fears of dogs, cats, rats, spiders, birds, etc. o Many fears can be overcome when people get a chance to live through their feared experiences in natural settings in the presence of a coping model who shows how to deal with the things that observers once feared

Why is observational learning sometimes much faster than response differentiation and shaping?

- Observing others is often a fast way to learn new behaviors—or to learn which of our preexisting behaviors is most appropriate in a new situation o In many situations, observational learning is faster than shaping - New behavior can usually be learned much more rapidly and efficiently by observational learning than by shaping alone o The presence of skillful social models speeds our learning and minimizes our risk of potentially lethal accidents o In addition, it is unlikely that complex behavior—such as eloquent speech, subtle cultural practices, or technological proficiencies—could be learned by shaping alone; yet, these activities are often acquired quickly by observing real or symbolic models

What is trial and error learning? How does it affect the variations we see in behavior? Give examples.

- One of the simplest types of operant learning is called trial and error learning o When young children, for example, open screw-top bottles, they often pull, twist, push, and pry the lids in a variety of ways Many of their actions have no effect on the bottle tops; however, a counterclockwise twist often succeeds in opening the bottle - Thus, a child's early responses to bottles are influenced by trial-and-error learning o Those responses that produce an open bottle are reinforced, and those that fail to open bottles are on extinction Bottle tops become SDs for counterclockwise twists and S-Deltas for the other techniques - A person does not have to intentionally "try" to learn something to make improvements in skills at bike riding or hitting a baseball o In the process of exploring new activities, we often repeat behaviors over and over and then learn from the consequences of our actions o This is one part of the law of effect Successful outcomes produce pleasing effects that reward the more skillful variations in our behavior, while the less skillful variants are less likely to be rewarded - Whenever people's behavior is variable and some of these variations lead to reinforcement but others do not, the behavior can be influenced by trial-and-error learning o It is one of the simplest forms of learning but has far-reaching consequences - Many of us make progress in singing if we keep trying over and over o It is common to see lots of trial-and-error learning in childhood, when young kids explore all sorts of activities and gain skill at some and leave others behind, due to a lack of rewards Children often go through an early phase of learning how to put on their socks when they struggle to get a sock on the foot o After several days of slow progress, they finally "get it" and the rewards bring an end to the early period of trial and error - Trial-and-error learning is seen when there is a "right" way and a "wrong" way to do something o When a teenager, for example, first learns to drive a car, it takes hands-on learning just to start the car - Even though this type of learning is often called trial and error learning, a more accurate nontechnical label might be "success and failure learning" because appropriate behavior leads to successes and inappropriate behavior leads to failures o Because children have so much to learn, numerous examples of trial-and-error learning can be found in childhood, but this type of learning occurs all throughout life, as we try to figure out how to work any new computer program, technology, artistic medium, or social skill - In fact, society as a whole often moves forward via processes of trial and error o For example, early automobiles were crude and dangerous, but engineers learned by trial and error how to design them to be safer and more efficient o It takes time for progress to be made, but the rate of progress is accelerating, and new technologies are emerging faster than ever in the past Most technologies and social structures that humans create change and develop due to trial-and-error processes • This sets the stage for us to learn about shaping, which is the next important type of learning worthy of our understanding
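As a rough illustration (not from the reading), here is a minimal Python sketch of trial-and-error learning with hypothetical response names and numbers: variations that happen to succeed are strengthened by their consequences, while variations that fail gradually drop out.

```python
import random

# Hypothetical response variants a child might try on a screw-top bottle.
# Only the counterclockwise twist "works" (is reinforced).
weights = {"pull": 1.0, "push": 1.0, "pry": 1.0, "twist_ccw": 1.0}

def attempt():
    responses = list(weights)
    probs = [weights[r] for r in responses]
    choice = random.choices(responses, weights=probs, k=1)[0]
    reinforced = (choice == "twist_ccw")   # success opens the bottle
    if reinforced:
        weights[choice] *= 1.2             # law of effect: strengthen successes
    else:
        weights[choice] *= 0.9             # extinction: weaken failures
    return choice, reinforced

for trial in range(200):
    attempt()

total = sum(weights.values())
for r, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{r:10s} p = {w / total:.2f}")  # twist_ccw dominates after practice
```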

How do models serve as sources of CSs that elicit vicarious emotions in observers? Give an example.

- Our emotions are conditioned via Pavlovian conditioning, with sadness and happiness being 2 conspicuous examples o For example, when an observer sees someone else smiling, the smiles are likely to serve as CSs that elicit pleasurable emotional responses, and perhaps even a smile, in the observer

Why do SDs from both the rule and context cues influence rule use?

- People are likely to follow rules only when SDs are present in the rule and the context, indicating that there has been past reinforcement for using similar rules in similar contexts o If following a certain rule has been reinforced in the past, the rule becomes an SD for future rule-governed behavior o If following another rule has resulted in nonreinforcement or punishment, this second rule becomes an SD for not following the second rule in the future Two general kinds of rules that often lead to reinforcement are commands and good advice - Following a given rule may be reinforced in one context but not in another context o As a result, certain context cues in the stimulus collage can become SDs or S-Deltas that help determine whether or not a person will follow a rule For example, when parents state a rule, they may give extra cues that they really mean it—that they will reward or punish, depending on the child's performance - Countless stimuli from the rule giver, the audience, and the surrounding environment can become context cues—SDs or S-Deltas—that help people discriminate when and where to follow or disregard a given rule

What are the two types of inverse imitation? Why do people do each one?

- People learn to do inverse imitation when there is reinforcement for behavior that complements or differs in other ways from the model's performance o Situations in which inverse imitation is reinforced often involve punishment for regular imitation - The 2 main types of inverse imitation: o One occurs when the observer's behavior must complement the model's The movements of two dancers, for example, must complement each other to produce reinforcing results o The other type is reinforced only when the observer is being different from the model For example, members of a juvenile gang often hate the police, and any behavior that looks like the straight behavior modeled by the cops is likely to be punished by other gang members • Behavior that differs from the straight behavior is negatively reinforced by escape from social criticism from peers, and it may also be positively reinforced by peers who approve of rebellion—that is, of doing anything different from straight behavior - Inverse imitation for being different often occurs when observers dislike the model, see negative consequences follow the model's behavior, or receive strong reinforcers for demonstrating to others (or to themselves) that they are not conformists

Why do we learn to give rules to others?

- People learn to give rules to others because rules often provide a rapid way of helping—or forcing—another person to do specific acts that are reinforcing to the rule giver o The practice of giving rules to others is reinforced if it is successful in modifying their behavior in the desired direction o Parents usually find that giving a child a rule, such as "Do not go into the street or you will have to stay in the house," speeds the child's learning considerably Rules help children learn which behaviors are forbidden, and they also help the children do self-instructed, rule-governed behavior the next time they are about to dash into the street If children follow the rules, the parents' practice of giving rules will be reinforced - In many situations where shaping, observational learning, or prompting may take a long time to produce the desired behavior, a rule can take effect immediately o For example, if your chubby friend is on the way to the kitchen to get some snacks, you may modify your friend's behavior by a tactfully worded rule such as "Hey, I thought your new diet didn't allow you to have snacks" The rule may be effective in helping your friend realize they could live without a snack, whereas shaping and modeling would not have worked nearly so efficiently

What is response generalization? What produces it?

- In the simplest cases of differential reinforcement, no new behavior is created o Some variations of a person's existing behavior are made more frequent, and other variations are made less frequent o At the end of this kind of differential reinforcement, no new variations have been created - However, there are behavioral processes that often accompany differential reinforcement and result in the creation of new behavior o These creative processes are response generalization and shaping, both of which allow people to develop completely new behavior patterns that lie outside the range of their old responses o They are the sources of human creativity—which is a topic of great importance - When an operant is reinforced and increases in frequency, reinforcement not only strengthens the operant, but it also creates generalized effects that strengthen closely related behaviors o Through this process, called response generalization, variations of an operant may also appear and increase in frequency, even though they have not been reinforced o When behavioral variations in zones C and D are reinforced—but other variations in zones A and B are not reinforced—all the responses in zones C and D increase in frequency In addition, creative changes occur due to response generalization: completely new behaviors (zone E of the 3rd graph) appear as generalized variations of the behaviors that were reinforced The process of reinforcing behaviors of type C and D increases the frequency of C and D, along with some completely new responses that had never been seen before • Thus, response generalization sometimes causes novel and creative variations on old behavior to appear • The new behaviors in zone E are natural variations on the reinforced operant of type D, but they add novel and creative behaviors to a person's behavioral repertoire
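A toy Python sketch (not from the reading) of the zone A–E idea, with all numbers made up: reinforcing zones C and D strengthens them directly, neighboring variations pick up some generalized strength, and novel zone-E behavior appears even though it was never reinforced.

```python
# Zones A-E are variations of an operant. C and D are reinforced; A and B occur
# but go unreinforced (extinction); E has never occurred, so it can only gain
# strength through generalization from its reinforced neighbor, D.
strength = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "E": 0.0}
generalization = 0.3   # assumed spillover from a reinforced zone to a neighbor

for _ in range(10):                              # rounds of differential reinforcement
    strength["C"] += 1.0                         # reinforced directly
    strength["D"] += 1.0
    strength["B"] += generalization              # neighbors of C and D share some gain
    strength["E"] += generalization              # novel behavior gains strength too
    strength["A"] = max(0.0, strength["A"] - 0.4)   # extinction of unreinforced zones
    strength["B"] = max(0.0, strength["B"] - 0.4)

for zone, s in strength.items():
    print(zone, round(s, 1))   # C and D dominate; novel zone-E behavior emerges
```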

Can you give some examples of tokens as secondary reinforcers and punishers? How did they become secondary reinforcers and punishers?

- People often use objects as secondary reinforcers or secondary punishers o These objects are called tokens because they stand for other kinds of reinforcement or punishment (high grades, diplomas, prizes, medals, etc. are tokens of social esteem given to people who excel) - For example, the golfer who has an entire wall covered with trophies and awards may find these prizes more valuable than others realize o Each prize is associated with a victory, adding to the total size of the collection The larger the collection, the more amazed and impressed are the guests who express praise and compliments Tokens of esteem often attract multiple other reinforcers, which further increases the power of these secondary reinforcers - Tokens are sometimes used in behavior therapy as a quick and easy way to reward target behavior o Almost any object can serve as a token, taking on value for children when it can be traded for sweets, TV time, or other reinforcers o Once the trade-in value of the tokens is established, we have created a token economy, and earning tokens can be made contingent on target behaviors For example, we promise 7-year-old Serena that every 5 minutes she spends studying quietly earns 1 token She benefits from seeing immediate reinforcement of her behavior as her tokens pile up, even though the ultimate reinforcement may not come until later when she trades them in - Tokens are sometimes used as secondary punishers o Traffic and parking tickets are tokens that precede the loss of reinforcers A ticket is a CS that makes most people feel bad as soon as they see it, and it can also punish and suppress illegal parking In addition, it functions as an SD for paying the specified fine, because paying is negatively reinforced by escaping the more severe punishment given to those who fail to pay o If a token of social disapproval is affixed to a person's clothing or belongings, the person may be eager to remove it; hence the person may need to be kept under surveillance and threatened with punishment for not displaying it
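A minimal sketch of the token-economy bookkeeping described above, assuming the rate of 1 token per 5 minutes of quiet studying from the example and made-up trade-in prices for the backup reinforcers.

```python
# Hypothetical token economy for 7-year-old Serena (prices are assumptions).
TOKENS_PER_BLOCK = 1                          # 1 token per 5 minutes of quiet studying
BLOCK_MINUTES = 5
backup_prices = {"sweets": 3, "tv_time": 5}   # trade-in cost in tokens

def tokens_earned(minutes_studied: int) -> int:
    """Immediate secondary reinforcement: tokens delivered as the behavior occurs."""
    return (minutes_studied // BLOCK_MINUTES) * TOKENS_PER_BLOCK

def trade_in(tokens: int, choice: str) -> tuple[int, bool]:
    """Later exchange of tokens for a backup reinforcer, which keeps tokens valuable."""
    price = backup_prices[choice]
    if tokens >= price:
        return tokens - price, True
    return tokens, False

tokens = tokens_earned(35)               # 35 minutes of studying -> 7 tokens
tokens, got_tv = trade_in(tokens, "tv_time")
print(tokens, got_tv)                    # 2 tokens left, TV time earned
```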

How can an art teacher use systematic shaping to help a student?

- People who play the role of teacher sometimes become effective shapers, whether they are parents, teachers, camp counselors, coaches, or anyone else you might turn to for help in learning a new skill o A teacher who is effective at shaping behavior watches the variations in each student's behavior, gives positive feedback for the desirable parts of the variations, and changes the criteria for reinforcement in small steps as behavior shows successive approximations toward the desired behavioral goals o In the beginning, the effective teacher-shaper studies each student's initial behavioral skills so that shaping begins at the correct place—where the student is currently Among the behaviors the student can do at present, the better variations are reinforced, and the poorer variations are not New and better variations will emerge via response generalization, and these deserve special attention. - An art teacher might comment on the better aspects of a student's latest sketches: "Your use of shading has improved nicely in the past several days" o By always focusing on the better features of the student's work, the teacher provides differential reinforcement that automatically includes any new responses which emerge via response generalization o Focusing on the best behavior the student can do at present also helps the teacher adjust the criteria for reinforcement at the same rate the student's work improves: As the student gains skill, the teacher always looks for the better features of behavior and rewards them - Shaping is an ideal way to assure that students enjoy developing new skills, because students are rewarded at every step for doing what they do well at that step o Shaping minimizes the problems and aversive experiences that arise when teachers attempt to develop new skills by comparing a student's behavior with that of more advanced students or with impossibly high criteria Comparing a beginning student with advanced students may be quite aversive for the beginner, because it vividly reveals the inadequacies of the beginner's behavior and suggests that enormous effort will be needed to reach advanced levels of performance - Teaching a student to strive for perfection or near perfection can also be quite aversive because the final goal looks so distant and unattainable o Shaping is a more positive and rewarding teaching tool because each student's behavior is evaluated and rewarded according to that student's own current performance level rather than being compared with perfection or other people's behavior; and all students receive generous reinforcement for the better variations in their own present behavior, no matter what step of skill development they have reached Shaping does not demand that a person do better than he or she can already do: That is unnecessary because improved performances will appear naturally and automatically (due to response generalization) as the better variations in behavior are further reinforced
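A small Python sketch (not from the reading, with hypothetical quality scores) of the rule the effective teacher-shaper follows: reinforce only the variations at or above the current criterion, and raise the criterion in small steps as the student's work improves.

```python
import random

# Sketch of systematic shaping. Each piece of student work gets a quality score;
# only work at or above the current criterion is praised, and the criterion is
# raised a small step whenever the student's typical work clears it comfortably.
skill = 2.0
criterion = 2.0
STEP = 0.5

for session in range(1, 11):
    work = skill + random.uniform(-1.0, 1.0)     # natural variation in performance
    if work >= criterion:
        skill += 0.3                             # reinforced variations strengthen skill
        praised = True
    else:
        praised = False                          # poorer variations are not reinforced
    if skill >= criterion + STEP:                # successive approximation:
        criterion += STEP                        # raise the bar a small step
    print(f"session {session}: work={work:.1f} criterion={criterion:.1f} praised={praised}")
```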

Give examples of prompting and fading via (a) physical guidance, (b) mechanical prompts, (c) pictures, (d) gestures, and (e) words.

- Physical Guidance: o People frequently use physical guidance as a prompt when helping children learn new behavior For example, a child may be prompted to play with a new toy when the parents physically guide the child's hand to touch, hold, shake, or even pull the toy in a manner that produces noise, light, or other effects After being prompted several times to pull the string connected to the toy duck, the child's pulling the string is rewarded by sensory stimulation (SS)—hearing the duck quack and seeing the colorful wheels go around o Children are often prompted into social interactions by physical guidance The first time a young child confronts Santa in a department store, the child may be reluctant to climb onto his knee • The parent may physically lift the child into place and start the interaction; after the prompted behavior is positively reinforced with candy, etc., future trips to see Santa require less prompting o Prompting by physical guidance is not as common in adulthood as in the early years Because adults respond better than children to verbal instructions—both written and spoken—rules can be used with adults as an alternative to prompts However, in certain situations physical prompting with the hand is superior to verbal instruction • For example, during sexual activities, a gentle hand can often communicate and guide the movements of the partner much better than words could ever do, showing exactly what needs to be done
- Mechanical prompts: o Metronomes are a form of mechanical prompt that provides stimuli to help musicians learn the timing of music As students master the speed and rhythm of a given passage, they can decrease their use of the metronome and thus fade out the prompt o In order to help their child learn to ride a 2-wheeler, parents sometimes attach 2 training wheels to the sides of the bike to prevent the bike from falling over These extra wheels serve as mechanical prompts that help teach bike riding and prevent falling Once the basic skills are mastered, the training wheels can be adjusted, and the child can learn how to balance • This gradual removal of mechanical support is the fading part of the process • During each step of the fading of support, the child can lean the bike over further and learn from the consequences
- Pictures: o When children first learn to read, they are often given books that contain many pictures related to the words The simplest books may have one word per picture, but seeing the word and picture together assists the child in saying the correct word • Later, as the child has more experience with printed words, the pictures are faded out: Smaller or sketchier pictures may be used, and the number of words per picture increases As the pictures are faded, the child learns to rely more and more on the printed words as the SDs that control the verbal responses o Studies have shown that fading the prompts is crucial for efficient learning: Children who always see the picture and single word together do not learn to respond to the word alone as rapidly as those who have the picture prompts gradually faded out
- Gestures: o With a few gestures, a conductor can prompt an orchestra to modify the tempo, volume, or tone quality of each passage of music After several rehearsals, the musicians learn how the conductor expects them to play each piece, and the conductor can fade out the more exaggerated gestures o Actors and dancers also learn from gesture prompts When an actor speaks too quietly, the director may give hand gestures, emphasizing the upward movements, in order to encourage more vigor and volume from the speaker's voice • When a director creates new gestures that succeed in prompting the desired behavior, the successful effects reinforce the director's creative prompting
- Words: o When a person is learning new patterns of verbal behavior, words are often effective prompts For example, an actor who is learning a new script is in the same type of position as a child trying to learn a poem or prayer • When the last words that were spoken do not have enough SD control to cue and bring up the next phrase, it helps to have someone prompt the correct verbal pattern o When people cannot remember an important detail while telling a story, they may look to a friend who knows what they need to say, waiting for a prompt They may even prompt their friend to give the verbal prompt by looking helpless or speechless or by gesturing with empty hands • For example, when 2 people try to talk about the places they went on a trip together, they may turn to each other for help in filling in the names of various things they did, but after enough prompts, storytellers often learn their lines well enough to get along without further prompts
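As a rough sketch (not from the reading, with hypothetical numbers), the prompt-reinforce-fade cycle can be written as a loop in which the prompt starts strong and is faded a small step each time the prompted behavior succeeds and is reinforced, until the behavior occurs unprompted.

```python
import random

# Sketch of prompting, reinforcement, and fading. The prompt begins at full
# strength (e.g., full physical guidance or training wheels) and is faded as
# the learner's own skill grows through reinforced successes.
prompt_level = 1.0       # 1.0 = full prompt, 0.0 = no prompt
skill = 0.1              # learner's unprompted probability of success

for trial in range(1, 21):
    p_success = min(1.0, skill + prompt_level * 0.8)   # the prompt helps start the behavior
    if random.random() < p_success:
        skill = min(1.0, skill + 0.08)                 # reinforcement strengthens the behavior
        prompt_level = max(0.0, prompt_level - 0.1)    # fade the prompt a small step
    print(f"trial {trial:2d}: prompt={prompt_level:.1f} skill={skill:.2f}")
```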

What does it mean to say that primary reinforcers and punishers are fixed but flexible?

- Primary reinforcers and punishers are biologically rooted and thus have somewhat fixed and predictable values o But they are also flexible and vary across time and situations: they are somewhat relative, not absolute or unchangeable - To understand the fixed but flexible nature of primary reinforcers and punishers, it is helpful to place all reinforcers and punishers on a continuum: powerful reinforcers are located at the left end and have the ability to raise the probability of behavior o Powerful punishers are located at the right end and have the ability to suppress behavior o In the middle, the stimuli are called neutral because they have no measurable capacity to reinforce or punish behavior - The ability to reinforce or punish behavior is not a fixed or intrinsic quality of a stimulus o Even though the physical properties of a stimulus may be fixed, its ability to function as a primary reinforcer or punisher depends on the state of our body and on prior learning When we are tired and sleepy, for example, a dark, quiet room provides reinforcement for entering and snuggling under the covers, but staying there is boring when we are well rested and fully awake - Thus, reinforcers and punishers are relative rather than fixed and unchangeable (even if their physical properties are stable and unchanging) o Stimuli that function as reinforcers or punishers do not necessarily have the same effects under all conditions A given stimulus can function as a reinforcer, neutral stimulus, or punisher in different situations - For example, during childhood a person may consistently avoid eating Brussels sprouts, yet in adulthood the person may love eating them o Thus, Brussels sprouts functioned as a punisher in childhood but a reinforcer in adulthood

Why is it important that reinforcement follow prompted behavior?

- Prompted behavior is generally learned most rapidly when it is followed by reinforcement o Thus, when helping their child learn new behaviors, parents often use prompts to start the behavior and then give enthusiastic praise to reinforce its frequency - Reinforcement is crucial for strengthening the prompted behavior o All operant behavior depends on reinforcement o Sometimes the natural reinforcers for doing a prompted behavior are sufficient, but prompters often need to give extra social attention and signs of genuine approval

How are prompts and fading used in behavior modification?

- Prompting and fading are a common part of behavior modification and cognitive-behavioral therapy o For example, signs are often used in natural environments as prompts to modify behavior such as recycling, using condoms, and eating healthy Prompts in the form of signs have been used successfully to increase the number of free condoms taken at bars Signs and fliers in national brand fast-food restaurants are effective in prompting customers to shift from less healthy selections to salads and low-fat, high-fiber foods

What is prompting? What three phases are usually present when learning from prompts?

- Prompting is a common part of social learning in childhood o But all through life most people give and receive prompts so naturally that the process is sometimes hardly noticed, although it still has its effects on behavior - Learning from prompts usually occurs in 3 phases: prompting, reinforcement, and fading o These 3 phases often blend together rather than standing out as separate activities - A behavior is prompted by words, signs, physical nudges, or other stimuli that can start or mold a behavior o Prompts are special stimuli that are introduced to control a desired behavior during early learning, although they are not needed once the behavior is learned Many prompts can be given almost effortlessly once we have learned how to give them

How does response generalization make possible the shaping of new behavior?

- Response generalization plus reinforcement is often used as a teaching tool o Differential reinforcement can encourage several different careers, such as acting o Response generalization is a common side effect: for example, as a performer's comedy skills improve, it becomes easier and easier to create entirely new forms of humorous banter When creativity is rewarded, people learn to become more creative: the new humorous responses are likely to be reinforced, and the performer's skills at creativity are reinforced along with the humorous new word patter they produce - Shaping is a process by which operant behaviors are changed in a series of steps from an initial performance toward a desirable goal o Each step results from the application of a new criterion for differential reinforcement o Each step of learning produces both response differentiation (with improved skills) and response generalization (with the creative new behavioral variations that make possible the next step of behavior improvement) - In everyday life, shaping appears in a range of different forms, from systematic to unsystematic o Systematic shaping is more likely to produce rapid and effective behavioral change—with minimal failures and aversive consequences—whereas unsystematic shaping is more likely to be slow and disorganized, with a higher risk of failures and aversive consequences

How can rules be used in behavior modification and cognitive-behavior therapy?

- Rules can be used in behavior modification and cognitive behavioral therapy because therapists give clients instructions on how to do various activities needed to improve the quality of their lives o Clients may be given books or pamphlets about the behaviors they need to learn, and these reading materials often contain very explicit rules for arranging behavioral changes One of the main goals of behavior modification and cognitive behavior therapy is to clarify the means by which behavior is changed so people can take a more active role in their own self-direction

How can knowledge about rules help you formulate clearer ABC rules that could be useful in gaining greater control over your own life?

- Rules generally describe some aspect of the contingencies of reinforcement: the ABC relationship among antecedent stimuli, behavior, and consequences o The guidelines for helping parents cope with a pouting child specify all 3 elements of the ABC model: If the SDs of pouting are present, your behavior of not rewarding it will produce beneficial changes—as pouting gradually disappears The ABCs of the second rule (for rewarding alternative behavior) are also encoded in its words - People can expect the best success in guiding their own behavior when they use scientifically tested rules for self-instruction o Knowledge is power o Knowing the behavior principles of rule use greatly increases our power to organize and direct our own lives: It is wise to clarify all 3 elements of the behavioral ABCs and then reinforce all behaviors that successfully approximate the desired goals - Sometimes rules need to be spelled out completely in order to maximize their effectiveness or minimize misunderstandings o For example, telling a child a complete rule—"Clean your room before dinner if you want to watch TV tonight"—is much more likely to generate the desired behavior than stating a brief rule, such as "Clean your room" - Children learn that one person's rule can be counteracted with another person's rule o For example, Tony did not share his blocks at first, but Emmy might say, "Daddy said you should play with me" If Tony begrudgingly gives in and allows Emmy to play with his blocks, this reinforcer hastens Emmy's learning that not all rules must be obeyed and that one person's rule can counteract another's o She thus learns to invent rules that help her, and when she is rewarded for inventing a rule, these rules give her the power to influence her older siblings, further rewarding Emmy for being creative and inventing rules that she has never heard before but that benefit her

What is satiation? How does it affect primary reinforcers?

- Satiation occurs when a person has had so much of a certain primary reinforcer that the stimulus loses its ability to function as a reinforcer o Generally, the more satiated a person is with any primary reinforcer, the less power that US will have to reinforce behavior - For example, when a person is satiated with food, it ceases to function as a reinforcer: the 5th Hershey bar you eat in a row will not taste as good as the first one o Inserting pauses between the Hershey bars—or other positive reinforcers—slows the satiation process
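A toy Python model of satiation (the numbers are assumptions, not from the reading): each successive Hershey bar has less reinforcing value, and pauses let some of that value recover as deprivation slowly builds back up.

```python
# Toy model of satiation: each bar cuts the remaining reinforcing value,
# and a pause between bars restores part of it.
SATIATION = 0.6      # each bar leaves 60% of the previous value
RECOVERY = 0.15      # value regained per pause between bars

def eat_bar(value: float) -> float:
    return value * SATIATION

def pause(value: float) -> float:
    return min(1.0, value + RECOVERY)

v = 1.0
for bar in range(1, 6):
    print(f"bar {bar}: value = {v:.2f}")   # value declines from 1.00 toward 0.13
    v = eat_bar(v)

v = 1.0
for bar in range(1, 6):
    v = pause(eat_bar(v))                  # pauses slow the loss of reinforcing value
print(f"with pauses, value after 5 bars = {v:.2f}")
```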

Because secondary reinforcers and punishers are both CSs and SDs, what three functions do they have?

- Secondary reinforcers and punishers are predictive stimuli that usually have the properties of both CSs and SDs; and as such they can have 3 separate functions o Two functions are based on the properties of CSs, and one is based on the properties of SDs As CSs, they function as consequences that modify the prior operant behavior and as elicitors of reflexive responses with emotional components o As SDs, they set the occasion for subsequent operant behaviors Both secondary reinforcers and punishers have all 3 functions - As a person gains experience with predictive stimuli that precede reinforcement, those stimuli become secondary reinforcers with the properties of both CSs and SDs o As such, they can reinforce behavior, elicit pleasurable emotional responses, and set the occasion for future operants For example, when a 4-year-old girl sees a wrapped box with her name on it, the box is a predictive stimulus that something nice will happen if she opens the box • By this age, she has received enough presents to have learned that gift-wrapped boxes regularly precede reinforcement; the boxes have therefore become secondary reinforcers, providing a CS that reinforces the child's looking at the box and elicits pleasurable responses • Gift boxes are: 1. CSs that reinforce looking at the box 2. CSs that elicit pleasurable emotional responses 3. SDs that set the occasion for opening the box - As a person gains experience with predictive stimuli that precede punishment, those stimuli become secondary punishers with the properties of both CSs and SDs o As such, they can punish prior behaviors, elicit unpleasant emotional responses, and set the occasion for operants that help avoid punishment For example, if you have had prior experience with wasps, the sight of several of them flying wildly in your kitchen provides predictive stimuli that you could get stung if you are not careful The predictive stimuli are secondary punishers, with the properties of both CSs and SDs • As CSs, they can punish and suppress responses of moving too close to the wasps and elicit uneasy feelings or fear • As SDs, they set the occasion for subsequent operants, such as leaving the room or opening a window Wasps are: • CSs that punish moving too close • CSs that elicit unpleasant emotional responses • SDs that set the occasion for opening the window so the wasps can get out of the room, helping you avoid being stung

What are secondary reinforcers and punishers? Can you define them?

- Secondary reinforcers and punishers have very powerful effects on behavior o There are some cases in which secondary reinforcers and punishers have stronger effects on behavior than primary reinforcers - For example, some models love the reinforcers of being looked at and photographed o Social attention is such a strong secondary reinforcer for some models that they will starve themselves—and resist the primary reinforcer of eating healthy foods o Looking good on each job increases their chances of capturing attention—the powerful secondary reinforcer—and landing more modeling jobs Clearly, the secondary reinforcers for being lean, based on cultural values and social conditioning, can be more powerful than the biologically based primary reinforcers of food - Secondary reinforcers and punishers do not always play such spectacular roles in motivating our behavior o However, they are almost always present and important in the behavior of everyday life, because most people respond to a multitude of stimuli as secondary reinforcers or punishers, and many of these stimuli are major determinants of behavior—especially in long chains of operants where the primary reinforcers come only weeks or months later

Why is the reinforcement value—or practical value—of an operant so crucial in influencing the acquisition phase? What three cues do observers use to assess the reinforcement value of a modeled behavior?

- The reinforcement value (practical value) of an operant is crucial in influencing the acquisition phase because observers are most likely to attend to and remember modeled behavior that appears to produce rewards o The 3 cues that observers use to assess the reinforcement value of a modeled behavior are: Seeing the consequences of a model's behavior Seeing a model's emotional responses The characteristics of the model

How does a history of partial reinforcement retard the extinction of secondary reinforcers? How does a history of partial punishment retard the extinction of secondary punishers?

- Secondary reinforcers are slower to extinguish if they have been maintained by partial reinforcement in the past, rather than by continuous reinforcement o Partial reinforcement occurs when a secondary reinforcer is followed by other reinforcement only part of the time—for example, only 3 out of 10 times - Continuous reinforcement occurs when the secondary reinforcer is always followed by other reinforcement—10 times out of 10 o If purchasing lottery tickets is rewarded by winning some cash every once in a while, purchasing such tickets is rewarded by secondary reinforcers that are maintained by partial reinforcement o If seeing a gift box with your name on it is always followed by a rewarding experience, looking for your gifts is maintained by secondary reinforcers based on continuous reinforcement - If all through your high school years, friends invited you to do things and they always followed through with rewarding experiences, invitations would be secondary reinforcers based on continuous reinforcement o If your first college roommate never followed through after invitations, it would be easy for you to discriminate that those invitations were different from the ones in high school, and your responses to them would extinguish quickly - After a history of partial reinforcement, however, it is difficult to notice the onset of extinction o This, in turn, slows extinction: It slows your learning that the college roommate's invitations are predictive of no rewards o You might continue for months to respond to your roommate's invitations as predictive of occasional reinforcement, even though they were not - However, things would be different if you had been raised in situations where threats were only occasionally followed by other forms of punishment (perhaps one third of the time) o Threats would be associated with partial punishment, and you would always be wary of them, because punishment might come at any time, but you would also be used to false alarms in which threats led to nothing o If you then got a cranky new neighbor, you would be wary of the neighbor's threats yet used to threats not always leading to other punishers Thus, you would find it harder to discriminate between the cranky neighbor's threats that predicted no punishment and your prior experiences of partial punishment
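A small Python sketch (not from the reading) of why partial reinforcement slows extinction, under the assumption that a person "notices" extinction only after a run of unreinforced trials longer than any run they experienced while the reinforcer was still working.

```python
import random

def longest_dry_run(p_reinforced: float, history_trials: int = 200) -> int:
    """Longest run of unreinforced trials experienced before extinction begins."""
    run = longest = 0
    for _ in range(history_trials):
        if random.random() < p_reinforced:
            run = 0
        else:
            run += 1
            longest = max(longest, run)
    return longest

def trials_to_notice_extinction(p_reinforced: float) -> int:
    threshold = longest_dry_run(p_reinforced)
    # During extinction nothing is ever reinforced, so the first dry run longer
    # than anything in the person's history is what reveals the change.
    return threshold + 1

random.seed(0)
print("continuous (10 of 10):", trials_to_notice_extinction(1.0), "trials")  # noticed almost at once
print("partial    (3 of 10): ", trials_to_notice_extinction(0.3), "trials")  # takes much longer
```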

Explain any six of the eight determinants of strong Pavlovian conditioning.

- Several variables determine the speed with which Pavlovian conditioning takes place and the strength of a conditioned reflex after conditioning 1. Strong USs produce stronger conditioned reflexes than do weak USs a. A major car accident causes stronger emotional responses and more powerful conditioned anxiety about unsafe driving than does a minor fender bender at a slow speed 2. When a CS is always associated with a given US, the CS takes on a greater ability to elicit the CR than if the pairing is only intermittent a. When a stimulus precedes a US 100% of the time, it is much more likely to become a CS than if it preceded the US only 20% or 50% of the time i. If a woman always wears a certain perfume before having sex, and never wears it at other times, the perfume is likely to become a CS that will elicit sexual arousal for her and her partner 3. When multiple stimuli precede a US, the stimulus that is most highly correlated with the US is most likely to become a strong CS a. If gentle loving words always precede making love but perfumes are associated with sexual interactions only part of the time, the loving words are more likely than perfume to become CSs for erotic feelings b. The highly correlated cues stand out most conspicuously as clear predictors of sexual pleasures, which facilitates their conditioning i. Gradually, they tend to overshadow the less predictive stimuli and become the strongest CSs 4. Stimuli that are the focus of attention are more likely to become CSs than are inconspicuous or unnoticed stimuli a. If we are clearly focused on several bees before being stung, we learn to fear the bees faster than if we had been stung from behind and hardly saw the bee b. People often try to focus attention on relevant predictive stimuli—and minimize the number of distracting or irrelevant cues—when they fear or like some CS i. When a father sees his child run toward a lawn full of clover and bees, the father warns the child to pay attention to the bees ii. If the child happens to be stung, the extra attention to bees helps the child learn the dangers of bee stings 5. A predictive stimulus must occur before—not after—a US for conditioning to occur a. If a neutral stimulus occurs after the US has appeared, Pavlovian conditioning rarely occurs i. Attempts to create "backward conditioning"—in which predictive stimuli come after a US—almost always fail ii. It makes sense that we would not have evolved a tendency to associate things in backward causal order iii. If a person ate some bad food (US) and became nauseous and sick (UR), should this conditioned sickness reflex affect the stimuli that came before or after it? • The sight and odors of the bad food that came before eating it (US) are most likely to be the causal stimuli (CSs) we need to learn to dislike b. In Pavlovian conditioning, stimuli that are present shortly before a US appears are most likely to become effective CSs 6. Short time lags between the onset of a CS and the onset of a US facilitate Pavlovian conditioning a. In the laboratory, intervals of 0-5 seconds between the CS and US produce stronger conditioning than do longer intervals b. The optimal interval between the CS and US is often reported to be approximately 0.5 seconds i. If a person fantasizes during sexual stimulation, the close temporal pairing of the fantasy and the sexual stimulation (US) facilitates the conditioning of the fantasies as CSs that elicit sexual arousal c. As the time lag between a predictive stimulus and a US increases, Pavlovian conditioning becomes slower and less effective at producing CSs
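A toy Python calculation (not from the reading, using a made-up event log) for determinants 2, 3, and 5: among candidate stimuli that occur before the US, the one most reliably followed by the US is the best predictor and the most likely to become a strong CS.

```python
# Hypothetical event log: which stimuli preceded the US on each trial,
# and whether the US actually occurred.
trials = [
    {"before_us": {"loving_words", "perfume"}, "us": True},
    {"before_us": {"loving_words"},            "us": True},
    {"before_us": {"loving_words"},            "us": True},
    {"before_us": {"perfume"},                 "us": False},  # perfume without the US
    {"before_us": set(),                       "us": True},   # US with no warning cue
]

def predictive_value(stimulus: str) -> float:
    """Fraction of this stimulus's (pre-US) occurrences that the US actually followed."""
    occurrences = [t for t in trials if stimulus in t["before_us"]]
    if not occurrences:
        return 0.0
    return sum(t["us"] for t in occurrences) / len(occurrences)

for s in ("loving_words", "perfume"):
    print(s, predictive_value(s))   # loving_words: 1.0, perfume: 0.5
```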

How does a child learn to slide skillfully on ice via shaping?

- Shaping can occur without anyone playing the role of teacher or shaper o Our behaviors and skills for dealing with the physical environment—with either nature or human-made objects—are often shaped in complex and subtle ways by our successes and failures in dealing with it, without social intervention - The natural environment contains many objects, forces, and living things that can shape behavior without social assistance o Gravity, storms, ice, surf, mountains, insects, large animals, and the rest of nature have properties that can lead to differential reinforcement and sometimes to the shaping of behavior For example, when walking or driving on icy roads during the winter, people learn skills for caution Although some children avoid walking and sliding on icy areas after several painful falls, others learn to balance and eventually make long, graceful slides on ice • These skills are learned in a series of natural steps, beginning with short slides and increasing to ever longer glides - Because the laws of nature are so unbending, natural shaping can be more consistent than social shaping o Although no one may be present to shape the child's skill at sliding on ice, each clumsy response is punished by falls, and each step of increasing skill is reinforced by success and the sensory stimulation of longer, faster, and more exciting slides

What are the differences between real and symbolic models?

- Social models can influence behavior in many ways o Models can be real (bodily present) or symbolic (presented via book, movie, TV, or verbal descriptions)

Why did Tanya suffer because she was born with "congenital indifference to pain"?

- Tanya suffered because her disorder allowed her to feel reinforcers and pleasures but no pain and no punishers o When she cuts herself or falls, she experiences no pain and no punishment, and as a result, she has a hard time learning not to fall down, cut herself, or burn herself - By the age of 4, her body was covered with bruises and open sores, and by 11, her fingers had been cut down to stubs and both of her legs had been amputated o She was deprived of the pain feedback that punishes most of us when we do something that injures our bodies: it is an important signal, significant for health and survival

What are the two main phases of observational learning? Why are they separate and distinct?

- The 2 main phases of observational learning are acquisition and performance o Acquisition involves perceiving and remembering information about a model's behavior o Performance involves using that information to carry out an observed behavior The phases are separate and distinct because performance can occur seconds, weeks, or even years after acquisition - For example, a child may watch her father use a microwave oven and instantly acquire knowledge about how the act is done, but it may be days or weeks before the child performs the imitative response

How is the acquisition phase influenced by similarities of the observer and model? How is the acquisition phase influenced by similarities of their behaviors?

- The acquisition phase is influenced by similarities of the observer and model: when 2 people are engaged in similar tasks, they tend to be more observant of the other person's behavior than when they are doing different things o For example, follow the leader comes close to being a pure example of modeling effects due to similar behavior - The acquisition phase is also influenced by similarities of their behaviors, and all of the factors that influence the acquisition of modeled behavior can operate at the same time o For example, when a novice is learning from an experienced friend how to paddle a canoe, the novice can see the practical value of the model's skillful use of the paddle o Friendship adds another factor to the modeling effects If the observer likes and respects the model as an accomplished athlete, there will be even more reinforcement for acquiring information about the model's behavior

How can models cause observers to learn new emotional responses via Pavlovian conditioning? Give an example.

- The conditioning of smiles and other social cues is influenced by numerous cues around us, often allowing us to learn subtle discriminations o Although the smiles of friends usually elicit pleasurable feelings, the smiles of people who deceive or manipulate us come to elicit quite different feelings

In what sense are secondary reinforcers and punishers predictive stimuli? What do they predict? Why?

- The stimuli most likely to become secondary reinforcers or punishers are those that are the best predictors that more reinforcement or punishment is forthcoming o For example, the sight of your bowling ball rolling down the alley toward a strike is a good predictor of a rewarding experience; hence that visual image provides secondary reinforcement for skillful bowling before the ball even hits the pins o Note that the secondary reinforcer follows the behavior closely, providing more immediate reinforcement of the behavior than if you had to wait until all the pins fell Secondary reinforcers play an important role in bringing immediate feedback about our behavior

What happens when a behavior that was once punished is no longer punished? How do cost/benefit ratios influence learning? Give examples to make this clear.

- The cost/benefit ratio of punishment and reinforcement helps predict how much response suppression will occur o When costs are high and benefits are low, there is more response suppression than when costs are low and benefits are high - The frequency of punishment and reinforcement affects the cost/benefit ratio o If a behavior is always rewarded, but punished only one time in 10, the intermittent punishment is less likely to suppress responding than would more frequent punishment o For example, Robert believes in ESP and likes to talk about it with his roommate: It's a rewarding topic However, if Robert is criticized every time he mentions ESP to strangers, he is more likely to stop talking about it with strangers than if he had been criticized by strangers only one time in 10 o If a boy criticizes his girlfriend every time she plays tennis with him, his criticisms are punishers that counteract the rewards of playing She may continue to play tennis with him if the frequency of rewards is greater than the frequency of punishment; but she is likely to quit if the criticisms are more frequent than the rewards - Even mild or infrequent punishment can totally suppress behavior if some alternative behavior is available that has a better cost/benefit ratio o Both punishment and extinction reduce the frequency of behavior, but punishment usually does so more rapidly and more completely than extinction For example, after a child has learned to pout or whine because the parents have given social attention in the past, the frequency of pouting or whining can be reduced by either extinction or punishment - The discontinuation of punishment: o Punishment does not cause behavior to be "unlearned" or "forgotten"; instead, it merely suppresses the frequency of responding Often the effects of punishment are only temporary, and when punishment no longer occurs, the rate of responding may increase again This phenomenon is called "recovery after the end of punishment" o In many cases, previously suppressed behavior returns to the frequency it had before punishment began - For example, when the police begin using radar to issue speeding tickets on one highway, speeding may be reduced on that road; but when they stop patrolling it and shift their attention to another road, the punishment ends, and many former speeders return to their old practices of speeding along the first highway - Recovery is fastest and most complete when the original punishment was mild or infrequent and there are benefits for doing the behavior, showing the effects of the cost/benefit ratio o The milder the original punishment, the sooner a behavior is likely to recover after the end of punishment In contrast, if you have a skiing accident that breaks both legs, this intense punishment may suppress skiing permanently, even after your legs are well mended
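A minimal Python sketch (hypothetical numbers, not from the reading) of how a cost/benefit ratio can be scored: expected benefit is reward value times reward frequency, expected cost is punisher value times punishment frequency, and the behavior with the better net value is the one predicted to persist.

```python
# Toy cost/benefit comparison for talking about ESP (values are assumptions).
def net_value(reward: float, p_reward: float, cost: float, p_cost: float) -> float:
    """Expected benefit minus expected cost for one occurrence of the behavior."""
    return reward * p_reward - cost * p_cost

talk_esp_to_strangers = net_value(reward=5, p_reward=1.0, cost=8, p_cost=1.0)  # criticized every time
talk_esp_to_roommate  = net_value(reward=5, p_reward=1.0, cost=8, p_cost=0.1)  # criticized 1 time in 10

print("strangers:", talk_esp_to_strangers)   # -3.0 -> talking is suppressed
print("roommate: ", talk_esp_to_roommate)    #  4.2 -> talking is maintained
```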

What are the six main determinants of the acquisition phase of observational learning?

- The model's behavior has practical value; that is, it produces reinforcing consequences - There are personal similarities between the model and observer - The model and observer are engaged in similar activities - There are reinforcers for watching the model - The model's behavior is salient and easily visible - The model's behavior is not far beyond the observer's present level of skill

How is the acquisition phase influenced by the visibility of the modeled behavior? How is the acquisition phase influenced by the easiness of the modeled behavior?

- The more visible a model's behavior is for an observer, the more the observer can learn by watching o Acquisition of information from a model is usually facilitated by being close enough to see and hear the model, having good eyes and ears, being on the correct side of the model to see the most important movements, etc. - The easiness of modeled behavior can best be understood by locating the behavior of both the model and the observer on the appropriate steps of increasing skill o If the modeled behavior is too many steps ahead of the observer's skills, the observer may not be able to acquire much useful information from the model

Why are rule givers sometimes disappointed by the errors they see in the behavior of people who attempt to follow their rules?

- The person who creates rules is trying to capture the essence of firsthand experience and pass this on to the rule user, but the 2 people are responding to their environment in different ways o The person who formulates a rule for a complex behavior will almost always be disappointed with the rule user's failure to understand unique details and situations as well as the rule giver does o Even if the rule giver expands on the rule with several auxiliary rules, the less experienced rule user simply will not show all the finesse of the person who learned from firsthand experience One solution to the problem is to let the rule user gain increasing amounts of firsthand experience—the kind that shaped the shopkeeper's behavior—in order to supplement the learning begun with rules alone - For example: "I told my assistants 100 times how to deal with the grouchy old desk man, and they never seem to handle him the way I want them to"

Which produces tacit and explicit knowledge: rules or firsthand experience? Why?

- The person who learned a behavior from rules and thinks about the behavior in terms of rules has explicit knowledge about the behavior o The person who learned from firsthand experience, without rules, is less likely to think about the behavior in terms of rules and has tacit knowledge—or unspoken knowledge—about the behavior o Rules provide people with explicit knowledge that is easy to verbalize and easy to share with others because the knowledge was verbally encoded from the start, when the person first heard the rules Explicit knowledge is readily made public knowledge because it is easy to communicate to others, and it makes people feel they are consciously and verbally aware of the reasons for their behavior • "The coach says I should increase my running to 5 miles per day, 5 times a week, to build my endurance" = explicit knowledge - In contrast, the "natural" plays it by feel, often without verbal awareness of the causes of behavior o A person who is a natural at athletics may exercise at a level that "feels right" o Naturals clearly know something about the behavior they do, but it is tacit knowledge, or unspoken knowledge It is personal knowledge as opposed to public knowledge, and it has an intuitive quality because it is guided more by nonverbal sensations than by verbal instructions o Tacit knowledge is personal knowledge, based on firsthand experience; and when the person dies, all the unique, personal knowledge of a lifetime dies too Explicit knowledge, encoded in rules, is public knowledge that can be passed from person to person or through symbolic media, and hence can outlive the rule formulator

What are the enlightened views on punishment? How is this seen with nations? How is this seen with parents?

- The philosophers of the Enlightenment helped us learn why we should minimize the use of punishment with other people o We can see how their logic applies to both nations and parents o There are good reasons to avoid using punishment as much as people did in the past - Most nations, in the past, were controlled by kings, emperors, tyrants, or dictators who used strong punishers to keep the masses under control o The use of punishment pleased the leader but shackled the masses under aversive control As a result, life was nasty, brutish, and short for many people o Enlightenment philosophers wanted to end suffering and create less painful ways to rule nations and solve international conflicts, envisioning nations that offered people life, liberty, and the pursuit of happiness - Parenting has seen a similar shift away from the use of punishment toward more positive alternatives o In the past, parents often found that spanking and other punishments could quickly stop a child from doing things that bothered the parents Strong punishments stop unwanted behavior quickly, which brings immediate rewards to the parents for using aversive control o Fortunately, enlightened thinking and social science have led modern social psychologists and parents to understand the problems of using punishment in child rearing Punishment may lead to a temporary reduction of unwanted behavior, but it usually does not produce lasting change o Parents who use the paddle or belt to control their children are serving as role models for the use of violence and aggression as ways to treat other people, and their children may imitate and learn violent methods for controlling others o There are more enlightened ways to raise children Today's parents are told to use reasoning instead of physical punishment when children misbehave The child learns some basic rules of social life and reasons to be more empathetic, reducing the chances that the child will steal or engage in other frowned-upon behavior in the future o The parents serve as role models for caring and intelligent social interactions, helping the child see how enlightened people treat others

What are the two factors that affect the performance of an imitative response?

- The two factors that affect the performance of an imitative response are past reinforcement and present reinforcement o Past reinforcement for imitating a certain model or a certain type of behavior increases the probability of performing the modeled behavior in SD contexts similar to those in which reinforcement occurred in the past o Present reinforcement refers to the fact that performance is additionally influenced by present patterns of reinforcement and punishment Cues that correlate with the present patterns of reinforcement and punishment may become SDs or S-Deltas for imitative performance These cues can come from the model or from other people who have imitated the model

What are mastery models? How do they affect an observer's behavior? What are coping models? How do they affect an observer's behavior?

- There are 2 kinds of models, mastery models and coping models o Mastery models demonstrate only the final step of mastering a skill; hence they deprive the observer of the information needed to traverse the early steps of learning o Coping models demonstrate the skills that an observer needs to cope with the problem of moving from any one step to the next Coping models have the skills needed to model behavior that is only a step or two ahead of the observer's skills - While coping models are valued for their helpfulness in advancing up the steps of learning, mastery models are often valued for the inspiration and expertise they bring o For example, after hearing a world-famous novelist discuss the exhilaration of creative writing, a student may become very enthusiastic about writing However, the writer's behavior gives no clues about the years of practice, study, rejections, and rewrites that came before Thus, the observer does not have access to information about all the steps involved in becoming a master of writing

What is the difference between explicit and implicit rules?

- There is a continuum from explicit rules to implicit rules—from fully formulated rules to sketchy abbreviations.
  o Children usually need rather clear and explicit rules if they are to succeed in following them, and they may not find enough useful information in a sketchy one. For example, telling a child to "set the table" is vague and sketchy, whereas the child may do a better job when given more explicit rules, such as "Put the napkins on the left side of each plate; then come back and I will tell you what to do next."
- As people gain experience with rule use, most learn to do rule-governed behavior in response to increasingly vague and sketchy rules—including statements in which a rule is only implicit, such as "The table, Joe." For a listener without that experience, such a sketchy statement may not be enough to get Joe to set the table when guests are expected.
- Naturally, a person may be skillful at locating implicit rules in some areas of life but insensitive to implicit rules in other areas, depending on prior experience in each area.
  o An investor may detect the implicit rule in a tip to put money into Pogwash properties but fail to detect implicit rules in conversations about the importance of treating his partner with kindness and affection.

Can you list at least five primary punishers? Do primary punishers show a threshold effect? Explain your answer clearly.

- Five primary punishers mentioned in the text: cuts, burns, painful falls, sharp scratches, and effort that exceeds a certain level.
- Usually there is a threshold effect with punishers (see the sketch below).
  o Effort must exceed some threshold before we find it aversive, and sharp scratches must surpass a certain limit before they are aversive. Once above the threshold, increasing the intensity of the US increases the strength of the punishment effect.
  o For example, if your neighbor's cat gives you a superficial scratch every once in a while, this mild punisher may slightly suppress the frequency of your picking up the cat. However, if the cat regularly gave you quite painful scratches, the intense punishment might completely suppress your touching the cat.
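The textbook gives no code, but a minimal Python sketch can illustrate the threshold idea: punisher intensities below a threshold leave the response rate untouched, and intensities above it suppress responding more as they grow. The function name, threshold, and scaling values are invented for illustration only.

```python
# Toy illustration (not from the textbook) of a threshold effect for punishers.

def suppressed_rate(baseline_rate, punisher_intensity, threshold=2.0, scale=0.25):
    """Return the response rate after punishment.

    baseline_rate      -- responses per hour before punishment (hypothetical units)
    punisher_intensity -- arbitrary intensity units (e.g., how painful a cat scratch is)
    threshold, scale   -- invented parameters for this sketch
    """
    if punisher_intensity <= threshold:
        return baseline_rate                                 # below threshold: no punishment effect
    suppression = scale * (punisher_intensity - threshold)   # above threshold: stronger US, more suppression
    return max(0.0, baseline_rate * (1 - suppression))

for intensity in (1, 2, 3, 5, 8):
    print(intensity, round(suppressed_rate(10.0, intensity), 2))
```

Running this prints an unchanged rate for the two mild scratches and a rate that falls toward zero as the scratches become quite painful, mirroring the cat example.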

Why can vague rules—such as "The table, Joe"—be effective? Could a four-year-old follow that rule? How does this show the importance of learning in being skillful at rule use?

- Vague rules such as "The table, Joe" can be effective only for listeners who have enough experience with rule use to extract the implicit rule; individuals without that experience, especially children, need rather clear and explicit rules that tell them what to do and how to do it.
  o A 4-year-old could not follow "The table, Joe" because it does not carry enough useful information to get Joe to set the table when guests are expected, especially if Joe has never set a table before.
- This highlights the importance of learning in being skillful at rule use: with experience in rule use, individuals eventually advance to the point where they can extract rules from verbal statements that are not in rule format.
  o An implicit rule in a sentence can thus be extracted, but following such unspoken rules requires skills that come only with experience.

How does the violence shown in the media affect people's behavior? Give examples to demonstrate these media effects.

- Violence shown in the media affects people's behavior by presenting models of a wide range of behaviors, some desirable and some undesirable.
  o There has been a great deal of social concern that the frequent and vivid presentation of violence and brutal behavior in the media may increase people's use of violence in their daily lives.
- For example, do movies and TV shows that contain violent scenes in which a woman is beaten and raped produce victim-modeling effects, increasing the likelihood that women will be assaulted and raped?
  o Media coverage of punishment can also have vicarious effects. Data from England for a 63-year period in which capital punishment was used and given extensive media coverage reveal a decrease in homicides during the 2-week period immediately after a highly publicized execution, reflecting the effects of vicarious punishment.

What are primary reinforcers and punishers? What are secondary reinforcers and punishers? How do they differ?

- We are biologically prepared to respond to primary reinforcers and primary punishers without having to learn to do so.
  o Our survival depends on them, and most of us are born capable of feeling both pleasure and pain.
- Stimuli that are regularly associated with primary reinforcers and punishers often become secondary reinforcers and secondary punishers.
  o These can vary considerably among individuals because they depend on our unique personal learning experiences.
- All of the primary reinforcers and punishers are biologically important stimuli, essential to survival.
  o Our ability to respond to them allows us to be rewarded for locating food, water, optimal temperature, sex, and other primary reinforcers, while being punished when we cut, burn, or otherwise harm our bodies.
  o Primary reinforcers and punishers are also called unconditional reinforcers and punishers to indicate that they derive their power from biological sources and do not need conditioning to be effective.

How are primary reinforcers and punishers traced to natural selection and evolutionary causes?

- We can trace the capacity to be reinforced and punished to natural selection.
  o Our ability to respond to food and water as reinforcers—and cuts and burns as punishers—is crucial for survival; hence we have evolved to respond to them.
- Individuals who are biologically predisposed to respond to food, water, and sex as primary reinforcers are more likely to survive and reproduce than individuals without these predispositions.
  o People who are biologically predisposed to respond to cuts, burns, falls, and other hazards as punishers are more likely to survive than individuals—such as Tanya—without these capacities.
- Species such as humans have evolved the elaborate brain structures needed for very complex learning.
  o We are also born with numerous reflexes and the capacity to learn vastly more complex behavior via operant and Pavlovian conditioning.

Why are we told to act like Romans when in Rome?

- When we do not know how people behave in a certain situation, we are often well advised to observe the behavior of people who are familiar with the situation.
  o This, in turn, is why we act like the Romans when in Rome.
- Observing others is often a fast way to learn new behaviors—or to learn which of our preexisting behaviors is most appropriate in a new situation.
  o In many situations, observational learning is faster than shaping.
- Throughout much of history, observational learning has been the central way that culture and knowledge have been transmitted to the next generation.
  o Copying may seem uncreative, but it actually helps keep the creative knowledge of prior generations alive and moving on to the next generation—if there are still reinforcers for using the behavior that we copy.

When Angela had her car crash, did all the stimuli present at the time of the accident become CSs for apprehensive emotions? Which were most likely to become CSs? Why? How did the experience change Angela's emotions? How did it change her driving?

- When Angela's car began to spin out of control on a wet, curvy road, her eyes were wide open.
  o She was super-aware of the car's motion, the trees spinning overhead, and a small road sign she hit before crashing into a tree.
  o Countless stimuli were present, and they did not all become CSs associated with the fear of a near-death accident. The trees and road signs were not the cause of the accident, and they did not become CSs for fear.
    • The stimuli that most powerfully predicted the accident were speed, wet pavement, and curved roadways, and these are the stimuli that became the CSs Angela learned to associate with anxiety.

What is punishment? What are the two different types of punishment, based on addition and subtraction? What needs to be added to punish behavior? What needs to be subtracted to punish behavior? Give examples to make this clear.

- When an operant behavior is followed by an aversive experience that suppresses the frequency of the operant, the aversive stimulus is called a punisher.
  o The process by which an operant is suppressed is called punishment.
- For example, if a person receives a hefty fine after driving through a red light, the punishment is likely to suppress the behavior of running red lights in the future.
  o Response suppression can take place even if you are unaware that others' comments have caused you to stop mentioning the criticized topic.
  o After people receive traffic fines for driving through red lights, red lights become S-Deltas for not driving through them, whereas green lights are SDs for driving ahead.
- When punishment is opposed by reinforcement, behavior is influenced by the relative intensity and frequency of both the punishment and the reinforcement (see the sketch below).
  o The opposing effects of reinforcement and punishment are based on the intensity of each.
  o For example, if a light switch in the hall works (a reinforcer) but gives a weak shock (a mild punisher) when used, people may continue to use the switch at night because the reinforcement of light outweighs the discomfort of a weak shock. However, if the shock were more intense and painful, the punishment of the shock might outweigh the reinforcement of light and suppress responding.
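As a rough illustration of how the relative strengths of reinforcement and punishment might be weighed against each other, here is a toy Python sketch. The decision rule and the numeric values are assumptions made for this sketch, not the textbook's model.

```python
# Toy sketch (invented numbers) of reinforcement and punishment acting on the same
# operant: responding persists only while the reinforcer outweighs the punisher.

def will_keep_responding(reinforcer_value, punisher_value):
    """Crude decision rule: keep responding if the net consequence is positive."""
    return (reinforcer_value - punisher_value) > 0

# The hall light switch example: light is worth +5 (arbitrary units).
print(will_keep_responding(reinforcer_value=5, punisher_value=1))  # weak shock    -> True, keep using the switch
print(will_keep_responding(reinforcer_value=5, punisher_value=9))  # painful shock -> False, responding is suppressed
```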

How can one person empathize with the emotions of another person?

- When children are standing near the edge of a steep cliff and we are standing safely below, and we have had painful falls from high places, we can empathize with—and feel—the emotions of other people who have had similar experiences.
  o Seeing the children at risk of falling provides the CSs that are predictive of pain, and it elicits our feelings of fear.
- Whenever people have had similar Pavlovian conditioning with any given emotional situation, their similar conditioning allows them to empathize with each other.
  o People who have not had similar Pavlovian conditioning with a specific stimulus pattern may not have similar (or empathetic) emotional responses in that situation. Because no 2 people have had exactly the same past learning experiences, no 2 people have exactly the same emotional responses to daily events; hence perfect empathy is beyond our grasp.
    • But the more that 2 people share important life experiences, the more empathy they can feel for each other.

How do secondary reinforcers and punishers come into existence? What kind of learning experiences produce them?

- When neutral stimuli repeatedly precede and predict the consequences of our actions, they become secondary reinforcers; when neutral stimuli repeatedly precede and predict punishers, they become secondary punishers.
- For example, a $50 bill is merely a piece of paper, and young children respond to it no differently than they do to other colorful printed paper.
  o However, after a few years of having money paired with a broad range of other reinforcers, such as food, drink, movies, toys, and clothes, most children learn to respond to money as a strong secondary reinforcer (see the sketch after this answer).
- Social cues often become secondary reinforcers and punishers.
  o Social attention, smiles, approval, sincere praise, and signs of affection are secondary social reinforcers for most people because they often precede and predict other rewarding social events.
  o Frowns, scowls, criticism, insults, and signs of dislike are secondary social punishers for most people because they often precede or predict painful experiences.
- Secondary stimuli can lie anywhere along a continuum from strong secondary reinforcers to strong secondary punishers, with the weakest stimuli near the center of the continuum.
  o Because each individual has a unique history of conditioning, the same stimulus can function as a secondary reinforcer, a neutral stimulus, or a secondary punisher for different individuals. One person may find sarcastic humor very reinforcing and learn a range of skills for weaving such dark humor into conversation; another person may find sarcasm very aversive and avoid people who use it.
  o Almost all people are born responding the same way to primary reinforcers and punishers, but unique learning experiences and conditioning lead us to respond quite differently to secondary reinforcers and punishers.
- Our ability to learn to respond to secondary reinforcers and punishers extends the power of the primary reinforcers and punishers to function "by proxy"—through these secondary stimuli—in situations where the primary reinforcers and punishers are not present.
  o For example, the secondary reinforcer of money takes its power from the primary reinforcers that can be acquired through spending money.
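A minimal Python sketch of how repeated pairing could give a neutral stimulus (such as a $50 bill) predictive value. The error-correction update and the learning rate are generic assumptions chosen for illustration, not a formula from the textbook.

```python
# Toy sketch of a neutral stimulus acquiring secondary reinforcing value
# by repeatedly preceding a primary reinforcer.

def pair(value, primary_value, learning_rate=0.2):
    """Move the stimulus's predictive value a step toward what it predicts."""
    return value + learning_rate * (primary_value - value)

money_value = 0.0             # starts out as a neutral piece of paper
for trial in range(15):       # money is repeatedly exchanged for primary reinforcers (value 10)
    money_value = pair(money_value, primary_value=10.0)

print(round(money_value, 2))  # approaches 10: money now functions as a secondary reinforcer
```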

What reinforcers are there for talking as if our behavior were rule-governed, even when that is not usually 100% true?

- When people are put in situations where they have to explain their actions, some tend to explain their behavior as if it were rule governed.
  o However, in many cases it is difficult to determine whether such verbal accounts are ad hoc descriptions of natural behavior or accurate summations of rules that were actually used to guide rule-governed behavior.
- Because most behavior is the product of many influences besides rules, it is often unwise to infer that behavior is strictly rule governed even when people allude to rules to explain their activities.
  o All too often, verbal explanations do not reflect the real causes of behavior. People are especially handicapped in describing natural behavior because, first, it was learned without verbal instruction and, second, most people are not aware of how differential reinforcement, models, and prompts affect their behavior. Hence, they find it hard to explain the real causes of their behavior.
- Even when people do not know why they did a certain behavior, they may create any of a large number of feasible verbal accounts for it. Offering a credible account—especially if it sounds intelligent—often brings more reinforcement than saying nothing and appearing ignorant about one's own actions.
  o By making behavior sound rational and rule guided, people avoid some of this criticism; as a result, most of us learn to talk as if our behavior were rational, planned out, and rule governed, especially if it helps us avoid criticism. The more skillful people become at inventing reasonable and intelligent-sounding accounts, the more likely they are to escape aversive consequences and to be admired for their intelligence.
- Even if rules were used in the initial learning of a behavior, all the extra firsthand experience that refines the early rule-governed behavior into smoothly polished performances is rarely captured by restating the rules in simple verbal accounts.

What are generalized reinforcers and punishers? Give some examples. How did they gain their generalized power?

- When people learn that certain stimuli are predictive of a variety of different kinds of reinforcement across a broad range of circumstances, these secondary reinforcers become generalized reinforcers.
  o Money is an excellent example of a generalized reinforcer. Because money can be used to obtain many positive reinforcers or to avoid many punishers in countless different situations, most people learn to respond to money as a generalized reinforcer.
  o Although money is a generalized reinforcer for most people, it is not universal. Some societies use shells, arrowheads, necklaces, and other items for trade. However, any object can become a generalized secondary reinforcer when a group of people uses it in exchange for numerous other reinforcers.
  o Even though money is a generalized reinforcer for most people in our society, we may learn discriminations that prevent all money from being a reinforcer in all circumstances.
  o Because money can be exchanged for almost anything people might want, this generalized reinforcer has become a "market signal" on which the entire modern global economy is built.
- When people learn that certain stimuli are predictive of a variety of different kinds of punishment across a broad range of circumstances, these secondary punishers become generalized punishers.
  o Frowns, cold stares, harsh tones of voice, criticisms, and other related social cues tend to be generalized punishers for most people.
- Some people gain operant control over reflexive gestures and can generate fake smiles and grimaces without effort.
  o Those of us who observe fake signals often learn to discriminate between fake smiles and genuine smiles, theatrical frowns and real frowns, along with other related expressions.

What causes the extinction of a secondary reinforcer or punisher?

- When secondary reinforcers and punishers no longer precede and predict other reinforcers and punishers, extinction takes place and the secondary stimuli lose their power.
  o They cease being informative, and we cease paying much attention to them.
- Secondary reinforcers are on extinction when they no longer predict other reinforcers.
  o If a new roommate invites you to go sailing this weekend, the invitation is good news—a secondary reinforcer—if similar invitations have often been followed by rewarding experiences in the past.
  o However, if your roommate cancels, and 6 out of 6 invitations end in cancellations, invitations from your roommate will lose their power as secondary reinforcers due to extinction (see the sketch below).
- Secondary punishers also lose their power during extinction.
  o If you move into a new apartment and a cranky neighbor threatens to call the police every time you play your stereo, the threats are bad news. Because they are secondary punishers, they elicit aversive emotional responses and may stop you from playing the stereo for a while.
  o However, if you turn on the stereo later that week, hear more threats, and the threats are not followed by any other form of punishment, the threats will begin to lose their power as secondary punishers.
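The same kind of toy update used in the pairing sketch earlier can illustrate extinction: once the invitations stop predicting any rewarding outing, their value drifts back toward zero. The update rule and the numbers are invented for illustration only.

```python
# Toy sketch of extinction of a secondary reinforcer: the roommate's invitations
# stop predicting anything rewarding (primary_value = 0), so their value decays.

def pair(value, primary_value, learning_rate=0.2):
    """Move the stimulus's predictive value a step toward what it now predicts."""
    return value + learning_rate * (primary_value - value)

invitation_value = 8.0              # built up by past invitations that led to good times
for cancelled_trip in range(6):     # 6 out of 6 invitations end in cancellations
    invitation_value = pair(invitation_value, primary_value=0.0)

print(round(invitation_value, 2))   # the invitation has lost most of its power
```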

What kinds of learning experiences cause us to discriminate that certain social reinforcers and punishers are fake in certain circumstances?

- Young children often respond to most compliments as secondary reinforcers, because praise from adults is often predictive of various other rewards.
  o Yet as the years go by, most of us have experiences in which other people manipulate us with insincere compliments and fake flattery, with the consequence that we end up stuck doing something aversive.
  o For example: "Gee, Jim, you're so good with the kids, why don't you take care of them for the afternoon?" After Jim hears the compliment, he may agree to help out, only to be burdened with a difficult 3 hours of work. After a couple of such experiences, Jim will learn to discriminate between sincere and manipulative compliments, so manipulative compliments and overly generous praise eventually become secondary punishers. These compliments are now visible to Jim as "bad news": they are CSs that punish Jim's willingness to help the manipulative person, make him feel negative emotions, and act as SDs for avoiding the situation.
  o If we notice that various people use manipulative praise and fake compliments, we may increasingly distrust them—which in turn causes us to appreciate and trust the people who do not manipulate others.

Can you explain the change from undifferentiated to differentiated responses in throwing baseballs? In being a salesperson?

- The process of response differentiation can be seen in many everyday situations.
  o For example, when children are first learning to play baseball, outfielders are often less than outstanding in throwing the ball from the outfield to home plate.
    • Some throws reach home plate, but others fall short, and this leads to differential consequences and response differentiation.
  o Poor throws are not reinforced, and they may even be punished by criticisms from the coach or other players. Good throws are reinforced by holding a runner at third base, producing an out at home plate, or maybe facilitating a spectacular double play.
  o In many social situations, a person's opening lines can make or break a social interaction. A good first approach is reinforced; a poor one is punished. People who are frequently thrown into social situations will have their behavior modified by differential reinforcement.
    • For example, a door-to-door saleswoman may depend heavily on her introductory few sentences to get her foot in the door.

What happens in homes that produce children who can use self-given rules to solve problems and guide their own behavior effectively?

- Self-instruction usually begins in the first 5 years of life.
  o These self-instructions can be traced back to the home environment.
- In some subcultures, parents do not converse much with their children, relying on gestures more than words.
  o Children from such taciturn homes do not engage in much verbal self-instruction, and they are slow in mastering new skills.
- In other subcultures, parents talk frequently with their children, interacting in a warm and responsive manner.
  o Parents who generously share words of wisdom—and explain how one talks oneself through problems—are giving their children precious gifts.
  o Children from such loving, verbal homes can tackle new problems with optimism and a large repertoire of verbal rules for confronting difficulties and solving problems.
- Typically, young children talk audibly to themselves, using rules to guide their own behavior, but the self-talk and self-instruction gradually fade into inaudible muttering, and finally the child simply thinks through the words silently—as an inner monologue.
  o The processes by which your audible self-instructions faded into muttering and finally silent self-talk happened so many years ago that you cannot recall how it happened.
  o Now you are gaining knowledge about rule-use skills that can help you become even more skillful at self-instruction.
- Not all children receive the best start in learning how to verbalize about their own behavior, their options in life, the possible consequences of each option, and ways to guide themselves toward the best option.
  o Behavior modification and cognitive behavior therapy provide useful information—and clear rules—for improving each and every step in that chain. By sharing these words of wisdom, we can help the next generation learn how to guide their own behavior more effectively via carefully designed self-talk.

Define and explain the words reinforcer and reinforcement. What are the two different types of reinforcers? How do they differ? Give examples to make this clear.

- A reinforcer that follows an operant increases the likelihood that the operant will occur in the future.
  o The process by which the frequency of an operant is increased is called reinforcement.
- The word "reinforce" means to strengthen: reinforcers strengthen operant behavior and make the operant more likely to occur in the future.
- The word "reward" is often used as an everyday substitute for "reinforce."
  o A behavior that leads to rewarding results—or good effects—will probably recur more frequently in the future.
  o But the word "reward" can be misleading if we think of it only in the narrow sense of one person intentionally rewarding another.
    • In fact, good effects and rewarding outcomes can come from many sources.
- The speed with which a person learns an operant behavior depends on the complexity of the operant, the person's present level of skills, the reinforcers involved, and numerous other variables.
  o It is usually easy to learn simple skills with even minimal reinforcement, but it can take many hours to learn an operant that involves several sets of complex skills. For example, people with some athletic skills are often quick to pick up a new activity—but individuals with few athletic skills may find these activities difficult to learn.
- Babies usually start life with very simple behaviors, such as sucking on nipples or crying when experiencing pain. But they are capable of learning from the beginning of life, and their learning often follows a "learning curve."
  o Language learning gives us a wonderful example of a learning curve. Babies have no language skills in the early part of life, but adults talk to them, and they begin to recognize words.
  o The more words that parents and others say to children, the faster the children learn to talk, especially if people give reinforcers generously when a child uses words (see the sketch below). In homes with verbal families and loving responses for youngsters who pick up new words, the learning curve starts to rise quickly and accelerate. In homes where families do not talk much and do not share words with their children, the learning curve may not rise as quickly, due to the lack of words and the dearth of positive reinforcement for talking.
  o Learning curves for language rise for years. Typically, children from very verbal homes can learn far more words, taking their learning curves to quite high levels. In contrast, children from less verbal homes, where the family uses few large and complex words, may learn far fewer than 30,000 words.
  o People who take schooling seriously and enter vocations that require large vocabularies can end up with enormous vocabularies, showing learning curves that rise unusually high and keep rising as long as they remain engaged in complex verbal worlds.
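A toy Python sketch of the vocabulary learning-curve idea. The yearly word counts, reinforcement rates, and the small acceleration term are all invented numbers, not data from the textbook.

```python
# Toy learning-curve sketch: in a home where word use is often reinforced,
# cumulative vocabulary grows faster than in a home with little talk or praise.

def vocabulary_curve(years, base_words_per_year, reinforcement_rate):
    """Cumulative words learned per year under a crude, invented growth rule."""
    vocab, curve = 0, []
    for year in range(1, years + 1):
        # more reinforcement (and more practice over time) -> faster pickup
        learned = int(base_words_per_year * reinforcement_rate * (1 + 0.1 * year))
        vocab += learned
        curve.append(vocab)
    return curve

print(vocabulary_curve(6, base_words_per_year=8000, reinforcement_rate=0.30))  # verbal, responsive home
print(vocabulary_curve(6, base_words_per_year=3000, reinforcement_rate=0.10))  # less verbal home
```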

What is the ABC formulation of operant learning? Explain all three elements and how they interact. Give examples to make this clear

- A stands for the antecedent cues that exist before an operant action, B stands for the behavior, and C stands for the consequences that occur after the behavior.
  o The relationship among these 3 elements is that antecedent cues (A) come before the behavior (B), and the behavior (B) produces the consequences (C) that occur after the behavior. The arrow between B and C indicates that the behavior produces the consequences, and the colon between A and B indicates that antecedent cues do not cause behavior; they merely set the occasion for the behavior (A: B → C) (see the sketch after this answer).
    • For example, having a canvas on an easel does not cause an artist to create a picture; it does, however, set the occasion for the artist to explore various options.
  o Operant behavior is usually produced through the voluntary nervous and muscular systems, and antecedent stimuli give cues about the outcomes of various possible voluntary actions—rather than signifying a specific response.
  o During operant learning, the consequences of a behavior influence (1) the frequency of the behavior in the future and (2) the ability of future antecedent cues to set the occasion for that behavior. The consequences (C) influence the future status of both A and B.
    • Therefore, the starting point for analyzing the ABC relationship is the consequences.
  o The consequences following an operant behavior are the prime movers of operant learning. The consequences can be either good effects or bad effects.
    • Good effects cause the behavior (B) to become more frequent in the future and influence the antecedent stimuli (A) to set the occasion for repeating the behavior in the future.
    • Bad effects drive down the frequency of the behavior in the future and lead the antecedent stimuli (A) to signal that the behavior should not occur.
  o Any operant behavior can be strengthened or weakened, depending on the type of consequences that follow the behavior. Reinforcers are consequences that strengthen a behavior; punishers are consequences that cause a behavior to become less frequent. We learn from the consequences of our actions.
    • For example, driving carefully is reinforced by the enjoyable consequence of arriving safely, and driving too fast can be punished by the aversive consequences of tickets or accidents.
  o When learning any given behavior, we often respond to both reinforcers and punishers. We all tend to be quite responsive to good and bad consequences, adjusting our behavior accordingly.
  o Antecedents: Every moment of our lives, we are surrounded by countless stimuli—from both inside and outside our bodies—that set the occasion for each next thought and act. Antecedent cues that preceded behaviors that were reinforced in the past tend to set the occasion for repeating those behaviors.
    • Antecedent cues that preceded behaviors that were punished in the past alert us not to repeat them. Sometimes a stimulus context contains both types of cues, and we feel torn.
    • The past consequences of winning or being stung influence the way 2 people can perceive the very same antecedent stimulus. Our thoughts and actions can be influenced by multiple and often subtle antecedent stimuli, or contextual cues.
    • Our brains are always active in hunting for connections among the 3 parts of the ABC equation.
  o The operant: the unit of behavior we study in cases of operant learning is called the operant. Operants are usually defined by their ability to produce certain consequences, not by their physical appearances.
    • Any behavioral variation (B) that results in rewarding consequences (C) belongs to the same category of behavior. Every kind of operant can be defined in terms of the types of consequences it produces.
    • Thus, there are operants related to "funny jokes" and other operants related to "jokes that flop" too often.
    • After enough years, we often learn to tell the difference between sincere praise and insincere flattery when we hear them—because the first is linked with good consequences and the second can hurt us.
  o Because the consequences called reinforcers and punishers are the prime movers of operant learning, the remainder of this section is organized around the 5 fundamental ways in which consequences affect operants.
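A minimal Python sketch of the A: B → C relation described above. The Operant class, the strength value, and the numeric effect are illustrative assumptions, not part of the ABC formulation itself; they just show consequences feeding back on how strongly a cue sets the occasion for a behavior.

```python
# Toy sketch of A: B -> C: an antecedent cue sets the occasion for a behavior,
# the behavior produces a consequence, and the consequence changes how likely
# the behavior is the next time that cue appears.

from dataclasses import dataclass

@dataclass
class Operant:
    antecedent: str        # A: cue that sets the occasion (it does not cause the behavior)
    behavior: str          # B: the operant act
    strength: float = 1.0  # tendency to emit B when A is present (invented scale)

    def consequence(self, effect: float):
        """C: good effects (positive) strengthen B in this context; bad effects weaken it."""
        self.strength = max(0.0, self.strength + effect)

run_red_light = Operant(antecedent="red light", behavior="drive through")
run_red_light.consequence(-0.8)                  # hefty fine: a bad effect
print(round(run_red_light.strength, 2))          # 0.2 -> the behavior is now much less likely at this cue
```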

What new things begin to happen as children enter the early school years?

- The early school years require young people to learn more complex things than those that occupied them in the preschool period.
  o When children first go to school, they face new challenges not only with the topics they are taught but also with socializing in larger groups of individuals.
- In school environments, the learning of verbal skills becomes increasingly important.
  o Both nature and nurture can be operating.
- The better we understand the learning theory, the more easily we can step in and help young people of all ages learn most easily and take pleasure from the things they are accomplishing.
- Getting an opportunity to be in a class play can open numerous doors of opportunity.
  o Unfortunately, plays, for example, are not always offered, or may not take every child who tries out.
- Our society would benefit if you cared to improve any part of education.
- Perhaps it takes a certain teacher or task to bring the right challenges and learning opportunities into a young person's life.
  o Family, relatives, and peers can also help the young person thrive and flourish, kick-starting a period of strong dedication to learning as much as possible. There are countless ways that learning can affect a person's life.

What is the nature-nurture fallacy? What is the alternative to it? Why is this important?

- In explaining behavior, people tend to use either-or logic, wanting to explain behavior as if it were caused entirely by genes (nature) or entirely by prior learning (nurture).
  o No behavior can be completely explained by genes alone or learning alone, because nature and nurture are almost always intertwined.
- Our genes operate all through life, decoding the DNA in our cells to construct new building blocks for our bodies.
  o Learning also occurs all through life, especially if we are life-long learners. In addition, learning about learning theory helps make it easier to become a life-long learner.

What are the key points of the first part of the preface?

- Over the past 300 years, science has proven to be the most powerful tool ever developed for answering the questions arising from human curiosity. In the past 100 years, numerous powerful scientific principles about learning and behavior have been discovered.
- Much of our behavior is learned.
- It is especially valuable to understand how people learn:
  o It helps you make your own learning more exciting and rewarding.
  o It makes it easier to use positive reinforcers to learn a large range of things.
  o It helps you understand other people better.
  o Positive exchanges bolster the bonds of friendship and love.
- The body of knowledge needed to understand how people learn is called the learning theory, and the principles of this theory are referred to as behavior principles.
  o Most of what human beings do is learned through socialization.
  o The learning theory is concerned with all of human behavior—talking, thinking, etc. Sequences of thoughts, feelings, and overt actions all intermingle as we act upon our world.
    • Everything we do is behavior.
- The experiences of each day provide us with countless interesting examples of natural behavior.
- Behavioral understanding will help you better guide and control your own thoughts, feelings, and actions vis-à-vis the goals you have.
  o The more you know about the whole range of human actions, feelings, and emotions, the better you can learn how to improve the quality of your own life, including your feelings and emotions.
- The learning theory provides a vocabulary that gives you awareness of the details of your life and can help you not only talk to others but also develop increasing awareness of your life and of ways to glide into a happier future.
- New scientific breakthroughs show how important it is to set life goals for being positive, finding meaning in life, building solid relationships with others, and celebrating our accomplishments in life.
  o You increase the chances of having successes when you pay attention to your actions and give yourself "positive feedback" for any improvements.
  o The mere act of sharing valuable knowledge brings pleasure, and others often appreciate it.
- Much of our thoughts, emotions, and actions are learned, making learning theory centrally important to leading fulfilling lives and flourishing while we do it.
- Learning takes place all across the life span, beginning even before birth.
  o The human brain has 86 billion neurons that connect to each other at 200 trillion synapses. The nerve cells and synapses change all the time as we experience things and learn.
    • The synapses can "rewire" and grow and change rapidly.
- We can flourish at all parts of the life span, starting in childhood. The more you know about learning and flourishing, the better your chances of feeling excited about life.

What is the law of effect? How does it explain the basics of operant learning? Give a simple and clear example.

- The law of effect is based on the discovery that voluntary behavior is influenced by its effects.
  o Instrumental behavior is changed by its outcomes or consequences. For example, if an artist experimenting with pastels creates some lovely effects, the good effects increase the chances that the artist will use pastels more in the future.
- According to the law of effect, behavior that produces good effects tends to become more frequent over time, and behavior that produces bad effects tends to become less frequent (see the sketch below).
- Subsequent reformulations of the law of effect recognize the importance of the stimuli that precede behavior.
  o Behavior is influenced not only by the effects that follow it, but also by the situational cues that precede it.
    • For example, stepping on the gas has good effects when the light is green and bad effects when it is red.
  o As a result, people become sensitive to situational cues, especially to antecedent cues that precede their behavior and let them know whether a behavior is likely to produce good effects or bad ones.
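A toy Python simulation in the spirit of the law of effect, using the pastels example. The behaviors, effect sizes, and weight-update rule are invented for illustration and are not the textbook's method.

```python
# Toy sketch of the law of effect: behaviors followed by good effects are
# selected more often over time; behaviors followed by bad effects fade.

import random

weights = {"use pastels": 1.0, "use charcoal": 1.0}    # start out equally likely
effects = {"use pastels": +0.5, "use charcoal": -0.3}  # lovely results vs. disappointing ones (hypothetical)

random.seed(0)
for _ in range(50):
    choice = random.choices(list(weights), weights=list(weights.values()))[0]
    weights[choice] = max(0.1, weights[choice] + effects[choice])   # good effects strengthen, bad effects weaken

print(weights)   # pastel use has grown far more frequent than charcoal use
```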

How can role models help you find purpose in life and make wise life choices?

- Try to get close to people who have interesting lives, careers, or jobs that they genuinely love. Find out how they discovered their goals in life and how they got into their positions.
  o Ask people in your family, school, or neighborhood if they know interesting individuals you could approach to talk about what schools they went to, how they grappled with some of the difficult courses that often stymie people who want their types of positions, and so on. Write down the best ideas to keep for the future.
- Try to obtain internships with hospitals, sports teams, law firms, or any organizations that might allow you to get close to people who can share valuable career information with you. Ask successful people if you can shadow them.
- The closer you get to people who have lives that appeal to you, the more you will learn about the concrete steps you need to take next. The more unique people you meet or read about, the better your chances of picking future routes that will help you flourish.
- Look for possible role models who might inspire you. If you have a strong wish to thrive and flourish, you need to take an active role in discovering the things that excite and motivate you the most.
  o Sample widely from as much of life as you can; it will increase your chances of finding role models and information for reaching wonderful life goals and flourishing.

Explain discriminative stimuli (antecedents). What is the difference between SDs and S-Deltas? What consequences produce SDs and which consequences produce S-Deltas? Give examples to make this clear.

- When a behavior is followed by a reinforcer, not only is the behavior strengthened, but any relevant antecedent stimulus also takes on special qualities, becoming a discriminative stimulus (SD), or stimulus for discrimination.
  o We can now write the simple ABC equation in more precise terms:
    • An SD sets the occasion for operant behaviors (B) that have led to reinforcers in the past.
- When behavior (B) is followed by a reinforcing stimulus in one context but not in other contexts, any antecedent context cue associated with reinforcement becomes a discriminative stimulus (SD). Each SD helps set the occasion for future responses.
    • For example, if an American is in a foreign country where very few people speak English, signs reading "English spoken here" become SDs that set the occasion for speaking English with the natives.
  o SDs predict when and where behavior is likely to be reinforced.
- Things are quite different when a behavior is not followed by reinforcement.
  o The stimuli that best predict that no reinforcers will be forthcoming become S-Deltas, which indicate that the behavior is not likely to be reinforced in this particular context. Whereas SDs signal that reinforcement is likely, S-Deltas are discriminative stimuli for not responding.
- SDs and S-Deltas signal us whether to act or not to act. Many cues can become SDs and S-Deltas for the same behavior (see the sketch after this answer).
    • There isn't just one SD or one S-Delta for each behavior.
  o Both SDs and S-Deltas are cues about doing a specific behavior.
    • An SD is a cue to go ahead and do the behavior.
    • An S-Delta is a cue to stop and not do the behavior.
- The stimuli that are SDs for one behavior may be S-Deltas for another behavior.
  o For example, when a foreigner who knows no English first visits the United States and confronts doors marked "push" and "pull," the person may not discriminate between these 2 cues and may randomly push or pull on doors until one of the 2 acts is rewarded.
- Any stimulus—a person, place, or thing—can become an SD for all the behaviors that have been reinforced in its presence and an S-Delta for all the behaviors that have not been reinforced in its presence.
- As people learn to discriminate between SDs and S-Deltas for a given operant, they are more likely to perform the behavior in the presence of SDs than in the presence of S-Deltas. However, SDs do not cause behavior.
  o For example, a foreign traveler may walk past several banks before entering one to exchange money, or buy postcards and only write 6.
    • SDs set the occasion for operants; they do not cause behavior.
- Antecedent cues that predict more reinforcement become SDs, and cues that predict less reinforcement become S-Deltas.
  o Thus, antecedent cues can become S-Deltas because they are associated with no reinforcement, or merely because they are associated with less reinforcement than is available elsewhere.
- There are 2 kinds of reinforcement: positive and negative. To reinforce means to strengthen, and both positive and negative reinforcement strengthen behavior.
    • They both increase the likelihood that people will repeat a behavior in the future.
    • In terms of the law of effect, positive reinforcement consists of the onset of good effects, and negative reinforcement consists of the termination of bad effects.
  o The onset of pleasurable music is a good effect that provides positive reinforcement for turning on the stereo. The termination of the alarm clock's aversive buzz in the morning provides negative reinforcement for turning off the alarm.
- Positive reinforcement occurs with the onset of a reinforcing stimulus, and negative reinforcement occurs with the termination of an aversive stimulus.
  o One way to remember the difference between positive and negative reinforcement is to think of addition and subtraction as synonyms for positive and negative.
    • Positive reinforcement strengthens behavior when good effects are added to our lives.
    • Negative reinforcement strengthens behavior when bad effects are subtracted.
- The antecedent cues that precede any kind of reinforcement—positive or negative—become SDs.
  o Thus, the SD of darkness in the evening sets the occasion for turning on the lights, because this operant activity produces the positive reinforcement of being able to see things at night.
- Most behavior can be strengthened by both positive and negative reinforcement.
  o Consider politeness: if your being polite is followed by smiles and kind words from other people, the onset of good effects provides positive reinforcement for your politeness.
- Whereas positive reinforcement is associated with good effects and pleasurable experiences, negative reinforcement occurs when we escape or avoid aversive experiences.
  o The rewards of negative reinforcement are based on the termination of aversive situations rather than the onset of good effects and pleasurable experiences. That is why the rewards of, for example, removing splinters and avoiding traffic tickets do not feel the same as the pleasures of positive reinforcement.
- There are 2 main types of behavior produced by negative reinforcement: escape and avoidance.
  o Negative reinforcement does not feel as good as positive reinforcement. Escaping or avoiding something aversive does not bring the pleasure that we obtain from receiving a positive reinforcer, though it has its own gratifications.
  o Escape involves reacting after an aversive event is present; avoidance involves proacting—taking preventative steps—before an aversive event arises.
    • For example, people react to hangovers by taking aspirin; they proact by not drinking so much that they would get a hangover.
  o Escape is usually learned before avoidance.
    • This is especially true in childhood: first, children learn to escape wet underwear by removing it, and then they learn to avoid it by not having accidents.
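Returning to the SD and S-Delta distinction above, here is a minimal Python sketch of how past consequences could gate whether the same behavior is emitted under different cues. The history table and decision rule are invented for illustration; they are not the textbook's procedure.

```python
# Toy sketch of discriminative control: the same behavior is emitted in the
# presence of an SD (reinforcement was likely in the past) and withheld in the
# presence of an S-Delta (it was not reinforced, or was punished, in the past).

history = {
    ("green light", "drive through"): "reinforced",   # arriving safely, no ticket
    ("red light", "drive through"): "punished",       # hefty fine
}

def act(cue, behavior):
    """Emit the behavior only if this cue has been an SD for it in the past."""
    return history.get((cue, behavior)) == "reinforced"

print(act("green light", "drive through"))   # True  -> green light is an SD for driving ahead
print(act("red light", "drive through"))     # False -> red light is an S-Delta for driving through
```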

