Task List (FK01-FK48)

Item FK33: functional relations

A functional relation is demonstrated when experimental control shows that an independent variable, and only that independent variable, is responsible for a change in the dependent variable. In other words, a functional relation shows that the independent variable, and not confounding variables, is responsible for the change in the dependent variable.

Item FK25: multiple functions of a single stimulus

A single stimulus can be more than simply an SD (discriminative stimulus). For a given organism, a stimulus may serve multiple functions, including SD, motivating operation, punisher, or reinforcer. A piece of cake may motivate you to finish all the dishes after a typical dinner, but it might be a punisher if you have just stuffed yourself with Thanksgiving dinner and pie.

Item FK18: conditioned reinforcement

A conditioned reinforcer strengthens behavior through its association with a primary reinforcer; conditioned reinforcement is learned. Money is a conditioned reinforcer because it is paired with what it can get a person (food, shelter, clothing, etc.). Imagine someone who is unfamiliar with money or payment and who wants food. If food is offered along with some meaningless green paper, the green paper will eventually take on reinforcing properties. Likewise, requiring this person to hand over some of that green paper before receiving food (food contingent on providing money) will tend to establish the green paper as a conditioned reinforcer. Conditioned reinforcement increases the likelihood of a behavior.

Item FK41: contingency-shaped behavior

Behavior that is controlled by the prevailing contingency is contingency-shaped behavior. Another description is behavior that is controlled by current environmental events and more immediate consequences. This is somewhat in contrast to rule-governed behavior, defined under Item FK42. Given a red light and a police officer driving behind you, you are likely to stop and wait for the light to turn green because of the environmental event of the officer behind you (that is, the real possibility of punishment for running a red light). If circumstances change and one is parked at the same light with no one around, the person whose behavior is contingency-shaped will be more likely to ignore the rule in favor of the current likelihood of punishment or reinforcement; in this case, reinforcement for running the light by arriving at the destination more quickly.

Item FK31: behavioral contingencies

Behavioral contingencies refer to the relationship between a specified behavior and the consequence associated with that behavior. Behavioral contingencies have been explained using "if/then" language. "If you eat your dinner, you'll get dessert." Contingency contracting is an agreement between two or more parties based on the if/then language. "If you do this, then I'll do this or you'll get this."

Item FK07: Environmental (as opposed to mentalistic) explanations of behavior

An environmental explanation of behavior describes the conditions under which a behavior occurs (the antecedents) and the consequences that shape future behavior (punishers and reinforcers). A mentalistic explanation points to concepts, human constructs, or circular reasoning to understand behavior. Many mental health diagnoses are good examples of circular reasoning if taken as causal. For example, a person who has "Oppositional Defiant Disorder" (ODD) may hit her parents when told "no." (Note the environmental explanation describing the "no" antecedent to the hitting behavior.) The mentalistic explanation might be that the ODD caused this behavior, that a lack of prefrontal cortex formation may be responsible, or that it was simply fate or karma that brought on this behavior. Describing behavior in environmental terms has at least a few advantages over mentalistic explanations. One is that we can observe, measure, and record environmental events and effects, whereas we can't with mentalistic explanations. Also, mentalistic explanations lead to "willy-nilly" interventions (e.g., use of essential oils, chelation treatment, blaming vaccinations for autism and refusing them because of this, etc.). The final reason we use environmental explanations is that we can apply behavior-analytic interventions and actually demonstrate a connection between treatment and an increase or decrease in behavior. The mentalist can claim to have done some "energy work" from across the country and people may report feeling better, but there is no way to show that this link is causal rather than a placebo or coincidence.

Item FK23: automatic reinforcement and punishment

Automatic reinforcement and punishment occur in the absence of social mediation; that is, the behavior itself inherently produces the reinforcement or punishment. Examples of automatic reinforcement include hair twirling, thumb-sucking, and masturbation. Examples of automatic punishment include touching a hot stove and receiving a burn, or becoming ill after ingesting poison.

Item FK10: behavior, response, response class

Behavior is everything that people do (e.g., how they move, what they say, feel, and think). It's the activity of living organisms. An organism's behavior can be detected as any measurable change in its placement in space and time. Even the slightest blink of an eye, or firing of a neuron, may be considered a behavior. Response is one instance of behavior; behavior and response are synonymous terms. Response class is a group of behaviors that all have some similarity. Usually a response class is defined by function, form, or an arbitrary grouping. Examples of these, respectively, are behaviors that all serve to access attention, behaviors that all involve use of the hands, or behaviors that move one through space. To expand on these examples:
● Function: Grabbing people's arms and pulling, yelling "Hey!", and quietly staring at someone with an expectant facial expression could be considered a response class with the function of getting attention.
● Form: Swinging a baseball bat, a golf club, or a kettlebell may belong to a response class because of their similar form or topography.
● Arbitrary: Walking, running, swimming, and jumping might be considered part of a response class even though they aren't topographically or functionally equivalent.
An example of how these come into play is a functional behavior assessment (FBA). A child is working on homework that he does not particularly want to do because he is interested in other things. His homework is placed in front of him and he is given the directive to do his homework. He responds by asking questions about his homework, talking about his day, or asking questions about the other person or some unrelated situation. The behavior's function is escaping/avoiding homework; the response class is asking questions, talking about the day, and asking other questions not related to homework. In this case, all of these behaviors have the same function.

Item FK39: behavioral momentum

Behavioral momentum characterizes the effects of the high-probability (high-p) request sequence. This is usually described as 2-5 requests with a high probability of compliance delivered just prior to a lower-probability (low-p) request, which has the effect of maintaining the response rate across requests. In other words, the low-probability request is honored because of the previous high rate of responses. Example: Calling out a person's name, reaching out to shake her hand, giving her a coupon, and then asking her if she'd be willing to answer a couple of questions. Keep in mind that the low-p request cannot be of such a low probability that it is unlikely to evoke a response at all. In the above example, if the last request were to fill out a ten-page survey and buy a condo in Florida, it's unlikely to happen despite the high-p request sequence.

Item FK30: distinguish between motivating operation and reinforcement effects

Behaviors that occur because of motivating operations occur because of an environmental variable that makes an organism want (or want to get away from) something; motivating operations alter the value of a reinforcer. Sleep deprivation makes sleep that much more valuable and causes a person to want to go to bed. Reinforcement effects, by contrast, make a behavior more likely because of the organism's history with the consequence. One might go to bed early because, in the past, doing so has meant not feeling sluggish the next day.

Item FK09: Distinguish between the conceptual analysis of behavior, experimental analysis of behavior, applied behavior analysis, and behavioral service delivery

Conceptual analysis of behavior looks at the theoretical and conceptual issues of behaviorism. Experimental analysis of behavior (EAB) involves laboratory settings with both human and nonhuman subjects, examining the fundamental principles of behavior through single-case design arrangements. Applied behavior analysis seeks to discover the functional relations between socially significant behavior and its controlling variables, and acts to influence those variables for the betterment of the individual and those around them. Behavioral service delivery is when behavior analysts design, implement, and evaluate behavior-change tactics derived from the fundamental principles of behavior.

Item FK34: conditional discriminations

Conditional discriminations occur when reinforcing a response to a stimulus depends on, or is conditional upon, other stimuli. For example, in a match-to-sample procedure a child may be given a carrot and then shown an array of a carrot, a beet, and a radish. Reinforcement for selecting the carrot as a match depends on discriminating the carrot from the radish and the beet.

Item FK27: conditioned motivating operations

Conditioned motivating operations (CMOs) are learned value-altering effects of an antecedent stimulus. They alter the momentary frequency of behavior or the value of other stimuli. These are often confused with discriminative stimuli, but are different. There are three types of CMOs: transitive (CMO-T), reflexive (CMO-R), and surrogate (CMO-S), discussed under Item FK28. For now, remember that these are conditioned, not instinctual.

Item FK20: conditioned punishment

Conditioned punishment is learned punishment. The first time we touch a hot stove we instinctively remove our finger (a reflexive, unconditioned response); this decreases one's likelihood of touching the stove in the future. If you touched a black wood stove and experienced pain, you'll likely act differently around a black wood stove the next time you encounter it. Neutral stimuli that are paired with aversives or aversive consequences are avoided in the future.

Item FK32: contiguity

Contiguity is the close temporal pairing of two or more events that occur at the same time or very close together. For learning to occur, a consequence must occur immediately after, or at the same time as, the behavior it follows. In the simplest terms, an organism must be able to connect the reinforcement or punishment to the stimulus that elicited or evoked the behavior.

Item FK43: Echoics

Echoics: This is simply a repetition of a verbal stimulus, an echo. There is point-to-point correspondence and formal similarity in an echoic; that is, the same sense modality is used and the response has the same beginning, middle, and end as the stimulus. An example is saying, "Big ol' black bear" when mimicking someone else saying the same thing.

Item FK04: Empiricism

Empiricism is the idea that experimentation, free from bias, is the best way to understand human behavior. A key point with empiricism is that empirical results are available for others to review and analyze; that is, the results are reported along with the methodology so that others may examine the effectiveness on their own.

Item FK11: environment, stimulus, stimulus class

Environment is considered "everything but the moving parts of an organism." Stimulus is "an energy change that affects an organism through its receptor cells." One way to look at a stimulus class is by the stimuli's physical features (formally), when they occur (temporally), and their effect on behavior (functionally). When a group of stimuli share common elements in one or more of these dimensions, you can place them within a stimulus class.
● Form: All triangles, things that are blue, things that light up.
● Temporal: Stimuli that precede talking, for example someone saying your name, a teacher asking you a question, or someone calling you a name.
● Function: Items that you would use to unfasten a bolt.

Item FK22: extinction

Extinction is when discontinuing reinforcement results in a diminishing rate, and eventually the total absence, of a behavior. If a child has previously been reinforced by receiving what they wanted when they whined, and then they stop receiving it, they might at first try harder (whine more); this is an extinction burst. However, the whining will decrease and eventually end.

Item FK38: behavioral contrast

If behaviors are targeted for reduction in one setting, or on one schedule of reinforcement, they may increase in other settings or on other schedules. This effect is behavioral contrast. This occurs without any changes in the other settings or on the other schedules. Knowledge of behavioral contrast can be useful in understanding increases or decreases in behavior that are not being explicitly targeted for intervention in certain settings, but are being targeted in others.

Item FK01: Lawfulness of behavior

Lawfulness of behavior is the idea that behavior occurs as a result of certain conditions. Behavior follows laws that describe how organisms (us) interact with stimuli. The lawfulness of behavior is that it follows these rules. We may not always know what rules a behavior is following (e.g. what the antecedent was, what MOs were in effect, the behavioral history of the person, etc.). Nevertheless, as behavior analysts, we understand that these rules apply.

Item FK08: Distinguish between radical and methodological behaviorism

Methodological behaviorism is attributed to John Watson and emphasizes the observation of behavior to understand why we do what we do. This is opposed to Skinner's notion of radical behaviorism. The main difference between these two schools of behaviorism is that methodological behaviorism doesn't recognize thoughts, feelings, or other "private events" as behavior; Skinner did. Whereas the methodological behaviorist will treat thoughts and feelings as if they don't exist, the radical behaviorist suggests that even though we may not be privy to observing thoughts and feelings, they likely follow the same basic principles of behavior such as reinforcement and punishment.

Item FK15: Operant Conditioning

Operant conditioning is a process in which the future frequency of a behavior is determined by its history of consequences. In the toilet-training example under Item FK14, training the child to use the bathroom results in social/verbal praise, as well as access to a tangible, when the child goes to the bathroom upon hearing the sound of the timer. The behavior of going to the bathroom is likely to increase after receiving the reinforcement.

Item FK05: Parsimony

Parsimony is the idea that one should look to simple explanations before considering more complex or abstract causes of behavior. This is an important concept in practice, but also for the test. Look for the simplest explanation before you add assumptions and/or start second-guessing yourself. Example: Johnny is a four-year-old preschooler. Lately, he's been throwing wooden blocks about twice per day, and has hurt some of his peers. He is diagnosed with ADHD and sees a therapist once per week. The behavior occurs at different times of the day, and seems to be a function of sensory-seeking or attention. A functional behavior assessment is requested. In the above example, it might be that simply removing access to the blocks would prevent the behavior. It's the easiest thing to try and may prevent an unnecessary report and intervention.

Item FK06: Pragmatism

Pragmatism, which is well-aligned with behavior analysis, is the concept that one can identify the truth by being able to replicate and verify results. Rather than guessing at the truth, a pragmatist acts to see that, for example, an NCR intervention really was responsible for reducing attention-related behavior.

Item FK47: Identify the measurable dimensions of behavior (e.g., rate, duration, latency, interresponse time)

Rate: Frequency over time. Examples include the number of tantrums per day, parts assembled per hour, and vocalizations per minute. Probably the most used measure of behavior occurrence, rate is useful because it is easily conceptualized and can be used when measuring behaviors to increase as well as behaviors to decrease.
Duration: Also known as temporal extent, this is how long a behavior occurs. It may be best used with behaviors that have longer occurrences, such as engagement in an activity (perhaps measured in minutes) versus use of foul language (perhaps measured in a second or two). Duration can be used for behaviors that you want to increase or decrease.
Interresponse time (IRT): The time between the end of one behavioral occurrence and the beginning of the next. Examples are time between smoke breaks, time between taking bites of food, or time between requests. To increase IRT means to decrease the target behavior; that is, the longer the period between responses, the less behavior is occurring. To decrease IRT means to increase the target behavior; the shorter the time between responses, the more behavior is occurring over time.
Latency: A measure of the time between a prompt and the beginning of a behavior occurrence. Another way to understand this concept is how long it takes a client to get started on a task once a prompt has been given. This measure might be useful for increasing productivity when a client currently requires many prompts and has a slow reaction time (i.e., long latency).
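
To make these four measures concrete, here is a minimal Python sketch. The timestamps, prompt time, and observation length are made up for illustration; it simply computes rate, duration, IRT, and latency from a recorded list of behavior occurrences.

```python
# Hypothetical session data: times are in seconds from the start of observation.
# Each behavior occurrence is recorded as a (start, end) pair.
occurrences = [(12.0, 15.5), (40.0, 41.0), (90.0, 96.0)]
prompt_time = 10.0          # when the instruction/prompt was delivered
observation_minutes = 5.0   # total observation time

# Rate: count of occurrences divided by observation time.
rate_per_minute = len(occurrences) / observation_minutes

# Duration: how long each occurrence lasted (end minus start).
durations = [end - start for start, end in occurrences]

# Interresponse time (IRT): end of one occurrence to the start of the next.
irts = [occurrences[i + 1][0] - occurrences[i][1]
        for i in range(len(occurrences) - 1)]

# Latency: prompt to the start of the first occurrence.
latency = occurrences[0][0] - prompt_time

print(f"rate = {rate_per_minute:.1f} per min")   # 0.6 per min
print(f"durations = {durations}")                # [3.5, 1.0, 6.0]
print(f"IRTs = {irts}")                          # [24.5, 49.0]
print(f"latency = {latency} s")                  # 2.0 s
```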

Item FK13: reflexive relations (US-UR)

Reflexive relations (US-UR) are different from, and simpler than, the reflexivity relation involved in teaching stimulus equivalence. The US-UR relation simply describes the relationship between unconditioned stimuli and our natural responses to them. Stimuli such as cold, heat, pain, and sweet or sour tastes elicit responses that are natural reflexes. No conditioning is needed for these responses to co-occur with these stimuli.

Item FK16: respondent-operant interactions

Respondent (reflexive) behavior is elicited by an antecedent stimulus; the environment controls the organism. Operant (learned) behavior is emitted or evoked because of an organism's history of consequences (reinforcement or punishment); the organism operates on its environment. It is often difficult to tell one category from the other in real life, and respondent and operant behavior interact both in the real world and in laboratory settings. One stimulus may elicit a response (respondent) that is then reinforced (operant). That same reinforcement happens to be paired with the person providing it, which, through that pairing (a respondent process), makes that person a stimulus signaling the availability of reinforcement. The difference between the two types of conditioning is that in respondent conditioning, a stimulus is paired with another stimulus often enough that a person comes to respond to the first stimulus in a predictable way. Operant conditioning instead involves the environmental change (reinforcement or punishment) that occurs following someone's action(s). Respondent involves an event that causes a reaction; operant involves an action that causes a consequence. Respondent behavior is controlled by what comes before it; operant behavior is controlled by what follows it.
Respondent example: Typically your child returns home around 3:15 to 3:30 pm each day, so the sound of the garage door opening does not alarm you (i.e., respondent pairing of time and noise = no alarm). If you hear the garage door at 1 pm, you are alarmed and go to investigate the sound (i.e., the time and noise do not fit the conditions that signal no need for alarm = investigate the sound).
Operant example: When your child comes home at the regular time, she puts her things away where they are supposed to go. You respond by thanking her, and sometimes giving her a hug or a kiss (reinforcement for putting things away). When your daughter throws her things on the floor, you respond by asking her not only to put her things away, but to complete a load of laundry as well (punishment for putting items on the floor).
Combined example: When your child arrives you are not alarmed, but anticipate an interaction (respondent). Your child puts her things away and you praise her (operant). You then sit at the table and talk about the day over a cookie and milk (respondent pairing of reinforcement with talking). Your daughter relates a story of being kind to a new student and you give her another cookie (operant reinforcement to increase future kindness).

Item FK14: respondent conditioning (CS-CR)

Respondent conditioning (CS-CR) is most associated with Pavlovian conditioning. It requires a stimulus-stimulus pairing (e.g., an unconditioned stimulus and a neutral stimulus). For example, suppose we are potty training a child to recognize when he or she needs to use the bathroom. In the initial training, we want the child to go to the bathroom every 30 minutes, so we pair the sound of the timer (NS) with going to the bathroom (US). Each time the timer sounds, we go to the bathroom. Soon the sound of the timer becomes conditioned to the behavior of going to the bathroom.

Item FK36: response generalization

Response generalization is when untrained responses that are functionally equivalent to the target response are emitted or evoked by the same stimulus; one stimulus evokes multiple responses. When your UPS worker drops off a package you may say, "Thank you," "Have a nice day," or "Yay, my new shoes!" even though your mother only trained you to say "thank you."

Item FK42: rule-governed behavior

Rule-governed behavior is just what it sounds like: the control of behavior by instructions, rules, or other prompts. Consistent differential reinforcement develops behavior that is controlled by these rules, instructions, or other prompts. This usually describes behavior that is controlled by delayed consequences, or even consequences that a person has not yet encountered. For example, a person may decide not to rob a bank because of the known consequences (e.g., you've seen news reports of people convicted of this crime, you've seen a video of a robber being shot, and you've already developed a sense that you don't want others to see you as a criminal). This concept describes people who act on the rule and its implied or stated consequences, even when such consequences are not likely to be contacted in the moment. An example is waiting for a light to turn green before proceeding, even when no one is around. The rule is that you stop at a red light and go when it's green. A rule-governed person will stop at a red light until it turns green, even when no immediate enforcement of the rule appears to be in place (no people around, no police officer behind you, etc.).

Item FK21: schedules of reinforcement and punishment

Simple schedules of reinforcement and punishment are based on either the passage of time or the number of responses required before a consequence is encountered. In a fixed interval schedule, a consequence follows the first response after a set amount of time has elapsed. In a fixed ratio schedule, a constant number of responses is required before reinforcement is available or punishment is administered. Schedules can also be variable, based on a changing amount of time (variable interval) or a changing (average) number of responses (variable ratio). On a cumulative record of responding, fixed schedules are associated with post-reinforcement pauses, where the line flattens out at the point of reinforcement.
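
The difference between fixed and variable ratio requirements can also be sketched in code. The following minimal Python example is only illustrative; the helper names (fixed_ratio, variable_ratio) and the schedule values are assumptions, not standard terminology. It shows a fixed-ratio schedule delivering reinforcement after a constant number of responses, and a variable-ratio schedule delivering it after a changing number that averages out to the stated value.

```python
import random

def fixed_ratio(n, responses):
    """FR-n: reinforce after every n-th response."""
    return [count % n == 0 for count in range(1, responses + 1)]

def variable_ratio(mean_n, responses, seed=0):
    """VR-mean_n: reinforce after a changing number of responses averaging mean_n."""
    rng = random.Random(seed)
    reinforced = []
    next_requirement = rng.randint(1, 2 * mean_n - 1)
    since_last = 0
    for _ in range(responses):
        since_last += 1
        if since_last >= next_requirement:
            reinforced.append(True)
            since_last = 0
            next_requirement = rng.randint(1, 2 * mean_n - 1)
        else:
            reinforced.append(False)
    return reinforced

# FR-5: every fifth response produces reinforcement (predictable, so a pause after
# reinforcement is typical). VR-5: reinforcement follows an unpredictable number of
# responses averaging five, which tends to produce steady, high response rates.
print(fixed_ratio(5, 15))
print(variable_ratio(5, 15))
```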

Item FK35: stimulus discrimination

Stimulus discrimination is the ability to tell the difference between (discriminate between) two or more stimuli. One learns this by receiving reinforcement for correctly discriminating and either extinction or punishment for failing to discriminate. Given two antecedent stimuli, responses are reinforced in the presence of one stimulus (the discriminative stimulus) and not in the presence of the other (the stimulus delta). Dogs and wolves share similar traits, but if asked, "Which animal is a dog?" one would be reinforced only for selecting the dog. For saying "wolf," you might receive silence or a repetition of the question (extinction). You might also receive a mild punishment such as, "No, that's not right." Also in this example, it would be helpful to point out what was wrong and how to correct the mistake, so that the client is better able to discriminate the next time.

Item FK24: stimulus control

Stimulus control is when a specific discriminative stimulus can, and will, reliably evoke or abate a particular behavior. Behavior is emitted in the presence of one stimulus (the discriminative stimulus) but not in the presence of another (the stimulus delta). One is more likely to cross the street when they see the greenish-white symbol of a person walking than when they see a red hand indicating "stop."

Item FK12: stimulus equivalence

Stimulus equivalence, or functional equivalence, refers to the ability of multiple stimuli to become functionally equivalent and evoke the same response. (In lay terms, the stimuli mean the same thing to the responder, and therefore cause him or her to respond in the same way.) A real apple, a picture of an apple, and the written word "apple" become functionally equivalent when three relations are established: reflexivity (a real apple is the same as another real apple, A=A), symmetry (if A=B, then B=A; a real apple is equivalent to a picture of an apple, and a picture of an apple means the same thing as a real apple), and transitivity (if a real apple is equivalent to a picture of an apple, A=B, and a picture of an apple is equivalent to the written word "apple," B=C, then a real apple is equivalent to the written word "apple," A=C). Stimulus equivalence is required for reading comprehension.
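
One way to see how the untrained A=C relation emerges is to list only the directly trained pairs and then apply reflexivity, symmetry, and transitivity mechanically. Here is a minimal Python sketch; the stimulus labels are just illustrative strings, not part of any standard procedure.

```python
# Trained (directly taught) relations: A = real apple, B = picture, C = written word.
trained = {("real apple", "picture of apple"),    # A = B
           ("picture of apple", "word 'apple'")}  # B = C

derived = set(trained)
derived |= {(x, x) for pair in trained for x in pair}   # reflexivity: A = A
derived |= {(b, a) for (a, b) in derived}               # symmetry: if A = B then B = A

# Transitivity: if A = B and B = C then A = C (repeat until no new pairs appear).
changed = True
while changed:
    new = {(a, d) for (a, b) in derived for (c, d) in derived if b == c}
    changed = not new.issubset(derived)
    derived |= new

# The untrained A = C relation emerges from the trained A = B and B = C relations.
print(("real apple", "word 'apple'") in derived)   # True
```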

Item FK37: stimulus generalization

Stimulus generalization is when a response evoked by an antecedent stimulus can also be evoked by similar antecedent stimuli, i.e., one response to multiple stimuli. Example: Amy's father has a beard; she calls him "Dada." Amy calls all men with beards (similar antecedent stimuli) "Dada."

Item FK02: Selectionism (phylogenic, ontogenic, cultural)

The idea of selectionism is that all forms of life (from single cells to complex cultures) evolve as a result of selection with respect to function. Just as species evolve, so does behavior. Phylogenic refers to natural selection in the history of the species, whereas ontogenic refers to selection by consequences during the lifetime of the individual organism. We as people are capable of learning a huge range of behaviors that involve response sequences. I like to think of the children's book If You Give a Pig a Pancake. One simple task (give a pancake) leads to another task (wanting syrup) and another (getting a bath because of being all sticky) and so on, which results in a response chain. Cultural selection refers to the environmental conditions that have reinforced or punished our behaviors. It includes not only a larger cultural identity such as nationality or ethnicity, but also the specific conditions encountered in an individual's life which shape his or her behavior (e.g., the family unit, school environment, etc.).

Item FK48: State the advantages and disadvantages of using continuous measurement procedures and discontinuous measurement procedures (e.g., partial and whole-interval recording, momentary time sampling)

The main advantage of using continuous measurement is that all instances of behavior can be recorded. If you are continuously observing a client, you are able to see each behavior occurrence; you won't miss any behavior. Conversely, the main disadvantage of using continuous measurement is the time that it takes to directly observe all client behavior. For most practitioners, it is impossible to observe constantly. This is where discontinuous measurements come in. The main advantage of discontinuous measurements is that a good estimate of the target behavior's occurrence can be obtained without long periods of observation. The main disadvantage of using discontinuous measurements (partial-interval recording, whole-interval recording, and momentary time sampling) is measurement artifacts; that is, the data are less reliable, given that not every behavioral occurrence is noticed.
Advantages, disadvantages, and appropriate uses of each method:
Whole interval:
● Tends to underestimate the "actual" occurrence of behavior.
● Appropriate for behaviors that are of longer duration and discrete.
● Also use for behaviors that you would like to see increase.
Partial interval:
● Tends to overestimate the "actual" occurrence of behavior.
● Appropriate for behaviors that are of shorter duration and for behaviors that may not be discrete.
● Use for behaviors that you would like to see decrease.
Momentary time sampling:
● May under- or overestimate "actual" behavior occurrence.
● Appropriate for persistent behaviors.
● Uses preset moments to record.
For all discontinuous measures, shortening the observation intervals tends to increase accuracy and validity.
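
A small simulation can make these over- and underestimation patterns concrete. The following Python sketch is illustrative only: the behavior stream, episode times, and 10-second intervals are made up. It scores the same one-minute record with continuous measurement and with the three discontinuous methods, then compares the estimates.

```python
# One illustrative minute of observation, split into six 10-second intervals.
# Each second is marked True if the behavior was occurring (the continuous record).
behavior = [False] * 60
for start, end in [(5, 9), (22, 24), (30, 48)]:   # three made-up behavior episodes
    for s in range(start, end):
        behavior[s] = True

interval_len = 10
intervals = [behavior[i:i + interval_len] for i in range(0, 60, interval_len)]

true_pct = 100 * sum(behavior) / len(behavior)                      # continuous measurement
whole = 100 * sum(all(iv) for iv in intervals) / len(intervals)     # whole interval: score only if behavior fills the interval
partial = 100 * sum(any(iv) for iv in intervals) / len(intervals)   # partial interval: score if behavior occurs at any point
momentary = 100 * sum(iv[-1] for iv in intervals) / len(intervals)  # momentary time sampling: check only the interval's last moment

print(true_pct, whole, partial, momentary)   # roughly 40.0, 16.7, 66.7, 16.7
# Whole interval underestimates, partial interval overestimates, and momentary
# time sampling can miss or catch episodes depending on when the moments fall.
```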

Item FK03: Determinism

This seems similar to the lawfulness of behavior. Determinism takes the lawfulness of behavior to the next logical level; that is, determinism is the idea that the "rules" of behavior include events occurring as a result of other events. Behavior is caused by other events. It doesn't just happen "for no reason." This is what I mean when I say that things happen for a reason.

Item FK40: matching law

The matching law states that the proportion of responses "matches" the relative reinforcement, or "pay off," available on concurrent schedules that operate independently of one another. It is easiest to understand the matching law in terms of a choice between two responses. If a child is offered a snack or some blocks, the number of times that each item/activity is chosen will add up to the total number of times the items are available to choose. That is, the more times a client chooses "blocks," the fewer times they are choosing "snack." If there are 100 total opportunities to choose, and we count 75 times that the client chose "blocks," that means the client chose "snack" 25 times. The total amount of "choosing" (75 + 25) equals the total number of reinforcement opportunities. Note: the previous example is a simplification of the matching law; the allocation of responses does not always exactly match reinforcement, but it can be fairly accurately predicted in most cases. Example: In practice (and on the test), the matching law can be useful to know. For instance, let's say that you have a client with an appropriate way to get attention who also has an inappropriate way to get attention. (I hope this sounds familiar.) The client will respond (i.e., "choose") which way to get attention based on the amount of reinforcement that is available for each choice (response). If you want to increase the amount of appropriate attention-seeking, you would make more reinforcement available for this behavior/response. If you want to decrease the inappropriate behavior, you would allow less reinforcement for it. (Hopefully this is also sounding familiar.) You can accomplish either goal by reducing the total reinforcement for inappropriate behavior and increasing it for appropriate behavior. Let's assume the inappropriate behavior will receive some reinforcement, say 2 reinforced responses per hour, that we can't extinguish. If we increase the total amount of reinforcement to 100 opportunities per hour, then these 2 responses will look relatively weak compared to the 98 appropriate responses. Another example can be found in Task E08.
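
The proportional form of the matching law is often written as B1/(B1 + B2) = R1/(R1 + R2), where B is the number of responses allocated to each alternative and R is the reinforcement obtained from it. The short Python sketch below reuses the hypothetical 98-versus-2 figures from the example above; the variable names are just for illustration.

```python
# Matching law, proportional form: B1 / (B1 + B2) = R1 / (R1 + R2),
# where B is responses allocated to each alternative and R is the
# reinforcement obtained from it.
R_appropriate, R_inappropriate = 98, 2   # hypothetical reinforcers per hour for each response class
total_responses = 100                    # total attention-seeking responses per hour

predicted_appropriate = total_responses * R_appropriate / (R_appropriate + R_inappropriate)
predicted_inappropriate = total_responses * R_inappropriate / (R_appropriate + R_inappropriate)

print(predicted_appropriate, predicted_inappropriate)   # 98.0 2.0
# The predicted allocation of responses matches the relative rate of reinforcement.
```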

Item FK46: Intraverbals

The most subtle and "highest level" verbal operant, intraverbals are verbal responses that are controlled by other verbal stimuli. They have no formal similarity or point-to-point correspondence with the stimulus. An example is the response "two bits" when someone (even yourself) says, "shave and a haircut." Other examples are responding "You're welcome" when someone says "Thanks," or "Lukes" when told, "Skywalker, Dukes of Hazzard, Perry, Wilson."

Item FK45: Tacts

These verbal operants are controlled by nonverbal stimuli. Tacts do not have point-to-point correspondence and are reinforced by generalized conditioned reinforcers; tacts take place in the presence of the nonverbal stimuli they name. An easy example is seeing a bicycle and saying, "bike." A more difficult example is an internal state of hunger, to which one says, "I'm hungry," or "tengo hambre." I say "more difficult" because this is an internal state that is not directly observable by others. Strictly speaking, tacts must occur in the presence of the nonverbal stimuli. However, in my humble opinion, this includes expressions based on remembering items or events, such as relating a recently viewed movie to a friend: "Did you see that scene where Yoda fights Count Dooku?"

Item FK28: transitive, reflexive and surrogate motivating operations

Transitive motivating operations (CMO-T): A stimulus that alters the value of other stimuli. Basically, anything that has been associated with a UMO may become a CMO-T. Example: If you feel lonely (deprived of social contact), anything that has been associated with obtaining social contact may become a CMO-T, such as your phone, downtown, your computer, the TV, the annoying neighbor, or your dancing shoes.
Reflexive motivating operations (CMO-R): A stimulus that has systematically preceded some worsening (or improvement) of the person's condition. This is often associated with an avoidance response and a "warning stimulus." Example: When Amy sends me a text message with a sad or angry emoji before she comes home, we often fight when we see each other. As a result, when I get a sad or angry emoji text, I often stop at the sports bar on my way home to delay the fight; that is, I see this warning stimulus (the text) and believe that a worsening of my condition is near. Acting to avoid the worsening (meeting and fighting) becomes more reinforcing. Note that in the above example, I am free to go to the sports bar at any time. It's not that this option is more available; it is more desirable when I receive a sad/angry text.
Surrogate motivating operations (CMO-S): These stimuli act in the place of unconditioned motivating operations; that is, the CMO-S temporarily alters behavior that has been reinforced in the presence of the UMO. They act as though an MO is in effect when it really isn't. Example: You are hungry every day when the whistle blows at noon, and you go to eat your lunch. On Saturday, when you aren't working, the noon whistle can still be heard from your house. Upon hearing the whistle you go to the refrigerator and eat a snack, even though you had a big breakfast. In the above example, you acted as though you were hungry in the presence of the whistle, because your state of food deprivation (a UMO) has been repeatedly paired with the presence of the whistle. It's not that the whistle made food more available; it has acted to make food more desirable. Again, note that these items may not be more available, but they become more important or desirable.

Item FK26: unconditioned motivating operations

Unconditioned motivating operations are unlearned value-altering effects of an antecedent stimulus. Motivation does not depend on one's learning history. Sleep deprivation increases the value of sleep as a reinforcer without the need for a learning history to make the effects of sleep valuable.

Item FK19: unconditioned punishment

Unconditioned punishment is unlearned. The first time we touch a hot stove we instinctively remove our finger (a reflexive action); this decreases one's likelihood of touching the stove in the future (and, because of this history of consequences, the stove can become a learned, or conditioned, punisher).

Item FK17: unconditioned reinforcement

Unconditioned reinforcement: These are primary reinforcers, like food, water, shelter, sleep, and sexual stimulation, things that are needed to survive and that require no learning history. Like any reinforcement, a change in the stimulus (e.g., abundance or scarcity of food) can increase the future frequency of a behavior. Other terms synonymous with unconditioned reinforcement are primary and unlearned. Example: When I am really cold, I seek out a coat, a blanket, or heat to find a way to become warm. Likewise, when we are hungry or thirsty, we look for things in our environment to eat and drink.

Item FK44: Mands

Under the control of motivating operations, mands are requests for specific reinforcement. These verbal operants are often one of the first communication tools that are targeted for intervention, as the mastery of mands tends to increase the reinforcement of communication in general. Once a person has learned that communicating requests can result in reinforcement, other requests become more likely as well. Manipulating motivating operations and the use of incidental teaching are often associated with mand training.

Item FK29: distinguish between the discriminative stimulus and the motivating operation

This is very often difficult to do without some serious thought. It's helpful to think of a discriminative stimulus as a signal that reinforcement is (or may be) available. In contrast, motivating operations increase or decrease the value of a reinforcer and the current frequency of behavior that has led to obtaining that reinforcer in the past. It is helpful to think of motivating operations as states of satiation or deprivation in which a person acts to obtain something because they want it more than they otherwise might, or wants something less because they've had enough. An example might help to illustrate this idea: I wake up in the morning after only 4 hours of sleep. I want more (establishing operation, because of the value-altering effect of sleep deprivation). I look at my emails and read that my first two appointments have been cancelled (discriminative stimulus that signals the availability of more sleep). I set my alarm for two more hours of sleep (establishing operation, because of the behavior-altering effect of sleep deprivation). In the last part of the example, note that I could set my alarm for two more hours at any time of day; I have the ability to do this whenever my phone is near. However, the current motivating operation in effect (not having slept enough) makes the reinforcer (more sleep) more valuable. The setting of an alarm is not more available; it's more desirable. Summary: To determine motivating operations, look for deprivation or satiation. Also, ask yourself if a behavior could be performed at any time, but is momentarily more desirable; this is an MO. In contrast, a discriminative stimulus is a signal that has, in the past, indicated that reinforcement is coming.

