Research Methods COMP

independent variable

"Predictor variable." It is the experimental manipulation or variable we are evaluating to see if it has an effect! Example: A health psychologist wants to learn more about how stress influences memory. In this example, the dependent variable might be test scores on a memory test and the independent variable might be exposure to a stressful task.

What is Reactivity of Experimental Arrangements (Threat to EXTERNAL validity)

*Reactivity to experimental arrangements* refers to being aware of participating in a study and responding differently as a result -Can be an unconscious process -Can this effect be reduced or mitigated? (Allowing persons to have extra exposure to being observed may reduce this "onlooker effect") *Reactive Assessment*: The extent to which subjects are aware that their *behavior* is being assessed; this awareness may influence how they respond.

What is random selection?

*•Random selection improves external validity of findings* Random selection means to *create your study sample randomly, by chance*. Random selection results in a *representative sample*; you can make generalizations and predictions about a population's behavior based on your sample. •When we conduct a study we don't want to just say that our findings are significant for the sample. We want to say that the findings apply to the population from which the sample is drawn •Random selection •Drawing from the total population of interest in a way that ensures that each member of the population has an equal chance of being drawn •If this can be done and the sample is relatively large, the potential for generality greatly improves •If one is shooting for good generalization, the sample needs to be a good representation of the population •Many things restrict this effort •A sample obtained at one point in time may not readily generalize to the population •*To make statements about a population*, careful sampling is required of different segments or subgroups of the population to reflect subject and demographic variables of interest •Such as geography, socioeconomic level, ethnicity, religion - what other demographics are often important? •While random selection is sometimes somewhat doable, it is often the exception •This may not be the end of the world, but we do need to be careful to call it what it is
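The "equal chance of being drawn" idea can be sketched in a few lines of Python; the roster names, sample size, and seed below are hypothetical, just for illustration:

```python
import random

def draw_random_sample(population, n, seed=None):
    """Simple random sampling without replacement:
    every member has an equal chance of being drawn."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# Hypothetical sampling frame of 1,000 population members
population = [f"person_{i}" for i in range(1000)]
sample = draw_random_sample(population, 50, seed=42)
print(len(sample), len(set(sample)))  # 50 unique members, no duplicates
```

The seed only makes the example repeatable; in a real study you would let the generator vary. Note this sketch assumes you can enumerate the whole population - which, as the card says, is often the hard part.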

What is the Generality across measures, setting, and Time threat to EXTERNAL validity?

-Catch-all category -Measures: maybe depression ratings decrease on the scale, but does that carry over to how they are doing in everyday life? -Setting and time: does the lab environment make a difference?

Scale construction

-KISS (keep it simple, stupid) - stay focused on the criteria -Publishing companies spend lots of $$ developing these scales --It does not mean they are the cat's meow --Look at reviews before spending your $$ --Talk to others --Don't be afraid to contact the publisher --Often they have representatives who can provide you with info -Most scales have at least 10 items -But some scales used for tracking symptoms may be shorter (for convenience) The text goes into some detail on scale development (pp. 373-376) See Figure 13.3 for a nice example of a child's pain scale - note how pictures are used

What is Narrow Stimulus Sampling (Threats to External Validity)?

-Narrow Stimulus Sampling •Stimulus characteristics --Refer to features associated with the manipulation --Experimenters, setting, interviewers, etc. may restrict generality of the findings --The most common occurrence in psychological research pertains to restricted features of the experimenters or materials used. Example: a single vignette may have unique characteristics - or maybe there is something unique about the experimenter serving as the only clinician or interviewer? Could this impact results? Or a hospital population only! •We would like to say that the experimental manipulation did not depend on special features •Because those special features can be seen as threats to external validity

What are the Major Threats to External Validity?

-Parsimony (avoid complexities if reasonable) -Avoid complexities unless needed to support explanation •Sample Characteristics •Results can be extended to persons whose characteristics differ from those in the study •Similar or dissimilar characteristics •Human and nonhuman animals •Animal research has produced major advances •Think learning theory, brain functions, medicine, etc.

What is the benefit of a good design?

-Provides a clear road map to the hypotheses -Makes rival hypotheses less plausible

Over sampling

-Sometimes a small portion of a population may be important to you or your practice -May need to oversample to ensure that you reach those folks -Risks of identifying clients that represent a small group -If your scale includes demographic info on age, gender, and education -You might have one 55-year-old male with a master's degree -If that 55-year-old took this course he would know that if he answers this survey his anonymity might be compromised. BE CAREFUL -Sampling in clinic settings is not a perfect science -Be mindful of your goals and adjust your sample accordingly

What is construct validity?

-What specific aspect of the manipulation was the cause? -What is the conceptual basis underlying the effect?

Likert scales are

-Write several declarative sentences that address the topic -These are statements that people can react to (strongly agree, agree, etc.) -Example: "I love learning about research methods" -Each item contains only one central idea -But even this item could be broken into different parts of the research methods experience -Avoid "double-barreled" questions that contain more than one idea. Example: "I love research methods and online learning" - 2 ideas are a "no-no" -Jargon and psychobabble --Use a vocabulary that your target audience will readily understand --Use your word software to estimate the reading grade level -Phrases that might have more than one meaning --You may need to pilot the items to discover that some items are confusing to others

Key requirements of Single-Case Experimental Designs

1. *Ongoing assessment*: •Multiple observations over time while intervention is provided (e.g., direct observation continues throughout the study) --Purpose is to provide information that is required to complete the intervention phase and evaluate the data 2. *Baseline assessment*: Assessment is obtained over a period of time prior to the implementation of the intervention (called the baseline phase) •Purpose is to assess performance and predict expected performance if intervention is not provided 3. *Stability of performance*: Little variability in performance over time. If the target behavior tends to be stable over time, then the study allows projections of performance into the immediate future and evaluation of the subsequent intervention (one hopes that the "trend line" moves in the predicted direction)

What are the 5 types of Validity for Qualitative research?

1. Descriptive Validity 2. Interpretative Validity 3. Theoretical Validity 4. Internal Validity 5. External Validity

What are the 8 threats to internal validity?

1. History 2. Maturation 3. Testing 4. Instrumentation 5. Statistical Regression 6. Selection Biases 7. Attrition 8. Diffusion of treatment

What are the categories of validity?

1. Internal validity 2. External validity 3. Construct validity 4. Data-evaluation validity

How do threats to internal validity emerge?

1. Poorly designed study 2. Well-designed study but sloppily conducted (sloppy procedures) - this creates a threat to internal validity 3. Well-designed study with influences hard to control, like subjects dropping out, or diffusion of treatment (if the manipulation is not provided correctly).

What are the 3 broad influences on Qualitative research?

1. Quantitative efforts do not really focus on the richness of the experience --But rather provide gradations of characteristics that can be manipulated quantitatively --E.g.: The descriptive stats from a set of data 2. In social sciences (e.g., sociology & anthropology) -Tradition of the researcher elaborating on the subject matter in great detail - maybe adding a little personality or personal interpretation to the investigation -This type of work is seen as informative and helpful in generating ideas and perspective 3. Dissatisfaction with quantitative methods •Focus on groups and mean scores of groups •Focus on participants as the objects of a study •Variables separated and seen in isolation •Reducing experience to quantitative data •For those with qualitative perspective, quantitative methods reflect a paradigm that neglects human experience and attention to how individuals interpret and construct their world

Validity from a Qualitative Research Perspective

1. Triangulation 2. Confirmability 3. Credibility 4. Transferability

What are the 3 types of data Evaluation?

1. •Visual inspection -Simply a visual inspection of graphs used to determine the consistency of the intervention effects across phases 2. Criteria used for visual inspection •Change in mean scores •Change in the trend line - compare baseline to intervention trends •Shift in level - look for a break between one phase and another and determine if change is evident 3. •Statistical evaluation •Stats seem to improve the perception that the data are relevant (may or may not be true) •Time-series analysis is piquing some interest •Sometimes the change scores can be compared to a norm - if the change is substantial it can be "significant" •Statistical approaches to time-series are still emerging

What is a cohort?

A particular group of people that share some characteristic. Examples: •Left-handed •Birth cohort

What is meant by methodology as a refining tool?

A way of thinking that relates directly to how one thinks about the study

What is Confirmability in Qual research?

Another form of validity; it is the likelihood that an independent reviewer would complete an audit of the information and arrive at the same conclusions.

What is credibility in qual research?

Another way to improve validity; the believability of the findings. Do others who are familiar with the researcher's conclusions agree that the findings make sense?

What is a Testing threat?

Any change that is a function of repeated measures such as learning or carryover effects Threat to INTERNAL validity.

What is the threat of Testing to Internal validity?

Any change that is a function of repeated measures, such as learning or carryover effects

What is a Maturation threat to Internal validity?

Any change that may result from processes within the participant Example: Growing older, healthier, smarter, bored, etc.

What is Maturation threat?

Any change that may result from processes within the participant. Example: growing older, healthier, smarter, bored, etc. Threat to INTERNAL validity

What is Historical Threat to internal validity?

Any event that might influence the findings. Examples include family crises, a change in job, a teacher/spouse, etc., power blackouts, or any historical event.

What is History threat?

Any event that might influence the findings. Examples include family crises, a change in job, a teacher/spouse, etc., power blackouts, or any other historical event (*COVID-19 and protests!*) Threat to INTERNAL validity

What is transferability in Qual research?

Are the findings generalizable?

Likert Scales-Response Choices

Arrange as ordinal data (low to high, agree to disagree, etc.) -How many choices? -Odd number if you want a place to fall in the middle -Even number if you are pushing people to one end or the other. Example: A scale may measure extroversion-introversion -High scores suggest extroversion -Low scores suggest introversion -If you want to use this scale to identify those with extroversion and introversion so you can compare them on some other scale, you don't want folks to fall in the middle - you want to at least try to identify them as one or the other -If pushing people into one category or the other is not a concern, then a mid-point is not so bad

Demographics

The author describes "factual" questions and then lists "gender" as male or female - note how we are evolving. If you want authentic and honest responses, be respectful to your clients. Race and ethnic categories are sloppy -First, race and ethnic origin are different -Second, Americans are becoming increasingly multiracial -As we become more sophisticated, it becomes more complicated - good luck! Personally, unless there is something to be gained, why ask?? Online? -Personally, I like them; they can be set up to ensure the anonymity of participants (unless you ask for ID info) -Can be distributed broadly, and most online programs (SurveyMonkey) organize your data collection

Criterion related validity

Can the results of the measure be connected to an outcome?

What is the threat of Instrumentation to internal validity?

Change in the instrument or assessment procedure over time. Example would be those who rate the participants and their change in scoring behavior.

What is an instrumentation threat?

Change in the instrument or assessment procedure over time. Example: those who rate the participants and their change in scoring behavior.

Special considerations for special populations on Survey Data

Children -Younger children might respond better to pictures -Use fewer responses (but this can negatively impact reliability) Dementia, or those otherwise unable to respond -Can others ethically be involved? -Who gives consent? It can be complicated Reading level -Use your word processor to estimate reading level -For MS Word: select File - select Options - select Proofing -Under "When correcting Spelling & Grammar," check the box for readability statistics
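Word's readability statistics rest on formulas such as the Flesch-Kincaid grade level, which you can approximate yourself. A rough sketch in Python - the syllable counter is a crude vowel-group heuristic, so treat the result as an estimate only:

```python
import re

def estimate_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (min 1 per word)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Approximate U.S. reading grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(estimate_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(flesch_kincaid_grade("I feel sad most days. I do not sleep well."))
```

Short, one-syllable items score at a very low (even negative) grade level, while jargon-heavy sentences score far higher - the same signal Word's proofing tool gives you.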

What is Data-evaluation validity?

Could data or methods mislead or obscure the effect?

Concurrent validity

Does it distinguish between what it should theoretically be able to distinguish between?

Face validity

Does the measure LOOK like it is measuring what it is supposed to measure?

Validity defined

Does the measure do what it says it does?

Predictive validity?

Does it predict what it should theoretically be able to predict?

What are Confounds?

Features of a study that might interfere with the accurate interpretation of the results •If a study is confounded, then another variable changed along with, or was embedded in, the experimental manipulation •Could be wholly, or in part, responsible for the results •It's some other component of the study that is at least partly responsible •*Text example: moderate amounts of wine* •1-2 drinks are related to positive health benefits - but is it the wine? ---Maybe people who are moderate drinkers possess other characteristics -----Mellow, less likely to smoke, less obesity, diet, SES, etc. -----Findings suggest that these folks drink a bit less, have lower obesity, higher SES •Any of which might influence the findings •So one wants to be careful with such interpretations. Even if these issues are ruled out - which wines? Can ETOH be removed and the benefit maintained?

Descriptive validity is what?

For qualitative methods; the extent to which the reported account is factually accurate. The account may reflect descriptions of events, objects, behaviors, people, settings, times, places, etc.

External Threats to Validity: *Sampling*

Have a rationale for why the sample provided is a good test of the hypothesis, address diversity issues, and make sure theory is relevant to selection (some characteristics need to be prioritized). A sample of convenience: often used in research (think college students or people drawn from a clinic waiting room). The main question: is the sample appropriate? Is there a quality about the sample that may influence the findings?

How does one proceed with Sample selection?

Have a rationale for why the sample provides a good test of the hypotheses (and diversity issues are addressed) -Theory is relevant to selection - it may suggest characteristics that need to be prioritized (the investigator should be explicit about why a particular sample was selected)

What are the instrument statistic issues with Survey research/likert scales?

Homogeneity of variance across items -Items have a similar dispersion of high and low scores -If item dispersion is similar across items, then one can have more confidence that adding them together will be OK --If certain items are really skewed, it can distort the mean -Content validity of the items --All items are field tested and vetted by those familiar with the dimensions being assessed -Cronbach's Alpha --Measures internal consistency of items --If items are all measuring the same construct, then they should be correlated --SPSS can run Cronbach's Alpha and report a consistency measure for each item that is removed --Sometimes removing an item improves consistency... sometimes not -Test-Retest --Measures consistency across time (measures stability of the instrument)
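Cronbach's alpha is easy to compute outside SPSS. A minimal sketch using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), with made-up 5-point Likert responses:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha. items: list of k lists, each holding one
    item's scores across the same respondents."""
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)          # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]          # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical responses: 3 items, 6 respondents
item1 = [5, 4, 4, 2, 3, 1]
item2 = [5, 5, 4, 2, 2, 1]
item3 = [4, 4, 5, 1, 3, 2]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # 0.94 - highly consistent
```

The items here were invented to correlate strongly, so alpha comes out high; items measuring different constructs would pull it down, which is exactly why one-dimensional scales matter before summing to a total score.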

Ethic issues in research

IRB Informed Consent to Research Informed Consent for Recording Voices and images in research Reporting research results Debriefing if deception is done

What is theoretical validity?

If the "hows" and "whys" are addressed, how well does the explanation fit the data? Theory is expected to be at a higher level of inference than interpretative validity and conveys possible reasons or underpinnings of the phenomenon.

Likert Scale Statistics

If you want to tally the items on a scale to obtain a total score: -Make sure the items are only measuring one dimension -You cannot use a total score for items that are notably different --Example: when measuring client satisfaction at a clinic you might be asking about different issues/dimensions -Satisfaction with counselor -Satisfaction with clerical/billing support -Parking -Waiting room environment -Etc. -These are all different dimensions and should not be combined for a total score --What happens if a person is happy with the counselor but really unhappy with parking?

Control group

In an *experiment* the group of individuals who do not receive the treatment or intervention is called the control group. A true control group only exists IF *RANDOM ASSIGNMENT* was done properly. If no random assignment was done, then the group is called a *comparison group*

Comparison Group

In non-experimental research design, the group of individuals not receiving the treatment or intervention, or receiving an alternative tx or intervention. Comparison groups are any group included in a research design beyond the primary group or groups of interest. *Comparison groups are a broader notion than control groups.* Comparison groups may help clean up ambiguity and strengthen your study.

If participants know they are in the control condition, this understanding may change their behaviors and be a threat to what validity?

Internal validity

What validity is threatened for the No-intervention group?

Internal validity, as they may be treated differently "those intervention folks get all the perks."

Convergent validity

Is this measure consistent with other measures of the same/similar construct?

Discriminant (divergent) validity

Is this measure different from measures of other constructs?

What is Triangulation in qualitative research?

It helps improve validity: multiple sources, procedures, and/or perspectives converge to support the conclusions

What is the Plausible rival hypothesis and competing interpretations?

They are alternative explanations for the study findings

What is dependent variable?

It is the OUTCOME or measure we are examining to assess the impact or effects of the IV - "it depends on the IV." Example: A psychologist is interested in studying how a therapeutic technique influences the symptoms of psychological disorders. In this case, the dependent variable might be defined as the severity of the symptoms a patient is experiencing, while the independent variable would be the use of this specific therapy method.

How do you manage threats to internal validity?

Keep it simple; think parsimony

Questionnaire Formats - Likert Like Scales

Likert scales are quite common --Named for Rensis Likert who developed it for his dissertation at Columbia in 1932 --Generally used to measure attitudes and feelings Designing Likert Scales -Do your homework and review the relevant scholarship on your topic --You wouldn't develop a scale to assess depression without first studying depression symptoms --And how to describe the symptoms to a lay person Develop justifiable goals and objectives

What is Attrition threat?

Loss of participants that leads to selection bias. Threat to internal validity

What is Attrition threat to internal validity?

Loss of participants that leads to selection bias. Attrition can have other effects too.

No treatment control group (Control group used in intervention studies)

No intervention is provided - pre & post measures are obtained over the same interval as the intervention group(s) •Performance of no-treatment control groups can change over time as a result of history, maturation, testing, and statistical regression •So this group assesses the base rate of change for clients who did not receive treatment •Ethical concerns arise when participants are presenting and requesting treatment - this is typically resolved by advising participants that if they participate they may be assigned to the treatment or control conditions (not a great solution)

No-contact control group

No intervention is provided, and this group is not aware that they are participating in the study. They have no contact with the researchers (this could require deception, which is seriously frowned upon!)

What are the Goals and Objectives for Clinical Questionnaires?

Obtaining meaningful data requires a lot of work -Much research -Lots of steps - piloting items, etc. First, a clear and specific objective is needed. It is easy to get pulled off the main objective -Younger researchers get curious: "a few more items won't hurt," or "gee, I wonder what folks think about???" -Each item needs to be justified -Linked to the expected outcome. Plan to write several items for each of the instrument's primary objectives -Editing and field testing will likely reduce that number -Use enough items to "get at" each objective - but no more

How do you ensure quality of Survey research?

Pre-pilot the items -Work with a small group of uninvolved people -Similar reading level, education level, backgrounds -How do they interpret the items? -How would they respond? -Any confusion about the items? Be careful of response-process issues --Where the item interpretation leads to invalid responses --Example: falsely assuming that introverts don't like parties. Pilot the instrument -Use an independent sample that represents those you hope to survey -Make sure it's doing what you want it to do -Be careful with IRB issues

What is Statistical Regression threat?

Regression to the mean

Wait list control group

Same as the no-treatment group, except that after the intervention is complete, the wait-list group receives the intervention. The wait-list group may also receive testing after the intervention (same as a no-treatment control, but the wait-list control group is provided the treatment after the delay).

Common Scales to Measure

Scale to measure agreement -Absolutely agree - Agree - Undecided - Disagree - Strenuously disagree -Strongly agree - Agree - Undecided - Disagree - Strongly disagree Scale to measure frequency -Constantly - Frequently - Occasionally - Very infrequently - Never -You can assign behaviors to describe the choice (i.e., Constantly = every day, Frequently = more days than not, etc.) Scale to measure importance --Critically important - Important - Moderately important - A low priority - Irrelevant The text reviews a few more ideas for scales. There are many ideas and variations

What is Selection Bias threat to Internal validity?

Systematic differences between participant groups before the experiment. Were the differences already present before the experiment?

What is Selection Biases threat?

Systematic differences between participant groups before the experiment. Were the differences already present before the experiment? Threat to INTERNAL validity.

What is Interpretative validity in Qualitive research?

The extent to which the meaning of what has been described is accurately represented. Is the material adequately understood? The interpretations represent reported experiences.

Content validity

The degree to which the content of the instrument properly measures the area of interest

What is EXTERNAL validity?

The extent to which the results of a study can be generalized beyond the conditions of the study to other populations, settings, and circumstances. *External validity covers all dimensions of generality of interest.* Characteristics of the study that may limit generality of the results are referred to as threats to external validity.

What is the file drawer problem?

The idea that reviews and meta-analyses of published literature might overestimate the support for a theory, because studies finding null effects are less likely to be published than studies finding significant results, and are thus less likely to be included in such reviews.
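The overestimation can be demonstrated with a small simulation. Assuming (for illustration only) a true effect of zero and a conventional |z| > 1.96 significance filter deciding what gets published:

```python
import random
from statistics import mean

rng = random.Random(1)

# Each simulated study yields a z-statistic; the true effect is zero,
# so z is just noise around 0. Only "significant" results get published.
studies = [rng.gauss(0, 1) for _ in range(100_000)]
published = [z for z in studies if abs(z) > 1.96]

print(round(mean(abs(z) for z in studies), 2))    # all studies: small average |effect|
print(round(mean(abs(z) for z in published), 2))  # published only: inflated average
```

Roughly 5% of the null studies clear the filter by chance, and every one of them reports an inflated effect - a review reading only the published subset would conclude there is a real effect where none exists.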

Facts about research bias

The main point to remember with bias is that, in many disciplines, it is unavoidable. Any experimental design process involves understanding the inherent biases and minimizing their effects. In quantitative research, the researcher tries to eliminate bias completely, whereas in qualitative research, it is all about understanding that it will happen.

Diffusion of Treatment threat?

The treatment or intervention occurs at times when it is not intended or to participants for whom the intervention is not intended. There are also "reactions to controls," where participants receive services that change their responses to the experiment. For example: someone getting counseling and not telling anyone that this person has also started a trial of meds! Threat to INTERNAL validity.

What is Diffusion of treatment a threat Internal Validity?

The treatment or intervention occurs at times when it is not intended or to participants for whom the intervention is not intended. -There is also "reaction to controls," where participants receive services that change their responses to the experiment. Example: someone getting counseling and not telling anyone that this person also had started a trial of meds.

What is statistical regression threat to internal validity?

This is simply regression to the mean: extreme scores (outliers) tend to revert back toward the mean on remeasurement
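A quick simulation shows the effect with no intervention at all: select the most extreme scorers on one noisy test, and their retest mean drifts back toward the population mean. All the numbers (mean 100, SD 10, top 5% cutoff) are hypothetical:

```python
import random
from statistics import mean

rng = random.Random(7)

# Observed score = stable "true" ability + random measurement noise
true_scores = [rng.gauss(100, 10) for _ in range(10_000)]
test1 = [t + rng.gauss(0, 10) for t in true_scores]
test2 = [t + rng.gauss(0, 10) for t in true_scores]

# Select the people with extreme (top 5%) scores on test 1
cutoff = sorted(test1)[int(0.95 * len(test1))]
extreme = [i for i, s in enumerate(test1) if s >= cutoff]

m1 = mean(test1[i] for i in extreme)
m2 = mean(test2[i] for i in extreme)
print(round(m1, 1), round(m2, 1))  # retest mean falls back toward 100
```

This is exactly why a pre/post study that recruits only extreme scorers can show "improvement" that is pure regression, with no treatment effect at all.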

T/F internal validity has to do with variables (IV/DV?)

True

T/F •A factorial design is not a single design but a family of designs that vary in the number and types of variables and the number of levels within each variable.

True

T/F all threats cannot be anticipated for Construct validity?

True; Construct validity is about interpretation - and you are just hypothesizing before you conduct the study "stuff happens" that shapes your interpretations

Likert Scales - Response Choices (how many choices?)

Try 4 or 5. Example: -5 choices: Strongly agree, Agree, Neutral, Disagree, Strongly disagree -4 choices: Omit the "neutral" response. 7 choices can be used -Once you get beyond 4 or 5, it's difficult to accurately describe the level of agreement -Super duper agree, etc. It can be done, but the wording needs to reflect an attempt at equal intervals -Even though we know that equal intervals with such items is a bit off base

What is internal validity?

Was the research done "right?" The extent to which a study can establish true cause-and-effect relationship between an outcome and a treatment.

What does WEIRD stand for (Threat to EXTERNAL validity)

Western, Educated, Industrialized, Rich, Democratic •What we observe may not be a core area common to all others, but cultural.

Are college students, or WEIRD, a threat to internal validity?

Yes, these are a threat to external validity (how generalizable are these groups to the regular population?).

What is internal validity again??

the degree to which data in a study reflect a true cause-effect relationship. What variables other than the IV could account for the effects on the DV?

Construct validity according to Reed

the degree to which the instrument measures the domain, trait or characteristic of interest •Refers to the link between the concept behind the measure and research that attests to the utility of the construct in explaining the findings

What is external validity?

the degree to which the investigator can extend or generalize a study's results to other subjects and situations. Can you generalize this data to the regular population?

Checklist Design?

Items should: -Reflect easily observable behaviors (complicated behaviors complicate rating) -Sequence in any natural order that would likely be followed --Rating behaviors: least problematic to most problematic, etc. -Contain unambiguous language (clear and concise) -Contain a spot for the rater to clearly rate their assessment (no fuzziness) -Be written in a consistent voice (i.e., first person, third person, etc.) -Each contain only one issue to be rated -Only appear once - avoid multiple ratings of the same issue unless justified

Thurstone's Hierarchical Scale

One of the older measurement scales -Used to measure a single dimension of affect, attitude, or opinion. How constructed: -Develop many items that assess different levels of response to the attitude -Gradually winnow down to about 10 that reflect a gradual increase in strength for the attitude -Goal is to have items represent an "equal-appearing" interval scale --Meaning that the distance between the statements reflects what appears to be equal intervals -Developing a Thurstone Hierarchical Scale is not easy -And the issue of equal intervals can be easily challenged

Structured Interviews and Decision Trees

Some research supports the use of these techniques -May provide a more accurate diagnosis. Problem: -Most clients come to a counselor to talk -Want to tell their story -Expect to tell their story -Often benefit from telling their story. Semi-structured interviews -Seem like a good compromise -More open-ended, but can be more structured when a rule-out is needed for a specific diagnosis -I still use: S - I - G - E - C - A - P - S (not really DSM, but I find it helpful)

What is a factorial design?

•Factorial designs allow the simultaneous investigation of two or more variables (called factors) in a single experiment. Within each variable, two or more levels or conditions are administered. •Text example of two variables: 1. Type of coping strategy 2. Type of clinical problem. Each variable has two levels (regulation or relaxation for the coping variable, and depression or OCD for the clinical problem variable). This is a 2 x 2 design (2 variables, each with 2 levels); it forms 4 groups that represent the different combinations possible. The data obtained will identify if the coping strategies differ from each other on some measure of stress, if the two diagnostic groups differ, and if the effects of coping vary as a function of (are moderated by) diagnostic group
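The 2 x 2 logic (cell means, main effects, interaction) can be sketched with made-up stress scores; all group labels and numbers below are hypothetical:

```python
from statistics import mean

# Hypothetical stress scores for a 2 x 2 factorial design:
# factor A = coping strategy (regulation / relaxation)
# factor B = clinical problem (depression / OCD)
data = {
    ("regulation", "depression"): [12, 14, 11, 13],
    ("regulation", "ocd"):        [15, 16, 14, 15],
    ("relaxation", "depression"): [9, 10, 8, 9],
    ("relaxation", "ocd"):        [14, 15, 13, 14],
}

cell_means = {cell: mean(scores) for cell, scores in data.items()}

# Main effect of coping: average each strategy's cells over the clinical-problem factor
regulation = mean([cell_means[("regulation", "depression")], cell_means[("regulation", "ocd")]])
relaxation = mean([cell_means[("relaxation", "depression")], cell_means[("relaxation", "ocd")]])

# Interaction: does the coping effect differ across diagnostic groups?
effect_in_depression = cell_means[("regulation", "depression")] - cell_means[("relaxation", "depression")]
effect_in_ocd = cell_means[("regulation", "ocd")] - cell_means[("relaxation", "ocd")]
print(regulation - relaxation)              # overall coping effect
print(effect_in_depression, effect_in_ocd)  # unequal simple effects -> interaction
```

In these invented data the coping effect is larger for the depression group than the OCD group, which is the "moderated by diagnostic group" pattern the card describes; a real analysis would test these differences with a two-way ANOVA.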

What is Consensual Qualitative Research?

•A semi-structured approach to interviewing -The structure makes this technique a bit more quantitative than other qualitative approaches •Sometimes described as "collaborative qualitative research" •Involves collaboration among several researchers -The idea is that multiple researchers reduce the systematic error that may occur when relying on one individual -Also reduces the potential for "group think" bias by adding auditors who check the work of the primary researchers -Ensures a higher level of transparency in the research and interpretations •CQR Methods -Develop semi-structured, open-ended questions -All participants answer the same questions using a standard protocol -Central to the method is the building of consensus about the interpretations being made -Team members may represent researchers with diverse backgrounds (to gain different perspectives)

What is the focus of Case studies?

•Behaviorally focused (think of observing a difficult child in a classroom) •Remember to gain an understanding of the person's contextual issues (culture, circumstances, etc.) •Time sampling - involves an observational checklist and a time counter •Interval and frequency sampling - observe during specified time frames •Event sampling - periodic observations during a limited time frame (i.e., an event) •Data recording or time series - observe snapshots of behavior over time

Operations, Constructs, & Procedures

•Developing hypotheses is a step toward being more concrete about how the idea will fit into an investigation •The hypotheses will include constructs, and these also need to be made more concrete •Constructs/concepts need to be operationalized •Operational definitions refer to defining a concept on the basis of the specific procedures and methods to be used in the study •Often disorders are operationalized via the DSM - but not always - operational definitions may vary according to the study/procedure and the instruments used •Sometimes the instrument used to measure the construct will provide a definition - this may or may not be sufficient •If we operationalize a disorder based on a measurement tool, and then someone uses another measurement tool, the operational definition might change •So, selecting an appropriate measurement tool can be a very important consideration •Also think about cost, reliability, validity, ease of use, etc. •Science requires that we provide definitions of our constructs, how they will be measured, cut scores (if applied), etc. •Remember that replication is vital to science. How will others replicate your study unless you are specific? •Procedures need to be painstakingly mapped out and organized! •This includes accessing participants - informed consent, etc.

What are direct observations?

•Direct measures of overt behavior •Interpersonal communication (marital, organizational, etc.) •Sexual dysfunction •Social/dating skills •Enuresis/encopresis •Overactivity/attention/impulsivity •Other classroom behaviors •Tics •Stuttering •Insomnia •Verbalizations of hallucinations and delusions •While self-report measures can be used for many of the above behaviors, we know that such measures are not always accurate

What is the Ethnographic Model?

•Focus is on a total culture-sharing group -Acquiring an understanding of the culture - norms, etc. •Normally involves a group of people (20 or more - perhaps a community) •Researcher becomes a part of the community/culture - participates in community activities, etc. •The "realistic approach" refers to joining the group - blending in with the group •Getting in can be tricky -Need to build a trusting relationship with the community -Potential for ethical issues, e.g., obtaining informed consent -Permission is needed - it is not appropriate to manipulate in order to gain access -Approach the leadership of the community - group meeting, etc.

Ethnographic Research

•Goal is to develop a deep understanding of the culture of the identified group -While developing the research question it is important that the researcher consider her/his possible biases •These are noted for inclusion in the research report •Such studies require the researcher to become immersed in the history, religion, rituals, language, norms, etc. of the group to be studied -This could mean having the researcher acquire new skills -Speak the local dialect or learn a new language •Note taking -Recorded by hand in the field -Start with a clear description of the setting - describe the participants and their relationships -May contain 1:1 interviews or group meetings -Includes anecdotal accounts, observations, discourse among participants, beliefs, etc. -Hunches from the observer about the meaning or significance of various behaviors observed

More on qualitative

•Importance of methods! -A detailed and often complex description of the methods is needed -The approaches used must be thoroughly justified -Explaining the steps used to code the data is important for readers to grasp the techniques used •Researchers must be mindful of their own biases and report how those biases might have interacted with the data analysis •Participants -Description of participants requires attention to detail -Justify the size of the sample •Data Analysis -Use a step-by-step description so a reader can follow the logic of the analysis -The text recommends using flow charts and other visual devices to help explain the analysis

Quality of Qualitative Research

•Like all research, there will be some variability in quality •Credibility to readers •Does the reader find the study credible? -Based on the topic, the quality of the paper, and the researcher's credentials or reputation •Triangulation is apparent •Rigor is noted in the time invested and the attention given to details •Transferability •Some refer to this as "external validity" •The study's findings can be replicated by others working in different settings •To replicate a study, one needs a well-written and detailed description of the researcher's methods

Narrative Research Process cont

•Narrative Research Processes •Interpretative analysis is utilized to identify important themes that emerge from the content •Examines information for factors that enabled, and might have constrained, the narrator's information •Steps recommended for interpretative narrative analysis •Verify that this approach is the best for addressing the proposed research question •Identify a few select individuals with a good memory for the event who seem to have good stories about their lived experience with the event •Look for and ask about diaries, correspondence, newspaper articles, old pictures, etc. that can be used with the individual to help that person recall and accurately discuss the event •Place the narrator and the story within the proper social, cultural, and historical context and story line •Organize the narrator's stories into a framework that allows for a beginning, a middle, and an end to the story •The beginning of the story sets the context •The middle identifies the problem and the main character(s) •Work with the narrator to ensure accuracy (the validity of the story and its interpretation)

pretest-posttest control group design

•Pretest-posttest with control involves at least two groups. One group receives the manipulation and the other does not •Essential feature: participants are tested before and after the manipulation •The effect of the manipulation is measured in the amount of change from pre- to post-assessment
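The change-from-pre-to-post logic can be sketched with made-up scores; the data below are purely illustrative:

```python
import statistics

# Hypothetical memory-test scores for one group, before and after the manipulation
pre  = [12, 15, 11, 14, 13, 16, 12, 15]
post = [16, 18, 14, 17, 15, 19, 15, 18]

# The effect is indexed by each participant's pre-to-post change score
change = [after - before for before, after in zip(pre, post)]
mean_change = statistics.mean(change)  # average gain across participants
```

In a full design the same change scores would be computed for the control group, and the two groups' mean changes compared.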

What is Narrative Inquiry?

•Research based on the analysis of narrative material -Written - spoken - drawn -Considered interdisciplinary -Presents one's life story or the lived experiences of multiple individuals -Most frequently they are stories that provide an account of events or actions •Story telling (oral history) is an ancient form of discourse -Found in every culture •Two primary forms of narrative inquiry •Analysis of narratives -Identify and describe themes common across similar stories from individuals •Narrative analysis -The researcher collects descriptions of events and subsequently organizes them into a storyline

What is Alpha?

•The probability of rejecting the null hypothesis when the null hypothesis is in fact true (Type I Error) •Adjust the alpha level to reduce Type I Error (example: go from .05 to .01, etc.)
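A small simulation can make alpha concrete: draw two samples from the same population many times, and roughly 5% of the comparisons come out "significant" at alpha = .05 purely by chance. This sketch uses a simple two-tailed z-test with the population SD assumed known (an assumption made only for brevity):

```python
import random
import statistics

def false_alarm(rng, n=30):
    """Two samples from the SAME normal population (so the null is true),
    z-tested at alpha = .05. Returning True is, by definition, a Type I error."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = (1 / n + 1 / n) ** 0.5            # SE of the mean difference (sigma = 1)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96                   # critical value for a two-tailed .05 test

rng = random.Random(1)
trials = 2000
type_i_rate = sum(false_alarm(rng) for _ in range(trials)) / trials
# type_i_rate hovers near 0.05; lowering alpha to .01 would shrink it accordingly
```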

Analysis of CQR data?

•Using the raw data, individual team members identify themes or domains that can be used to sort and arrange the data •Avoid preconceived ideas - allow the domains to emerge from the data •Team members then work to find a common understanding of the domains or themes •Commonalities are summarized with examples from the data •Clarification of themes/domains •Cross analysis is then used to agree on: •A common set of domains •The language used to describe the domains •Final consensus is reached

What is Translational Research?

•Ways of moving basic research findings into practice so they reach those who need that information •An area of increased emphasis

What are the Major Threats to Construct Validity?

•*Attention and Contact Accorded the Client* •Was there an increase in the attention given to the client - could this explain the results? •Also - placebo effects are "real" and powerful! (a good reason to use a placebo - to rule out the placebo effect as an explanation for the change) •What about participant expectations (i.e., "I am participating in cutting-edge science")? •*Single Operations and Narrow Stimulus Sampling* •A single set of stimuli, a single investigator, etc. may contribute to the results •One therapist administers the treatment - is there something special about this person? •Two or more administrators would improve the study •But if there are two groups of therapists (Tx A and Tx B), is it possible that the therapists in Tx A were better trained (adherence to Tx)? •*Experimenter Expectancies* •Unintentional effects of the researcher may influence participant responses •Vocal tone, facial expressions, delivery of instructions •A double blind helps with this issue (the experimenter is kept in the dark) - complicated in a study of therapy interventions •*Demand Characteristics* •Cues that are ancillary but provide information that exerts influence on participants •Cues are incidental but promote behavior that might be thought to occur from the independent variable

Key questions and concepts that pertain to relations among variables

•*Correlate*: Two or more variables are associated at a given point in time, where there is no direct evidence that one variable precedes the other •*Risk Factor*: A characteristic that is an antecedent to and increases the likelihood of an outcome of interest •A predictor of a later outcome •Here we know that one variable occurs before the other (one predicts the other - cigarette smoking predicts a higher incidence of lung cancer) •Even though smoking predicts lung cancer, we should be cautious in assigning smoking as the cause of lung cancer - WHY? •*Protective Factor*: A characteristic or variable that seems to prevent or reduce a negative outcome •Not everyone who experiences a significant trauma develops PTSD. What is going on? -How would you study this? (It would likely require multiple studies using different methods) •*Cause*: Changing one variable is demonstrated to lead to a change in the other variable •Moving from a "risk factor" to a cause

What is the importance of Selecting the Right Sample?

•*Diversity of the sample* -An ongoing concern •Historically much of the research was with European American males •Omission of diverse groups is now recognized as a serious concern - even discriminatory in some circumstances •Historically, women of child-bearing age were blocked from participating in many research studies •Kazdin refers to WEIRD samples (Western, Educated, Industrialized, Rich, from Democratic cultures) •We may not be able to randomly select the sample - but we should avoid systematically excluding participants who possess diverse characteristics •Sex, sexual identity, ethnicity, culture, SES, etc.

What are the 8 Major Threats to Data-Evaluation Validity?

•*Low Statistical Power*: A true effect exists but the study cannot detect that effect (difference) •*Subject Heterogeneity*: Participants are too different (differences may be age, disorder, background, etc.) •*Variability of Procedures*: Are procedures sloppy? Problems with interrater reliability? Procedures not standardized and rehearsed? •*Unreliability of Measures*: Measurements are poor (creates score variability) •*Restricted Range of the Measures*: Measures with a limited range reduce the ability to detect differences •*Errors in Data Recording, Analysis & Reporting*: Obvious threats •*Multiple Comparisons and Error Rates*: Multiple tests increase the probability of a "chance finding" (think multiple t-tests as a simple example) •*Misreading/Misinterpreting Data*: Misread SPSS output, misunderstand findings •What do you learn from this problem, which is more common than folks want to admit?
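The multiple-comparisons threat is easy to quantify. With independent tests each run at alpha = .05, the chance of at least one Type I error grows quickly; the Bonferroni correction (one common remedy, shown here only as an illustration) simply divides alpha by the number of tests:

```python
def familywise_error(alpha, n_tests):
    """Probability of at least one Type I error across n independent tests."""
    return 1 - (1 - alpha) ** n_tests

def bonferroni_alpha(alpha, n_tests):
    """Per-test alpha that caps the family-wise error rate at about `alpha`."""
    return alpha / n_tests

inflated = familywise_error(0.05, 10)    # about 0.40 for ten independent t-tests
corrected = bonferroni_alpha(0.05, 10)   # each test now run at 0.005
```

So ten "independent" t-tests at .05 carry roughly a 40% chance of at least one chance finding, which is why omnibus tests or corrected alphas are preferred.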

What are the advantages of the Pre & Posttest Designs?

•*Matched subjects* - allows the matching of subjects on some variable(s) that might influence results (in the previous ASD study, think of IQ and gender differences among those with ASD) •*Evaluate variables* - permits the evaluation of the variable that was matched in the study (in the ASD study one can evaluate IQ and how it impacts the results - is ABA better for higher-IQ children with ASD, etc.) •*Statistical power* - pre-posttests have better statistical power (the statistics do not need to account for differences between the folks in the treatment and control groups - they are the same folks, given a pretest and a posttest) •*Analyze changes* - allows the researcher to examine who changed - what proportion of individuals changed in a particular fashion - and to determine whether such changes are statistically significant •*Evaluate attrition* - allows evaluation of attrition, i.e., what were the participants like who dropped out and did not complete the post-treatment measures? •Additional considerations for pretest-posttest designs •Pretest sensitization effect (sometimes called a carryover effect) - did the pretest influence the participants to react differently on the posttest, regardless of the intervention? •In a randomized controlled treatment, people who do not receive the preferred or desired treatment might drop out -Actually, just receiving the preferred treatment might influence the results when not accounting for the specific interventions (self-fulfilling) -Ethical issues need to be considered - the study may be stopped early if there is a notable difference in performance between the two or more groups

What are the Procedures for Evaluating whether Demand Characteristics may account for the results?

•*Post-Experimental Inquiry* •Ask participants after the study about their experiences/perceptions •What was the purpose, what was expected, how were they expected to perform? •If findings are consistent with expectations, this raises the question that demand characteristics might have contributed •*Pre-Inquiry* •Participants are exposed to the proposed procedures - they hear and see what will happen without actually running the procedure - and are then asked to respond to measures •If participants respond to measures consistent with the predicted hypotheses, this raises concern that there is something going on with the procedures •*Simulations* •Participants are asked to act as if they have received the procedures and then asked to deceive "naïve researchers" •If participants can deceive naïve researchers (they seem to know how to act), then there may be a concern

What are the Key Criteria for Inferring a Causal Relation?

•*Strong Association*: Strong association between the IV or intervention and the DV or outcome •*Consistency*: Replication across studies, samples, and conditions •*Specificity*: Demonstrated specificity of the association among the intervention, the moderator, and the outcome •If there are other plausible constructs that can account for the outcome, then a causal relation cannot be supported •*Time Line*: Demonstrate a time line or order of the proposed cause and outcome. The ordering must be very clear •*Gradient*: Demonstrating a gradient in which a stronger dose or activation of the IV is associated with a greater change in the outcome (e.g., a "dose-response relation") •*Plausibility or Coherence*: A plausible, coherent, and reasonable process explains precisely what the construct does and how it works to lead to the outcome - steps along the way should be testable •*Experiment*: A causal relation is evident when one alters the IV and observes a change in the DV •*Analogy*: Are there similar causal relations in other areas? •Text example of antibiotics not only addressing infection but also altering problems such as tics and OCD (pretty spooky) - what is going on here? -Science is very fussy about cause-effect relations - they don't come easy

What are Accelerated, multi-cohort longitudinal designs?

•A longitudinal design in which two or more groups are studied in some special or unique way •Kazdin example: three groups are studied to examine how patterns of cognitions, emotions, and behavior emerge over a time frame from ages 5 to 14 •A group of 5-year-olds is studied until age 8 •A group of 8-year-olds is studied until age 11 •A group of 11-year-olds is studied until age 14 •All three age groups are studied concurrently - so the entire study takes 4 years to collect data •There is a cross-sectional component, as all the children are compared at the time they enter the study, when they are at different ages (5, 8, and 11)
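The overlap that stitches the three cohorts into one 5-to-14 span can be checked in a couple of lines (the entry ages and ranges follow the example above):

```python
# Each cohort's entry age maps to the ages at which that cohort is assessed
cohorts = {5: range(5, 9), 8: range(8, 12), 11: range(11, 15)}

# The overlapping ages (8 and 11) let the cohorts be linked into one span
covered = sorted(set().union(*cohorts.values()))  # ages 5 through 14, no gaps
```

No single child is followed for the full decade, yet together the cohorts cover every age from 5 to 14.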

A-B Designs

•A = the baseline phase •B = the intervention phase •A weak design - many extraneous factors could contribute to the desired change •Do clinicians care? (just saying...)

What is a Moderator?

•A characteristic that influences the direction or magnitude of the relationship between the intervention and the outcome (the IV and the DV) •If the relationship between the IV and DV is different for males and females, sex is a moderator of the relation •Moderators are important in the study of psychotherapy outcomes - what influences the outcome in therapy? •What is it about those people who do not respond to psychotherapy - what are the circumstances that influence outcome?

What is a Mediator?

•A construct that shows a statistical relation between the IV and DV •Something that mediates change may not necessarily explain the processes of how the change occurred •It may point to the possible mechanisms but is not necessarily a mechanism •Exercise reduces depression, so what are the factors related to exercise and depression that account for this change? •*It explains* the IV-DV relation statistically, even if it does not reveal the mechanism

GT-Basic research design pt 2

•Advanced coding: -Integration of the final grounded theory (theory grounded in the findings) -Concepts that reach the stage of categories should be abstract and reduced to highly conceptual terms -Findings should be interrelated concepts as opposed to themes -Statements are developed to detail the relationships between categories and the central core category •Storyline technique •Used for theoretical integration •It builds a story that connects the categories and presents theoretical propositions •Presents the grounded theory •Quality and rigor -Researcher's expertise and skills -Methodology is congruent with the research question -Procedural precision in the use of methods

What are Comparison groups?

•All sorts of groups can be used in research studies •*Comparison groups refer to any group included in a research design beyond the primary group or groups of interest* •Used to draw various conclusions •Different types of groups may allow different conclusions •Could be multiple treatment groups •"Comparison groups" is a broader notion than "control groups" •But some control groups might be thought of as comparison groups, since the experimental group is being "compared" to the "control group" •The bigger issue is to consider what types of groups might strengthen your study •Maybe to rule out some threat or other? •The selection of groups should be driven by what you hope to say about your results at the conclusion of the study •Comparison groups might help clean up some ambiguity •So, obviously this needs to be considered before putting the finishing touches on your design/methodology

Random Assignment

•Allocating subjects to groups in such a way that the probability of each subject appearing in any of the groups is equal •Allocation is random - random numbers are assigned •Use a software program to assign groups, or pre-existing tables located in stats texts, etc. •Power is improved if groups are equal in size
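A shuffle-and-deal allocation is one simple way to meet these criteria; this sketch (the function name and seed are illustrative, not from any particular package) also keeps the groups equal in size:

```python
import random

def randomly_assign(subjects, n_groups, seed=None):
    """Shuffle the pool, then deal subjects round-robin into n_groups.
    Every subject has an equal chance of landing in any group."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

groups = randomly_assign(range(40), 2, seed=7)  # two groups of 20 subjects each
```

In practice the same job is done by randomization features in stats software; the point is that group membership is determined by chance alone, not by any subject characteristic.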

Counterbalanced design

•Another example of a multiple-treatment counterbalanced design (three conditions: A - B - C) •ABC, ACB, BCA, BAC, CAB, CBA •The six sequences illustrate the orders in which the conditions can appear. There would be six groups of participants, one group assigned to each of the six counterbalanced sequences •Sequencing effects need to be considered, since they can have an impact - sequencing effects are a problem here •What if the first condition leads to a certain degree of change? Very tricky!
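For complete counterbalancing, the sequences are just the permutations of the conditions, which is easy to enumerate - and which shows why designs with many conditions get unwieldy (n conditions need n! sequence groups):

```python
from itertools import permutations

conditions = "ABC"
orders = ["".join(p) for p in permutations(conditions)]  # all orderings of A, B, C
n_sequences = len(orders)  # 3! = 6 distinct sequences
```

With four conditions the same code yields 24 sequences, which is one reason partial counterbalancing schemes such as Latin squares are often used instead.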

Self report measures

•Assess feelings and problems defined by what clients say or feel •Helpless, self-critical, unhappy, etc. •Can assess multiple domains that may be difficult to reach using other means •The client is in a unique position to report on these domains •Can do more than one domain at a time (omnibus scales - BASC, MMPI, PAI, etc.) •Ease of administration - easy for screening •Use a self-report measure to assign participants to the appropriate group for research •Assess diverse aspects of a characteristic, or multiple characteristics

Psychobiological measures

•Assessment techniques designed to examine biological substrates and correlates of affect, cognition, and behavior •Links biological processes and psychological constructs •Less invasive: respiration - heart rate - blood pressure - brain activity •More invasive: sexual arousal - sampling blood & saliva - think of detecting drug use, etc. •Think physical symptoms of anxiety •EEG and depression, as well as other disorders •Cortisol and stress •More broadly - neuroimaging and the assessment of disorders •Functional Magnetic Resonance Imaging (fMRI) - measures blood oxygenation and flow •Computed Tomography (CT) - a picture of the brain based on the absorption of X-rays by tissues •Positron Emission Tomography (PET) - traces radioactive material used to map brain processes/activity •Single-Photon Emission Computed Tomography (SPECT) - reflects blood flow and images of brain regions •Electroencephalography (EEG) - measures the electrical activity of many neurons •Magnetoencephalography (MEG) - maps brain activity using the magnetic fields produced by the brain's electrical currents

True-Experimental Designs

•Assigning participants to groups in an unbiased fashion is a major defining characteristic of a true experiment •One also expects the researcher to be manipulating the conditions (i.e., controlling the delivery of the experimental condition) and allocating the participants to groups in a random fashion •There are many different types of designs

Issues related to measurement selection

•Awareness of being assessed •Acquiescence - tendency to respond affirmatively •Naysaying - tendency to disagree or deny (opposite of acquiescence) •Socially desirable responses - tendency to portray oneself in a positive light •End aversion bias - tendency to avoid extreme scores •Multiple measures may help if they vary in how they may prompt participant reactivity •It helps if we assure anonymity

What is phenomenology?

•Bring together individual perceptions associated with some shared experience •Explore the core meanings and true essence of what is shared •1. This starts with collecting participants' statements: typically, statements are written down, but they could be transcribed •2. Once statements describing the individuals' shared experience are collected, the researcher reduces the data by completing a transcript analysis -*This leads the researcher to identify common themes and shared meanings* •3. The eventual result is a report that provides a statement of the phenomenon's essential, invariant structure

What are the limitations of A-B & A-B-A-B designs?

•Can baseline conditions be tolerated long enough to provide a meaningful Phase A? •After Phase B, the dependent variable may not be able to revert back to Phase A levels (behavior cannot be unlearned) •Withdrawal of Phase B may be contraindicated •It places the participant at increased risk (e.g., suicide, risky behaviors, etc.) •The participant finds the new Phase B experience intrinsically rewarding •People don't want to mess with a good thing (old clinical saying: "shoot, and whatever you hit, call it your target")

Advantages of Computerized and Technology-based assessment

•Can be done in real life - real time •Multiple assessments through the day •Items can be adjusted based on previous responses •Reliable administration (can't skip items, no math errors) •May elicit more revealing info (the assessor does not influence the respondent) •Lower costs •Large-scale operations possible •May increase the reliability of clinical decision making

What are Ceiling and Floor effects?

•Ceiling and floor effects refer to situations where change in the dependent variable reaches a limit beyond which no additional change is possible •No further change can be measured •The scale may reach a limit that cannot be extended •So, even if additional change appears possible, it cannot be measured •Kazdin also points out that ranges that appear to possess equal intervals may not really be equal •Example - the first 10 pounds is easier to lose than the second or third 10 pounds

Ethnicity, Culture and Diversity

•Changing demographics of cultural and ethnic groups and the role of culture and ethnicity in psychological processes •Do you know the population you want to study? •It should not be assumed that a measure is equivalent for different cultural groups without some evidence that it is equivalent •Has construct validity been established for diverse groups?

Classification, Selection, and Diagnosis

•Classification, selection and diagnosis are all ways of referring to some outcome that is of interest •Prediction and selection of outcomes are important to practitioners •We use research to identify variables that predict an outcome (e.g., if "X" occurs, then this outcome is more likely) •Kazdin also uses the example of identifying and classifying people who may present a risk to the public (terrorists, etc.) •What variables can help with identification (and think about ethical concerns too)

What are Cohort designs?

•Cohort designs employ strategies in which the researcher studies an intact group or groups over time •Follow samples over time to identify factors leading to an outcome of interest •The group is assessed before the outcome occurs •The effort is to establish the relations between antecedent events and the outcomes •The time frame can vary from weeks to years, depending on the goals of the study •In contrast to case-control designs, where the groups are selected based on the outcome already having occurred

What are mixed methods?

•Combining quantitative and qualitative methods •Some make the case that this is the "third paradigm" (mixing the two methods) •Mixed methods has its own advocates, followers, etc... •Using mixed methods and diverse approaches can reveal more about the phenomenon than just one approach

Overview to Threats to Data-Evaluation Validity

•Conclusions depend on rejecting the null hypothesis •Null: no differences between groups •We can reject the null if we find a statistically significant difference •We select a probability level (normally at least 0.05 - allowing only 5% error) •The possibilities: •Yes: a sig. difference is found (we reject the null hyp.) •Yes, there really is a sig. difference •Oops, nope, there really is no sig. difference [often referred to as "Type I Error"] •No: a sig. difference is not found (we do not reject the null hyp.) •Yes, this is the correct finding - no sig. difference exists •No, this is not a correct finding - there really is a sig. difference that has not been found [often referred to as "Type II Error"]

What are the core features of Qualitative Methods?

•Core Features: Narrative accounts - Description - Interpretation - Context - Meaning -Goal is to describe, interpret, and understand the phenomena of interest -Goal is to deepen understanding -Gain an in-depth understanding -The purpose is to ask participants to describe the experience (e.g., feelings, thoughts, actions, etc.) and to do so in a manner that captures the richness and meaning the experience has for the participants •Qualitative methods are thought to increase our understanding because the level of analysis is detailed and in-depth -Highlights new ways of speaking about the phenomena - new insights •Qualitative methods' systematic and scientific tenets and procedures move them out of the realm of literature and the other arts -Going beyond the traditional arts by providing a systematic method of data collection, analysis, and replication - and efforts to reduce potential biases

What are the critical issues with cohort designs?

•Critical issues mostly involve construct validity •What is the construct of interest? •What are the operational criteria used to separate/delineate the groups (measures for separating the groups)? •To what extent is the assessment procedure known to reliably separate or select persons with and without the desired characteristic(s)? •Critical issues in selecting the groups •From what population, setting, and context? •If using a comparison group, what are the criteria for choosing the most suitable group? •How similar are the different groups (aside from the characteristic of interest)? •Critical issues with direction and type of influence •Will the results allow conclusions about the direction and type of influence?

What is the sample?

•Critical to the study is the choice of subjects or participants •Specific to the research proposal: why are you choosing these subjects? •Maybe because the specific sample is less likely to be influenced by some variable? •Convenience is often a big consideration •But the bigger question is: "What are the circumstances in which this hypothesis is likely to be supported?" •Is there a population that would be a super good test for the hypothesis? •Is there a group/population that is more likely to show the predicted results?

Developing the research Idea

•Curiosity: Special interest arising from observation, belief, experience, etc. •Are (fill in the blank) more (fill in the blank) than non-(fill in the blank)? (e.g., are musicians more sensitive than non-musicians?) •Case Study: What seems to be a relation among features within an individual - examining whether the relation really exists and whether it has any generality. Intensive study of the individual/group/institution (political body)/society •Does a person with damage to a localized area of the brain thought to be responsible for certain behaviors exhibit behaviors consistent with expectations? •Early case studies - think Freud •Studying Special Populations: Research that isolates a special group for close analysis of its characteristics •What are the cognitions of people with depression? •Does the presence of a particular personality characteristic predict other characteristics of interest? •Studying Exceptions: A population that seems to violate a general rule is studied •Can exceptions that deviate from typical and expected outcomes be studied (what's going on?)?

What are the 4 advantages of TAU as the comparison condition?

•Demands for ethical concerns are addressed - no fake treatments are provided •Since everyone receives a veridical (i.e., real or truthful) treatment, attrition is kept to a minimum -Attrition creates all sorts of problems with internal, external, construct, and data-evaluation validity •TAU tends to control for the "common factors" issue •Clinical folks seem to feel better about this approach - it's a real "clinical" question

Posttest-Only Control Group Design

•Designs that use pretests are more common, but... •Posttest only controls for possible issues with pretest sensitization •Posttest only avoids participant exposure to the assessment task before the experiment •With large numbers of participants and random assignment to the groups the likelihood of group equivalence is thought to be pretty good •Another issue is that pretests are not always available in clinical research •When assessment batteries are expensive and time consuming •One compromise is to provide some pretests but limit what is provided •There can be ethical issues, where a pretest might lead to increased exposure to risk •E.g., blood draws, sensitive personal questions, etc. •But... pretests offer the chance to match on the pretest, etc., so they are often desirable

What is Group equivalence?

•Desire is to distribute the characteristics of the sample among the groups •Random assignment helps control for nuisance variables •It allows for unsystematic distribution of the nuisance variables •We must acknowledge that random assignment does not mean that groups will always be equivalent •Group differences are less of a concern when you are working with a large sample (no real rule, but groups that have >40 is a helpful guide) •Statistical power also needs careful consideration
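The random-assignment idea above can be sketched in a few lines of Python. This is a minimal illustration, not a statistics-package routine; the participant IDs, group count, and fixed seed are all made up for the example:

```python
import random

def random_assignment(participants, n_groups=2, seed=42):
    """Shuffle the sample, then deal participants round-robin into groups,
    so nuisance variables are distributed unsystematically."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

groups = random_assignment(range(1, 81))  # 80 hypothetical participants
print([len(g) for g in groups])  # → [40, 40]
```

Note that the two groups of 40 meet the ">40 is a helpful guide" rule of thumb above, but nothing in the shuffle guarantees the groups are equivalent on any particular characteristic - that is exactly the caveat the slide makes.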

Observational Research & Practitioner Interests

•Disorders: risk factors, onset, etiologies and course can't be studied using experimental manipulation - it would be unethical •Such studies are mostly observational - comparing those with and without a specific disorder •Studies of trauma survivors, exposure to toxins, malnutrition do not permit experimental manipulation •Public health studies rely on observational research & psychology uses similar methodologies •Experimental research often limits the number of variables that can be controlled or manipulated •But we know that there are many variables that interact to influence human behavior •Statistical methods can take multiple variables into account and examine the influence of variables on each other •Data-analytic techniques have improved and can strengthen inferences that can be drawn from observational research •Path analysis, multiple regression, time-series analysis •Correlational research has been used to identify the relationships between smoking and health issues, between exposure to lead and health issues (including lower IQ), etc. •The Centers for Disease Control and Prevention don't "pooh pooh" these findings when the data are compelling

What are Novelty Threats (Threat to External Validity)?

•Does the novelty of the stimulus impact the reaction or response? ---If fire trucks are painted yellow, will accidents with them be reduced? People might be more observant of yellow fire trucks because it's so different - and once that novelty wears off, what then?

Ethical Protection of participants

•Ethical protection of participants •Careful with deception - avoid it if at all possible •Any side effects or negative consequences for participants •IRB will carefully review your application

How do we manage threats to External Validity?

•Examine the importance of determining the relevance of a threat to external validity before it is managed •Demonstrate the flaw in the findings •The onus is on the researcher who conducts the study to clarify the conditions to which one might generalize and to convey how the conditions of the experiment represent those conditions. •The onus on the skeptic is to be explicit on how a particular threat to external validity would be plausible as a restriction of the findings.

What is a Yoked Example?

•Example: differences among treatment and control groups might include the number of sessions, the time between sessions, the times when progress is measured, etc. •Any of these factors could potentially confound the findings between treatment and a control group •Yoking is a matching procedure where each participant in the treatment group is matched with a person in the control group on these procedural variables •So, anticipating differences among treatment and control groups and yoking participants helps control for such differences

What are the strengths and limitations of single-case designs?

•Expands the range of options to evaluate intervention programs (much of this is done by practitioners seeking to document treatment progress) •A way to evaluate progress of individuals in treatment •Provides ongoing feedback - track progress - this can be valuable •Used in a small-scale way before expanding to something big •Investigate rare problems where a large sample is not likely •**Issues** •Generalization - threats to external validity •Normally one may be more concerned with the client's progress than with group-level findings

What is Data-Evaluation Validity?

•Facets of the finding's evaluation that influence conclusions about the experimental condition and its effects -Often its statistics that are used in the evaluation - hence a previous name for this type of threat to validity is "statistical conclusion validity" -But this type of validity is bigger than just stats

Dilemmas related to subject selection

•First: diversity of the sample - USA census only recognizes five racial groups (leaving aside combinations) •Ex: Hispanic Americans are more than one group - much ethnic diversity •One must recognize that any sample may be inherently limited in representing all groups •Second: expands the issue of ethnicity and culture

Examples of Theory as a guide?

•Focus on the origins and nature of a clinical dysfunction or behavior pattern •Conceptual underpinnings and hypotheses about factors leading to •The problem or pattern of functioning •Processes involved •How these processes emerge or operate •Maybe various risk and protective factors •Paths and trajectories •Early development and subsequent dysfunction •Focus on factors that maintain a particular problem or pattern of behavior •Factors that influence and sustain or shape a problem or pattern of behavior •How, why or when relapse occurs •What promotes therapeutic change •The necessary and facilitative conditions for change to occur

What do Quantitative methods focus on?

•Focus on underlying processes •Causes •How it occurs •Researcher is a data collector - less influenced by the subject matter

Avoiding Increased Error

•For parametric tests such as t tests we can only compare two groups at a time •Testing more than two groups with repeated t tests compounds the chance for error •Using t tests to compare three groups (e.g., A - B - C) results in compounded error: A vs. B allows for 5%, A vs. C allows for 5%, and B vs. C allows for 5% - so we have roughly 15% total error possible when using multiple t tests •What is a researcher to do?? •For parametric data it's ANOVA •First it looks at the within-group variance of all the groups •Compares it to the between-group variance for the groups •If the between-group variance is large in proportion to the within-group variance then it's possible to conclude that a significant difference exists somewhere among the groups •Then post hoc tests (like t tests) are conducted to locate the sig. differences
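The 5% + 5% + 5% figure is the additive upper bound on the familywise error rate. Under the simplifying assumption that the three comparisons are independent, the exact rate can be computed directly - a sketch, not something from the slides:

```python
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one Type I error across k independent tests,
    each run at the same per-test alpha."""
    return 1 - (1 - alpha) ** k

# Three pairwise t tests (A-B, A-C, B-C), each at alpha = .05:
print(round(familywise_error(0.05, 3), 3))  # → 0.143, close to the 15% bound
```

Either way, the per-test 5% guarantee is lost once multiple t tests are run - which is exactly why ANOVA plus post hoc tests is the recommended route.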

What is Grounded Theory (GT)?

•GT info taken from Tie, Birks & Francis. (2019). Grounded theory research: A design framework for novice researchers. Sage Open Medicine, 7, 1- •GT aims to discover or construct a theory from the collected data --Systematic --Comparative analysis --Mostly associated with qualitative data, but quantitative methods can be used --A variety of methodologies are possible •It might depend on whose school of thought one subscribes to •GT is more involved than it appears - get supervision if you are just starting out

Global Ratings

•Global ratings refer to efforts to quantify impressions of some general characteristic •Often made by therapist or another party to provide broad assessment of treatment •Judgements may vary in complexity about what is actually being rated •Example is an item that asks clinician to rate client improvement •Or how much do the client's symptoms interfere with daily functioning •Global ratings raise concerns such as •There may be little precision in what is being asked (think GAF from DSM-IV) •Homemade measures lack psychometric qualities •Global ratings may be appropriate in some situations •But they should not be the only rating to assess complex constructs (e.g., wellness, etc.)

The Need for Theory

•Goal is to understand human functioning •It's more than accumulating facts and empirical findings •We want to relate these findings to each other and to other phenomena in a cohesive way •Text example: sex differences for a disorder - aside from the obvious question about sex differences, a theoretical understanding can pose how the difference develops, and what implications the difference may have for understanding biological and psychosocial development. •Theory can bring order to areas where findings are diffuse - example: many therapy techniques, so think "common factors" and how that theory pulls ideas together •Theory can explain the basis of change and unite diverse outcomes - example: how do the effects of psychotherapy occur? One can explore theoretical statements, prepare hypotheses that can explain the effects and then test them. •When such a statement can be elaborated empirically it deepens our understanding of psychotherapy processes.

What is Power?

•How do we ensure that there is a really good chance that we can find a significant difference when it really exists? •Strong power (meaning a strong chance that a difference will be detected if it really exists) •Is not derived mathematically •It is based on convention about the margin of protection one should have against accidentally accepting the null hypothesis when it is false (Type II error) •Many moons ago it was suggested that the minimum power should be .80 (see Cohen, 1965) •Power of .80 means that if a true difference exists in the population, the study will detect it 4 out of 5 times (80%) •Kazdin points to a review of the literature that suggests that many studies do not have the requisite power to tease out small and medium effects •Weak power makes it problematic to draw conclusions •If a weak-power study suggests no significant differences, what do we conclude? •Moral of the story: statistical power is important - it should always be reported •Including in your dissertation

Validity and Quality of the Data

•In quantitative methods the different types of validity are distinguished to draw inferences about the experimental manipulations or observed conditions, and to draw causal relations •Qualitative methods do not have the same goals for evaluating the impact of the independent variables --So threats to validity covered in quantitative methods do not transfer well to qualitative methods

How to increase Power?

•Increase the sample size •Contrast conditions that are more likely to vary - more distinct - easier to measure •Is the best test being used? Some will likely be better than others •Pretests/repeated measures are statistically more friendly •Vary alpha •This is tricky - start with .05 (you may cover this in stats) •Use directional tests for significance testing if appropriate •Moves all the error into one tail - Hey Bob, draw this on the board •Decrease variability (meaning error) of the scales •In simple terms - are some scales better than others?
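The first tip - increase the sample size - can be illustrated with a normal-approximation power calculation for a two-sided, two-group comparison. This is a sketch: real planning tools such as G*Power use the t distribution, so their numbers come out slightly different.

```python
import math
from statistics import NormalDist

def approx_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample test of effect size d,
    using the normal approximation (ignores the tiny opposite-tail term)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)             # critical value, about 1.96
    noncentrality = d * math.sqrt(n_per_group / 2)  # expected z under H1
    return nd.cdf(noncentrality - z_crit)

for n in (20, 50, 100):  # medium effect, d = .5
    print(n, round(approx_power(0.5, n), 2))
```

With d held at a medium .5, power climbs from roughly a third at 20 per group to well past the conventional .80 at 100 per group - sample size is the lever most directly under the researcher's control.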

What is Construct validity? How is it different than Internal Validity?

•Interpretation of the basis of the relationship demonstrated in the study •Construct is the underlying concept that is considered as the basis for the study's effect --Think of interpretive validity (remember psychometrics - it's the interpretation that is either valid or not so valid) --So we are looking at any ambiguities in the interpretations of the results of a study •*Construct validity is different from internal validity* --*Internal validity:* is the manipulation responsible for the findings or are there other factors (history, maturation, etc.) that could be responsible? --*Construct validity:* is the change a function of the construct? --This is often not adequately attended to (just assumed)

What are the sources of Qualitative data?

•Interviews •Direct observations •Statements of personal experience •Documents •Journals, diaries, letters, biographical materials •Photographs •Audio and video recordings •Films •Note: each of the above methods has recommended methodologies for data collection

What is a True experiment?

•Investigations that permit maximum control over the IV or manipulation of interest •Assign subjects to experimental or control conditions •Kazdin makes the point that a true experiment is a generic term applied to studies where •subjects can be randomly assigned to conditions •Investigator controls who receives and who does not receive the experimental manipulation or intervention •Example would be a randomized controlled trials study •Randomized controlled clinical trials have been used to establish evidence-based treatments

What is meant by Self-reflective in Qual?

•Its important to remember that the researcher decides what data to collect and how to arrange it - caution needed as researchers can inadvertently impose their own values and beliefs

What are the Dilemmas Raised by TAUS?

•It's tricky figuring out what a treatment entails at a clinic, hospital or school - no matter what the descriptions and brochures report •The investigator needs to carefully monitor and evaluate what is done as part of the routine treatment •From a design perspective, it's better for the investigator to oversee, monitor and assess the TAU so that one can report what was actually done as part of the TAU •Kazdin cautions that TAUs at many clinics may be administered in a sloppy fashion and may be inconsistent, with therapist flexibility, personal style and taste creating problems •Think of a two-or-more-setting study with TAUs provided in different settings •The TAUs need to be carefully monitored •But, think about it, does the careful monitoring change the TAU? •With all of the above concerns, TAUs can be very helpful as comparison groups

T/F: Is low statistical power a threat to Data-Evaluation Validity?

•True. Low statistical power (power is the probability of rejecting the null when it is false) is a threat to data-evaluation validity ---So with good statistical power we have the ability to reject the null when it should be rejected ---A common problem is the low probability of detecting a difference if it truly exists.

Modern History of Qualitative research

•Much of science has emerged from the quantitative tradition •Experimental controls, operationalized definitions, etc. •Shy away from subjectivity and related internal processes •Not so focused on how the individual thinks, experiences and constructs the world •Focus has been on more objective measures •Current focus on internal processes has been to move away from the individual with more attention given to operationalized definitions •Think: rating scales to measure stress, fear, loneliness, etc...

Additional Information on Reactions of Controls (Threat to Internal Validity)

•No-intervention group •They may be treated differently to placate them ("those intervention folks get all the perks!") •Threat to internal validity •If participants know they are in the control condition, this understanding may change their behaviors •Influence of special treatment •Participating in a study (experimental or control group) may influence participants in some way. •So, "damned if you do, damned if you don't!"

Qualitative Data Analysis

•Normally data is taken in descriptive form and recorded for analysis •Analysis of data: -Look for recurring themes or key concepts that emerge -Identify processes or a progression that seems to show the flow of experience -Links variables that emerge concurrently or over time -Generally looks for consistency and patterns in the materials •One aspires to discover new conceptualizations of the phenomenon

Direct observations cont..

•Not always an easy task •Behavior can be a stream of activity and not easily tracked •Codes may need to be developed that define what will be counted and how •Observations must be reliable - consistently counted and coded •So, training needed for the observers •Past research suggests that such reliability is difficult to achieve •Observations may be obtained in different situations and in different ways •Think home - school - community and all the challenges •Use of technology •Phones used to prompt assessment of symptoms across the day •Feedback can be provided to intervene or support participant •Laboratory settings •More controlled environment - multiple raters
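Observer reliability of the kind described above is often quantified with chance-corrected agreement. Below is a minimal sketch of Cohen's kappa for two raters; the "on-task"/"off-task" codes and the rating sequences are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Inter-observer agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions per code
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["on-task", "on-task", "off-task", "on-task", "off-task", "on-task"]
b = ["on-task", "off-task", "off-task", "on-task", "off-task", "on-task"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Raw percent agreement here is 5/6 (83%), but kappa discounts the agreement expected by chance alone - one reason the slide's warning that "such reliability is difficult to achieve" bites harder than raw agreement suggests.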

Types of Measure or Modality of Measurement

•Objective measures: Explicit measurements for characteristics such as IQ, personality, etc., using fixed responses (e.g., 1 - 7 point scale) •Global ratings: Measure general characteristics, overall impressions of the construct of interest. •Projective measures: Assess underlying motives, processes, personality, etc. •Direct observations of behavior: Measure behavior as it is directly observed. •Psychobiological measures: Examine biological substrates (e.g., arousal, cardiovascular, brain imaging, etc.) •Computerized tech-based & web-based assessment: Use of computers, smart phones, web-based tools, etc. •Unobtrusive measures: Assessments out of the awareness of the participants

Problems with Case-Control Designs?

•Obvious concerns with retrospective design •Rely on recollections •Different methods for collecting past information are sometimes possible --Self report - archival records, corroborated evidence --Memory is pretty sloppy!!! So efforts are made to collect records (prior diagnoses, school records, medical records, etc.) •These methods are flawed but still raise interesting questions •Correlations do not always point in a certain direction --Is the childhood memory recalled in such a way based on current biases - "I am happy so this influences my understanding of what happened in my past" •These design strategies can be very valuable - but... be careful

What are Samples of Convenience?

•Often used in research (think college-age students) •Participants drawn from a particular clinic waiting room •Big question: Is the sample appropriate? --Is there some quality about the convenience sample that might influence findings?? --Onus is on the researcher to discuss this or at least acknowledge any concern •Kazdin examples of circumstances when a sample of convenience may not be horrible --Postpartum depression or breast cancer (main interest is in women) --Pavlov's dogs - probably not necessary for Pavlov to have immediate concerns for different breeds of dogs --Clinics tend to draw from neighborhoods with specific groups

Brief Measures

•One still needs to assess psychometric properties and assure they are adequate for the job •Text gives examples of shortened symptom checklist scales •Practical considerations if the scale is used multiple times •One item scales •May possess little more than face validity - be careful! •Short scales and problems with restricted range •The range of scores may not be sufficient to distinguish participants (e.g., mild - moderate and more intense symptoms)

Data Evaluation

•Planning for how the data will be evaluated •This needs to be done up front to avoid what could be a disastrous experience •Do as much planning as possible and anticipate bad things (smaller sample, drop outs, etc.)

When and how do threats to Internal validity emerge? (1)

•Poorly Designed Study •Overview - more common than one thinks •Pre-post design •History, maturation and testing might account for pre-post differences in a simple pre-post design •Pilot study •Get a feel for the experimental manipulation - how to do it, work out the bugs and get insight into improving the design •Well-Designed Study but Sloppily Conducted •Sloppy procedures •This creates threats to internal validity •Diffusion of treatment (if manipulation is not provided correctly) •Implementation of different conditions - must be as intended

What is Test Sensitization (Threat to External Validity)?

•Pretest sensitization - a pretest may sensitize the participants in ways that affect their performance --think "carry-over effects" --Counterbalancing techniques (Example: memory for digits and letters) •A pretest does not necessarily restrict generality --But this needs to be evaluated and controlled where possible

What is Pretest sensitization?

•Pretest sensitization effect (sometimes called a carryover effect) - did the pretest sensitize the participants to react differently on the posttest regardless of the intervention? •In a randomized controlled treatment study, people who do not receive the preferred or desired treatment might drop out --Actually - just receiving the preferred treatment might influence the results when not accounting for the specific interventions (self-fulfilling) --Ethical issues need to be considered - the study may be stopped early if there is a notable difference in performance among the two or more groups

When We Do and Do Not Care about External Validity

•Proof of Concept (or Test of Principle) •Purpose is to create something like a "contrived" situation to prove a point or argument. •Sometimes we need to do this to demonstrate the theory even though it's contrived •Physicists create a special environment to demonstrate a theory •Psych: Can traumatic memory ever be erased? In early research generality may not be that important •More common in basic research

Basic Research (vs. Translational Research)

•Provides a test of proof of concept or theory to identify what can happen •Makes an effort to understand a phenomenon of interest under highly controlled conditions •Isolates processes or variables in ways that might not be how they appear in nature •Uses nonhuman animal models that allow special evaluation or observation of a key process •Uses special circumstances (e.g., procedures, equipment) that allow control or assessment of effects not otherwise available •Terms such as "bench research" or "lab research" are used to characterize basic research •Example: a calorie restricted diet can slow aging and reduce rates of death from diseases associated with aging

What is Applied research?

•Provides a test that focuses on an applied problem that may be of direct benefit to people •Terms such as "bench to bedside" and "bench to community" are used to describe translational research •Testing what can happen in contexts and real-life settings •Makes an effort to impact (reduce symptoms, improve test performance, etc.) and may have practical impact •May isolate influences (components of a prevention program) but looks at intervention packages with many components to examine the overall impact •Concerned from the outset about generality to everyday settings •Think "evidence-based" - applying research to practitioner services •Also think of it from the research investors' perspective: "what are we getting for our investment in this research?" •Most funders want to see measured outcomes

What is the Solomon Four-Group Design?

•Purpose of the Solomon four-group design is to evaluate the effect of pretesting on the effects obtained with a particular intervention •The question is: "does the pretest influence the results?" (is there a carry-over effect?) •Not a very common technique •But concerns about the effects of pretesting are notable •The likely influence of a pretest in sensitizing people to the intervention will depend on the: •Nature of the pretest •The potency of the intervention itself •The focus of the research •Etc. •What we know •We do know that interventions for psychological, medical and school functioning, pretest sensitization can influence results •Too few studies have been done of sufficient quality to clarify the scope and limits of impact of pretest sensitization •The goal is to make our interventions more effective •Is it possible that using a pretest to sensitize participants to an intervention could be a good thing - improve the intervention effects?

GT - basic research design?

•Purposive sampling - purposely select participants •Generate data - survey - interview - focus groups - etc. •Initial coding --Codes: short labels to identify theme/idea --Compare each incident to each other incident --It's a constant search for consistency and difference --Its inductive and deductive --Goal is to increase abstract analyses •Intermediate coding •Identify core categories - which categories are subsumed by other categories - ID properties of categories •Process allows categories to form around emerging core concepts •Where initial codes fracture the data - intermediate codes transform data to concepts that will allow theories to emerge •Assess and reassess until data is saturated and no new categories emerge

Qualitative methods are?

•Qualitative methods are (or can be) •Rigorous •Scientific •Disciplined •Replicable

Overview and Conclusions of Qualitative Methods?

•Qualitative methods can elaborate the nature of experience and its meaning •Qualitative methods emphasize a level of analysis that includes elaboration and consideration of the details of the phenomenon •Can serve as a good basis for developing and testing theory •Can provide a strong basis for developing one's own research agenda in a given area •Can provide a systematic way of looking at potential causal paths •Can examine phenomena in ways that may reveal many of the facets of human experience that quantitative methods may circumvent •The human experience •Subjective views •How people perceive, make sense of and react to situations within context

What is the strength of a Birth-cohort design?

•Repeated comprehensive assessments over an extended period of time •A large portion of the cohort must be retained - no easy matter •Note the potential strain on researchers and participants! •Effort, costs and all the obstacles one can imagine (retaining researchers, cases, grant support) •An eager young researcher is not going to be eager to start a study that will take decades to complete •New researchers need to be brought on

What is a wait-list control group (Control Groups Used Intervention studies?)

•Same as no-treatment group except that after the intervention is complete, the wait-list group receives the intervention - the wait-list group may also receive testing after the intervention --Same as no-treatment controls, but in this example the wait-list control group is provided the treatment after a delay
Diagram (R = random assignment, A = assessment, X = intervention):
R A₁ X A₂
R A₁ A₂ X A₃

What do Qualitative methods focus on?

•Seeks to understand action and experience •Broader sweeps of functioning in context •Then obtain detailed descriptions •Researcher is not one who "collects data" •Rather its someone who participates with the participants to bring out the data •Then integrate the data to give it meaning and substance •Sometimes the researcher becomes deeply involved with the subject matter

Sensitivity of Measure

•Sensitivity of the measure •Does it systematically measure variation and change of sufficient magnitude that one can detect the predicted differences that are hypothesized? •Relatively large range of responses is often better •Small ranges of possible scores means there is not much room to detect changes that might occur •Some items are not very conducive to ranges of responses (yes - no varieties) - can these items be totaled or are they measuring different constructs? •The researcher needs to become well acquainted with each item on the scale •Diversity and multicultural relevance of the measure -We want psychological science to be relevant to the diversity of our culture and the world's cultures

Basics for Obtaining the Group Size (Power)

•Set alpha: probably at .05 •Set statistical power at .80 •Set effect size - remember this is somewhat arbitrary •Cohen's d is often used (ES = (m₁ − m₂)/s); it measures the size of the difference in standard deviation units •Can be obtained from previous research trends •It can be thought of as small effect = .2, medium effect = .5, large effect = .8 •When in doubt - shoot for a medium effect of .5 •Refer to table 13.1 (p. 331) for an example •You can also use one of several stats programs that will help calculate the N needed for adequate statistical power •G*Power 3 is popular
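Those three settings (alpha = .05, power = .80, d = .5) pin down the group size. A normal-approximation sketch is below; the exact t-based answer from a tool like G*Power comes out slightly larger (about 64 per group), which is why the dedicated programs are recommended:

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sided two-sample comparison,
    via the standard normal-approximation sample-size formula."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # about 1.96
    z_beta = nd.inv_cdf(power)           # about 0.84
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect → 63 per group (normal approximation)
```

The formula also makes the cost of small effects vivid: dropping d from .5 to .2 pushes the requirement from the low 60s to nearly 400 per group.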

What is statistical Sig?

•Significance @ .05 means there is no more than a 5% chance that the findings could have occurred due to chance or error •Significance @ .01 means there is no more than a 1% chance that the findings could have occurred due to chance or error •Note: clinical significance is not statistical significance •Clinical significance is like clinical judgement - you decide

What is a quasi-experimental design?

•Some aspects of the study are not or cannot be controlled •Random assignment is not possible •Random assignment not possible for all groups (maybe the control group is not controlled) •True experiments and quasi-experiments refer primarily to studies where the IV is manipulated by the investigator •But much clinical research focuses on variables that are in some way controlled by nature or outside the experimenter's control

What is Matching in research?

•Sometimes one does not want to leave the equivalence of the groups to chance -One may want assurance that the two or more groups possess equivalent standing on some characteristic •*Matching is grouping participants together on the basis of their similarity on a particular characteristic or set of characteristics* •Various ways to do this •Match based on identical scores on some test that assesses the grouping variable (IQ, symptom score, personality characteristic[s]) •More common is to rank order participants based on a symptom score (or whatever score) and then sort into groups •If it's two groups, then the top two scorers are sent to the two groups, the next two scorers follow, and so forth •This can get more complicated
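The rank-order procedure above can be sketched as follows. The participant IDs and symptom scores are invented for illustration; each consecutive block of similarly-ranked scorers is split at random across the groups:

```python
import random

def matched_groups(scores: dict, n_groups: int = 2, seed: int = 0):
    """Rank participants by the matching score, then randomly deal each
    consecutive block of n_groups participants one into each group."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    ranked = sorted(scores, key=scores.get, reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i in range(0, len(ranked), n_groups):
        block = ranked[i:i + n_groups]
        rng.shuffle(block)  # randomize which group gets which block member
        for group, person in zip(groups, block):
            group.append(person)
    return groups

scores = {"P1": 32, "P2": 28, "P3": 27, "P4": 25, "P5": 19, "P6": 18}
print(matched_groups(scores))
```

Because assignment within each matched block is still random, this keeps the benefits of random assignment while guaranteeing the groups are balanced on the matching score.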

What are single case designs?

•Sometimes referred to as: •N=1 •Time Series •Single Subject Designs •Allows the clinician to track progress of individual participants/clients --Lack of treatment effect is readily noted --Group designs don't really track individuals --Some would argue that N=1 studies do a better job of measuring change •Can tailor treatment to meet needs of client/situation •N=1 can be cheaper, more efficient and... no complicated stats •Limitations •Lack of controls •Not easy to generalize to the population - the individual tends to be the focus/more clinical focus

More Self-report issues

•Sometimes the person's ratings are about bias and distortion and not the construct of interest •Acquiescence - tendency to respond affirmatively •Naysaying - tendency to disagree or deny characteristics •Socially desirable responses - tendency to place oneself in a positive light •End aversion bias - avoid extreme responses •All of these biases/distortions systematically shape the person's responses •This is not random error •It is systematic error that goofs with the psychometrics of the measure

What is Observational Research?

•Sometimes true experimental designs are not possible •In much of clinical research the variables are not directly manipulated by the researcher - but are allowed to play out or are manipulated by nature •This research is sometimes referred to as "observational" •Meaning that the researcher is observing the effects of the variable rather than directly manipulating it •Same goals regarding reducing threats to internal, external, construct and data-evaluation validity •But without the advantage of controlling and/or manipulating the conditions •Experimental research is highly regarded in psychology •But observational research can identify relations and findings that could not be evaluated experimentally •Other disciplines are more inclined towards observational studies •Kazdin notes epidemiology and public health where intact groups are more routinely studied •Bob asks: are we being snobbish or just fussy? •Kazdin goes on to note that many well-regarded scientific perspectives rely on observational research (astronomy, seismology, meteorology, etc.) •And in practitioner-related psychology observational research plays an important role

Stage three and four of FG

•Stage three: Facilitation -Preparation -Pre-session prep --Who is shy? Get to know the group •Actual facilitation -Participant introductions -Note taker's or observers' introductions -Permission to record -Discuss confidentiality - limits of confidentiality in this environment - be honest --Any other rules (one person talks at a time, cell phones off, etc.) --Read through the script •Stage four: Analysis -Start right after the group to ensure good recall -For each question: --Look for themes and "big ideas" --What were the participants' non-verbal behaviors? •Prepare report

Basics of Inferential Statistics

•Standard error of the mean (remember from Psychometrics) •It's the standard deviation of a set of sample means, all drawn from the same population •The standard error of the mean tells us what we can expect if we draw a mean from the population •We can also determine what mean scores would be considered highly unusual •Example of a population of people with GAD •Draw a sample and calculate the mean = 55 on the GAD scale •Treat the sample - after treatment the mean is 45 on the GAD scale •Is that 10-point change in the GAD scale sufficient for us to conclude "statistical sig."? •Let's wait for Bob to click to the next slide so we can see if there is statistical sig.! •Oh no, Bob is dragging his feet - is it a power trip, a desire for attention?
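The arithmetic behind the slide's GAD example can be sketched in a few lines of Python. The pre-treatment scores below are invented numbers chosen so the sample mean comes out at 55; the slide does not give real data:

```python
import math
import statistics

# Hypothetical pre-treatment GAD scores (invented so the sample mean is 55)
pre_scores = [50, 58, 54, 60, 52, 56, 55, 57, 53, 55]

mean = statistics.mean(pre_scores)             # sample mean = 55.0
sd = statistics.stdev(pre_scores)              # sample standard deviation
sem = sd / math.sqrt(len(pre_scores))          # standard error of the mean

# How unusual would a post-treatment mean of 45 be if the population mean is 55?
z = (45 - mean) / sem                          # far beyond the usual ±1.96 cutoff
```

With this made-up data the 10-point drop sits many standard errors below the original mean, which is exactly the kind of "highly unusual" result the slide is gesturing at.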

What are multiple-baseline designs?

•Step one - gather baseline data on multiple concerns •Step two - intervene on one concern while continuing baseline on other concerns •Step three - once the first intervention seems to be working, it is maintained and an intervention is now introduced for another concern •Etc. •Another *wrinkle on multiple-baseline designs - multiple people:* •Maybe it's one problem behavior - but multiple people presenting with that behavior •Step one - collect baseline across all participants •Step two - intervene with one participant while continuing baseline on others •Step three - once the intervention shows promise, maintain that intervention and begin intervention with the next participant (still holding others at baseline, etc.) •This model helps demonstrate evidence that it's the intervention, and not other factors, that is shaping the behavior of concern

What is a Multiple-Treatment Interference (Threat to External Validity?)

•Studies sometimes use more than one experimental condition •Does exposure and performance on one condition influence performance on other conditions? •Example: being asked to evaluate multiple stimulus materials - maybe vignettes, or faces, etc. Does responding to one stimulus impact how someone might respond to the next one?

Developing the Research idea CONT

•Studying Subtypes: An overall group that has been studied is evaluated to identify critical distinctions or subtypes •Can one distinguish in a meaningful way those individuals who possess some characteristic (i.e., depression, high achievement, exceptional musical abilities, etc.)? •Questions Stimulated by Prior Research: Addressing a question raised by a previous study. Building upon previous research. •Studies that address an issue raised by previous research but not yet examined. •Extensions of Prior Work to New Populations, Problems and Outcomes: Study whether a relation affects other areas of functioning or domains not originally studied. •If a psychotherapy alters symptoms in adults, does it also improve marital relations? Or, does a treatment for depression also work with eating disorders? •Extensions of Concepts or Theory to New Problems: Explore whether a construct can be extended to areas where it has not yet been applied (addiction, dependence) •Do addictive behaviors extend beyond the usual use of the term?

Psychometric Characteristics of Reliability

•Test-retest •Alternate forms •Internal consistency •Cronbach's alpha, split halves, etc. •Interrater
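As a concrete illustration of the internal-consistency idea, here is a minimal sketch of Cronbach's alpha in pure Python. The response matrix is invented (rows are respondents, columns are items):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                                   # number of items
    def var(xs):                                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent responding (every item moves together) -> alpha = 1.0
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
```

Real response data would of course produce an alpha below 1.0; the point of the toy matrix is only to show what the formula rewards.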

Validity's new model - evidence based on:

•Test content - response processes - internal structure - relations to other variables - consequences of testing

What is a case study?

•The "case" can be an individual - a community - a system - an event, etc. •Focus on a single case - referred to as a within-site case study •Focus on multiple cases (used for comparisons, etc.) - referred to as a multi-site case study •Nested design - start by looking at a case (a group) and then focus more intently on a small sample of individuals in the group

What is a Mechanism?

•The basis for the effect •Processes or events that are responsible for the change •The reasons why change occurred •How the change came about

What is the effect size?

•The magnitude of difference between the conditions •Expressed in standard deviation units - example: .5 is ½ of one standard deviation •Standard Deviation: the measure of variability of scores around a mean.
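A hedged sketch of the computation (the standardized mean difference, commonly called Cohen's d; the two tiny groups below are made-up numbers chosen for round results):

```python
import math

def cohens_d(g1, g2):
    """Difference between group means, expressed in pooled-standard-deviation units."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):                                   # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n1, n2 = len(g1), len(g2)
    pooled_sd = math.sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))
    return (mean(g2) - mean(g1)) / pooled_sd

# Means 10 vs. 12 with pooled SD 2 -> the groups differ by one full standard deviation
d = cohens_d([8, 10, 12], [10, 12, 14])
```

So the ".5" in the card means the group means sit half of one pooled standard deviation apart.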

What is Beta?

•The probability of accepting the null hypothesis when it is false (Type II Error) •How do you fix this (think about reasons why one would not be able to achieve a significant finding when one believes the significance exists)

What is Power?

•The probability of rejecting the null when it is false or the likelihood of finding differences when the conditions are really different.
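Power (and its complement, beta) can be made concrete with a small simulation. This is a rough sketch, not a formal power analysis: it assumes a one-sample z-style test with unit variance and a true effect of 0.5 SD:

```python
import random
import statistics

random.seed(0)

def simulated_power(n, true_effect, trials=2000):
    """Fraction of simulated studies that detect a real effect (|z| > 1.96)."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(true_effect, 1) for _ in range(n)]
        sem = statistics.stdev(sample) / n ** 0.5
        if abs(statistics.mean(sample) / sem) > 1.96:
            hits += 1
    return hits / trials

small_n = simulated_power(10, 0.5)     # underpowered: often misses the real effect
large_n = simulated_power(100, 0.5)    # well powered: nearly always detects it
```

Since beta is simply 1 - power, the same simulation answers the Type II question: the larger sample shrinks beta dramatically for the same true effect.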

Why is Theory important in Research?

•The researcher's theory is a critical part of the research process •Theory: a conceptualization of the phenomenon of interest •Can include views about the nature, antecedents, causes, correlates and consequences of a particular characteristic or aspect of functioning, as well as how various constructs relate to each other (nomological network) •Terms to describe this conceptualization include: approach, conceptual view/model, theoretical framework, working model •These terms are used with much variability and sometimes can be fuzzy (why fuzzy?) •Theory is an explanation of what is going on or what is happening, why and how variables are related, what is happening to connect the variables, etc. •Historically, psychoanalytic theory was popular in explaining a number of psychological constructs •It was broad •Now, more circumscribed theoretical views are popular (why? OK, there is a list below, but generally - why?) •Conceptual views focus on some narrow facet of functioning rather than a grand theory

What is the Moderated Mediation?

•The strength or direction of the relation of a mediator depends on some other variable -that other variable is the moderator. •Text hint: A "moderator" is the host of a quiz show; "mediator" helps with disputes

Guiding Questions

•The type of question one asks can influence the value one places on the research. •A study is likely to be interesting or found to be important based on the extent to which it addresses the following: •Does the question guiding the study represent something that is puzzling or confusing? •It may help solve the puzzle •Does the study represent a challenge in some way? •Finding a way to measure something that is elusive (maybe using a newer theory to apply to a phenomenon that has been elusive) •Could the research findings alter one's thinking on a topic? •Wow, we thought "this", but now maybe we should be thinking "that" •Can the research begin a new line of work? •Does the research advance or require a new explanation (theory)? •When the work on "Common Factors" in psychotherapy emerged, it really altered Bob's understanding of psychotherapy processes (in a good way) •A rationale that something has never been done before is probably not sufficient •Your role as the researcher is vital in making the case for the importance of the study •Advisors and others will ask many questions and maybe "put you on the spot" •Embrace this and allow it to guide you as you develop your idea(s)

Investigating how two or more variables relate to each other

•There are many ways variables can relate to each other, and identifying them and the key concepts they reflect can serve as bases for doing a study. •All sorts of relations may be considered •And you can examine how or why there is a relation •Through what processes are the variables related? •Correlations •Maybe a simple correlation between two variables •Or a more complex relation among multiple variables •Once you identify the relation, the question is "why is there a relation?" - what is going on? •Are there other moderating variables influencing the relation? •So, correlation may be the beginning of a more in-depth exploration of the variables •Text example of handedness (in a sample of those with schizophrenia, 40% were left-handed - what is going on?) •Premature birth? Lefties are a bit more likely to be premature •Time of year of birth?

Key Considerations in Comparison Group Selection

•There are no formal rules for what groups to use, but Kazdin recommends these guidelines •Helpful to ask one important question: "What do I need to control in this study?" •Consider all the threats to internal and external validity and then ponder •Ask yourself - what will your results likely look like? •Then ask, what other interpretations are possible, given those results? •Be careful, but at times you can rely on previous literature •If that literature has already dismissed certain threats to internal and external validity, then you may be able to avoid them •But make sure there is a consensus that not using comparisons is okey-dokey •Selection of control/comparison groups may be dictated and also greatly limited by practical and ethical considerations •Issues like attracting a sufficient number of participants for multiple groups that present with the same issues •Losing - or knowing you will lose - participants during the study •Ethical issues such as withholding treatment (participant is depressed and suicidal and is randomly assigned to a group with no treatment)

What are Yoked Control groups?

•There may be times when different participants are exposed to different procedures or events during the course of a study of an intervention --Such differences may be systematic - and therefore not random or unsystematic --Such differences can confound the study ---Maybe a variable that was not associated with a research hypothesis but was associated with the intervention emerges that explains the differences between groups ---One wants to anticipate such variables and try to control them •The yoked control group works to ensure that groups are equal with respect to potentially important but conceptually and procedurally irrelevant factors that might account for group differences --Yoking may require a control group of its own --But yoking can be incorporated into another control group

Time frame for research

•This varies - but most researchers try to collect info in a brief period of time •One meeting per participant (if there is interaction) over a condensed period of time •Longitudinal studies are peachy, but you don't want to take years to complete a dissertation, and faculty don't want to wait 20 years for promotion and tenure

How do you manage threats to internal validity?

•Determine the relevance of a threat to internal validity in order to manage it •If the threat is relevant, it needs to be considered •Think "parsimony" •Threats to internal validity are the major category of competing explanations for the conclusion that the intervention was actually responsible for the change •If these threats are not ruled out, they can really muddy the waters.

Triangulation in qualitative research

•Triangulation can be achieved multiple ways -Use multiple sources of data (e.g., interviews, questionnaires, etc.) --Mixed methods combining qualitative and quantitative methods -Conclusions can be examined by others -Participants may be asked to reflect on the data results -Review other research that confirms findings -Some express concern that qualitative researchers may have an unchecked role in interpreting their findings -Remember that researchers may actively participate with participants --Leading to conscious or unconscious bias --Multiple strategies may reduce the concern that the findings are just the researcher's perspective --Investigators may make explicit their own views - try to be transparent --May use an iterative process (i.e., repetitive, checking) where researchers consult with others

What are Quasi-Experimental Designs?

•True experiments are well controlled to eliminate or make very implausible the threats to internal validity •The main feature is the ability to assign participants randomly to the various conditions •There are many situations where the researcher does not possess control over participant assignment but still wants to evaluate an intervention •Quasi-experimental designs are designs where the researcher cannot exert the control required of true experiments •Maybe the groups already exist - comparing one group that already exists to another •If the groups are already formed and may differ in some way before the intervention, they are referred to as "nonequivalent control group designs" •Most common is the pretest-posttest version, which looks like this: nonR A₁ X A₂ / nonR A₁ A₂ •(Above) nonrandomly assigned participants (maybe in different clinic sites) are compared •The strength of this study depends on the similarity of the experimental and control groups •It may be possible to match folks from the two clinics on some sort of variable (if depression is being examined, maybe the level of depression, etc.)
•Note: you can also use the design above, but without the pretest •Examining the effects of a smoke-free environment •We now have many ordinances that control smoking, but before that was the trend, having communities give up smoking in public places was a challenge •So researchers identified a community that had just implemented an ordinance forbidding smoking in public places (Pueblo, CO) and compared the people living there to people from a few other communities where smoking was permitted •Looked at hospitalization rates for acute myocardial infarction over a three-year period •Findings: a marked decrease in hospitalization rates was recorded for Pueblo, with no changes noted in the other two communities •While this is a quasi-experimental design with no control over participant assignment, it was still considered a significant finding •Nonequivalent control group designs are very common

More on Error

•Two types of error •Type I error •A significant difference is reported when it does not really exist •Bob thinks of this as "type I" error because it's the worst kind •It makes all science look stupid •What causes this: $%#& happens •Type II error •A significant difference exists but the study does not reveal that level of difference •What causes this? •Let's talk about this question •Kazdin mentions "Big Data" •Super duper large samples where even small differences can be identified as "sig. different" •Pink Floyd would say: "careful with that axe, Eugene"
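A small simulation makes the Type I story concrete: when the null hypothesis is actually true, roughly 5% of studies will still report "significance" at alpha = .05. This is a sketch that treats the test statistic as approximately normal (the 1.96 cutoff):

```python
import random
import statistics

random.seed(1)

trials = 2000
false_positives = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(25)]   # null is TRUE: population mean is 0
    sem = statistics.stdev(sample) / 25 ** 0.5
    if abs(statistics.mean(sample) / sem) > 1.96:      # yet the test says "significant"
        false_positives += 1

type_i_rate = false_positives / trials                 # hovers around 0.05
```

Run enough studies under a true null and about one in twenty will "find" an effect - which is why a single significant result is never the whole story.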

Underrepresented Groups (threat to External Validity)

•Underrepresented Groups -Limited inclusion of women and minorities -Ethnicity -Cross-cultural differences •Additional Information on Underrepresented Groups •If you believe that the simple/pragmatic view (parsimony) is incorrect, demonstrate it •Interesting way to generate research ideas -Studies can also support the generalization of the findings

Key steps to starting the research process

•Various tasks associated with the proposed study emerge over time and in a sequence •One must have an idea for a study •Always be thinking about ethical considerations •What will the participants experience - are there ways to improve the experience (does it need to be so long?) •Really long surveys can scare people away - but sometimes the info is needed - make sure the info is really needed •Is the study a waste of time (sorry to be blunt)? If so, is it ethical to burden participants with it? •Many issues need to be worked out with your advisor before you can pursue a working draft of the procedures, instruments, etc. Don't put the cart before the horse •One must be able to make the idea testable - that is the hallmark of a scientific hypothesis! •Once expressed in a testable form, it must be an idea that can be supported or refuted by the results of the study •You can ask: what would be the likely finding that would be consistent with the predictions you are making? •Can you anticipate what results might occur that would challenge your theory? •Remember Popper and falsifiability •It is important to consider what it would take to provide evidence inconsistent with your view

When and how do Threats to Internal Validity emerge (2 of 2)

•Well-Designed Study with Influences Hard to Control during the Study •Uncontrollable circumstances •Subjects dropping out •Well-Designed Study but the Results Obscure Drawing Conclusions •Example •2 groups (pre-post) with different interventions - both show improvement - it could be maturation, testing, etc..... •Pull your hair out??

What are focus groups?

•Why FG? -Insights into how people think -Goes beyond surveys to capture "deeper information" -More economical than individual interviews •Stage One: Define purpose •Stage Two: Develop methodology to address purpose (research questions) •Step one of stage two is conceptualization •Map out a sampling technique based on characteristics that suit the purpose •Size? 7-12 participants is common •Have about five open-ended questions ready to go (requires brainstorming, prioritization, etc.) •Have prompts ready - prompts are used to promote discussion when the initial question bombs •Also develop probes •Step two of stage two is logistics -Organize a plan or approach - includes a detailed schedule of steps and events (plan weeks in advance) -Select facilitator -Develop a script or guide for the facilitator, used to explain the purpose of the FG to participants

What does Construct validity measure?

•Why the differences occur in a study •Is it a function of the construct or something else? •What was responsible for the change? •This may seem obvious..... •Construct validity is a more nuanced concept than internal validity

Multiple-treatment Designs

•With multiple-treatment designs, each of the different conditions (treatments) under investigation is presented to each participant •It is most often found in circumstances where one wants to evaluate the effects of different interventions •Ex: psychotherapy vs. medication •Think of the different treatments as different conditions within a study •Balancing or "counterbalancing" is often used so that the treatment effect is not confounded by the position in which it appeared to the participants (think of it as trying to avoid carry-over effects or learning effects) •Crossover design •Example of two groups of participants constructed through random assignment •The groups differ only in the order in which they receive the two treatments (it looks more complicated than it is): R A₁ X₁ A₂ X₂ A₃ / R A₁ X₂ A₂ X₁ A₃ •This procedure may be used to compare different medications for the same disorder if one allows for a "washout" period between medications to avoid lingering effects of the first medication that is provided
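The random assignment to the two orders (the two R rows in the crossover diagram) can be sketched as follows. The participant IDs and treatment labels are hypothetical:

```python
import random

random.seed(42)

treatments = ["psychotherapy", "medication"]        # hypothetical conditions
orders = [treatments, list(reversed(treatments))]   # A-then-B vs. B-then-A

participants = [f"P{i}" for i in range(1, 9)]
random.shuffle(participants)                        # the random assignment (R)

# Alternating down the shuffled list counterbalances order across the sample:
# half the participants get each order, so order effects cancel across groups
assignment = {p: orders[i % 2] for i, p in enumerate(participants)}
```

Because the shuffle comes first, which participant lands in which order is random, while the alternation guarantees the two order groups stay the same size.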

Issues with Self-report measures

•Wording, format, and order of appearance of items can influence client responses •Not studied so much •But available research indicates that the order in which disorders are assessed influences client/participant responses •The number of symptoms •Level of impairment •Kazdin is concerned that we don't take these issues as seriously as we should •Self-report measures are sensitive to •Minor changes in wording •The format of the item •Context of the item •Examples •Views of life success vary based on the scale (e.g., 1 to 10 or -5 to +5): the -5 to +5 scale produces higher success ratings •Order of questions can influence marital satisfaction ratings •Climate as an issue varies based on "global warming" vs. "climate change" wording

What is a Birth-cohort design?

•a study that begins with a group that enters the study at birth and is followed for a period of time •Assessments obtained periodically and over a time frame •Kazdin example: a birth cohort study going on in New Zealand to understand the development of psychopathology and adjustment •N = 1,037 born in 1972-1973 •From age 3 to 15 assessed every two years, and then at 18, 21, 26, and most recently at age 32 ---Full-day assessments - physical, mental health interviews, etc. •Bob's favorite: the Kauai Longitudinal Study by Emmy Werner and associates identified 698 children born on the island of Kauai in 1955 and followed them through mid-life - examined factors that contribute to resiliency •Bob's other favorite: the New York Longitudinal Study by Thomas, Chess & Birch, started in 1956 and conducted over several decades, found nine temperament characteristics that seem to be fairly stable in spite of many different environmental factors

What are Nonrandomly assigned or nonequivalent control groups?

•are groups that might be added to a study, using participants who were not part of the original sample pool and not randomly assigned to the treatment group •Sometimes called "patched-up" control groups •Used to rule out rival hypotheses and decrease the plausibility of specific threats to internal validity •History •Maturation •Testing •Instrumentation •May be used when a no-treatment control group cannot be formed (often an issue in clinics/schools/communities) •Used the same way as a randomly assigned no-treatment group •May be a weak option - depends on how the group was formed •Example: Pueblo Heart Study (discussed earlier - quasi-experimental study) •A few communities were used as controls where there was no smoke-free ordinance - in these communities no change in heart attack rates was noted •But we can't be sure that the groups were equivalent to the treatment group (i.e., Pueblo, CO)

How are cohort and observational designs useful according to Kazdin?

•cohort designs and observational designs can be valuable when •There is an interest in varied outcomes for different groups •There is interest in prediction, classification and selection of cases •Identifying varying outcomes •Looking at different outcomes based on the groups selected •One group has some experience and is compared to a similar group that has not had that experience ---As a longitudinal study, this can have notable value - Kazdin's example of exposure to TV violence & aggressive behavior (looks like a Chi Square analysis)

Cross-sectional case-control designs

•examines factors that are associated with a particular characteristic of interest at a current point in time --Examples: growing up in bilingual homes vs. not, barriers to obtaining health care among various groups, whether depressed mothers interact differently with infants compared to those who are not depressed •Commonly used - see text for more details

What is Systematic Error?

•an example is a set of scores that are miscoded in the wrong direction •One of my doc students used a complicated scale where items had to be grouped according to subtests - some items were also reverse-scored •When I ran an analysis of the subtests' internal consistency, the findings were really different from the internal measures reported by the author •The student checked his entries and re-did the math •The student ended up rescoring and recoding everything and there was still a problem •The student contacted the scale's author and the author apologized - he had sent the student the procedures for an earlier version of the scale - fortunately, no one died

Case-control design (observational research)

•forming groups that vary on the characteristic of interest (here the term "case" refers to the individuals with the characteristic of interest) •Example: two groups - the researcher compares participants who show the characteristic with those who do not

Retrospective case-control design (observational design)?

•the goal is to draw inferences about some antecedent condition that has resulted in or is associated with the outcome •Identify a timeline between causes or antecedents (risk factors) and a subsequent outcome •Essentially evaluating past events to explain current findings (how one is now based on previous experience) •Example in text: lower rates of parent-reported breastfeeding associated with risk for ADHD - see text for details

What is a single-group cohort design?

•identifies subjects who meet a certain criterion and follows them over time (maybe some event that is expected to shape their development over time) •Assessment occurs at least twice and sample needs to be substantial •Examples: PTSD symptoms after a disaster

No-contact control group (No intervention is provided)

•no intervention is provided and this group is not aware that they are participating in the study - have no contact with the researchers •This could require deception - which is seriously frowned upon by IRBs •Sometimes the no-contact group receives a self-guided self-help program

What is Unsystematic error?

•random errors with no pattern •Think of a few simple entry errors that add variability to the data •How do you track for such errors?

What are nonspecific treatment or attention-placebo control groups?

•this group receives some nonspecific treatment that is not considered critical to the actual intervention but might appear credible to the participants - like a psychological *placebo* •No-treatment and wait-list control groups are primarily used to address threats to internal validity (e.g., history, maturation, repeated measures) •The nonspecific treatment group addresses the above issues too AND can also help control for construct validity concerns •Many variables may account for therapeutic change •Attending sessions, personal contact with a therapist that engenders hope, hearing a logical rationale for the origins of the problem, and participating in a treatment designed to address the concern - *think Common Factors* •For many treatments (even CBT, etc.) we may not be sure of the mechanism that is responsible for the treatment effect - it could be common factors, the intervention, or some combination •Nonspecific treatment serves as a placebo that might capture the common factors but not the intervention under investigation •Ethical issues: providing a treatment that is not well grounded in theory or empirical findings - and if a client really needs services, this type of treatment may not be defensible •Think of the placebo treatments provided to those who were suffering from HIV - the use of placebos is controversial •Placebos should not be used when another more credible treatment is available •A treatment-as-usual group may be more ethical

Treatment as usual group:

•this group receives the standard/routine treatment that would typically be provided by the clinic if not for the researcher's intervention •Some express concern about the name of this control group •Treatment as usual (TAU) can be described as "treatment du jour" •Some therapists believe that tailoring the treatment to the individual is essential •Kazdin makes the point that we really don't know what TAU is - this complicates things •Kazdin discusses a TAU situation where a variety of different approaches were used, compared to interpersonal tx for depression - both treatments were shown to improve depression symptoms - but the interpersonal tx clients demonstrated a larger magnitude of change and the progress continued after treatment

What is a multigroup cohort design?

•two or more groups are identified at the beginning of the study and followed over time to examine the outcomes of interest •Think of two groups that vary in exposure to some condition of interest •Soldiers returning from combat vs similar soldiers who were not in combat •Rutter et al., followed children with mild traumatic brain injury (control group were similar children without TBI) - see text for discussion of this study - relation between TBI and future psychological disorders - BUT as Kazdin notes we need to be careful to not assume that there is a cause-effect relationship

