Capstone Exam 1

External Validity

"Does this apply to other groups, in other settings, or in other geographical areas?" • Extent to which results of study can be generalized beyond the conditions of the experiment to other populations, settings, & circumstances

Define Internal Validity

"Is the change due to the program (intervention), rather than alternative variables?" • the extent to which an investigation rules out alternative explanations of the results. • Determine cause - effect; ensure no confounds explain effect

What is the goal of science?

*Explanation* of what things are, how they work, how they relate to other phenomena, how they emerge

Factors affecting validity?

*Test-taking factors* --> Anxiety, speed, understanding instructions *Factors related to the criterion* --> (E.g., School grades affected by study habits- Are grades indicative of intelligence or study-skills?)

CH. 10: SELECTING MEASURES

. . .

CH. 11: TYPES OF MEASURES & THEIR USE

. . .

Ch. 1: Intro to Clinical Research

. . .

Kazdin Ch. 16 + APA (2017)- ETHICAL ISSUES IN RESEARCH

. . .

Kazdin Ch. 2-3: VALIDITY & RELIABILITY

. . .

§ 1.3: Methodology

. . .

§ 16.4 Critical Issues in Research

. . .

§ 16.6: Intervention Research Issues

. . .

§ 17.3: Critical Issues & Lapses in Scientific Integrity

. . .

§ 1.4: Way of Thinking & Problem Solving

. . . Analyze some of the key concepts that guide scientific thinking and problem solving . . .

Plurality of concepts. . .

. . . should not be posited without necessity; we ought not to add more concepts if they are not needed to explain a given phenomenon.

We generate explanations to draw implications. Those implications are . . .

. . .Hypotheses that elaborate what might be going on & help us move forward

Control Groups & Tx of Questionable Efficacy: Providing a tx designed to be weak or control designed to be ineffective raises issues of:

1. The client's problem may not improve or may even worsen without effective tx 2. Clients may lose faith in the process of psychological treatment in general; patients expect effective tx & change; if the given tx fails to achieve change, they may be discouraged from seeking help in the future.

Threats to External Validity (Table 2.3)

8 threats:
• *Sample Characteristics*: Extent to which the results can be extended to subjects/clients whose characteristics may differ from those included in the investigation.
• *Narrow Stimulus Sampling*: Extent to which the results might be limited to the restricted range of stimuli or other features (e.g., experimenters) used in the experiment.
• *Reactivity of Experimental Arrangements*: Possibility that subjects may be influenced by their awareness that they're participating in an investigation or a special program; effects may not extend to situations in which individuals are unaware of the arrangement.
• *Reactivity of Assessment*: Extent to which subjects are aware that their behavior is being assessed & that this awareness may influence how they respond; persons who are aware of assessment may respond differently.
• *Test Sensitization*: Measurement may sensitize subjects to the experimental manipulation so that they are more or less responsive than they would have been without an initial assessment.
• *Multiple-Tx Interference*: When the same subjects are exposed to more than one tx, conclusions about a particular tx may be restricted; results may apply only to other people who experienced both treatments in the same way or order.
• *Novelty Effects*: Possibility that the effects of an experimental intervention depend upon its innovativeness or novelty in the situation.
• *Generality across Measures, Settings, & Time*: Extent to which the results extend to measures, settings, or assessment occasions other than those included in the study; findings that would not transfer to other settings pose a threat to external validity.

Table 3.1: Threats to Construct Validity

ASED
• Attention & Contact Accorded the Client: Extent to which an increase of attention to the client/participant associated with the intervention could plausibly explain the effects attributed to the intervention.
• Single Operations & Narrow Stimulus Sampling: Sometimes a single set of stimuli, a single investigator, or another facet of the study that the investigator considers irrelevant may contribute to the impact of the experimental manipulation.
• Experimenter Expectancies: Unintentional effects the experimenter may have that influence the subject's responses in the experiment (tone of voice, facial expressions, delivery of instructions).
• Demand Characteristics: Cues of the experimental situation that are ancillary to what is being studied but may provide info that exerts direct influence on the results. The cues are incidental but "pull," promote, or prompt bx in subjects that could be mistaken for the impact of the IV of interest.

Threats to Internal Validity (Table 2.2)

HMT-ISSAD (i-so-sad)
• History: Any event outside tx occurring at the time of the experiment that could influence the results or account for the pattern of data (ex. family crises, change in job, loss of partner, pandemic).
• Maturation: Any △ over time that may result from processes within the subject (growing older, stronger, healthier, more tired/bored).
• Testing: Any △ that may be due to repeated assessment (ex. familiarity, fatigue).
• Instrumentation: Any △ in the measuring instrument or assessment procedure over time (ex. autism diagnostic criteria have changed over time).
• Statistical Regression: Any △ from one assessment occasion to another that might be due to a reversion of scores toward the mean, especially when a patient initially scores at an extreme.
• Selection Biases: Systematic differences between groups before any experimental manipulation or intervention; differences between groups may thus be due to pre-existing differences.
• Attrition: Loss of subjects over the course of an experiment that can change the composition of groups in a way that leads to selection biases.
• Diffusion of Tx: Tx is inadvertently provided at times it shouldn't be, or to participants who should not yet receive it, so the control group is affected by the tx.

Table 3.4: Threats to Data-Evaluation Validity

LSV-UREMM
• LOW STATISTICAL POWER: Power is the likelihood of demonstrating an effect or group difference when in fact there is a true effect in the world; low power means a low probability of rejecting the null when that hypothesis is false. Power is a function of the significance criterion (alpha: probability of rejecting the null when the null hypothesis is true), sample size (n), & the magnitude of group differences (effect size).
• SUBJECT HETEROGENEITY: The more heterogeneous the sample, the less likely differences are to be detected.
• VARIABILITY IN PROCEDURES: How the study is executed can make a difference in whether a true effect is detected.
• UNRELIABILITY OF THE MEASURES: Error in measurement procedures introduces variability that can obscure the results of a study; unreliable measures increase error in assessment & decrease the likelihood of showing group differences.
• RESTRICTED RANGE OF THE MEASURES: A measure may have a very limited range (total score from high to low), which may interfere with showing group differences.
• ERRORS IN DATA RECORDING, ANALYSIS, & REPORTING: Data are used in a selective way where only some measures or analyses are reported; this can mislead intentionally or accidentally.
• MULTIPLE COMPARISONS & ERROR RATES: When multiple statistical tests are completed within the same investigation, the likelihood of a "chance" finding increases; false conclusions become more likely unless accommodation is made for the # of tests.
• MISREADING OR MISINTERPRETING THE DATA ANALYSIS: The conclusions reached from the data analysis are not ones to which the investigator is entitled; either the proper statistic was not run or the conclusion goes beyond the statistical test.
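The power and multiple-comparison points above can be made concrete with a small simulation. This is a hedged sketch: the effect size of 0.5 SD, the group sizes, and the 2,000 simulated studies are arbitrary illustration values, not from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def empirical_power(n, effect_size, alpha=0.05, n_sims=2000):
    """Proportion of simulated two-group studies that reject the null
    when a true group difference (effect_size, in SD units) exists."""
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect_size, 1.0, n)
        if stats.ttest_ind(control, treated).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

# Power rises with sample size for the same true effect.
power_small_n = empirical_power(n=15, effect_size=0.5)
power_large_n = empirical_power(n=100, effect_size=0.5)

# Multiple comparisons: with k independent tests each at level alpha,
# P(at least one "chance" finding) = 1 - (1 - alpha)**k.
k, alpha = 10, 0.05
familywise_error = 1 - (1 - alpha) ** k   # far above 0.05
bonferroni_alpha = alpha / k              # per-test level that protects the family
```

In practice a formal power analysis would use an analytic tool (e.g., G*Power) rather than simulation; the simulation just makes the definitions tangible.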

Who gets to be an author?

One or more of the following: • Develops the design • Writes or prepares the manuscript • Integrates or brings together theoretical perspectives • Develops novel conceptual issues • Designs or develops the measures • Makes key decisions about data analyses • Interprets the results

Due to heuristics . . .

Our views, perceptions, & conclusions systematically depart from what the data in the world would show if bias were controlled

Define Informed Consent

Participant is informed about the project with its procedures & its implications, & agrees to participate.

Define Methodology

Practices, principles, & procedures that allow us to overcome bias in research

Define Invasion of Privacy

Seeking or obtaining information of a personal nature that intrudes upon what individuals view as private.

How to Select a Topic: Common Student Mistakes (Sternberg, 2005)

Selecting a topic that is: • Not interesting to them • Too easy or safe • Too difficult • Without supporting literature • Too broad

TYPES OF RELIABILITY

TAII • Test-Retest Reliability: Stability of test scores over time; correlation of scores from one administration of the test with scores on the same instrument after a period of time • Alternative-form Reliability: Correlation between different forms of the same measure when the items of the two forms are considered to represent the same population of items. • Internal Consistency: Degree of consistency or homogeneity of the items within a scale. • Inter-rater Reliability: Extent to which different assessors/observers agree on the scores.
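Two of these reliability types reduce to simple computations on a score matrix. A minimal Python sketch: the 5×3 score matrix is made-up toy data, and Cronbach's alpha is the standard internal-consistency coefficient (the card above describes the concept without naming it).

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 3 items that mostly agree across 5 respondents (hypothetical).
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [1, 2, 1],
    [3, 3, 4],
])
alpha = cronbach_alpha(scores)  # high: items are internally consistent

# Test-retest reliability is just the correlation of time-1 and time-2 scores.
time1 = scores.sum(axis=1)
time2 = time1 + np.array([1, -1, 0, 1, 0])  # slightly shifted retest (toy)
retest_r = np.corrcoef(time1, time2)[0, 1]
```

Inter-rater reliability follows the same pattern: correlate (or compute agreement between) the two raters' score columns.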

Factors related to reliability?

TTV-VG • Test length → More items are better • Test-retest interval → Shorter duration is better • Variability of scores → Heterogeneous is better • Variation in the testing situation → Less is better • Guessing
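"More items are better" can be quantified with the Spearman-Brown prophecy formula, which predicts reliability after lengthening a test by a factor of k (a standard psychometric formula, though not named in the card above; the 0.70 starting reliability is an arbitrary example value).

```python
def spearman_brown(r, k):
    """Predicted reliability when a test with current reliability r
    is lengthened by a factor of k (assuming comparable new items)."""
    return k * r / (1 + (k - 1) * r)

doubled = spearman_brown(0.70, 2)    # doubling the test raises reliability
halved = spearman_brown(0.70, 0.5)   # halving it lowers reliability
```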

Confirmatory Bias

We select, seek out, & remember "evidence" in the world that is consistent with our view. We don't weigh all experience or the extent to which some things are true based on realities; we pluck out the supportive features that confirm our view.

"Allocation of credit" refers to. . .

Whom to list as authors, the order they appear, the relation b/t junior & senior scientists or faculty & students, & how the different roles & contributions affect authorship

types of validity

CCC-PCI-FCD (construct, content, concurrent; predictive, criterion, incremental; face, convergent, discriminant)
• Construct Validity: A broad concept that refers to the extent to which the measure reflects the construct of interest.
• Content Validity: Evidence that the content of the items reflects the construct or domain of interest; the relation of the items to the concept underlying the measure.
• Concurrent Validity: Correlation of a measure with performance on another measure or criterion at the same point in time.
• Predictive Validity: Correlation of a measure w/ performance on another measure or criterion at some point in the future.
• Criterion Validity: Correlation of a measure w/ some other criterion.
• Incremental Validity: Whether a new measure, or a measure of a new construct, adds to an existing measure or set of measures with regard to some outcome.
• Face Validity: Extent to which a measure appears to assess the construct of interest.
• Convergent Validity: Extent to which two measures assessing similar or related constructs correlate with each other; a measure should correlate with other measures it is expected to correlate with, based on the overlap or relation of the constructs.
• Discriminant Validity: Correlation between measures expected not to correlate with each other or that assess unrelated constructs; a measure should show little or no correlation with measures with which it is not expected to correlate.

α =

α: probability of rejecting the null when the null hypothesis is true

Table 10.2: Response Sets that can Influence Responding When subjects are Aware of being Assessed (Same as Table 11.3)

• Acquiescence: Tendency for individuals to respond affirmatively (true or yes) to questionnaire items • Nay-saying: Tendency for individuals to deny characteristics • Socially Desirable Responding: Tendency to respond to items in such a way as to place oneself in a positive (socially desirable) light • End Aversion Bias: Tendency to avoid extreme scores on an item even if those extreme scores accurately reflect the characteristic

Justify the Need for Science

• Acquiring knowledge (consistent principles & practices; goal: describe, understand, explain, intervene when needed) • Identify complex relations (impossible to discern these relations from casual observation) • We need Extensive Data Gathering to draw conclusions (large representative sample to provide info in trustworthy+transparent+replicable way) • Surmount Limitations of Human Perception

List the sources of Protection of participants' privacy

• Anonymity (ensure that the identity & performance of participants are not revealed; ex. don't collect names/assign code #s) • Confidentiality (info will not be disclosed to a third party without the awareness & consent of the participant)

Choosing Your Research Question

• Ask yourself "so what?" • Be able to tell a good story • Strike a balance between too narrow and too broad (see page 23)

List the Elements of Informed Consent

• Competence (individual's ability to make a well-reasoned decision & to give consent meaningfully. Are there any characteristics of particips or the situation that would interfere with the ability to make a thoughtful, deliberative, & informed decision?) • Knowledge (understanding the nature of the experiment, the alternatives available, & the potential risks & benefits. Is there sufficient info provided to the subject, & can the subject process, utilize, & draw on that info?) • Volition (agreement to participate on the part of the subject that is provided willingly & free from constraint or duress. Are there pressures, constraints, or implicit/explicit contingencies that coerce subjects to serve in the study? + Subjects are free to revoke consent at any time.)

Define Construct + Construct Validity

• Construct: underlying concept that is considered to be the basis for or reason that the experimental manipulation had an effect • Construct Validity asks, "Why did the intervention produce this change? Is the reason for the relation between the tx & △ due to the construct given by the investigator?"

How to choose a measure?

• Ease of use and access • Sensitivity • Multicultural relevance • Reactivity (is assessment triggering to participants?) • Use multiple measures • Brief or short forms • Who is taking it and administering it? • Standardized: use an existing measure or develop a new one - Pros and cons: existing measures have good reliability & validity, but they can be expensive, some require a special license to administer, & you can only use them exactly the way you're told to

Define Deception

• Entirely Misrepresenting the nature of experiment (active), or • being ambiguous about the experiment or not specifying all or many important details (passive, omission)

Fraud vs. Error

• Error = honest mistakes that may occur in some facet of the study or its presentation • Fraud = explicit efforts to deceive & misrepresent

Landrum's Undergraduate Writing in Psychology Ch. 2: Finding the Thread of your Story: How to Select a Topic?

• Feasibility • Personal and Vicarious Observation • Expand on Previous Ideas • Practical Problems • What truly interests you?

Differentiate *findings* from *conclusions*

• Findings = results • Conclusions = Interpretation; explanation of the basis of the finding; this is the interpretative & theory part

TABLE 11.1: DIMENSIONS OF PSYCHOLOGICAL MEASURES

• GLOBAL/SPECIFIC → Measures vary in the extent to which they assess narrowly defined vs. broad characteristics of functioning. Measures of overall feelings, stress, & quality of life are global; mood state & emotion regulation are more specific.
• PUBLICLY OBSERVABLE INFO/PRIVATE EVENT → Observable bx like cigarette smoking & social interaction vs. private events: headaches, thoughts, urges, obsessions.
• STABLE/TRANSIENT CHARACTERISTICS → Long-standing aspects of functioning & trait-like characteristics (personality, self-control) vs. short-lived/episodic characteristics (mood immediately after being provoked).
• DIRECT/INDIRECT → Direct: purposes of the measure can be discerned by the client; indirect: measures that obscure exactly what is being measured from the client.
• BREADTH OF DOMAINS SAMPLED → Some measures assess a single characteristic (e.g. introversion, anxiety, risk-taking, need for social approval) while others aim at revealing many diff characteristics of personality or psychopathology (e.g. several personality traits or diff types of symptoms within a single measure).
• FORMAT → Measures vary in the methods thru which subjects can provide their replies (multiple choice, t/f, rating scales, narrative reports).
• AUTOMATED & EQUIPMENT-BASED → Measures that rely on special equipment or activities that capture key processes or activity usually outside the awareness of the subjects.

What are the critical questions to conducting a study?

• How do I select a research question? • What participants should I use? • How do I decide what measures to include in my study? • Should I use random assignment? (R.A. is not always critical, not problem-free, & often not the best way to serve intended goals)

Why can Memory be a roadblock to accruing knowledge?

• Human memory re-codes reality & recall can be distorted (we fill in details with internal processes [imagination, thought] & thus struggle w/ reality-monitoring: differentiating memories based on the external world from those based on internal processes [thoughts, perceptions]) • Filled-in memories + false memories: memories can be induced or implanted; thus coding & recalling experience, even when vivid & confident, may not be accurate.

APA (2017) Ethical Principles and Code of Conduct: Research and Publication (§8)

• Institutional Approval • Informed consent • Client, student, subordinate research participants • Dispensing with informed consent • Deception • Inducement • Debriefing • Humane care and use of animals • Reporting research results • Plagiarism • Publication credit • Duplicate publication • Sharing data • Reviewers + also Assessment (Section 9)

What does Construct Validity Ask?

• Is the reason for the change due to the construct identified by the researcher? (ex. is it the nicotine that is having the calming effect, or is it the social interaction associated w/ smoking?) • WHY did it work? • Consider confounds (features of the experiment that interfere with interpretation of the findings)

Other influences on perception?

• Motivation • Mood state • Biological state (hunger, thirst, fatigue) * All directly guide how reality is perceived (motivated perception/wishful perceiving)

TABLE 11.2: TYPES OF MEASURES

• OBJECTIVE MEASURES → Explicitly specify the material that is presented (the items) & the response formats required to answer them (e.g. 1-7 point scales).
• GLOBAL RATINGS → Efforts to quantify impressions of general characteristics; reflect overall impressions or summary statements of the construct of interest.
• PROJECTIVE MEASURES → Assessments that attempt to reveal underlying motives, processes, styles, themes, personality, & other psychological processes by presenting ambiguous stimuli to the patient.
• DIRECT OBSERVATIONS OF BX → Measures that assess the bx of interest by looking at what the client actually does; overt bx, sampled in everyday or specially designed situations.
• PSYCHOBIOLOGICAL MEASURES → Assessment techniques designed to examine biological substrates & correlates of affect, cognition, & bx, & the links between biological processes & psychological constructs; encompass many types of functions (e.g. autonomic, cardiovascular, gastrointestinal, & neurological systems, microelectrodes, brain imaging).
• COMPUTERIZED/TECH-BASED → Smartphones, tablets, web-based; automated collection of info + automated scoring & evaluation of info.
• UNOBTRUSIVE MEASURES → Assessments that are outside the awareness of the subject being assessed.

Old vs. New Measures: Pros & Cons

• Old measures can be good because they have been validated & standardized ("gold standard"); but bad because they can be outdated, use inappropriate language, or their construct validity has weakened so they are no longer relevant

Nelson & Steele's Beyond Efficacy & Effectiveness: Potential New Research Questions

• Outcome Evaluation • Consumer Evaluation • Provider Evaluation • Economic Evaluation

List the Required Components of Consent Form

• Overview • Description of procedures (goals, exp. conditions, assessment procedures, requirements of particips) • Risks & benefits (physical/psych risks, inconveniences & demands, ex. # of sessions, meetings, requests, & contacts) • Costs (charges in tx + payment + compensation) • Confidentiality (assurances that info is confidential) • Alternative tx options • Selective refusal statement (particip can skip any question/measure if desired) • Voluntary participation (statement of willingness + can say no now or later without penalty) • Questions and further information (encouraged to ask q's at any time) • Contact person + signature lines • Authorization + stamp of approval by institution overseeing rsch

2017 APA Ethical Principles (Aspirational)

• Principle A: Beneficence & Non-maleficence • Principle B: Fidelity and Responsibility (are you staying true to original research intentions) • Principle C: Integrity • Principle D: Justice • Principle E: Respect for People's Rights & Dignity

Using Multiple Measures: Pros & Cons + What to be careful for?

• Pros: gives a fuller picture of the construct & helps make sure results are not restricted to the construct as assessed by the particular method & measure used. • Cons: can tire out the participants & make them frustrated

Define Debriefing & its purposes

• Providing a description of the experiment & its purposes. • Purposes of debriefing: 1) counteract/minimize any neg. effects the experiment may have had; 2) educative: convey the value & goals of rsch., why the info might be important, & how the participant has contributed to rsch. • Extra notes on debriefing: suspiciousness; debriefing does not erase all false impressions; does it really change the OC?

Consent and the interface with threats to validity: Randomization & Attrition

• Random assignment prevents threats to internal validity (selection biases) • Attrition (dropping out) can influence all types of exp. validity • Informed consent raises issues that affect attrition & threats to validity; informed consent demands that particips are able to withdraw consent & drop out at any time, and this affects validity

List the Components of Methodology

• Research design (experimental plans to test hypotheses) • Assessment (systematic measures to provide data) • Data evaluation (methods to characterize the sample, describe performance, & draw inferences) • Ethical issues + integrity (responsibilities to participants, discipline, & community) • Communication of findings (e.g. journals, articles, news)

List the Limitations of Accruing Knowledge

• Senses & their limits (senses are selective; as humans we only see one selective part of the world) • Cognitive heuristics (processes outside of our awareness that serve as mental shortcuts to help us negotiate everyday experience + solve problems; they emerge as bias when we attempt to draw accurate relations based on our own thoughts, impressions, & experiences)

Consent Forms: Define "Minimal Risk" + Procedures

• The risks of the study do not exceed the risks of normal living.

What does "Data evaluation validity" mean?

• To what extent are the relations found with our data evaluation? • How well can our statistical procedure detect these relations if they do exist? • If an outcome/effect exists, are we able to identify it through our statistical procedures/measures?

Figure 3.1: Statistical Decision Making

• Type I Error (α): False positive. Rejecting the null hypothesis (claiming there's a difference) when in reality the null hypothesis is true & there are no group differences. • Type II Error (β): False negative. Failing to reject the null, thus claiming there is no group difference, when in reality there is an effect.
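A quick simulation makes the link between α and the Type I error rate concrete: when the null is true by construction, a test at α = .05 produces false positives about 5% of the time in the long run. This is a hedged sketch; the group size of 30 and the 4,000 simulated studies are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

alpha, n_sims = 0.05, 4000
false_positives = 0
for _ in range(n_sims):
    # Null is true by construction: both "groups" come from the same population.
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

type1_rate = false_positives / n_sims  # long-run rate sits near alpha
```

The Type II error rate (β) would be estimated the same way, but with a real difference built into one group; power is then 1 - β.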

Sensitivity: Statistical & Clinical

• Want a nice range on your measure that allows you to detect a △ in clinical outcome. Measure should FIND THE THING YOU WANT TO FIND • Measurement sensitivity = capacity of measure to reflect systematic variation, △, or differences in response to experimental manipulation, intervention, or different group composition

What is the Role of Theory

• What phenomena & variables relate to each other, how are they connected, & what implications can we draw from that? • Describe, predict, explain; theory can tie this all together

§17.6: Conflict of interest

• Anytime an investigator may have an interest or obligation that can bias, or be perceived to bias, a research project; a competing role (legal, personal, financial) that could impair judgment or objectivity • Can undermine the credibility of science • Duplicate publication of data • Professional reviewers • For-profit & predatory journals

Withholding the intervention: Issues with no-tx or wait-list control

• Denies/delays tx from which a person may benefit; may prolong suffering, or the patient's condition may deteriorate while tx is withheld • A wait-list group can improve on its own w/ no tx

Plagiarism

• Direct use & copying of someone else's material without providing credit or acknowledgement; pretending that one is the source of the material or idea, or that the present statement in one's own work has not been provided before.

Informing clients about tx

• normally, should provide particips with info on effectiveness of tx • psychotherapy: full disclosure can interfere with study (ex. diminishing hope, itself the driving factor being studied)

Differentiate the uses of "theory"

• popular use: speculative, barely proven • scientific use: an explanation or model we develop to guide our next steps in science

What is parsimony+ why is it important in science?

• providing the simplest version or account of the data among alternatives that are available; not that explanations are "simple," but we shouldn't add complex constructs, views, relationships, & explanations if an equally plausible + simpler account can be provided.

Sharing of materials and data: Why the reluctance?

• Use of data may violate the consent of participants • Original investigators may not be able to write up all of their studies before others who receive the data set early write up those originally planned studies • Interfering w/ the study: early findings could bias later findings or alter participation in the study (e.g. attrition) • Providing access to materials or inventions may harm the developers in some way • Sharing data & materials often has costs & is not merely a matter of sending an electronic file

