PS214 Exam 1

Assessing Validity - Face Validity

"seems like" measure assesses construct Example: The researcher could examine face validity by having a group of personality and emotion researchers judge the extent to which each scale item "seems like" it assesses trait anxiety.

Authority (Definition & Pros+Cons)

Adopt the conclusions of a trusted source (e.g., an expert, a friend or relative, an institution). PROS: easy; we tend to believe authorities. CONS: the source may not really be an expert and may be biased.

anonymity vs. confidentiality

Anonymity: data are anonymous (the researcher cannot connect participants' identities with their data). Confidentiality: data are confidential (the researcher can connect identities with data, but the public cannot).

Mode

The most frequent value; the only index of central tendency that can be used with categorical variables.

Pearson's correlation coefficient (r) - Direction - Strength

(r) indicates the direction and strength of a linear relationship between two variables (ranges from -1 to +1). - Direction: positive (as scores on one variable increase, so do scores on the other); negative (as scores on one variable increase, scores on the other decrease). - Strength: values near -1 or +1 indicate a strong relationship, values near 0 indicate a weak relationship, and 0 indicates no linear relationship.
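
A minimal Python sketch of how r is computed from the deviations; the variables hours_studied and exam_scores and their values are made up for illustration only.

```python
# Minimal sketch: computing Pearson's r by hand for two small example lists.
import statistics

hours_studied = [2, 4, 6, 8, 10]      # hypothetical scores on variable X
exam_scores   = [55, 62, 70, 78, 90]  # hypothetical scores on variable Y

mean_x = statistics.mean(hours_studied)
mean_y = statistics.mean(exam_scores)

# r = sum of cross-products of deviations / sqrt(sum sq dev X * sum sq dev Y)
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours_studied, exam_scores))
den = (sum((x - mean_x) ** 2 for x in hours_studied)
       * sum((y - mean_y) ** 2 for y in exam_scores)) ** 0.5
r = num / den
print(round(r, 3))  # close to +1 here: a strong positive linear relationship
```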

Experimental Control

- All extraneous variables are kept constant so that none can become a confounding variable. - Treat participants in all groups of the experiment identically. Example: in an experiment on the effect of exercise, the only difference between the exercise and no-exercise groups is the exercise.

Tuskegee Syphilis Study (Gist of what happened)

- Purpose was to study the progression and possible treatment of syphilis.
What went wrong:
- did not reveal test results to participants
- planned to attempt treatment 6 months after the study began, but no treatment was available at the time
- converted the study into a long-term study of the progression of syphilis
- penicillin was established as an effective cure by the 1940s, so the study should have stopped, but researchers did not tell the participants and the study continued
- by the time the study was stopped, many participants had died, infected their wives, or had children born with the disease
As a result: Congress passed the National Research Act, which regulates human-subjects research.

Institutional Review Board

- Committees that evaluate the ethics of research conducted at an institution. For example, Colby has review boards (the Colby Institutional Review Board and the Colby Institutional Animal Care and Use Board). - Researchers must obtain the board's permission before doing a study.

Research Questions and Hypotheses

...

The Scientific Process (How does science work? A general model of the steps)

1. Ask a question
2. Consider possible answers (hypothesis)
3. Make a plan (research design)
4. Collect data
5. Draw conclusions using the data

The Belmont Report (3 Key Principles and short definition)

1. Beneficence: maximize benefits and minimize risks.
2. Justice: fair recruitment and fair distribution of benefits and risks (benefits shouldn't be concentrated on advantaged populations while risks fall on disadvantaged populations).
3. Respect for Persons: individuals are allowed to make an informed decision about whether to participate.

Manipulation Check

A manipulation check is a measure used to determine whether the manipulation of the independent variable had its intended effect on participants. It also provides evidence for the construct validity of the manipulation.

Basic vs. Applied Research

Basic: primarily motivated by a desire to understand behavior (often description and explanation) Applied: Primarily motivated by a desire to address a particular problem (often prediction and influence)

Beneficence: Potential Benefits, Potential Risks, and How to Maximize the Benefits and Minimize the Risks

Benefits: material compensation, education and personal insight, treatment or intervention benefits.
Risks: physical discomfort, psychological stress, loss of privacy.
To maximize benefits and minimize risks:
- no permanent physical or psychological harm
- protect privacy through anonymity or confidentiality
- check whether a participant has suffered any harm and then repair it (e.g., first aid or a discussion)

Reliability (Conceptual Definition)

Conceptual: the extent to which repeated measurements of the same person, object, or event produce similar values (dorm dimensions: use a tape measure vs. your foot).

Concurrent (current things) Validity and Predictive (future things) Validity

Both concern how scores on a measure correlate with real-life behaviors and outcomes: concurrent validity concerns current behaviors and outcomes, while predictive validity concerns future ones.

Confounded variables and the Third Variable Problem (Related)

Confounded variables: when we know that an uncontrolled third variable is operating, we call it a confounding variable. If two variables are confounded, they are intertwined, so you can't say which of the variables is operating in a given situation.
Third variable problem: a third variable provides an alternative explanation for a possible causal relationship.
Example: As ice cream sales increase, the rate of drowning deaths increases sharply; therefore, ice cream consumption causes drowning. (This example fails to recognize the importance of time and temperature in relation to ice cream sales. Ice cream is sold at a much greater rate during the hot summer months than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming.)

Construct Validity vs Reliability

Construct validity: the extent to which a measure assesses the characteristic it is supposed to assess. In short, reliability is the extent to which a measure assesses something consistently; construct validity is the extent to which it assesses the RIGHT thing. You can't have validity without reliability, but you can have reliability without validity.

Descriptive vs Inferential Statistics (use Colby attractiveness example)

Descriptive: methods for summarizing information about a sample (i.e., summarizing the sample's attractiveness scores). Inferential: methods for drawing conclusions about a population using information about a sample.

Random Assignment

Each participant must be randomly assigned to a level of each independent variable. Randomization helps create equivalent groups of participants.
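
A minimal sketch of random assignment in Python, assuming hypothetical participant IDs and two conditions borrowed from the music example.

```python
# Minimal sketch: randomly assign 20 hypothetical participants to two conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs
random.shuffle(participants)                        # randomize the order

# Split the shuffled list in half so each condition gets an equivalent-sized group
half = len(participants) // 2
assignments = {"music": participants[:half], "no_music": participants[half:]}
print(assignments)
```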

Retest reliability

Estimates the reliability of a measure as the correlation between a sample's scores on the measure at time 1 and the same sample's scores on the same measure at time 2. - Should only be used with measures that are assumed to be stable over time (personality is okay, mood is not).
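
A minimal sketch of the retest-reliability calculation, assuming made-up time-1 and time-2 scores and Python 3.10+ for statistics.correlation.

```python
# Minimal sketch: retest reliability as the correlation between time-1 and time-2
# scores on the same measure. The scores below are invented for illustration.
from statistics import correlation  # available in Python 3.10+

time1 = [12, 18, 25, 30, 22, 15]  # hypothetical trait-anxiety scores at time 1
time2 = [14, 17, 27, 29, 20, 16]  # the same people's scores a few weeks later

print(round(correlation(time1, time2), 2))  # a high r suggests good retest reliability
```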

Experimental vs. Correlational Research Designs

Experimental: the researcher manipulates one or more independent variables and observes the effects on one or more dependent variables. (The music example is an experimental design because each participant is randomly assigned to a level of the independent variable: music or no music.) Correlational: the researcher measures 2+ variables but does not manipulate them. Predictor variable: the expected cause. Outcome variable: the expected result.

4 Goals for Understanding Behavior: 3. Explaining Behavior

Explaining the events that have been described (understanding why the behavior occurs). Example: if we know that TV violence is a cause of aggressiveness, we need to explain this relationship (is it due to imitation?).

Convergent Validity

Extent to which scores on a measure correlate w/scores on other, theoretically related measures - The researcher could examine convergent validity by having a sample of participants complete the new anxiety scale, as well as other measures of anxiety and related constructs (e.g., stress, Big-Five Neuroticism). They would then check whether scores on the new anxiety scale correlate strongly with scores on these other measures.

Fraud and Plagiarism

Fraud: the fabrication of data (most common reason for suspecting fraud is when an important or unusual finding cannot be replicated) Plagiarism: misrepresenting another's work as your own

Control Group

It is helpful to include a control group (one that is not manipulated): a group in an experiment to which the factor being tested is not applied, so that it can serve as a standard for comparison against the group(s) where the factor is applied.

Histogram and bar chart http://www.mathsisfun.com/data/images/bar-chart-vs-histogram.gif

Histogram: used for quantitative variables; the bars touch. Bar chart: used for categorical data; the bars do not touch.

Independent vs. Dependent Variables

Independent variable: the variable that is manipulated. Dependent variable: the variable that is measured.

Informed consent and debriefing

Informed consent: providing all relevant information before someone agrees to participate.
Debriefing: providing additional information to each participant after they complete the study:
- the specific purpose, design, and hypothesis
- how to learn about the results and conclusions
- address any questions or concerns
- repair any damage

Modality and Skewness

Modality: the number of systematic peaks in a distribution's histogram (unimodal, bimodal, trimodal). Skewness: whether the distribution is symmetrical or asymmetrical. Positively skewed (the tail points toward higher, positive values); negatively skewed (the tail points toward lower, negative values). "Attractiveness scores ranged from 4.2 to 9.8 and their distribution was unimodal and positively skewed."

4 Goals for Understanding Behavior: 2. Predicting Behavior

Once it has been observed with some regularity that two events are systematically related to one another (e.g., greater credibility is associated with greater attitude change), it becomes possible to make predictions and therefore anticipate events. Major depressive disorder example (What biological and environmental factors could be used to identify people at risk for developing major depressive disorder, so that targeted interventions could help prevent the disorder?): this is most clearly a question about predicting behavior, specifically about predicting who is and is not likely to develop major depressive disorder.

Parameters vs. statistics (use Colby attractiveness example)

Parameter: a value that describes a population (i.e., the average attractiveness of male Colby students). Statistic: a value that describes a sample (i.e., the average attractiveness of the 15 Colby men in the stats class - the sample).

Population vs. Sample (use Colby attractiveness example)

Population: the group of things (people, events, behaviors) about which the researcher ultimately wants to draw conclusions (i.e., Colby men). Sample: the group of things, drawn from the population, that the researcher actually observes in the study (i.e., the 15 Colby men in the psych stats class).

Quantitative vs. Categorical Variables

Quantitative: have values that represent a range from most to least of some characteristic Categorical: Have values that distinguish between things but don't order them from most to least

4 Goals for Understanding Behavior: 1. Describing Behavior

Researchers are often interested in describing the ways in which events are systematically related to one another. Men vs. women example: this is most clearly a question about describing behavior, specifically about describing the relationship between gender and talkativeness.

What makes a good research design? VALIDITY High external validity (Definition and how to maximize it)

Results can be generalized to other people, settings, and measures or manipulations. To maximize it:
- use a representative sample
- observe behavior in a naturalistic setting
- use a realistic manipulation
- conduct multiple studies with different methods

Psychological Science (Section 1)

Section 1

Research Design (Section 2)

Section 2

Research Ethics (Section 3)

Section 3

Psychological Measurement (Section 4)

Section 4

Frequency Distributions (Section 5)

Section 5

Central Tendency and Variability (Section 6)

Section 6

Grouped frequency distribution How do you figure out the width of the intervals?

Sets of adjacent values are merged into wider intervals so you can see more general patterns (make sure the first and last intervals don't extend beyond the theoretical range of the scale). Interval width = (highest value - lowest value) / number of desired intervals.
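
A minimal sketch of the width formula, using invented scores and an assumed target of 6 intervals.

```python
# Minimal sketch: interval width = (highest - lowest) / number of desired intervals.
scores = [42, 55, 61, 63, 70, 74, 78, 81, 88, 95]  # made-up exam scores
desired_intervals = 6

width = (max(scores) - min(scores)) / desired_intervals
print(width)  # (95 - 42) / 6 ≈ 8.8, so round up to a convenient width such as 9 or 10
```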

Reliability (Technical definition - includes definition of observed score, true score, and measurement error)

Technical: observed score = true score (someone's hypothetical average across an infinite number of repeated measurements) + measurement error (things that affect the score, e.g., situational factors).
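
A minimal sketch of this definition as a simulation; the true score and the spread of the error are assumptions chosen only for illustration.

```python
# Minimal sketch: each observed score is a true score plus random measurement error.
import random

true_score = 75          # hypothetical "true" value for one person
error_sd = 5             # hypothetical spread of measurement error

# Repeated measurements scatter around the true score because of random error
observed = [true_score + random.gauss(0, error_sd) for _ in range(10)]
print([round(x, 1) for x in observed])
print(round(sum(observed) / len(observed), 1))  # the average approaches the true score
```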

What makes a good research design? VALIDITY High construct validity (Definition)

The operational definition of each variable captures its conceptual meaning

Measurement (Definition and Importance) What makes a good measure?

The process of assigning values to observations (of people, objects, events); values can be quantitative or categorical. Important because psychologists measure all kinds of things, and the better you measure a variable, the easier it is to find its relationships with other variables. What makes a good measure? High reliability and high construct validity.

Examples of Concurrent and Predictive validity

The researcher could examine concurrent validity by administering the scale to both (a) a sample of adults with generalized anxiety disorder, and (b) a sample of psychologically healthy adults. They could then check whether the average scale score was substantially higher in the sample of adults with generalized anxiety disorder. The researcher could examine predictive validity by administering the scale to a group of participants, and then having them keep a daily diary of their emotional experiences. For example, at the end of every day for two weeks, each participant could rate the extent to which they had felt happy, sad, anxious, etc. over the course of the day. The researcher could then check whether scores on the new anxiety scale correlated positively with ratings of everyday anxiety.

What makes a good research design? VALIDITY High statistical conclusion validity (Definition)

The statistical methods you use to draw conclusions from your data are reasonable

4 Goals for Understanding Behavior: 4. Influencing

To know how to change behavior, we need to know the causes of behavior (A related question about influencing behavior would be what interventions are effective at preventing the development of major depressive disorder.)

Content Validity

Total content of items reflects complete definition of the construct - The researcher could examine content validity by having a group of experts judge whether the scale includes individual items that target each part of the definition of anxiety: (a) expecting that bad things are going to happen, (b) feeling physically tense, and (c) being vigilant for potential threats.

To compute variance and standard deviation, MAKE A TABLE. Variance and what's wrong with it

Variance = the average of the squared deviations from the mean.
1. Compute the mean.
2. Compute the deviations (x - mean) and squared deviations (x - mean)^2.
3. Compute the variance: s^2 = sum of squared deviations / (number of values - 1).
What's wrong with it? The result is in squared units rather than the original units, so the standard deviation (the square root of the variance) puts it back in the original units.
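
A minimal sketch of the table method with made-up scores; the numbers are illustrative only.

```python
# Minimal sketch: sample variance and standard deviation, step by step.
scores = [4, 6, 7, 9, 14]
mean = sum(scores) / len(scores)                     # step 1: mean = 8.0

deviations = [x - mean for x in scores]              # step 2: deviations
squared = [d ** 2 for d in deviations]               #         and squared deviations

variance = sum(squared) / (len(scores) - 1)          # step 3: s^2 = SS / (n - 1)
std_dev = variance ** 0.5                            # back to the original units
print(variance, round(std_dev, 2))                   # 14.5 and about 3.81
```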

Deception and cover stories

Withholding some information about the study's purpose or procedure, or misleading participants with a cover story (used so that participants don't conform to the hypothesis). Acceptable as long as the risks are disclosed and participants are debriefed about the deception. Deception can help when you're studying very socially desirable (e.g., helping) or undesirable (e.g., narcissism) behaviors: if you tell people you're studying helping, they may act more helpful than they otherwise would.

What makes a good research design? VALIDITY High internal validity (Definition and how to maximize it)

Within the study, there was clearly a causal relationship between variables.
In correlational studies:
- establish the temporal order of the proposed cause and effect
- measure obvious potentially confounded variables
In experiments:
- treat all groups identically (except for the independent variable)
- use a large sample to ensure equivalent groups
- don't communicate the hypothesis to participants so they can't adapt their behavior

Frequency distribution What are the 3 steps?

A table displaying the number of times each specific value of a variable was observed.
1. Order the observed values (quantitative from small to large; categorical in any order you want).
2. Count the number of times each value occurs.
3. Make a table (columns: value, frequency, and cumulative frequency).
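
A minimal sketch of the three steps, assuming made-up attractiveness ratings.

```python
# Minimal sketch: build a frequency table with value, frequency, cumulative frequency.
from collections import Counter

ratings = [5, 7, 7, 8, 6, 7, 5, 8, 9, 7]        # made-up quantitative ratings

counts = Counter(ratings)                       # step 2: count each value
cumulative = 0
print("value  freq  cum.freq")                  # step 3: table columns
for value in sorted(counts):                    # step 1: order the values
    cumulative += counts[value]
    print(f"{value:>5}  {counts[value]:>4}  {cumulative:>8}")
```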

Variable

any characteristic (of a person, situation, behavior) that has more than one possible value

Mean

The arithmetic average of the scores. Used most frequently; most stable from one sample to the next; defined by an algebraic equation, so useful for computations.

Protected populations

Children and minor adolescents, people with psychological disorders or mental handicaps, students and employees, and captive groups. Consent from a competent adult/guardian is needed for these people.

Intuition (Definition & Pros+Cons)

Draw conclusions on the basis of personal experience and judgment. PROS: it's easy; anyone can do it. CONS: your experience may not be representative, or you could be biased.

Science (Definition & Pros+Cons)

draw conclusions on the basis of systematic observations (Men vs. Women Example: A researcher could put male and female participants in social situations together and record the number of words that each person says.) PROS: representative set of data, repeatable, controlled, can do it on a large scale CONS: expensive, can't answer all questions scientifically

Internal consistency AND Cronbach's coefficient alpha (how to use it)

Estimates the reliability of a multiple-item measure from the correlations between the items (can only be used with multiple-item measures; participants don't have to come back).
Cronbach's alpha:
1. Reverse-score all false-keyed items.
2. Compute all possible inter-item correlations.
3. Compute the average inter-item correlation (sum of the correlations / number of correlations).
4. Compute Cronbach's alpha: with k = the number of items on the measure and r = the mean inter-item correlation, alpha = (k * r) / (1 + (k - 1) * r).
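
A minimal sketch of these steps for standardized alpha, assuming made-up item scores (already reverse-scored where needed) and Python 3.10+ for statistics.correlation.

```python
# Minimal sketch: standardized coefficient alpha from the mean inter-item correlation.
from statistics import correlation  # Python 3.10+
from itertools import combinations

# Rows = participants, columns = 4 items on a hypothetical anxiety scale
items = [
    [3, 4, 3, 4],
    [1, 2, 2, 1],
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
]
columns = list(zip(*items))                       # one list of scores per item
k = len(columns)                                  # number of items

# Steps 2-3: all pairwise inter-item correlations, then their mean
rs = [correlation(a, b) for a, b in combinations(columns, 2)]
mean_r = sum(rs) / len(rs)

# Step 4: alpha = k * mean_r / (1 + (k - 1) * mean_r)
alpha = (k * mean_r) / (1 + (k - 1) * mean_r)
print(round(alpha, 2))
```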

Discriminant Validity

Extent to which scores on a measure do not correlate with scores on other, theoretically unrelated measures. - The researcher could examine discriminant validity by having a sample of participants complete the new anxiety scale, as well as measures of unrelated constructs (e.g., Big-Five Extraversion, Agreeableness, Conscientiousness, and Openness to Experience). They would then check to make sure that scores on the new anxiety scale do not correlate strongly with scores on these other measures.

Reliability (3rd definition)

Extent to which variability in observed scores reflects variability in true scores rather than measurement error. Ranges from 0 to 1 (0 = observed scores don't correspond with true scores at all). Example: .90 means 90% of the variability in observed scores reflects individual differences in true scores, and only 10% reflects measurement error. 0.70 or more is acceptable.

Central tendency

Indices of central tendency summarize information about the average or center point of a set of data (mode, median, and mean).

Variability

indices of variability summarize information about spread or dispersion of a set of scores (range, variance, standard deviation)

Range

Range = largest score - smallest score. Makes use of very little information (only considers 2 scores).

True-keyed and false-keyed items

True-keyed items: higher scores indicate more of what is being tested. False-keyed items: scores indicate the opposite of what is being tested, so they are reverse-scored (switched) before combining items.

Standard deviation

square root of the variance - similar to the average size of the deviation scores

Operational Definition

the way a particular variable is measured in a study (The independent variable is music. Its operational definition is being randomly assigned to either listen or not listen to soft classical music while taking an exam. The dependent variable is exam performance. Its operational definition is score on the exam, from 0 to 100.)

Normal distribution

unimodal, symmetrical, and bell shaped

Median How to find median (computing median position)

The value that 50% of scores fall at or below (the 50th percentile).
1. Order the scores from smallest to largest.
2. Compute the median position: (number of scores + 1) / 2.
3. Count from the smallest score up to the median position to find the median.
Less influenced by outliers, so often reported for variables with skewed distributions.
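
A minimal sketch of the median-position method with made-up scores.

```python
# Minimal sketch: find the median via the median-position rule.
scores = [7, 3, 9, 5, 8, 6, 4]               # made-up scores (odd number of values)

ordered = sorted(scores)                     # step 1: order smallest to largest
position = (len(ordered) + 1) / 2            # step 2: median position = (n + 1) / 2
median = ordered[int(position) - 1]          # step 3: count up to that position
# With an even number of scores, average the two scores around the position instead.
print(position, median)                      # position 4.0, median 6
```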

Causal Relationship and the requirements for establishing one (1. Covariation, 2. Temporal Precedence, 3. Alternative Explanations)

When one variable causes a change in another variable. 1. Covariation: there must be a statistical association between the cause and the effect. 2. Temporal precedence: the cause must occur before the effect. 3. Alternative explanations: all plausible alternative explanations for the association must be eliminated. (It is easier for experimental studies to meet these 3 requirements.)

