Research Methods & Data Analysis in Psychology Exam #1 University of Iowa
Occam's Razor
"The simplest explanation is usually the correct one."
Variable
*A property that can take on more than one possible value.* 1. Must have at least *two levels* or values to be one. 2. Otherwise, it is a constant (1 level in your study) 3. Not always a number, but can be. (*Ex.: Age; year in school; height; favorite color; day of the week; operating system; type of phone; GPA; etc.*)
Manipulated Variable
*A variable that a researcher directly controls. The researcher manipulates some characteristic to investigate how different levels of the variable impact some other variable.* (*Ex.:* different dosages of a drug; different curricula in math class; changes in what room people study in.) 1. This variable must be *directly* controlled by the experimenter. 2. Not every study has to have these kinds of variables.
Continuous Variables
*A variable whose values are not restricted (in theory). There are an infinite number of potential values for these variables.* (*Ex.:* Temperature can be broken down to infinitely small categories: 1°; 0.1°; 0.01°; 0.000001°) 1. Some variables *seem* continuous, but are actually coded discretely. 2. Technically, almost no measure is *truly* continuous. We are limited by a measurement's *precision*. 3. Expected values of continuous variables can be smaller than the measurement precision. EV of a continuous variable is the same as the *mean* of the continuous variable.
Construct Validity (Measurements)
*An indication of how well a variable was measured or manipulated in a study.* 1. Construct validity is essential for psychological measurement. 2. There is no perfect operational definition. If it were perfect, it wouldn't be operational. 3. If a direct measurement is possible, it's no longer a hypothetical mental construct. 4. Poor operational definitions can be as damaging as poor data as inferences drawn on the basis of a poor operational definition are misguided. 5. Construct validity is a way of evaluating how appropriate an operational definition is. 6. Can be gauged by either *subjective measures* or *objective measures.*
*Review Question:* A professor notices that 85% of students that come to office hours get A's on exams. What other information is needed to know if there's a link between attending office hours and doing well on exams?
*Answer:* - How students that don't go to office hours do on the exams. - Data is most helpful if we can compare what would happen both with and without the thing we're interested in.
*Review Question:* Why are Content & Face Validity seen as less important than other aspects of Construct Validity?
*Answer:* Content & Face Validity are *subjective*. They are matters of opinion, and more open to interpretation. Also, Face Validity incorporates opinions of non-experts, who may not know what's actually relevant.
*Review Question:* The 2017 Burrow & Rainone Study #1 found that the number of likes individuals received on their Facebook profile pictures was positively related to self-esteem. What kind of variable is self-esteem in their first study?
*Answer:* Measured Quantitative Variables
*Review Question:* The precision of a continuous variable... A. Makes it technically a nominal variable B. Makes it impossible to compute expected values C. Reflects theories and measurement tools D. Indicates whether an operational definition is valid
*Answer:* Option C / Reflects theories and measurement tools
*Review Question:* What happens to values of a variable as a measure becomes less precise? A. They get smaller B. They get larger C. They become closer to the expected value of the distribution D. They become further from their true continuous values
*Answer:* Option D / They become further from their true continuous values
*Review Question:* Average minutes of physical activity last week, measured with an activity tracker. A. Discrete B. Continuous
*Answer:* Option A / Discrete
*Review Question:* Result of a coin flip. A. Discrete B. Continuous
*Answer:* Option A / Discrete
*Review Question:* Which of the following is true of the relationship between hypotheses and theories? A. Hypotheses are steps taken to determine if the theory is accurate. B. Theories are used to determine if the hypotheses are accurate. C. Multiple theories are needed to test if a hypothesis is accurate. D. Hypotheses and theories are synonymous terms. E. None of the above.
*Answer:* Option A / Hypotheses are steps taken to determine if the theory is accurate.
*Review Question:* When examining the statistical validity of a frequency claim, one should look for the: A. Margin of error estimate B. Strength of the association C. Statistical significance D. Length of the measurement
*Answer:* Option A / Margin of error estimate
*Review Question:* A researcher is curious how different types of candy impact a child's impulsivity. He divides children into groups: one group receives Snickers bars; the other receives hard candies. He then measures how quickly the children reach for the candy. What kind of variable is reaching time? A. Measured B. Manipulated
*Answer:* Option A / Measured
*Review Question:* A teacher thinks students who eat larger lunches are less likely to talk during class. She keeps track of what each child brings to school, and then compares how much students with large lunches talk in class with how much students with small lunches talk in class. What kind of variable is amount of talking? A. Measured B. Manipulated
*Answer:* Option A / Measured
*Review Question:* Students who are interested in being consumers of but not producers of research might choose all of the following professions EXCEPT: A. A high-school teacher B. A political pollster C. An advertising executive D. A science writer
*Answer:* Option B / A political pollster
*Review Question:* "USA studying potential environmental link to autism" What type of claim is this? A. Frequency B. Association C. Causal
*Answer:* Option B / Association
*Review Question:* Number of steps taken in a day. A. Discrete B. Continuous
*Answer:* Option B / Continuous
*Review Question:* Some colleges no longer require the SAT I or the ACT tests, instead basing their admissions on other factors, such as high school GPA. A large reason that they have done this is that they have found a low correlation between the scores on the tests and the students' freshman year GPA. In other words, they were concerned that college entrance exams lacked which type of validity? A. Face validity B. Criterion validity C. Discriminant validity D. Content validity
*Answer:* Option B / Criterion Validity
*Review Question:* Dimitri is interested in understanding the effects of sleep deprivation on working memory. Which of the following is an empirical approach Dimitri could take to answer this question? A. Ask his psychology teacher for his opinion on the effects of sleep deprivation on short-term memory. B. Design and execute a study which measures working memory function following different amounts of sleep. C. Watch several (fictional) movies about sleep deprivation and use the characters' experiences to reason about the effects of sleep deprivation on working memory. D. Consider his own experiences with sleep and memory.
*Answer:* Option B / Design and execute a study which measures working memory function following different amounts of sleep.
*Review Question:* Dr. Hadden wants to conduct a study that will allow him to make claims that apply to all college students. Which of the following validities is he prioritizing? A. Construct B. External C. Internal D. Statistical
*Answer:* Option B / External
*Review Question:* A researcher is conducting a study on how the wording of questions affects people's responses. In her study, she only includes native speakers of English, and excludes people with hearing deficits and attention deficit disorder. These decisions about her sample improve the study's: A. External validity B. Internal validity C. Construct validity D. Statistical validity
*Answer:* Option B / Internal validity
*Review Question:* A researcher is curious how different types of candy impact a child's impulsivity. He divides children into groups: one group receives Snickers bars; the other receives hard candies. He then measures how quickly the children reach for the candy. What kind of variable is the type of candy? A. Measured B. Manipulated
*Answer:* Option B / Manipulated
*Review Question:* A teacher thinks students who eat larger lunches are less likely to talk during class. She keeps track of what each child brings to school, and then compares how much students with large lunches talk in class with how much students with small lunches talk in class. What kind of variable is the size of lunches? A. Measured B. Manipulated
*Answer:* Option B / Manipulated
*Review Question:* What if B.D.I. correlated negatively at r = -.85 with a measure of extraversion. Would this be supportive of discriminant validity? A. Yes B. No
*Answer:* Option B / No
*Review Question:* A researcher wants to see whether people walk faster when it's cold outside. He rates temperatures as freezing, cold, warm and hot, and then measures people's walking speed in meters per second. What kind of variable is temperature? A. Nominal B. Ordinal C. Interval D. Ratio
*Answer:* Option B / Ordinal
*Review Question:* Psychologists are empirical scientists; therefore, ______. A. They use logic to prove that their theories are correct. B. They use data to test whether their theories make the correct predictions. C. They use intuition to reason about how the mind works. D. All of the above.
*Answer:* Option B / They use data to test whether their theories make the correct predictions.
*Review Question:* Why is it critical that a measure has good discriminant validity? A. We want to make sure we are measuring the entirety of the construct. B. We want to make sure we are only measuring one construct. C. Because constructs aren't real if they're hypothetical. D. To avoid reactivity in participants.
*Answer:* Option B / We want to make sure we are only measuring one construct.
*Review Question:* Which of the following is NOT an example of being a producer of research? A. Administering a questionnaire of PTSD symptoms. B. Conducting a study that involves observing the behavior of adolescents who have been bullied on social media. C. Attending a psychological conference. D. Measuring where the neurotransmitter dopamine is low in brains of patients with schizophrenia.
*Answer:* Option C / Attending a psychological conference
*Review Question:* "Regular exercise improves brain health & stimulates creativity" What type of claim is this? A. Frequency B. Association C. Causal
*Answer:* Option C / Causal
*Review Question:* Which validity is appropriate to interrogate with rigor for every study? A. External validity B. Internal validity C. Construct validity D. Only statistical validity
*Answer:* Option C / Construct validity
*Review Question:* Which is likely the "3rd variable" in the association between grip strength and memory? A. Hair Color B. Books read per month C. Creativity D. Age
*Answer:* Option D / Age
*Review Question:* Which of the following is true of operational definitions? A. There are multiple operational definitions that are possible for any one conceptual definition. B. Operational definitions form a bridge between theories about mental constructs and the data we collect. C. Operational definitions are created after conceptual definitions are determined. D. All of the above are true.
*Answer:* Option D / All of the above are true.
*Review Question:* A researcher wants to see whether people walk faster when it's cold outside. He rates temperatures as freezing, cold, warm and hot, and then measures people's walking speed in meters per second. What kind of variable is walking speed? A. Nominal B. Ordinal C. Interval D. Ratio
*Answer:* Option D / Ratio
*Review Question:* Hosea is studying the relationship between caffeine consumption and problem-solving as the number of problems completed within 10 minutes. What kind of variable is this? A. Nominal B. Ordinal C. Interval D. Ratio
*Answer:* Option D / Ratio
*Review Question:* Why is there asymmetry in reliability & validity? (Why can a reliable measure be invalid, but a valid measure can't be unreliable?) A. Low reliability damages face validity. B. Validity is more subjective than reliability. C. There are more types of validity than reliability. D. Unreliable measures provide inconsistent data.
*Answer:* Option D / Unreliable measures provide inconsistent data.
*Review Question:* Why is a bad theory more acceptable than bad data?
*Answer:* Theories can be revised and reconsidered in light of new data.
*Review Question:* Why is reactivity a threat to construct validity?
*Answers:* 1. If participants' behaviors are unnatural, we can't be confident in what we are measuring. 2. We are instead measuring what participants think is socially acceptable
*Review Question:* Research Done specifically to add to our general understanding of psychology is known as _______, whereas research done with a practical problem in mind is known as _______.
*Answers:* Basic Research; Applied Research
Reactivity
*Any change in the behavior of participants due to the fact that they are being measured.* 1. *Ex.:* Fear of being judged negatively. - People may act differently because they know they are being observed.
Measured Variables
*Any variable that is observed and recorded, without directly being changed by the researcher.* 1. Sometimes we are interested in things that are impossible to manipulate: gender, age, height 2. Sometimes we are interested in things that may be affected by other variables, but aren't directly manipulated: reaction time, accuracy, depression, academic success
Possible ways to gauge Construct Validity
*Construct Validity* 1. Subjective Measures 1a. Content Validity 1b. Face Validity 2. Objective Measures 2a. Criterion Validity 2b. Convergent Validity 2c. Discriminant Validity
Example of Internal vs. External Validity
*Controlling participant variables:* Ex.: A researcher wants to see whether reading to children improves academic performance. To ensure effects arise because of reading, he only includes families with high S.E.S., who already have at least 2 hours of face-time with their kids a day, and families that have two parents in the household. - High internal validity (few possible confounds) - Low external validity (very restricted population)
Convergent Validity
*Correlation of the measure with other measures of the same construct.* 1. Similar to criterion validity, but specifically focused on *multiple operational definitions of the same construct.* 2. Convergent validity doesn't require the correlated measure to be previously validated (unlike criterion) 3. Convergent validity concerns how exhaustive a measure is. 4. If several different operational definitions of the construct are highly correlated, then it is likely that the measures are covering most of the construct. 5. Correlations should be strong (r=.70 or better). 6. If the correlations are weak, the measure may be incomplete because it's not covering all of the construct.
Concurrent Validity
*Does the measure correlate with other measures we already have (or collect at the same time?)* - *Ex.:* Do ACT scores correlate with high school GPA?
Predictive Validity
*Does the measure predict future performance?* - *Ex.:* Do ACT scores predict college GPA?
Test-Retest Reliability
*Does the test yield the same results when given multiple times?* 1. High consistency is expected across tests. 2. *Ex.:* A new measure of depression. Without administering a treatment, the same people should be diagnosed as depressed each time. If there are wide fluctuations across tests, the measure is *unreliable* (and thus invalid.)
Ratio Variables/Scales
*Equal intervals and a meaningful zero.* 1. A Ratio scale is a numeric scale with equal intervals between steps and true zero value. 2. Similar to interval scales, but zero signals total lack of the quality measured. 3. Allows us to examine ratios between scores: a dog that weighs 150 pounds is 1.5 times heavier than a dog that weighs 100 pounds. (*Scale Ex.: height, weight, income, reaction time, accuracy...*)
Interval Variables/Scales
*Equal intervals between units but no meaningful zero.* 1. Interval variables are numeric scales with *equal* intervals and *no true zero.* 2. *Variables Ex.: Temperature in Fahrenheit or Celsius.* - Zero isn't the lowest possible value (can be below 0 degrees) - Distance between 1° and 2° is the same as between 1001° and 1002° 3. *Scale Ex.: IQ:* Difference between 50 & 75 is the same as difference between 125 & 150. Not a true zero: an IQ of 0 doesn't mean no intelligence. A person with an IQ of 150 isn't 1.5 times as smart as a person with an IQ of 100.
Types of Claims
*Frequency, Association, & Causal.* Work on a theory can often produce multiple types of claims based on its predictions. The type of claim that can be made depends on the type of study conducted, the data collected, and the theory being investigated.
Example of Constant
*Holding something constant can reduce variability* Ex.: Study of relationship between exercise and academic performance. *Hold constant:* *Health:* Only include healthy individuals *Enrollment Status:* Exclude non-traditional students *Handedness:* Exclude left-handed people
Frequency Claims Part #1
*How often does something happen?* 1. Statements of how common a behavior, occurrence, etc. is. (*Ex.: "60% of..." "1 in 3..."*) 2. About a single variable. (*Ex.: "1 in 68 children have..."*) 3. Sometimes numerous claims are made together. 4. Essentially a description of data collected. (*Ex.: "350 million people worldwide suffer from depression."*)
Construct Validity Part #1
*How well the variables in a study measure what they are intended to measure.* 1. In other words, the extent to which the measure provides an accurate estimate of the theoretical construct. (*Ex.: Is the operational definition (O.D.) appropriate? An invalid O.D. makes it impossible to evaluate claims.*)
Producers
*Individuals in this role:* - Take coursework in upper-level classes. - Are typically in graduate school in any area of science. - Work in the research labs of academics, research institutes, or industry.
Consumers
*Individuals in this role:* - Read printed or online news stories based on research. - Watch or listen to shows, podcasts, and other media. - Apply findings to their lives. - Use research findings to help shape public policy.
Discriminant Validity
*Lack of correlation of the measure with "other" measures of "other" constructs.* 1. Discriminant validity ensures that the measure is *only* measuring what it intends to measure. 2. Discriminant validity is essential to ensure that the measure is exclusive/selective. 3. If the measure correlates with things other than the construct, then it's not measuring what we want. 4. Correlations should be weak (lower than r=.20) 5. If correlations are too strong, we are measuring too much, the measure is not selective to our construct of interest.
Ordinal Variables/Scales
*Meaningful values but unequal intervals between units.* 1. An Ordinal variable is a categorical variable for which the possible values are ordered. 2. Ordinal scales are a rank ordering of categories. 3. The categories are discrete, but have a meaningful order. (*Ex.: Finish order in a race: 1st place before 2nd, 2nd before 3rd, etc.*) 4. Increments between categories are unequal. (*Ex.: In a race, the distance between 1st and 2nd isn't necessarily the same as between 2nd and 3rd.*)
Selective
*Part of Construct Validity* The measure should *only* include aspects of the construct, not other outside components. (*Ex.: A measure of intelligence should measure intelligence, not age, visual acuity, etc.*)
Exhaustive
*Part of Construct Validity* The measure should cover the entirety of the construct, not a sub-construct. (*Ex.: A measure of intelligence should not only measure verbal IQ.*)
External Validity
*The degree to which the results of a study (and its conclusions) generalize to a larger population or to other situations.* 1. Is this finding representative of other circumstances? (*Ex.: If I conducted this study in a new university, would I expect the same results?*)
Statistical Validity
*The degree to which the statistical results support the claim.* 1. Does the data truly provide evidence that the claim is accurate? 2. What is the likelihood that these results occurred because of random chance?
Internal Reliability
*The extent to which multiple items are answered the same by the same people.* 1. *Are multiple questions that measure the same thing giving the same responses?* 2. Many measures use multiple related questions to check internal reliability. 3. If items are well-structured, there should be high internal reliability. 4. Internal reliability is measured with *Cronbach's alpha.*
Internal Validity
*The extent to which the effect arises because of the experimental treatment, and not some alternative variable.* 1. Could another factor of the study explain the results? (*Ex.: Confounds - something other than the target manipulated variable also varies between conditions.*)
Interrater Reliability
*The measure should produce consistent scores even if different people are doing the scoring.* 1. The results can't arise because of how one rater scores them.
Discrete Variables
*The possible values for the levels are all independent. It is impossible to have a value between two categories.* (*Ex.:* A fruit is either an orange or an apple. It is impossible to be halfway between orange and apple.) 1. *Nominal* & *Ordinal* variables are always Discrete variables. 2. Discrete variables don't allow intermediate values (*Ex.:* Possible outcomes of a die: 1,2,3,4,5,6. You can't roll a 3.5!) 3. *Expected value* of a Discrete variable *can* be an intermediate value.
Expected Value
*The predicted value that a variable will take. Calculated as the sum of each value multiplied by its probability of occurrence.* (*Ex.:* Expected value of a fair die roll: 1+2+3+4+5+6 = 21; 21 divided by 6 = *3.5*) 1. For discrete variables, expected value (EV) is: *EV(x) = Σ x · p(x)* - *Σ* is the sum, *x* is each possible value, and *p(x)* is that value's probability. 2. The expected value is not the most likely outcome of a single event (you will never roll a 3.5). 3. Expected value is the *long-term average value.* 4. The possible outcomes of a die roll are *discrete*; the expected value need not be. 5. Expected value can change depending on *precision of measurement*.
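The die-roll arithmetic above can be verified with a short script (a minimal sketch in Python; the fair-die probabilities of 1/6 come straight from the card's example):

```python
from fractions import Fraction

# Expected value of a discrete variable: EV(x) = Σ x · p(x).
# Card's example: a fair six-sided die, each face with probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

ev = sum(x * p for x in outcomes)  # same as (1+2+3+4+5+6)/6 = 21/6
print(float(ev))  # 3.5 -- the long-run average, never a single-roll outcome
```

Using `Fraction` keeps the arithmetic exact, matching the card's hand calculation of 21/6 = 3.5.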
The Theory-Data Cycle
*Theory* leads researchers to pose particular *Research Questions*, which lead to an appropriate *Research Design*. In the context of the design, researchers can formulate *Hypotheses.* Researchers then collect and analyze *Data*, which feed back into the cycle in one of two ways: 1. *Supporting data strengthen the theory.* or 2. *Nonsupporting data lead to revised theories or improved research design.*
Confounding Variables
*These are "extra" variables that you didn't account for.* 1. Are like additional I.V.'s because they can affect the D.V.'s while remaining hidden. 2. Can ruin an experiment and give you useless results when unaccounted for. 3. Can suggest there is correlation when in fact there isn't. 4. Problems they can cause: can increase variance & even introduce bias. (*Ex.: In a study about how Activity Level affects Weight Gain [the I.V. & D.V., respectively], the results can be hampered if Age or Gender aren't accounted for. In this case, Age & Gender are "extra" variables.*)
Constant
*Things that can't vary in a study are NOT variables.* (*Ex.: Study of words used in mother-child interactions. Parent gender is NOT a variable, as only one value is possible; not looking at fathers.*) 1. Holding something constant can help ignore factors that are outside the study. 2. Holding something constant can reduce variability. 3. A major difficulty with constants is *Generalizability*; to whom do the results apply? 4. The more things we hold constant, the less able we are to generalize our findings. 5. Fine balance between *control* (Internal Validity) & *generalizability* (External Validity)
Nominal Variables
*This type of variable classifies objects in discrete categories.* 1. Data categories are *mutually exclusive* (each observation has only one category) 2. Data categories have no logical order (*only qualitative differences*) (*Ex.: Eye color: [Blue, Brown, Green, Hazel]. Has visited France: [Yes, No]*) 3. Can be coded numerically (e.g., 1=blue eyes, 2=brown eyes) but the *numbers do not imply order of categories.* It also wouldn't matter if the numbers were assigned differently. 4. Limitations in how comparisons between groups can be done. (*Ex.: We can't take an average across nominal categories [what's the average of three people with blue eyes and two with brown eyes?]*)
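Point 4 above (no averages across nominal categories) can be illustrated with a tiny sketch; the eye-color data mirror the card's own example of three blue-eyed and two brown-eyed people:

```python
from collections import Counter

# Nominal codes are arbitrary labels: counting and the mode are valid
# summaries, but a mean is meaningless.
eye_colors = ["blue", "blue", "blue", "brown", "brown"]

counts = Counter(eye_colors)
print(counts.most_common(1)[0][0])  # "blue" -- the modal category

# Recoding blue=1, brown=2 and averaging would give 1.4 -- a "category"
# that does not exist, which is why the mean is off-limits here.
```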
Construct Validity Outline
*Two paths of validity:* 1. Construct Validity => Exhaustive => Convergent Validity 2. Construct Validity => Selective => Discriminant Validity
Discrete vs. Continuous
*Variables can consist of discrete levels or continuous values.* 1. Interval and Ratio variables can be discrete *or* continuous.
Causal Claims Part #1
*What causes something to happen?* 1. An argument that two variables are related, and that one variable *causes* another variable. 2. These claims have directionality: *The value of one variable is the source of changes in the other.* 3. Use directed terminology: "leads to, affects, causes, changes, etc." 4. The relationships can be the same possible shapes as association claims (positive, negative, non-linear)
Association Claims Part #1
*What types of things happen together?* 1. Suggest that there is a link between two variables. 2. A change in one variable is expected to correspond with a change in the other variable. 3. Use words like: "is linked to, is associated with, goes with, may predict..."
Face Validity
*Whether the measure appears even to non-experts to measure the construct.* 1. Quite similar to content validity. 2. May be important to consider from the perspective of a participant in the study.
Criterion Validity
*Whether the measure correlates with other, "known" consequences of the construct (or with other measures of the construct).* 1. Based on relationships to other measures. 2. To have strong criterion validity, we need another measure that we trust to correlate with. 3. Two types of criterion validity: *Concurrent Validity* & *Predictive Validity.* 4. Can be good for seeing whether the measure is useful at predicting behavior. 5. But this validity assumes a causal relationship. It requires other measures, so it's not ideal for developing new measures of less-studied constructs. 6. Especially useful for developing diagnostic measures.
Content Validity
*Whether the measure used as an operational definition makes sense to experts in the field.* 1. Also known as *logical validity.* 2. Essentially, does the measure pass the eyeball test of seeming reasonable? 3. *Ex.:* Using height as a measure of intelligence. Height and intelligence may be correlated, but height clearly is not a good measure of intelligence.
Pearson's Strength of Association Equation
*r* or *r²* is used to measure correlation strength. A higher absolute value of *r* signals a stronger correlation.
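As a sketch of how *r* and *r²* are computed, the snippet below applies Pearson's definition (covariance divided by the product of the two standard deviations) to made-up numbers; the data are purely illustrative, not from the course:

```python
import statistics

# Pearson's r = covariance(x, y) / (sd_x * sd_y); r**2 is the proportion
# of variance the two variables share. Data below are invented.
x = [2, 4, 6, 8, 10]
y = [1, 3, 5, 9, 12]

mx, my = statistics.mean(x), statistics.mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
r = cov / (statistics.stdev(x) * statistics.stdev(y))

print(round(r, 2))       # close to +1: a strong positive association
print(round(r ** 2, 2))  # shared variance between x and y
```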
Basic Research
- Basic processes of behavior and cognition - Designed to advance theory - Takes longer to affect policy - Cares about "*why*" question
Applied Research
- Immediate practical implications - Designed to answer a practical problem - Can affect policy quickly - Cares about "*so what*" question
Motivational Bias
1. A discrepancy, usually conscious, motivated by one's personal situation. 2. One should keep in mind that motivational bias is different from cognitive bias, in which a discrepancy, usually subconscious, is introduced by the manner in which the individual processes information.
Good Theory Falsifiability
1. A theory must be testable, such that some imaginable pattern of data can prove it wrong. 2. Must be able to answer: "What would you need to see to change your mind?" 3. If any possible outcome can support the theory, that theory is meaningless.
5 Steps in the Scientific Method
1. Ask a question or identify a problem. 2. Background research. 3. Form a hypothesis. 4. Experiment & observe. 5. Draw a conclusion.
Operational Definitions
1. Operational definitions are at their most controversial in the fields of psychology and psychiatry, where intuitive concepts (such as intelligence) need to be operationally defined before they become amenable to scientific investigation. (*Ex.:* through processes such as IQ tests.) 2. However, there is no perfect operational definition. If it were perfect, it wouldn't be operational.
Sources of Association
1. Both variables caused by a 3rd factor (*Ex.: Height & hair length can be influenced by gender*) 2. Coincidence (*Ex.: Per capita cheese consumption correlates with # of people who died by becoming tangled in their bedsheets.*)
Bias
1. Cause to feel or show inclination or prejudice for or against someone or something. 2. Personal experience vs. general patterns. 3. *Availability heuristic:* Things that are easier to recall carry more weight. 4. Motivational Bias/Confirmation Bias 5. Everyone has this.
Cronbach's Alpha
1. Cronbach's alpha is the mean of all possible correlations of each item with every other item. 2. The closer alpha is to 1, the better.
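For concreteness, here is a minimal sketch of the standard variance-based formula for alpha, α = (k/(k−1)) · (1 − Σ item variances / variance of total scores); the five respondents and three items are invented for illustration:

```python
import statistics

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
# Hypothetical data: rows are 5 people, columns are 3 related items.
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]

k = len(responses[0])                       # number of items
items = list(zip(*responses))               # one tuple of scores per item
totals = [sum(person) for person in responses]

item_var_sum = sum(statistics.variance(col) for col in items)
alpha = (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))
print(round(alpha, 2))  # values near 1 indicate high internal reliability
```

Because these made-up items move together across people, alpha comes out high, as expected for a well-structured measure.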
Armchair Psychology
1. Intuitions about behavior. 2. Can guide hypotheses, but meaningless without data. 3. Inherently *atheoretical* (not based on or concerned with theory).
Validity
1. Isn't a single measure; there are 4 main types. 2. No single study is fully valid; instead, the types trade off against each other. 3. This is because *"convergent approaches to a theory allow stronger testing because different studies can emphasize the different types."*
Poorly Reported Findings
1. Journalists are human and vulnerable to the same biases of experience and intuition, but they may have a bias blind spot. 2. Media needs articles that are easy to understand and generate interest. 3. Research studies often have small effects, modest findings or are preliminary. 4. Media reports often focus on possible implications, and over-generalize the concepts to make the story easier to understand.
Empiricism
1. Ideas are supported or rejected on the basis of observable evidence. 2. Deriving knowledge from observation and experimentation. 3. Psychology is an empirical science. 4. Theories are built and tested on the basis of observable data. 5. Authority and Rationalism are secondary.
Subjective Validity
1. One of the ways to gauge construct validity. 2. *Content validity and Face validity* are under Subjective validity. 3. Subjective validities are a matter of opinion, and so are not often relied on. 4. In some cases, something that seems irrelevant to one researcher may prove a useful and valid measure to another.
Objective Validity
1. One of the ways to gauge construct validity. 2. *Criterion validity, Convergent validity, and Discriminant validity* are under Objective validity.
Theory
1. Statement or set of statements that describes general principles about how variables relate to one another. 2. Built upon collected knowledge; they're not uninformed guesses. 3. Typically the current best explanation for a pattern of data. 4. Can't be proven, but data can suggest whether they should be changed/replaced.
Confirmation Bias
1. Tendency to interpret new evidence as confirmation of one's existing beliefs or theories. 2. Easy to ignore events that contradict belief. 3. Events that support it stand out.
3 Types of Reliability
1. Test-retest reliability 2. Interrater reliability 3. Internal reliability
Independent Variable
1. This variable(s) is often the *cause* of change in an experiment and unaffected by other variables. 2. It is the manipulated variable, predicted to affect other variables.
Dependent Variable
1. This variable (there may be more than one) is the *affected* variable in an experiment, influenced by another, manipulated variable. 2. When this variable is measured, its values are compared across the different levels of the manipulated variable.
Precision
1. We always round numbers eventually. 2. We opt for levels of precision using theoretical bases and measurement limitations. 3. *Precision of measurement* means that measures are at best only *pseudo-continuous*. 4. Despite this, we still treat measures as continuous. (*Ex.:* Age could be broken down into nanoseconds, but we ignore these tiny details in most instances.)
Validity vs. Reliability
1. We have focused on validity of a measure. *How well does that measure correspond with the construct we want to measure?* 2. We can also ask about the reliability of a measure. *How consistent is the measure?* 3. A *reliable measure* is one that produces consistent data. Under the same conditions, the measure will yield extremely similar outcomes; there is little randomness in the measure. 4. *A measure must be reliable to be valid.* If data are unreliable, we can't use them to assess a theory about a construct. *But a reliable measure is not necessarily valid.* 5. Reliability is *necessary* for validity, but it is not *sufficient* for validity. 6. *Ex.:* Shoe size as a measure of intelligence. The measure is extremely consistent; so very reliable. But it has no real relationship with the construct, so it's invalid.
Good Theory Data
1. Without support from data, there is no evidence that the explanation accounts for something real. (*A single piece of evidence is poor support.*) 2. Converging data/evidence and successful replication of such data/evidence is crucial.
Construct Validity Part #2
2. 1 of the 4 types of validity. 3. Strong construct validity means that the measure provides an *exhaustive* & *selective* estimate of the theoretical construct.
Association Claims Part #2
4. Mostly focuses on linear relationships (correlations). 5. These claims on their own don't signal directionality. 6. These claims may seek to imply causality. (*Ex.: Children who are read to more often have higher grades.*) 7. Additional empirical methods are needed to test these claims (longitudinal designs, experiments.)
Causal Claims Part #2
5. However, now there's a specific claim for *why* there is a relationship. 6. Most experiments make these types of claims, as a variable is manipulated to see how it impacts another measured variable. 7. These are the strongest types of claims, but are the most difficult to show support for. 8. Researchers must perform careful and thorough longitudinal studies/controlled experiments to test these claims.
Frequency Claims Part #2
5. Important to remember that these claims are indicative of the operationally defined variable. (*Bad Ex.: "A researcher defines a child's happiness as whether or not a child laughs when tickled. He then reports that 97% of children are happy." Propensity to laugh when tickled likely is a poor indicator of overall happiness.*)
Data
A set of observations representing the values of some variable, collected from one or more research studies.
Claims
A statement(s) or argument about some psychological data or theory. The hope when collecting data and testing theories is that we can make reasonable claims about psychological processes.
Hypothesis/Prediction
A way of stating the specific outcome that the researcher expects to observe if the theory is accurate.
Working Memory
Ability to temporarily hold and manipulate information for cognitive tasks.
Mental Constructs
An explanatory variable that is unobservable, and therefore hypothetical. *Ex.:* Intelligence, Depression, Attention, Reading Ability, etc.
Rationalism
An idea makes logical sense, thus it must be true.
Positive Linear Correlation
As one variable increases, the other also increases. Characterized by a left-to-right upwards diagonal line in graphs.
Negative Linear Correlation
As one variable increases, the other decreases. Characterized by a left-to-right downwards diagonal line in graphs.
3 Main Sources of Information
Authority, Rationalism, Empiricism.
Why is good data more important than good theories?
Data is paramount. Theories are interpretations of data. People can argue about interpretations, but they need to agree on the data.
4 Goals of Psychology
Describe, Explain, Predict, Control.
Comparative
Good data is comparative, making base rates and control groups quite necessary.
Statistics
How do researchers organize, summarize and interpret the data gathered from their research studies?
Internal vs. External Validity
Many attempts to improve one of these types of validity will hurt the other type, and vice versa. (*Ex.: Holding participant variables constant improves internal validity but can limit external validity.*)
Types of Variables
Nominal (also known as "categorical"), Quantitative.
Types of Quantitative Variables/Scales
Ordinal, Interval, Ratio.
Reliability & Validity
Reliability: Consistency of the measure. Validity: Whether you're measuring what you think you're measuring.
Authority
Someone in a position of authority tells you something is true, thus it must be true.
Qualities of a Great Theory
The best theories are supported by converging data, falsifiable, & parsimonious.
Correlation Strength
The correlation strength is determined by the consistency (tightness/compactness) of variables on a graph. - Spread out points = Weak - Some grouping/concentration = Moderate - Concentrated / single line = Strong
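Correlation strength can also be expressed numerically with Pearson's r, where |r| near 1 corresponds to tightly concentrated points (strong) and |r| near 0 to spread-out points (weak). A minimal sketch, not from the course materials, with hypothetical data:

```python
# Sketch: Pearson's r as a numeric index of correlation strength.
# |r| close to 1 = points concentrated near a line (strong);
# |r| close to 0 = points spread out (weak).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Points falling exactly on an upward line give r ≈ 1.0 (strong positive)
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

The sign of r tracks the direction of the correlation (positive or negative linear), while its absolute value tracks strength.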
Precision of a Measurement
The smallest unit of difference that is considered in the measurement scale.
Mean
The sum of all values, divided by the number of values. Mean = *EV(X) = (Σx) / n* - *Σ (sigma)* denotes the sum, *x* is each individual value, and *n* is the total number of values.
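The mean formula translates directly into code. A minimal sketch with hypothetical values:

```python
# Sketch: Mean = EV(x) = (sum of all x values) / n
def mean(values):
    return sum(values) / len(values)

scores = [4, 8, 6, 2]     # hypothetical data set; sum = 20, n = 4
print(mean(scores))       # -> 5.0
```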
No Linear Correlation
The value of one variable gives us no information about the value of the other variable.
Research Methods
The ways/methods that psychological researchers set up research studies in order to test hypotheses.
Psychological Science
Tries to understand why people think, feel, and behave the way they do.
Bias Blind Spot
We believe that we are personally unbiased; however, this belief is incorrect, as everyone is biased in one way or another.
Parsimony
When two theories both explain data equally well, the simpler theory is preferred over the other.