midterm review

test-retest reliability

a method for determining the reliability of a test by comparing a test taker's scores on the same test taken on separate occasions; degree of consistency over time

Ways to Acquire Knowledge

Tenacity: it has always been that way
Intuition: it feels true
Authority: mom says it's true
Rationalism: it follows logically
Empiricism: I observed it to be true
Science: a combination of rationalism and empiricism

Test-retest reliability

Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time
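A minimal sketch of this idea in Python, assuming hypothetical Time 1 and Time 2 scores for eight test takers and using scipy's pearsonr; the data are invented for illustration only.

```python
# Estimate test-retest reliability: correlate each person's Time 1 score
# with their Time 2 score on the same test (hypothetical data).
from scipy.stats import pearsonr

time1 = [12, 18, 25, 30, 22, 15, 28, 20]  # first administration
time2 = [14, 17, 27, 29, 24, 13, 30, 19]  # second administration, same test

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}, p = {p:.3f}")  # r near 1.0 suggests stability over time
```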

IV and DV

The IV is the variable that is controlled and manipulated by the experimenter; whereas the DV is not manipulated, instead the DV is observed or measured for variation as a presumed result of the variation in the IV

Between Subjects Design: Multi-group

A multi-group design is a between-subjects experimental design with 3 or more conditions/groups of the same independent variable.
- Greater understanding of the relationship between variables; more levels of the IV along a continuum (e.g., different # of hours of sleep) or multiple categories (happy, sad, neutral)
- More difficult to find an effect
- Aim for about 30 participants per group
Ex. Chris designed a study to test children's enjoyment of certain toys. He has a group of 120 children aged 5-7 play with one of the following toys: a jump rope, a remote control car, a doll, or a puzzle. The children then answer three questions about how much they liked the toy, enjoyed playing with it, and would want to play with it again.

Shortcomings of correlational research

Cannot explain the relationship with full confidence: no control, no manipulation; therefore, no clear causation.
Does high self-esteem lead to better grades, or do better grades lead to high self-esteem?
Is there a third variable (a confound) that is responsible for the correlation?

threats to validity

Poor operationalization of variables
Presence of confounding variables
Unrepresentative samples
Inappropriate statistical tests or violations of statistical assumptions
Subject and experimenter effects
Common-sense control procedures: preparation of the setting; eliminate distractions that might interfere; a natural setting increases external (ecological) validity
Replication demonstrates that a finding is reliable and robust
- Direct: do exactly the same thing a researcher did
- Conceptual: the same concept is being tested, but the variables are operationalized differently

RATIONALISM + EMPIRICISM = SCIENTIFIC METHOD

Rationalists claim that there are significant ways in which our concepts and knowledge are gained independently of sense experience. Empiricists claim that sense experience is the ultimate source of all our concepts and knowledge

Ordinal

Reflects the order, but not the amount, of a variable; classifies + order (or magnitude) (e.g., place in a race); you don't know how much difference there is between one and two. Ex. "I like chocolate more than vanilla" tells you the order but not how much difference.

Representative heuristic

We tend to make judgements by using stereotypes

Outliers

a case or instance that is distinct from the majority of other cases; an oddball.

Belmont Report

1. Respect for persons: autonomy; participants can make an informed decision about participating; informed consent
2. Beneficence: obligation to maximize benefits and minimize risks
3. Justice: fairness in selecting study participants; burdens and benefits must be equitably distributed

2 group vs multigroup design

2 group: 1 independent variable that has two levels; each level is a different group of people
Multigroup: 1 IV, but more than two levels; each level has different groups of participants; between-subjects design

A researcher is studying whether the amount of time a student spends reading affects their SAT score. After a few of the subjects come to the lab and read for a certain amount of time, one of the lamps in the lab burns out and therefore makes the lab a little darker. Amount of light is what type of variable?
a) Confounding variable
b) Extraneous variable
c) Dependent variable
d) Not any type of variable

A

Chauncey wants to conduct an experiment on the impact of negative feedback on concentration. As part of the procedure, some participants "overhear" another participant, who is actually a confederate, say something negative about them. Which ethical principle should Chauncey be most concerned about?
a) Beneficence
b) Justice
c) Respect for persons
d) Privacy

A

If a study you conduct involves the purposeful misleading or misdirection of participants, when and how do you notify participants of the deception?
a) You explain the nature and necessity of the deception in the debriefing at the end of the study.
b) You notify participants of the deception in the consent form before each participant agrees to participate.
c) As long as the participants leave the study happy, you never have to tell them.
d) You only tell participants about deception if it involves confederates.

A

Santiago is applying to be a resident assistant at his college. He believes his leadership abilities are above those of typical applicants. Yet, when asked during his interview to provide examples of times he was a good leader, he cannot think of any. Which of the following explains why he struggled to answer the question?
a) Better-than-average effect
b) Overconfidence phenomenon
c) Hindsight bias
d) Belief perseverance

A

When something happens that is the exception to the rule or distinct from the majority of other cases, it is called:
a) an outlier
b) the law of small numbers
c) an error
d) the false uniqueness effect

A

Hawthorne effect

A change in a subject's behavior caused simply by the awareness of being studied

Scales

A measurement strategy that assigns a number to represent the degree to which a person possesses or exhibits the target variable. Summated Ratings Scale: A scale whereby a participant evaluates a series of statements using a set of predetermined response options. The responses are summed to represent the overall measurement for the variable. Commonly referred to as a Likert scale.
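A minimal sketch of how a summated ratings (Likert) score is computed; the item names, the responses, and the assumption that item 3 is reverse-worded are all hypothetical.

```python
# Score one participant's summated ratings (Likert) scale: reverse-score
# any reverse-keyed items, then sum the responses (hypothetical data).
responses = {"item1": 4, "item2": 5, "item3": 2, "item4": 4, "item5": 3}
reverse_keyed = {"item3"}  # assumed reverse-worded item

total = 0
for item, score in responses.items():
    if item in reverse_keyed:
        score = 6 - score  # flip a 1-5 response (1<->5, 2<->4)
    total += score

print("summated scale score:", total)  # higher = more of the target variable
```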

Cause and effect

A relationship in which change in one variable causes change in another. To identify a cause-and-effect relationship between variables:
- Establish covariation (the two variables must vary or change together in a systematic way)
- Establish temporal precedence (if you believe that body posturing causes a change in hormone levels, you have to show that body posture changes occur before hormone level changes; that is, changes in the suspected cause occur before changes in the effect or outcome)
- Rule out extraneous variables

Measurement and Statistics: Reliability

Ability of a test to yield very similar scores for the same individual over repeated testings

Alternative form reliability

Alternative-form (or equivalent-form) reliability: the correlation between two versions of a scale. It is assessed when an individual participating in a research or testing scenario is given two different versions of the same test at different times. The scores are then compared to see if the test is a reliable form of testing.

Informed Consent

An ethical principle requiring that research participants be told enough to enable them to choose whether they wish to participate

After seeing a scary movie, you begin to wonder how watching such a movie can influence how you feel about the people sitting nearby. For example, could being scared make you feel friendlier toward them? Since you want to test this empirically, which of the following is the best option?
a) Ask a movie usher who has worked in the theater for over 3 years what his thoughts are about scary movies' effects on friendliness
b) Systematically observe moviegoers sitting in the same set of seats during several types of movies to see which groups act friendlier toward each other
c) Think back to your own experiences and recall how you felt after watching scary movies and how you felt after watching funny movies
d) Ask some of your friends what their experiences have been after watching scary movies

B

In a lab, researchers are talking about why their study did not work. A senior researcher explains to the other researchers that the study had adequate "power". What is the best explanation for what the senior researcher means?
a) The study had enough participants
b) The study was able to find a difference (i.e., yield significant results) if one was actually present
c) The study had significant reliability and validity
d) The independent variable had a strong effect on the dependent variable

B

Specifically identifying how your independent variable will be manipulated and how your dependent variable will be measured is a process known as: a) Selection b) Operationalization c) Hypothesis testing d) Statistical analysis

B

Which of the following is based on casual observations rather than rigorous or scientific analysis?
a) Skepticism
b) Anecdotes
c) Humility
d) Focusing effect

B

You have just taken a job as a peer tutor for first-year students at your school. You notice that many of the students mistakenly believe that they are doing better than other students in the course, and that they will have no problem catching up if they fall behind. Which two flaws in thinking (in order) are these students expressing?
a) Overconfidence; hindsight bias
b) Better-than-average effect; overconfidence
c) Hindsight bias; overconfidence
d) Better-than-average effect; confirmation bias

B

Between Subjects Design: 2 Group

A between-subjects experiment with two conditions:
- Different subjects are randomly allocated to the 2 conditions and exposed to only one condition of the independent variable; a participant can be part of the treatment group or the control group, but cannot be part of both
- Ensures the groups will be equal in all respects, except by chance; enables us to compare performance between the experimental and control conditions, so any difference can be attributed to differences between conditions
- Advantages: avoids carryover effects
- Disadvantages: requires a large number of people; complex; environmental factors, generalization, researcher bias; sampling error; doesn't guarantee the groups will be equal
Random assignment: assignment to conditions is determined by chance; the bigger the sample, the less error.
Maximize the difference between conditions (extreme ends of a continuum), though two levels give not as much information as a multi-group design: treatment vs. no-treatment groups (exposure to TV), two different levels (amount of alcohol), or two different categories (hot/cold).

General qualitative approaches

Bottom-up approach (inductive): the main feature is that you gather information before forming concrete ideas; participant-led exploration; the information provided by the participants' direct experiences guides the researcher's development of a theory. E.g., "Do aliens like garbanzos?" (how would you even start?)
Top-down approach (deductive): the researcher uses a theory-first or deductive approach, testing preconceptions and previously established theories when collecting data. Example: Ancient Aliens theorists suggest that we are partially alien, so our biology should be... so, aliens like garbanzos in the same ways as humans. "Real" example: health care needs.

An informed consent should include all of the following except?
a) Any foreseeable risks or discomfort
b) That the participant's participation is voluntary
c) That the participant could not quit after signing the informed consent
d) That the responses will be confidential

C

Billy Ray is having a problem with weeds in his vegetable garden. He wants to determine the best way to control the weeds, but wants to approach it empirically. Which of the following is the best example of an empirical approach?
a) He could go to the local home improvement store and buy whatever solution is most expensive, because if it is expensive, it must be good
b) He could ask his neighbor what works best in her garden
c) He could try out several different solutions one by one to see what works best for him
d) He could simply go out and pick the weeds by hand

C

Deidra needs just one more participant to complete her data collection for her undergraduate thesis. The last participant signs the consent form, but halfway through the study, the participant wants to leave. Deidra tells her that she must stay and finish the study, and the participant complies. What ethical principle has Deidra violated?
a) Beneficence
b) Justice
c) Respect for persons
d) Privacy

C

Interactions

Interactions are close to how the world actually works: the world almost never operates by one IV alone; IVs jointly produce effects, and there are very few things that don't change in response to other things. Factorial designs get closer to representing how the real world works.

Coding system for observational data and types of coding systems

Coding system: a set of rules to help researchers classify and record the behaviors under observation; record the duration or frequency of key events.
Continuous recording: a method for recording observations that involves recording all the behavior of a target individual during a specified observation period.
Interval recording: a method for recording observations that involves breaking down an observational period into equal-sized, smaller time periods and then indicating whether a target behavior occurred.

Facts and Constructs

A construct has an analogical nature; it is an idea/explanation/inference of something, whereas a fact is a concrete observation. A fact is directly observed (most facts of psychology are behaviors); constructs are inferred from facts (constructed to explain observations). Fact: ran a mile in six minutes. Construct: physical ability.

Data analysis techniques

Content analysis: analysis of written artifacts to identify themes, patterns, and meaning. E.g., emails of millennial voters.
Conversation analysis: analyze natural patterns of communication, including pauses, eye gazes, etc. E.g., analysis of a group of millennials discussing voting.
Narrative analysis: first-person stories analyzed from the storyteller's point of view. E.g., a story of a millennial's attitude about voting.

anatomy of a factorial design

Contrast with other designs:
2 group: 1 independent variable that has two levels; each level is a different group of people
Multigroup: 1 IV, but more than two levels; each level has different groups of participants; between-subjects design
Within subjects: one IV with different levels; participants are measured more than once on the DV
Factorial: more than one IV in the study, not just different levels. Take the most basic factorial design, a 2x2 design: there are 2 independent variables that each have 2 levels (or conditions). For simplicity, call the IVs "IV 1" and "IV 2", and the levels of each IV "level 1" and "level 2". What is unique to the factorial design is that the levels of the IVs are crossed with one another, and these crossings represent the distinct groups in the factorial design (see the sketch below).
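A minimal sketch of the crossing described above, using generic placeholder names for the IVs and their levels (nothing here comes from a real study).

```python
# Cross the levels of two IVs to enumerate the groups of a 2x2 factorial design.
from itertools import product

iv1 = ["level 1", "level 2"]  # placeholder levels of IV 1
iv2 = ["level 1", "level 2"]  # placeholder levels of IV 2

conditions = list(product(iv1, iv2))
for i, (a, b) in enumerate(conditions, start=1):
    print(f"group {i}: IV1 = {a}, IV2 = {b}")
# A 2x2 design yields 4 distinct groups; a 2x3 design would yield 6, and so on.
```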

Correlations and correlational research methods

Correlational research methods attempt to quantify the strength of the relationship between two or more variables: if A, then probably B.
Anatomy of a correlation: shown in a scatter plot.
Negative relationship: one variable increases while the other decreases.
Positive relationship: the variables increase together.

A developmental psychologist is interested in how the activity level of 4-year-olds may be affected by viewing a 30-minute video of SpongeBob SquarePants or a 30-minute video of Sid the Science Kid. In this example, which one is the DEPENDENT variable?
a) SpongeBob SquarePants
b) Sid the Science Kid
c) Different types of cartoon shows
d) Activity level of the kids

D

A psychologist was hired by a local winery to conduct a taste test of four new wines. For the taste test, the psychologist had 100 participants come into the lab, take a small sip of each wine, and rate the taste on several characteristics. Between each wine, participants ate a small cracker. What type of design did the psychologist use?
a) Nonexperimental
b) Longitudinal
c) Between-subjects
d) Within-subjects

D

All of the following are forms of deception except:
a) false feedback
b) cover story
c) confederates
d) imprecise informed consent

D

In a Theories of Personality class, you fill out a questionnaire that indicates you are an extravert. You then learn that researchers describe extraverts as enthusiastic, talkative, and assertive. You immediately question that research because you remember several times in the past when you were not at all assertive. What has most likely led to your conclusion?
a) Relying on "truthiness"
b) Relying on self-reflection
c) Relying on scientific reasoning
d) Relying on anecdotal versus scientific evidence

D

Which of the following questions is outside of the scope of science?
a) How do parents influence their children's confidence levels?
b) What is love?
c) What do dreams tell us about a person?
d) Are our lives predestined or predetermined?

D

Unethical Research practices

Data exclusion; massaging the data (p-hacking): dropping participants, reporting only significant results, including some variables but not others
Misrepresenting data through words: overstating the data (in the media, as an expert witness, etc.)
Plagiarism: representing others' ideas as your own, or without giving proper credit; includes self-plagiarism and improper paraphrasing

Data collection techniques

Direct Observation Questionnaires Interviews Psychological Tests Physiological Recordings Examination of Historical Records

Nominal

Each number reflects a label rather than an amount of a variable; classifies or identifies (e.g., male/female)

Advantages of factorial designs

Establish Cause and Effect Efficient: Conduct Multiple Experiments at Once Examine How a Combination of IVs Affects the DV (Interaction) Closer to the way the real world is.

Confounds: How to control

How to control: assigning participants to groups
- Independence: each person is a unique data point. Problem with nonindependence: seeing other people make decisions can influence your own.
- Random assignment: controls for the confounds you fail to consider; controls by chance; idiosyncratic features are likely found in both experimental and control groups; better with a larger N, but no guarantee. Large, well-chosen, randomly selected (then randomly assigned) samples TEND to be more representative. If random assignment alone is a problem, use matched-pair or matched random assignment.
- Matched random assignment: participants are matched on key (relevant) features, then randomly assigned.
Common sources of confounds:
- Nonindependence (see above)
- Maturation: a process internal to the participant, due to the passage of time, that may affect the dependent variable
Ways confounds can operate:
- Affects a subset of conditions: reduces internal validity (a flawed study)
- Affects all conditions (all participants): reduces external validity (aka global confounds)
Differences in how we administer our measures during the study can also lead to instrumentation problems. Suppose for our baseline measurement we have participants complete a paper version of the self-esteem measure while waiting for the study to begin. After watching several reality TV shows, the participants complete an online version of the same self-esteem measure as the posttest. We now have a potential confound in our study: we may have unintentionally manipulated another variable in addition to our independent variable, meaning that any changes in participants' scores could be due to differences in how we administered the measurement, not to our independent variable, making causal claims difficult to assert.

Naturalistic observation: Methods to make uncontaminated observations

How to make uncontaminated observations:
- Participant observation: observing behavior while participating in the situation
- Unobtrusive observation: observing participants' behavior without their knowledge
- Contrived observation (e.g., the Princeton Seminary students study)
Limitations of the observer: low-constraint studies often rely on the observational skills of the researcher.
- Experimenter reactivity: any action by the experimenter that influences participants' responses
- Experimenter bias: any impact that the experimenter's expectations have on observations or their recording

Main effect

In a factorial design, the overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable.

Control group

In an experiment, the group that is not exposed to the treatment; contrasts with the experimental group and serves as a comparison for evaluating the effect of the treatment.

IRB

Institutional Review Board; reviews research in advance to ensure ethical considerations are met

interrater reliability

Inter-Rater Reliability refers to statistical measurements that determine how similar the data collected by different raters are. A rater is someone who is scoring or measuring a performance, behavior, or skill in a human or animal. Examples of raters would be a job interviewer, a psychologist measuring how many times a subject scratches their head in an experiment, and a scientist observing how many times an ape picks up a toy.
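A minimal sketch of quantifying inter-rater reliability for two hypothetical raters who coded the same ten observations, using simple percent agreement and Cohen's kappa (scikit-learn is an assumed dependency here; the codes are invented).

```python
# Compare two raters' codes of the same observations (hypothetical data).
from sklearn.metrics import cohen_kappa_score

rater_a = ["hit", "miss", "hit", "hit", "miss", "hit", "hit", "miss", "hit", "hit"]
rater_b = ["hit", "miss", "hit", "miss", "miss", "hit", "hit", "miss", "hit", "hit"]

# Simple percent agreement: proportion of observations coded identically
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa corrects percent agreement for agreement expected by chance
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```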

Non-experimental studies: External and internal validity

Internal validity: making sure the change in the dependent variable is because of the independent variable; the degree to which we can rule out other possible causal explanations for an observed relationship between the independent and dependent variables.
External validity: how generalizable the results are to other settings, other times, other participants; the extent to which study findings are applicable or generalize outside the data collection setting to other persons, in other places, at other times.

Disadvantages of factorial designs

Main effects ignore the effects of the other variables.
Interaction: does the main effect of one variable depend on the level of the other? How do combinations of variables affect the results?
Higher-order interactions (e.g., in 2x2x2 designs!)

Internal consistency

One common strategy for assessing the internal consistency of a scale is calculating a statistic called Cronbach's alpha (Cronbach, 1951). This statistic evaluates how well the individual scale items "hang together," or are consistent with each other. The values for Cronbach's alpha can range from 0 to 1.0. Zero means that there is no internal consistency among the items in the scale; they are essentially measuring completely different things. Conversely, 1.0 means that all of the items measure the same exact thing. While this may sound ideal, it really is not, because it implies there is complete redundancy among the items; in effect, you are asking the same exact question, just in slightly different ways. For acceptable internal consistency reliability, you ideally want your Cronbach's alpha to be at least .70.
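A minimal sketch of computing Cronbach's alpha by hand from hypothetical item responses, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the response matrix is invented.

```python
# Cronbach's alpha for a 4-item scale (rows = participants, columns = items).
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
], dtype=float)

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # >= .70 is the common benchmark
```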

Operationalization

Operationalizing a variable means putting a variable into valid, precise, and measurable terms. The operational definition represents how we will use (or put into operation) the variables in our study. If we are concerned with the effect of media violence on aggression, then we need to be very clear about what we mean by the different terms. Ex. aggression measured by the number of arrests for murder.

Within Subjects Design: Repeated Measures

Participants are measured on the dependent variable after exposure to each level of the independent variable. A repeated-measures design is another type of within-subjects design where we expose participants to each level of the independent variable, measuring each participant on the dependent variable after each level. Unlike the pretest-posttest design, there is no baseline measurement For example, in a candy taste test, the researcher would want every participant to taste and rate each type of candy

Measurement and Statistics: Standard p-value for "significance"

Remember here that statistical significance (represented by p) is the probabilistic indication (really just a percent likelihood) of how much confidence we have that the two groups differ. If the t-test for independent means is significant (p < .05), we can be fairly confident that our results represent a real difference between the groups. If the t-test for independent means fails to reach significance (p > .05), there is not enough evidence to suggest that the groups are different.
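A minimal sketch of this logic using scipy's independent-samples t test on hypothetical scores from two groups; the group names and numbers are invented for illustration.

```python
# Independent-samples t test; p < .05 is the conventional cutoff for "significance".
from scipy.stats import ttest_ind

treatment = [78, 85, 90, 74, 88, 92, 81, 79]  # hypothetical group 1 scores
control   = [70, 72, 68, 75, 71, 69, 74, 73]  # hypothetical group 2 scores

t, p = ttest_ind(treatment, control)
if p < .05:
    print(f"t = {t:.2f}, p = {p:.3f}: fairly confident the groups really differ")
else:
    print(f"t = {t:.2f}, p = {p:.3f}: not enough evidence of a difference")
```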

Observational Research: Laboratory observation

Sampling behavior: obtain (1) an uncontaminated record of (2) behaviors in a natural environment

APA format

Title page: a concise title describing the study (in the upper half of the page, centered); list of authors (name, no title or degree) and their affiliations (university, etc.)
Abstract: a concise summary; goes on a separate page (page 2); often written last; abstracts are published in computer databases
Introduction: number the first text page as page 3; type and center the title of the paper at the top of the page; type the text double-spaced, with all sections following each other without a break
Method: Participants (who, how many, how selected); Procedure (describe how you conducted the experiment); Materials or Apparatus (optional)
Results: descriptive statistics; inferential statistics in APA format, e.g., reporting a t test: t(38) = 3.21, p < .05, d = 1.04; figure and/or table
Discussion: recap results; compare results to prior research (from the Introduction); interpret results; problems/limitations; directions for future research

Specific qualitative approaches (e.g., phenomenological approaches, etc...)

1. Phenomenological approach: understand a human experience and its meaning based on how those involved view the situation. Uses situated analysis: the researcher examines a topic while it is embedded within its naturally occurring context, capturing information as it naturally occurs.
2. Grounded theory technique: the researcher has no explicit theories or hypotheses; uses information from participants to generate categories and build theory.
3. Action research: active involvement of participants to help shape the research's focus. E.g., education research (sex education), philanthropic research, Streetknits (the homeless are invisible; research projects to create infographics to increase the visibility of homelessness).
4. Ethnography: detailed, usually long-term, observations to situate a phenomenon in its proper cultural context.
5. Post-modern: questions the very assumptions of research (no claims to truth? researchers unavoidably influence research); narratives, like storytelling.

Demand Characteristics

a cue that potentially makes participants aware of what the experimenter expects

Confounding variables

a factor other than the independent variable that might produce an effect in an experiment; a variable that the researcher unintentionally varies along with the manipulation

Placebo

an inert treatment; in a placebo group, participants believe they are getting the treatment, but, in reality, they are not.

Random Sampling

a sample that fairly represents a population because each member has an equal chance of inclusion

correlation matrix

a table that summarizes a series of correlations among several variables; strong relationships appear as values close to 1 or -1
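A minimal sketch of building a correlation matrix with pandas; the variable names and values are hypothetical.

```python
# Correlation matrix for several variables (hypothetical data).
import pandas as pd

data = pd.DataFrame({
    "self_esteem": [30, 25, 40, 35, 28, 38],
    "gpa":         [3.1, 2.8, 3.9, 3.5, 2.9, 3.7],
    "hours_sleep": [7, 6, 8, 7, 5, 8],
})

print(data.corr())  # values near +1 or -1 indicate strong relationships
```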

Non-experimental studies: History

a threat to the internal validity of a study due to an external event potentially influencing participants' behavior during the study

Non-experimental studies: Maturation

a threat to the internal validity of a study stemming from either long-term or short-term physiological changes occurring within the participants that may influence the dependent variable

Within Subjects Design: Longitudinal

a type of within-subjects design where we repeatedly measure participants on the dependent variable over an extended period of time

Within Subjects Design: Pre-test/post-test

a within-subjects design where participants are measured on the dependent variable before and after exposure to a treatment or intervention; participants are measured twice on the same dependent variable, once at the beginning of the study and again at the end. We call the initial assessment in a pretest-posttest design the baseline measurement or "pretest"; this measurement tells us what participants were like at the onset of the study and prior to any treatment or intervention.

Manipulation checks

an extra dependent variable that researchers can include in an experiment to determine how well an experimental manipulation worked Following the test, we will give each participant a manipulation check, which is a measure that helps determine whether the manipulation effectively changed or varied the independent variable across groups Because we were manipulating whether participants could check their smartphones, we need to see if the two groups differed in their perceptions of the permissiveness to check their phones during the study. We can simply ask them to rate on a scale of 1 to 7 how permissible it was to check their phones. If we find that there is no difference between the high-restriction and low-restriction groups on this question, then we cannot be certain we adequately manipulated our independent variable

Random Assignment

assigning participants to experimental and control groups by chance, thus minimizing preexisting differences between those assigned to the different groups
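A minimal sketch of random assignment for 20 hypothetical participants: shuffling by chance and splitting the list gives every person an equal chance of landing in either group.

```python
# Randomly assign participants to experimental and control groups.
import random

participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)                    # chance determines the order

half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]

print("experimental:", experimental_group)
print("control:     ", control_group)
```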

Shortcomings of intuitive scientific reasoning (e.g., availability heuristic)

Availability heuristic: judgments about the likelihood of an event or situation occurring based on how easily we can think of similar or relevant instances
Representative heuristic: we determine the likelihood of an event by how much it resembles what we consider to be a "typical" example of that event
Better-than-average effect: most people tend to consider themselves above average with respect to socially desirable qualities
Overconfidence phenomenon: we tend to be overly confident in the correctness of our judgments
Hindsight bias: the "I knew it all along" phenomenon
Confirmation bias: we generally like to be right and therefore tend to focus on information that proves that we are
Focusing effect: to help confirm our preexisting beliefs, we often emphasize some pieces of information while undervaluing others

Royale is a tattoo artist at The Ink Spot. He believes that getting a tattoo makes people become more outgoing, brave, and confident. To test this, he finds 60 people who want to get a tattoo and randomly gives half of them a tattoo on their shoulder blade, while the other half get a temporary tattoo in the same spot. Royale asks everyone to keep a daily diary that he will use to determine how outgoing, brave, and confident participants act over the next 3 months. Once the study is underway, Royale becomes worried that some of his participants were already outgoing and finds out that about a dozen already had tattoos. Should Royale be concerned? Why or why not?
a) Yes, if people were already outgoing, then a tattoo cannot have any effect on them.
b) Yes, having people who already have tattoos ruins the whole premise of the study.
c) No, random assignment should balance these types of differences out between groups.
d) No, a dozen people isn't enough to worry about.

C

Floor and ceilings effects

Ceiling effect: occurs when a measurement tool's upper boundary is set too low, leading everyone to select the highest response, which provides an imprecise and inaccurate representation of the information we hope to gather. If you were to ask participants their annual income, but the highest level they could select was "$35,000 per year," you might get a ceiling effect, in which participants would tend to provide responses at the top end of the scale: because the highest option is $35,000, every person with this salary or higher would be forced to answer the same.
Floor effect: all of the responses are at the low end of the scale, which would happen if you asked, "How likely would you be to go skydiving without a parachute? 1 = not at all likely; 7 = very likely." In this case, the vast majority of participants (or all of them, we hope) would not skydive without a parachute, so they would all answer "1." With so many scores at the low end of the scale, the measure essentially becomes useless.

Interval

classifies + order + equal intervals (e.g., levels of agreement); you know the amount of difference between answers

Ratio

classifies + order + equal intervals + a true 0; ratios are interpretable (e.g., the Kelvin scale); there is an absolute zero. Ex. the difference between zero degrees and one degree is the same as the difference between fifty degrees and fifty-one degrees.

Direct vs. conceptual replication

conceptual: A replication study in which researchers examine the same research question (the same conceptual variables) but use different procedures for operationalizing the variables. direct: A replication study in which researchers repeat the original study as closely as possible to see whether the original effect shows up in the newly collected data. Also called exact replication

Measurement and Statistics: Validity (construct, internal, external, statistical)

Construct validity: the degree to which the scale actually measures the desired construct; established by evaluating the convergent and discriminant validity of the measurement
Internal validity: the degree to which we can rule out other possible causal explanations for an observed relationship between the independent and dependent variables
External validity: the extent to which study findings are applicable or generalize outside the data collection setting to other persons, in other places, at other times
Statistical validity: the extent to which the conclusions drawn from a statistical test are accurate and reliable; to achieve statistical validity, researchers must have an adequate sample size and pick the right statistical test to analyze the data

Between Subjects Design: Matched Group design

create a set of two participants who are highly similar on a key trait (e.g., two texting addicts), then randomly assign one to the experimental group and the other to the control group

Deductive and Inductive reasoning

Deductive: examines a general idea and then considers specific actions or ideas
Inductive: the reverse process; one builds from specific ideas or actions to a general idea

divergent validity

demonstrated by showing little or no relationship between the measurements of two different constructs; The construct you are measuring differs from (is not correlated too strongly with) measurements of a similar but distinct trait Measures can have one of the subtypes of construct validity and not the other

Face, content, construct, criterion validity

Face validity: the degree to which the scale appears to measure the intended construct. One of the problems with assessing face validity is its subjectivity; it depends on what the observer perceives (Sartori, 2010). In addition, if the scale makes it too obvious to participants what you are trying to measure, it may introduce demand characteristics and the potential for social desirability bias.
Content validity: the degree to which the items reflect the range of material that should be covered (Carmines & Zeller, 1979). Have you ever taken a test that you believed was unfair because it did not cover much of the material that you had learned in class? If so, that test had poor content validity. A scale with good content validity covers the basic aspects associated with the measured variable.
Construct validity: the extent to which the scale actually measures the construct we want (Carmines & Zeller, 1979). If we want a scale that assesses attitude toward joining a fraternity or sorority, we should make sure that it does not actually measure something different, such as attitude toward partying in general or succeeding in college. To establish a scale's construct validity, we evaluate two specific types of validity (Campbell & Fiske, 1959; Cronbach & Meehl, 1955):
- Convergent validity: the degree to which scores on the scale correlate with participants' scores on measures of other theoretically related variables. For example, because Greek organizations are opportunities for extracurricular involvement, participants' attitudes toward joining a fraternity or sorority should correlate with attitudes toward becoming involved in campus life in general; if they do not correlate, we will need to question our scale's convergent validity.
- Discriminant validity: basically the opposite of convergent validity; the degree to which a measurement does not correspond to measures of unrelated variables. We would have concerns about discriminant validity if we found that scores on our scale were highly correlated with a scale measuring interest in joining a church.
Criterion validity: how strongly the scale relates to a particular outcome or behavior; established by evaluating the concurrent and predictive validity of the measurement. Our attitude scale should correlate with the person's likelihood of actually seeking out membership in a fraternity or sorority; if people score high on our scale, they really should be more likely to join a Greek organization.
- Concurrent validity: the degree to which a measurement corresponds with an existing outcome or behavior.
- Predictive validity: the degree to which a measurement corresponds with a particular outcome or behavior that occurs in the future.

ANOVA

An analysis of variance (ANOVA) is used when you have more than two groups of one independent variable (one-way ANOVA) or more than one independent variable (factorial ANOVA).
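A minimal sketch of a 2x2 factorial ANOVA using the statsmodels library; the IVs (sleep, caffeine), the DV (score), and the data are all hypothetical placeholders.

```python
# 2x2 factorial ANOVA: two categorical IVs, one numeric DV (hypothetical data).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "sleep":    ["low", "low", "low", "low", "high", "high", "high", "high"] * 2,
    "caffeine": ["no", "no", "yes", "yes"] * 4,
    "score":    [3, 4, 6, 7, 6, 5, 9, 10, 4, 3, 7, 6, 5, 6, 10, 9],
})

# C() treats each IV as categorical; '*' requests both main effects and the interaction
model = ols("score ~ C(sleep) * C(caffeine)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```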

Types of response bias on scales

https://kwiksurveys.com/blog/survey-design/response-bias

t test

If you're conducting an independent-samples t test, you have one independent variable with two groups. A paired-samples t test is used when we are interested in the difference between two measurements for the same subjects.
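A minimal sketch contrasting the two tests with scipy on hypothetical data; the groups and scores are invented for illustration.

```python
# Independent-samples vs. paired-samples t tests (hypothetical data).
from scipy.stats import ttest_ind, ttest_rel

# Independent-samples: two separate groups of people
group_a = [12, 15, 14, 10, 13, 16]
group_b = [9, 11, 10, 8, 12, 10]
print("independent:", ttest_ind(group_a, group_b))

# Paired-samples: the same people measured twice (e.g., shallow vs. deep processing)
shallow = [5, 6, 4, 7, 5, 6]
deep    = [8, 9, 7, 9, 8, 10]
print("paired:     ", ttest_rel(shallow, deep))
```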

extraneous variables

a variable that influences two things that are associated but not causally related; the two things are related only because of the third variable. Ex. shoe size and the ability to read are related because, as children age, shoe size increases (and so does reading ability).

Heuristics

Mental shortcuts; quick rules of thumb people use to make decisions. They don't integrate a lot of information, are quick and efficient, and work well in natural environments; they generate something you can use (e.g., determining a restaurant tip).

Non-experimental studies: regression to the mean

the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement, and if it is extreme on its second measurement, it will tend to have been closer to the average on its first. Ex. coaches might have a tendency to say, "I should only provide feedback when my players do badly, because they then do better; when I give feedback after a good performance, they perform worse." Really, the players are just statistically returning to their normal level after performing at their best (or worst).

Survey: Design

quantitative research strategy in which we systematically collect information from a group of individuals in order to apply our findings more generally to other, larger groups

convergent validity

scores on the measure are related to other measures of the same construct; Multiple measures of the same construct

Non-experimental studies: Attrition

the differential dropping out of participants from a study; also known as mortality if the participant dies; hurts internal validity

within-subjects design

when all participants experience all conditions. Ex. everyone experiences both the shallow-processing words and the deep-processing words; run a paired-samples t test because the data come from the same people.

