PSY 213: Exam 2


Experimental Research

--Finding a relationship between two variables is easy, but the hard part is demonstrating that one variable causes a change in the other. -Experiment: a scientific test where you change one thing to see what happens. -Groups: one group gets the special treatment (e.g., listening to music) and the other doesn't (the control group). -Measure: you measure or observe something to see if there's a change. -Compare: you compare the results to decide whether your prediction was right. -Learn: you learn something new about the world! --Experimental research is like being a scientific detective: you set up a special test to see whether your idea about how things work is true. It's a fun way to explore and understand the world around us!

Experimental Control:

--A set of tools that allows an experimenter to eliminate confounds and control potential 3rd variables, ensuring that the IV is causing changes in the DV (not something else) --Tools for eliminating systematic variation (confounds): A. Holding constant B. Matching C. Random assignment D. Control groups

Q2: Do you think the following study has a threat to external validity? Why or why not?: ("Australian researchers study the effect of long term use of computer monitors on human vision. They recommend that people spend no more than 30 consecutive minutes in front of a computer monitor or damage to the visual system may result.")

--Only a threat if Australians' visual systems differ from those of people in other countries --Otherwise, there isn't an external validity threat

Internal Validity: Confounds EX

--We were interested in measuring someone's aggression based on whether they won or lost playing virtual checkers. -IV: win or lose condition -DV: aggression measure --Confound: temperature of the room (when you have a confound, you can't say the IV caused the DV)

Q2: Using the fidget spinner scenario discussed in class, give an example of how you would exercise experimental control in this design by matching.

--Split the class into small groups around the classroom so every group has an equal chance. --If the confound is where students are sitting: --Match so there are no differences across the room -Equal match: among students in the front, some have a fidget spinner and some don't; the same for the back, the right side, and the left side of the room. With matching, wherever you sit, the conditions are balanced.

Defining and Measuring Variables: Hypothesis

-A hypothesis is a testable statement or prediction about the relationship between two or more variables. -It is derived from a theory and serves as a specific, measurable proposition that can be tested through research. -Hypotheses are used to guide the design of research studies and to determine the specific data to be collected. A. Make sure your hypothesis is specific, testable, and clear. It should propose a relationship between variables that can be investigated.

Defining and Measuring Variables: Theory

-A theory is a well-substantiated and comprehensive explanation of some aspect of the natural world. It is a systematic and organized set of principles that has withstood repeated testing and scrutiny -while theories themselves are not directly tested, they provide a basis for formulating testable hypotheses. The empirical testing of hypotheses contributes to the validation or modification of a theory. -A theory is a set of statements—as simple as possible—that describes general principles about how variables relate to one another.

Defining and Measuring Variables: construct

-A variable of interest, stated at an abstract level, usually defined as part of a formal statement of a psychological theory. See also conceptual variable. -When researchers are discussing their theories and when journalists write about their research, they use more abstract names, called constructs -Constructs are broad concepts or topics for a study. Constructs can be conceptually defined in that they have meaning in theoretical terms.

Extraneous Variable (EV)

-Any variable you didn't measure that ties to a person is an extraneous variable: the "other stuff" -Anything that isn't your IV or DV in your study -Definition: Extraneous variables are like unexpected guests at your experiment. They are things that you didn't plan on studying but might still affect your results. -Example: If you're testing whether a new study method helps you remember things better, but then you find out that everyone who used the new method also got more sleep the night before, sleep becomes an extraneous variable. You didn't plan on studying sleep, but it could still influence the results.

Types of Claims: Association

-Association claims assert that there is a relationship or association between two or more variables. However, they do not imply causation. -Example: "There is a positive correlation between exercise frequency and mental well-being." -Characteristics: These claims identify patterns or relationships but do not establish a cause-and-effect connection. Correlation does not imply causation, and there may be confounding variables influencing the observed association. -An association claim states a relationship between at least two variables. To support an association claim, the researcher usually measures the two variables and determines whether they're associated. This type of study, in which the variables are measured and the relationship between them is tested, is called a correlational study. Therefore, when you read an association claim, you will usually find a correlational study supporting it.

Threats to Validity: Population

-Can our results generalize? --Population: other participants, cultures, genders, ages, etc. The issue is generalizing to different groups. --For example, a study of college students' preferences: can it be generalized? Can it generalize to older people? Is this study relevant to older people, not just college students? -Population validity, an aspect of external validity, refers to the extent to which the results of a study generalize to, or have relevance for, a larger population beyond the specific individuals who participated in the study. In other words, it assesses how well the findings can be applied or extended to people, settings, and conditions outside the scope of the study.

Form (Linear vs everything else)

-In some cases correlation is helpful, while in other cases it isn't. -You can have a linear relationship where one variable goes up as the other goes up, or one goes down as the other goes down, or one goes up while the other goes down. The variables can also be independent (no relationship) or related in a curvilinear way. -Curvilinear relationship: the pattern follows a curved line

Types of Claims: Causal

-Causal claims go a step further by asserting a cause-and-effect relationship between two variables. They suggest that changes in one variable cause changes in another. -Example: "Increased physical activity leads to improved mental well-being." -Characteristics: Establishing causation requires more than just observing a relationship; it involves demonstrating that changes in one variable directly influence changes in another through experimental designs or other rigorous methods. -Whereas an association claim merely notes a relationship between two variables, a causal claim goes even further, arguing that one of the variables is responsible for changing the other. Note that each of the causal claims above has two variables, just like association claims: -Causal claims, however, go beyond a simple association between the two variables. They use language suggesting that one variable causes the other—verbs such as cause, enhance, affect, decrease, and change. In contrast, association claims use verbs such as link, associate, correlate, predict, tie to, and be at risk for.

Correlation does NOT mean CAUSATION:

-Causation requires all of the following: 1. Correlation 2. Temporal precedence: you know that X happens first and then you measure Y; we expose the person to X, then we measure Y 3. Ruling out all other 3rd variables (a nonspurious relationship): isolate any other variables so you can say that X causes Y

Construct Validity:

-Construct Validity: **Answers the question: does this experiment really measure what the researcher claims it measures? **Construct validity is all about making sure that you're really measuring what you think you're measuring when you're doing research. In other words, it's about ensuring that the way you're looking at or testing a concept actually represents that concept accurately.

Potential Pitfalls & Solutions: Counterbalancing (one of the solutions; just learn this one)

-Counterbalanced method: A. An equal number of subjects gets each order of conditions Ex (drinking and driving): A. One group gets water first, then alcohol B. The second group gets alcohol first, then water C. Making the groups equal lets you see whether there's any carryover effect -Does alcohol impair your ability to drive? If someone drinks alcohol, then drinks water before driving but still can't drive, that's a carryover effect: they're still drunk from the earlier condition. -Counterbalancing is how we find out whether there is a carryover effect. -You still split the sample into two groups; everyone gets every level of the IV, just in a different order. For example: group 1 does water then drive, followed by alcohol then drive; group 2 does alcohol then drive, followed by water then drive. -If people still can't drive after the water condition, the alcohol is being carried over from the start to the end.
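The order assignment described above can be sketched in code. This is a minimal illustration with made-up participant labels (the function name is my own, not anything from class): full counterbalancing cycles through every possible order of the conditions so each order is used equally often.

```python
import itertools

def counterbalance(participants, conditions):
    """Give each participant one ordering of the conditions, cycling
    through all possible orders so each order is used equally often."""
    orders = list(itertools.permutations(conditions))
    assignment = {}
    for i, person in enumerate(participants):
        assignment[person] = orders[i % len(orders)]
    return assignment

# Hypothetical drinking-and-driving design with two condition orders
people = ["P1", "P2", "P3", "P4"]
plan = counterbalance(people, ["water", "alcohol"])
# P1/P3 get (water, alcohol); P2/P4 get (alcohol, water)
```

With two conditions there are only two orders, so half the sample gets each; with three conditions there would be six orders to spread across participants.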

Types of Validity:

-Discriminant validity: the extent to which a test is NOT related to other tests that measure different constructs. A construct is a behavior, attitude, or concept, particularly one that is not directly observable. The expectation is that two tests reflecting different constructs should not be highly related to each other; if they are, you cannot say with certainty that they are not measuring the same construct. -Construct validity: whether a measurement tool really represents the thing we are interested in measuring; central to establishing the overall validity of a method. Ex: There is no objective, observable entity called "depression" that we can measure directly, but based on existing psychological research and theory, we can measure depression through a collection of symptoms and indicators, such as low self-confidence and low energy levels. -Content validity: assesses whether a test is representative of all aspects of the construct. Ex: A mathematics teacher develops an end-of-semester algebra test for her class. The test should cover every form of algebra taught in the class; if some types of algebra are left out, the results may not accurately indicate students' understanding of the subject. If she includes questions that are not related to algebra, the results are no longer a valid measure of algebra knowledge. -Face validity: how suitable the content of a test seems to be on the surface. It's similar to content validity, but face validity is a more informal and subjective assessment. Ex: You create a survey to measure the regularity of people's dietary habits and review the items, which ask about every meal of the day and snacks eaten in between for every day of the week. The survey seems like a good representation of what you want to test, so its face validity is high.

Discriminant/Divergent Validity

-Divergent (aka Discriminant) Validity: *Show a lack of correlation with a different (potentially explanatory) construct A. Divergent measure: we want to rule out the possibility that our measure is capturing a related construct (e.g., activity level, measured by counting the total duration of running during recess) B. The two measures shouldn't be correlated (rule out the possibility that you're measuring something else by showing the measures are not correlated) --Ex: aggression (watching kids go wild on the playground: kicking, swinging, running, etc.). There are two sides to this: "I'm pushing you because I'm taking out my anger on you" versus "I was going wild and didn't realize I took my aggression out on you." --Discriminant (Divergent) Validity: a concept in research that ensures different measurements or tests are truly measuring distinct and unrelated concepts. In simpler terms, it's about making sure that each test is unique and doesn't overlap too much with other tests. Ex: Imagine you have two tests—one to measure how good you are at playing soccer and another to measure how good you are at playing the piano. If these tests have good discriminant validity, your soccer test results won't tell us anything about your piano skills, and vice versa.

Validity:

-Do the numbers or measurements, even if they are reliable, answer my research question? -More notes: we need reliability for validity --Ex: measuring weight Analog scale: Trial 1: 131 lbs Trial 2: 129 lbs Trial 3: 132 lbs Trial 4: 134 lbs Digital scale: Trial 5: 125.9 lbs Trial 6: 126.4 lbs Trial 7: 126.2 lbs Trial 8: 126.1 lbs -(You measure yourself 4 times on each scale; there's always some error in measurement.)
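To make the scale example concrete, here is a tiny Python sketch using the readings listed above. The `spread` helper is my own illustration: the tighter the cluster of repeated readings, the more reliable the instrument.

```python
def spread(measurements):
    """Range of repeated readings: a smaller spread means a more
    consistent, and therefore more reliable, instrument."""
    return max(measurements) - min(measurements)

analog = [131, 129, 132, 134]            # lbs, trials 1-4
digital = [125.9, 126.4, 126.2, 126.1]   # lbs, trials 5-8
# spread(analog) is 5 lbs; spread(digital) is about 0.5 lbs,
# so the digital scale is the more reliable instrument
```

Note that reliability alone doesn't make the measure valid — the digital scale could be consistently wrong — which is exactly the reliability-vs-validity distinction in these notes.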

Correlational Research :

-Examine the relationship between 2 variables : A. X and y -Correlation says that x is related to y, but A. Does x cause y? B. Does y cause x? C. Does z cause both x and y? (3rd variable problem) -Relationship does NOT imply causation!

Confounds

-For any given research question, there can be several possible alternative explanations, known as confounds or potential threats to internal validity. -The word confound can mean "confuse": When a study has a confound, you are confused about what is causing the change in the dependent variable. -Confounding variables (a.k.a. confounders or confounding factors) are a type of extraneous variable that are related to a study's independent and dependent variables. A variable must meet two conditions to be a confounder: --It must be correlated with the independent variable. This may be a causal relationship, but it does not have to be. --It must be causally related to the dependent variable. ---definition: A confound is something extra that you didn't mean to include in your experiment, and it can mess up your results. ---Example: If you're testing whether eating a new snack makes people feel happier, but then you realize that everyone testing it is in a room with fun music and bright colors, the music and colors could be confounds. You won't know if people are happier because of the snack or because of the music and colors.

3rd variable problem

-When a 3rd variable correlates with one or both of the two variables of interest -And reasonably explains away or accounts for the observed correlation

Types of Claims: Frequency

-Frequency claims describe the rate or extent of a single variable. They focus on how often a particular phenomenon occurs. -Example: "60% of adults in the United States own a smartphone." -Characteristics: These claims provide information about a single variable's prevalence, occurrence, or distribution but do not imply a relationship between variables. -Frequency claims are easily identified because they focus on only one variable—such as level of food insecurity, rate of smiling, or amount of texting. - In addition, in studies that support frequency claims, the variables are always measured, not manipulated. For example, the researchers measured children's food insecurity by using a questionnaire or an interview and reported the results.

Internal Validity

-Internal validity : **Refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor --Internal validity refers to the extent to which an experiment or study accurately measures the relationship between the variables it intends to investigate. In simpler terms, it assesses whether the observed effects within an experiment can be confidently attributed to the manipulation of the independent variable and not to other factors. --One of three criteria for establishing a causal claim; a study's ability to rule out alternative explanations for a causal relationship between two variables. Also called third-variable criterion. See also covariance, temporal precedence.

Independent Variable: Experimental Research

-Manipulation: change variable to create multiple conditions, also called levels of the independent variable (IV) Question: You're curious about something, like whether the amount of sunlight affects how fast plants grow. Setup: You decide to test this by growing plants under different amounts of sunlight. Some plants get a lot of sunlight, and some get only a little. Independent Variable: Now, the thing you're changing on purpose is the amount of sunlight. That's your independent variable. It's called "independent" because you, the scientist, can change it independently—on your own.

Experimental Control:Matching

-Match participants (or stimulus materials) on the confounding variables across levels of the IV (all conditions) -Therefore, for every level of the IV, there exists one item/person with the same value of the potential confound Ex 1: shoe size For size 4: we got 5 children at each of ages 5, 6, 7, and 8 For size 6: we got 5 children at each of ages 5, 6, 7, and 8
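A rough sketch of matching in code, using the shoe-size example. The helper name and the child records are hypothetical; the idea is to keep a participant only when every level of the IV has someone with the same value of the potential confound (here, age).

```python
from collections import defaultdict

def match_on(participants, confound_key, group_key, groups):
    """Keep participants only for confound values represented in
    every group, taking one participant per group per value."""
    by_value = defaultdict(lambda: defaultdict(list))
    for p in participants:
        by_value[p[confound_key]][p[group_key]].append(p)
    matched = []
    for value, per_group in by_value.items():
        if all(g in per_group for g in groups):
            for g in groups:
                matched.append(per_group[g][0])
    return matched

# Hypothetical children: the age-7 child has no size-6 match, so they drop out
kids = [
    {"id": 1, "age": 5, "size": 4}, {"id": 2, "age": 5, "size": 6},
    {"id": 3, "age": 6, "size": 4}, {"id": 4, "age": 6, "size": 6},
    {"id": 5, "age": 7, "size": 4},
]
matched = match_on(kids, "age", "size", groups=[4, 6])
```

After matching, both shoe-size groups contain the same set of ages, so age can no longer explain a difference between them.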

Strength of correlation Coefficients :

-No relationship / no correlation: ±.00 to ±.09 -Small / weak correlation: ±.10 to ±.29 -Medium / moderate correlation: ±.30 to ±.49 -Large / strong correlation: ±.50 to ±1.0
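These cutoffs can be captured in a small helper function (the function name is my own; the cutoffs are the ones listed above). Remember that the sign of r only gives the direction — strength depends on the absolute value.

```python
def correlation_strength(r):
    """Label a correlation coefficient r using the cutoffs above."""
    size = abs(r)  # strength ignores the sign (direction)
    if size > 1:
        raise ValueError("r must lie between -1 and +1")
    if size < 0.10:
        return "none"
    if size < 0.30:
        return "small/weak"
    if size < 0.50:
        return "medium/moderate"
    return "large/strong"

# correlation_strength(-0.35) -> "medium/moderate"
```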

Study: Pepsi vs. Coke

-People didn't know which cup had which soda. *Pepsi wins! But: A. Label was confounded with beverage (L was always Coke and S was always Pepsi) *Label was a confounding variable *L = Coke in L-labeled cups *S = Pepsi in S-labeled cups B. In an experiment that randomly assigned the label (L or S), people chose S 85% of the time regardless of the beverage -For whatever reason, people liked drinking from the cup with the letter S on it Results: IV: Beverage (levels: Pepsi, Coke) caused the differences DV: Preference (ratings) Extraneous variables: gender, hair color, height Confound: Label

Causation:

-Recall that causation requires three things: **Correlation: the relationship between two variables **Temporal precedence: the cause comes before the effect -An experiment ensures this by manipulating the IV: the cause (IV) comes before the effect (DV); we manipulate the IV and measure its effect on the DV **Eliminating other explanations or 3rd variables: we get this through experimental control, and if successful, we have strong internal validity -When an alternative explanation slips into a design, we call it a confound

Experimental Control: Random assignment

-Reminder: sampling is how we select people from the population -Assignment is how we place that sample into the conditions of the experiment (levels of the IV) -Random assignment: for each participant from the sample, randomly assign them to a condition (flip a coin, use a random number generator) --Any differences between individuals should be spread equally across conditions by chance Ex 1: shoe size/reading ability: random assignment is not possible --Quasi-experiment: an experiment that lacks random assignment Ex 2: hunger/aggression: we could randomly assign people to these conditions and control the temperature in both rooms; this way we are certain that changes in aggression are not due to all of the hungry people ending up in one condition
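Random assignment is easy to sketch in code. This is a hypothetical helper (not anything specified in class): shuffle the sample, then deal participants out to the conditions so group sizes stay equal while individual differences spread across conditions by chance.

```python
import random

def random_assign(sample, conditions, seed=None):
    """Randomly assign each participant to a condition, keeping
    group sizes (nearly) equal: shuffle, then deal out in turn."""
    rng = random.Random(seed)  # seed only for reproducible demos
    people = list(sample)
    rng.shuffle(people)
    assignment = {}
    for i, person in enumerate(people):
        assignment[person] = conditions[i % len(conditions)]
    return assignment

# Hypothetical hunger/aggression study with six participants
groups = random_assign(["A", "B", "C", "D", "E", "F"],
                       ["hungry", "fed"], seed=1)
# Three participants land in each condition; which three is random
```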

Sampling Plans

-Sample Size: Determine the appropriate size of the sample. The sample size should be large enough to provide sufficient statistical power but small enough to be manageable within the constraints of time and resources. -Sampling Bias: Consider potential sources of bias that may affect the representativeness of the sample. Sampling bias occurs when the selected sample does not represent the target population, leading to inaccurate generalizations. -Sampling Precision: Define the level of precision or margin of error acceptable for your study. This is often expressed as a confidence interval or percentage. -Sampling Plan Documentation: Clearly document the entire sampling plan, including the target population, sampling frame, sampling method, sample size, and any adjustments made during the sampling process. -Randomization: In many sampling plans, randomization plays a crucial role. Random selection helps ensure that each member of the population has an equal chance of being included in the sample, reducing the risk of bias. -Data Collection Strategy: Specify how data will be collected from the selected sample. The data collection strategy should align with the research design and objectives.

Self-report - Self-Report Research:

-Self-report research involves gathering data directly from participants through their responses to questions or statements about their own thoughts, feelings, behaviors, or experiences. This method relies on individuals' introspection and willingness to share information about themselves. -Research Question: How satisfied are employees with the new workplace policies? -Method: Administer a survey asking employees to rate their satisfaction with various aspects of the workplace policies on a scale (e.g., from "very dissatisfied" to "very satisfied"). -A self-report measure operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview. Diener's five-item scale and the Ladder of Life question are both examples of self-report measures about life satisfaction.

Correlation

A measure of the relationship between two variables

Individual Difference/Quasi-Experimental Study design

-Sometimes the hypothesis is about those individual differences ---In summary, an individual differences study, often carried out using a quasi-experimental design, explores how existing variations among individuals in a particular characteristic relate to other outcomes of interest. It's a way of studying relationships in a more natural setting, where researchers don't control or manipulate specific variables. -Individual differences study (aka quasi-experimental): examines the preexisting differences between individuals **Ex: young vs. old, disabled vs. non-disabled, athletes vs. non-athletes -Quasi-experimental design (aka individual differences design): select a sample of people, divide them based on a preexisting difference (natural IV level 1 vs. natural IV level 2), then measure the DV -Ex: differences between right- and left-handed people—that's a quasi-experiment

Survey Research

-Survey research is a method of collecting data from a sample of individuals using standardized questionnaires or interviews. Surveys are designed to gather information on a population's attitudes, opinions, behaviors, or other characteristics. -Research Question: What are the eating habits of adolescents in a specific region? -Method: Develop a survey with questions about daily food intake, preferences, and eating behaviors. Administer the survey to a representative sample of adolescents in the target region. -Researchers use surveys and polls to ask people questions online, in door-to-door interviews, or on the phone. You may have been asked to take surveys in various situations: after you purchased an item from an Internet retailer, you got an e-mail asking you to post a review; while you were reading the news online, a survey might have popped up; a polling organization such as Gallup may have called your cell phone. -Researchers may ask open-ended questions that allow respondents to answer any way they like. -One specific way to ask survey questions uses forced-choice questions, in which people give their opinion by picking the best of two or more options.

Requirements for causation

-Correlation -Temporal precedence -Ruling out 3rd variables (the 3rd variable problem)

Threats to Validity: Temporal

-Temporal: generalizing to other periods (of the day, of the year) or generations ---Temporal validity, also known as temporal generalizability, refers to the extent to which the results of a study remain relevant and applicable over time. It involves considering whether the findings from a study conducted at one point in time can be generalized or applied to different time periods. This aspect of validity is particularly important when examining trends, behaviors, or phenomena that may change over time due to various factors. *Ex: morning vs. night *Ex: measuring people's moods in February vs. August (time of year) A. Study both months, because the results might not generalize if you don't.

Why use correlation?

-Testing for relationships between variables can be used to : A.Identify patterns and trends B.Use one variable to predict the score of a second variable C.Determine the likelihood of the pattern observed in the sample also being present in the population

Reliability:

-The consistency of the results of a measure -Reliability: the consistency of a research study or measuring test -Reliability refers to the precision of the data: are there systematic errors in the measurements? -Reliability is necessary for validity (which refers to whether the data answer the research question) -You can't answer any question with lousy data -Reliability: results are consistent

Dependent Variable

-The dependent variable is the variable that is observed or measured to assess the effect of the independent variable. It is the outcome variable that researchers are interested in understanding or explaining. -Role: The dependent variable is the presumed effect in a cause-and-effect relationship. It is the variable that researchers expect to be influenced by changes in the independent variable. -Example: In the same drug study, the dependent variable might be the participants' sleep quality, measured using a standardized sleep assessment tool. Changes in sleep quality are expected to be influenced by the administration of the drug.

ASSOCIATIONS:

-The headline "New study links exercise to higher pay" is an association in which high goes with high and low goes with low; it is called a positive association or positive correlation. Stated another way, high rates of exercise go with higher levels of pay, and low rates of exercise go with lower levels of pay. -In a negative association (or negative correlation), high goes with low, and low goes with high. In other words, high rates of coffee go with less depression, and low rates of coffee go with more depression. -The study behind the headline "A late dinner is not linked to childhood obesity, study shows" is an example of a zero or no association between the variables (zero correlation).

Independent Variable

-The independent variable is the variable that the researcher manipulates or controls. It is the variable that is hypothesized to cause a change in the dependent variable. -Role: The independent variable is the presumed cause in a cause-and-effect relationship. Researchers manipulate the independent variable to observe its effect on the dependent variable. -Example: In a study examining the impact of a new drug on sleep quality, the independent variable would be the administration of the drug. Different groups may receive different doses or a placebo.

Defining and Measuring Variables: Operational definitions

-The specific way in which a concept of interest is measured or manipulated as a variable in a study. Also called operationalization, operational variable. -When testing hypotheses with empirical research, they create operational definitions of variables, also known as operational variables, or operationalizations. To operationalize a concept of interest means to turn it into a measured or manipulated variable -For example, a researcher's interest in the construct "coffee consumption" could be operationalized as a structured question in which people tell an interviewer how often they drink coffee. Alternatively, the same construct might be operationalized by having people use an app in which they record everything they eat or drink for a period of time.

Threats to Validity: Ecological

-This aspect of population validity relates to the similarity between the study conditions and the real-world conditions to which the findings will be applied. High ecological validity means that the study settings, materials, and procedures closely resemble the situations in which the research is meant to be applied.

Avoid threats to internal validity: Experimenter Bias

-When the experimenter is aware of the IV, they : *Behave differently *Respond to subjects differently *Measure differently So use a single- or double-blind study -Single blind: experimenter unaware -double-blind : experimenter and participant both unaware

Q3: Do you think the following study has a threat to external validity? Why or why not?: ("A 1969 study found that college students spent the majority of their free time participating in political events. This finding is being used by a political campaign manager as justification for funding events at colleges for the 2016 presidential election.")

-Yes, there's a threat because the finding doesn't represent today's college students: 1969 is different from 2016, and things have changed.

Q1: --Do you think the following study has a threat to external validity? Why or why not?: ("A study about sleep patterns in young adults. SU undergraduates volunteer to participate at 7 am. The researchers conclude that the average bedtime is 10 pm.")

-Yes, there's a threat because the sample consists of people willing to show up at 7 am, who might have unusually early sleep schedules.

Scatter plot:

-Take the individual data and map it on the x- and y-axes to show where each person stands on the two variables A. You see the strength of the relationship B. Each dot is one observation, or a pair of data
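Each dot on a scatter plot is one (x, y) pair, and the Pearson correlation coefficient summarizes how tightly those dots follow a straight line. A from-scratch sketch (Python 3.10+ also offers `statistics.correlation` for this):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation for paired data; each (x, y) pair
    would be one dot on the scatter plot."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Dots falling exactly on a rising line give r = 1.0
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

A tight upward-sloping cloud gives r near +1, a tight downward slope gives r near -1, and a shapeless cloud gives r near 0.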

Experimental Control:Holding constant

1. Holding constant: -Hold the value of a potential confound constant across all levels of the IV -All items have the same value or a restricted range Ex 1: shoe size/reading ability: age (everyone is 8 years old) Ex 2: hunger/aggression: temperature in the room (every room is set to 72 degrees)

How do we measure reliability? Interrater reliability

1. Interrater reliability: the degree to which different raters give consistent estimates; the correlation between the ratings of different judges -Take scores from two or more judges and correlate them -If your test is reliable, the results will be positively correlated A. For interrater reliability, you need a high positive correlation B. If the judges' ratings are negatively correlated, something is wrong --With interrater reliability, two or more independent observers will come up with consistent (or very similar) findings. Interrater reliability is most relevant for observational measures. Ex: Suppose you are assigned to observe the number of times each child smiles in 1 hour at a childcare playground. Your lab partner is assigned to sit on the other side of the playground and make their own count of the same children's smiles. If, for one child, you record 12 smiles during the first hour and your lab partner also records 12 smiles in that hour for the same child, there is interrater reliability. Any two observers watching the same children at the same time should agree about which child has smiled the most and which child has smiled the least.
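The smile-counting example can be checked numerically: correlate the two observers' counts and look for a high positive r. The counts below are invented for illustration.

```python
import math

def interrater_r(rater1, rater2):
    """Correlate two raters' scores; a high positive r
    indicates good interrater reliability."""
    n = len(rater1)
    m1, m2 = sum(rater1) / n, sum(rater2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(rater1, rater2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in rater1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in rater2))
    return cov / (s1 * s2)

# Hypothetical smile counts for five children from two observers
you = [12, 7, 3, 9, 5]
partner = [11, 8, 3, 10, 4]
r = interrater_r(you, partner)  # close to +1: the observers agree
```

The two observers don't need identical counts; what matters is that they rank the children the same way, which is exactly what a strong positive correlation captures.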

Internal reliability (Split-half reliability)

1. Split-half reliability (cut the test in half): -Design a test that has different items assessing the same construct (e.g., 10 questions that assess classic psychological theory); we want to know whether this test is reliable -Randomly split the test into two halves -Correlate the scores against one another -If your test is reliable, the results will be positively correlated: it should show a strong positive correlation
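The procedure above — random split, total each half, correlate — can be sketched directly. The score matrix below is invented (rows are participants, columns are the 10 items); the function names are my own.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_half_reliability(item_scores, seed=0):
    """Randomly split the items into two halves, total each
    participant's score on each half, and correlate the halves."""
    n_items = len(item_scores[0])
    items = list(range(n_items))
    random.Random(seed).shuffle(items)  # random split of the items
    half_a, half_b = items[: n_items // 2], items[n_items // 2:]
    totals_a = [sum(row[i] for i in half_a) for row in item_scores]
    totals_b = [sum(row[i] for i in half_b) for row in item_scores]
    return pearson(totals_a, totals_b)
```

If participants answer all items consistently, the two half-scores track each other and the coefficient is near +1; a low value suggests the items aren't measuring the same construct.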

How do we measure reliability? Reliability: test-retest reliability =

1. Test-retest reliability: same measurement, different times Ex: a test (screenshot) they took twice EX: For test-retest reliability, suppose a sample of people took an IQ test today. --When they take it again 1 month later, the pattern of scores should be consistent: People who scored the highest at Time 1 should also score the highest at Time 2. Even if all the scores from Time 2 have increased since Time 1 (due to practice or schooling), the pattern should be consistent: The highest-scoring Time 1 people should still be the highest-scoring people at Time 2. Test-retest reliability can apply whether the operationalization is self-report, observational, or physiological, but it's most relevant when researchers are measuring constructs (such as intelligence, personality, or gratitude) that are theoretically stable. Happy mood, for example, may reasonably fluctuate from month to month or year to year for a particular person, so less consistency would be expected in this variable.

History = Common Threats to Internal Validity (a confound is one already)

1.History: uncontrolled events that happen mid-experiment --The research was underway, but what was happening in the real world impacted the results. EX: 9/11 -In a study that spanned 9/11: before 9/11, stress levels were low; after 9/11, stress levels were high. Why? Because of what was happening outside of the study. Definition: History threats occur when something external to the experiment happens during the course of the study that could explain the observed changes in the dependent variable, apart from the manipulated independent variable. Example: Imagine you're conducting an experiment to assess the effectiveness of a new teaching method on student performance. If, during the study, there's a major educational reform that impacts all students, the effects on student performance may be due to the reform rather than the teaching method. The external event (educational reform) becomes a history threat.

Instrumentation = Common Threats to Internal Validity (a confound is one already)

1.Instrumentation: a change in the measurement device itself, or in the ability to use it EX: --Running a study on activity - used an older, cheap pedometer and halfway through changed to a new Fitbit/Apple Watch (changed the instruments) Another ex: --The digital vs. analog scales we talked about in reliability Definition: Instrumentation threats occur when there are changes in the instruments or procedures used to measure the dependent variable, leading to a potential confounding effect on the results.

Maturation = Common Threats to Internal Validity (a confound is one already)

1.Maturation: participants change over time Ex: study on relationships -Running a study on relationship quality over time. However, some participants break up during the study. This leaves us with only couples who are very satisfied in their relationship. -Couples break up and don't want to do the study anymore -The data you have left is from people who are still together -It is harmful because not everyone continues, so you don't have the full data set; it's unfinished. Definition: Maturation threats arise when natural developmental or biological changes in participants occur during the course of the study, and these changes might be responsible for the observed differences in the dependent variable. Example: Let's say you're conducting a study on the effects of a new reading program on children's reading skills. If, over the course of the study, the children naturally become better readers due to normal developmental processes, it becomes challenging to attribute the improvements solely to the reading program. The maturation process becomes a threat to internal validity.

Selection/Assignment = Common Threats to Internal Validity (a confound is one already)

1.Selection/Assignment: self-selection or improper assignment to condition --Selection or assignment threats to internal validity refer to issues related to how participants are chosen or assigned to different groups in an experiment. These threats can compromise the internal validity of the study by introducing biases or confounding factors. Ex: --Running a study where I need volunteers to take time out of their day to complete a battery of measures... only certain people will take up that opportunity, which can bias the results. Overall: -not everyone can be in your study -One group may end up with more of a certain kind of person (e.g., people who already agree with the research topic)

Defining and Measuring Variables: Construct

A construct refers to an abstract concept or idea that is not directly observable or measurable. Constructs are mental constructs or theoretical concepts that researchers use to explain and understand various phenomena. Unlike concrete and tangible variables, constructs are inferred from observable behaviors, responses, or measurements. examples of Constructs: -include intelligence, motivation, personality traits, self-esteem, and attitudes. These are concepts that researchers use to describe and explain human behavior but cannot be directly measured in a concrete way. -a construct is an abstract concept or idea that researchers use to explain and understand phenomena in the social and behavioral sciences. -Operationalizing constructs allows researchers to measure and study these abstract concepts in a concrete and systematic manner. Constructs are foundational to the development of theories and contribute to the advancement of knowledge in various fields.

Sampling Research Participants: Population

A population is the entire set of people or products in which you are interested. -Unfortunately, it can be expensive and time-consuming to gather data for every individual in a population, which is why researchers typically gather data for a sample from a population and then generalize the findings from the sample to the larger population. -The 1,000 students represent the population, while the 100 randomly selected students represent the sample

Sampling Plans

A sampling plan is a systematic approach to selecting a subset of elements from a larger population to gather information and make inferences about the entire population. -Target Population: Define the population of interest. This is the group to which you would like to generalize your study findings. The target population should be clearly defined based on the research question or objective. -Sampling Frame: Identify the list or source from which the actual sample will be drawn. The sampling frame should ideally include all members of the target population, but practical considerations may lead to a subset of the population being used. -Sampling Method: Choose the specific method for selecting individuals or elements from the sampling frame. Common sampling methods include: -Random Sampling: Every member of the population has an equal chance of being selected. -Stratified Sampling: The population is divided into subgroups (strata), and samples are randomly selected from each stratum with an equal number in each stratum (e.g., split into 5 strata and take 20 from each) (equal numbers in all categories, regardless of the population proportions) -Proportionate Stratified Sampling (probability sampling): doesn't use equal numbers in each stratum; instead, it matches the proportion of the population in each stratum (for example, high-, middle-, and lower-income groups are sampled in proportion to the population, so no group is under- or over-represented) -Convenience Sampling: Elements are chosen based on their availability and accessibility (non-probability) -Quota Sampling (non-probability)
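A sketch of simple random sampling vs. equal-allocation stratified sampling using Python's `random` module. The 1,000-student frame and the class-year strata are invented for illustration:

```python
import random
from collections import Counter

random.seed(1)  # fixed seed just so the illustration is reproducible

# Hypothetical sampling frame: 1,000 students tagged with class year
population = [
    {"id": i, "year": random.choice(["freshman", "sophomore", "junior", "senior"])}
    for i in range(1000)
]

# Simple random sampling: every student has an equal chance
simple = random.sample(population, 100)

# Stratified sampling, equal allocation: 25 students from each year,
# regardless of how large each year actually is
strata = {}
for person in population:
    strata.setdefault(person["year"], []).append(person)
stratified = [p for members in strata.values() for p in random.sample(members, 25)]

counts = Counter(p["year"] for p in stratified)  # 25 per stratum by construction
```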

Between-subjects designs

An experimental design in which each participant serves in one and only one condition of the experiment. --Between subject designs : Independent groups = Select a sample of people Divide them into (equal sized) groups IV level 1 IV level 2 Measure DV

Between-subjects designs: Advantages and Disadvantages

Advantages of between-subjects designs: A.No practice/testing effects B.No differential carryover C.Less fatigue D.Can ask questions about individual differences Disadvantages: A.Needs lots of resources (more participants) B.Risk of differences between participants in each group that are not related to your IV (adds noise)

Potential Pitfalls & solutions

Avoid threats to internal validity: Eliminate confounding variables -Control all other aspects of the experiment **The only difference between groups should be the level of the IV **Avoid observing your two groups at different times of day, or different months of the year, etc. **Avoid different researchers being assigned to different conditions Patience: Take your time; don't rush decisions. Open-mindedness: Consider different perspectives. Seek Help: Don't hesitate to ask for assistance. Fact-Checking: Verify information before believing it. Humility: Stay open to learning; nobody knows everything.

EX: Are left-handed people more creative? No random assignment (you can't randomly assign handedness)

But in true experimental research: the only difference between the two conditions is their levels of the independent variable -Attempts to eliminate or minimize pre-existing differences between groups

Control groups

Control Groups: -A control group is used to measure the effect on the DV in the absence of the IV (or with a potential confound); allows us to look for one-group threats to validity Ex: -If we see high scores on the DV, can we say that lessons improved performance? -By how much? FROM THE SCREENSHOT: Pre-test for both groups -> treatment -> post-test

Controlled variables

Controlled variables: in a perfect scenario, all other variables are controlled or are the same between conditions. Curiosity: You're still curious, and this time you're investigating how the amount of sunlight affects plant growth. Experiment Setup: You've decided to grow plants under different amounts of sunlight, with one group getting a lot, another getting a medium amount, and the last getting very little. Controlled Variables: Now, you want to make sure that the only thing changing is the amount of sunlight. You don't want other stuff messing up your results. So, you keep everything else the same. The type of soil, the size of the pots, the water you give them, and even the type of plant—all of these things stay constant. These are your controlled variables.

Convergent/Criterion Validity

Convergent Validity: ---Convergent validity is about checking if two different measurements that are supposed to be measuring the same thing are giving similar results. It's like making sure different rulers that are supposed to measure the same length are actually showing close values. ---Example: Imagine you have two tests that claim to measure how creative a person is. If someone scores high on one test, they should also score high on the other if both tests are convergent and accurately measuring creativity. A. Convergent measure: a different operational definition of aggression (e.g., ask teachers for overall ratings of the aggressiveness of each child) B. The two measures should be positively correlated --Ex: measuring depression How do we support this? -Take a popular test that is widely used for depression. Ideally the two measures correlate: if someone scores low on my measure but high on the established clinical measure, then something is up. It needs to correlate. So, converging with another measure: (one is low and the other measure is also low) (one is high and the other measure is also high) --Both positively correlated; it has to line up. This is overall the way to show your measure captures what you're saying. Criterion Validity: Criterion validity is about checking if a measurement or test accurately predicts real-world outcomes. It's like making sure your ruler is so good that it can predict exactly how long something will be in reality. --Example: Let's say you have a test that's supposed to measure how well someone will perform in a basketball game. If the scores on your test are closely related to the number of points someone scores in actual basketball games, your test has good criterion validity.

Experimental Research

Experimental research is a type of research design that involves the manipulation of an independent variable (IV) to observe its effect on a dependent variable (DV), while controlling for extraneous variables. The primary goal of experimental research is to establish a cause-and-effect relationship between the independent and dependent variables

Threats to Validity: External Validity

External validity : -Usually, we want to be broad and global in our conclusions **Therefore, the question is: A. Does this specific time/place/group matter? B. Could there be something about this situation that influences our results? -You must state how that special property affects the results (if you have no reason to suspect that it does, then it probably does not matter) -Sometimes, we study a specific sub-group/time/place -> then external validity is not a concern ---Sometimes you want to study a specific group of people; then external validity isn't a concern in a study like that. --Ex: his best friend lost his dad on 9/11. He was in a study that included only 9/11 family members. That's a case where external validity isn't a concern **You're not trying to generalize; it's a specific group External validity: --Tells us how far we can generalize our results to different times, places, and people

Q3: Using the fidget spinner scenario discussed in class, give an example of how you would exercise experimental control in this design through random assignment.

Flip a coin for each participant = heads, you get a fidget spinner; tails, you don't. Which condition you end up in is just random.
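That coin-flip idea, sketched in Python. The participant labels and group names are made up; `random.random() < 0.5` plays the role of the coin:

```python
import random

random.seed(42)  # fixed seed just so the illustration is reproducible

participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical volunteers

# Random assignment: a coin flip decides each person's condition, so
# pre-existing differences spread evenly across groups on average
spinner_group, control_group = [], []
for person in participants:
    if random.random() < 0.5:   # heads: gets a fidget spinner
        spinner_group.append(person)
    else:                       # tails: no spinner (control)
        control_group.append(person)
```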

Controlled Difference Study design

Post-test design: Can test that groups were equivalent prior to manipulation (IV) Can measure how the IV changed performance within each level of the IV **This is a mixed design (more on those later...) --A "post-test design" refers to a research design in which measurements or observations are taken after an intervention or treatment has been applied.

Q2: For the research question "is someone a good leader?", measure their height. In the following scenario, would the type of measurement proposed be a reliable measurement? Why or why not?

Height is a reliable measurement, but it's not really a valid measurement.

Q1: Using the fidget spinner scenario discussed in class, give an example of how you would exercise experimental control in this design by holding constant.

Hold the suspected confound constant (equal/the same) across levels of the independent variable. So, on any given day, either everyone gets a fidget spinner or no one does.

Dependent Variable : Experimental Research

Measurement: the same dependent variable (DV) is measured for each condition --Curiosity: You're still curious, and this time you want to know if the amount of sunlight affects how fast plants grow. Setup: Just like before, you decide to test this by growing plants under different amounts of sunlight. Dependent Variable: Now, instead of changing something, you're going to watch and measure something that might change on its own. This is your dependent variable. It "depends" on what you're doing with the independent variable. In this case, your dependent variable is how fast the plants grow.

What type of reliability is being used? Reliability: can you count on getting the same measurements? = Multiple judges are evaluating the degree to which art portfolios meet certain standards.

Interrater reliability - multiple judges; correlate their ratings together

Question 1: For the research question "is someone lucky?", give 10 lottery tickets to a sample of people and count the total amount of $ won. In the following scenario, would the type of measurement proposed be a reliable measurement? Why or why not?

No, because you can't count on the lottery winnings repeating. You won't get the same measurements each time.

Q3: For the research question "is someone prepared for college?", measure scores on the ACT exam. In the following scenario, would the type of measurement proposed be a reliable measurement? Why or why not?

Not the most reliable. You can typically take it 1 to 3 times, and you won't get the same score every time. You won't necessarily get the same measurement.

Survey questions

Open-ended questions: Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer "multiracial" for the question on race rather than selecting from a restricted list. Examples: How do you feel about open science? How would you describe your personality?
Forced-choice questions: A forced-choice question requires the respondent to provide a specific answer. This type of question eliminates in-between options, forcing survey participants to be for or against a statement. Which one is your preferred color? Yellow / Red / Black / Gold
Likert scale: A Likert scale is a rating scale used to measure opinions, attitudes, or behaviors. It consists of a statement or a question, followed by a series of five or seven answer statements. Respondents choose the option that best corresponds with how they feel about the statement or question. The format of a typical five-level Likert question: Strongly disagree / Disagree / Neither agree nor disagree / Agree / Strongly agree
Semantic differential scale: The semantic differential scale is a type of survey rating scale used for psychological measurement. It helps you get to know your audience's attitudes, approaches, and perspectives. A researcher develops a survey allowing a respondent to express a judgment using a scale of five to seven points. Taste: Bland ———-|———-|———-|———-|———-|———-|———- Flavorful
Leading questions: When you think of leading questions, you likely think first of the language—the words and phrasing—of those questions. But the question type, topic, and order can be equally influential. "Does your employer or his representative resort to trickery in
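For analysis, Likert responses are usually coded as numbers. A minimal sketch, with an invented set of responses:

```python
# Map the five Likert levels onto 1-5 for scoring
scale = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Hypothetical responses to one statement from four respondents
responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [scale[r] for r in responses]
mean_score = sum(scores) / len(scores)  # 4.0 for this made-up sample
```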

Random error vs. systematic error

Random error : because it's random, it cancels out with repeated measures -e.g., weight on different trials -Random error: sometimes the reading is too high, sometimes too low. If you take a lot of measurements, it'll even out. -Taking more measurements gets you closer to having less error -But if you have a consistent (systematic) error, it'll keep being wrong no matter how many times you measure
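A quick simulation of the "it evens out" claim. The true weight, the noise size, and the number of readings are all invented for illustration:

```python
import random

random.seed(0)  # fixed seed just so the illustration is reproducible

true_weight = 150.0  # hypothetical true value in pounds

# Random error: each reading misses high or low by a random amount
readings = [true_weight + random.gauss(0, 2.0) for _ in range(1000)]

# Individual readings can be off by several pounds, but the random
# misses cancel out: the mean lands very close to 150
mean_reading = sum(readings) / len(readings)
```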

Representative sample:

Representative sample: A sample in which the characteristics of the individuals closely match the characteristics of the overall population. -Ideally, we want our sample to be like a "mini version" of our population. So, if the overall student population is composed of 50% girls and 50% boys, our sample would not be representative if it included 90% boys and only 10% girls. -Or, if the overall population is composed of equal parts freshmen, sophomores, juniors, and seniors, then our sample would not be representative if it only included freshmen.

Instead of random assignment, we could use matched-pairs designs

Select a sample of PAIRS Randomly divide into equal-sized groups IV level 1 or IV level 2 Measure DV

What type of reliability is being used? Reliability: can you count on getting the same measurements? = All items of a test that are intended to probe the same area of knowledge (e.g.,

Split-half reliability - take one test, cut it in half, and correlate the halves -Taking one exam and splitting it in half -(Same person takes the exam twice = test-retest reliability)

Systematic error

Systematic error: -consistent error; this is a problem -E.g., weight on different scales -Measurements will be wrong because of systematic error -Systematic error = wrong every single time --Ex: the speedometer telling me I'm going 60 when I'm going 70 -Is the error the same every time you measure it? -If it's different each time (lower or higher), it's random error, and we don't worry about that as much.
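A constant miscalibration, simulated with invented numbers, shows why averaging can't fix systematic error the way it fixes random error:

```python
import random

random.seed(0)  # fixed seed just so the illustration is reproducible

true_weight = 150.0
bias = 5.0  # hypothetical miscalibrated scale: reads 5 lbs heavy every time

# Each reading = truth + constant bias + random noise
readings = [true_weight + bias + random.gauss(0, 2.0) for _ in range(1000)]

# The random part cancels out, but the bias does not:
# the mean settles near 155, not 150
mean_reading = sum(readings) / len(readings)
```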

Temporal Precedence /Directionality

Temporal precedence (directionality) · The cause precedes (comes before) the effect · We can design experiments to ensure that this is the case Ex: drug -> lower anxiety -But surveys and other forms of correlational research do not accomplish this

Practice/Testing Effects = Common Threats to Internal Validity (a confound is one already)

Testing effects: change in performance due to practice or fatigue with the material Ex: -Measuring if my mindfulness manipulation helps with reading ability. I measure this by seeing if they read a passage of text faster after the intervention... however, the person has already read it once, so now they have had practice reading it (it's not what we did that made you better; you just practiced) Ex of multiple threats in one study: Pre-test: GRE (DV) 146 -> 3 mo. Kaplan Class (IV) -> Post-test: GRE (DV) 152 Maturation: learned things from college courses or life History: media coverage of a section of the test Testing effects: familiar with the formatting/timing of the GRE Overall: Taking it again, stress is lower because you know how it is already, and you have practice. The Kaplan class may be the reason for better scores, but there are several threats to the study. Definition: Practice/testing effects refer to changes in participants' performance on a test or measure due to their familiarity with the testing procedure. These changes might be unrelated to the experimental manipulation and can impact the internal validity of the study. Example: Let's say you're conducting a study to evaluate the effectiveness of a new memory training program. If participants take the same memory test multiple times, their performance might improve simply because they become more familiar with the test, not necessarily because of the memory training. The repeated exposure to the test becomes a practice/testing effect.

Sampling Research Participants: Sample

The sample is a smaller set, taken from that population. You don't need to eat the whole bag (the whole population) to know whether you like the chips; you only need to test a small sample. -For example, suppose we want to understand the movie preferences of students in a certain school that has 1,000 total students. Since it would take too long to survey every individual student, we might instead take a random sample of 100 students and ask them about their preferences. -The 1,000 students represent the population, while the 100 randomly selected students represent the sample. Once we collect data for the sample of 100 students, we can then generalize those findings to the overall population of 1,000 students, but only if our sample is representative of our population.

probability sampling

To maximize the chances that we obtain a representative sample from a population, probability sampling is the best option. There are several techniques for probability sampling, but they all involve an element of random selection. In probability sampling, also called random sampling, every member of the population of interest has an equal and known chance of being selected for the sample, regardless of whether they are convenient or motivated to volunteer.

Within-subjects designs: Advantages and Disadvantages

Within-subjects experiment: Advantages: A.No worries about differences between groups (because there is a single group) B.Statistically more powerful because individual differences are controlled (each person is their own control) Disadvantages: A.History effects -"History effects" refer to external events or influences that occur between the pretest and posttest measurements in a study. These external events can affect the outcomes of the study and create a potential threat to internal validity. History effects are particularly relevant in longitudinal or repeated-measures designs where measurements are taken at different points in time. B.Differential carryover -"Differential carryover" typically refers to a situation in research or experimental design where the effects or influences of a previous condition or treatment persist and affect subsequent conditions differently across groups or participants. This can be a potential issue in studies with repeated measures or crossover designs. C.Practice/testing effects -Researchers investigate how the act of practicing or testing oneself on learned material enhances long-term retention and learning. The findings have implications for educational practices and memory enhancement strategies. D.Maturation -If changes are observed in a study group over time, researchers need to consider whether these changes are a result of the intervention or treatment being studied or if they are due to natural maturation processes.

Within-subjects designs

an experimental design in which the same participants respond to all types of stimuli or experience all experimental conditions -Within Subject Designs : Select a sample of people IV level 1 -> measure DV IV level 2 -> measure DV
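A sketch of the scheduling logic described above. The participant labels are invented, and alternating the condition order across participants is simple counterbalancing, meant to offset carryover and practice effects:

```python
participants = [f"P{i}" for i in range(1, 9)]  # 8 hypothetical people
conditions = ["IV level 1", "IV level 2"]

# Within-subjects: every participant experiences BOTH conditions;
# alternate the order across participants (simple counterbalancing)
schedule = {}
for i, person in enumerate(participants):
    order = conditions if i % 2 == 0 else conditions[::-1]
    schedule[person] = list(order)
```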

What type of claim : People Who Multitask the Most Are the Worst at It

association

What type of claim : Shy People Are Better at Reading Facial Expressions

association

What type of claim : Music Lessons Enhance IQ

causal

What type of claim : Whiff of Rosemary Gives Your Brain a Boost

causal

Face Validity (Construct Validity):

Face Validity: -Refers to the degree to which an assessment or test subjectively appears to measure the variable or construct that it is supposed to measure ---Face validity is a type of validity assessment that focuses on whether a test or measurement appears, on the surface, to measure what it claims to measure. In simpler terms, it's about whether a test looks like it's doing what it's supposed to do. This is often based on a common-sense evaluation or "at face value." EX: screenshot (pictures) --If all the questions relate directly to the construct being measured = high face validity --Ex: if I'm doing a study on attitudes toward condoms and none of the questions have anything related to that, then poor face validity -Face validity falls under the umbrella of construct validity

What type of claim : 1 in 25 U. S. Teens Attempts Suicide

frequency

What type of claim : 44% of Americans Struggle to Stay Happy

frequency

Non-probability Sampling

is a method wherein each member of the population does not have an equal chance of being selected. When the researcher desires to choose members selectively, non-probability sampling is used. Both sampling techniques are frequently utilized; however, one may work better than the other depending on research needs.

Probability Sampling

is a method wherein each member of the population has the same probability of being a part of the sample.

Carryover effects

occur when participants' experience in one condition affects their behavior in another condition of a study --Threats to internal validity from carryover effects: --Past experience with one level of the IV affects performance later on --Another ex of the carryover effect: In studies, researchers worry about carryover effects because they want to be sure that any changes they see are because of what they're studying, not just because of what happened before. It's like making sure the improvements in the racing game are because you're a great racer, not just because you played the puzzle game first. So, carryover effects are a bit like a secret power—sometimes they can help, but researchers need to be careful to understand if they're influencing the results in a way that might not be related to what they're studying.

Why type of reliability is being used? Reliability : can you count getting the same measurements =A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first.

test-retest reliability :

Validity :

validity -The appropriateness of a conclusion or decision. See also construct validity, external validity, internal validity, statistical validity.

