Psych 201 Chapter 3 + 5


physiological measure

operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate.

self-report measure

operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview.

internal reliability

(also called internal consistency) means a study participant gives a consistent pattern of answers, no matter how the researcher has phrased the question: people who agree with the first item on a scale should also agree with the second item. One of the ways to check the reliability (consistency) of your measure.

Historical Influences and events outside the laboratory

(e.g., "Sh#@! happens") - any significant social or environmental event that occurs between the Time 1 and Time 2 measurements and affects the DV. Often these are situations and factors beyond our control that nevertheless might influence the results. Example: stress levels influence behaviour, and people are more stressed from the middle to the end of term, so run the test at the beginning.

Identify the "threats" to internal validity and give an example of each kind.

- Confounds - Poor experimental design - Experimenter/Participant effects

Directionality Problem

- Correlations present a chicken-or-egg problem to the researcher: we don't know which variable caused the other. Example: marriage (variable 1) and life expectancy (variable 2); however, rich, healthy men are more likely to get married. Limiting texting would improve academic performance, OR improving academic performance would limit one's texting.

Split-Half Correlation

- Divide the test into even and odd items and measure the correlation between the two halves. A high correlation means good internal reliability; a low correlation means poor internal reliability.
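The split-half idea above can be sketched in a few lines of Python. The responses, item counts, and participant data below are all hypothetical, invented only for illustration:

```python
# Sketch: split-half reliability for a hypothetical 6-item Likert scale.
# Each row is one participant's answers (1-5); the items are split into
# odd- and even-numbered halves, and the two half scores are correlated.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

responses = [  # hypothetical participants x items
    [5, 4, 5, 5, 4, 5],
    [2, 1, 2, 2, 1, 1],
    [3, 3, 4, 3, 3, 4],
    [4, 5, 4, 4, 5, 5],
    [1, 2, 1, 1, 2, 2],
]

odd_half = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6

r = pearson_r(odd_half, even_half)
print(round(r, 2))  # a high r suggests good internal reliability
```

A high correlation between the two halves (here close to 1.0) is what good internal reliability looks like.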

Two kinds of independent variables

- Situational variable: a characteristic that differs across environments, stimuli, or conditions - Subject variable: a personal characteristic that differs across individuals (gender, income level, major)

Confounding Variable Problem

- There might be a third, hidden variable that influences both variables.

Faulty Instrumentation and lack of control over testing environment

- changes that occur in the measuring instrument or testing environment during data collection. Make sure the room is quiet and participants understand the task, and watch for changes in the measuring instrument itself over the course of data collection.

Naturalistic Observation

- in which researchers passively observe behaviour in a natural setting.

Participant mortality and attrition (dropout) rate

- occurs when a participant fails to complete a study. This is a problem when the participants who didn't complete the study differ from those who did: the data are skewed because people drop out for particular reasons.

ratio scale

- a scale of measurement that applies when the numerals of a quantitative variable have equal intervals and the value 0 truly means "nothing." On a test, a researcher might measure how many items people answer correctly; some people might answer 0 items correctly, and their score of 0 represents truly "nothing correct" (0 answers correct). Examples: response time, accuracy.

Time sampling

- recording observations at different times of the day. Example: Recording the number of charity donations in the morning, afternoon and evening.

Accuracy

- represents the degree to which the measure yields results that agree with a known standard. How "true" is your measure?

Structured Observation

- researcher fully or partly configures the setting in which behaviour will be observed

What are the advantages and disadvantages of structured observations?

-Advantage of structured observation: greater control of the environment, events, and variables. -Disadvantage of structured observation: the situation is only an analogue of a real-world situation.

Analytic surveys/ questions

-An attempt to identify the relationship between two or more variables. • eating habits and attitudes about the environment • feelings of loneliness and number of friends in children with autism • mathematical ability and musical training

Situation sampling

-recording observations at different locations. Example: Recording the number of donations in front of a church, hotel, and homeless shelter.

What is the difference between a "between groups" study and a "within groups" study? If you were designing an experiment examining the effects of marijuana on the ability to do a fine motor task which task would you choose and why?

1.) Between-groups design: participants are divided into separate groups, each exposed to only one level of the independent variable (IV). Example: one group drinks alcohol and another serves as a control, then both take a memory test. 2.) Within-groups design: participants are exposed to all levels of the IV. Example: all participants take the test after alcohol and again, a week later, sober (or in the reverse order, i.e., counterbalancing). For studying marijuana and a fine motor task, a within-groups design would let you see how well each person performs the task sober and then how well they perform it stoned, controlling for individual differences in motor skill.

What is the criteria for causal claims?

1.) Covariance - while correlation does not imply causation, causation does imply correlation. Covariance means that when two factors have a relationship to each other and one changes, there should be a corresponding change in the other, either positive or negative. 2.) Temporal precedence - one variable must precede the other in time. That is, the cause must come before the effect. 3.) Internal validity - alternative explanations must be ruled out.

scatterplot

A graph in which one variable is plotted on the y-axis and the other variable is plotted on the x-axis; each dot represents one participant in the study, measured on the two variables.

Reliability and Validity

A measure can be reliable without being valid. Example: a watch that is consistently five minutes slow, or a scale that is consistently 5 pounds underweight. An unreliable, inconsistent measure cannot be valid (i.e., truthful). Example: a watch that runs five minutes fast, then five minutes slow; a scale that reads 5 pounds underweight, then 2 pounds overweight. Although the consistently underweighing scale is reliable (it always reads 5 pounds under), the reading is not your actual weight, so it isn't valid. A reliable measure is not necessarily valid, but a valid measure MUST be reliable.

one-group, pretest/posttest design

A researcher recruits one group of participants, measures them on a pretest, exposes them to a treatment, intervention, or change, and then measures them on a posttest.

testing threat

A specific kind of order effect, refers to a change in the participants as a result of taking a test (dependent measure) more than once. People might have become more practiced at taking the test, leading to improved scores, or they may become fatigued or bored, which could lead to worse scores over time.

correlational study.

A study that measures two or more variables

Practice effects of testing

All things being equal, we are better at performing a task the second time. Why? Performance is better the second time because participants are "learning to learn." Example: on a second run of the Stroop test, people get better at saying the ink colour rather than reading the word.

instrumentation threat

Also called instrument decay, occurs when a measuring instrument changes over time.

operational definitions of variables

Also known as operational variables, or operationalizations. To operationalize means to turn a concept of interest into a measured or manipulated variable.

variable

As the word implies, a variable is something that varies, so it must have at least two levels, or values. "Nearly 60% of teens text while driving." Here, texting while driving is the variable, and its levels are "whether a person does text while driving or does not text while driving."

Attrition Threat

Attrition can happen when a pretest and posttest are administered on separate days, and some participants are not available on the second day.

Maturation, fatigue, discomfort

Changes in performance due to participant's natural development (especially true with children) Changes in performance due to time-related factors during the experiment (state of alertness, physical discomfort).

Confounding effects

Confounding effects - when an unintentional variable is systematically correlated with your independent variable and therefore, changes the difference between groups. Leads to erroneous results and interpretations.

Cardinal Rule

Correlation does not imply causation!

Define validity

The researcher's conclusion is true in that it reflects the actual state of the real world. Psychological validity is different from logical validity. Validity relates to the truthfulness of the dependent measure. A test can be reliable without being valid; however, a test cannot be valid unless it is reliable.

Association claim

Describes a relationship or correlation between two measured variables: how they act in relation to one another ("these two things seem to be related"). Examples: • attractiveness and intelligence • height and pay scale • age and driving accidents • video game playing and grades. Music lessons are linked to IQ. Family meals are associated with teen eating disorders. Exercising is related to life span. Attendance may predict course performance. Example: the relationship between heat and ice cream consumption.

what is an experimenter effect? what is a participant effect? how would you control for experimenter and participant expectations

Experimenter effect = the preconceived expectations of the experimenter can shape their treatment of the participant and influence the participant's behaviour and the experimenter's observations. Participant effects = due to demand characteristics, for example wanting to "please" the researcher. - Single-blind experiments: researchers are "blind" to experimental conditions. Example: in an observation study, raters are unaware of which children are in the treatment condition and which are in the control condition. - Double-blind experiments: neither the participants nor the experimenters know which group is receiving the treatment and which is the control.

What is the difference between construct validity, internal validity and external validity?

Four types of validity: 1. Internal validity - ensuring that your independent variable is the ONLY thing causing the change in your dependent variable (A causes B, no confounds). 2. Construct validity - ensuring that your independent and dependent variables are measuring what you think they are measuring. 3. External validity - ensuring that your results can be generalized to other situations, subject groups, settings, and treatments; relevant to the "real" world. 4. Statistical validity - addresses the strength of an effect and its statistical significance (the probability that the results could have been obtained by chance).

counterbalancing

Half the participants receive Stroop condition first and the other half receive the control condition first. Counterbalancing is a type of experimental design in which all possible orders of presenting the variables are included. For example, if you have two groups of participants (group 1 and group 2) and two levels of an independent variable (level 1 and level 2), you would present one possible order (group 1 gets level 1 while group 2 gets level 2) first and then present the opposite order (group 1 gets level 2 while group 2 gets level 1). This way you can measure the effects in all possible situations.
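The ordering logic above can be sketched programmatically. The condition labels and participant IDs below are hypothetical, chosen only to mirror the Stroop example:

```python
# Sketch: full counterbalancing of presentation orders (hypothetical labels).
from itertools import permutations

conditions = ["stroop", "control"]
orders = list(permutations(conditions))  # every possible presentation order

# Assign participants to orders in rotation so each order is used equally often:
# half get stroop first, half get control first.
participants = ["P1", "P2", "P3", "P4"]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
print(assignment)
```

With two conditions there are only two orders; with three conditions, full counterbalancing would already require six (3!) orders, which is why larger designs often use partial counterbalancing instead.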

generalizability

How did the researchers choose the study's participants, and how well do those participants represent the intended population?

Selection of the observation behavior

Identifying a "quantifiable" target behavior (dependent, measured variable) that is relevant to your hypothesis (construct validity). aggression -> hitting behaviors on a playground alcohol abuse -> number of beers consumed at a party hygiene -> number of people who wash their hands after using the facilities

Non-random selection of participants and assignment to control and experimental groups

If differences between groups existed before the experiment, differences found after the experiment may not be due to the experimental treatment (IV). To prevent a selection threat, participants should be randomly assigned to conditions; otherwise the results can't be generalized.

Internal Validity

In a relationship between one variable (A) and another (B), the extent to which A, rather than some other variable (C), is responsible for the effect on B.

Define "independent variable"

It is the cause (what is manipulated). The experimenter directly manipulates the stimulus, experimental condition, aspect of the environment, or the participant to determine its influence on behavior.

Define "dependent variable" (DV). List the different types of dependent variables. What are the four scales of a dependent variable?

It is the effect. Changes in the dependent variable (DV) should be caused by manipulation of the independent variable (IV); the DV (measured variable) is how the effect of the IV is evaluated.

Descriptive: surveys/questionnaires

No attempt to identify relevant variables that determine behaviors, just a description of a particular behavior.

Interobserver reliability measure:

(Number of times raters actually agree ÷ number of opportunities to agree) × 100
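The formula above is straightforward to compute. The two raters' judgments below are hypothetical observation codes invented for illustration:

```python
# Sketch: percent-agreement interobserver reliability (hypothetical ratings).
rater_a = ["hit", "no", "hit", "no", "hit", "hit"]
rater_b = ["hit", "no", "no",  "no", "hit", "hit"]

# Count the trials on which the two observers coded the same behavior.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
reliability = agreements / len(rater_a) * 100  # agreements / opportunities x 100
print(round(reliability, 2))  # 5 agreements out of 6 opportunities -> 83.33
```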

within groups pros and cons

Pro: controls for between-participant variability (in a between-groups design, differences between groups may just reflect differences between the individuals randomly assigned to them). Pro: fewer participants needed. Con: must control for practice effects.

between groups pros and cons

Pro: Eliminates practice effects - participants are only exposed to one experimental condition. Con: You need many more participants. Con: You can't control for individual differences.

Define reliability and why is it important in scientific experiment? What are the three types of reliability and give an example of each kind. For observation studies, why is inter-rater reliability important? How do you compute the inter-rater reliability of two observers?

Reliability refers to the consistency of the study (replication) or, in this case, the consistency of the measure. Three types: 1.) Test-retest reliability measures the same person at two points in time (a fairly short interval), e.g., if a person did badly on a memory test the first time, they are likely to do badly the second time. 2.) Internal reliability: people respond consistently across items within a test, e.g., people who agree with the first item on the scale should also agree with the second item. 3.) Inter-rater reliability is based on comparison of different raters or observers, e.g., experimenters watch the same child on a playground from different views. Cohen's kappa measures the extent to which two raters place participants into the same categories (needed when the data are categorical rather than quantitative); the overlap tells you the consistency between observers.
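The Cohen's kappa mentioned above corrects percent agreement for chance. A minimal sketch, using hypothetical categorical codes ("agg" for aggressive, "play" for playing) invented for illustration:

```python
# Sketch: Cohen's kappa for two raters' categorical judgments (hypothetical data).
rater_1 = ["agg", "agg", "play", "play", "agg", "play", "play", "agg"]
rater_2 = ["agg", "play", "play", "play", "agg", "play", "agg", "agg"]

n = len(rater_1)
# Observed agreement: proportion of trials coded identically.
observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

# Chance agreement: summed products of each rater's category proportions.
categories = set(rater_1) | set(rater_2)
expected = sum((rater_1.count(c) / n) * (rater_2.count(c) / n) for c in categories)

kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # 0.75 observed vs 0.5 chance agreement -> kappa = 0.5
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement for categorical ratings.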

Construct validity

The degree to which a test measures what it claims, or purports, to be measuring

Frequency Claims

The focus is only on one variable and the variable is always measured and never manipulated (example: depression, income, IQ). - no attempt to link EX. 44% of Americans Struggle to Stay Happy.

convergent validity

The measure should correlate more strongly with other measures of the same construct. Example: people who scored as depressed on the BDI measure of depression also scored as depressed on the CES-D measure of depression.

causal claim:

The strong verb "enhance" indicates that the music lessons actually cause the improvement in IQ. Causal claims use language to suggest that one variable causes the other: verbs such as cause, enhance, and curb. Examples: Music lessons enhance IQ. Family meals prevent teen eating disorders. Exercising increases life span. Attending class improves course performance. A causal claim establishes a cause-and-effect relationship.

temporal precedence

To say that one variable has temporal precedence means that it comes first in time, before the other variable. To make the claim "Music lessons enhance IQ," a study must show that the music lessons came first and the gains in IQ came later. One variable must precede the other in time; that is, the cause must come before the effect.

What are the advantages and disadvantages of naturalistic observations?

USEFUL: • as exploratory research, defining the problem • when the questions are specific to a group of people, settings, or events that can't be recreated in the laboratory • as a way of evaluating the generalizability of findings from the lab to natural environments (external validity is not a concern). DISADVANTAGES: 1.) Replicability is often difficult because there are few constraints on participants' behavior. 2.) Poor generalizability because participant selection is not controlled. 3.) Causal inferences cannot be directly made because there are no independent variables.

null effect

What if the independent variable did not make a difference in the dependent variable; there is no significant covariance between the two?

maturation threat

a change in behavior that emerges more or less spontaneously over time. People adapt to strange environments; children get better at walking and talking; plants grow taller—but not because of any outside intervention. It just happens.

content validity

a measure must capture all parts of a defined construct. For example, consider this conceptual definition of intelligence, containing distinct elements such as the ability to "reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience": a measure with content validity should include questions or items to assess each of these components.

Type II error

a study might mistakenly conclude from a sample that there is no association between two variables

Type I error

a study might mistakenly conclude, based on the results from a sample of people, that there is an association between two variables

conceptual variables (construct)

abstract concepts such as "shyness" or "intelligence."

ceiling effect

all the scores are squeezed together at the high end.

a floor effect,

all the scores cluster at the low end. For example, if Dr. Williams really did manipulate his independent variable by giving people $0.00, $0.25, or $1.00, that would be a floor effect, because these three amounts are all low; that is, they are squeezed close to a floor of $0.00.

Statistical validity,

also called statistical conclusion validity, is the extent to which a study's statistical conclusions are accurate and reasonable. How well do the numbers support the claim? Addresses the strength of an effect and its statistical significance (the probability that the results could have been obtained by chance if there really is no effect). Also addresses the extent to which a study minimizes the probabilities of two errors: concluding that there is an effect when in fact there is none (a "false positive," or Type I error), or concluding that there is no effect when in fact there is one (a "miss," or Type II error) Some associations—such as the association between height and shoe size—are quite strong.

measurement error

any factor that can inflate or deflate a person's true score on a dependent measure. For example, a man who is 160 centimeters tall might be measured at 161 cm because of the angle of vision of the person reading the meter stick.

interval scale of measurement

applies to the numerals of a quantitative variable in which subsequent numerals represent equal distances but there is no true zero. Examples: Diener's scale items (scored from 1 = strongly disagree to 7 = strongly agree), degree of agreement on any 1-7 scale, shoe size.

Demand characteristics

are a problem when participants guess what the study is supposed to be about and change their behaviour in the expected direction.

quantitative variables

are coded with meaningful numbers. Height and weight are quantitative because they are measured in numbers, such as 150 centimeters or 45 kilograms.

association claim

argues that one level of a variable is likely to be associated with a particular level of another variable. Variables that are associated are sometimes said to correlate. Example: "Shy People Are Better at Reading Facial Expressions." The variables are measured, not manipulated.

ordinal scale of measurement

assigns numbers to objects where the numbers also have meaning, e.g., 1st, 2nd, 3rd indicates order, but the scale does not have equal intervals between values.

categorical variables/ nominal scale

assign numbers to objects where different numbers mean different objects. The numbers have no real meaning other than differentiating the object. ex. 1=males, 2=females

interrater reliability

consistent scores are obtained no matter who measures or observes; one of the ways to check the reliability (consistency) of your measure. To improve it, raters discuss the rating scale and attempt to come up with rules for deciding when they would give a "3" or a "4" on a specific item.

frequency claim.

describes a particular rate or degree of a single variable, merely giving a percentage, e.g., "44% of Americans Struggle to Stay Happy." Frequency claims focus on only one variable, such as depression, happiness, or rate of exercise.

What is external validity? Please define and explain why testing Psych 100 students may pose a problem to a study's external validity. What is the trade-off between the need for internal validity and the generalization of external validity?

External validity means ensuring that your results can be generalized to other situations, subject groups, settings, and treatments; relevant to the "real" world. Using Psych 100 students may not generalize to the general population at all, since they tend to be young, affluent, and well educated. Researchers who want to make a causal claim emphasize internal validity: making sure the independent variable (and nothing else) produced the change in the dependent variable can be more important, so they may accept some loss of external validity and assume the effect would generalize to others.

Random Error

errors to accuracy - random fluctuations in the measuring situation that cause the obtained scores to deviate from a true score.

Systematic Error or Bias

errors to accuracy - the same (or constant) amount of error occurs with each measurement. (Example: a clock that runs ten minutes fast, or a scale that overweighs by 100 grams.)

margin of error

an estimate, a statistical figure based on sample size, that indicates for a poll where the true value in the population probably lies. In other words, the 44% value is an estimate, and the true value is probably between 41% and 47%.

Criterion validity

examines whether a measure correlates with key outcomes and behaviors; it evaluates whether the measure under consideration is related to a concrete outcome, such as a behavior, that it should be related to according to the theory being tested. You could give a sales aptitude test to each current sales representative and then measure their sales figures (a measure of their selling behavior) some time later, say 3 months. You would then compute the correlation between the new sales aptitude measure and the relevant behavioral outcome.

situation noise

external distractions of any kind; a third factor that could cause variability within groups and obscure true group differences.

negative association (or negative correlation)

high goes with low and low goes with high. In other words, high rates of multitasking go with a low ability to multitask, and low rates of multitasking go with a high ability to multitask.

positive association, or a positive correlation

high scores on shyness go with a high ability to read facial expressions, and low scores on shyness go with a lower ability to read facial expressions.

Reliability

how consistent the results of a measure are. Indicates how strongly two measurements (e.g., test-retest) are related. Ranges from 0.0 to 1.0. A high correlation (over 0.80) implies that the observed scores reflect the true scores to a relatively high degree.

Covariance

one variable usually cannot be said to cause another variable unless the two are related - the two variables are associated. While correlation does not imply causation, causation does imply correlation.

external validity

how well the results of a study generalize to, or represent, people or contexts besides those in the study itself. The extent to which the results of a study generalize to some larger population (e.g., whether the results from this sample of children apply to all U.S. schoolchildren), as well as to other times or situations (e.g., whether the results based on this type of music apply to other types of music)

double-blind study

in which neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group

a known-groups paradigm

in which researchers see whether scores on the measure can discriminate among a set of groups whose behavior is already well understood. For example, to validate the use of salivary cortisol as a measure of stress, a researcher could compare the salivary cortisol levels in two groups of people: those who are about to give a speech in front of a classroom, and those who are in the audience. Public speaking is recognized as being a stressful situation for many. Therefore, if salivary cortisol is a valid measure of stress, people in the speech group should have higher levels of cortisol than those in the audience group.

manipulation check

is a separate dependent variable that experimenters include in a study, just to make sure the manipulation worked.

Face validity

is a subjective judgment: If it looks as if it should be a good measure, it has face validity.

what is a placebo and a placebo effect? why does a placebo pose a threat to the internal validity of a study? How would you control for a placebo effect?

A placebo is a substance or procedure that has no inherent power to produce an effect that is sought or expected (example: a sugar pill). The placebo effect is a genuine psychological or physiological effect, in a human or another animal, that is attributable to receiving a substance or undergoing a procedure but is not due to the inherent powers of that substance or procedure; in that case the independent variable is not what is changing the dependent variable. To control for a placebo effect, researchers can perform a double-blind study in which one group takes a fake pill; if the experimental group shows a greater change than the placebo group, you know the treatment itself works.

manipulated variable

is a variable a researcher controls, usually by assigning participants to the different levels of that variable. For example, a researcher might give some participants 10 milligrams of a medication, other participants 20 mg, and still others 30 mg. (Other examples: type of music, type of beer glass.)

measured variable

is one whose levels are simply observed and recorded. Some variables, such as height, IQ, and blood pressure, can only be measured, not manipulated.

constant

is something that could potentially vary but that has only one level in the study in question. ex. Every father in the study would presumably be male.

claim

is the argument someone is trying to make.

Observation

is the empirical process of using one's senses to recognize and record factual events. Behavior is systematically watched.

discriminant validity

A measure should correlate less strongly with measures of different constructs. Researchers want to be sure their measure is not accidentally capturing a similar but different construct: does the BDI measure depression, or perceived health? Although mental health and physical health probably do overlap somewhat, we would not expect the BDI to be strongly correlated with a measure of perceived physical health; if it is not, the BDI has discriminant validity (also called divergent validity).

a double-blind placebo control study

neither the people treating the patients nor the patients themselves know whether they are in the real group or the placebo group.

placebo effect

occurs when people receive a treatment and really improve— but only because the recipients believe they are receiving a valid treatment.

Observer bias

occurs when researchers' expectations influence their interpretation of the results. For example, Dr. Yuki might be a biased observer of her patients' depression: she expects to see her patients improve, whether they do or do not.

selection-attrition threat

occurs when only one of the experimental groups experiences attrition.

correlation coefficient

r indicates how close the dots on a scatterplot are to a line drawn through them. When the relationship is strong, r is close to either 1.0 or -1.0; when the relationship is weak, r is closer to zero. An r of 1.0 represents the strongest possible positive relationship, and an r of -1.0 represents the strongest possible negative relationship. Also known as r, R, or Pearson's r: a measure of the strength and direction of the linear relationship between two variables.

regression threat

refers to a statistical concept called regression to the mean: when a performance is extreme at Time 1, the next time that performance is measured (Time 2), it is likely to be less extreme, that is, closer to the mean.

Validity

refers to the appropriateness of a conclusion or decision; in general, a valid claim is reasonable, accurate, and justifiable. Validity is concerned with how well a measure is associated with something else, such as a key behavior or another indicator of intelligence. The researcher's conclusion is true in that it reflects the actual state of the real world. Psychological validity is different from logical validity.

masked design, or blind design

In some studies, participants know which group they are in, but the observers do not.

observational measure

sometimes called a behavioral measure, operationalizes a variable by recording observable behaviors or physical traces of behaviors. For example, a researcher could operationalize happiness by observing how many times a person smiles.

Regression to the mean

the statistical probability that extreme high and low performers on a measure will perform closer to average the next time; regression due to random error. In this example, students who scored extremely low on Quiz 1 will be more likely to do better on Quiz 2; similarly, students who scored extremely high will be more likely to do less well.
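The quiz example above can be demonstrated with a small simulation. Everything here is hypothetical: each observed score is a true ability plus random error, so students who scored at an extreme on Quiz 1 were partly unlucky, and their independent Quiz 2 error pulls them back toward the mean:

```python
# Sketch: regression to the mean via simulation (hypothetical quiz scores).
import random

random.seed(42)
true_ability = 70                               # everyone's true score is 70
quiz1 = [true_ability + random.gauss(0, 10) for _ in range(200)]
quiz2 = [true_ability + random.gauss(0, 10) for _ in range(200)]

# Select students who were extreme low performers on Quiz 1...
low_ids = [i for i, s in enumerate(quiz1) if s < 60]
q1_mean = sum(quiz1[i] for i in low_ids) / len(low_ids)
q2_mean = sum(quiz2[i] for i in low_ids) / len(low_ids)

# ...and compare their averages: Quiz 2 moves back toward the true mean of 70.
print(round(q1_mean, 1), round(q2_mean, 1))
```

This is why a one-group pretest/posttest design is vulnerable: students selected for an extreme pretest score will look "improved" at posttest even with no treatment at all.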

test-retest reliability

the researcher gets consistent scores every time he or she uses the measure; one of the ways to check the reliability (consistency) of your measure.

How would you define a variable?

A variable is anything that can vary, i.e., change or be changed, such as memory, attention, or time taken to perform a task; it is a phenomenon of interest and a way to study it. A "variable" is any factor or attribute that can take two or more values or levels. We use variables to quantify a psychological construct, and they can be categories (Sex: male and female; Major: psych, history, education; Income: low, medium, high). Examples: • The number of times a child disrupts a class in a day can vary from one child to another or from one class to another. • Factors such as depression, GPA, and sleep can vary not only between people but within the same person over time. • All the ways that people can vary from one another are "variables" that can be investigated.

validity concerns

whether the operationalization is measuring what it is supposed to measure.

history threats

which result from a "historical" or external event that affects most members of the treatment group at the same time as the treatment, making it unclear whether the change in the experimental group is caused by the treatment received or by the historical factor. The external event must affect everyone or almost everyone in the group; that is, an outside event or factor systematically affects people in the study, but only those at one level of the independent variable.

