Research Methods in Psychology Chapter Five Study Set

Define observational measure

operationalizes a variable by recording observable behaviors or physical traces of behaviors. For example, a researcher could operationalize happiness by observing how many times a person smiles.

What is Cronbach's alpha?

A correlation-based statistic that measures a scale's internal reliability; also called coefficient alpha. It reflects how well the items correlate with one another: does item one correlate with item two? Does item two correlate with item three? How well? And so on.
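
A minimal sketch of how coefficient alpha can be computed from raw item scores using NumPy; the participant-by-item responses below are invented purely for illustration.

import numpy as np

# Invented responses: 5 participants x 4 scale items
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of participants' total scores

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")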

Define strength in terms of correlation and scatterplots.

A description of an association indicating how closely the data points in a scatterplot cluster along a line of best fit drawn through them.

Define interval scale

A measurement scale that applies when the numerals of a quantitative variable meet two conditions: (1) the numerals represent equal intervals between levels, and (2) there is no 'true zero.' Because they lack a true zero, interval scales do not allow a researcher to say things like 'twice as hot' or 'three times happier.'

Define Ratio Scale

A measurement that applies when the numerals of a quantitative variable have equal intervals and when the value of 0 truly means 'nothing'. A researcher may say that 'Miguel answered twice as many problems as Diego' in this instance.

Define correlation coefficient

A single number, ranging from -1.0 to 1.0, that indicates the strength and direction of an association between two variables. It indicates how close the dots on a scatterplot are to a line drawn through them.
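
A short sketch of computing Pearson's r in Python with NumPy; the two variables and their values are made up only to show the calculation.

import numpy as np

# Made-up operationalizations: hours of sleep and number of observed smiles
sleep_hours = np.array([5, 6, 7, 8, 9, 6, 7])
smiles = np.array([3, 5, 6, 9, 10, 4, 7])

# Pearson's r ranges from -1.0 to 1.0; the sign gives direction, the magnitude gives strength
r = np.corrcoef(sleep_hours, smiles)[0, 1]
print(f"r = {r:.2f}")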

Define Internal Reliability

A study participant gives a consistent pattern of answers, no matter how the researcher has phrased the question.

Define categorical variables

A variable whose levels are categories (e.g. male/female). Also known as nominal variables

Define quantitative variables

A variable whose values can be recorded as meaningful numbers. Examples include height and weight.

Discriminant validity

An empirical test of the extent to which a measure does not associate strongly with measures of other, theoretically different constructs. If a measure correlates only weakly with measures of different constructs, it shows discriminant validity.

Define criterion validity

Evaluates whether the measure under consideration is related to a concrete outcome, such as a behavior, that it should be related to according to the theory being tested. EX: Your company wants to predict how well job applicants would do as salespeople. It has been using IQ tests to predict sales aptitude, but it wants a better measure, so it hires a consultant to develop a paper-and-pencil scale of sales aptitude. Do the test scores correlate with success in selling? One would need to collect unbiased observations about whether the new sales aptitude test correlates with success in selling.

Define Reliability

Refers to how consistent the results of a measure are.

How are scatterplots used for establishing reliability?

Scatterplots can show interrater agreement or disagreement. When two researchers are rating the same data (an observation, for example), they will usually draw similar conclusions, and their scatterplot will show the points close to the sloping line that indicates perfect agreement. If the researchers come to different conclusions, whether because the coders were not trained well enough or because the operational definition of the variable was not clear enough, the plotted dots will be scattered widely around the line drawn through them.
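
A sketch, assuming matplotlib is available, of plotting two coders' counts against the line of perfect agreement; the smile counts are invented for illustration.

import numpy as np
import matplotlib.pyplot as plt

# Invented smile counts from two independent coders for the same 8 children
coder_a = np.array([3, 7, 5, 9, 2, 6, 4, 8])
coder_b = np.array([4, 7, 5, 10, 2, 5, 4, 8])

plt.scatter(coder_a, coder_b)
# Dots near this line of perfect agreement indicate high interrater reliability
plt.plot([0, 11], [0, 11], linestyle="--")
plt.xlabel("Coder A smile count")
plt.ylabel("Coder B smile count")
plt.title("Interrater agreement")
plt.show()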

Describe what would be considered strong or weak in terms of correlation coefficients

The closer the coefficient is to 1.0 or -1.0, the stronger the correlation. The closer it is to 0.0, the weaker the correlation; a coefficient of exactly 0.0 indicates no correlation at all.

What information can you learn from a scatterplot that you cannot learn from the correlation coefficient? Options: the direction of the relationship; the values for each pair of measurements; the strength of the relationship; whether the relationship is statistically significant

The values for each pair of measurements. The correlation coefficient describes only the direction and strength of the association between the two variables, regardless of the scale on which the variables are measured; a scatterplot also shows the individual paired values.

Define Validity

Whether the operationalization is measuring what it is supposed to measure.

Define physiological measure

operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate. These measures usually require equipment such as EMG or fMRI.

Define Self-report measure

operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview

Georgina graduated as valedictorian of her high school class because of her class ranking. What type of scale is used for the quantitative variable of class ranking? Options: nominal scale; interval scale; ratio scale; ordinal scale

ordinal scale

What are three ways in which to operationalize variables?

self-report measures, observational measures, and physiological measures

Define test-retest reliability

The researcher should get consistent scores every time he or she uses the measure. Ex: IQ tests. Take the test and get a score; take the test again and your score may be slightly higher just from practice, but intelligence is not something that changes drastically.
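
A minimal sketch, assuming SciPy is available, of correlating time-1 and time-2 scores on the same measure; the IQ scores below are hypothetical.

from scipy.stats import pearsonr

# Hypothetical IQ scores for the same six people tested twice, a month apart
time1 = [98, 110, 105, 120, 95, 130]
time2 = [100, 112, 103, 122, 97, 128]

# A high positive r indicates good test-retest reliability
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")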

Define slope direction

The upward, downward, or neutral slope of the cluster of data points in a scatter plot

Define Known-groups paradigm

A method for establishing criterion validity, in which a researcher tests two or more groups, who are known to differ on the variable of interest, to ensure that they score differently on a measure of that variable. Researchers see whether scores on the measure can discriminate among set groups whose behavior is already well understood.
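
A small sketch of a known-groups comparison, assuming SciPy is available; the groups and scale scores are invented for illustration only.

from scipy.stats import ttest_ind

# Invented scores on a new depression measure for two groups known to differ
diagnosed = [22, 25, 19, 27, 24, 21]
not_diagnosed = [10, 8, 12, 9, 11, 7]

# If the measure has criterion validity, the known groups should score differently
t, p = ttest_ind(diagnosed, not_diagnosed)
print(f"t = {t:.2f}, p = {p:.4f}")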

Which statistic is used to represent the internal reliability of multiple-item self-report scales? Options: r, the correlation coefficient; kappa; Cronbach's alpha; s, the standard deviation

Cronbach's alpha. Although r can be used to evaluate interrater reliability when the observers are rating a quantitative variable, a more appropriate statistic, called kappa, is used when the observers are rating a sample on a categorical variable. Kappa measures the extent to which two raters place participants into the same categories; as with r, a kappa close to 1.0 means the two raters agreed. Cronbach's alpha is a correlation-based statistic that measures the internal reliability of multiple-item scales.
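
A sketch of computing Cohen's kappa for two raters coding the same participants into categories, assuming scikit-learn is available; the category codes are invented.

from sklearn.metrics import cohen_kappa_score

# Invented categorical codes assigned by two raters to the same 10 participants
rater1 = ["happy", "sad", "happy", "neutral", "happy", "sad", "neutral", "happy", "sad", "happy"]
rater2 = ["happy", "sad", "happy", "neutral", "sad", "sad", "neutral", "happy", "sad", "happy"]

# A kappa close to 1.0 means the raters placed participants into the same categories
kappa = cohen_kappa_score(rater1, rater2)
print(f"kappa = {kappa:.2f}")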

Convergent validity

An empirical test of the extent to which a measure is associated with other measures of a theoretically similar construct. If the measure correlates strongly with other measures of the same construct, it shows convergent validity.

Define Ordinal Scale

An ordinal scale of measurement applies when the numerals of a quantitative variable represent a ranked order; the intervals between ranks may be unequal. Example: a bookstore website's best-seller list. The #1 book sold more than the #2 book, which sold more than the #3 book, but the top two might be only 10 copies apart in sales while the #2 and #3 books are 150,000 copies apart.

What are the three types of reliability?

test-retest reliability, interrater reliability, and internal reliability

Define content validity

The extent to which a measure captures all parts of a defined construct. EX: A conceptual definition of intelligence might include distinct elements such as the ability to 'reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.' To have adequate content validity, any operationalization of intelligence should include questions or items to assess each of these components.

Define face validity

The extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question. Face validity is a subjective judgement: If it looks as if it should be a good measure, it has face validity.

Define Interrater Reliability

Two or more independent observers will come up with consistent or very similar findings. This is most relevant for observational measures. Ex: You and your lab partner are both observing how many times a particular child smiles throughout recess. You should both get the same score.

What are the two statistical devices researchers use for data analysis before using a particular measure to test a hypothesis?

scatterplots and the correlation coefficient

Dr. Johnson wants to do a study to investigate whether the physiological measure, heart rate variability, varies over time or whether it is a trait that stays stable within the same person over time. He records participants' heart rate variability once at the beginning of the semester and once at the end of the semester. He finds a high positive correlation (r = .65) between the first and second time points. What type of reliability is he examining? Options: test-retest; interrater; internal; construct

test-retest

