Chapter 5 Questions

A(n) ____________ variable has only two categories.

Dichotomous

The __________ method of measuring reliability involves two measures of the same concept, with both measures applied at the same time. The results of the two measures are then compared.

Split-halves

______ is a type of measurement error that results in systematically over- or under-measuring the value of a concept.

Bias

Which type of validity involves two measures which theoretically are expected not to be related and thus the correlation between them is expected to be low or weak?

Discriminant

Deciding how to measure the presence, absence, or amount of concepts in the real world for use in an empirical investigation is the process of providing a(n) ________________ definition.

Operational

Which level of measurement below best describes a variable for which the assigned values represent only different categories or classifications for that variable?

Nominal

A ___________ is a multi-item measure in which respondents are presented with increasingly difficult measures of approval for an attitude.

Guttman scale

Why are reliability and validity threats to the accuracy of measures? In your answer, please define both terms.

A reliable measure is one that produces consistent results on repeated trials. A valid measure is one that measures what it is supposed to measure. Problems with either reliability or validity are major threats to the accuracy of measures. An unreliable measure produces different results in repeated trials, so we cannot be sure whether we are accurately measuring the concept in any given trial. An invalid measure consistently measures the wrong concept and is therefore useless for measuring the right one.

A variable that measures education on a scale that includes (0) none, (1) less than college, (2) college, (3) more than college, is an example of the _________ level of measurement.

Ordinal

A(n) ___________ measurement assumes that a comparison can be made on which observations have more or less of a particular attribute.

Ordinal (level)

What is the difference between a Likert scale and a Guttman scale?

A Likert scale score is calculated from the scores obtained on individual items. Each item generally asks a respondent to indicate a degree of agreement or disagreement with the item, as with the abortion questions discussed earlier. A Likert scale differs from an index, however, in that once the scores on each of the items are obtained, only some of the items are selected for inclusion in the calculation of the final score. Those items that allow a researcher to distinguish most readily those scoring high on an attribute from those scoring low will be retained, and a new scale score will be calculated based only on those items. The Guttman scale also uses a series of items to produce a scale score for respondents. Unlike the Likert scale, however, a Guttman scale presents respondents with a range of attitude choices that are increasingly difficult to agree with; that is, the items composing the scale range from those easy to agree with to those difficult to agree with. Respondents who agree with one of the "more difficult" attitude items will also generally agree with the "less difficult" ones.
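To make the scoring difference concrete, here is a minimal sketch in Python; the respondents, item responses, agreement threshold, and item-retention cutoff are made-up assumptions for illustration, not anything taken from the chapter.

```python
# Hypothetical illustration of Likert vs. Guttman scoring (not from the text).
import numpy as np

# Five respondents answering four 5-point items (1 = strongly disagree ... 5 = strongly agree).
responses = np.array([
    [5, 4, 4, 3],
    [2, 2, 1, 1],
    [4, 5, 4, 4],
    [3, 3, 2, 2],
    [1, 1, 1, 2],
])

# Likert-style scoring: keep only the items that discriminate best between high
# and low scorers (here, the items most correlated with the total score), then
# sum the retained items.
totals = responses.sum(axis=1)
item_total_r = np.array([np.corrcoef(responses[:, j], totals)[0, 1]
                         for j in range(responses.shape[1])])
retained = item_total_r >= 0.9          # arbitrary cutoff for this sketch
likert_scores = responses[:, retained].sum(axis=1)

# Guttman-style scoring: the items are ordered from easiest to hardest to agree
# with, and the scale score is how many items a respondent agrees with (>= 4).
guttman_scores = (responses >= 4).sum(axis=1)

print("Likert scores: ", likert_scores)
print("Guttman scores:", guttman_scores)
```

The contrast is the point of the sketch: the Likert score sums the best-discriminating items, while the Guttman score is essentially a count of how many increasingly difficult items a respondent endorses.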

Please identify and define the four levels of measurement. In your answer please provide an example variable for each level of measurement.

A nominal measurement is involved whenever the values assigned to a variable represent only different categories or classifications for that variable. In such a case, no category is more or less than another category; they are simply different (for example, religious affiliation or region of residence). An ordinal measurement assumes that a comparison can be made on which observations have more or less of a particular attribute (for example, education measured as none, less than college, college, or more than college). With an interval measurement, the intervals between the categories or values assigned to the observations have meaning: the value of a particular observation is important not just in terms of whether it is larger or smaller than another value (as in ordinal measures) but also in terms of how much larger or smaller it is (for example, temperature measured in degrees Fahrenheit). The final level of measurement is a ratio measurement. This type of measurement involves the full mathematical properties of numbers; that is, the values of the categories order the categories, tell something about the intervals between the categories, and state precisely the relative amounts of the variable that the categories represent (for example, education measured in number of years, or income in dollars).

The ____________ method of measuring reliability also involves measuring the same attribute more than once, but it uses two different measures of the same concept rather than the same measure.

Alternate-form

Which type of validity is demonstrated when a measure of a concept is related to a measure of another concept with which the original concept is thought to be related?

Construct

________ validity involves determining the full domain or meaning of a particular concept and then making sure that measures of all portions of this domain are included in the measurement technique.

Content

The results of interitem association tests are often displayed in a ________. Such a display shows how strongly related each of the items in the measurement scheme is to all the other items.

Correlation matrix
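A small sketch of such a display, using made-up responses to three items (the data are purely illustrative):

```python
# Hypothetical sketch: correlation matrix for an interitem association check.
import numpy as np

# Rows are respondents; columns are three items intended to tap the same concept.
items = np.array([
    [5, 4, 5],
    [2, 2, 1],
    [4, 4, 5],
    [3, 2, 3],
    [1, 2, 1],
])

# np.corrcoef treats rows as variables, so transpose; the off-diagonal entries
# show how strongly each item relates to every other item in the scheme.
corr_matrix = np.corrcoef(items.T)
print(np.round(corr_matrix, 2))
```

Strong positive off-diagonal correlations suggest the items are measuring the same underlying concept.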

Imagine that you were to test a variable of your own design measuring ideology for validity. What tests might you use, how would each test confirm validity, and what problems might you encounter in establishing validity?

Essentially, a valid measure is one that measures what it is supposed to measure. Unlike reliability, which depends on whether repeated applications of the same or equivalent measures yield the same result, validity refers to the degree of correspondence between the measure and the concept it is thought to measure. A measure's validity is more difficult to demonstrate empirically than its reliability because validity involves the relationship between the measurement of a concept and the actual presence or amount of the concept itself, and information regarding that correspondence is seldom abundant. Nonetheless, there are ways of evaluating the validity of any particular measure.

Face validity may be asserted (not empirically demonstrated) when the measurement instrument appears to measure the concept it is supposed to measure. To assess the face validity of a measure, we need to know the meaning of the concept being measured and whether the information being collected is "germane to that concept." Content validity is similar to face validity but involves determining the full domain or meaning of a particular concept and then making sure that measures of all portions of this domain are included in the measurement technique.

A third way to evaluate the validity of a measure is by empirically demonstrating construct validity. When a measure of a concept is related to a measure of another concept with which the original concept is thought to be related, convergent construct validity is demonstrated. In other words, a researcher may specify, on theoretical grounds, that two concepts ought to be related in a positive manner (say, political efficacy with political participation, or education with income) or a negative manner (say, democracy and human rights abuses). The researcher then develops a measure of each of the concepts and examines the relationship between them. If the measures are correlated in the expected direction, then one measure has convergent validity for the other measure. Discriminant validity involves two measures that theoretically are expected not to be related, so the correlation between them is expected to be low or weak. If the measures do not correlate with one another, then discriminant construct validity is demonstrated.

A fourth way to demonstrate validity is through interitem association. This is the type of validity test most often used by political scientists. It relies on the similarity of outcomes of more than one measure of a concept to demonstrate the validity of the entire measurement scheme. It is often preferable to use more than one item to measure a concept, since reliance on just one measure is more prone to error or misclassification of a case.

Validity of the measures used by political scientists is seldom demonstrated to everyone's satisfaction. Most measures of political phenomena are neither completely invalid nor completely valid but, rather, are partially accurate. Therefore, researchers generally present the rationale and evidence available in support of their measures and attempt to persuade their audience that their measures are at least as accurate as alternative measures would be. Nonetheless, a skeptical stance on the part of the reader toward the validity of political science measures is often warranted.
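As a concrete illustration of the construct-validity step for an ideology measure, here is a minimal sketch with simulated data; the variables, effect sizes, and the choice of party identification and height as comparison measures are assumptions made only for this example.

```python
# Hypothetical sketch of convergent and discriminant construct validity checks.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Suppose a new ideology measure should, on theoretical grounds, correlate with
# party identification (convergent) and not with an unrelated trait such as
# height (discriminant). All data below are simulated for illustration.
ideology = rng.normal(size=n)
party_id = 0.7 * ideology + rng.normal(scale=0.5, size=n)   # theoretically related
height = rng.normal(loc=170, scale=10, size=n)              # theoretically unrelated

convergent_r = np.corrcoef(ideology, party_id)[0, 1]
discriminant_r = np.corrcoef(ideology, height)[0, 1]

print(f"ideology vs. party ID (expected strong): r = {convergent_r:.2f}")
print(f"ideology vs. height   (expected weak):   r = {discriminant_r:.2f}")
```

A strong first correlation and a weak second one would support convergent and discriminant construct validity, respectively.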

_______________ tests validity by relying on the similarity of outcomes of more than one measure of a concept to demonstrate the validity of the entire measurement scheme.

Interitem association

A __________ score is calculated from the scores obtained on individual items. Each item generally asks a respondent to indicate a degree of agreement or disagreement with the item.

Likert scale

Generally speaking, political scientists prefer more precision in measurement to less. But, how can too much precision be a bad thing when it comes to measurement?

Measures with many response possibilities take up more space on a written questionnaire and more time to explain in a telephone survey. Such questions may confuse or tire survey respondents. A more serious problem is that they may lead to measurement error. Think about the possible responses to a question asking respondents to use a 100-point scale (called a thermometer scale) to indicate their support for or opposition to a political candidate, with 50 as the neutral position, 0 as least favorable or coldest, and 100 as most favorable. Some respondents may not use the whole scale (to them, no candidate ever deserves more than an 80 or less than a 20), whereas other respondents may use the ends and the very middle of the scale and ignore the scores in between. We might predict that a person who gives a candidate a 100 is more likely to vote for that candidate than a person who gives the same candidate an 80, when in reality the two respondents may like the candidate about equally and be equally likely to vote for that candidate. Another problem with overly precise measurements is that they may be unreliable: if asked to rate candidates on more than one occasion, respondents could vary the number they choose even if their opinion has not changed.
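One common response to this problem is to collapse an overly precise scale into a few broader categories before analysis. A minimal sketch, with arbitrary cut points and made-up ratings:

```python
# Hypothetical sketch: collapsing 0-100 thermometer ratings into five categories.
import numpy as np

ratings = np.array([100, 80, 55, 50, 47, 20, 0])   # made-up thermometer scores

bins = [20, 40, 60, 80]                             # arbitrary category boundaries
labels = ["very cold", "cold", "neutral", "warm", "very warm"]
categories = [labels[i] for i in np.digitize(ratings, bins)]

print(list(zip(ratings.tolist(), categories)))
```

Respondents who gave a candidate an 80 and a 100 end up in the same category, which matches the intuition that those two ratings probably reflect similar levels of support.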

A variable that measures education in number of years is an example of the _________ level of measurement.

Ratio

A ____________ measure is one that yields the same results on repeated trials.

Reliable

A __________ is a method of accumulating scores on individual items to form a composite measure of a complex phenomenon.

Summation index

Please define a summation index and give a detailed example of how you would use a summation scale to create a variable of your choice.

The answer will vary by the example chosen by the student but should otherwise be similar to the answer below. A summation index is a method of accumulating scores on individual items to form a composite measure of a complex phenomenon. An index is constructed by assigning a range of possible scores for a certain number of items, determining the score for each item for each observation, and then combining the scores for each observation across all the items. The resulting summary score is the representative measurement of the phenomenon. Attitudes are complex phenomena, and we usually do not know enough about them to devise single-item measures. So we often ask several questions of people about a single attitude and aggregate the answers to represent the attitude. A researcher might measure attitudes toward abortion, for example, by asking respondents to choose one of five possible responses—strongly agree, agree, undecided, disagree, and strongly disagree—to the following three statements: (1) Abortions should be permitted in the first three months of pregnancy; (2) Abortions should be permitted if the woman's life is in danger; (3) Abortions should be permitted whenever a woman wants one. An index of attitudes toward abortion could be computed by assigning numerical values to each response (such as 1 for strongly agree, 2 for agree, 3 for undecided, and so on) and then adding the values of a respondent's answers to these three questions. (The researcher would have to decide what to do when a respondent did not answer one or more of the questions.) The lowest possible score would be a 3, indicating the most extreme pro-abortion attitude, and the highest possible score would be a 15, indicating the most extreme anti-abortion attitude. Scores in between would indicate varying degrees of approval of abortion.
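The arithmetic in that example is simple enough to sketch directly; the respondents and their answers below are hypothetical, but the coding of responses and the 3-to-15 range follow the description above.

```python
# Sketch of the abortion-attitude summation index described above.
RESPONSE_VALUES = {"strongly agree": 1, "agree": 2, "undecided": 3,
                   "disagree": 4, "strongly disagree": 5}

# Each respondent answers the three abortion statements; the answers are made up.
respondents = {
    "A": ["strongly agree", "strongly agree", "agree"],
    "B": ["disagree", "undecided", "strongly disagree"],
    "C": ["agree", "strongly agree", "undecided"],
}

# The index is the sum of the three item values: 3 is the most extreme
# pro-abortion score, 15 the most extreme anti-abortion score. (A real study
# would also need a rule for respondents who skip one or more items.)
for name, answers in respondents.items():
    score = sum(RESPONSE_VALUES[a] for a in answers)
    print(f"Respondent {name}: index score = {score}")
```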

What does the level of measurement tell us about a variable?

The level of measurement tells us how much precision and information a variable contains. A nominal-level variable carries the least information, with the amount of information increasing as the level of measurement rises to ordinal, then interval, and finally ratio. The level of measurement also tells us about the mathematical properties of the variable: the higher the level of measurement, the more mathematical operations can legitimately be applied to the variable's values.

