JMU POSC 295 Exam #1 Study Guide


What Is A Descriptive Research Question?

Provides information about the nature or main features of variables - such as the mean or frequency distribution of a variable. (Chapter 2, Page 320)

What Is An Associational Research Question?

Relationships that do not define or assume any cause-and-effect relationship between variables. (Chapter 2, Page 318) When two variables are only associated (with no causality implied), no preference exists regarding the location of the variables. (Page 134)

What Is A Causal Research Question?

Relationships that specify cause and effect among variables, whereby one variable affects another. (Chapter 2, Page 318) Causation requires (1) empirical (that is, statistical) correlation and (2) a plausible cause-and-effect argument.

What Are The Requirements For Causality?

Relationships that specify cause and effect among variables. (Chapter 2, Page 318) They show cause and effect: one variable is assumed to affect another. (Page 24) Among causal relationships, we further distinguish between independent variables and dependent variables: Independent Variable(s) -------> Dependent Variable. (Page 24) Causality requires both (1) empirical (that is, statistical) correlation and (2) a plausible cause-and-effect argument, that is, a persuasive argument (also called a "theory") about how one variable could directly affect another. Both statistical correlation and a persuasive theoretical argument are required to stake a claim of causation. (Page 25)
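As an illustrative sketch of the first requirement (empirical correlation), the Pearson correlation coefficient can be computed from first principles in Python. The variables and data below are hypothetical, and a high correlation alone never satisfies the second requirement of a persuasive theoretical argument:

```python
# Hypothetical data: hours of job training (independent variable)
# and job placements (dependent variable) at six program sites.
training_hours = [10, 20, 30, 40, 50, 60]
placements = [12, 18, 25, 33, 38, 45]

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(training_hours, placements)
print(round(r, 3))  # strong positive correlation (close to 1)
```

A coefficient near 1 establishes the statistical correlation, but a causal claim still requires a plausible argument for why training would produce placements.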

What Is A Relational Research Question?

Specifications of which variables are related to each other, and the ways in which they're related to each other. (Chapter 2, Page 328)

What is a Measurement Scale?

A collection of attributes used to measure a specific variable. For example, the variable "gender" is commonly measured on a scale defined by the specific attributes "male" & "female".

Criterion Validity

A justification or argument pertaining to measurement validity in which an index measure is compared against another external measure (from other research) with which it should be correlated based on theoretical grounds. (Chapter 3, Page 319)

Construct Validity

A justification or argument pertaining to measurement validity in which an index measure is compared against other (internal) study measures (variables) with which it should be correlated, based on theoretical grounds. (Chapter 3, Page 55 & 319)

Sampling Frame

A list from which a sample is drawn. The sampling frame is usually not exactly identical to the population because the identities and locations of some population members are unknown. (Chapter 5, Page 329) One of the first tasks in conducting a survey is to acquire the sampling frame (so that a sample can later be drawn). Ideally, the sampling frame should closely match the survey population, but discrepancies will exist in practice and should be acknowledged. Surveys require adequate sampling frames.

Purposive Sampling Strategy

A nonrandom sampling method that is used to produce further insight, rather than to generalize to another population. (Chapter 5, Page 327) Purposive sampling is used to produce further insight, rather than generalization. (Page 90) Often, these are case studies that are not even generalizable to other exemplary organizations, but their insights into how things are done are of great importance and most useful in improving public management. (Page 90) Research based on purposive samples can yield important insights, but the results are not generalizable. (Page 91)

Random Sampling Strategy

A sampling method whereby each population member has an equal chance of being selected for the study sample. Random samples are thought to result in representative samples. (Chapter 5, Page 328) Random sampling is the most accurate way to obtain a representative sample. (Page 89) Two popular methods of random sampling are to assign a number to each population member and use computer-generated random numbers to select the sample, or to use randomly dialed telephone numbers to select participants for phone surveys. (Page 89) Random sampling should be used if generalization is the objective. (Page 90) Random sampling is also used in historical or archival research when an entire population of records is too large for study; conclusions drawn from the sample are then inferred to the population of all records. (Page 89)
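The first method described above (assigning a number to each population member and selecting with computer-generated random numbers) can be sketched in Python; the frame size and sample size here are made up for illustration:

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 1,000 members.
sampling_frame = list(range(1, 1001))

random.seed(42)  # fixed seed so the sketch is reproducible

# Draw 100 members without replacement; each member has an equal
# chance of being selected, so the sample should be representative.
sample = random.sample(sampling_frame, k=100)

print(len(sample))       # 100 members drawn
print(len(set(sample)))  # all 100 are distinct (no replacement)
```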

What is Interval-Level Scale?

A scale that exhibits both order and distance among categories. For example, someone who has an IQ score of 120 has a score of 20 points higher than someone who has an IQ score of 100 (see also "ratio-level scale"). (Chapters 3 & 6)

What is Ratio-Level Scale?

A scale that exhibits both order and distance among categories. Someone who earns $75,000 per year makes exactly three times that of someone making $25,000. The only difference between interval and ratio scales is that the latter have a true "zero" (for example, income can be zero, but IQ cannot) (see also "interval-level scale".). (Chapters 3 & 6)

What is Nominal-Level scale?

A scale that exhibits no ordering among the categories. For example, the variable "gender" has a nominal scale because there is no ordering among the attributes "men" & "women". These scales typically provide the least amount of information relative to other types of scales. (Chapters 3 & 6)

What is Ordinal-Level Scale?

A scale that exhibits order among categories, though without exact distances between successive categories. For example, assume that we are measuring anger by whether someone feels irritated, aggravated, or raging mad. Although we can say that "raging mad" is more angry than "aggravated," we cannot say how much more angry "raging mad" is than "aggravated". Hence, there is order among the categories, but no exact distance. (Chapters 3 & 6)
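The differences among these scale types can be sketched in Python. The categories and numbers below are the chapters' own examples, arranged into code for illustration:

```python
# Nominal: categories with no ordering -- only checks for sameness
# or difference are meaningful.
gender = ["male", "female", "female", "male"]
assert gender[0] != gender[1]

# Ordinal: ordered categories without exact distances -- we can rank
# them, but not say how far apart successive categories are.
anger_levels = ["irritated", "aggravated", "raging mad"]  # low to high
assert anger_levels.index("raging mad") > anger_levels.index("aggravated")

# Ratio: order, distance, and a true zero -- ratios are meaningful.
incomes = [75_000, 25_000]
assert incomes[0] / incomes[1] == 3  # "three times as much" is valid

print("all scale checks passed")
```

Note that the IQ example from the interval-level card would fail the last check: IQ has order and distance but no true zero, so "1.2 times as smart" is not a meaningful statement.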

Sample

A selection, such as of citizens, from a population or sampling frame. (Chapter 5, Page 329) A sample is a selection from an entire population. (Page 89) A sample is a portion or subset of the population. (Page 169) Samples are very common because they save time and money; we draw, analyze, and generalize from samples. (Page 169) Many official statistics are also based on samples, and samples are taken not only of people but also of administrative goods, such as police arrests, school grades, welfare outcomes, business permits, and promotions. (Page 169) The best way to draw a sample is randomly: each subject should have an equal chance of being selected as part of your sample. (Page 169) Such a sample is considered representative, one from which you make inferences to the population as a whole. (Page 169) Few people are interested in the sample itself; the interest lies in the population it represents. (Page 169) Every random sample is a bit different and thus produces different sample means. In most surveys, the purpose of a sample is to make a generalization, a statement about one group that is applied to another or a broader group; a statement about a sample is then held to be valid for the population from which the sample was drawn. (Page 89)
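The point that every random sample is a bit different and thus produces a slightly different sample mean can be illustrated with a short simulation; the population below is synthetic, constructed only so the population mean is known:

```python
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible

# Synthetic population: 10,000 incomes centered near $50,000.
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]

# Draw five random samples of 200; each produces a slightly different
# sample mean, but all hover near the population mean.
sample_means = [
    statistics.mean(random.sample(population, k=200)) for _ in range(5)
]
for m in sample_means:
    print(round(m))
```

Each printed mean differs from the others, yet each is close enough to the population mean that an inference from sample to population is reasonable.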

What Is An Experimental Research Design?

A study design method that assesses rival hypotheses through the use of control groups. (Chapter 2, Page 321) Rival hypotheses (and their associated control variables) can be dealt with through (1) experimental design and (2) statistical control. Experimental designs address rival hypotheses through the use of control groups, which are similar to the study group in all aspects except that members in the control group do not participate in the intervention. (Page 30) That is, the control group differs from the study group only in the intervention.

Index Variable

A variable that combines the values of other variables into a single indicator or score. (Chapter 3, Page 52, 323) Commonly used to empirically measure abstract concepts and multifaceted, encompassing phenomena. (Page 52) An index variable is created by adding up the values of the measurement variables (summed measurement variables) that constitute the dimension or concept. (Page 52) When one or more of the measurement variables are missing from an observation, the value of the index variable for that observation is missing too. Index variables are often continuous. (Page 52-53) A practical problem with index variables is that individual components sometimes have different scales or ranges. (Page 53)
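The summing logic and the missing-value rule described above can be sketched as follows; the component values are hypothetical survey responses:

```python
# Build an index score by summing three measurement variables.
# Per the rule above: if any component is missing (None), the
# index value for that observation is missing too.

def index_score(components):
    if any(c is None for c in components):
        return None  # missing component -> missing index value
    return sum(components)

respondents = [
    [4, 5, 3],     # complete observation
    [2, None, 4],  # one component missing
]
scores = [index_score(r) for r in respondents]
print(scores)  # [12, None]
```

A fuller version would also rescale components that have different ranges, the practical problem the card notes.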

What Is A Rival Hypothesis?

Alternative explanations for observed outcomes. (Chapter 2 & Appendix 11.1) There are 12 categories of possible rival hypotheses. Variables associated with rival hypotheses are called control variables. The analytical task is to identify these rival hypotheses, collect data for the control variables, and use statistical methods to take their impacts into account. (Page 38) To examine rival hypotheses, we divide the sample into two (or more) groups.

Internal Validity

Comparison against internal sources. This comparison does not provide absolute proof but it may provide some reassurance and a measure of validity. A lack of correlation would require further inquiry and explanation. (Page 55)

External Validity

Comparison with external sources. When the variable correlates as expected, additional validity is provided. (Page 55) Also known as Criterion Validity & Triangulation.

Triangulation

Comparison with external sources. When the variable correlates as expected, additional validity is provided. Also known as Criterion Validity or External Validity.

What Is A Quasi-Experimental Research Design?

Comparisons between experimental and comparison groups that do not meet the standard of classic research designs because they often lack randomization, baseline measurement, and/or a comparison group. (Chapter 2, Page 328) Quasi-experimental designs vary from the classic, randomized design as follows (Page 35 & 36):
- Research design with a nonrandomized comparison group.
- One-group research design with posttest measure only.
- Research design with comparison group and posttests only.
- One-group research design with pretest and posttest.
Program evaluation often uses quasi-experimental research designs. Such designs typically use before-and-after measurement, comparison groups, and/or baseline measurement in a variety of ways. The theory of quasi-experimental research design includes consideration of different types of rival hypotheses, which are distinguished as threats to internal or external validity. Familiarity with these categories can help analysts identify rival hypotheses. (Page 38 & 39)

What Are The Different Kinds Of Research Questions?

Descriptive, Relational, Associational, & Causal

What are the four types of measurement validity?

Face, Content, Criterion, & Construct

What are the three different levels of measurement of a measurement scale?

Nominal, Ordinal, & Ratio

Measurement Validity

The extent to which a measurement measures that which it intends or purports to measure. (Chapter 3, Page 325) Simply means that something measures or reflects what it's intended to. (Page 47) Variables really measure what they're said to measure. (Page 55) Analysts are not expected to use all possible strategies to find this, however, they're expected to justify their variables in some way. (Page 55)

Reliability

The extent to which repeated sampling and measurement produce the same result. (Page 91) Statisticians use the term reliable to describe a consistent and predictable performance. A performance is said to be reliable if the individual performances center closely around the average performance. A performance lacks reliability if individual performances scatter all over, departing greatly from the average performance. (Page 150)
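The idea that a reliable performance centers closely around the average while an unreliable one scatters widely can be illustrated with the standard deviation; the two measurement series below are invented for the sketch:

```python
import statistics

# Two hypothetical series of repeated measurements of the same quantity.
reliable = [99.8, 100.1, 100.0, 99.9, 100.2]    # clusters near the average
unreliable = [90.0, 112.0, 95.0, 108.0, 85.0]   # scatters widely

# The standard deviation gauges how closely individual performances
# center on the average: a smaller spread means a more consistent,
# predictable -- that is, reliable -- performance.
print(round(statistics.stdev(reliable), 2))
print(round(statistics.stdev(unreliable), 2))
```

The first series yields a spread well under one unit; the second yields a spread of more than ten, so repeated measurement does not reproduce the same result.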

What is Operationalization?

The process of identifying variables that are used for measuring each dimension of a concept. (Chapter 3, Page 326) The second step of concept measurement. (Page 49) This process develops the specific variables that are used to measure concepts. (Page 49) Three approaches to operationalization, in declining order of rigor:
1. Develop separate measures for each dimension. This is the most comprehensive approach because it requires that you measure each dimension separately. (Page 51)
2. Develop a single set of measures that encompasses all of the dimensions. This approach develops questions that each measure a different aspect of a topic but span the entire topic, rather than developing measures for separate dimensions. Such an approach might be necessary because of data limitations or because other study concepts are more important. Whether this measure suffices depends on the need for more specific information and on validation. Sometimes this second approach develops into the first approach as analysts give more careful consideration to the distinct dimensions of a concept. (Page 51)
3. Measure the concept through a single variable. This is decidedly nonrigorous; it's not biased, but it does not provide any information about specific aspects of the phenomenon. This approach is typically used when the concept is of quite minor importance to the program or evaluation. (Page 52)

What is A Dependent Variable?

Variables that are affected by other variables (hence, they are dependent on them). (Chapter 2, Page 320)

What is An Independent Variable?

Variables that cause an effect on other variables but are not themselves shaped by other variables (hence, they're independent of other variables). (Chapter 2)

Are External Validity, Criterion Validity, & Triangulation the same thing?

Yes

Are Internal Validity & Construct Validity The Same Thing?

Yes

