Research Methods

Content Analysis

"A technique for systematically describing written, spoken or visual communication. It provides a quantitative (numerical) description." Reading a text using a coding sheet (created via a theory); Give the content a nominal scale for measure Can't be used to test causality; cannot determine if something is the cause of something else Can determine prevalence of a problem/ issue/ phenomenon

Critical Scholarship: Ontology

All reality is socially constructed (a radical constructivist holds that there is nothing beyond what we have socially constructed)

Social Science: Assumptions

Answers to our questions mean something

Content Validity

Assessment of how well a set of scale items matches with the relevant content domain of the construct that it is trying to measure.

Critical Scholarship: Methodology

Assumption dependent

Social Science: Ontology

Believes in a reality that exists independently of the observer

Causality & how to demonstrate it

Causality: whether the observed change in the dependent variable is indeed caused by a corresponding change in the hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions: (1) covariation of cause and effect (i.e., if the cause happens, then the effect also happens; and if the cause does not happen, the effect does not happen), (2) temporal precedence: the cause must precede the effect in time, (3) no plausible alternative explanation (or spurious correlation).

Critical Scholarship: Epistemology

Considering varying individual perspectives

Inductive Reasoning

Empirical (specific) to Theoretical (general)

Empirical Assessment of Validity

Empirical assessment of validity examines how well a given measure relates to one or more external criteria, based on empirical observations.

Social Science:Epistemology

Empirical observation; we can observe in a post-positivist way (allowing logical assumptions)

Internal Validity

Evidence that any observed outcomes in the dependent variable can conclusively be attributed to the manipulation and not to some extraneous variable or source. "Did the manipulation really cause the outcome?"

Concurrent Validity

How well one measure relates to another concrete criterion that is presumed to occur simultaneously

Manipulation Check

An important task in experimental design: checking the adequacy of the design. Pilot tests, pretests, and posttests can all serve as manipulation checks. A manipulation check is a means of assessing whether or not the manipulation worked as intended. E.g., when manipulating single-player vs. multiplayer games, you can ask children after playing a multiplayer game whether they cooperated with others or played alone.

Stratified sampling

In stratified sampling, the sampling frame is divided into homogeneous and non-overlapping subgroups (called "strata"), and a simple random sample is drawn within each subgroup. (Bhatta pg. 67)
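
A minimal Python sketch of the idea (the frame, the strata labels, and the per-stratum sample size are hypothetical, not from the source):

    import random

    # Hypothetical sampling frame: (unit_id, stratum) pairs
    frame = [(i, "undergrad" if i % 2 == 0 else "grad") for i in range(1000)]

    # Partition the frame into homogeneous, non-overlapping strata
    strata = {}
    for unit_id, stratum in frame:
        strata.setdefault(stratum, []).append(unit_id)

    # Draw a simple random sample within each stratum (here, 50 units per stratum)
    sample = []
    for stratum, units in strata.items():
        sample.extend(random.sample(units, 50))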

Pretest-posttest control group design

In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). (Bhatta pg. 86)
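
In the standard experimental design notation (assuming the usual convention of R = random assignment, O = observation/measurement, X = treatment), this design can be sketched as:

    Treatment group:  R   O1   X   O2
    Control group:    R   O3       O4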

Quota sampling

In this technique, the population is segmented into mutually exclusive subgroups (just as in stratified sampling), and then a non-random set of observations is chosen from each subgroup to meet a predefined quota. (Bhatta pg. 69)

Variable

Measurable representation of a construct

Negative Relationship

Negative correlation is a relationship between two variables in which one variable increases as the other decreases, and vice versa.

Paradigm

Our worldview; frame that influences how and what we think

Sampling

Sampling is the statistical process of selecting a subset (called a "sample") of a population of interest for purposes of making observations and statistical inferences about that population.

Operational Definitions

Scientific research requires operational definitions that define constructs in terms of how they will be empirically measured.

Convergent Validity

The closeness with which a measure relates to (or converges on) the construct that it is purported to measure

Discriminant Validity

The degree to which a measure does not measure (or discriminates from) other constructs that it is not supposed to measure.

Deductive Reasoning

Theoretical (general) to empirical (specific)

Theoretical Assessment of Validity

Theoretical assessment of validity focuses on how well the idea of a theoretical construct is translated into or represented in an operational measure.

Posttest-only control group design

This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. (Bhatta pg. 86)

Attrition

Threat to Validity: when participants drop out of the study before completion (various reasons for this)

Type 1 Error + Type 2 Error

Type 1: a false positive; concluding that the hypothesized relationship exists when it does not. Type 2: a false negative; concluding that there is no relationship when one actually exists. The way you collect data can result in a Type 1 or Type 2 error.

Validity

Validity, often called construct validity, refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure. For instance, is a measure of compassion really measuring compassion, and not measuring a different construct such as empathy?

Hypothesis

The empirical formulation of propositions, stated as relationships between variables, is called a hypothesis. Propositions are specified in the theoretical plane, while hypotheses are specified in the empirical plane. Hence, hypotheses are empirically testable using observed data, and may be rejected if not supported by empirical observations.

Praxis

The idea that scholarship should serve to aid in the emancipation of people from power

Operational Linkages

The predicted connection or relationship between the operationalized variables, first in the hypothesis and then in the statistical analyses, is referred to as the operational linkage. Operational linkages are the links that predict and test theoretical linkages. (Krcmar, pg. 11)

Theory and its relationship to conceptual model

The process of theory or model development may involve inductive and deductive reasoning. (*Your theory will inform whether you should be using inductive or deductive reasoning to form your conceptual model). Deduction is the process of drawing conclusions about a phenomenon or behavior based on theoretical or logical reasons and an initial set of premises. As an example, if a certain bank enforces a strict code of ethics for its employees (Premise 1) and Jamie is an employee at that bank (Premise 2), then Jamie can be trusted to follow ethical practices (Conclusion). In deduction, the conclusions must be true if the initial premises and reasons are correct. In contrast, induction is the process of drawing conclusions based on facts or observed evidence. For instance, if a firm spent a lot of money on a promotional campaign (Observation 1), but the sales did not increase (Observation 2), then possibly the promotion campaign was poorly executed (Conclusion). As shown in Figure 2.3, inductive and deductive reasoning go hand in hand in theory and model building.

Theoretical Linkages

Theories provide explanations of social or natural phenomena. Theoretical linkages are propositional statements of relationship stating that two variables are linked in such a way that they are either associatively or causally related. Ex: "As exposure to television violence increases, aggression levels increase" OR "Exposure to television violence causes increases in aggressive behavior." (Krcmar, pg. 7)

Switched Replication Design

This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organizational contexts where organizational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals. (Bhatta pg. 89)

Randomized block design

This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between a treatment group (receiving the same treatment) and a control group.

Sampling Frame

This is an accessible section of the target population (usually a list with contact information) from where a sample can be drawn. If your target population is professional employees at work, because you cannot access all professional employees around the world, a more realistic sampling frame will be employee lists of one or two local companies that are willing to participate in your study.

Statistical Regression to the Mean

Threat to Validity: phenomenon that can be particularly problematic when researchers use a repeated measure design in which participants are measured on a given variable more than once. Those who score really high or really low tend to move toward the sample mean on subsequent measures.

Selection Bias

Threat to validity: when participants self-select into a study, researchers must determine whether those who are in the study are similar to the population of interest and whether they behave in unusual ways simply because they are willing to participate.

Hawthorne Effect

Threat to validity: when outcomes or changes in the participants occur because they know they are being observed. Wanting to perform to please the experimenters.

Axiological

Value assumptions: the study of the nature of value and valuation, and the kinds of things that are valuable. What values drive what we know? Value-free = objectivism / Value-intended = subjectivism.

Mediating variables

Variables that are explained by independent variables while also explaining dependent variables are mediating variables (or intermediate variables)

Dependent Variable

Variables that are explained by other variables are dependent variables

Independent Variable

Variables that explain other variables are called independent variables

Moderating Variables

Variables that influence the relationship between independent and dependent variables are called moderating variables.

Ontology Definition

What we think the world is, e.g., subjective, objective (our paradigm influences our ontology)

Face Validity

Whether an indicator seems to be a reasonable measure of its underlying construct "on its face". Ex: frequency of one's attendance at a religious service seems to make sense as an indicator of a person's religiosity without a lot of explanation.

Systematic sampling

The sampling frame is ordered according to some criterion and elements are selected at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from that point onward, where k = N/n is the ratio of the sampling frame size N to the desired sample size n, formally called the sampling ratio. (Bhatta pg. 67)
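
A minimal Python sketch of the procedure (the frame, the sample size, and the function name systematic_sample are hypothetical):

    import random

    def systematic_sample(frame, n):
        """Random start, then every k-th element, where k = N / n (the sampling ratio)."""
        N = len(frame)
        k = N // n                     # sampling ratio, rounded down to an integer step
        start = random.randrange(k)    # random start within the first interval
        return frame[start::k][:n]

    # Example: draw 10 units from an ordered frame of 100
    sample = systematic_sample(list(range(100)), 10)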

Implicit Theory

System 1 theories: "our perceptions, interpretations, understanding, and even memory and recall occur at the preconscious level."

Explicit Theory

System 2 theories: "when we read, draw conclusions, and perhaps summarize what we have read in a systematic, consciously aware way."

Positivism

(Auguste Comte) At its core, positivism is based on the assumption of an observable, law-based reality. Knowledge is limited to claims about the world that are unambiguously (positively) true, either because they are based on verifiable observations or because they must logically follow from such an observation.

Post-Positivism

(Karl Popper) (Falsification) We cannot be certain that what we think is true, but we can be certain about what we know not to be true. A theoretical model could be valid if one fails to observe empirical evidence inconsistent with it or even contradicting it when trying to falsify the theoretical model.

Systematic Error

(Measurement Error) Systematic error is error introduced by factors that systematically affect all observations of a construct across an entire sample. In our previous example of firm performance, since the recent financial crisis impacted the performance of financial firms disproportionately more than other types of firms such as manufacturing or service firms, if our sample consisted only of financial firms, we may expect a systematic reduction in the performance of all firms in our sample due to the financial crisis. Unlike random error, which may be positive, negative, or zero across observations in a sample, systematic error tends to be consistently positive or negative across the entire sample.

Random Error

(Measurement error) The error that can be attributed to a set of unknown and uncontrollable external factors that randomly influence some observations but not others. As an example, during the time of measurement, some respondents may be in a nicer mood than others, which may influence how they respond to the measurement items.

Conceptual Model

A conceptual model is a representation of a system, made up of concepts that are used to help people know, understand, or simulate the subject the model represents

Constructs

A construct is an abstract concept that is specifically chosen (or "created") to explain a given phenomenon. A construct may be a simple concept, such as a person's weight, or a combination of a set of related concepts such as a person's communication skill, which may consist of several underlying concepts such as the person's vocabulary, syntax, and spelling. The former instance (weight) is a unidimensional construct, while the latter (communication skill) is a multi-dimensional construct (i.e., it consists of multiple underlying concepts).

Content Analysis

A content analysis applies measurement rules to assign numerical values to messages in order to describe the messages and make inferences about their meaning.

Model

A model is a representation of all or part of a system that is constructed to study that system (e.g., how the system works or what triggers the system). While a theory tries to explain a phenomenon, a model tries to represent a phenomenon. Models are often used by decision makers to make important decisions based on a given set of inputs. For instance, marketing managers may use models to decide how much money to spend on advertising for different product lines based on parameters such as prior year's advertising expenses, sales, market growth, and competing products. Likewise, weather forecasters can use models to predict future weather patterns based on parameters such as wind speeds, wind direction, temperature, and humidity. While these models are useful, they may not necessarily explain advertising expenditure or weather forecasts. Models may be of different kinds, such as mathematical models, network models, and path models. Models can also be descriptive, predictive, or normative.

Population

All people or items (units of analysis) with the characteristics that one wishes to study. If you wish to identify the primary drivers of academic learning among high school students, then what is your target population: high school students, their teachers, school principals, or parents? The right answer in this case is high school students, because you are interested in their performance, not the performance of their teachers, parents, or schools.

Convenience sampling

Also called accidental or opportunity sampling, this is a technique in which a sample is drawn from that part of the population that is close to hand, readily available, or convenient. (Bhatta pg. 69)

Threats to Validity

Anything that happens during an experiment that calls into question whether the manipulation really caused the outcome.

Positivism

Argues that for something to be valid it must be observable (see it to believe it); anti-metaphysical: if you can't touch it, it's not real.

Theory

Broad explanations of/ predictions of behavior or phenomena. / A theory is a set of systematically interrelated constructs and propositions intended to explain and predict a phenomenon or behavior of interest, within certain boundary conditions and assumptions. Essentially, a theory is a systematic collection of related theoretical propositions. While propositions generally connect two or three constructs, theories represent a system of multiple constructs and propositions.

Characteristics of a true experiment

One or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. (Types of Experiments: Pretest-posttest control group design, posttest only control group design, etc).

Test-retest reliability.

Test-retest reliability is a measure of consistency between two measurements (tests) of the same construct administered to the same sample at two different points in time. If the observations have not changed substantially between the two tests, then the measure is reliable.
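
A minimal Python sketch with hypothetical scores for the same respondents measured at two points in time; the reliability estimate is simply the correlation between the two administrations:

    import numpy as np

    # Hypothetical scores for the same five respondents at time 1 and time 2
    time1 = np.array([3.0, 4.2, 2.8, 5.0, 3.6])
    time2 = np.array([3.1, 4.0, 3.0, 4.8, 3.7])

    # Test-retest reliability: correlation between the two measurements
    test_retest_r = np.corrcoef(time1, time2)[0, 1]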

Predictive Validity

The degree to which a measure successfully predicts a future outcome that it is theoretically expected to predict

Split-half reliability

Split-half reliability is a measure of consistency between two halves of a construct measure. For instance, if you have a ten-item measure of a given construct, randomly split those ten items into two sets of five (unequal halves are allowed if the total number of items is odd), and administer the entire instrument to a sample of respondents. Then, calculate the total score for each half for each respondent, and the correlation between the total scores in each half is a measure of split-half reliability.
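
A minimal Python sketch of the procedure described above, with hypothetical ratings (six respondents, ten items):

    import random
    import numpy as np

    # Hypothetical responses: 6 respondents x 10 items, rated 1-5
    responses = np.random.randint(1, 6, size=(6, 10))

    # Randomly split the ten items into two sets of five
    items = list(range(10))
    random.shuffle(items)
    half_a, half_b = items[:5], items[5:]

    # Total score per respondent for each half, then correlate the totals
    totals_a = responses[:, half_a].sum(axis=1)
    totals_b = responses[:, half_b].sum(axis=1)
    split_half_r = np.corrcoef(totals_a, totals_b)[0, 1]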

Multi-Stage Sampling

Depending on your sampling needs, you may combine these single-stage techniques to conduct multi-stage sampling. For instance, you can stratify a list of businesses based on firm size, and then conduct systematic sampling within each stratum. This is a two-stage combination of stratified and systematic sampling. Likewise, you can start with a cluster of school districts in the state of New York, and within each cluster, select a simple random sample of schools; within each school, select a simple random sample of grade levels; and within each grade level, select a simple random sample of students for study. In this case, you have a four-stage sampling process consisting of cluster and simple random sampling. (Bhatta pg. 68)

Measurement Validity (psychometric theory)

Examines how measurement works, what it measures, and what it does not measure. (Bhatta pg. 55) Measurement validity asks whether scales indeed measure the unobservable construct that we wanted to measure (scales are valid) and whether they measure the intended construct consistently and precisely (reliable).

Criterion-related Validity

Examines whether a given measure behaves the way it should, given the theory of that construct.

Social Science: Methodology

Experiments, content analysis, surveys, ethnographies, interviews

Concepts

Explanations require development of concepts or generalizable properties or characteristics associated with objects, events, or people. While objects such as a person, a firm, or a car are not concepts, their specific characteristics or behavior such as a person's attitude toward immigrants, a firm's capacity for innovation, and a car's weight can be viewed as concepts.

Cluster Sampling

If you have a population dispersed over a wide geographic region, it may not be feasible to conduct a simple random sampling of the entire population. In such cases, it may be reasonable to divide the population into "clusters" (usually along geographic boundaries), randomly sample a few clusters, and measure all units within each sampled cluster. (Bhatta pg. 68)
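
A minimal Python sketch with hypothetical clusters: a few clusters are sampled at random and every unit within the chosen clusters is included:

    import random

    # Hypothetical population grouped into geographic clusters (e.g., school districts)
    clusters = {
        "district_a": [f"unit_{i}" for i in range(40)],
        "district_b": [f"unit_{i}" for i in range(40, 90)],
        "district_c": [f"unit_{i}" for i in range(90, 120)],
    }

    # Randomly sample two clusters, then measure all units within each chosen cluster
    chosen = random.sample(list(clusters.keys()), 2)
    sample = [unit for name in chosen for unit in clusters[name]]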

Solomon four-group design

In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs but not in posttest only designs. (Bhatta pg. 88)
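
Using the same R/O/X notation (assumed, not quoted from the source), the four groups can be sketched as:

    Group 1 (treatment, pretested):      R   O1   X   O2
    Group 2 (control, pretested):        R   O3       O4
    Group 3 (treatment, not pretested):  R        X   O5
    Group 4 (control, not pretested):    R            O6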

Simple Random Sampling

In this technique, all possible subsets of a population (more accurately, of a sampling frame) are given an equal probability of being selected. Simple random sampling involves randomly selecting respondents from a sampling frame, but with large sampling frames, usually a table of random numbers or a computerized random number generator is used.
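
A minimal Python sketch (the frame and sample size are hypothetical); the random-number generator stands in for the table of random numbers mentioned above:

    import random

    frame = list(range(10_000))          # hypothetical sampling frame of 10,000 units
    sample = random.sample(frame, 200)   # each unit has an equal chance of selection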

Inter-rater reliability

Inter-rater reliability, also called inter-observer reliability, is a measure of consistency between two or more independent raters (observers) of the same construct.
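
A minimal Python sketch of one simple estimate, percent agreement between two raters coding the same hypothetical units (chance-corrected statistics such as Cohen's kappa are also commonly used):

    # Hypothetical codes assigned independently by two raters to the same ten units
    rater_1 = ["A", "B", "A", "A", "B", "A", "B", "B", "A", "A"]
    rater_2 = ["A", "B", "A", "B", "B", "A", "B", "A", "A", "A"]

    # Percent agreement: share of units on which the two raters assigned the same code
    agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)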

Internal consistency reliability

Internal consistency reliability is a measure of consistency between different items of the same construct. If a multiple-item construct measure is administered to respondents, the extent to which respondents rate those items in a similar manner is a reflection of internal consistency. This reliability can be estimated in terms of average inter-item correlation, average item-to-total correlation, or more commonly, Cronbach's alpha. As an example, if you have a scale with six items, you will have fifteen different item pairings, and fifteen correlations between these six items.
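
A minimal Python sketch with hypothetical ratings (eight respondents, six items), showing the average inter-item correlation over the fifteen item pairings and a standard Cronbach's alpha calculation (the helper name cronbach_alpha is illustrative):

    import numpy as np

    def cronbach_alpha(item_scores):
        """Alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = item_scores.shape[1]
        item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
        total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: 8 respondents rating the same 6-item construct on a 1-5 scale
    scores = np.random.randint(1, 6, size=(8, 6)).astype(float)

    # Average inter-item correlation: mean of the 15 off-diagonal item pairings
    corr = np.corrcoef(scores, rowvar=False)
    avg_inter_item_r = corr[np.triu_indices(6, k=1)].mean()

    alpha = cronbach_alpha(scores)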

Social Science: Topics

Interpersonal, organizational, intercultural, etc.

Interval Scales

Interval scales are those where the values measured are not only rank-ordered, but are also equidistant from adjacent attributes. For example, the temperature scale (in Fahrenheit or Celsius), where the difference between 30 and 40 degrees Fahrenheit is the same as that between 80 and 90 degrees Fahrenheit. Likewise, if you have a scale that asks respondents' annual income using the following attributes (ranges): $0 to 10,000, $10,000 to 20,000, $20,000 to 30,000, and so forth, this is also an interval scale, because the mid-points of each range (i.e., $5,000, $15,000, $25,000, etc.) are equidistant from each other.

Interview Surveys

Interviews are a more personalized form of data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardized set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike mail surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. (face-to-face, focus groups, telephone interviews)

Post Positivism

No objective reality; you can make logical assumptions without physical observation

How are Quasi-Experimental Designs different from a true experimental design?

No random assignment

Nominal Scales

Nominal scales merely offer names or labels for different attribute values. Nominal scales, also called categorical scales, measure categorical data. These scales are used for variables or indicators that have mutually exclusive attributes. Examples include gender (two values: male or female), industry type (manufacturing, financial, agriculture, etc.), and religious affiliation (Christian, Muslim, Jew, etc.). Even if we assign unique numbers to each value, for instance 1 for male and 2 for female, the numbers don't really mean anything (i.e., 1 is not less than or half of 2) and could easily have been represented non-numerically, such as M for male and F for female.

Non -probability sampling

Nonprobability sampling is a sampling technique in which some units of the population have zero chance of selection or where the probability of selection cannot be accurately determined. (Bhatta pg. 69)

Empirical

Observable

Ordinal Scales

Ordinal scales are those that measure rank-ordered data, such as the ranking of students in a class as first, second, third, and so forth, based on their grade point average or test scores. However, the actual or relative values of attributes or differences in attribute values cannot be assessed. For instance, the ranking of students in class says nothing about the actual GPA or test scores of the students, or how well they performed relative to one another. A classic example in the natural sciences is Moh's scale of mineral hardness, which characterizes the hardness of various minerals by their ability to scratch other minerals. For instance, diamonds can scratch all other naturally occurring minerals on earth, and hence diamond is the "hardest" mineral. However, the scale does not indicate the actual hardness of these minerals or even provide a relative assessment of their hardness. Ordinal scales can also use attribute labels (anchors) such as "bad", "medium", and "good", or "strongly dissatisfied", "somewhat dissatisfied", "neutral", "somewhat satisfied", and "strongly satisfied".

Philosophy and its relationship to theory

Our mental models (paradigms) impact/constrain our thinking and reasoning about observed phenomena. (Positivism / Post-Positivism / Ontology / Epistemology) Our ways of looking at the world (objectively, subjectively) impact how we study the world and therefore impact the theories we believe or develop.

Epistemology Definition

Our understanding of how to study the world; how we know what we know

Positive Relationship

Positive correlation is a relationship between two variables in which both variables move in tandem. A positive correlation exists when one variable decreases as the other variable decreases, or one variable increases while the other increases.

Probability Sampling

Probability sampling is a technique in which every unit in the population has a chance (non-zero probability) of being selected in the sample, and this chance can be accurately determined. Sample statistics thus produced, such as sample mean or standard deviation, are unbiased estimates of population parameters, as long as the sampled units are weighted according to their probability of selection.

Ratio Scales

Ratio scales are those that have all the qualities of nominal, ordinal, and interval scales, and in addition, also have a "true zero" point (where the value zero implies lack or non- availability of the underlying construct). Most measurement in the natural sciences and engineering, such as mass, incline of a plane, and electric charge, employ ratio scales, as are some social science variables such as age, tenure in an organization, and firm size (measured as employee count or gross revenues). For example, a firm of size zero means that it has no employees or revenues. The Kelvin temperature scale is also a ratio scale, in contrast to the Fahrenheit or Celsius scales, because the zero point on this scale (equaling -273.15 degree Celsius) is not an arbitrary value but represents a state where the particles of matter at this temperature have zero kinetic energy.

Rhetorical Ontology

Reality is socially constructed; communication and language can construct a subjective reality for the audience

Reliability

Reliability is the degree to which the measure of a construct is consistent or dependable. In other words, if we use this scale to measure the same construct multiple times, do we get pretty much the same result every time, assuming the underlying phenomenon is not changing? An example of an unreliable measurement is people guessing your weight.

4 components of Empirical Inquiry

Replicable; Precise (theoretical constructs must be concisely defined); Empirically testable and falsifiable (it must be possible to disprove the claim; faith is not falsifiable); Parsimonious (simple, without too many conditions)

Conceptual Model's relationship to study design

Research designs vary based on whether the researcher starts at observation and attempts to rationalize the observations (inductive research), or whether the researcher starts at an ex ante rationalization or a theory and attempts to validate the theory (deductive research).

Questionnaire Surveys

Research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardized manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices.

Survey

Research method involving the use of standardized questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviors in a systematic manner.

How to write a hypothesis

Should include: Directionality (whether it is positive or negative), causality (whether x causes y, or y causes x), should clearly identify independent and dependent variables, and should be able to be evaluated as either true or false.

Covariance designs

Sometimes, measures of dependent variables may be influenced by extraneous variables called covariates. Covariates are those variables that are not of central interest to an experimental study, but should nevertheless be controlled in an experimental design in order to eliminate their potential effect on the dependent variable and therefore allow for a more accurate detection of the effects of the independent variables of interest. The experimental designs discussed earlier did not control for such covariates. A covariance design (also called a concomitant variable design) is a special type of pretest-posttest control group design where the pretest measure is essentially a measurement of the covariates of interest rather than that of the dependent variables. (Bhatta pg. 86)

Matched Pair Sampling

Sometimes, researchers may want to compare two subgroups within one population based on a specific criterion. For instance, why are some firms consistently more profitable than other firms? To conduct such a study, you would have to categorize a sampling frame of firms into "high-profitability" firms and "low-profitability" firms based on gross margins, earnings per share, or some other measure of profitability. You would then select a simple random sample of firms in one subgroup, and match each firm in this group with a firm in the second subgroup, based on its size, industry segment, and/or other matching criteria. (Bhatta pg. 68)

External Validity

Studies that can be readily generalized beyond the laboratory setting are considered to be externally valid. Studies that take place outside of the lab in more natural settings have to allow for a lower degree of control but more accurately represent the real world. Thus, they may be more externally valid.

