Comprehensive Exam Preparation


What is wrong with this SAS program?

001 data new-data;
002 infile prob4data.txt;
003 input x1 x2
004 y1 = 3(x1) + 2(x2);
005 y2 = x1 / x2;
006 new_variable_from_x1_and_x2 = x1 + x2 - 37;
007 run;

001: a data set name cannot contain a hyphen or other special characters; use new_data.
002: the file name on the INFILE statement must be enclosed in quotation marks: infile 'prob4data.txt';
003: the INPUT statement is missing its semicolon.
004: multiplication must be written with the * operator: y1 = 3*x1 + 2*x2;
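For reference, a corrected version of the program (a minimal sketch; the file name prob4data.txt is taken from the question):

data new_data;
   infile 'prob4data.txt';
   input x1 x2;
   y1 = 3*x1 + 2*x2;
   y2 = x1 / x2;
   new_variable_from_x1_and_x2 = x1 + x2 - 37;
run;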

Three Major Input Styles in SAS

1) List input (free-formatted input) - the most basic style. Limits: all data must be present and missing values must be marked with "."; no embedded spaces; no character values longer than 8 characters.

2) Column input - advantages over list input: spaces are not required between values; missing values can be left blank; character data can have embedded spaces; unwanted variables can be skipped.

3) Formatted input - best when the raw data are not straightforward, e.g. numbers with embedded commas (1,000) or other non-standard values (anything other than numerals, a plus/minus sign, a decimal point, or E for scientific notation). Use an informat such as COMMA9.; dates are the most common non-standard input.

Mixing input styles - list style is the easiest, column style is a little more work, and formatted style is the hardest of the three. Since the different styles can read different properties of the data, they can be mixed within one INPUT statement in SAS. Column and formatted styles do not require spaces or other delimiters between variables and can read embedded blanks; formatted style can also read special data such as dates. For example, if you have raw data that include U.S. national parks, the year each park was established, and its acreage (with commas), then the data can be input as follows (see the sketch below):

INPUT ParkName $ 1-22 State $ Year @40 Acreage COMMA9.;

This reads ParkName with column-style input, State and Year with list-style input, and Acreage with formatted-style input.
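A minimal runnable sketch of the mixed-style INPUT above (the park record is hypothetical, and the datalines must be aligned so that Acreage begins in column 40 for the @40 pointer to find it):

data parks;
   input ParkName $ 1-22 State $ Year @40 Acreage comma9.;
   datalines;
Yellowstone            ID/MT/WY 1872   2,219,791
;
run;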

What are the assumptions for a one-way ANOVA model?

1) the residuals are normally distributed; 2) the variances of the populations (groups) are equal; 3) the responses for a given group are independent and identically distributed normal random variables; and 4) the measurements are made on an interval or ratio scale.

Binary Logistic Regression

A logistic regression model used to study a dichotomous outcome in a natural experiment. Use binary logistic regression to understand how changes in the independent variables are associated with changes in the probability of an event occurring. This type of model requires a binary dependent variable. A binary variable has only two possible values, such as pass and fail.
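A minimal sketch in SAS (the data set exam and the predictors hours and gpa are hypothetical; pass is assumed to be coded 1 = pass, 0 = fail):

proc logistic data=exam;
   model pass(event='1') = hours gpa;  /* model the probability that pass = 1 */
run;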

What is the difference between a conceptual framework and a testable model?

A framework indicates the perspective you are using to approach educational research. For example, your investigative framework might suggest whether a quantitative or a qualitative approach is best for addressing your research question. It describes the relevant variables for study of a particular topic and proposes a collection of hypotheses. A model, though, is developed within a framework. Your model is a descriptive tool that might, for example, help you impose some order on how variables are potentially interrelated so you can begin to formulate questions aligned with your chosen framework.

Statistically Significant

A mathematical indication that research results are not very likely to have occurred by chance.

Standard Deviation

A measure of variability that describes an average distance of every score from the mean.

What is a Prospective Study

A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies.

Explain why a randomized experiment can provide better information than a non-randomized test. Under what conditions will a randomized experiment produce results identical to those of a nonrandomized test?

A randomized controlled trial is a study in which people, entities, or places are randomly allocated to one or more interventions. One of the interventions may be a control condition that receives no special treatment and is then construed as the counterfactual. "Quasi-experiments" and "observational studies" aim to estimate the effects of interventions, but they do not include the randomization feature of a trial: the researchers do not have complete control over the conditions to which the experimental units are exposed, because neither they nor other agencies can randomly allocate the units to different interventions.

The biggest advantage of the randomized experiment, especially compared to a quasi-experiment, is that it can yield unbiased estimates of relative effects and is therefore a strong basis for drawing causal inferences. In other words, a properly run randomized experiment assures the internal validity of the findings (pp. 2-3).

In quasi-experiments and other non-randomized studies that aim to estimate effects, researchers try their best to include as many relevant control variables as possible. The big challenge is that there are often desirable variables that cannot be measured (for example, home resources) and others that are relevant but unknown and therefore unobserved. Consequently, the bias of the estimated relative effect of the interventions cannot be fully eliminated. In a randomized trial, since the interventions are randomly assigned, receipt of the intervention is independent of all other observed and unobserved variables. In this way, even if some important control variables are omitted, researchers can remain confident about the accuracy of the parameter estimate for the difference in outcomes of the interventions (i.e., the effect of a particular intervention or treatment relative to a control condition) (p. 7).

To make a fair between-group comparison in a non-randomized study so as to estimate an intervention's effects, one must usually assume that one has the right statistical model (functional form), with the right variables in the model (no important ones having been omitted), and that these variables are measured in the right way. An experiment does not rely on any such assumptions, provided the design and protocol are well implemented (p. 23). It follows that a nonrandomized test will produce results identical to a randomized experiment only when those assumptions actually hold, that is, when assignment to conditions is effectively independent of all variables that affect the outcome.

Define and Explain the merits and shortcomings of a randomized experiment, using a numerical example.

A randomized experiment randomly allocates experimental units across treatment groups.

Advantages:
- Reliability is high
- Validity is high
- It reduces bias with respect to factors that are not accounted for in the experimental design

Disadvantages:
- Can pose ethical problems
- In psychiatric trials, it can reduce the number of therapeutic options

A hypothetical numerical illustration: randomly assign 100 patients, 50 to the new treatment and 50 to control. Randomization makes the groups equivalent in expectation at baseline, so if the treatment group's mean outcome is 75 and the control group's is 70, the difference of 75 - 70 = 5 points is an unbiased estimate of the treatment effect.

What is a Retrospective Study

A retrospective study looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. Many valuable case-control studies, such as Lane-Claypon's 1926 investigation of risk factors for breast cancer, were retrospective investigations. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigations are often criticised. If the outcome of interest is uncommon, however, the size of prospective investigation required to estimate relative risk is often too large to be feasible. In retrospective studies the odds ratio provides an estimate of relative risk. You should take special care to avoid sources of bias and confounding in retrospective studies.

Z-score

A score that represents how many standard deviations above or below the mean a score is. Has a mean of 0 and a SD of 1
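In symbols, for a score x from a distribution with mean μ and standard deviation σ:

z = (x − μ) / σ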

For an ANOVA test, what does it mean to have a significant F?

A significant F means that the between-group (treatment) variance is large relative to the within-group (error) variance, i.e., that at least one of the group means differs from the others.

Research Hypothesis

A specific and falsifiable prediction about the relationship between or among two or more variables. A hypothesis must be: - Testable (this is not about whether the statement is true or false, but about whether the statement is something whose truth you can test, or not test); - Falsifiable (there must be a possible test or observation that could show the statement to be false).

Null Hypothesis

A statement or idea that can be falsified, or proved wrong. The hypothesis that states there is no difference between two or more sets of data.

What are the properties of a statistic?

A statistic is an observable random variable, which differentiates it both from a parameter that is a generally unobservable quantity describing a property of a statistical population, and from an unobservable random variable, such as the difference between an observed measurement and a population average.

How might covariance analysis help in a randomized experiment?

ANCOVA can be used as a means to eliminate unwanted variance on the dependent variable, which allows the researcher to increase test sensitivity. Adding reliable and relevant covariates to these models typically reduces the error term; by reducing the error term, the sensitivity of the F-test for main and interaction effects increases. In addition, ANCOVA can correct for initial group differences that exist on the dependent variable: the researcher adjusts the means on the dependent variable in an effort to correct for individual differences. The researcher can then interpret main effects and interactions as they typically would. The difference is that the regression of the covariate on the dependent variable is estimated first, before the variance in scores is partitioned into between- and within-group differences. The error term is adjusted using the regression line derived from the covariate on the DV, rather than running through the means as in ANOVA designs.
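A minimal sketch of an ANCOVA in SAS (the data set study and the variables posttest, pretest, and group are hypothetical):

proc glm data=study;
   class group;
   model posttest = pretest group / solution;  /* pretest is the covariate */
run;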

What are the advantages and limitations of stepwise regression?

Advantages of stepwise regression:
1. It can manage large numbers of potential predictor variables, fine-tuning the model to choose the best predictors from the available options.
2. It is fast.

Limitations:
1. Collinearity is usually a major issue: excessive collinearity can cause the procedure to dump predictor variables into (or out of) the model more or less arbitrarily.

What is a Probability Sample

Any method of sampling that utilizes some form of random selection. In order to have a random selection method, you must set up some process or procedure that ensures that the different units in your population have known, nonzero probabilities of being chosen (equal probabilities in the case of simple random sampling).

What is Construct Validity

Are the constructs appropriate for the sample; are they measured properly?

Threats:
- Inadequate explication of constructs
- Construct confounding
- Mono-operation bias
- Mono-method bias
- Confounding constructs with levels of constructs
- Treatment-sensitive factorial structure
- Reactive self-report changes
- Reactivity to the experimental situation
- Experimenter expectancies
- Novelty and disruption effects
- Compensatory equalization
- Compensatory rivalry
- Resentful demoralization
- Treatment diffusion

What is External Validity?

Are the results generalizable to other persons, settings, treatment variables, and measurement variables?

Threats:
- Interaction of the causal relationship with units
- Interaction of the causal relationship over treatment variations
- Interaction of the causal relationship with outcomes
- Interaction of the causal relationship with settings
- Context-dependent mediation

Explain the benefits of a randomized experiment.

The aim of a randomized controlled trial is to identify causal relationships through (a) a fair comparison of the different interventions in estimating their effects and (b) a legitimate statistical statement of one's confidence in the results of the comparison.

Item (a) means that, at the outset of the trial, there are no systematic differences between the groups being compared, on account of the random allocation. In statistical language, there will be no bias in estimating the mean differences in the outcomes from each arm of the trial, if the trial is carried out properly. Item (b) means that chance differences (normal variation in the behavior of people or organizations) are taken into account. This is accomplished through formal tests of statistical hypotheses or through the estimation of statistical confidence intervals.

At the most basic level, the use of randomization minimizes the possibility of systematic differences between groups at the outset of the study (p. 23). These include baseline differences (randomization gives reason to expect baseline equivalence) that may not be accounted for in other studies (i.e., selection bias, or how the participants end up in the study: pre-intervention test scores and other variables that would inflate the effect size).

In short, randomized experiments: yield the most accurate analysis of the intervention; support generalization; have high internal validity; establish cause-effect relationships; and minimize bias.

What is the biserial coefficient?

Biserial correlation is almost the same as point-biserial correlation, but one of the variables is dichotomized ordinal data with an underlying continuity. For example, depression level can be measured on a continuous scale but classified dichotomously as high/low. The formula is:

rb = [(Y1 − Y0) / σy] × (pq / y)

where: Y1 = mean score for data pairs with x = 1; Y0 = mean score for data pairs with x = 0; p = proportion of data pairs with x = 1; q = proportion of data pairs with x = 0; σy = population standard deviation; and y = the height of the standard normal density at the cut point z, where P(z' < z) = q and P(z' > z) = p.

What is Internal Validity?

Causal reasoning: does the correlation between treatment and outcome reflect a causal relationship?

Threats:
- Ambiguous temporal precedence: must be sure that A → B, and not B → A
- Selection: systematic pre-existing differences that may be the cause, rather than the treatment itself
- History: events occurring over time can affect results
- Maturation: natural changes (e.g., economic growth)
- Regression to the mean: extreme scores tend to regress toward the mean naturally; measures are not correlated perfectly with each other (random measurement error); e.g., those who scored highest don't always score highest again (their true score is closer to the mean)
- Attrition (mortality): loss of respondents before the final outcome is measured; may be due to the treatment itself
- Testing: participants may have learned from repeated administrations of the test, not from the treatment
- Instrumentation: e.g., machinery getting old
- Additive and interactive effects of threats to internal validity: interaction = multiplicative effect; the threat depends on the combination

What are the two statistical approaches to assess change in a pre/post test?

1. Change score
2. Residualized gain score

What is a change score?

A change score measures the difference from T1 to T2 within an individual's scores. It is not independent of the initial level (AKA baseline score), and it is susceptible to measurement error from day-to-day changes, regression to the mean, and spurious correlations with other variables.

Define and explain the relative merits and shortcomings of cross sectional surveys

Cross-sectional surveys involve multiple groups at a single point in time.

Advantages:
- Allow quicker conclusions to be drawn
- Can represent the whole population
- Affordable

Disadvantages:
- Cannot support conclusions about the direction of an association

Lay out the steps that one would take to assist in designing a survey

1. Define the research question
2. Ask stakeholders - test/revise the research question, set budget/timing
3. Design the survey
4. Plan - focus group/pilot group
5. Modify the survey accordingly
6. Select the sample
7. Implement the survey
8. Collect data
9. Analyze data
10. Present the results

What are the two types of statistics

Descriptive and Inferential

Experimental Methods in Behavioral Research

Experimental studies take place in controlled environments that account for other variables

T/F SAS has three data types: character, numeric, and integer.

FALSE. SAS has only two data types, character and numeric; there is no integer type. Dates are stored as numeric values and handled with special date informats and formats (The Little SAS Book, p. 42)

Define and Explain the merits and shortcomings of a probability sample survey using a numerical example.

For probability sample surveys, we first make a list of the target population (the sampling frame), then randomize, and then draw a sample.

Advantages:
- Sampling error is measurable
- Unbiased: the expected value of the sample mean equals the population mean

Disadvantages:
- Possible response and non-response biases
- Self-selection (and also selection) biases
- Participation and coverage issues

A hypothetical numerical illustration: from a frame of 10,000 students, draw a simple random sample of 400. Every student has a known selection probability of 400/10,000 = 4%, and the sampling error of the sample mean can be quantified as σ/√400 = σ/20.

What are the fundamental questions underlying evaluations conducted on education in the United States? How are the questions related to or dependent on one another?

From the Boruch syllabus: In this course, evaluation means addressing one or more of six broad questions and producing evidence in response to each: (1) What is the problem? And how do we know? (2) What policies, programs, etc. are in place? And how do we know? (3) What is their effect? And how do we know? (4) What works better? How do we know? (5) What are the relative benefits and costs of alternatives? How do we know? (6) Who poses the question and how might the evidence be used? How do we know?

What is ipsative measurement?

Ipsative measures are also referred to as forced-choice techniques. An ipsative measurement presents respondents with options of equal desirability; thus, the responses are less likely to be confounded by social desirability. Respondents are forced to choose one option that is "most true" of them and choose another one that is "least true" of them. A major underlying assumption is that when respondents are forced to choose among four equally desirable options, the one option that is most true of them will tend to be perceived as more positive. Similarly, when forced to choose one that is least true of them, those to whom one of the options is less applicable will tend to perceive it as less positive. For example, consider the following:

"I am the sort of person who..."
- prefers to keep active at work.
- establishes good compromises.
- appreciates literature.
- keeps my spirits up despite setbacks.

The scoring of an ipsative scale is not as intuitive as a normative scale. There are four options in each item. Each option belongs to a specific scale (i.e., independence, social confidence, introversion, or optimism). Each option chosen as most true earns two points for the scale to which it belongs; least true, zero points; and the two unchosen ones each receive one point. High scores reflect relative preferences/strengths within the person among different scales; therefore, scores reflect intrapersonal comparisons.

In an ipsative questionnaire, the sum of the total scores from each respondent across all scales adds to a constant. This creates a measurement dependency problem. For example, if there are 100 items in an ipsative questionnaire with four options for each item, the total score for each participant always adds up to 400. Because the sum adds to a constant, the degree of freedom for a set of m scales is (m - 1), where m is the number of scales in the questionnaire. As long as the scores on m - 1 scales are known, the score on the mth scale can be determined. The measurement dependency violates one of the basic assumptions of classical test theory—independence of error variance—which has implications for the statistical analysis of ipsative scores, as well as for their interpretation.

The problem with having the total ipsative scores add to a constant could be solved by avoiding the use of total scores. The measurement dependency problem is valid when the number of scales in the questionnaire is small. However, the problem becomes less severe as the number of scales increases.

Explain the limitations of a randomized experiment

Limitations:

A variety of conditions may prevent or limit the use of randomized trials in a given social setting. Coyle et al. (1991, p. 183) present five circumstances that justify the selection of a non-randomized approach: (1) decision makers' tolerance of ambiguity in estimating the effect of the new program; (2) the assumption that competing explanations of the program's effect are negligible; (3) a political or legal requirement that all individuals eligible for the intervention must be involved in the program; (4) a preference for a non-randomized trial in meeting standards of ethical propriety; and (5) the explicitness of theory-based or data-based predictions of effectiveness.

- Human rights issues / ethics
- RCTs are very specific to the context in which they are implemented, which limits the generalizability (external validity) of the findings (p. 23)
- Time-consuming
- Ethical limitations

Define and explain the relative merits and shortcomings of longitudinal surveys.

Longitudinal surveys involve many touch points over an extended period of time. There are three kinds: panel, cohort, and retrospective.

Advantages:
- Can help to discover sleeper effects
- Can show patterns in a variable over time

Disadvantages:
- Panel attrition
- Panel conditioning
- Do not account for what happens in between time points

Explain the merits, shortcomings, and usefulness of one major probability sample survey.

NAEP selects both private and public schools for assessment, using multistage sampling:
1. Select public schools within designated areas.
2. Select students in the relevant grades within schools.
3. Allocate those students to assessment subjects.
As part of the selection process, public schools are combined into strata on the basis of school characteristics.

Flaws: large-scale assessment can be time-consuming, and results are reported only at the group (e.g., state) level, not for individual students.

Difference between Naturalistic and Experimental Methods in Behavioral Research

Naturalistic studies are good because, by observing subjects in their own environment, you can catch behaviors that you would not have been able to see in a controlled experimental setting. However, naturalistic studies have issues with reliability: because the setting cannot be controlled or reproduced exactly, a later replication of a naturalistic study is unlikely to yield the same results.

Naturalistic Methods in Behavioral Research

Naturalistic studies often take place in the subject's own environment

Nominal Logistic Regression

Nominal logistic regression models the relationship between a set of independent variables and a nominal dependent variable. A nominal variable has at least three groups which do not have a natural order, such as scratch, dent, and tear.
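A minimal sketch in SAS (the data set defects and the predictors x1 and x2 are hypothetical; LINK=GLOGIT requests the generalized logit model for a nominal response):

proc logistic data=defects;
   model defect_type = x1 x2 / link=glogit;  /* nominal response: scratch, dent, tear */
run;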

Define and explain a non-probability sample survey

Non-probability sampling means that some of the population is excluded from your sample, and the probability of selection cannot be calculated, meaning there are limits on how much you can infer about the population from the sample. Non-probability sampling methods include convenience sampling, quota sampling, purposive (judgment) sampling, and snowball sampling.

What is Normative measurement?

Normative measures provide inter-individual differences assessment, whereas ipsative measures provide intra-individual differences assessment. Normative measurement usually presents one statement at a time and asks respondents to use a five-point Likert-type scale to indicate their level of agreement with that statement. Here is an example: "I keep my spirits up despite setbacks." Strongly disagree / Disagree / Neutral / Agree / Strongly agree. Such a rating scale allows quantification of individuals' feelings and perceptions on certain topics. Scoring of normative scales is fairly straightforward: positively phrased items get a 5 when marked as Strongly agree, and negatively phrased items are recoded accordingly and get a 5 when marked as Strongly disagree. Despite occasional debates on the ordinal versus interval nature of such normative scales, scores of similar items are usually combined into a scale score and used to calculate means and standard deviations, so norms can be established to facilitate interpersonal comparisons. The normative scores can be submitted to most statistical procedures without violating assumptions, provided the normative scores are accepted as interval-level measurements.

What is the default storage length for SAS numeric variables (in bytes)?

Numeric: 8 bytes by default. Assuming a single-byte character set, a maximum of 352 bytes is available for the name, label, and other descriptor data for each variable.

What is omitted variable bias?

Occurs when a variable that is correlated with both the dependent variable and one or more included independent variables is omitted from a regression equation. The direction of the bias on an included independent variable depends on the correlations between (a) the dependent and the excluded variable and (b) the included and excluded independent variables. If Y is the dependent variable, X1 the included independent variable, and X2 the excluded independent variable, then the coefficient on X1 (β1) is biased as follows when X2 is excluded from the regression:

                          Corr(X1, X2) negative    Corr(X1, X2) positive
Corr(Y, X2) negative      β1 is overestimated      β1 is underestimated
Corr(Y, X2) positive      β1 is underestimated     β1 is overestimated
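The table follows from the standard omitted-variable-bias formula: if the true model is Y = β0 + β1·X1 + β2·X2 + ε but X2 is omitted, the estimated coefficient on X1 converges to

β1 + β2·δ1

where δ1 is the slope from regressing X2 on X1. The sign of the bias is the sign of β2·δ1, which matches the four cells above.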

Why are cause-effect relationships difficult to demonstrate with reasonable certainty in behavioral research

One of the key problems with demonstrating cause and effect is showing that the effect happened after the cause (for example, in the relationship between depression and alcohol). Another problem in noticing the right kinds of causes and effects has to do with which questions are asked and who funded the research. A third issue is isolating the effects of confounding variables so that the main influence of each variable can be attributed correctly. A further problem is multiple group threats in social science settings.

Describe one standard approach to dealing with the difficulty of demonstrating with reasonable certainty the cause-effect relationship

One standard approach to dealing with the problem of establishing cause-effect relationships is to first establish and characterize the association between the dependent and independent variables. The second step is to determine the time order of the variables of interest, and the third is to rule out alternative explanations for what we are seeing in the data.

Ordinal Logistic Regression

Ordinal logistic regression models the relationship between a set of predictors and an ordinal response variable. An ordinal response has at least three groups which have a natural order, such as hot, medium, and cold.
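A minimal sketch in SAS (the data set ratings and the predictors x1 and x2 are hypothetical; PROC LOGISTIC fits a cumulative-logit model by default when the response has more than two ordered levels):

proc logistic data=ratings;
   model rating = x1 x2;  /* cumulative logit (proportional odds) by default */
run;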

Write the appropriate statements to compute the average price and the average number of shares of your stocks.

PROC MEANS MEAN MAXDEC=3;
   VAR Price Shares;
RUN;

(assuming the stock data set is the most recently created one and its variables are named Price and Shares)

What is the partial correlation coefficient?

Partial correlation measures the strength of a relationship between two variables, while controlling for the effect of one or more other variables. For example, you might want to see if there is a correlation between amount of food eaten and blood pressure, while controlling for weight or amount of exercise.
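A minimal sketch in SAS (the data set health and all variable names are hypothetical):

proc corr data=health;
   var food_eaten blood_pressure;
   partial weight exercise;  /* correlation controlling for weight and exercise */
run;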

Negative Binomial Regression

Poisson regression assumes that the variance equals the mean. When the variance is greater than the mean, your model has overdispersion. A negative binomial model, also known as NB2, can be more appropriate when overdispersion is present.
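A minimal sketch in SAS (the data set counts and the variables are hypothetical; DIST=NEGBIN requests the negative binomial distribution in PROC GENMOD):

proc genmod data=counts;
   model events = x1 x2 / dist=negbin link=log;
run;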

Define and explain a probability sample survey

Probability sampling means that everyone in the population has a chance of being sampled, and you can determine what that probability is. Probability sampling methods include: simple random sampling, systematic sampling, stratified sampling, probability-proportional-to-size sampling, and cluster or multistage sampling. They have these elements in common:
1. Everyone has a known (calculable) chance of being sampled.
2. Selection is random.

What are random errors?

Random error (also called unsystematic error, system noise or random variation) has no pattern. One minute your readings might be too small. The next they might be too large. You can't predict random error and these errors are usually unavoidable.

What is a residualized gain score?

A residualized gain score measures an individual's T1-to-T2 difference relative to the group's T1-to-T2 change: fit a regression line, then compare the individual's score residual to the regression line.

ANCOVA: in ANCOVA, the dependent variable is the post-test measure. The pre-test measure is not an outcome, but a covariate. This model assesses the differences in the post-test means after accounting for pre-test values. In the ANCOVA approach, the whole focus is on whether one group has a higher mean after the treatment; it is appropriate when the research question is not about gains, growth, or changes. The adjustment for the pre-test score in ANCOVA has two benefits. One is to make sure that any post-test differences truly result from the treatment and aren't some left-over effect of (usually random) pre-test differences between the groups.

What is forward regression?

Start with no variables in the model; test the addition of each candidate variable using a chosen model-fit criterion; add the variable whose inclusion gives the most statistically significant improvement in fit; and repeat this process until no variable improves the model to a statistically significant extent.

What is backwards regression?

Start with all candidate variables; test the deletion of each variable using a chosen model-fit criterion; delete the variable whose loss gives the most statistically insignificant deterioration in model fit; and repeat this process until no further variables can be deleted without a statistically significant loss of fit.

What are the four general types of validity (in terms of published research methods)?

1. Statistical conclusion validity
2. Internal validity
3. Construct validity
4. External validity

What is Statistical Conclusion Validity?

Statistical conclusion validity relates to the correlation between treatment and outcome: can we conclude that the IV and DV covary (do the presumed cause and effect covary, and how strongly), based on an assessment of the statistical evidence?

Threats:
- Low statistical power: power is the ability of a test to detect relationships in the population (the probability that the test rejects H0 when H0 is false); low power yields less precise effect sizes and may lead to the incorrect conclusion that there is no relationship between IV and DV.
- Violated assumptions of statistical tests: e.g., are the units completely independent? Violations can inflate Type I error (units may covary for other reasons).
- Fishing and the error-rate problem: run enough tests and some random results will look significant if the researcher wants them to, inflating significance; this can be corrected with the Bonferroni correction (a more conservative alpha).
- Unreliability of measures: unreliable measurement of either variable (addressed via latent variable modeling, inter-rater reliability, etc.); this artificially weakens the relationship between two variables, and can artificially weaken OR strengthen relationships among three or more variables.
- Restriction of range in study variables: cutting off the range of a variable weakens relationships.
- Unreliability of treatment implementation: inconsistency underestimates the effects of the IV.
- Extraneous variance in the experimental setting: environmental differences (temperature, time of day, noise, lighting, other distractions, etc.) can inflate error.
- Heterogeneity of units: characteristic differences can increase error variance.
- Inaccurate effect size estimation: outliers, dichotomizing continuous variables, or other systematic failures can lead to overestimated or underestimated effect sizes.

On what does the statistical power of a test depend?

Statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected.

Statistical power depends on:
- Sample size: other things being equal, a larger sample yields higher power.
- Variance: smaller variance yields higher power.
- Effect size: the magnitude of the effect relative to the other sources of variation in the study; larger effects are easier to detect.
- Alpha level: the likelihood that what you observed is due to chance rather than your program.
- Experimental design.

Related terms:
- Power: the likelihood that you will detect an effect from your program when it actually happens.
- Type I error: a false positive; rejecting the null hypothesis when you should not.
- P value: the probability of finding the observed results when the null hypothesis is true.
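A minimal sketch in SAS of how these quantities trade off (the numbers are hypothetical; here PROC POWER solves for the sample size per group given an effect size, a standard deviation, and a target power):

proc power;
   twosamplemeans test=diff
      meandiff  = 5     /* hypothesized difference in means (effect size) */
      stddev    = 10    /* assumed common standard deviation */
      power     = 0.8   /* desired power */
      npergroup = .;    /* solve for sample size per group */
run;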

A counseling psychologist invents a therapy that is believed to work better than conventional therapy. But there is some suspicion that the new approach is much less effective for adolescent females than it is for males. Design an experiment and the appropriate model and analysis to explore the matter.

Use stratified selection for female and male adolescents, then randomly assign participants within each stratum to control and treatment groups. Run the intervention for a period of time. Then run a two-way ANOVA (therapy × sex) and analyze the main effects and the interaction; the suspected sex difference would show up as a therapy-by-sex interaction.
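A minimal sketch of the analysis in SAS (the data set trial and the variable names are hypothetical):

proc glm data=trial;
   class therapy sex;
   model outcome = therapy sex therapy*sex;  /* main effects and interaction */
run;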

T/F You can use several lines for a single SAS statement.

TRUE.

T/F You can place more than one SAS statement on a single line.

TRUE. (The Little SAS Book, p. xiii) SAS doesn't really care where statements start or even if they are all on one line.

T/F In SAS, OPTIONS and TITLE statements are considered global statements.

TRUE. Global statements do not belong to either a PROC or DATA step. They usually appear at the top of a program, but they can appear anywhere. (The Little SAS Book, p. 26; p. 101)

What is the tetrachoric coefficient?

Tetrachoric correlation is used to measure rater agreement for binary data; binary data are data with two possible answers, usually right or wrong. The tetrachoric correlation estimates what the correlation would be if measured on a continuous scale. It is used for a variety of reasons, including analysis of scores in Item Response Theory (IRT) and converting comorbidity statistics to correlation coefficients. This type of correlation has the advantage that it's not affected by the number of rating levels or the marginal proportions for rating levels. The term "tetrachoric correlation" comes from the tetrachoric series, a numerical method used before the advent of computers. While it's more common to estimate correlations with methods like maximum likelihood estimation, there is a basic formula you can use.
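One commonly used approximation, for a 2×2 table with cell counts a, b, c, d (a and d on the agreement diagonal), is:

r_tet = cos( π / (1 + √(ad / bc)) )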

SAS Data Step

The DATA step consists of a group of SAS statements that begins with a DATA statement. The DATA statement begins the process of building a SAS data set and names the data set. The statements that make up the DATA step are compiled, and the syntax is checked. If the syntax is correct, then the statements are executed. In its simplest form, the DATA step is a loop with an automatic output and return action.
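A minimal sketch (hypothetical data) showing the implicit loop: each iteration reads one record, and the observation is automatically output at the end of the step:

data scores;
   input id score;
   datalines;
1 85
2 92
;
run;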

How is the F-ratio calculated for ANOVA?

The F-ratio compares the variation between groups (treatment) to the variation within groups that is attributable to error: F = MS_between / MS_within. For example, F(A) = 14 / 2.1 ≈ 6.67.

What is the central limit theorem?

The central limit theorem (CLT) is a statistical theory stating that, given a sufficiently large sample size drawn from a population with a finite variance, the mean of the sample means will be approximately equal to the mean of the population. Furthermore, the sampling distribution of the mean will be approximately normal, with variance approximately equal to the variance of the population divided by the sample size.
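In symbols, for i.i.d. samples of size n from a population with mean μ and variance σ², the sample mean X̄ satisfies:

X̄ ≈ N(μ, σ² / n) for large n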

What is the decision rule in ANOVA?

The decision rule is to reject the null hypothesis if the computed F exceeds the critical value for the given degrees of freedom and significance level (reject H0 if F > F_crit; otherwise, fail to reject).

What is the effect size of a test?

The effect size is the magnitude of the effect of interest relative to the other sources of variation that appear in your study.

Discuss the virtues and potential pitfalls of employing an ethnographic approach in the context of conventional sample surveys, or controlled randomized experiments, or indicator systems. Use a specific example, hypothetical or otherwise, to illustrate.

The ethnographic approach to qualitative research comes largely from the field of anthropology. The emphasis in ethnography is on studying an entire culture. Originally, the idea of a culture was tied to the notion of ethnicity and geographic location (e.g., the culture of the Trobriand Islands), but it has been broadened to include virtually any group or organization. That is, we can study the "culture" of a business or defined group (e.g., a Rotary club).

Ethnography is an extremely broad area with a great variety of practitioners and methods. However, the most common ethnographic approach is participant observation as a part of field research. The ethnographer becomes immersed in the culture as an active participant and records extensive field notes. As in grounded theory, there is no preset limiting of what will be observed and no real ending point in an ethnographic study. (https://socialresearchmethods.net/kb/qualapp.php)

Essentially, ethnography entails a qualitative approach, while the Census is a wide-scale quantitative method. Ethnography takes a large-tent view in which "for the greater good" thinking is likely enforced. The issue there is that a method that benefits one vicinity may not work in another vicinity. For instance, the method used to engage suburban homeowners would likely not work with urban youth.

What is stepwise regression?

The general idea behind the stepwise regression procedure is that we build our regression model from a set of candidate predictor variables by entering and removing predictors in a stepwise manner into our model until there is no justifiable reason to enter or remove any more. The variables to be added or removed are chosen based on the test statistics of the estimated coefficients; F-tests or t-tests are typically used.
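A minimal sketch in SAS (the data set mydata and the predictors x1-x10 are hypothetical; SELECTION= can also be FORWARD or BACKWARD for the procedures described in the cards above):

proc reg data=mydata;
   model y = x1-x10 / selection=stepwise slentry=0.15 slstay=0.15;
run;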

What is the idea of interaction vs no interaction in two-way ANOVA?

The idea of the interaction comes from the comparison of one group to another over the same conditions. In short, if the linear slopes of the two groups are the same (i.e., parallel lines), there is no interaction, as theoretically the lines will never cross. In all other cases, there is an interaction.

Mean

The numerical average of a set of data

What is the regression to the mean?

The phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement—and if it is extreme on its second measurement, it will tend to have been closer to the average on its first.

What is the phi coefficient?

The phi coefficient is a measure of the degree of association between two binary variables, and it is interpreted similarly to a correlation coefficient. Writing the 2×2 table's cell counts as a, b, c, and d (with a and d on the diagonal), two binary variables are considered positively associated if most of the data fall along the diagonal cells (i.e., a and d are larger than b and c). In contrast, two binary variables are considered negatively associated if most of the data fall off the diagonal.
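With those cell counts, the coefficient is:

φ = (ad − bc) / √( (a+b)(c+d)(a+c)(b+d) )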

What is the point-biserial coefficient

The point-biserial correlation coefficient, r_pbi, is a special case of Pearson's correlation coefficient. It measures the relationship between two variables: one continuous variable (which must be on a ratio or interval scale) and one naturally binary variable.*

Many different situations call for analyzing a link between a binary variable and a continuous variable. For example: Does Drug A or Drug B improve depression? Are women or men likely to earn more as nurses?

Cautions: *If you intentionally force data to become binary so that you can run a point-biserial correlation, perhaps by splitting a continuous ratio variable into two segments, it will make your results less reliable. There are exceptions to this rule of thumb; for example, you could separate test scores or GPAs into pass/fail, creating a logical binary variable. An example of unnaturally forcing a scale into a binary variable: saying that people under 5'9″ are "short" and over 5'9″ are "tall."

One assumption for this test is that the variables are randomly independent. Therefore, the point biserial shouldn't be used to analyze experimental results; use linear regression with dummy variables instead.
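The usual formula, with M1 and M0 the means of the continuous variable in the two groups, s its standard deviation, and p and q the proportions of cases in each group:

r_pb = ((M1 − M0) / s) × √(pq)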

What is the power of the statistical test?

The power of the statistical test is the likelihood that you will detect an effect from your program when it actually happens, also known as (1-beta)

What is the standard error of measurement?

The standard deviation of error around a person's true score. SEM, put in simple terms, is a measure of precision of the assessment—the smaller the SEM, the more precise the measurement capacity of the instrument. Consequently, smaller standard errors translate to more sensitive measurements of student progress. The observed score and its associated SEM can be used to construct a "confidence interval" to any desired degree of certainty. So, to this point we've learned that smaller SEMs are related to greater precision in the estimation of student achievement, and, conversely, that the larger the SEM, the less sensitive is our ability to detect changes in student achievement.
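In classical test theory, the SEM is computed from the test's standard deviation SD and its reliability coefficient r:

SEM = SD × √(1 − r)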

What is the standard error of estimate?

The standard error of the estimate is a measure of the accuracy of predictions.
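For simple regression it is computed from the residuals, with Y the observed values, Y' the predicted values, and N the number of pairs (some texts divide by N rather than the degrees of freedom N − 2):

s_est = √( Σ(Y − Y')² / (N − 2) )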

T-statistic

The test of the statistical significance of the difference between the means of the experimental and control groups. Defined as the statistic minus its hypothesized value, divided by the estimated standard error of the statistic.

What is the homoscedastic assumption?

The variance around the regression line is the same for all values of the predictor variable (X)

What is the standardized correlation coefficient?

This is the regression coefficient obtained when both X and Y are standardized; in simple regression it is simply the correlation between X and Y.

What is the significance level of a test?

This is the probability of concluding that the treatment has an effect when, in fact, it does not. Also known as alpha (α).

Explain the concept of total survey error and its composition

Total survey error is the difference between a population parameter (such as a mean, total, or proportion) and the estimate of that parameter based on the sample survey or census. It has two components: sampling error and nonsampling error.

Sampling error, which occurs in sample surveys but not censuses, results from the variability inherent in using a randomly selected fraction of the population for estimation. Nonsampling error, which occurs in surveys and censuses alike, is the sum of all other errors, including errors in frame construction, sample selection, data collection, data processing, and estimation methods.

Total survey error includes all sources of bias (systematic error) and variance (random error) that may affect the validity of survey data. There are four types of error: sampling, measurement, coverage, and non-response error. Errors are either random or systematic: random errors are assumed to cancel each other out, but systematic errors shift the sample estimate systematically away from the true value (for example, problematic question wording can make the result very wrong).

- Response bias: the tendency of a person to answer survey questions untruthfully or misleadingly.
- Nonresponse bias: a significant difference between those who responded to the survey and those who didn't. Nonresponse is often a problem with mail surveys, where the response rate can be very low. To avoid nonresponse bias, develop a relationship with respondents and send reminders.
- Measurement error: e.g., poor question wording.

What needs to hold for two events to be independent?

Two events are independent if P(A∩B) = P(A)·P(B). (In the problem this card refers to, P(A)·P(B) = 0.05, which does not equal P(A∩B), so those events are not independent.)

What is Type I Error?

Type I Error- In hypothesis testing, the instance when the alternative hypothesis is chosen but the null hypothesis is actually true. Normally indicated as the probability alpha

What is Type II Error?

Type II Error - In hypothesis testing, the instance when the null hypothesis is chosen but the alternative hypothesis is actually true. Normally indicated as the probability beta

Poisson Regression

Use Poisson regression to model how changes in the independent variables are associated with changes in the counts. Poisson models are similar to logistic models because they use Maximum Likelihood Estimation and transform the dependent variable using the natural log. Poisson models can be suitable for rate data, where the rate is a count of events divided by a measure of that unit's exposure (a consistent unit of observation).
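A minimal sketch in SAS (the data set counts and all variable names are hypothetical; log_exposure would be a precomputed log of each unit's exposure, used as the offset for rate data):

proc genmod data=counts;
   model events = x1 x2 / dist=poisson link=log offset=log_exposure;
run;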

What is a two-way disordinal interaction?

When an interaction is significant and "disordinal", main effects cannot be sensibly interpreted. Disordinal interactions involve crossing lines. Generally speaking, one should not interpret main effects in the presence of a significant disordinal interaction

Zero-inflated Regression Models

Your count data might have too many zeros to follow the Poisson distribution; in other words, there are more zeros than Poisson regression predicts. Zero-inflated models assume that two separate processes work together to produce the excessive zeros. One process determines whether there are zero events or more than zero events. The other is the Poisson process that determines how many events occur, some of which can be zero.
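A minimal sketch in SAS (the data set and variables are hypothetical; DIST=ZIP requests a zero-inflated Poisson in PROC GENMOD, and the ZEROMODEL statement models the excess-zero process):

proc genmod data=counts;
   model events = x1 / dist=zip;
   zeromodel x2 / link=logit;  /* predictors of the structural-zero process */
run;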

Frequency Histogram

a bar graph that represents the frequency distribution of a data set

SAS Input Buffer

a logical area in memory into which SAS reads each record of data from a raw data file when the program executes. (When SAS reads from a SAS data set, however, the data is written directly to the program data vector.)

SAS Program Data Vector

a logical area of memory where SAS builds a data set, one observation at a time. When a program executes, SAS reads data values from the input buffer or creates them by executing SAS language statements. SAS assigns the values to the appropriate variables in the program data vector. From here, SAS writes the values to a SAS data set as a single observation. The program data vector also contains two automatic variables, _N_ and _ERROR_. The _N_ variable counts the number of times the DATA step begins to iterate. The _ERROR_ variable signals the occurrence of an error caused by the data during execution. These automatic variables are not written to the output data set.

Linear Regression

defines a line of best fit for correlational data that can be used as a prediction equation. Use linear regression to understand the mean change in the dependent variable given a one-unit change in each independent variable. You can also use polynomials to model curvature and include interaction effects; despite the term "linear model," this type can model curvature. The analysis estimates parameters by minimizing the sum of the squared errors (SSE).
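A minimal sketch in SAS (the data set mydata and the variables are hypothetical; PROC GLM is used here because it accepts interaction terms like x1*x2 directly in the MODEL statement):

proc glm data=mydata;
   model y = x1 x2 x1*x2 / solution;  /* slopes plus an interaction term */
run;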

What is Total Survey Error

the difference between a population parameter (such as the mean, total or proportion) and the estimate of that parameter based on the sample survey or census. It has two components: sampling error and nonsampling error.

Experimental Group

the group in an experiment that receives the variable being tested

Control Group

the group that does not receive the experimental treatment.

Median

the middle score in a distribution; half the scores are above it and half are below it

Regression

the relationship between a dependent variable and a set of independent variables

