Adv Stats Ch. 7,8,9,10

Association

Variables are categorical (no such thing as higher or lower)

Predictor Variable

The X variable in correlation/regression. It is sometimes called the Independent Variable, but technically the IV is the variable manipulated or determined by the researcher in research designed to find differences; it is what makes the groups different. In research involving relationships, that variable is instead called the predictor variable.

Criterion Variable

The Y variable in correlation/regression. It is sometimes called the Dependent Variable, but technically DV refers to the measured behavior of subjects in research involving finding differences. In research involving finding relationships, it's called the criterion variable.

Standard Error

The standard error is the standard deviation of the sampling distribution (σ/√n). It represents the expected amount of variation due to sampling error.

Which to Use?

When the n's are equal, both approaches result in the same t-ratio, so it makes no difference which you use. When the n's are unequal, the pooled variance approach is generally preferred. However, when the standard deviations of the two groups differ substantially (by a factor of two or more; 2.3 versus 5.7, for example), use the unpooled (separate-variance) approach rather than pooling. With large n's (> 30) the two approaches give nearly identical answers anyway.

Central Limit Theorem

"Given a population with mean µ and variance σ², the sampling distribution of the mean (the distribution of sample means) will have a mean equal to µ, a variance equal to σ²/n, and a standard deviation equal to σ/√n. The distribution will approach the normal distribution as sample size (n) increases." The grand mean is the mean of the sampling distribution of means; the mean of means.

Adjusted r

The Pearson r is a biased estimate of the true population relationship. You can "adjust" it to remove the bias and thus get a better estimate of the true population relationship, but no one does.

Robust

A statistical test is said to be robust when you can violate the underlying assumptions and still maintain your alpha level (probability of Type I error). The t-test is robust. So why not just assume homogeneity of variances and forget about pooling the variances? The formula t = (M1 − M2) / √(s1²/n1 + s2²/n2) works just fine in dang near all situations. But you may find some folks insist on pooling variances, and some folks insist on a test of homogeneity; JASP and SPSS provide them. Also, why not analyze the data both ways and see if it makes a difference? JASP and SPSS make that easy.
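
In the spirit of "analyze the data both ways," here is a hedged sketch in Python using SciPy (the group scores are invented): equal_var=True gives the pooled-variance t, and equal_var=False gives the unpooled (Welch) t from the formula above.

```python
import numpy as np
from scipy import stats

# hypothetical scores for two independent groups
group1 = np.array([12, 15, 14, 10, 13, 16, 11, 14], dtype=float)
group2 = np.array([18, 22, 17, 25, 20, 16, 23, 19], dtype=float)

t_pooled, p_pooled = stats.ttest_ind(group1, group2, equal_var=True)   # pooled variances
t_welch,  p_welch  = stats.ttest_ind(group1, group2, equal_var=False)  # unpooled (Welch)

print(f"pooled:   t = {t_pooled:.3f}, p = {p_pooled:.4f}")
print(f"unpooled: t = {t_welch:.3f}, p = {p_welch:.4f}")
```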

Covariance

A value that reflects the degree to which two variables "vary together." This is much like a variance, but is based on multiplying the deviation scores of the X data set and the Y data set to arrive at a single number depicting the amount of variance in both data sets. It could be used as a measure of relationship except that it is highly influenced by the values of the standard deviations of the two data sets and thus the value has different meaning depending on the standard deviations. We really prefer our summary statistics to have a constant meaning no matter what the sample size, the variability, etc.
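
A small sketch (Python with NumPy, made-up numbers) shows why the covariance by itself is awkward as a measure of relationship: rescaling X changes the covariance but leaves Pearson's r untouched.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # hypothetical X scores
y = np.array([1.0, 3.0, 2.0, 5.0, 6.0])    # hypothetical Y scores

cov_xy = np.cov(x, y, ddof=1)[0, 1]
r_xy = np.corrcoef(x, y)[0, 1]
print("covariance:", round(cov_xy, 3), " r:", round(r_xy, 3))

# rescale X (as if measured in different units): covariance changes, r does not
x_scaled = x * 10
print("covariance (X * 10):", round(np.cov(x_scaled, y, ddof=1)[0, 1], 3))
print("r (X * 10):", round(np.corrcoef(x_scaled, y)[0, 1], 3))
```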

Intercept

Also called the Y-intercept, it is the b term in the regression equation Y = aX + b. It is the value of Y where the regression line crosses the Y-axis, that is, where X is zero (which of course is the only place the line crosses the Y-axis). The Y-intercept is rarely of any interest in its own right. But some theories make hypotheses about where the regression line should cross the Y-axis, and then you would need a statistical test to compare where the line actually intercepts Y with where your theory says it should (and we can do that too, but not in this class)!

Kendall's Tau (τ)

Also for ranked data, but now the rankings include the idea of "inversions." Imagine you have rankings from time one and time two to correlate. Suppose subject 1 went from 1st ranked down to 5th ranked in a set of N = 20 scores: subject 1 dropped from 1st of 20, where 19 were ranked lower, to 5th of 20, where 15 were ranked lower. So the "inversion" is 15.
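
If you have two sets of rankings to correlate, SciPy computes tau directly; a minimal sketch with invented time-1 and time-2 rankings:

```python
from scipy import stats

# hypothetical rankings of the same 8 subjects at time 1 and time 2
time1 = [1, 2, 3, 4, 5, 6, 7, 8]
time2 = [5, 1, 2, 3, 4, 6, 8, 7]

tau, p = stats.kendalltau(time1, time2)
print(f"Kendall's tau = {tau:.3f}, p = {p:.4f}")
```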

Confidence Limits

An interval of values (lower limit to upper limit) used as an estimate of a population value.

Credible Intervals

Bayesians call confidence intervals credible intervals. Frequentists say they have 95% confidence that the interval encompasses the true population mean (or that there is a 95% chance that intervals constructed this way encompass the true population mean), while Bayesians say the probability that the true population mean is encompassed by the interval values is 0.95.

R² or r²

Both mean the same thing. It is the proportion of the variability in the Y data accounted for by the X data. It's a measure of effect size, but it also tells us how much of the Y data is related to the X data in percentage terms (r² = .23 is 23%). 1 − r² then tells us how much of the Y data is NOT related to the X data. R² also tells us the Proportional Reduction in Error (PRE) and the Proportional Improvement in Prediction (PIP)! The idea of PRE is that if we had only Y data and no X data, our best prediction of Y could only be the mean of Y. R² tells us how much better it is to have the X data (23% better predictions, or a 23% reduction in error of prediction). PIP requires a simple calculation (see book), but it tells us the reduction in error of prediction from having no X data to having X data in terms of standard deviation units (like Cohen's d does for difference tests).
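
A short sketch (Python/SciPy, hypothetical data) shows the PRE idea numerically: the proportional reduction in squared prediction error when predicting from the regression line instead of from the mean of Y comes out equal to r².

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)   # hypothetical predictor
y = np.array([2, 1, 4, 3, 6, 5, 8, 7], dtype=float)   # hypothetical criterion

res = stats.linregress(x, y)
y_hat = res.intercept + res.slope * x

ss_total = np.sum((y - y.mean()) ** 2)   # error when predicting from the mean of Y alone
ss_resid = np.sum((y - y_hat) ** 2)      # error when predicting from the regression line

pre = (ss_total - ss_resid) / ss_total   # proportional reduction in error
print("r^2 from the correlation:", round(res.rvalue ** 2, 3))
print("PRE from the sums of squares:", round(pre, 3))   # same value
```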

Phi Coefficient (Φ)

Both variables dichotomous (male/female; arrested for DUI: yes or no). Pearson r by another name. Chi-square uses the same data, but a somewhat different question is being asked: is there a significant relationship between the two variables? Phi gives the degree/strength of the relationship.

Effect Size

Cohen's d uses the pooled standard deviation (sp). The formula above for the pooled variance estimate was shown as: s²p = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2). To find the pooled standard deviation you simply take the square root of this: sp = √s²p. Cohen's d is then: d = (M1 − M2)/sp. This gives the effect size in terms of the number of standard deviations separating the two means.
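
A minimal sketch of the calculation (Python/NumPy, invented group scores): pool the variances, take the square root, and divide the mean difference by the pooled standard deviation.

```python
import numpy as np

# hypothetical scores for two independent groups
group1 = np.array([12, 15, 14, 10, 13, 16, 11, 14], dtype=float)
group2 = np.array([18, 22, 17, 25, 20, 16, 23, 19], dtype=float)

n1, n2 = len(group1), len(group2)
s1_sq, s2_sq = group1.var(ddof=1), group2.var(ddof=1)

# pooled variance and pooled standard deviation
s2_p = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
s_p = np.sqrt(s2_p)

d = (group1.mean() - group2.mean()) / s_p
print(f"pooled SD = {s_p:.3f}, Cohen's d = {d:.3f}")
```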

Bivariate Normal Models

Correlation (possibly with regression as post hoc analysis if the correlation is significant).

Linear Relationship

Correlation and regression refer to the linear relationship between two (or more) variables. It is best to graph the data as the first step and simply look at the scatterplot to see if there is any indication that the relationship is not linear, i.e., curvilinear. If the data definitely form a curved pattern, the Pearson r is not appropriate and will not accurately reflect the relationship: it will be closer to zero than it should be and most likely nonsignificant, despite the fact that there is an obvious relationship (just not a linear one).

Power and Variance

Decreasing variance increases power.

Assumptions

Homogeneity of variance and a normal bell curve in the population are the two assumptions for correlation and regression. However, the assumptions now concern "arrays," because an array is simply one of two or more sets of data, not just a single set.

Confidence Limits

How confident can we be that the difference we see in the two groups' means is the "real" difference? Let's find out. The upper limit is: (M1 − M2) + (t0.025)(sM1−M2). The lower limit is: (M1 − M2) − (t0.025)(sM1−M2). Notice that when alpha is 0.05, we use the t from the table associated with 0.025 for two-tailed tests. The sM1−M2 is the standard error of the difference between means (whichever approach you used to calculate it earlier, pooled or unpooled). If the interval includes 0, then we won't have much confidence in our findings. Remember, a mean difference of 0 means no difference! And if our 95% confidence interval includes the value of 0, then we cannot rule out the possibility that the true difference between the means is zero: no reliable difference. This is most likely to happen when the mean difference is small relative to the variability, especially with small n's.
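
Here is the same recipe as a sketch in Python (SciPy supplies the t-value that the table would; the group data are hypothetical):

```python
import numpy as np
from scipy import stats

group1 = np.array([12, 15, 14, 10, 13, 16, 11, 14], dtype=float)  # hypothetical
group2 = np.array([18, 22, 17, 25, 20, 16, 23, 19], dtype=float)  # hypothetical

n1, n2 = len(group1), len(group2)
df = n1 + n2 - 2

# pooled variance and standard error of the difference between means
s2_p = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / df
se_diff = np.sqrt(s2_p * (1 / n1 + 1 / n2))

diff = group1.mean() - group2.mean()
t_crit = stats.t.ppf(0.975, df)          # the t associated with alpha/2 = 0.025

lower, upper = diff - t_crit * se_diff, diff + t_crit * se_diff
print(f"difference = {diff:.2f}, 95% CI = [{lower:.2f}, {upper:.2f}]")
```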

Power and the Alternative Hypothesis

If the alternative hypothesis specifies a large difference between the two means, power will also be large. In other words, the bigger the true difference, the easier it is to see the difference!

Slope

In Y = aX + b, the slope is given by a. It is the slope or angle of the regression line. If the correlation is negative, the slope must be negative; if the correlation is positive, the slope must be positive. The slope indicates the line's angle and means that for a one-unit change in X (from 2 to 3, for example), there is a corresponding change in Y equal to the slope value. So if the slope is −.5, then Y decreases by half a unit (X goes from 2 to 3 and Y goes from 4 to 3.5, for example).

Power and Number of Subjects

Increasing the number of subjects increases power. This is about the only thing under experimenter control and is probably the way most researchers increase power. However, changes in experimental design can also increase power.

Retrospective Power

It can be useful to calculate power based on the experiment you just completed (using your means and sd's as estimates or guesses) to help you design a future study along the same lines. In fact, some will suggest running a "pilot" study with just a few subjects and then using the means and sd's from the pilot study to determine the number of subjects needed to reach a desired power level. The book makes a good case that post hoc power calculations do not reveal much about the data and that calculating confidence intervals is more relevant. It turns out that power is about .5 when p is exactly 0.05, and if you found that p was greater than 0.05 (no significant difference), then retrospective power is less than 0.5! There is no situation where you can have retrospective power of 0.8 and a nonsignificant finding. So if you think that one reason for finding no difference might be low power, you can't determine that after the fact!

Kendall's Coefficient of Concordance (W)

Judges' rankings. One variable is the judges; if there are 4 judges, it has 4 "categories." The other variable is their rankings (an ordinal variable) of something with more than two levels. For example, ask 10 subjects (the judges) to put six things in order of importance; now use Kendall's W to measure how much the judges agree.

Calculating Power - Effect Size

Larger mean differences mean more power, so one measure is an effect size based on the assumed population means and population standard deviation: d = (μ1 − μ2)/σ. How do you know these values? Guess.

Regression

Once a statistically significant correlation is found, regression analysis is conducted. Regression involves summarizing all the variability in the two variables (scatter) as a straight line. This is called the regression line or the line of best fit (it is the one line that fits the data best of all possible lines). It is also called the regression equation and is the algebraic equation: Y=mX + b. It is used to make predictions. The idea is that once you have a significant relationship you can use that information to predict the value of the Y variable given that you know the value of the X variable.
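
A minimal prediction sketch (Python with SciPy's linregress; the X and Y values are made up): fit the line, then plug a new X into the equation.

```python
from scipy import stats

x = [2, 4, 5, 7, 8, 10, 11, 13]           # hypothetical predictor scores
y = [50, 57, 60, 66, 70, 78, 80, 88]      # hypothetical criterion scores

res = stats.linregress(x, y)
print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.3f}, r = {res.rvalue:.3f}")

# predict Y for a new X value using the regression equation Y = slope*X + intercept
new_x = 9
print("predicted Y for X = 9:", round(res.slope * new_x + res.intercept, 2))
```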

The t-test With Difference Scores

Once difference scores are calculated, you have one set of scores from which the mean (MD) and standard deviation (sD) are calculated. The standard deviation is used to estimate the standard error (sMD): sMD = sD/√n. Now the t-ratio (tcal) can be calculated: tcal = MD/sMD. Some texts show it as tcal = (MD − 0)/sMD in order to emphasize that the population mean is 0 (if the null is true, the difference from before to after should be 0). That's rather Bayesian; it's a specific statement of an a priori subjective probability; but I digress. Back to the Frequentist version. The t-distribution table is then used to find ttab to compare to tcal to determine if there is a significant difference from before to after. Degrees of freedom are N − 1, where N is the number of difference scores (not the total number of subjects). And our decision rule is the same as always: if the absolute value of the t you calculate is larger than or equal to the one in the table, there is a statistically significant difference.
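
A sketch of the same steps in Python (hypothetical before/after scores), computing t from the difference scores by hand and checking it against SciPy's paired t-test:

```python
import numpy as np
from scipy import stats

before = np.array([20, 18, 25, 22, 19, 24, 21, 23], dtype=float)  # hypothetical
after  = np.array([24, 20, 27, 25, 22, 26, 24, 27], dtype=float)  # hypothetical

d = after - before                      # difference scores (same subtraction for everyone)
n = len(d)
m_d = d.mean()
s_md = d.std(ddof=1) / np.sqrt(n)       # estimated standard error of the mean difference

t_by_hand = (m_d - 0) / s_md            # population mean difference is 0 under the null
t_scipy, p = stats.ttest_rel(after, before)

print(f"t by hand = {t_by_hand:.3f}, t from SciPy = {t_scipy:.3f}, df = {n - 1}, p = {p:.4f}")
```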

Prediction

One goal of Regression Analysis is to be able to predict Y given X. However, another goal is to understand how multiple variables interact and their contributions to outcomes (outcomes meaning the Y variable).

Relationships

One major approach in statistics is to compare variables to determine whether they are related or unrelated in terms of an association between them. A significant relationship means that given values for one variable we can predict the most likely value of another variable. IQ being related to head size would mean that if I know your head size I can accurately predict your IQ.

Point-Biserial (rpb)

One variable dichotomous (two categories), one variable continuous. Pearson r by another name (e.g., male = 1, female = 2). The sign of the slope depends on the values assigned to male and female. The mean of the category with the lower value (male = 1) becomes the Y-intercept, and the regression line passes through the two category means plotted along the X-axis.
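
A quick check of the "Pearson r by another name" claim (Python/SciPy, invented data): pearsonr on the 1/2-coded variable and pointbiserialr give the same value.

```python
from scipy import stats

sex = [1, 1, 1, 1, 2, 2, 2, 2]                               # dichotomous (male = 1, female = 2)
score = [12.0, 15.0, 11.0, 14.0, 18.0, 17.0, 20.0, 19.0]     # continuous variable

r_pearson, _ = stats.pearsonr(sex, score)
r_pb, _ = stats.pointbiserialr(sex, score)

print(round(r_pearson, 4), round(r_pb, 4))   # identical values: same statistic, different name
```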

Conclusions

Overall, my suggestion is to follow the examples in the literature. As you read the past research you will read all about the number of subjects used, how they were sampled, etc., and this alone is pretty much all you need to know. If you use about the same number of subjects used in past research you will have about the same level of power they had, whatever that might have been. However, you may encounter folks who want to see power calculations; most likely they want to see that you will have a sufficient number of subjects, or they want you to justify the number of subjects you plan to sample. In that case, estimate the means and standard deviations from the samples in past research and use G*Power.

Low Birth-weight Babies

Pay special attention to pages 186-188 where the author examines the raw data, the stem-and-leaf display, box-plot, and the Q-Q plot. You should be able to analyze them much like the author does with any given data set.

Covariance and r

Pearson's r can be calculated from the covariance: r = covXY/(sX sY). You can see that it involves forming a ratio with the product of the standard deviations and thus "accounts for" the variability in each data set.

Confidence Intervals with µ

Point Estimate: a single value used as an estimate of a population value (M is a point estimate of µ).

Guessing Population Means and Standard Deviations

Prior research reports the means and standard deviations of samples, and by considering those means and sd's you can make a better guess. Imagine one publication says the means are 15 and 20, another says 17 and 21, another says... Well, you see that the mean differences in several studies are about 5-7 units. You can also start with a difference of a particular size because that's what you think is important. A therapy that increases success by ¼ of a standard deviation might not be worthwhile, but a therapy that increases success by 2 standard deviations would be fantastic. By starting with the "answer" you work backwards to see what's needed: 2 = (μ1 − μ2)/σ. Past research says I can expect a standard deviation of 3. That indicates a mean difference of 6 is needed.

Sampling Distribution of Means

Randomly sample groups of the same size (n) over and over and calculate the sample mean of each of the samples and then put these sample means into a frequency distribution. This is the sampling distribution (of the mean, or of means). Sample sizes of 30 or more work best, but even smaller samples work too. The histogram/polygon will be normal (bell shape) even if the population distribution is not.

Ranking Data

Rarely we will want to convert continuous data to ranked data. Why? Best use is when there are a few extreme scores at one or both ends. Ranking pulls everything in next to each other.

Power and Alpha Level

Increasing the alpha level, say from 0.05 to 0.10, increases the chance of a Type I error from 5% to 10%, but it also increases power.

Error and Accuracy of Prediction

Remember that variance is error and the standard deviation is the square root of the variance. The variance of the Y data set is therefore a measure of error, and if we create a set of predicted Y scores across the various values of X we could take the standard deviation or variance of those predicted Y scores and use that as a measure of error; but we don't usually. Instead we use the standard error of estimate, which is the sum of the squared deviations between the actual Y scores and the predicted Y scores, divided by the degrees of freedom, with the square root then taken: sY.X = √[Σ(Y − Ŷ)²/(N − 2)]. If it is squared it becomes the residual variance, also called error variance.
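
A sketch of the calculation (Python/SciPy, made-up data): fit the line, take the residuals, and apply the formula above.

```python
import numpy as np
from scipy import stats

x = np.array([2, 4, 5, 7, 8, 10, 11, 13], dtype=float)       # hypothetical X
y = np.array([50, 57, 60, 66, 70, 78, 80, 88], dtype=float)  # hypothetical Y

res = stats.linregress(x, y)
y_hat = res.intercept + res.slope * x
residuals = y - y_hat

n = len(y)
s_yx = np.sqrt(np.sum(residuals ** 2) / (n - 2))   # standard error of estimate
print(f"standard error of estimate = {s_yx:.3f}")
print(f"residual (error) variance  = {s_yx ** 2:.3f}")
```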

Special "effects"

Restricted range means you have not measured the full range of possible scores, and thus you may find Pearson's r reduced to near zero. Heterogeneous subsamples are another problem. Imagine you have two distinct groups but treat them as one group and calculate Pearson r. It may come out to be 0.00, but if you then separate the subgroups and calculate Pearson r for each separately, you may find that both Pearson r's are significant!

Tetrachoric

Same as the Phi Coefficient, but now both dichotomous variables are artificial. Archaic; described here for general knowledge purposes, no longer used.

Biserial

Same as the Point-Biserial, but the one dichotomous variable is "artificial": we could have measured it as continuous but didn't, or we did and then categorized it after the fact (there's no good reason for this, though). If you ask age and have subjects check a box (under 30, 30 and over), you took a continuous variable and made it artificially dichotomous. Archaic; described here for general knowledge purposes, no longer used.

Bootstrapping

Sampling with replacement from a sample data set. Often called resampling, this computer-based procedure samples from the data set several thousand times, calculates sample means and sample standard deviations, and compares them all to the sample mean and standard deviation of all the sample data. How many, or what proportion, of those sample means would result in a statistically significant decision? With an alpha of 0.05, we would expect 95% of them to be statistically significant, just like the sample mean based on all the data. If bootstrapping shows this not to be the case, then a Type I error is likely.
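
As a general illustration of resampling (not the specific procedure in any particular package), here is a sketch in Python/NumPy with invented data that builds the bootstrap distribution of the mean and reads off a 95% percentile interval:

```python
import numpy as np

rng = np.random.default_rng(7)
sample = np.array([12, 15, 14, 10, 13, 16, 11, 14, 18, 9], dtype=float)  # hypothetical data

n_boot = 10_000
boot_means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                       for _ in range(n_boot)])

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {sample.mean():.2f}")
print(f"95% bootstrap (percentile) interval for the mean: [{lower:.2f}, {upper:.2f}]")
```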

Linear Regression Models

Simple Linear Regression involves two variables, a predictor and a criterion. Typically a correlation is found to be significant and then regression analysis is conducted. Multiple Linear Regression involves many variables, and both correlation and regression are done simultaneously in an effort to find the best few predictors that give the most accurate prediction of the criterion variable. Correlation involves two random variables (X and Y), while Regression involves one or more fixed variables (predictors) and one random variable (the criterion).

Measures of Effect Size (Cohen's d and R²)

Cohen's d: small effect = 0.10-0.49; moderate effect = 0.50-0.79; large effect = 0.80 and above. R²: small effect = 0.10-0.25; moderate effect = 0.26-0.40; large effect = 0.41-1.00.

Degrees of Freedom

The degrees of freedom are the degrees of freedom of the two groups summed: df = (n1 − 1) + (n2 − 1).

Difference Scores

The difference (for each subject) between the Before and After scores. It makes no difference whether it's B − A or A − B, as long as the same subtraction is done for all subjects.

Scatterplot

The graph showing the individual data collected in a correlation study. It may or may not include the regression line. Scattergraph, scatter diagram, and other terms refer to the same thing.

Sampling Distribution of Differences between Means

The independent samples t-test compares two sample means to determine if they statistically differ. The two means are subtracted (M1 − M2) and this difference is then compared to the sampling distribution of differences between means: if the study were conducted over and over, we could get thousands of mean differences and create that distribution.

Correlation

The linear relationship between two variables, most often stated in terms of Pearson's Product-Moment Correlation Coefficient (r). Pearson's r varies from -1 to +1 with the absolute value indicating strength of the relationship and the sign of the value indicating direction.

Residual

Another name for error. It is what's left over after subtracting the predicted data from the actual data. We often analyze the "residuals," and even graph them, to get a better understanding of the error of prediction.

Hypothesis Testing Using z

The population mean (µ) and the population standard deviation (σ) must be known. First calculate the standard error: σM = σ/√n. The standard error is equal to the population standard deviation divided by the square root of the sample size. Next calculate z: z = (M − µ)/σM. If the absolute value of the z you just calculated is greater than or equal to 1.96 (for alpha = 0.05, two-tailed), then the sample mean (M) and the population mean (µ) statistically differ. The difference between the two is a real difference and not due to chance and error.
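
The same steps as a sketch in Python (the population values and sample mean are hypothetical; SciPy is used only for the normal-curve p-value):

```python
import math
from scipy import stats

mu, sigma = 100.0, 15.0     # known population mean and standard deviation (hypothetical)
m, n = 106.0, 36            # sample mean and sample size (hypothetical)

se = sigma / math.sqrt(n)   # standard error of the mean
z = (m - mu) / se

p_two_tailed = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_two_tailed:.4f}")
print("significant at alpha = 0.05?", abs(z) >= 1.96)
```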

Power

The probability of correctly rejecting a false null hypothesis when a particular alternative hypothesis is true: power = 1 − β, where β is the probability of a Type II error. Basically, you can see that if β is large, then power is small, and vice versa. The problem is that we don't really know how to calculate β directly. Power is the ability to detect a treatment effect if one exists.

Errors of prediction

The regression line is the line of best fit to the data. There are millions of potential regression lines, but only one fits the data best. It is found by considering how each data point differs from the point predicted by the regression equation. So if my empirical data point is (2, 5), where X is 2 and Y is 5, and my predicted Y from the equation is 6, then I have 1 unit of error of prediction. The goal is to find the line that minimizes the sum of the squared errors across all the data. This is called the "least squares solution," and it is the one used by computer programs. For simple linear regression it does not actually require trial and error: the slope and Y-intercept that give the smallest sum of squared errors can be calculated directly (see the normal equations below). The result is the line with the "least sum of squared errors of prediction," AKA the line of best fit.

Two Standard Errors

The standard error shown above, √(σ1²/n1 + σ2²/n2), is fine when the n's of both groups are the same. When the n's are not the same, a "pooled variance estimate" is preferred. The idea is that we want a measure of variability based on all the data. When the n's are equal, a simple averaging of the two sample variances works fine, but when the n's are not equal we want to "weight" the variances according to the number of subjects: the group with more subjects gives a closer approximation to the population variance than the one with fewer subjects. The first step is to pool the variances, which means calculating the weighted average: s²p = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2). With this pooled variance term (s²p) we can now calculate the pooled standard error: √[s²p(1/n1 + 1/n2)]. The t-test is then: t = (M1 − M2)/√[s²p(1/n1 + 1/n2)].
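
A sketch of the pooled-variance steps with unequal n's (Python, invented scores), checked against SciPy's pooled t-test:

```python
import numpy as np
from scipy import stats

group1 = np.array([12, 15, 14, 10, 13, 16], dtype=float)              # n1 = 6 (hypothetical)
group2 = np.array([18, 22, 17, 25, 20, 16, 23, 19, 21], dtype=float)  # n2 = 9 (hypothetical)

n1, n2 = len(group1), len(group2)
s2_p = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se_pooled = np.sqrt(s2_p * (1 / n1 + 1 / n2))

t_by_hand = (group1.mean() - group2.mean()) / se_pooled
t_scipy, p = stats.ttest_ind(group1, group2, equal_var=True)          # pooled approach

print(f"t by hand = {t_by_hand:.3f}, t from SciPy = {t_scipy:.3f}, p = {p:.4f}")
```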

Variance Sum Law

The variance of the sum or difference of two independent variables is equal to the sum of their variances. Here "independent" means the two groups contain different people; the term does not refer to the independent variable of the research in the sense of a manipulated variable (I'd say two "levels" of an independent variable, because that's what the two groups are). In symbols: σ²(M1−M2) = σ²M1 + σ²M2 = σ1²/n1 + σ2²/n2. This is read: "The variance of the difference between two means is equal to the variance of mean one plus the variance of mean two, which is the variance of group one divided by the number of subjects in group one plus the variance of group two divided by the number of subjects in group two."

The t-test

The t-ratio is the difference between the two sample means divided by the standard error. In symbols: t = (M1 − M2)/√(s1²/n1 + s2²/n2).

Confidence Interval

The values "enclosed" by the confidence limits. Upper Limit: M + [ttab(sM)]. Lower Limit: M - [ttab(sM)]. The "ttab" is the value of the t-ratio from the t-distribution table. Recall that very little is exact in statistics and the sample mean as an estimate of the population mean is no exception. The confidence interval indicates the range of possible and acceptable sample means that we would expect to see if the study were replicated. Further, if we use an alpha level of 0.05, then the confidence interval is called the 95% confidence interval and if we use an alpha level of 0.01, then the confidence interval is called the 99% confidence interval.

Fixed Variable

The values of a fixed variable are determined by the experimenter. The Independent Variable, when quantitative, is a fixed variable. The Predictor Variable may be fixed or random (but see linear regression models).

Random Variable

The values of a random variable are not determined by the experimenter, but are free to vary. The Dependent Variable is a random variable. The Criterion Variable is a random variable.

How to Calculate Power

There is a table in the back of the textbook that shows power, given that you first calculate delta: δ = d√n, where d is determined from your guesses about the population means and standard deviation (see the entry on effect size above). See the book, section 8.4.

Normal Equations

These are the formulae for the slope, b = covXY/s²X, and the Y-intercept, a = MY − bMX, in the regression equation Y = bX + a.

Power for Single Samples

Think about your study and how big an effect you expect or want. Let's say you want a Cohen's d of 0.30. You might really hope for a larger effect, but what size would you be happy with? That gives d. Using the power appendix in the textbook, you find that a two-tailed test with alpha at 0.05 and power of 0.77 corresponds to a δ of 2.70. Now you have the basic numbers you need to determine sample size: n = (δ/d)² = (2.7/0.3)² = 9² = 81. So, 81 subjects would be needed.
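
The same arithmetic as a tiny sketch in Python (the δ of 2.70 is simply carried over from the textbook's power table, not computed here):

```python
import math

d = 0.30        # desired effect size (Cohen's d)
delta = 2.70    # delta from the textbook table for power ~ .77, two-tailed alpha = .05

n = (delta / d) ** 2
print("subjects needed:", math.ceil(n))   # (2.7 / 0.3)^2 = 9^2 = 81
```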

G*Power

This is a computer software program that calculates power. You can download it here: http://www.softpedia.com/get/Science-CAD/G-Power.shtml

p level

The alpha level is the probability of committing a Type I error (5% or 1%); the p-value is the probability computed from the data that gets compared against alpha. Before computers, statistics were done by hand and the t-distribution table (and other tables) was absolutely necessary to determine whether the critical region had been reached. With computers the tables are no longer needed because the software computes exact p-values. The statistician simply looks at the output p-value and decides whether it is less than 0.05 or not (or 0.01).

Homogeneity of Variance

This refers to the variances of both groups being similar. Simply eyeballing the two standard deviations is often sufficient to make that decision. If s1 = 2.45 and s2 = 3.02, they are similar. But if s1 = 2.45 and s2 = 6.87, you might then consider that the assumption of homogeneity of variances has been violated and the pooled t-ratio is not valid. There are tests for homogeneity of variances, but honestly, in the end it tends to make little difference one way or the other.

Matched Samples t-test

This t-test is used to test various hypotheses regarding twins, repeated measures on the same subjects, correlated samples, dependent samples, etc. The actual name used depends on how the data were collected. The most common form of this test is the repeated measures t-test, in which behavior is measured before some treatment and again after the treatment and the difference is calculated. In this case the population mean of the difference scores would be 0 if the null is true (no difference from before to after). Twin studies: one group contains one twin, the other group contains the other twin, and difference scores are calculated between the twins. Matched groups: subjects are measured on some variable that might influence the results, and then subjects with similar scores are placed into the two different groups. The idea is that the two groups are now "matched" (similar) on this third variable. In all three situations, difference scores are calculated and analyzed.

Independent Samples t-test

This test is used whenever comparing two means from two independent groups. The subjects are in one group or the other; that is, there are different people in the two groups. One group may have 10 subjects (n = 10) and the second group may have the same number or a different number of people (unequal n's). The total number of subjects is the sum of the n's (n1 + n2 = N). Typically, one group receives the treatment and one group does not. The group that does not get the treatment represents the normal population; that is, the mean of the no-treatment group is considered an unbiased estimate of the normal population mean.

Confidence Intervals for Matched Samples t (Repeated Measures t)

Upper limit: MD + [ttab(sMD)]. Lower limit: MD − [ttab(sMD)]. The "ttab" is the value of the t-ratio from the t-distribution table.

Chi-Square with One Variable Ordinal

Use Pearson r. You can also "weight" the orders if some seem more important (instead of 1, 2, 3, 4 you might want them to be 1, 2, 5, 6) to indicate that whatever categories 3 and 4 are, they are "more important" than categories 1 and 2.

Spearman's rho (rs)

Used for correlating ranked data. Just apply the Pearson r formula to the ranks as if they were ordinary scores.

Differences

Usually refers to the means of groups, which are compared to determine if they differ (one mean larger or smaller than another). A significant difference means that the difference in the values of the means is a "real" difference and not a difference due to sampling error. Finding that kids who go to private school differ in IQ from those who go to public school would mean that "something" is going on that causes the difference. Since such a comparison is observational, not experimental, we cannot say what the cause might be, but we can suggest possibilities. Maybe private schools "do" something that raises kids' IQ, or maybe private schools have acceptance standards that select for higher-IQ kids, or maybe the more affluent are more likely to go to private school and being more affluent is related to having a higher IQ.

Three Reasons for Correlation (other than relationship)

1. Validity (does the r on the X test represent the true population r?). 2. More advanced analyses that use several or many r's as input variables. 3. Effect size.

Correlation

Variables are continuous (higher values = more or less of something)

Confidence Intervals

We can do them but we don't.

Standard Error of the Difference Between Two Means

We now take the square root to find the standard error of the difference between two means: √(σ1²/n1 + σ2²/n2). This standard error is best used when the two n's are equal (that is, the same number of subjects in both groups).

Significance Testing

We test the significance of r. We can also test the significance of b (the slope) against the slope of a regression line from a different set of X-Y data, to see if the two regression lines differ in slope. We can also test for differences between two Pearson r's.

Single Sample t-test

When the population mean is known but the population standard deviation is not, the single-sample t-test is used. The standard error of the mean is now estimated from the sample standard deviation: sM = s/√n. Now calculate the t-value, also called the t-ratio or just plain t: t = (M − µ)/sM. This value is compared to the value found in the table of t-distribution values in order to determine the cut-off t-value for a given critical region and alpha level. This requires knowing the degrees of freedom, which are N − 1. The decision rule is that if the absolute value of the t you calculate (tcal) is greater than or equal to the t in the table (ttab), then the sample mean and the population mean differ statistically significantly.
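
A sketch of the single-sample t in Python (hypothetical sample and µ), by hand and with SciPy:

```python
import numpy as np
from scipy import stats

mu = 100.0                                                # known population mean (hypothetical)
sample = np.array([104, 98, 110, 105, 99, 107, 103, 108, 101, 106], dtype=float)

n = len(sample)
s_m = sample.std(ddof=1) / np.sqrt(n)                     # estimated standard error of the mean
t_by_hand = (sample.mean() - mu) / s_m

t_scipy, p = stats.ttest_1samp(sample, popmean=mu)
print(f"t by hand = {t_by_hand:.3f}, t from SciPy = {t_scipy:.3f}, df = {n - 1}, p = {p:.4f}")
```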

Power in Other Designs

Your textbook covers the basics on power in the independent samples t-test and matched samples t-test, but honestly, there's not much to be gained from those. You won't be calculating power by hand, but will use computer software in those few situations where someone wants you to calculate power (thesis?).

Effect Size

d-family: Cohen's d = MD/sD. The mean of the difference scores is divided by the standard deviation of the Before scores (here labeled sD). This gives the amount of "gain" (the mean difference) in terms of standard deviations; if d = 1.75 it means a gain of 1.75 standard deviations from Before to After. You can also use the average of the Before and After standard deviations if that makes more sense given the data. r-family: R² = t²/(t² + df), where t is tcal. R² represents the percent of variability in the DV accounted for by the IV: of all the potential causes of the DV, R² shows how much is due to the IV.

