Kinesiology Statistics Exam II


Effect Size Standards

-0.2 < ES < 0.5: considered a small difference
-0.5 < ES < 0.8: considered a medium/moderate difference
-ES > 0.8: considered a large difference
***Must be judged within the context of the research question
-May also want to examine percent difference to assess efficacy of the intervention in addition to, or rather than, effect size
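The thresholds above can be folded into a small helper for labeling an effect size — a minimal Python sketch (the function name is my own, not from the notes):

```python
def classify_effect_size(es: float) -> str:
    """Map an absolute effect size to the qualitative labels in the notes."""
    es = abs(es)
    if es < 0.2:
        return "trivial"
    elif es < 0.5:
        return "small"
    elif es < 0.8:
        return "medium/moderate"
    else:
        return "large"

print(classify_effect_size(0.35))   # small
print(classify_effect_size(-0.95))  # large (sign only indicates direction)
```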

Factorial ANOVA (Two-Way ANOVA)

-1-way ANOVA and repeated measures ANOVA examined the effect of one independent variable (a single variable can have multiple levels, ex. sex is a variable with two levels, male or female) on one dependent variable
-Factorial ANOVA is used to examine the effect of multiple independent variables on a single dependent variable

ANOVA

-Analysis of variance: used to compare means from three or more groups; unlike repeated t-tests, allows us to maintain the pre-determined alpha level (with repeated t-tests, the chance of a type I error inflates with each pairwise comparison)
-ANOVA determines if there are significant differences between (among) groups by examining the ratio of variance between groups to variance within groups
-F = (mean variance of group means about the grand mean)/(mean variance within groups) = MSb/MSw
**See sheet for walkthrough with example**
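The MSb/MSw ratio can be computed by hand; a minimal pure-Python sketch (function name assumed, stdlib only):

```python
from statistics import mean

def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    k = len(groups)          # number of groups
    N = len(all_scores)      # total subjects
    # Between-groups SS: each group mean's deviation from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups SS: each score's deviation from its own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # MSb
    ms_within = ss_within / (N - k)     # MSw
    return ms_between / ms_within, k - 1, N - k

F, dfb, dfw = one_way_anova_F([[3, 4, 5], [6, 7, 8], [9, 10, 11]])
print(F, dfb, dfw)  # F = 27.0 with df = (2, 6) for this toy data
```

Compare the resulting F to the critical value at the chosen alpha with (k - 1, N - k) degrees of freedom.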

Repeated Measures ANOVA

-Compare the same subjects at several different time points (similar to a dependent t-test, just with more than two time points)
-Subjects serve as their own controls, so variability between means is due to:
1. Treatment (columns)
2. Intraindividual variability (within subjects - rows)
3. Error (unexplained variability)
*Interindividual variability is eliminated (because the same subjects are compared to themselves over time; this reduces the denominator in the F-ratio)
-F-ratio = MSc/MSe
1. Find variability between columns (treatment effects)
2. Find variability between rows (differences among subjects)
3. Find variability due to error (SSe = SStotal - SSc - SSr)
4. Find df for columns (trials - 1)
5. Find df for error (trials - 1)(n per group - 1)
6. Calculate MSc and MSe
**Don't actually have to do on exam, just know the procedure just in case**
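The six steps above can be sketched directly in Python (pure stdlib; the function name and toy data are my own):

```python
from statistics import mean

def rm_anova_F(data):
    """data[i][j] = subject i's score on trial j (rows = subjects, columns = trials)."""
    n = len(data)        # subjects (rows)
    t = len(data[0])     # trials (columns)
    grand = mean(x for row in data for x in row)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    # Step 1: variability between columns (treatment effect)
    col_means = [mean(row[j] for row in data) for j in range(t)]
    ss_c = n * sum((m - grand) ** 2 for m in col_means)
    # Step 2: variability between rows (interindividual differences, removed from error)
    ss_r = t * sum((mean(row) - grand) ** 2 for row in data)
    # Step 3: error variability
    ss_e = ss_total - ss_c - ss_r
    # Steps 4-6: degrees of freedom and mean squares
    df_c = t - 1
    df_e = (t - 1) * (n - 1)
    return (ss_c / df_c) / (ss_e / df_e)   # F = MSc / MSe

F = rm_anova_F([[2, 4], [3, 5], [4, 7]])  # 3 subjects measured at 2 time points
print(F)  # ≈ 49.0
```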

2-Sample Case (Test of Independence)

-Data consist of the frequencies with which subjects belong to categories of two different variables
-Ex. cholesterol levels and pos/neg CAD

Regression

-Determines a single line of best fit to describe the relationship between two variables
-Allows us to predict the value of the y variable (criterion) based upon the value of the x variable (predictor) (still no causal relationship)
-The least-squares method determines the line of best fit because it creates the line where the sum of squared distances from the data points is minimized
-Need to know the intercept and slope to make accurate predictions
-Goodness of fit reflects how well the line of best fit fits the data -> the closer the data points to the line, the higher the goodness of fit and the higher the confidence
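The least-squares slope and intercept can be computed from deviation scores — a minimal sketch (function name assumed, stdlib only):

```python
from statistics import mean

def least_squares(xs, ys):
    """Return slope b and intercept a minimizing the sum of squared residuals."""
    x_bar, y_bar = mean(xs), mean(ys)
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar   # the best-fit line passes through (x̄, ȳ)
    return b, a

b, a = least_squares([1, 2, 3, 4], [2, 4, 6, 8])
print(b, a)  # 2.0 0.0 for this perfectly linear toy data
```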

Factorial ANOVA (Two-Way ANOVA) Example

-Do both wealth and political party affiliation impact opinions on the president's handling of the economy?
-Two main effects: wealth and party, with three categories for wealth (low, middle, high) and two categories for party (democrat, republican), so this is a 3x2 design
-Interactive effect (interaction) between wealth and party
-Two-way ANOVA will provide an outcome (F-ratio) for the two main effects (wealth, party) and the interactive effect, which is the combined effect of the two main independent factors on the dependent variable; this interaction reveals whether different groups of subjects have different responses to the intervention or 2nd main effect
-This is an example of a between-between factorial ANOVA, but can also have between-within (ie different groups over time, men vs. women at three different time points)

Type II Error

-Fail to reject a false null hypothesis; conclude there is no difference when there really is a difference
-Power (1 - beta) protects against type II error: the ability to detect a real difference

One-Tail vs. Two-Tail

-A one-tailed test determines significance in only one direction; a two-tailed test, in both directions
-One-tailed is easier to clear and establish significance, but must justify a priori why a one-tailed test was chosen (why was only one direction possible for the results)
-One-tail -> two-tail: double the p-value
-Two-tail -> one-tail: halve the p-value
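The doubling/halving rule amounts to simple arithmetic when the effect is in the predicted direction (values below are illustrative, not from the notes):

```python
# A one-tailed p-value converts to its two-tailed equivalent by doubling,
# and a two-tailed p-value back to one-tailed by halving.
p_one_tailed = 0.03
p_two_tailed = 2 * p_one_tailed
print(p_two_tailed)       # 0.06 — significant one-tailed, not two-tailed at alpha = 0.05
print(p_two_tailed / 2)   # 0.03 — back to the one-tailed value
```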

Effect Size (ES)

-If the sample size is large and/or variability is low, it will be easy to achieve p < 0.05, resulting in statistical significance, but this does not guarantee physiological significance or make the difference meaningful in real life
-ES (most commonly Cohen's d) is best used after establishing significance to determine the meaningfulness of the difference between groups
-Cohen's d = (X1 - X2) / s
-Independent test: s = average standard deviation of both groups
-Dependent test: s = standard deviation of just the control group
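Both variants of the denominator described above can be sketched in a few lines (function names and toy data are my own; stdlib only):

```python
from statistics import mean, stdev

def cohens_d_independent(group1, group2):
    """d = (X1 - X2) / s, with s = average of the two group SDs (per the notes)."""
    s = (stdev(group1) + stdev(group2)) / 2
    return (mean(group1) - mean(group2)) / s

def cohens_d_dependent(pre, post):
    """Pre/post design: s = SD of the control (pre) scores only (per the notes)."""
    return (mean(post) - mean(pre)) / stdev(pre)

print(cohens_d_independent([10, 12, 14], [7, 9, 11]))  # 1.5 — a large effect
```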

Standard Error of the Difference

-Independent test: SEd = sq. rt. (SEM1^2 + SEM2^2)
-Dependent test: SEd = sq. rt. (SEM1^2 + SEM2^2 - 2r(SEM1)(SEM2))
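Translating the two formulas directly into Python (function names and example numbers are illustrative):

```python
from math import sqrt

def sed_independent(sem1, sem2):
    """SEd for independent samples."""
    return sqrt(sem1 ** 2 + sem2 ** 2)

def sed_dependent(sem1, sem2, r):
    """SEd for dependent samples: the 2r(SEM1)(SEM2) term shrinks the error
    when pre and post scores are positively correlated."""
    return sqrt(sem1 ** 2 + sem2 ** 2 - 2 * r * sem1 * sem2)

print(sed_independent(3, 4))      # 5.0
print(sed_dependent(3, 4, 0.5))   # sqrt(13) ≈ 3.61 — smaller, so a larger t-ratio
```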

Degrees of Freedom

-Independent: df = total number of subjects (group 1 + group 2) - 2
-Dependent: df = number of subjects - 1

Correlation

-Indicates the degree to which two variables are related, but does not indicate a causal effect (or how much of the total variance of one variable can be associated with the variance of the second variable)
-The variables to be correlated must be paired observations from the same individuals
**Absence of evidence is not evidence of absence**

Factorial ANOVA (Two-Way ANOVA) Outcomes

-May have a significant main effect for wealth (1st main effect) collapsed across party. If so, then run a post hoc to find pairwise differences between the three income groups without distinguishing between parties
-May have a significant main effect for party (2nd main effect) collapsed across wealth. If so, then run a post hoc to establish pairwise differences without distinguishing between wealth levels (here there are only two groups, meaning there is a difference between those two groups and no post hoc is needed to determine "between which groups?")
-May find a significant interactive effect; then run a post hoc for wealth, and another for party, so we examine all possible pairwise differences (1-way ANOVA for six different groups)

Hypothesis Testing

-Must develop two hypotheses:
1. Null (Ho): X1 equals X2 (no difference)
2. Alternate (Ha): X1 does not equal X2

Error in Prediction

-Overall: e = sum(Y - Y')/n (for a least-squares line this is zero, since over- and under-predictions cancel)
-Average: SEe = sq. rt. [sum(Y - Y')^2/(n - 2)]
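The standard error of estimate can be computed directly from observed and predicted scores (function name and toy data are my own):

```python
from math import sqrt

def standard_error_of_estimate(ys, y_preds):
    """SEe = sqrt( sum(Y - Y')^2 / (n - 2) ) — average prediction error."""
    n = len(ys)
    ss_residual = sum((y, yp) == () or (y - yp) ** 2 for y, yp in zip(ys, y_preds))
    return sqrt(ss_residual / (n - 2))

# One observed score misses its prediction by 1; the rest are exact.
print(standard_error_of_estimate([2, 4, 6, 9], [2, 4, 6, 8]))  # sqrt(0.5) ≈ 0.707
```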

Non-Parametric Statistics

-Parametric statistics are based upon certain assumptions:
1. Normal distribution of scores (adequate N, which depends on field, ex. clinical less, epidemiological more)
2. Homogeneity of variance of the populations
3. Data scale is interval or ratio
-If these assumptions concerning the parameters of the population cannot be met, the chance of a type II error will increase
-If these assumptions are not met, must use non-parametric procedures
-Concepts of testing Ho and Ha, type I and II errors, alpha levels, critical values, confidence intervals, and p values still apply
*Less powerful

Correlation Coefficient

-Pearson Product Moment Correlation is used to measure the linear relationship between two variables; it provides the correlation coefficient (r)
1. Measures the strength of the relationship between two variables
2. Predicts the outcome of an unknown (second) variable based on the results of a known (first) variable
-r can vary from -1.0 to 1.0
-Indicates the strength and direction of the relationship between two variables
-In determining r, both the variance of each variable and the variance about the line of best fit are considered
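The product-moment coefficient can be computed from deviation scores — a minimal stdlib sketch (function name assumed):

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation of paired observations."""
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sqrt(sum((x - x_bar) ** 2 for x in xs) *
               sum((y - y_bar) ** 2 for y in ys))
    return num / den

print(pearson_r([1, 2, 3], [2, 4, 6]))   # 1.0 — perfectly linear
print(pearson_r([1, 2, 3, 4], [2, 4, 5, 9]))  # strong but imperfect positive r
```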

Type I Error

-Reject a true null hypothesis; conclude that there is a difference when there really is no difference
-Alpha protects against type I error
-When the level of confidence is increased, we decrease the chance of type I error but increase the chance of type II error

Correlation Coefficient Scale

-Scale is ordinal, meaning that r = 0.8 is not twice as great as r = 0.4; can only say that one is higher or stronger than the other
-0-0.29: little if any pos/neg correlation
-0.3-0.49: low pos/neg correlation
-0.5-0.69: moderate pos/neg correlation
-0.7-0.89: high pos/neg correlation
-0.9-1: very high pos/neg correlation
-For use in predicting individual scores:
-0.5-0.69: low
-0.7-0.89: moderate
->0.9: good

Coefficient of Determination

-The square of r (r^2) gives the proportion of the variance (fluctuation) of one variable that is predictable from the other variable (shared variance)
-If r between IQ and GPA is 0.7, the shared variance is 0.49, meaning that 49% of the variance of IQ and GPA is due to common factors - the other 51% is caused by other factors (non-shared variance)

1-Sample Case

-Use with a single variable measured on a sample drawn from a population (inferential statistics)
-Ex. this person would make a good president: a. agree, b. disagree, c. uncertain

Post-Hoc Test

-Used in conjunction with ANOVA to determine which pairs of groups differ
-Can ONLY be used if the F-ratio from the ANOVA test is significant
-Different types of post-hoc tests, some more conservative than others

2-Sample Case (Dependent Samples)

-Used in pre- and post- situations
-Ex. this person would make a good president, before and after watching the debate

T-test

-Used to compare means from two sets of data
-Independent (unpaired): compare means from two different groups
-Dependent (paired): compare the means of the same group before and after an experimental treatment

Chi-Square (X^2)

-Used with nominal data
-1-sample case, 2-sample case (test of independence), 2-sample case (dependent samples)

Parametric vs. Non-Parametric Tests

-When using ordinal data or when parametric assumptions do not apply, there are corollary statistical procedures that can be used (rank the data, eliminating interval or ratio values)
-Independent t-test vs. Mann-Whitney U-test
-Dependent t-test vs. Wilcoxon Matched-Pair Test
-1-Way ANOVA vs. Kruskal-Wallis H-test
-1-Way repeated measures ANOVA vs. Friedman's X^2 Test for Repeated Measures
-Pearson correlation vs. Spearman Rho Correlation
*More difficult to achieve significance

Linear Regression Equation

-Y' = bX + a
-Y' = predicted value of the criterion
-b = r(SDy/SDx)
-Determined by using the means of the variables, the SDs of the variables, and the r between the variables
-Once b is found, use the means for x and y to solve for the intercept: a = Ybar - b(Xbar)
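The slope and intercept can be recovered from summary statistics alone — a sketch with purely illustrative numbers (r, SDs, and means below are assumed, not from the notes):

```python
# Hypothetical summary statistics, e.g. IQ (x) predicting GPA (y)
r = 0.7                      # correlation between the variables
sd_x, sd_y = 10.0, 0.4       # standard deviations
mean_x, mean_y = 100.0, 3.0  # means

b = r * (sd_y / sd_x)        # slope: b = r(SDy/SDx)
a = mean_y - b * mean_x      # intercept: line passes through the means
print(f"Y' = {b}X + {a}")    # prediction equation

# Predict y for a subject with x = 110
print(b * 110 + a)
```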

Correlation Example

-If r between IQ score and GPA is 0.7, this suggests that there are other factors (health, motivation) besides innate intelligence that contribute to GPA
-Can separate the variance in GPA into two components:
1. Variance associated with differences in innate intelligence
2. Variance associated with other factors
*The greater the correlation between a group's IQ and GPA, the larger the portion of total variance in GPA that is related to innate intelligence

T-Test Calculation

-t = (X1 - X2) / SEd
-This provides the t-ratio
-With a t-test critical value table, use the appropriate alpha level/confidence level for a one-tail or two-tail test (an a priori decision - cannot change it to validate results) and the degrees of freedom to determine the critical value
-If the calculated t-ratio exceeds the critical value, the difference is significant ("95% confident that there is a statistically significant difference")
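Putting the t-ratio, SEd, and df formulas together for the independent case (function name and toy data are my own; stdlib only):

```python
from math import sqrt
from statistics import mean, stdev

def independent_t(group1, group2):
    """t = (X1 - X2) / SEd, with SEd built from each group's SEM."""
    sem1 = stdev(group1) / sqrt(len(group1))
    sem2 = stdev(group2) / sqrt(len(group2))
    sed = sqrt(sem1 ** 2 + sem2 ** 2)
    t = (mean(group1) - mean(group2)) / sed
    df = len(group1) + len(group2) - 2   # independent df
    return t, df

t, df = independent_t([10, 12, 14, 16], [5, 7, 9, 11])
print(t, df)  # compare |t| to the tabled critical value at the chosen alpha and df
```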

Correlation and Significance

1. Establish the alpha level
2. Find the degrees of freedom: df = (number of pairs of scores) - 2
3. Find the critical value in the table; if the critical value is exceeded, then the correlation is significant
**This indicates whether we're confident that the relationship is real, NOT HOW STRONG IT IS (ie could be 99.99% sure that there is a weak correlation)**
-Heterogeneity of subjects makes it easier to obtain a significant correlation

Types of Correlation

1. Pearson Product Moment Correlation: used to describe the linear relationship between two variables when both variables are measured on interval or ratio scales
2. Spearman Rank Order (Rho) Correlation: used to describe the linear relationship between two variables when both variables are measured on an ordinal scale
3. Point-Biserial Correlation: used to describe the linear relationship between scores from a ratio or interval scale and scores from a dichotomous variable
*Units of measurement unimportant
*Does not accurately assess non-linear relationships

Types of Post-Hoc Tests

1. Scheffe: most conservative, difficult to achieve significance (least powerful)
2. Tukey's: moderately stringent, commonly used
3. Fisher PLSD: least conservative, easiest to establish significance (most powerful, meaning most likely to find a difference)

