Stats Exam 2


F statistic - Linear Regression decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - Multifactor Repeated Measures decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - One-Way ANOVA decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - Three-Way ANOVA decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - Two-Way ANOVA decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - 1 way Repeated Measures decision rule

Decision Rule: Fcalc ≥ Fcrit

F statistic - ANCOVA decision rule

Decision Rule: Fcalc ≥ Fcrit

H statistic - Kruskal-Wallis ANOVA decision rule

Decision Rule: Hcalc ≥ Hcrit

T-statistic- Wilcoxon Signed Ranks Test decision rule

Decision Rule: Tcalc ≤ Tcrit

U-statistic- Mann Whitney U Test Decision Rule

Decision Rule: Ucalc ≤ Ucrit

r statistic - Pearson Product Moment Correlation decision rule

Decision Rule: rcalc ≥ rcrit

rho (rs) statistic- Spearman Rank Correlation decision rule

Decision Rule: rhocalc ≥ rhocrit

t statistic- Independent t-test/Unpaired t-test decision rule

Decision Rule: tcalc ≥ tcrit

t-statistic- Paired t-test decision rule

Decision Rule: tcalc ≥ tcrit

Chi-square- Friedman ANOVA decision rule

Decision Rule: Chi-square calc ≥ Chi-square crit

Sign Test

Good for paired data when only the direction (+/−) of a difference matters, not its precise magnitude

3 t-tests result in

Greater probability of making a Type I error (incorrectly rejecting the null)
▪ Increases the risk of concluding a difference when there isn't one

Within Factors

Groups/conditions are dependent on (related to) each other
o Performance on one DV DOES affect performance on a different DV

Between Factors

Groups/conditions are independent from each other
o Measurement of one DV does not impact measurement of another DV

Analysis of Residuals

Horizontal band = assumptions of linear regression have been met
▪ Wider distributions = more error; assumptions have not been met

Bonferroni Adjustment

If you MUST run multiple t-tests:
o Adjust the threshold for statistical significance (more rigorous)
o New α = α/C, where C = number of comparisons
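As a quick worked example, the adjustment above is a one-line computation (the number of comparisons here is made up for illustration):

```python
# Bonferroni adjustment: divide the overall alpha by the number of comparisons (C)
alpha = 0.05
C = 3  # e.g., three pairwise t-tests

adjusted_alpha = alpha / C  # each test is now judged against ~0.0167
```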

Line of Best Fit (linear reg. line)

Line that "best" describes the orientation of all the data points in a scatter plot (smallest sum of squared residuals): Ŷ = a + bX
▪ Ŷ (Y-hat) = predicted value of Y
▪ a = regression constant (y-intercept)
▪ b = regression coefficient (slope)
▪ X = value of the IV
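A least-squares line like this can be fit with `scipy.stats.linregress` (assuming SciPy is available; the data below are hypothetical):

```python
from scipy import stats

# Hypothetical data: X = values of the IV, Y = observed DV scores
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

result = stats.linregress(x, y)
a = result.intercept   # regression constant (y-intercept)
b = result.slope       # regression coefficient (slope)

# Predicted value (Y-hat) for a new X = 6
y_hat = a + b * 6
```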

Scheffe Comparison

Most flexible and most rigorous; based on the F distribution
§ Adopts a FW error rate that applies to all contrasts
§ Provides strong protection against Type I error

Student Newman Keuls (SNK)

Alpha specifies the Type I error rate for each pairwise contrast based on the studentized range (q)
§ q is used differently for each contrast depending on the number of adjacent means "r" within an ordered comparison level
§ Uses a larger critical difference as the comparison interval increases

Linear regression

Assessment of 2 variables to determine how well the IV predicts the DV
o Both X and Y are continuous variables (ratio/interval)

ANCOVA

Combination of linear regression and ANOVA
o Used to compare groups on a DV when there is reason to suspect the groups differ on some relevant characteristic (covariate) before treatment
o Variability that can be attributed to the covariate is partitioned out and effectively removed from the ANOVA, allowing a more valid interpretation of the relationship

Bonferroni Correction

Corrects for inflation of alpha to protect against a type I error

Familywise Error Rate

Cumulative error
▪ FW = 1 − (1 − 0.05)^C
▪ C = number of comparisons (tests)
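The formula FW = 1 − (1 − α)^C is easy to evaluate directly, which makes the inflation concrete (values below are illustrative):

```python
# Familywise error rate: FW = 1 - (1 - alpha)^C
alpha = 0.05

# Cumulative chance of at least one Type I error grows with the number of tests
fw_3 = 1 - (1 - alpha) ** 3    # three comparisons -> ~0.143
fw_10 = 1 - (1 - alpha) ** 10  # ten comparisons  -> ~0.401
```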

Correlation Coefficients

Quantitatively describe the strength and direction of a relationship
o 0.00-0.25 = little or no relationship
o 0.26-0.50 = fair relationship
o 0.51-0.75 = moderate to good
o 0.76-1.00 = good to excellent
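The ranges above can be captured in a small helper (a hypothetical function name, not part of any library; note the sign only carries direction, so strength is judged on |r|):

```python
def describe_correlation(r):
    """Map |r| to the descriptive strength categories above."""
    strength = abs(r)  # direction (+/-) does not affect strength
    if strength <= 0.25:
        return "little or no relationship"
    elif strength <= 0.50:
        return "fair relationship"
    elif strength <= 0.75:
        return "moderate to good"
    else:
        return "good to excellent"
```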

Type I error Rate

Where each individual comparison is tested at α = 0.05

Least -squares line

an estimate of the population regression line, and each point on the line is an estimate of the population mean at each value of X

Use ANOVA when

comparing 3 or more means

MCID (Minimal Clinically Important Difference)

The smallest difference that is considered important and not trivial

Paired t test

o Analyzes difference scores within each pair
o Subjects are compared with themselves
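A paired t-test maps directly onto `scipy.stats.ttest_rel` (assuming SciPy is available; the pre/post scores below are hypothetical):

```python
from scipy import stats

# Hypothetical pre/post scores for the same five subjects
pre  = [10, 12, 9, 14, 11]
post = [13, 15, 10, 17, 14]

# Analyzes the difference score within each pair (subject compared with itself)
t_calc, p = stats.ttest_rel(pre, post)
```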

Pearson's R

o Between −1 and 1; ±1 is a perfect correlation
§ The +/− sign tells whether the relationship is direct or indirect
o Decision rule
§ Compare the correlation against a critical value that depends on sample size

Prerequisites for Causality

o Biological plausibility: consistent with existing biological/medical knowledge
o Logical time sequence: treatment occurred before the change
o Dose-response relationship: larger causal factor = larger outcome
o Consistency of findings across several studies

Product Moment Correlation

o Both variables continuous and interval or ratio
o Both variables are normally distributed

Procedures for Multiple Comparisons Test

o Compare each pairwise difference against a Minimum Significant Difference (MSD)
o If pairwise difference > MSD, the means are significantly different
o Greater error (within-group) variance = larger MSD, so a statistical difference is less likely

Nonprobabilistic sampling and types

o Convenience: recruit individuals as they become available
o Quota: use strata, but recruitment stops when each stratum is filled
o Purposive: researcher handpicks participants
o Snowball: participants recruit other participants

Linear regression assumptions

o For any given value of X we can assume that a normal distribution of Y exists
o Least-squares line: an estimate of the population regression line; each point on the line is an estimate of the population mean at each value of X
o Several measurements for each value of X reduce standard error and increase accuracy

Decision rule for linear regression

o Fstat = MSregression/MSresidual
o Fcalc > Fcrit OR p < 0.05
o In general, large F statistics indicate that the model predicts the outcome significantly better than the mean

Repeated Measures ANOVA

o Interested in a comparison across treatment conditions within each subject
o Total variance partitioned into variance between and within subjects
§ Between: error component, individual differences among subjects
§ Within: differences between the treatment conditions, plus error

Multifactorial and mixed design Questions to answer prior to conducting the study

o Methods include statistical treatment options
o IV dictates statistical treatment options
o DV dictates statistical treatment options

Post Hoc for Factorial Designs

o Multifactor experiments: multiple comparison procedures used to compare means for main and interaction effects
▪ Simple effects: convert the factorial design into smaller "single factor" experiments
▪ Simple comparisons: when there are 3 or more levels of an IV, run a separate ANOVA on each level

Assumptions of ANCOVA

o Normality, homogeneity of variance, random independent samples (same as ANOVA)
o For each IV, the relationship between the DV (Y) and the covariate (X) is linear
o The lines expressing these linear relationships are parallel (homogeneity of regression slopes)
o The covariate is independent of the treatment effects (IV)

Coefficient of Determination

o r²: square of the correlation coefficient
▪ Indicates the percent of total variance in Y scores that can be explained by X
▪ 1 − r² = the proportion of variance not explained by the relationship between X and Y
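Squaring Pearson's r from `scipy.stats.pearsonr` gives the coefficient of determination directly (SciPy assumed available; data hypothetical):

```python
from scipy import stats

# Hypothetical paired scores for X and Y
x = [1, 2, 3, 4, 5, 6]
y = [2, 5, 4, 8, 9, 11]

r, p = stats.pearsonr(x, y)
r_squared = r ** 2           # proportion of variance in Y explained by X
unexplained = 1 - r_squared  # proportion NOT explained by the relationship
```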

Assumptions for Parametric tests

o Randomly drawn samples, normal distribution, homogeneity of variance
o Should only be used with interval or ratio data

Spearman Rho (non-parametric)

o Rho between −1 and 1
o One ordinal variable and one ratio/interval variable, OR 2 ordinal variables
o Examines the disparity between the two sets of rankings by looking at the difference (d) between the ranks of X and Y assigned to each subject
o The sum of squared differences (Σd²) is an indicator of the strength of the observed relationship: higher sums = greater disparity

Kruskal-Wallis ANOVA by Ranks

o Same ranking procedure as the Mann-Whitney U
o Separate back into groups with ranks assigned from the combined data; sum the ranks for each individual group
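The combined-ranking procedure is what `scipy.stats.kruskal` computes under the hood (SciPy assumed available; the three group samples are hypothetical):

```python
from scipy import stats

# Hypothetical scores for three independent groups
group_a = [12, 15, 14, 10]
group_b = [22, 25, 24, 21]
group_c = [30, 33, 31, 35]

# Ranks all scores together, then compares the rank sums across groups
h_calc, p = stats.kruskal(group_a, group_b, group_c)
```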

Tukey's HSD

o Sets a FW error rate; alpha identifies the probability that one or more of the pairwise comparisons will be falsely declared significant
§ Calculated using q

Probabilistic sampling and types

o Simple random sampling: everyone in the population has an equal chance of being selected
o Systematic: select individuals on an interval basis
o Stratified random sampling: partition the population into groups called strata
o Proportional/disproportional stratified
§ Proportional = pick per percent of the population
§ Disproportional = equal number of people, with a weight applied
o Cluster/multistage: link members to an already established group

One Way ANOVA

o Single-factor experiment (1 factor, 3 or more levels)
o Compute the sum of squares for the entire data set (SST)
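A one-way ANOVA with its Fcalc ≥ Fcrit decision rule can be sketched with `scipy.stats` (SciPy assumed available; the three levels below are hypothetical):

```python
from scipy import stats

# Hypothetical single-factor experiment: 1 factor, 3 levels, n = 4 each
level_1 = [4, 5, 6, 5]
level_2 = [8, 9, 7, 8]
level_3 = [12, 11, 13, 12]

f_calc, p = stats.f_oneway(level_1, level_2, level_3)

# Decision rule: Fcalc >= Fcrit at alpha = 0.05, df = (k-1, N-k) = (2, 9)
f_crit = stats.f.ppf(0.95, 2, 9)
significant = f_calc >= f_crit
```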

Friedman Two Way ANOVA by Ranks

o Single-factor, repeated measures design with 3+ levels
o Rank scores for each subject across conditions
§ Sum the ranks in each column
o H0: the sums of ranks will be equal across columns
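The within-subject ranking above is automated by `scipy.stats.friedmanchisquare` (SciPy assumed available; repeated-measures data hypothetical):

```python
from scipy import stats

# Hypothetical repeated measures: 6 subjects measured under 3 conditions
cond_1 = [10, 12, 11, 13, 10, 12]
cond_2 = [14, 15, 14, 16, 13, 15]
cond_3 = [18, 19, 17, 20, 18, 19]

# Ranks the 3 conditions within each subject, then compares column rank sums
chi2_calc, p = stats.friedmanchisquare(cond_1, cond_2, cond_3)
```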

Correlation assumptions

o Subject scores represent the underlying population, which is normally distributed
o Each subject contributes a score for both X and Y
o X and Y are independent measures
o X values are observed, not controlled
o The relationship between X and Y must be linear

Mann-Whitney U

o Tests the null hypothesis that the two samples come from the same population
o Cannot use the U table for samples over 25
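`scipy.stats.mannwhitneyu` computes Ucalc and an exact or approximate p-value, which covers the large-sample case the table cannot (SciPy assumed available; the two samples are hypothetical):

```python
from scipy import stats

# Hypothetical scores from two independent samples
sample_1 = [3, 4, 2, 5, 4]
sample_2 = [8, 9, 7, 10, 9]

# Tests the null hypothesis that both samples come from the same population
u_calc, p = stats.mannwhitneyu(sample_1, sample_2, alternative="two-sided")
```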

Unequal variances t procedure

o Two independent groups created through random assignment
o Variances
§ Equal: use t with the pooled variance to determine significance
§ Unequal: base t on separate variances; differences in variance can affect the t ratio
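The pooled vs separate-variance choice corresponds to the `equal_var` flag of `scipy.stats.ttest_ind` (SciPy assumed available; the groups below are hypothetical, with deliberately different spreads):

```python
from scipy import stats

# Hypothetical independent groups with clearly different spreads
group_1 = [10, 11, 10, 12, 11, 10]
group_2 = [14, 20, 9, 22, 8, 17]

# Equal variances assumed: pooled-variance t test
t_pooled, p_pooled = stats.ttest_ind(group_1, group_2)

# Equal variances NOT assumed: separate-variance (Welch) t test
t_welch, p_welch = stats.ttest_ind(group_1, group_2, equal_var=False)
```

With equal group sizes the two t statistics coincide, but the Welch test uses fewer degrees of freedom, so its p-value is larger when the variances differ.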

Independent Samples t test (unpaired)

o Two independent groups, usually created through random assignment
o Composed of different sets of subjects

Wilcoxon Signed Rank Test

o Use when the data provide information on the magnitude of differences
o Compute difference scores and rank them by absolute magnitude
§ Attach a sign to each difference; determine whether + or − is more frequent
§ T = the sum of the ranks with the less frequent sign
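`scipy.stats.wilcoxon` performs this signed-rank procedure and returns T as its statistic (SciPy assumed available; the before/after scores are hypothetical):

```python
from scipy import stats

# Hypothetical before/after ratings for eight subjects
before = [5, 6, 4, 7, 5, 6, 8, 5]
after  = [7, 8, 5, 9, 8, 7, 9, 8]

# Statistic = T, the sum of the ranks with the less frequent sign
t_calc, p = stats.wilcoxon(before, after)
```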

Interaction effects

o What is the combined effect of factors A and B?
o When the lines have different slopes or cross, there is an interaction effect

Main Effects

o What is the effect of factor A?
o What is the effect of factor B?

Advantages of multifactorial mixed design

§ Individual differences controlled
§ Variance due to inter-subject differences is separated from the total
§ Reduced size of the error term
· Increases the F ratio
§ More powerful than when independent samples are used

Effects on Power (PANE)

§ Power (1 − β) increases with:
· Decreased β
· Increased alpha
· Increased sample size (N)
· Increased effect size

Decision Rule Pearson's R

§ rcalc ≥ rcrit or p < 0.05
§ Consider the correlation relative to sample size
• Small sample = low correlation coefficient, possible Type II error

F-Stat

§ Sums of squares for the treatment and error effects are divided by their associated df to obtain mean squares
· Fcalc ≥ Fcrit
§ Ratio of variance due to treatment conditions to error variance
§ Variance due to inter-subject differences is separated from the total
· Error variance is smaller
§ Reduced size of the error term
· Increases the F ratio
§ More powerful than when independent samples are used

Levene's Test

· The homogeneity assumption is met when the degree of variance is roughly equivalent, i.e., Levene's test is NOT significant
· Larger samples are more likely to show equal variance
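`scipy.stats.levene` gives the test statistic and p-value; a non-significant result supports the homogeneity assumption (SciPy assumed available; the groups below are hypothetical, with similar spreads):

```python
from scipy import stats

# Hypothetical groups with different means but similar spread
group_1 = [10, 12, 11, 13, 12]
group_2 = [20, 22, 21, 23, 22]

w, p = stats.levene(group_1, group_2)
# p > 0.05 here -> no evidence against homogeneity of variance
```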

ICF- International Classification of Function (Biopsychosocial Model of Health)

· Body structure and function
· Activity
· Participation

Post hoc tests

· Specific comparisons are decided after the ANOVA has been completed
· Comparison of all pairwise differences; provides information about which means are significantly different from other means

Type I vs Type II error

· Type I: rejecting the null when the null is true (false positive)
o Alpha is the probability of committing a Type I error
· Type II: accepting the null when the null is false (false negative)
o Beta is the probability of committing a Type II error

MDC (Minimum Detectable Change)

· The minimal amount of change not attributable to measurement error
o Smaller MDC = more responsive measure

Factorial Designs: Mixed model designs

• Have between and within factors

Factorial Designs: Repeated measures designs

• Measure the same subject on multiple occasions
• Take multiple measurements that might relate to each other on a single person (ROM, strength)

F ratio

▪ H0 is true: variance is due to error; MSE > MSB, F < 1.0
▪ H0 is false: between-group variance is large; MSB > MSE, F > 1.0
▪ Larger F = greater difference between group means relative to within-group variability
▪ Only tells us there is a difference, not where the difference is → post hoc testing (multiple comparison tests)

Friedman Two Way ANOVA by Ranks Decision Rule

▪ χ²r calc ≥ χ²r crit

Kruskal-Wallis ANOVA by Ranks Decision Rule

▪ Hcalc ≥ Hcrit

Test of Sphericity

▪ Mauchly's Test
• Only relevant if the ANOVA results in a significant F stat
• The sphericity assumption is met if Mauchly's test is NOT significant
• If the sphericity assumption is not met, Mauchly's test is significant and a correction is necessary

Spearman Rho (non-parametric) Decision Rule

▪ rhocalc ≥ rhocrit OR p < 0.05

Equal Variances t procedure

▪ The value of t is used to determine if the mean difference is significant
• Pooled variance
• Standard error of the difference
• Calculate t

Homogeneity of Variance

Sphericity
▪ States that the variances within each pair of difference scores will be relatively equal and correlated with each other

Between Groups (SSB)

Spread of group means about the grand mean
• Treatment effect = variability BETWEEN the groups

Within Groups or Error (SSE)

Spread of scores within each group about the group mean
• Unexplained sources = variability WITHIN the groups

Power

The probability of rejecting the null when the null is false (1 − β); the complement of the Type II error rate

