PSYC3010-Final Exam (ANOVA)


df factor (df A, df B)

# of levels of the factor - 1: df A = a - 1, df B = b - 1

Mixed ANOVA

- Also called split-plot ANOVA, as the design emerged in agricultural research
- Has both between-participants and within-participants factors among the focal IVs
- Because one of the IVs is within-participants, we include the random factor participants in the partitioning of the variance
- The participants factor is said to be nested under levels of the between-participants factor (each participant is tested in only 1 group) and crossed with the repeated-measures factor block (each participant participates in each block)
- Advantages: within-participants ANOVA is great for power, but some variables can be tricky or unethical to manipulate within participants (gender, brain injury). Manipulating a variable between participants also excludes potential carry-over effects, because observations in BP designs are independent (e.g. pre and post measures of the repeated measure), reducing Type 2 error

Multivariate Approach (MANOVA)

- Creates a linear composite of multiple DVs
- Our repeated-measures variable is treated as multiple DVs which are combined/weighted to maximise the difference between levels of the other variables (similar to the way regression combines multiple predictors)
- Multivariate tests: Pillai's Trace, Hotelling's Trace, Wilks' Lambda, Roy's Largest Root
- Does not require the restrictive assumptions that the mixed-model within-participants approach does (best used when all levels are measured on the same scale)
- Problem: instead of adapting the model to the observed DVs, it selectively weights or discounts DVs based on how they fit the model - atheoretical, and over-capitalises on chance

Epsilon adjustments

- Epsilon is simply a value by which the degrees of freedom for the F-ratio test are multiplied - reducing Type 1 error by adjusting the degrees of freedom
- Equal to 1 when the sphericity assumption is met (hence no adjustment, no problem in the data), and < 1 when the assumption is violated
- The lower the epsilon value (further from 1), the more conservative the test becomes - the bigger the adjustment you're making
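A sketch of how an epsilon adjustment rescales the degrees of freedom, with hypothetical values (k levels, n participants, and an epsilon of .75 are invented for illustration):

```python
# Sketch: applying an epsilon adjustment to within-participants df.
# Hypothetical design: k = 4 repeated-measures levels, n = 20 participants.
def adjusted_df(k, n, epsilon):
    """Return (df_effect, df_error) after multiplying both by epsilon."""
    df_effect = epsilon * (k - 1)           # unadjusted: k - 1
    df_error = epsilon * (k - 1) * (n - 1)  # unadjusted: (k - 1)(n - 1)
    return df_effect, df_error

# epsilon = 1 when sphericity holds: no adjustment
print(adjusted_df(4, 20, 1.0))   # (3.0, 57.0)
# epsilon < 1 when sphericity is violated: df shrink, critical F grows
print(adjusted_df(4, 20, 0.75))  # (2.25, 42.75)
```

Smaller df make the critical F larger, which is what makes the adjusted test more conservative.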

Blocking

- Used to explain variance in a DV when a novel IV/treatment leaves you with a power problem
- Reduces error variance (which represents all variance left over after accounting for the systematic variance explained by the IV of interest) to increase power when you know you have a problem or concern (e.g. an expensive, time-consuming study), by adding a second factor (one correlated with the DV) that may account for some of this left-over variance
- Don't pick a control variable closely related to your IV; often, variance in the DV can also be explained by additional factors which are less novel or interesting (control or concomitant variables)
- Adding control variables (reflecting chance or systematic unmeasured influences from other factors) to your design captures additional sources of variation or pre-existing differences in DV scores, reducing error - as long as the original IV and the 2nd factor don't explain the same variance in the DV

Blocking design

- Homogeneous blocks are created from the levels of the blocking factor: Ps are matched within levels of the blocking factor, e.g. IQ (high, medium, low)
- Participants within each block are then randomly assigned to levels of the IV (stratified random assignment), e.g. within each of three IQ groups, assign Ps to either caffeine or control. Blocking itself is not randomised: participants are categorised into levels of the blocking factor, assuming equal n per level
- A design with a blocking factor is also known as: randomised block design, stratification, matching, matched samples design
- Blocking variables are selected because of their known association with the DV and should not explain the same variance as the focal IV (this can distort the test). There is a loss of power if the blocking variable is poorly correlated with the DV, as you don't shrink the error variance (and the benefit is offset by the change in df error)
- The effect of the blocking factor is usually not of theoretical interest; it is used only to reduce error and increase the power of the test for the focal IV, or as a control variable that is in some way a moderator
- Can also be used to detect a confound (a significant threat to the generalisation of the focal IV, i.e. additional systematic variance that is unwanted). For instance, the effect of the treatment may depend on who the experimenter was (an E x T interaction or a main effect of E, rather than a main effect of T)

2-way Within-Participants Design

- Main effects for A and B, as well as an AxB interaction
- Each effect tested has a separate error term
- This error term simply corresponds to the interaction between the effect due to participants and the treatment effect: Main effect A = MS AxP; Main effect B = MS BxP; AxB interaction = MS ABxP
- A separate main effect comparison error term must be calculated for each comparison undertaken, capturing inconsistency within that comparison, e.g. MS A-COMP x P
- The simple effects error term is MS A at B1 x P - the interaction between the A treatment and participants, at B1
- The simple comparisons error term is MS A-COMP at B1 x P - the interaction between the A treatment (only the data contributing to the comparison, A-COMP) and participants, at B1

Mixed ANOVA Design

- Three omnibus effects: main effect of group, main effect of block, Group x Block interaction
- One error term is required for the between-participants factor (participants within groups): the deviation of each P's mean from the group mean. F = variability among group means / variability within groups
- One error term is required for the within-participants factor and the two-way interaction (the interaction between block and participants within groups): inconsistencies in the effect of the WP factor across Ps, adjusted for group differences. F = variability among means for WPF levels / variability in the WPF effect. Greenhouse-Geisser needs to be reported for the within main effect and the interaction
- Follow-up main effects: the between-participants factor uses the original error term from the omnibus test, MS P within G. The within-participants factor uses a separate error term for each comparison, MS B-COMP x P within G
- Interaction of group x block (focal IV = within-group factor): for simple effects of block we run a 1-way within-participants ANOVA on block separately for each group (each has its own error term). Note that the average of these error terms = the value of our MS BxP within G
- Group also uses a separate error term for each simple effect, by running four 1-way BP ANOVAs to compare groups at each of the four blocks, e.g. MS P within G at B1, MS P within G at B2, etc. OR it may be OK to pool (same error), because between-participants effects should be independent, so a special pooled error term may be used: MS Ps within cells. This error term is an estimate of the average error variance within the 12 cells: MS Ps within cells = SS Ps within cells / (df Ps within G + df BxPs within G)
- Simple comparisons use the same error term as the simple effects
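The pooled error term described above is simple arithmetic on the sums of squares; a minimal sketch with hypothetical SS and df values (all numbers invented for illustration, standing in for SS Ps within G and SS BxPs within G):

```python
# Sketch: pooling two error terms into one MS (hypothetical values).
ss_p_within_g, df_p_within_g = 36.0, 9        # participants within groups
ss_bxp_within_g, df_bxp_within_g = 54.0, 27   # block x Ps within groups

# MS Ps within cells = summed SS divided by summed df
ms_pooled = (ss_p_within_g + ss_bxp_within_g) / (df_p_within_g + df_bxp_within_g)
print(ms_pooled)  # 2.5
```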

Violations of sphericity

- Doesn't matter when a within-participants factor has two levels, because only one estimate of covariance can be computed
- Adjustments are available when sphericity is violated with 3+ levels; the F-ratios are positively biased (critical values of F are too small), so the probability of a Type 1 error increases
- Best to assume that we have a problem and make the adjustment proactively - change critical F by adjusting the degrees of freedom

Prior power

- For future research: what N is needed to achieve .80 power? - if you want to know how many participants you need in your study in order to find the effects you're looking for
- Typically use previous research to estimate the likely 1. effect size and 2. MS error

Post hoc power

- Power within the study, used to help strengthen your argument
- If you didn't find significant results but wanted to: you can argue that you didn't have sufficient power
- If you didn't find significant results and that's good: you can argue that you did have sufficient power
- Information needed: 1. effect size, 2. MS error, and 3. N in your study

How to reduce Error Variance (reduce variation in DV scores from sources other than your IVs)

- Improve operationalisation of variables (increases validity)
- Improve measurement of variables (increases internal reliability)
- Improve the design of your study (account for variance from other sources, e.g. blocking designs)
- Improve methods of analysis (control for variance from other sources, e.g. ANCOVA)

Within-Participants (repeated-measures) designs

- Total variance = between-participants + within-participants. Between-participants variance due to individual differences is partitioned out of error (and treatment)
- Within-participants = between-treatment + treatment x participant interaction - inconsistencies in the treatment effect
- People are tested in each of j conditions; the "participant" factor is crossed with the IV (e.g. factor A); a cell is one observation (person i in condition j)
- Error in an A x P (Factor A x Participant) design with only 1 observation per cell is the change (inconsistency) in the effects of A across participants - individual differences are removed from the error term
- Much more refined (smaller) error - huge increase in power = reduced Type 2 error
- Most of the variance is caused by the fact that some participants learn quickly and some learn slowly, i.e. people are different. The design removes individual-difference variation from the error term (it is necessary to have 2 estimates). Variability both between conditions and within conditions is influenced by the participant factor
- F test = MS TR / MS TR x P

Differentiating effect sizes (Cohen, 1973)

d = 0.2: small - 85% overlap
d = 0.5: medium - 67% overlap
d = 0.8: large - 53% overlap
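These overlap percentages can be reproduced with Cohen's U1 non-overlap measure, assuming two normal distributions with equal variance whose means are d standard deviations apart (a sketch using only the standard library):

```python
from statistics import NormalDist

def percent_overlap(d):
    """Overlap (Cohen's U1-based) between two equal-variance normal
    distributions whose means are d standard deviations apart."""
    p = NormalDist().cdf(d / 2)
    u1 = (2 * p - 1) / p        # Cohen's U1: proportion of non-overlap
    return 100 * (1 - u1)

for d in (0.2, 0.5, 0.8):
    print(d, round(percent_overlap(d)))  # -> 85, 67, 53 respectively
```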

Graphs for 3-way interactions

1. Plot the 2-way interactions within each level of the third (least important) factor 2. Check whether the pattern in the 1st graph (simple interaction of AB at C1) is different from the 2nd graph (simple interaction of AB at C2) - if the graphs do not show the same pattern, there is a 3-way interaction

3-way Factorial ANOVA

7 omnibus tests have to be reported: 3 main effects - if significant with > 2 levels, follow up with main effect comparisons. 3 two-way interactions - if significant, follow up with simple effects, and with > 2 levels, simple comparisons (testing the more theoretically important variable of the moderator, 1 set). 1 three-way interaction - if significant, follow up with simple interaction effects at each level of the least important factor, or according to hypotheses. Then, if the simple interaction effects are significant, follow up with simple simple effects of the focal variable (each identifies a moderator, such as the effect of factor A at each level of factor B, at each level of factor C); if significant with > 2 levels, run simple simple comparisons, computed for each level of the third factor (with a Bonferroni adjustment for 2 comparisons). Higher-order interactions may require you to change the interpretation given by the lower-order effect alone - partly depending on whether the interaction is disordinal. Conducting an exhaustive set of follow-up tests for higher-order factorial designs inflates the family-wise error rate - the probability of NO Type 1 error across 27 tests is only about 25% - so only report tests related to your research predictions.

F

= MS treatment /MS error
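The ratio can be computed by hand for a one-way design; a minimal sketch with made-up data for three groups of three:

```python
# Minimal one-way ANOVA by hand (hypothetical data).
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups (treatment) variability: group means around the grand mean
ss_treat = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
df_treat = len(groups) - 1

# Within-groups (error) variability: scores around their group mean
ss_error = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
df_error = len(all_scores) - len(groups)

ms_treat = ss_treat / df_treat   # MS = SS / df
ms_error = ss_error / df_error
F = ms_treat / ms_error
print(F)  # 7.0 for this toy data
```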

Marginal Means

A significant main effect of example type tells us these marginal means are different (the margins of the table of means). Main effects compare marginal means (differences in the average height of the levels of the factor). These are group means averaged over the levels of the other factor. If there are more than two means, follow-up comparisons are necessary.

Omnibus Tests

Are the tests you always do: Main Effect of Factor 1, Main Effect of Factor 2, and the Factor 1 x Factor 2 Interaction.

Grand mean

Average of all the data, if the group means are equal to each other, they will also equal the grand mean.

One - way Error Variance

Can't be explained (random or due to unmeasured influences). When there is more error variance within the groups than variance between the groups, the difference is not meaningful. The less error there is, the more we can conclude that the differences between conditions are systematic and meaningful.

MS

Corrected variance estimate used to calculate F ratio. = SS / df

Mixed ANOVA assumptions

DV is normally distributed. Between-participants terms - homogeneity of variance within levels of the between-participants factor. Within-participants terms - homogeneity of variance: assume WPF x P interactions are constant at all levels of the between-participants factor; the variance-covariance matrix is the same at all levels of the WPF; the pooled (or average) variance-covariance matrix exhibits compound symmetry (cf. sphericity). The usual epsilon adjustments apply when within-participants assumptions are violated.

Omega-squared

Describes the proportion of variance in the population's DV scores that is accounted for by the effect. A less biased, more conservative estimate of the effect size - more realistically small, but uncommonly reported.

Eta-squared (η²)

Describes the proportion of variance in the sample's DV scores that is accounted for by the effect. Considered a biased estimate - slightly too large (over-inflated) relative to the true magnitude of the effect in the population - but common. The most commonly reported effect size measure because it is easily interpretable (like R²). The difference between the two estimates depends on sample size and error variance: the smaller the sample, the more bias; there is little difference above about 20 people.
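Both estimates are simple functions of the ANOVA sums of squares; a sketch with hypothetical SS values (the numbers are invented, the formulas are the standard ones):

```python
# Sketch: eta-squared vs omega-squared from hypothetical ANOVA output.
ss_effect, ss_total = 14.0, 20.0
df_effect, ms_error = 2, 1.0

# Sample-based estimate: biased upward
eta_sq = ss_effect / ss_total

# Population estimate: corrects the effect SS for chance, so it is smaller
omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

print(round(eta_sq, 3), round(omega_sq, 3))  # omega-squared < eta-squared
```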

One - way Variance

Dispersion or spread of scores around a point of central tendency - the mean.

Interaction

Does the effect of one factor on the dependent variable depend on the level of the other factor? Non-parallel lines indicate an interaction. A significant interaction must always be followed up with tests of the simple effects. Simple effects test the effect of one factor at each level of the other factor.
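With a 2 x 2 table of hypothetical cell means, non-parallelism can be seen as a difference between simple effects (all values below are invented for illustration):

```python
# Sketch: detecting an interaction from a 2 x 2 table of cell means.
means = {("A1", "B1"): 2.0, ("A1", "B2"): 4.0,
         ("A2", "B1"): 6.0, ("A2", "B2"): 4.0}

# Simple effects of A at each level of B
simple_at_b1 = means[("A2", "B1")] - means[("A1", "B1")]  # 4.0
simple_at_b2 = means[("A2", "B2")] - means[("A1", "B2")]  # 0.0

# If the simple effects differ, the lines are non-parallel: an interaction
interaction_contrast = simple_at_b1 - simple_at_b2
print(interaction_contrast)  # nonzero -> follow up with simple effects
```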

Partitioning Variance Factorial between subjects ANOVA

Effect due to first factor Effect due to second factor Effect due to interaction Error / residual / within-group variance

One - way Within-groups variance

Error variance (due to random chance or unmeasured influences). Distribution of individual DV scores around the group mean.

One -way Anova - partitioning variance

Asks whether the variance due to experimental manipulations or groups of interest is proportionally greater than the rest of the variance (the degree to which participants' scores on the DV differ from one another due to random chance or unmeasured influences).

Type-1 error

Finding a significant difference in the sample that doesn't actually exist in the population - denoted alpha. Significant differences are defined with reference to a criterion or threshold (i.e., an acceptable rate) for committing Type 1 errors: typically set at .05 or .01.

Type-2 error

Finding no significant difference in the sample when one actually exists in the population - denoted beta. Power (1 - B) is the probability of rejecting a false null hypothesis. Hypothesis testing pays little attention to Type 2 error; the concept of power shifts the focus to Type 2 error.

Omnibus 2-way interaction in a 3-way design

Ignores levels of the third factor. Tests the 2-way P x R interaction with data collapsed across the third factor (e.g., averaged across gender). This has implications for the error term (it includes all men and women).

Effect size

Is another way of assessing the reliability of the result in terms of variance. Can compare size of effects within a factorial design: how much variance explained by factor 1, factor 2, their interaction, etc

Simple Comparisons

Is the follow-up of a significant simple effect with more than two levels, using t-tests or linear contrasts. The procedure is identical to that used to follow up main effects (main effect comparisons), except it is conducted within a level of the moderator, among the cell means. Most people use pairwise comparisons because that is what SPSS outputs. Don't report non-significant effects even if SPSS gives them to you, as doing so invites Type 1 error.

MS error

Is the pooled error from the test that protects us against Type 1 error.

Protected t-test

Is used to conduct pairwise comparisons (i.e., compare 2 means), protected against Type 1 error rate inflation. Just the same as a normal t-test, but the error term used is MS error (from the original ANOVA of the whole design), which adjusts the error. Compares the means of 2 levels of the factor.
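A minimal sketch of the protected t statistic: an ordinary two-sample t, but with MS error from the omnibus ANOVA in the denominator (the means, ns, and MS error below are invented for illustration; df = df error from the omnibus ANOVA):

```python
import math

def protected_t(mean1, mean2, n1, n2, ms_error):
    """Protected t: a two-sample t using the omnibus MS error
    as the pooled error variance."""
    se = math.sqrt(ms_error * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Hypothetical values: two group means from a design with MS error = 2.0
print(round(protected_t(5.0, 3.0, 10, 10, 2.0), 3))  # 3.162
```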

Different Types of Epsilon

Lower-bound epsilon - acts as if there are only 2 treatment levels with maximal heterogeneity - used for conditions of maximal heterogeneity, or worst-case violation of sphericity - often too conservative (Type 2 error). Greenhouse-Geisser epsilon - size of epsilon depends on the degree to which sphericity is violated - varies between 1 (sphericity intact) and the lower-bound epsilon (worst-case violation) - generally recommended: not too stringent, not too lax. Huynh-Feldt epsilon - an adjustment applied to the GG epsilon - often results in epsilon exceeding 1, in which case it is set to 1 (inflating the df) - used when the "true value" of epsilon is believed to be > .75 (assumes there are no real problems in your data); can be more liberal. Always report the original df, clarifying that you have made the correct adjustments with the Greenhouse-Geisser Fs to ensure adjustment for sphericity violations regardless of Mauchly's test, which is too conservative and may not be significant.

2 - way ANOVA

Main effect of A Main effect of B Interaction

Controls and Confounds in Blocking

Main effect of blocking = sign of a good control variable - shows systematic variability due to the blocking factor, which has been removed from 'error' variance - increases the power of the test for the focal IV. Blocking factor x IV interaction = sign of a confound variable - it increases power to detect the focal IV main effect (because systematic variability due to the interaction is removed from "error"), but this positive outcome is outweighed by the negative one: the interaction means that the effect of the focal IV changes depending on the blocking factor. A significant Block x IV interaction shows failure of the treatment IV effect to generalise across levels of the blocking variable; a non-significant interaction shows generalisability.

Omnibus tests in a 3-way ANOVA

Main effects: differences between the marginal means of one factor (averaging over levels of the other factors). Two-way interactions (first-order): examine whether the effect of one factor is the same at every level of a second factor (averaging over the third factor). Three-way interaction (second-order): examines whether the two-way interaction between two factors is the same at every level of the third factor - or whether the cell means differ more than you would expect given the main effects and the two-way interactions. (Requires more people, time and money.)

Advantages of Factorial Designs

More economical in terms of participants - fewer are needed than for two one-way designs, because we average over the levels of the other factor. Allows us to examine the interaction of independent variables. Tests the generalisability of a main effect across levels of the other factor (is it the same at every level?). One independent variable interacts with the other when it changes (moderates or qualifies) the impact of the second variable, depending on the level (e.g. men vs women).

Higher-order Factorial Designs

More than 2 independent variables (factors). Allow for designs with higher external validity, e.g. a 2 (young, old) x 3 (no alcohol, 1 drink, 5 drinks) x 2 (men, women) between-participants design = 12 cells (multiply the levels of the IVs). Column factor = 1st IV (focal); row factor = 2nd IV; table factor = 3rd IV (least important).

df total

N - 1, where N = abn: the number of cells multiplied by n (observations per cell), e.g. a 2 x 3 design x cell observations.

Assumptions of Factorial ANOVA (between groups)

Normality - treatment populations are normally distributed. Homogeneity of variance - treatment populations have the same variance. Independence - no two measures are drawn from the same participant. Independent random sampling - within any particular sample, no choosing of respondents on any kind of systematic basis. At least 2 observations (people) per cell, equal n, and data measured on a continuous scale (interval or ratio).

Factorial ANOVA spss

Omnibus tests are the main effects and interactions. Main effect comparisons test which marginal means of the factor are different, using t-tests, linear contrasts or pairwise comparisons; significant main effects for factors with more than two levels are followed up with main effect comparisons. Interactions are followed up with simple effect tests. Simple effect tests for Factor A test whether the cell means for the levels of Factor A are different, not overall but separately at each level of Factor B (e.g. A1B1 vs A2B1 = simple effect of A at B1); we run a simple effect test of A for each level of Factor B, and usually report the F-test even if the factor has only two levels. Simple comparisons show which cell means of the factor are different, usually using t-tests, linear contrasts or pairwise comparisons focusing on theoretically relevant differences; significant simple effects for factors with more than two levels are followed up with simple comparisons.

Simple effects

Only run simple effects of your focal IV. Include F-tests for all simple effects regardless of whether they were significant (i.e., the effects of the consumption factor at each level of the distraction factor). The error term in the follow-up test is the same error term as in the omnibus ANOVA. Simple effects re-partition the main effect and interaction variance: the sum of the simple effects of factor 1 = main effect of factor 1 + interaction, and the sum of the df for the simple effects of factor 1 = df for main effect 1 + df for the interaction. Danger: re-analysing inflates Type 1 error, which is why we only analyse what is necessary for our hypotheses.
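The partition identity above can be checked numerically; a sketch using a hypothetical 2 x 2 design with equal n and invented cell means:

```python
# Check: sum of SS for simple effects of A = SS(A) + SS(A x B).
# Hypothetical 2 x 2 design, n = 5 per cell, invented cell means.
n = 5
cell = {("A1", "B1"): 2.0, ("A2", "B1"): 6.0,
        ("A1", "B2"): 4.0, ("A2", "B2"): 4.0}

grand = sum(cell.values()) / 4
a_mean = {a: (cell[(a, "B1")] + cell[(a, "B2")]) / 2 for a in ("A1", "A2")}
b_mean = {b: (cell[("A1", b)] + cell[("A2", b)]) / 2 for b in ("B1", "B2")}

# Main effect of A and the A x B interaction
ss_a = 2 * n * sum((m - grand) ** 2 for m in a_mean.values())
ss_axb = n * sum((cell[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                 for a in ("A1", "A2") for b in ("B1", "B2"))

# Simple effects of A at B1 and at B2, summed
ss_simple = sum(n * sum((cell[(a, b)] - b_mean[b]) ** 2 for a in ("A1", "A2"))
                for b in ("B1", "B2"))

print(ss_simple, ss_a + ss_axb)  # the two quantities are equal
```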

Function of Effect size (d)

Power is closely related to effect size. Effect size estimates such as omega-squared and eta-squared can be used in power calculations. The most common is Cohen's d for pairwise comparisons - indicating how many standard deviations apart the means are, and thus the overlap of the two distributions: the smaller the d, the larger the overlap between the two distributions. Aim: to achieve .80 (80%) power as the minimum optimal level - the probability that you will find a significant effect, if it exists in the population.
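As a sketch of an a priori power calculation, the usual normal-approximation formula for n per group in a two-sided two-sample comparison (an approximation to the exact t-based answer; the planning values of d, alpha and power are assumptions):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample test,
    via the normal approximation: n = 2 * ((z_alpha/2 + z_beta) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = .5), alpha = .05, target power = .80:
print(n_per_group(0.5))  # roughly 63 per group; larger d needs fewer people
```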

Partial eta squared

Proportion of residual variance accounted for by the effect, where residual variance = variance left over to be explained (i.e., not accounted for by any other IV in the model); reported by SPSS Unianova, Manova, or GLM. It is the proportion of residual variance, after other variables/effects are controlled, that is accounted for by our effect. Limitations: in factorial ANOVA, [error + effect] is less than [total], so partial eta-squared is more liberal or inflated - sometimes massively. Eta-squared values add up to a maximum of 100%, but partial eta-squared values can add to more than 100%, so it is hard to make meaningful comparisons. It is most meaningful if you are only interested in one effect in the whole design, which most people are not.
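The contrast between the two measures is easy to see numerically; a sketch with hypothetical SS values from a design with several effects in the model:

```python
# Sketch: eta-squared vs partial eta-squared (hypothetical SS values).
ss_effect, ss_error, ss_total = 10.0, 20.0, 100.0

eta_sq = ss_effect / ss_total                        # share of ALL variance
partial_eta_sq = ss_effect / (ss_effect + ss_error)  # other effects removed

# Partial uses a smaller denominator, so it is always >= eta-squared
print(eta_sq, partial_eta_sq)
```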

Issues with Follow-up Comparisons

Redundancy: explaining the same difference more than once. Solution: orthogonal (independent) linear contrasts. Increases in family-wise error rate: the Type 1 error rate is alpha for each test, which leads to a higher probability of committing a Type 1 error over all tests. Solution 1: Bonferroni adjustment for critical t. Solution 2: conduct contrasts defined a priori, rather than an exhaustive orthogonal set (fewer contrasts).

Disadvantages of Within-participants design

Restrictive statistical assumptions. Sequencing effects: learning/practice; fatigue; habituation (insensitivity); sensitisation (becoming more responsive); contrast (pre-treatment standards); adaptation (adjusting to earlier manipulations changes reactions later); direct carry-over effects (learning something that changes your approach later). Counterbalance to reduce sequencing effects that systematically bias your focal IVs, but you can still get treatment x order interactions (e.g. contrast going from reinforcement to no reinforcement).

Power on 'SALE'

Sample size - increase sample size Alpha level - increase alpha level (dangerous) Larger effects - focusing on larger effects (mean differences) Error variance - decrease error variance

Factors affecting power

Significance level: a relaxed alpha criterion = more power, but more chance of false positives. Sample size: more N = more power - an increase in sample size will increase power (easily controlled). Mean differences: larger differences = more power - however we have zero control over the effect size for our focal effects. Error variance: less error variance = more power - reducing variance within groups is the preferred way of increasing power.

One - way Treatment Variance

Systematic differences due to our IV (e.g., experimental manipulation).

One - way Between-groups variance

Systematic variance due to membership in different groups / treatments. Distribution of group means around the grand mean.

Simple 2-way interactions in a 3-way design

Tests the 2-way interaction at each level of the third factor; a follow-up test conducted after a significant 3-way interaction. For example, test the 2-way P x R interaction for each gender group separately - treatment means within a level of the moderator. Uses the same pooled error term as the omnibus ANOVA (a more reliable error term than those from 4 separate one-way ANOVAs, which differ from one another).

Power

The degree to which we can detect treatment effects (including main effects, interactions, simple effects, etc.) when they exist in the population. 1. An effect must exist for you to find it - increasing power can help you detect even very small effects, but cannot produce effects that don't reflect what's actually going on in the population. 2. Large samples can be bewitching - a large sample can detect very small effects that may be relatively unimportant and unstable (particularly in clinical psychology), leading you to overestimate the importance of a small effect (or even to chase one). 3. Error variance is also important - high error variance (noisy data) means that a large effect may still turn out to be non-significant. Power analysis is only conducted for relevant or theoretical tests, not all tests in a 2-way or 3-way factorial ANOVA.

Disordinal Interaction

The effect disappears or reverses. Failure of generalisability. (Lines cross - signs reverse)

Cell means

The effect of one factor at one level of the other factor is called a simple effect. A cell is a combination of levels from more than one factor; a row of cells is called a condition. When the effect of one factor is conditional upon the levels of the other factor, we have an interaction (the simple effects are used to interpret an interaction). All cell means in the design must be represented on the graph. Cell means are the average of the n observations in each cell.

Factorial Design

The experiment has at least two factors (IVs), each with at least two levels. The IVs are crossed, meaning that you look at every combination of the levels of the factors - e.g. if Factor A has two levels and Factor B has three levels, there is a total of 6 combinations. For each pair of factors, there are two main effects and one interaction. Interactions and main effects can occur in any combination; they are independent. A significant interaction may qualify significant main effects: the simple effects of one IV depend on the level of the other IV under consideration - then the main effects may need to be reinterpreted.

Expected mean squares treatment E(MStreat)

The long term average of the variances within each sample PLUS any variance between each sample.

Expected mean squares error E(MSerror)

The long term average of the variances within each sample would be the population variance.

One-way design

Asks whether the mean dependent variable scores of the populations for each level of the factor differ from the grand mean.

Effect in ANOVA

The means are different

Two-way Factorial design

Asks whether the means of the populations corresponding to the levels of the first/second factor differ - is there a main effect of factor 1 or factor 2?

n

The number of people in a cell.

Ordinal Interaction

The same effect all the time just the effect changes shape. For example the effect benefits everyone but is larger for some people. (Lines do not cross - signs do not reverse)

F ratio is > 1

The treatment effect (variability between groups) is bigger than the "error" variability (variability within groups).

Linear Contrasts

To determine whether one group or set of groups is different from another group or set of groups, using a set of weights to define the contrast; contrasts can be orthogonal, such as contrast 1 comparing 0 vs 2 & 4, and contrast 2 comparing 2 vs 4. The bigger the L, the more variability; if L is closer to 0, the things you are comparing are not very different. The protected t-test is a special case of this technique. Contrast 1 compares (for distracted participants only) the mean creativity rating for participants who have had 0 pints with the mean creativity rating for participants who have had 2 or 4 pints (averaging between 2 and 4 pints can give misleading data).
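A sketch of the contrasts described above, with invented group means for the 0/2/4 pints example (weights sum to zero within each contrast, and the two contrasts are orthogonal):

```python
# Sketch: linear contrasts on hypothetical group means.
means = {"0 pints": 6.0, "2 pints": 4.0, "4 pints": 2.0}

# Contrast 1: 0 pints vs the average of 2 and 4 pints
w1 = {"0 pints": 1.0, "2 pints": -0.5, "4 pints": -0.5}
# Contrast 2: 2 pints vs 4 pints
w2 = {"0 pints": 0.0, "2 pints": 1.0, "4 pints": -1.0}

L1 = sum(w1[g] * means[g] for g in means)  # far from 0 -> groups differ
L2 = sum(w2[g] * means[g] for g in means)

# Orthogonality check: products of the weight pairs sum to zero
print(L1, L2, sum(w1[g] * w2[g] for g in means))
```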

N

Total number of people in all cells.

Within- Participants Designs - mixed-model approach

Treatment is a fixed factor, participants is a random factor. Fixed factor: you chose the levels of the IV, either by sampling all levels or based on a theoretical reason. Random factor: the levels of the IV are chosen at random. Thus, they have different error terms. Powerful when assumptions are met, and mathematically user-friendly, but has restrictive assumptions: 1. participants randomly drawn, 2. DV scores are normally distributed, 3. compound symmetry - homogeneity of variances (variances roughly equal across levels of the repeated-measures factor) and homogeneity of covariances (equal correlations/covariances between pairs of levels).

Limitations of p value

Use of an arbitrary acceptance criterion with a binary outcome (significant or non-significant). No information about the practical significance of findings. A large p-value (non-significant) will eventually slip under the acceptance criterion as the sample size increases. The magnitude of the experimental effect, or effect size, has been proposed as an accompaniment (if not an outright replacement).

Within participants ANOVA

We partition out and ignore the main effect of participants, and compute an error term estimating inconsistency as participants change over WP levels. In simple comparisons, use only the data for the conditions involved in the comparison and calculate separate error terms each time, e.g. linear contrast B2 vs B3 x P and/or B1 vs B4 x P. Partition treatment variance and residual variance for follow-ups. Each contrast effect is tested against the error term = C x P interaction.

df interaction

product of the df for factors in the interaction (b - 1) x (a - 1)

SS

Sum of squares: an index of variability around a mean.

df error

total # of observations - # of treatments: N - ab, or df for each cell x # of cells: (n - 1)ab
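These df formulas can be checked with a quick sketch for a hypothetical 2 x 3 between-participants design with n = 5 per cell (the df partition must sum to df total):

```python
# Sketch: df bookkeeping for a hypothetical a x b design, n per cell.
a, b, n = 2, 3, 5
N = a * b * n                  # total observations = cells x n

df_total = N - 1
df_a = a - 1
df_b = b - 1
df_axb = (a - 1) * (b - 1)
df_error = N - a * b           # equivalently (n - 1) * a * b

print(df_total == df_a + df_b + df_axb + df_error)  # True: df partition
```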

