Simple and Multiple Linear Regression


Predicted/Average Value of y for a Given Value of x

as estimated from the fitted regression line, is denoted by ŷ = a + bx; thus the point (x, a + bx) is always on the regression line

(z₁, z₂)

(z₁, z₂) = z ± z₁₋ₐₗₚₕₐ/₂ / √(n - 3)

Corrected Sum of Cross Products

denoted by Lₓᵧ and defined by Σⁿᵢ₌₁ (xᵢ - x̅)(yᵢ - ȳ); it can be shown that a short form for the corrected sum of cross products is given by (see picture)

Corrected Sum of Squares for x

denoted by Lₓₓ and defined by Σⁿᵢ₌₁ xᵢ² - (Σⁿᵢ₌₁ xᵢ)² / n; it represents the sum of squares of the deviations of the xᵢ from the mean
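
A minimal numerical sketch (not from the cards): it checks that the short computational forms of Lₓₓ and Lₓᵧ agree with their definitional deviation forms on toy data. The short form for Lₓᵧ, shown only as a picture above, is assumed here to be Σxᵢyᵢ − (Σxᵢ)(Σyᵢ)/n.

```python
# Sketch with made-up data: definitional vs. short computational forms.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

# definitional (deviation) forms
Lxx_dev = np.sum((x - x.mean()) ** 2)
Lxy_dev = np.sum((x - x.mean()) * (y - y.mean()))

# short computational forms (the Lxy version is the assumed "picture" formula)
Lxx_short = np.sum(x ** 2) - np.sum(x) ** 2 / n
Lxy_short = np.sum(x * y) - np.sum(x) * np.sum(y) / n

print(np.isclose(Lxx_dev, Lxx_short), np.isclose(Lxy_dev, Lxy_short))  # True True
```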

Assumptions Made in Linear-Regression Models

1. for any given value of x, the corresponding value of y has an average value α + βx, which is a linear function of x
2. for any given value of x, the corresponding value of y is normally distributed about α + βx with the same variance σ² for any x
3. for any two data points (x₁, y₁), (x₂, y₂), the error terms e₁, e₂ are independent of each other

To assess whether these assumptions are reasonable, we can use several different kinds of plots. The simplest plot is the x − y scatter plot; check for no obvious curvilinearity. The errors (e) about the true regression line (y = α + βx) have the same variance σ²; however, it can be shown that the residuals about the fitted regression line (y = a + bx) have different variances depending on how far an individual x value is from the mean x value used to generate the regression line. Specifically, residuals for points (xᵢ, yᵢ) where xᵢ is close to the mean x value for all points used in constructing the regression line (i.e., |xᵢ − x̅| is small) will tend to be larger than residuals where |xᵢ − x̅| is large. Interestingly, if |xᵢ − x̅| is very large, then the regression line is forced to go through the point (xᵢ, yᵢ) (or nearly through it), with a small residual for this point.

Short Computational Form for Regression and Residual SS

Regression SS = bLₓᵧ = b²Lₓₓ = L²ₓᵧ / Lₓₓ; Residual SS = Total SS - Regression SS = Lᵧᵧ - L²ₓᵧ / Lₓₓ

a summary measure of goodness of fit frequently referred to in the literature is R², defined as Reg SS / Total SS. It can be thought of as the proportion of the variance of y that is explained by x. If R² = 1, then all variation in y can be explained by variation in x, and all data points fall on the regression line; in other words, once x is known, y can be predicted exactly, with no error or variability in the prediction. If R² = 0, then x gives no information about y, and the variance of y is the same with or without knowing x. If R² is between 0 and 1, then for a given value of x, the variance of y is lower than it would be if x were unknown but is still greater than 0. In particular, the best estimate of the variance of y given x (or σ²) is given by Res MS (s²ᵧ•ₓ). For large n, s²ᵧ•ₓ ≈ s²ᵧ(1 - R²); thus R² represents the proportion of the variance of y that is explained by x.
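
A short sketch with made-up data that computes the sums of squares using the short forms from the previous card, checks that Total SS = Reg SS + Res SS, and reports R² = Reg SS / Total SS and the Res MS estimate of σ².

```python
# Sketch with made-up data: SS decomposition, R^2, and the Res MS estimate of sigma^2.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))

b = Lxy / Lxx                                 # least-squares slope
a = y.mean() - b * x.mean()                   # least-squares intercept
y_hat = a + b * x

total_ss = np.sum((y - y.mean()) ** 2)        # = Lyy
reg_ss = np.sum((y_hat - y.mean()) ** 2)      # = Lxy**2 / Lxx (short form)
res_ss = np.sum((y - y_hat) ** 2)             # = Lyy - Lxy**2 / Lxx (short form)

r_squared = reg_ss / total_ss
res_ms = res_ss / (n - 2)                     # s^2_{y.x}, best estimate of sigma^2
print(np.isclose(total_ss, reg_ss + res_ss), r_squared, res_ms)
```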

Relationship Between the Sample Regression Coefficient (b) and the Sample Correlation Coefficient (r)

b = rsᵧ / sₓ. Interpretation: the regression coefficient (b) can be interpreted as a rescaled version of the correlation coefficient (r), where the scale factor is the ratio of the standard deviation of y to that of x. Note that r is unchanged by a change in the units of x or y (or even by which variable is designated as x and which as y), whereas b is in the units of y/x.
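
A quick numerical check of the identity b = r·sᵧ/sₓ on toy data; np.corrcoef and the ddof=1 sample standard deviations are simply one way to obtain r, sₓ, and sᵧ.

```python
# Sketch with made-up data: b equals r * s_y / s_x.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])

r = np.corrcoef(x, y)[0, 1]     # sample correlation coefficient
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(np.isclose(b, r * y.std(ddof=1) / x.std(ddof=1)))  # True
```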

Linear Relationship Model

between x and y: E(y | x) = α + βx

Sample (Pearson) Correlation (r)

defined by Lₓᵧ / √(Lₓₓ Lᵧᵧ). It is not affected by changes in location or scale in either variable and must lie between -1 and +1. The sample correlation coefficient can be interpreted in a similar manner to the population correlation coefficient ρ.

Influential Points

defined heuristically as points that have an important influence on the coefficients of the fitted regression line. Outliers and influential points are not necessarily the same; an outlier (xᵢ, yᵢ) may or may not be influential, depending on its location relative to the remaining sample points.

Raw Sum of Cross Products

defined by Σⁿᵢ₌₁ xᵢyᵢ

Corrected Sum of Squares for y

denoted by Lᵧᵧ and defined by Σⁿᵢ₌₁ yᵢ² - (Σⁿᵢ₌₁ yᵢ)² / n. Notice that Lₓₓ and Lᵧᵧ are simply the numerators of the expressions for the sample variances of x and y.

(x̅, ȳ)

falls on the regression line and is common to all estimated regression lines, because a regression line can be represented as y = a + bx = (ȳ - bx̅) + bx = ȳ + b(x - x̅), or equivalently y - ȳ = b(x - x̅)

x and y in a Linear-Regression Equation

for any linear-regression equation of the form y = α + βx + e, y is called the dependent variable and x is called the independent variable, because we are trying to predict y as a function of x. If σ² is 0, then every point falls exactly on the regression line; the larger σ² is, the more scatter occurs about the regression line. If β is greater than 0, then as x increases, the expected value of y = α + βx will increase; if β is less than 0, then as x increases, the expected value of y = α + βx will decrease; if β is equal to 0, then there is no linear relationship between x and y.

Residual (Component)

for any sample point (xᵢ, yᵢ), the residual, or residual component, of that point about the regression line is defined by yᵢ - ŷᵢ; the regression component of that point about the regression line is defined by ŷᵢ - ȳ

Normality Assumption

most important in small samples. In large samples, an analog to the central-limit theorem can be used to establish the unbiasedness of b as an estimator of β and the appropriateness of tests of significance concerning β (such as the F test or t test for simple linear regression), as well as of the formulas for confidence-interval width for β, even if the error terms are not normally distributed.

Two-Sided 100% × (1 - α) Confidence Intervals for the Parameters of a Regression Line

if b and a are, respectively, the estimated slope and intercept of a regression line and se(b), se(a) are the estimated standard errors, then the two-sided 100% × (1 − α) confidence intervals for β and α are given by b ± tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ se(b) and a ± tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ se(a), respectively
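
A sketch of these confidence intervals on toy data. se(b) = (s²ᵧ•ₓ / Lₓₓ)¹/² is taken from the σ² card below; se(a) is not written out in these cards, so the usual form sᵧ•ₓ √(1/n + x̅²/Lₓₓ) is assumed here.

```python
# Sketch with made-up data: 95% confidence intervals for beta and alpha.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))
b = Lxy / Lxx
a = y.mean() - b * x.mean()
s2_yx = (Lyy - Lxy ** 2 / Lxx) / (n - 2)               # Res MS

se_b = np.sqrt(s2_yx / Lxx)                            # se(b), from the sigma^2 card
se_a = np.sqrt(s2_yx * (1 / n + x.mean() ** 2 / Lxx))  # assumed usual form of se(a)

tq = stats.t.ppf(0.975, n - 2)                         # t_{n-2, 0.975}
print((b - tq * se_b, b + tq * se_b))                  # 95% CI for beta
print((a - tq * se_a, a + tq * se_a))                  # 95% CI for alpha
```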

Interpretation of the Sample Correlation Coefficient

if the correlation is greater than 0, then the variables are said to be positively correlated: two variables (x, y) are positively correlated if as x increases, y tends to increase, whereas as x decreases, y tends to decrease. If the correlation is less than 0, such as for pulse rate and age, then the variables are said to be negatively correlated: two variables (x, y) are negatively correlated if as x increases, y tends to decrease, whereas as x decreases, y tends to increase. If the correlation is exactly 0, such as for birthweight and birthday, then the variables are said to be uncorrelated: two variables (x, y) are uncorrelated if there is no linear relationship between x and y. Thus the correlation coefficient provides a quantitative measure of the dependence between two variables: the closer |r| is to 1, the more closely related the variables are; if |r| = 1, then one variable can be predicted exactly from the other. As was the case for the population correlation coefficient (ρ), interpreting the sample correlation coefficient (r) in terms of degree of dependence is only correct if the variables x and y are normally distributed and in certain other special cases; if the variables are not normally distributed, the interpretation may not be correct.

What does the corrected sum of cross products mean?

if β > 0, then as x increases, y will tend to increase as well. Another way of expressing this relationship: if (xᵢ − x̅) is greater than 0 (which will be true for large values of xᵢ), then yᵢ will tend to be large, or yᵢ − ȳ will be greater than 0, and (xᵢ − x̅)(yᵢ − ȳ) will be the product of two positive numbers and thus will be positive. Similarly, if xᵢ − x̅ < 0, then yᵢ − ȳ will also tend to be < 0, and (xᵢ − x̅)(yᵢ − ȳ) will be the product of two negative numbers and thus will be positive. Thus if β > 0, the sum of cross products will tend to be positive. If β < 0, then when x is small, y will tend to be large, and when x is large, y will tend to be small; in both cases, (xᵢ − x̅)(yᵢ − ȳ) will often be the product of one positive and one negative number and will be negative. Thus if β < 0, the sum of cross products will tend to be negative. Finally, if β = 0, then x and y bear no linear relation to each other, and the sum of cross products will be close to 0.

σ²

in general, σ² is unknown; however, the best estimate of σ² is given by s²ᵧ•ₓ. Hence, se(b) is estimated by (s²ᵧ•ₓ / Lₓₓ)¹/². Finally, under H₀, t = b / se(b) follows a t distribution with n - 2 df. Therefore, the following test procedure for a two-sided test with significance level α is used.

Independence Assumption

is important to establish the validity of p-values and confidence-interval width from simple linear regression. Specifically, if multiple data points from the same individual are used in fitting a regression line, then p-values will generally be too low and confidence-interval width will generally be too narrow when standard methods of regression analysis (which assume independence) are used.

b

it can be shown that the estimate b of the underlying slope β that minimizes S is given by b = Lₓᵧ / Lₓₓ; thus, we refer to b as the least-squares slope. Because Lₓₓ is always positive (except in the degenerate case when all x's in the sample are the same), the sign of b is the same as the sign of the sum of cross products Lₓᵧ. Furthermore, for a given estimate of the slope b, it can be shown that the value of the intercept for the line that satisfies the least-squares criterion (i.e., that minimizes S) is given by a = ȳ − bx̅.
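
A minimal sketch computing the least-squares slope and intercept from the corrected sums, cross-checked against numpy's built-in degree-1 polynomial (least-squares) fit on the same toy data.

```python
# Sketch with made-up data: b = Lxy / Lxx and a = ybar - b * xbar.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])

Lxx = np.sum((x - x.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))

b = Lxy / Lxx                 # least-squares slope
a = y.mean() - b * x.mean()   # least-squares intercept

slope_np, intercept_np = np.polyfit(x, y, 1)   # numpy's least-squares line
print(np.isclose(b, slope_np), np.isclose(a, intercept_np))  # True True
```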

Outliers

it is more difficult to detect outliers in a regression setting than in univariate problems, particularly if multiple outliers are present in a data set

Standard Deviation of Residuals about the Fitted Regression Line

let (xᵢ, yᵢ) be a sample point used in estimating the regression line y = α + βx. If y = a + bx is the estimated regression line and êᵢ is the residual for the point (xᵢ, yᵢ) about the estimated regression line, then êᵢ = yᵢ - (a + bxᵢ) and (see picture)

Interval Estimation for Predictions Made from Regression Lines

one important use for regression lines is in making predictions. Frequently, the accuracy of these predictions must be assessed; the accuracy depends on whether we are making predictions for one specific y or for the mean value of all y's with a given x.

How can the plots be quantified?

one strategy is to square the deviations about the mean yᵢ - ȳ, sum them up over all points, and decompose this sum of squares into regression and residual components

Presence of Unequal Residual Variances

one strategy that can be employed if unequal residual variances are present is to transform the dependent variable (y) to a different scale; this type of transformation is called a variance-stabilizing transformation

How can the regression line be used?

one use is to predict values of y for given values of x

Least-Squares Method

process of fitting a regression line by minimizing the sum of the squared vertical distances of the sample points from the line, S = Σⁿᵢ₌₁ (yᵢ - a - bxᵢ)²

Linear Regression

relating a normally distributed outcome variable y to one or more predictor variables, x₁, . . ., xₖ, where the x's may be either continuous or categorical variables

One-Sample z Test for a Correlation Coefficient

sometimes the correlation between two random variables is expected to be some quantity ρ₀ other than 0, and we want to test the hypothesis H₀: ρ = ρ₀ vs. H₁: ρ ≠ ρ₀. The problem with using the t test formulation is that the sample correlation coefficient r has a skewed distribution for nonzero ρ that cannot be easily approximated by a normal distribution. Fisher considered this problem and proposed the following transformation to better approximate a normal distribution.

Interval Estimates for Regression Parameters

standard errors and interval estimates for the parameters of a regression line are often computed to obtain some idea of the precision of the estimates. If we want to compare our regression coefficients with previously published regression coefficients β₀ and α₀, where those estimates are based on much larger samples than ours, then, based on our data, we can check whether β₀ and α₀ fall within the 95% confidence intervals for β and α, respectively, to decide whether the two sets of results are comparable.

Interval Estimation of a Correlation Coefficient (ρ)

suppose we have a sample correlation coefficient r based on a sample of n pairs of observations. To obtain a two-sided 100% × (1 − α) confidence interval for the population correlation coefficient (ρ): 1. compute Fisher's z transformation of r: z = 1/2 ln [(1 + r) / (1 - r)] 2. let z(ρ) = Fisher's z transformation of ρ = 1/2 ln [(1 + ρ) / (1 - ρ)]; a two-sided 100% × (1 − α) confidence interval for z(ρ) is given by (z₁, z₂), where z₁ = z - z₁₋ₐₗₚₕₐ/₂ / √(n - 3), z₂ = z + z₁₋ₐₗₚₕₐ/₂ / √(n - 3), and z₁₋ₐₗₚₕₐ/₂ = 100% × (1 - α/2) percentile of an N(0, 1) distribution 3. a two-sided 100% × (1 - α) confidence interval for ρ is then given by (ρ₁, ρ₂), where (see picture)
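
A sketch of the three steps with illustrative numbers (r = 0.45, n = 40). The back-transformation in step 3, shown only as a picture above, is assumed to be the inverse of Fisher's z, i.e., ρ = (e²ᶻ − 1)/(e²ᶻ + 1) = tanh(z).

```python
# Sketch with illustrative numbers: 95% CI for rho via Fisher's z.
import numpy as np
from scipy import stats

r, n, alpha = 0.45, 40, 0.05

z = 0.5 * np.log((1 + r) / (1 - r))            # step 1: Fisher's z of r
zq = stats.norm.ppf(1 - alpha / 2)             # z_{1 - alpha/2}
z1 = z - zq / np.sqrt(n - 3)                   # step 2: CI for z(rho)
z2 = z + zq / np.sqrt(n - 3)
rho1, rho2 = np.tanh(z1), np.tanh(z2)          # step 3: assumed back-transform
print((rho1, rho2))
```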

Residual Mean Square (Res MS)

the Res SS divided by (n - k - 1), or Res MS = Res SS / (n - k - 1). For SLR, k = 1 and Res MS = Res SS / (n - 2). We refer to n - k - 1 as the degrees of freedom for the residual sum of squares, or Res df. Res MS is also sometimes denoted by s²ᵧ•ₓ in the literature. Under H₀, F = Reg MS / Res MS follows an F distribution with 1 and n − 2 df, respectively, and H₀ should be rejected for large values of F. Thus, for a level α test, H₀ will be rejected if F > F₁, ₙ₋₂, ₁₋ₐₗₚₕₐ and accepted otherwise.

Predictions Made from Regression Lines for Individual Observations

suppose we wish to make predictions from a regression line for an individual observation with independent variable x that was not used in constructing the regression line. The distribution of observed y values for the subset of individuals with independent variable x is normal with mean ŷ = a + bx and standard deviation given by (see picture). Furthermore, 100% × (1 - α) of the observed values will fall within the interval ŷ ± tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ se₁(ŷ); this interval is sometimes called a 100% × (1 - α) prediction interval for y. The magnitude of the standard error depends on how far the observed value of x for the new sample point is from the mean value of x for the data points used in computing the regression line (x̅); the standard error is smaller when x is close to x̅ than when x is far from x̅. In general, making predictions from a regression line for values of x that are very far from x̅ is risky because the predictions are likely to be more inaccurate.
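
A sketch of a 95% prediction interval for a single new observation at a hypothetical x₀, assuming the usual standard error se₁(ŷ) = sᵧ•ₓ √(1 + 1/n + (x₀ − x̅)²/Lₓₓ), which the card shows only as a picture.

```python
# Sketch with made-up data and a hypothetical new point x0 = 6.0.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))
b = Lxy / Lxx
a = y.mean() - b * x.mean()
s2_yx = (Lyy - Lxy ** 2 / Lxx) / (n - 2)      # Res MS

x0 = 6.0
y_hat = a + b * x0
se1 = np.sqrt(s2_yx * (1 + 1 / n + (x0 - x.mean()) ** 2 / Lxx))  # assumed se1(y_hat)
tq = stats.t.ppf(0.975, n - 2)
print((y_hat - tq * se1, y_hat + tq * se1))   # 95% prediction interval for y at x0
```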

Power and Sample-Size Estimation for Correlation Coefficients

suppose we wish to test the hypothesis H₀: ρ = 0 vs. H₁: ρ = ρ₀ > 0. For the specific alternative ρ = ρ₀, to test the hypothesis with a one-sided significance level of α and specified sample size n, the power is given by power = Φ(z₀ √(n - 3) - z₁₋ₐₗₚₕₐ), where z₀ is Fisher's z transformation of ρ₀. For the specific alternative ρ = ρ₀, to test the hypothesis with a one-sided significance level of α and specified power of 1 − β, we require a sample size of (see picture)
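
A sketch of the power formula and of the sample size obtained by inverting it; the card gives the sample-size expression only as a picture, so that inversion is an assumption here.

```python
# Sketch: power of the one-sided test of H0: rho = 0 vs. rho = rho0 > 0, and the
# sample size needed for 80% power (assumed inversion of the power formula).
import numpy as np
from scipy import stats

rho0, alpha = 0.3, 0.05
z0 = 0.5 * np.log((1 + rho0) / (1 - rho0))   # Fisher's z of rho0

n = 50
power = stats.norm.cdf(z0 * np.sqrt(n - 3) - stats.norm.ppf(1 - alpha))
print(power)

beta = 0.20   # target power = 1 - beta = 0.80
n_req = ((stats.norm.ppf(1 - alpha) + stats.norm.ppf(1 - beta)) / z0) ** 2 + 3
print(int(np.ceil(n_req)))
```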

Relationship Between the Sample Correlation Coefficient (r) and the Population Correlation Coefficient (ρ)

s²ₓ = Lₓₓ / (n - 1) and s²ᵧ = Lᵧᵧ / (n - 1). If we define the sample covariance by sₓᵧ = Lₓᵧ / (n - 1), then we can re-express r as r = sₓᵧ / (sₓ sᵧ) = (sample covariance between x and y) / [(sample standard deviation of x)(sample standard deviation of y)]

Linear-Regression Methods

techniques for assessing the possible association between a normally distributed variable y and a categorical variable x

Regression Mean Square (Reg MS)

the Reg SS divided by the number of predictor variables (k) in the model (not including the constant); thus Reg MS = Reg SS / k. For SLR, k = 1 and thus Reg MS = Reg SS; for MLR, k > 1. We will refer to k as the degrees of freedom for the regression sum of squares, or Reg df.

Standard Error and Confidence Interval for Predictions Made from Regression Lines for the Average Value of y for a Given x

the best estimate of the average value of y for a given x is ŷ = a + bx; its standard error, denoted by se₂(ŷ), is given by (see picture). Furthermore, a two-sided 100% × (1 - α) confidence interval for the average value of y is ŷ ± tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ se₂(ŷ). Note again that the standard error for the average value of y for a given value of x is not the same for all values of x but gets larger the further x is from the mean value of x (x̅) used to estimate the regression line.
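
A sketch of the confidence interval for the average value of y at a hypothetical x₀, assuming the usual standard error se₂(ŷ) = sᵧ•ₓ √(1/n + (x₀ − x̅)²/Lₓₓ) (shown only as a picture above); note it lacks the extra "1 +" term that the individual prediction interval carries.

```python
# Sketch with made-up data: 95% CI for the average value of y at x0 = 6.0.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))
b = Lxy / Lxx
a = y.mean() - b * x.mean()
s2_yx = (Lyy - Lxy ** 2 / Lxx) / (n - 2)

x0 = 6.0
y_hat = a + b * x0
se2 = np.sqrt(s2_yx * (1 / n + (x0 - x.mean()) ** 2 / Lxx))  # assumed se2(y_hat)
tq = stats.t.ppf(0.975, n - 2)
print((y_hat - tq * se2, y_hat + tq * se2))
```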

Estimation of the Least-Squares Line

the coefficients of the least-squares line y = a + bx are given by b = Lₓᵧ / Lₓₓ and a = ȳ − bx̅ = (see picture). Sometimes the line y = a + bx is called the estimated regression line or, more briefly, the regression line.

F Test for Simple Linear Regression

the criterion for goodness of fit used is the ratio of the regression sum of squares to the residual sum of squares: a large ratio indicates a good fit, whereas a small ratio indicates a poor fit. In hypothesis-testing terms, we want to test the hypothesis H₀: β = 0 vs. H₁: β ≠ 0, where β is the underlying slope of the regression line.

Variance Stabilizing Transformation

the goal of using such a transformation is to make the residual variances approximately the same for each level of x (or, equivalently, each level of the predicted value). The most common transformations when the residual variance is an increasing function of x are the ln or square-root transformations. The square-root transformation is useful when the residual variance is proportional to the average value of y (e.g., if the average value goes up by a factor of 2, then the residual variance goes up by a factor of 2 as well). The log transformation is useful when the residual variance is proportional to the square of the average value (e.g., if the average value goes up by a factor of 2, then the residual variance goes up by a factor of 4). The use of the appropriate transformation in certain situations is crucial, and each of the linearity, equal-variance, and normality assumptions can be made more plausible using a transformed scale; however, occasionally a transformation may make the equal-variance assumption more plausible but the linearity assumption less plausible. Another possibility is to keep the data in the original scale but employ a weighted regression in which the weight is approximately inversely proportional to the residual variance; this may be reasonable if the data points consist of averages over varying numbers of individuals (e.g., people living in different cities, where the weight is proportional to the size of the city). Other issues of concern in judging the goodness of fit of a regression line are outliers and influential points.
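
An illustrative sketch with simulated data (not from the cards): when the residual spread grows with the level of x, refitting with y on the log scale tends to even it out.

```python
# Simulated sketch: residual spread from a straight-line fit, in the lower vs.
# upper half of x, before and after a log transformation of y.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 100)
y = np.exp(0.3 * x) * rng.lognormal(sigma=0.2, size=x.size)  # multiplicative error

def half_spreads(xv, yv):
    slope, intercept = np.polyfit(xv, yv, 1)       # straight-line fit
    resid = yv - (intercept + slope * xv)
    return resid[:50].std(), resid[50:].std()      # spread in lower/upper half of x

print(half_spreads(x, y))          # raw scale: spread tends to grow with x
print(half_spreads(x, np.log(y)))  # log scale: spread is roughly even
```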

Least-Squares Line/Estimated Regression Line

the line y = a + bx that minimizes the sum of squared distances of the sample points from the line, given by S = Σⁿᵢ₌₁ dᵢ². This method of estimating the parameters of a regression line is known as the method of least squares.

t Test for Simple Linear Regression

the procedure is widely used and also provides interval estimates for β. The hypothesis test here is based on the sample regression coefficient b or, more specifically, on b/se(b); H₀ will be rejected if |b|/se(b) > c for some constant c and accepted otherwise.

Using a Regression vs. Correlation Coefficient

the regression coefficient is used when we specifically want to predict one variable from another; the correlation coefficient is used when we simply want to describe the linear relationship between two variables but do not want to make predictions. In cases in which it is not clear which of these two aims is primary, both a regression and a correlation coefficient can be reported.

How can the slope of the regression line be interpreted?

the slope of the regression line tells us the amount by which the average value of y increases per unit increase in x

Studentized Residual

the studentized residual corresponding to the point (xᵢ, yᵢ) is êᵢ / sd(êᵢ)
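
A sketch of studentized residuals computed two ways: by hand, assuming the usual sd(êᵢ) = sᵧ•ₓ √(1 − 1/n − (xᵢ − x̅)²/Lₓₓ), and via statsmodels' internally studentized residuals as a cross-check; both the explicit formula and the library choice are assumptions, not from the cards.

```python
# Sketch with made-up data: studentized residuals by hand vs. statsmodels.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))
b = Lxy / Lxx
a = y.mean() - b * x.mean()
s_yx = np.sqrt((Lyy - Lxy ** 2 / Lxx) / (n - 2))

resid = y - (a + b * x)
sd_resid = s_yx * np.sqrt(1 - 1 / n - (x - x.mean()) ** 2 / Lxx)  # assumed formula
stud_hand = resid / sd_resid

fit = sm.OLS(y, sm.add_constant(x)).fit()
stud_lib = OLSInfluence(fit).resid_studentized_internal
print(np.allclose(stud_hand, stud_lib))  # True
```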

Total Sum of Squares (Total SS)

the sum of squares of the deviations of the individual sample points from the sample mean Σⁿᵢ₌₁ (yᵢ - ȳ)²

Regression Sum of Squares (Reg SS)

the sum of squares of the regression components: Σⁿᵢ₌₁ (ŷᵢ - ȳ)²

Residual Sum of Squares (Res SS)

the sum of squares of the residual components Σⁿᵢ₌₁ (yᵢ - ŷᵢ)²

Two-Sample Test for Correlations

the use of Fisher's z transformation can be extended to two-sample problems. Suppose we want to test the hypothesis H₀: ρ₁ = ρ₂ vs. H₁: ρ₁ ≠ ρ₂. It is reasonable to base the test on the difference between the z's in the two samples: if |z₁ − z₂| is large, then H₀ will be rejected; otherwise, H₀ will be accepted. This principle suggests the following test procedure for a two-sided level α test.
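
A sketch of the two-sample comparison. The test statistic used here, λ = (z₁ − z₂) / √(1/(n₁ − 3) + 1/(n₂ − 3)), which is N(0, 1) under H₀, is the standard form implied but not written out by the card; the sample values are made up.

```python
# Sketch: two-sample comparison of correlations via Fisher's z (assumed statistic).
import numpy as np
from scipy import stats

def fisher_z(r):
    return 0.5 * np.log((1 + r) / (1 - r))

r1, n1 = 0.6, 50
r2, n2 = 0.4, 60

lam = (fisher_z(r1) - fisher_z(r2)) / np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
p_value = 2 * (1 - stats.norm.cdf(abs(lam)))
print(lam, p_value)
```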

Fisher's z Transformation of the Sample Correlation Coefficient r

the z transformation of r, given by z = 1/2 ln [(1 + r) / (1 - r)], is approximately normally distributed under H₀ with mean z₀ = 1/2 ln [(1 + ρ₀) / (1 - ρ₀)] and variance 1 / (n - 3). The z transformation is very close to r for small values of r but tends to deviate substantially from r for larger values of r. Fisher's z transformation can be used to conduct the hypothesis test as follows: under H₀, Z is approximately normally distributed with mean z₀ and variance 1 / (n - 3) or, equivalently, λ = (Z - z₀) √(n - 3) ~ N(0, 1). H₀ will be rejected if z is far from z₀; thus, a one-sample z test for a correlation coefficient should be used.

F Test for SLR

to test H₀: β = 0 vs. H₁: β ≠ 0, use the following procedure: 1. compute the test statistic F = Reg MS / Res MS = (L²ₓᵧ / Lₓₓ) / [(Lᵧᵧ - L²ₓᵧ / Lₓₓ) / (n - 2)], which follows an F₁, ₙ₋₂ distribution under H₀ 2. for a two-sided test with significance level α, if F > F₁, ₙ₋₂, ₁₋ₐₗₚₕₐ, then reject H₀; if F ≤ F₁, ₙ₋₂, ₁₋ₐₗₚₕₐ, then accept H₀ 3. the exact p-value is given by Pr(F₁, ₙ₋₂ > F). These results are usually summarized in an ANOVA table.
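
A sketch with made-up data: the F statistic computed from the short forms, with its p-value from the F(1, n − 2) distribution.

```python
# Sketch: F test for simple linear regression on toy data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))

reg_ms = Lxy ** 2 / Lxx                      # Reg SS / 1
res_ms = (Lyy - Lxy ** 2 / Lxx) / (n - 2)    # Res SS / (n - 2)
F = reg_ms / res_ms
p_value = stats.f.sf(F, 1, n - 2)            # Pr(F_{1, n-2} > F)
print(F, p_value)
```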

t Test for SLR

to test the hypothesis H₀: β = 0 vs. H₁: β ≠ 0, use the following procedure: 1. compute the test statistic t = b / (s²ᵧ•ₓ / Lₓₓ)¹/² 2. for a two-sided test with significance level α, if t > tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ or t < tₙ₋₂, ₐₗₚₕₐ/₂ = -tₙ₋₂, ₁₋ₐₗₚₕₐ/₂, then reject H₀; if -tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ ≤ t ≤ tₙ₋₂, ₁₋ₐₗₚₕₐ/₂, then accept H₀ 3. the p-value is given by p = 2 × (area to the left of t under a tₙ₋₂ distribution) if t < 0, or p = 2 × (area to the right of t under a tₙ₋₂ distribution) if t ≥ 0
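
A sketch with made-up data: the t statistic for H₀: β = 0, its two-sided p-value, and a check that t² equals the F statistic from the ANOVA approach.

```python
# Sketch: t test for simple linear regression on toy data; t**2 == F for SLR.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1])
n = len(x)

Lxx = np.sum((x - x.mean()) ** 2)
Lyy = np.sum((y - y.mean()) ** 2)
Lxy = np.sum((x - x.mean()) * (y - y.mean()))

b = Lxy / Lxx
s2_yx = (Lyy - Lxy ** 2 / Lxx) / (n - 2)     # Res MS
t = b / np.sqrt(s2_yx / Lxx)
p_value = 2 * stats.t.sf(abs(t), n - 2)
F = (Lxy ** 2 / Lxx) / s2_yx
print(t, p_value, np.isclose(t ** 2, F))     # last value: True
```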

One-Sample t Test for a Correlation Coefficient

to test the hypothesis H₀: ρ = 0 vs. H₁: ρ ≠ 0, use the following procedure: 1. compute the sample correlation coefficient r 2. compute the test statistic t = r(n - 2)¹/² / (1 - r²)¹/², which under H₀ follows a t distribution with n - 2 df 3. for a two-sided level α test, if t > tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ or t < -tₙ₋₂, ₁₋ₐₗₚₕₐ/₂, then reject H₀; if -tₙ₋₂, ₁₋ₐₗₚₕₐ/₂ ≤ t ≤ tₙ₋₂, ₁₋ₐₗₚₕₐ/₂, then accept H₀ 4. the p-value is given by p = 2 × (area to the left of t under a tₙ₋₂ distribution) if t < 0, or p = 2 × (area to the right of t under a tₙ₋₂ distribution) if t ≥ 0 5. we assume an underlying normal distribution for each of the random variables used to compute r
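
A sketch with made-up data: the one-sample t test for H₀: ρ = 0, with scipy's pearsonr as a cross-check (it reports the same two-sided p-value).

```python
# Sketch: one-sample t test for a correlation coefficient on toy data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 4.0, 5.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 8.2, 9.8, 14.1, 13.5])
n = len(x)

r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
p_value = 2 * stats.t.sf(abs(t), n - 2)

r_check, p_check = stats.pearsonr(x, y)
print(p_value, p_check)   # the two p-values agree
```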

One-Sample z Test for a Correlation Coefficient

to test the hypothesis H₀: ρ = ρ₀ vs. H₁: ρ ≠ ρ₀, use the following procedure: 1. compute the sample correlation coefficient r and the z transformation of r 2. compute the test statistic λ = (z - z₀) √(n - 3) 3. if λ > z₁₋ₐₗₚₕₐ/₂ or λ < -z₁₋ₐₗₚₕₐ/₂, then reject H₀; if -z₁₋ₐₗₚₕₐ/₂ ≤ λ ≤ z₁₋ₐₗₚₕₐ/₂, then accept H₀ 4. the exact p-value is given by p = 2 × Φ(λ) if λ ≤ 0, or p = 2 × [1 - Φ(λ)] if λ > 0 5. assume an underlying normal distribution for each of the random variables used to compute r and z
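
A sketch of the one-sample z test with illustrative numbers (r = 0.45, n = 40, ρ₀ = 0.30).

```python
# Sketch: one-sample z test for H0: rho = rho0 via Fisher's z transformation.
import numpy as np
from scipy import stats

r, n, rho0 = 0.45, 40, 0.30

z = 0.5 * np.log((1 + r) / (1 - r))          # Fisher's z of r
z0 = 0.5 * np.log((1 + rho0) / (1 - rho0))   # Fisher's z of rho0
lam = (z - z0) * np.sqrt(n - 3)
p_value = 2 * stats.norm.cdf(lam) if lam <= 0 else 2 * (1 - stats.norm.cdf(lam))
print(lam, p_value)
```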

Sample Regression Coefficient b

unbiased estimate of the population regression coefficient β; in particular, under H₀, E(b) = 0. Furthermore, the variance of b is given by (see picture)

Correlation Coefficient

useful tool for quantifying the relationship between variables, and it is better suited for this purpose than the regression coefficient

When is the least-squares method appropriate?

whenever the average residual for each given value of x is 0, that is, when E(e | X = x) = 0. Normality of these residuals is not strictly required; however, the normality assumption is necessary to perform hypothesis tests concerning regression parameters.

a Good-Fitting Regression Line

will have regression components that are large in absolute value relative to the residual components, whereas the opposite is true for poor-fitting regression lines: the best-fitting regression line will have large regression components and small residual components, and the worst-fitting regression line will have small regression components and large residual components.

Regression Line

y = α + βx, where α is the intercept and β is the slope of the line. This relationship is almost never perfect, so we must add an error term (e), which represents the variability of y for a given x. Assuming e follows a normal distribution with mean 0 and variance σ², the full linear-regression model takes the form y = α + βx + e.

Raw Sum of Squares for x

Σⁿᵢ₌₁ xᵢ²

Raw Sum of Squares for y

Σⁿᵢ₌₁ yᵢ²

Decomposition of the Total Sum of Squares into Regression and Residual Components

Σⁿᵢ₌₁ (yᵢ - ȳ)² = Σⁿᵢ₌₁ (ŷᵢ - ȳ)² + Σⁿᵢ₌₁ (yᵢ - ŷᵢ)², or Total SS = Reg SS + Res SS

