Statistics

Box plot (HF)

A box plot, also known as a box and whisker plot, was developed by John Tukey to examine the dispersion of data, specifically examining outliers around the median. A box plot uses the 1st and 3rd quartiles, which bracket the middle 50% of scores. The range between the 1st and 3rd quartiles is known as the interquartile range (IQR). We draw a box from the 1st to the 3rd quartile, with a line inside the box representing the median. Then we draw the whiskers of the box plot. The whiskers extend from the box to the most extreme scores that still fall within 1.5 x IQR below the 1st quartile and 1.5 x IQR above the 3rd quartile. Any point beyond these limits is considered an outlier. Examining the box plot allows us to tell whether the distribution is symmetric by checking whether the median lies in the center of the box. Skewness can also be judged by comparing the lengths of the two whiskers. Outliers appear as individual points plotted beyond the whiskers.
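
As an illustrative sketch in Python (the scores are made up for this example), the quartiles, IQR, whisker limits, and outlier rule can be computed like this:

import numpy as np

scores = np.array([2, 4, 5, 5, 6, 7, 7, 8, 9, 25])  # made-up data with one extreme value

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1                      # interquartile range
lower_fence = q1 - 1.5 * iqr       # whiskers stop at the most extreme points inside the fences
upper_fence = q3 + 1.5 * iqr
outliers = scores[(scores < lower_fence) | (scores > upper_fence)]
print(q1, q3, iqr, outliers)       # the value 25 is flagged as an outlier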

Confidence intervals/95% confidence interval around mean (HF) (Memo)

A confidence interval gives an estimated range of values that is likely to include an unknown population parameter (e.g., µ). With a 95% confidence interval, we expect that 95% of our interval estimates, constructed from repeated random samples of the same size, will include the population parameter. Importantly, confidence intervals are statements about the procedure used to generate the interval, not about any single interval: they do not mean there is a 95% probability that the parameter lies within one particular interval. In other words, confidence intervals indicate the confidence we have in the process used to generate the interval. Confidence intervals around a mean are typically constructed using the t distribution (the mean plus or minus the critical t value times the standard error). For example, when reporting an individual's score on the WISC, you provide a confidence interval to convey how certain we are that the true score falls within that range, which gives a more accurate description of where the true score likely falls by accounting for measurement error and sampling variability.
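
A minimal Python sketch of a 95% confidence interval around a mean, using the t distribution and made-up scores:

import numpy as np
from scipy import stats

scores = np.array([98, 102, 105, 110, 95, 101, 99, 107])   # made-up sample
mean = scores.mean()
sem = stats.sem(scores)                          # standard error of the mean (uses n - 1)
t_crit = stats.t.ppf(0.975, df=len(scores) - 1)  # two-tailed 95% critical value
ci_lower, ci_upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI: [{ci_lower:.2f}, {ci_upper:.2f}]")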

Shape of frequency distribution (location, spread, and shape)

A frequency distribution is a visual representation of scores. In examining a frequency distribution, we can look at modality, skewness, and kurtosis. Modality is the number of major peaks in a distribution based on the mode. A unimodal distribution has one major peak, whereas a bimodal distribution has two major peaks. Notably, the peaks do not need to be the same height. A normal distribution is unimodal. Skewness is the degree of asymmetry in the distribution. A positively skewed distribution has scores stacked up on the left (low) end with a long tail to the right, whereas a negatively skewed distribution has scores stacked up on the right (high) end with a long tail to the left. A normal distribution is centered with no skew. Lastly, kurtosis is the relative concentration of scores in the center of the distribution as well as in the tails and shoulders of the distribution. A mesokurtic distribution has tails that are neither too thin nor too thick, with neither too many nor too few scores concentrated in the center; a normal distribution is mesokurtic. A platykurtic distribution flattens out because more scores are concentrated in the shoulders. A leptokurtic distribution is peaked, with too many scores concentrated in the center and in the tails.
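
A short Python sketch (simulated data) showing how skewness and excess kurtosis can be quantified; values near 0 on both indicate a roughly normal shape:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_data = rng.normal(size=10_000)
skewed_data = rng.exponential(size=10_000)   # positively skewed

print(stats.skew(normal_data), stats.kurtosis(normal_data))   # both near 0: symmetric, mesokurtic
print(stats.skew(skewed_data), stats.kurtosis(skewed_data))   # positive skew, heavier tails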

Normal distribution (contaminated/mixed) (HF)

A normal distribution is a distribution that is unimodal (i.e., one peak), mesokurtic (meaning not too tall and not too flat), and symmetrical (i.e., not skewed). A normal distribution will have an equal mean, median, and mode. Many statistical tests assume that the data are normally distributed. A mixed/contaminated normal distribution is a mixture or weighted sum of two normal distributions with different means, variances, or both. This occurs when people who end up in the sample are not actually part of the intended population, so your distribution may not truly be normal: some individuals were never intended to be in the distribution, leading to potential skewness or other deviations from normality. For example, suppose you want to measure community reaction to violence and intend to measure the reaction of only "normal" people; when you sample people for your study, however, you accidentally end up with people high in psychopathy in your sample. The result would be a mixed normal distribution, which could be skewed. Mixed normal distributions typically have thicker tails, indicating more outliers. A solution to this problem is to trim your sample (i.e., cut off the bottom and top 10%) or winsorize your sample (i.e., replace the most extreme values with the most extreme remaining values). Using robust methods such as these protects your estimates from outliers that may come from unintended individuals in your sample.
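
A rough Python sketch of a contaminated normal distribution built from simulated data, comparing ordinary and robust summaries; the mixing proportion and variances here are arbitrary choices for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
clean = rng.normal(loc=0, scale=1, size=900)        # intended population
contaminant = rng.normal(loc=0, scale=5, size=100)  # unintended, high-variance cases
mixed = np.concatenate([clean, contaminant])        # contaminated (mixed) normal

print(np.std(clean, ddof=1), np.std(mixed, ddof=1))              # SD inflated by the thick tails
winsorized = stats.mstats.winsorize(mixed, limits=[0.10, 0.10])  # replace extreme 10% at each end
print(winsorized.std(ddof=1))                # closer to the spread of the intended population
print(stats.trim_mean(mixed, 0.10))          # 10% trimmed mean, resistant to the extremes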

Z-scores (HF) (Memo)

A z-score is a type of standardization score that can rescale data. A z-score represents the number of standard deviations that an observation is above or below the mean. For instance, z = 1 represents a score that is 1 standard deviation unit above the mean. A negative z-score would be below the mean. The formula for obtaining a z-score is z = (X - µ)/σ, where X is the score you want the z-score for, µ is the mean, and σ is the standard deviation. The distribution of z-scores has a mean of 0 and a standard deviation of 1. Notably, the shape of the distribution will stay the same, so if the distribution of raw data is not normal, changing to z-scores will not make it normal. Converting scores to z-scores allows us to compare standard distributions across samples/studies and compare individual scores against each other. Z-scores can also help you find the percentile of a score (i.e., the percentage of scores that fall below the score you are looking at).
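
A minimal Python sketch of the z-score formula applied to made-up raw scores (the sample mean and SD stand in for µ and σ here):

import numpy as np
from scipy import stats

scores = np.array([70, 85, 90, 95, 100, 105, 110, 115, 130])   # made-up raw scores
z = (scores - scores.mean()) / scores.std(ddof=1)               # z = (X - mean) / SD

print(z.mean(), z.std(ddof=1))    # mean ≈ 0 (within rounding) and SD = 1 after standardizing
print(stats.norm.cdf(1.0))        # ≈ 0.84: percentile of z = 1 under a normal curve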

Pearson's correlation coefficient/correlation

Correlation looks at the relationship between two variables. There are four assumptions for running a correlation: both variables must be continuous, linearity (i.e., the relationship between the variables is linear), an independent random sample, and bivariate normality (i.e., both variables are normally distributed, and each is normally distributed at every value of the other variable). The Pearson product-moment correlation coefficient ranges from -1 to 1. The coefficient is calculated by dividing the covariance of the two variables by the product of their standard deviations. The closer the value is to either -1 or 1, the stronger the relationship between the two variables. For example, a study may examine whether depression and anxiety are correlated. If they are correlated at 0.9, that is close to 1, indicating a strong relationship between the two variables. If the variables are correlated at 0.1, the relationship is weak at best. While the correlation coefficient can tell us whether the variables are related, it cannot establish cause and effect. With smaller samples, Pearson's correlation coefficient can be biased, and it can also be weakened by a restricted range.
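
A brief Python sketch (simulated data) computing Pearson's r both with scipy and by hand as the covariance divided by the product of the standard deviations:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
depression = rng.normal(size=100)
anxiety = 0.8 * depression + rng.normal(scale=0.6, size=100)   # built-in positive relationship

r, p = stats.pearsonr(depression, anxiety)
cov = np.cov(depression, anxiety)[0, 1]
r_by_hand = cov / (depression.std(ddof=1) * anxiety.std(ddof=1))  # covariance / product of SDs
print(r, r_by_hand, p)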

Effect size (vs. statistical significance) (HF) (Memo)

Effect size estimates the magnitude of an observed effect. This allows for comparison between effects and can assist in real-world applications. An effect size is often reported using Pearson's r, Cohen's d, eta squared, or omega squared, depending on the statistical test used. For Cohen's d, conventional benchmarks are small (d = 0.2), medium (d = 0.5), and large (d = 0.8). Effect size also allows us to assess the stability of results across samples, designs, and analyses. The APA task force recommends always reporting effect sizes. Statistical significance, on the other hand, concerns the probability of obtaining results at least this extreme if the null hypothesis were true; results are declared significant when that probability falls at or below the alpha level selected for the test. Alpha levels are often set at 0.05 or 0.01, which allows us to say that a difference this large would occur less than 5% or 1% of the time if the null hypothesis were true. This is an all-or-nothing conclusion; an effect is either significant or not, and significance says nothing about the magnitude of the effect. Overall, statistical significance tells you whether an effect is likely real, whereas effect size tells you how large it is.
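
A minimal Python sketch computing Cohen's d from two made-up groups using the pooled standard deviation:

import numpy as np

treatment = np.array([12, 14, 15, 16, 18, 20])   # made-up group scores
control = np.array([10, 11, 12, 13, 14, 15])

n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1))
                    / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd   # Cohen's d
print(d)   # compare against the 0.2 / 0.5 / 0.8 benchmarks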

Family-wise error rate/alpha corrected post-hoc tests (HF) (Memo)

Family-wise error rate is the probability that the family (set of comparisons) will contain at least one Type I error. A Type I error is rejecting the null hypothesis when the null hypothesis is true. The family-wise error rate is calculated as 1 - (1 - α)^c, where c is the number of comparisons. For example, if α = 0.05 with 4 comparisons, 1 - (1 - 0.05)^4 = 0.185, meaning we have an 18.5% chance of making at least one Type I error. Error inflation is created by multiple post-hoc comparisons made in a single experiment; as you conduct more contrasts, the family-wise error rate goes up. Alpha-corrected post-hoc tests control for family-wise error. Post hoc refers to tests that are chosen after the experimenter has collected and examined the data. There are different tests, including Bonferroni (the most stringent correction), Sidak (allows slightly greater power than Bonferroni), Tukey HSD, and others. These post-hoc corrections decrease the likelihood of committing a Type I error across the family of comparisons.
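
A quick Python sketch of the family-wise error rate formula and a simple Bonferroni correction (alpha and the number of comparisons are taken from the example above):

alpha = 0.05
comparisons = 4

fwer = 1 - (1 - alpha) ** comparisons                        # 1 - (1 - α)^c ≈ 0.185
bonferroni_alpha = alpha / comparisons                       # 0.0125 per-comparison criterion
corrected_fwer = 1 - (1 - bonferroni_alpha) ** comparisons   # back to roughly 0.05
print(fwer, bonferroni_alpha, corrected_fwer)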

Homoscedasticity in regression (vs. heteroscedasticity) (HF) (Memo)

Homoscedasticity is one of the six assumptions of regression and refers to constant variance of the residuals (i.e., a constant spread of residuals across all values of X). Ideally, we want the variance of Y at each value of X to be constant. Homoscedasticity is examined using the plot of the regression standardized predicted values against the regression standardized residuals (ZPRED by ZRESID). In examining this plot, we want the scatter to be randomly distributed around 0 on the y-axis. This is opposed to heteroscedasticity, in which the variance of the residuals differs across values of X rather than remaining constant. We expect errors in prediction, but we want those errors to be random and therefore uniformly distributed around the regression line. If there is a pattern in the residuals, or the spread is tighter in some regions of the regression line and wider in others, then we likely have heteroscedasticity, which violates the assumption of homoscedasticity.
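
As a rough sketch of how this check might look in Python (simulated, homoscedastic data; variable names are just for illustration), you can plot standardized residuals against standardized predicted values and look for an even band around zero:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=200)
y = 2 * x + rng.normal(scale=1.0, size=200)   # constant error variance (homoscedastic)

slope, intercept, *_ = stats.linregress(x, y)
predicted = intercept + slope * x
residuals = y - predicted

# Standardize both axes and look for random scatter around zero with even spread.
plt.scatter(stats.zscore(predicted), stats.zscore(residuals))
plt.axhline(0)
plt.xlabel("Standardized predicted values")
plt.ylabel("Standardized residuals")
plt.show()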

Measures of dispersion/variability

Measures of dispersion describe the variability of scores around the median, the mode, the mean, or any other point. Some measures of dispersion are the range, the interquartile range, the standard deviation, and the variance. The range is a measure of distance, namely the distance from the lowest to the highest score, and it is heavily impacted by outliers. The interquartile range is obtained by discarding the upper 25% and the lower 25% of the distribution and taking the range of what remains, which reduces the impact of outliers. Variance is the sum of the squared deviations from the mean divided by N - 1 (for the sample variance). The standard deviation is defined as the positive square root of the variance.
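
A minimal Python sketch computing these four measures on made-up scores that include one extreme value:

import numpy as np
from scipy import stats

scores = np.array([3, 5, 6, 7, 8, 9, 10, 12, 14, 40])   # made-up data with one extreme score

value_range = scores.max() - scores.min()   # distance from lowest to highest (outlier-sensitive)
iqr = stats.iqr(scores)                     # interquartile range (middle 50%)
variance = scores.var(ddof=1)               # sum of squared deviations / (N - 1)
sd = scores.std(ddof=1)                     # positive square root of the variance
print(value_range, iqr, variance, sd)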

Mediation vs. moderation (HF)

Mediation is when there is some variable that explains the relationship between two variables. The independent variable predicts the mediator which predicts the dependent variable. For example, let's say the levels of care from your parents (independent variable) lead to feelings of competence and self-esteem (mediator), which, in turn, leads to high confidence in your abilities when you become a mother (dependent variable). Baron and Kenny argued that to claim a mediation, we need to first show that there is a significant relationship between the independent variable and the mediator. The next step is to show that there is a significant relationship between the mediator and the dependent variable and between the independent and dependent variables. The final step consists of demonstrating that when the mediator and the independent variable are used simultaneously to predict the dependent variable, the previously significant path between the independent and dependent variables is now greatly reduced. Moderation refers to situations in which the relationship between the independent and dependent variables changes as a function of the level of a third variable (the moderator). The moderator influences the magnitude or direction of the relationship between the independent and dependent variables. For example, a study hypothesized that individuals who experience more stress, as assessed by a measure of daily hassles (the independent variable), would experience higher levels of symptoms (the dependent variable) than those who experience little stress. However, they also expected that if a person had a high level of social support to help deal with their stress (the moderator), symptoms would increase only slowly with increases in hassles.
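
As a rough sketch (simulated data, statsmodels assumed to be available), the Baron and Kenny steps can be run as a series of regressions; a moderation analysis would instead add a product (interaction) term to the model:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
care = rng.normal(size=300)                                   # independent variable
esteem = 0.6 * care + rng.normal(scale=0.8, size=300)         # mediator driven by the IV
confidence = 0.5 * esteem + rng.normal(scale=0.8, size=300)   # DV driven by the mediator

# Baron & Kenny steps: IV -> mediator, IV -> DV, then IV + mediator -> DV together.
print(sm.OLS(esteem, sm.add_constant(care)).fit().params)
print(sm.OLS(confidence, sm.add_constant(care)).fit().params)
both = sm.add_constant(np.column_stack([care, esteem]))
print(sm.OLS(confidence, both).fit().params)   # the direct path for care should shrink here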

(High) multicollinearity/Tolerance as used in regression

Multicollinearity is when the predictors within an equation or model are correlated among themselves. High multicollinearity can be problematic because what a given X variable explains in Y may be redundant with another X variable. It also increases the standard error of a regression coefficient, which widens the confidence interval and decreases the t-value for that coefficient, and it increases the instability of the regression equation. At the extreme, high multicollinearity results in singularity, in which the multiple regression equation cannot be calculated at all. To examine multicollinearity, we examine tolerance. Tolerance for a predictor is computed as 1 minus the R-squared obtained when that predictor is regressed on all of the other predictors, so it tells us the degree of overlap among predictors, helping us see which predictors have information in common and which are relatively independent. Tolerance also alerts us to potential problems of instability in our model. You want tolerance to be high; values below about 0.10 are commonly treated as a sign of problematic multicollinearity. To correct for multicollinearity, 1) eliminate redundant X variables from the analysis, 2) combine X variables through factor analysis, or 3) transform the data.
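
A minimal Python sketch (simulated predictors) computing tolerance as 1 - R² from regressing each predictor on the others; the helper function is just for illustration:

import numpy as np

rng = np.random.default_rng(5)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=200)   # x2 is largely redundant with x1
x3 = rng.normal(size=200)                         # x3 is independent of the others

def tolerance(target, others):
    """Tolerance = 1 - R^2 from regressing one predictor on the remaining predictors."""
    X = np.column_stack([np.ones(len(target))] + others)
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    r2 = 1 - ((target - X @ coef) ** 2).sum() / ((target - target.mean()) ** 2).sum()
    return 1 - r2

print(tolerance(x2, [x1, x3]))   # low tolerance: redundant with x1
print(tolerance(x3, [x1, x2]))   # near 1: mostly independent information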

Null hypothesis testing/null/Type II error (vs. Type I error rate) in null hypothesis testing (HF) (Memo)

Null hypothesis testing is the framework used to evaluate research hypotheses against chance. In null hypothesis testing, there is a research hypothesis, which is what we want to test, and a null hypothesis, which assumes there is no difference. For example, if you wanted to test the idea that people backing out of a parking space take longer when someone is waiting, that is your research hypothesis; your null hypothesis is that leaving times do not depend on whether someone is waiting. After completing the study, there are two options: you can either reject or fail to reject the null hypothesis. You would reject the null hypothesis if your results suggest that people backing out do take longer when someone is waiting. You would fail to reject the null hypothesis if there is not enough evidence to suggest there is a difference. When you make a decision about the null hypothesis, it is possible that you have made an error. A Type I error occurs when you reject the null hypothesis when in fact the null hypothesis is true (e.g., there is no difference in backing-out times, but you concluded there was). The Type I error rate is equal to alpha, because alpha is the probability of rejecting the null hypothesis when it is actually true. For example, if alpha equals 0.02, then there is a 2% chance of rejecting the null hypothesis when there really is no difference in backing-out times. The larger the alpha, the higher the chance of a Type I error. By reducing alpha, however, we increase the probability of committing a Type II error. A Type II error occurs when you fail to reject the null hypothesis when the null hypothesis is false. The Type II error rate is symbolized by beta.
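
A small Python simulation sketch illustrating that, when the null hypothesis is true, the long-run rate of Type I errors matches alpha (the group sizes and number of simulations are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha = 0.05
false_rejections = 0

for _ in range(5_000):                       # both groups come from the same population,
    a = rng.normal(size=30)                  # so the null hypothesis is true by construction
    b = rng.normal(size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_rejections += 1                # rejecting here is a Type I error

print(false_rejections / 5_000)              # long-run rate ≈ alpha (0.05)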

Outliers and their effects

Outliers are observations that are widely separated from the rest of the data and sometimes represent errors in recording data. An example of an error causing an outlier is coding 98 for age instead of 28. Outliers affect the skewness of a distribution by pulling the distribution in the direction of the outliers. This means that the mean value is pulled toward the outliers. Outliers can be detected using both visual techniques, like frequency distributions, scatterplots, and box plots, and statistical techniques like Cook's D. To deal with outliers, we can correct data entry or collection errors by double-entering the data. Researchers can also reduce the influence of extreme cases using trimmed means or winsorizing the data. Further, outliers can be dealt with by removing the value completely.

Slope and intercept in bivariate (1-predictor) regression (HF) (Memo)

Regression is a statistical technique used to investigate how variation in one or more variables predicts or explains variation in another variable. A bivariate regression is one in which a single variable is used to predict or explain the variation in another variable. The formula for a bivariate regression is Ŷ = bX + a, where Ŷ is the predicted value of Y (the dependent variable), b is the slope of the regression line, X is the value of the predictor variable (independent variable), and a is the intercept. The slope (b) represents the amount of change in Ŷ associated with a one-unit change in X. The slope can be positive or negative, and it represents the steepness or rate of change of the line; for example, a slope of 3 is steeper than a slope of 1, indicating that Ŷ changes more for each unit change in X. The intercept (a) is the value of Ŷ when X = 0. We can plug in any X value to get the predicted Y value (Ŷ). Importantly, the equation uses Ŷ instead of Y because the hat signals that there is error in our prediction. We can examine residuals (i.e., the differences between observed Y and Ŷ) by comparing the actual Y values with this line.
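
A minimal Python sketch fitting a one-predictor regression to made-up data and recovering the slope, intercept, predicted values, and residuals:

import numpy as np
from scipy import stats

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # made-up predictor (X)
exam = np.array([52, 55, 61, 64, 70, 72, 79, 83])   # made-up outcome (Y)

result = stats.linregress(hours, exam)
print(result.slope)       # b: predicted change in Y for a one-unit change in X
print(result.intercept)   # a: predicted Y when X = 0
y_hat = result.intercept + result.slope * hours     # Ŷ = a + bX
residuals = exam - y_hat                            # error in prediction (Y - Ŷ)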

Residuals

Residuals are the values we obtain when subtracting predicted scores from actual scores. For example, in a regression, there is a regression line giving the predicted Y value (dependent variable) we would expect based on an X value (independent variable). A residual is the difference between the predicted Y value on the regression line and the actual obtained score from the data. In other words, residuals represent the error in prediction. The smaller the residuals (in absolute value), the more accurate the predictions in your regression. Scatterplots with the regression line help visualize the residuals in a data set. In regression, residuals are important because one of the assumptions of regression is homoscedasticity, which assumes that the variance of the residuals is constant across values of the predictor.

Statistical interaction/Simple effects, main effects, interaction (HF) (Memo)

A statistical interaction occurs when one independent variable's effect on the dependent variable differs according to the levels of another independent variable. This is also referred to as moderation. For example, imagine a study that measures the amount of time to run a race (the dependent variable). The first independent variable of interest may be the number of hours spent practicing (low vs. high), and the second independent variable may be age (young vs. old). The results could be that with little practice, everyone takes about the same amount of time, but with lots of practice, older athletes improve less than younger athletes who practiced the same amount; that pattern is an interaction. A main effect is a difference between the levels of one independent variable averaging over (ignoring) the other independent variable (e.g., the difference in race time between low and high practice, regardless of age). A simple effect is the effect of one independent variable at a single level of the other independent variable (e.g., the effect of practice on race time among older runners only). The interaction asks whether the simple effects differ, that is, whether the effect of one independent variable depends on the level of the other (e.g., whether the benefit of practice differs for younger versus older runners).
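
A conceptual Python sketch using made-up cell means for the 2 x 2 example above, showing how simple effects, a main effect, and the interaction (the difference between simple effects) are related:

import numpy as np

# Made-up mean race times (minutes) in a 2 x 2 design: practice (low/high) x age (young/old).
cell_means = {("low", "young"): 30.0, ("low", "old"): 31.0,
              ("high", "young"): 22.0, ("high", "old"): 28.0}

# Simple effects: effect of practice within each age group.
simple_young = cell_means[("high", "young")] - cell_means[("low", "young")]   # -8.0
simple_old = cell_means[("high", "old")] - cell_means[("low", "old")]         # -3.0

# Main effect of practice: difference between practice levels averaging over age (equal cell sizes).
main_practice = np.mean([simple_young, simple_old])                           # -5.5

# Interaction: the simple effects differ, so practice helps younger runners more.
print(simple_young, simple_old, main_practice, simple_young - simple_old)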

Student's t-test

Student's t-test is a null hypothesis significance test typically used to test whether there are mean differences between two groups (or between a sample and a population) using the t-statistic and the t distribution. T-tests involve a continuous dependent variable and a nominal independent variable. There are three types of t-tests. The one-sample t-test is used when the population mean is known but the population variance is not; it compares the sample mean to the population mean, using the sample variance as an estimate of the population variance and relying on the Central Limit Theorem for the shape of the sampling distribution. A matched/dependent-samples t-test has two sets of scores that are paired together and tests whether the difference scores are significantly different from 0. For instance, this may be used when you are trying to see whether a treatment worked for the same participants: you could compare their baseline and post-treatment scores. Lastly, an independent-samples t-test tests the difference between the means of two independent groups, such as a treatment group versus a control group. The assumptions for t-tests are normality of the sampling distribution, homogeneity of variance, and independence of observations.
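
A minimal Python sketch (simulated scores) running the three types of t-tests with scipy:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
baseline = rng.normal(loc=20, scale=4, size=25)         # made-up symptom scores
post = baseline - rng.normal(loc=3, scale=2, size=25)   # same people after treatment
control = rng.normal(loc=20, scale=4, size=25)          # separate control group

print(stats.ttest_1samp(baseline, popmean=18))   # one-sample: sample mean vs. known population mean
print(stats.ttest_rel(baseline, post))           # matched/dependent samples: difference scores vs. 0
print(stats.ttest_ind(post, control))            # independent samples: two separate groups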

Assumption in ANOVA

The analysis of variance (ANOVA) is used to explore differences among the means of two or more groups as well as to examine the individual and interacting effects of multiple independent variables. To use an ANOVA, the independent variable should consist of two or more categorical, independent groups (e.g., ethnicity), and the dependent variable should be continuous (i.e., interval or ratio). After ensuring ANOVA is the right test for the independent/dependent variables, there are multiple assumptions that must be met. The first assumption is homogeneity of variance, or homoscedasticity, which means that all of the populations being compared have the same variance. A second assumption is that the dependent variable is normally distributed around its mean within each group. The third assumption is that the observations are independent of one another. For repeated-measures ANOVA, sphericity is an additional assumption, which requires that the variances of the difference scores between all pairs of conditions be equal; this assumption is rarely met perfectly.
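
A short Python sketch (simulated groups) of two common assumption checks, Levene's test for homogeneity of variance and Shapiro-Wilk for normality within each group:

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
group1 = rng.normal(loc=10, scale=2, size=30)   # made-up scores for three independent groups
group2 = rng.normal(loc=12, scale=2, size=30)
group3 = rng.normal(loc=11, scale=2, size=30)

print(stats.levene(group1, group2, group3))     # homogeneity of variance check
for g in (group1, group2, group3):
    print(stats.shapiro(g))                     # normality check within each group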

F-ratio

The F-ratio is a statistic used in analysis of variance (ANOVA) to determine whether there are significant differences among group means. Its calculation depends on the design. For a within-subjects (repeated-measures) design, it is calculated as MS(treatment)/MS(error); for a between-groups design, it is calculated as MS(between groups)/MS(within groups). If the null hypothesis is true, the F-ratio should be close to 1, indicating no significant difference among group means, and you would fail to reject the null hypothesis. You want the F-ratio to be significantly greater than 1 to reject the null hypothesis; a larger F-ratio reflects greater variation among group means relative to the variation within groups. Importantly, a significant F-ratio tells us that the means differ, but not which means differ from each other. Further post-hoc testing (e.g., Tukey HSD or Bonferroni-corrected comparisons) is required to determine which means are different.
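
A minimal Python sketch computing a between-groups F-ratio by hand from made-up data and checking it against scipy's one-way ANOVA:

import numpy as np
from scipy import stats

groups = [np.array([4, 5, 6, 5, 4]),        # made-up scores for three groups
          np.array([7, 8, 6, 7, 8]),
          np.array([9, 10, 11, 9, 10])]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)

f_ratio = (ss_between / df_between) / (ss_within / df_within)   # MS(between) / MS(within)
print(f_ratio)
print(stats.f_oneway(*groups))   # should give the same F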

Central Limit Theorem (HF)

The central limit theorem is a statement about the sampling distribution of the mean. It states: given a population with mean µ and variance σ^2, the sampling distribution of the mean (the distribution of sample means) will have a mean equal to µ, a variance equal to σ^2/n, and a standard deviation equal to σ/√n. The distribution will approach the normal distribution as n, the sample size, increases. This means that as n increases, the shape of this sampling distribution becomes normal, whatever the shape of the parent population. The rate at which the sampling distribution of the mean approaches normal as n increases is a function of the shape of the parent population. If the population is itself normal, the sampling distribution of the mean will be normal regardless of n. If the population is symmetric but nonnormal, the sampling distribution of the mean will be nearly normal even for small sample sizes, especially if the population is unimodal. If the population is markedly skewed, sample sizes of 30 or more may be required before the means closely approximate a normal distribution.
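
A small Python simulation sketch of the central limit theorem using a markedly skewed (exponential) parent population and n = 30:

import numpy as np

rng = np.random.default_rng(9)
n = 30
# Markedly skewed parent population (exponential), with population mean = 1 and sd = 1.
sample_means = [rng.exponential(scale=1.0, size=n).mean() for _ in range(10_000)]

print(np.mean(sample_means))            # ≈ population mean (1.0)
print(np.std(sample_means, ddof=1))     # ≈ sigma / sqrt(n) = 1 / sqrt(30) ≈ 0.18
# A histogram of sample_means looks close to normal even though the population is skewed.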

Measures of central tendency/mean, median, and mode (Memo)

The phrase "measures of central tendency" refers to the set of measures that reflect where on the scale the distribution is centered. The three major measures of central tendency are mode, median, and mean. The mode (Mo) can be defined simply as the most common score, that is, the score obtained from the largest number of subjects. If two nonadjacent reaction times occur with equal (or nearly equal) frequency, we say that the distribution is bimodal and would most likely report both modes. The median is the score that corresponds to the point at or below which 50% of the scores fall when the data are arranged in numerical order. By this definition, the median is also called the 50th percentile. The median location can be calculated by the equation: (N + 1) / 2. The most common measure of central tendency is the mean, or what people generally have in mind when they use the word average. The mean is the sum of the scores divided by the number of scores. For example, let's say the data set includes 2, 2, 4, 5, and 7. The mode would be 2 because it occurs the most frequently. The median would be 4 because it is in the center of the distribution. The mean would be 4 because the sum of the numbers (20) divided by how many numbers occur (5) is 4. Only when the distribution is symmetric will the mean and the median be equal, and only when the distribution is symmetric and unimodal will all three measures be the same. It is often that one number is prioritized. For mode, this would be used if you want to understand what the largest number of people experience. The major advantage of the median is that is unaffected by extreme scores. The mean is most commonly used because the sample mean is generally a better estimate of the population mean than the mode or the median.

Robustness of an estimator

The robustness of an estimator refers to the extent to which a statistical test or estimate is affected by moderate departures from its underlying assumptions. Non-robust statistics have a reduced ability to 1) detect true differences between groups (power), 2) detect real relationships or differences among variables, and 3) produce accurate confidence intervals. The factors that most influence the robustness of estimators are skewness, outliers, and heteroscedasticity. For instance, the M-estimator is a modern robust regression technique designed to handle outliers or extreme observations. Small departures from normality should not drastically affect robust statistics like M-estimators, whereas standard regression and ANOVA are heavily influenced by heteroscedasticity, skew, and outliers, making them less robust under such conditions.

Sampling distribution of the mean

The sampling distribution of the mean is the distribution of values we would expect to obtain for the mean if we drew an infinite number of samples (with replacement) from the population in question and calculated the mean of each sample. The sampling distribution of the mean allows us to evaluate how likely a particular sample mean would be if the sample really came from that population. The sampling distribution of the mean is integral to the central limit theorem, which states that, for a population with a given mean and variance, the sampling distribution of the mean will have a mean equal to the population mean, a variance equal to the population variance divided by the sample size, and a standard deviation equal to the population standard deviation divided by the square root of the sample size. The sampling distribution of the mean approaches normal as n (the sample size) increases.

Scatter plot of bivariate regression

The scatter plot of bivariate regression examines the relationship between predictor (independent) and criterion (dependent) variables. It is best practice to check the scatterplot of every predictor with the criterion in exploratory data analysis. The scatter of points around the regression line indicates the strength and direction of the linear relationship and how well the predictor predicts the criterion. The regression line gives the best prediction of Y given a value of X, and the degree to which cases cluster around regression lines is related to correlation. The data in the scatter plot is displayed as a collection of points with the independent variable on the X-axis and the dependent variable on the Y-axis. This allows us to visually see the relationships (correlations) and also the shape of the relationship (e.g., linear). One of the most powerful aspects of a scatter plot is its ability to show nonlinear (e.g., curvilinear) relationships between variables, which a correlation coefficient cannot tell you.

Standard error of estimate (as used in regression)

The standard error of the estimate (SEE) is the standard deviation of the observed Y values around the predicted values (Ŷ). It is the most common measure of the error of prediction, specifically measuring the accuracy of the predictions made by a regression model, and it helps us determine how good our regression line is at predicting Y values. The SEE is calculated as the square root of the sum of squared deviations of Y about Ŷ (SSresidual) divided by the degrees of freedom (N - 2 in bivariate regression). The greater the distances between the obtained values and the predicted values, the larger the SEE. Squaring the SEE gives the residual variance (error variance), which can be shown to be an unbiased estimate of the corresponding parameter in the population.
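
A minimal Python sketch (made-up data) computing the SEE as the square root of SSresidual divided by N - 2:

import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])                   # made-up predictor
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.7])   # made-up outcome

result = stats.linregress(x, y)
y_hat = result.intercept + result.slope * x
ss_residual = ((y - y_hat) ** 2).sum()
see = np.sqrt(ss_residual / (len(x) - 2))   # df = N - 2 in bivariate regression
print(see)                                  # standard error of estimate
print(see ** 2)                             # residual (error) variance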

Standard error of the mean

The standard error of the mean (SEM) is the standard deviation of the distribution of sample means. In other words, it reflects the average distance of sample means from the population mean. This tells us how much confidence to place in an estimate of the population mean based on a sample mean. It follows from the Central Limit Theorem, which states that the mean of the sampling distribution of the mean equals the mean of the population and that the standard error of the mean equals the standard deviation of the population divided by the square root of N. The SEM therefore decreases as the sample size gets larger, and the sampling distribution of the mean approaches a normal distribution as N increases.

Transformation of data/re-expression

Transformation, or re-expression, of data means altering how the data are expressed. Data transformations are used for many reasons, such as to improve non-normal data, to change the shape of the distribution, to better satisfy the homogeneity of variance assumption, or simply to make the data more meaningful to interpret. Although transformation can make data more interpretable, nonlinear transformations can change (and even reverse) differences between means relative to the original values, so it is important to check how a transformation affects the relationships between variables. There are two main types of transformations: linear and nonlinear. Linear transformations re-express the data in more convenient units but do not change the shape of the distribution; centering a distribution or converting to standard scores are linear transformations. Nonlinear transformations change the shape of the distribution and can be used to correct skewness or reduce the effects of outliers. Nonlinear transformations include logarithmic transformations to reduce a positive skew and square-root transformations to compress the upper tail of the distribution.
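
A brief Python sketch (simulated positively skewed data) contrasting a linear transformation (z-scores), which leaves skew unchanged, with nonlinear log and square-root transformations, which reduce it:

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
reaction_times = rng.lognormal(mean=0, sigma=0.8, size=1_000)   # positively skewed data

z_scores = stats.zscore(reaction_times)   # linear: shape (and skew) unchanged
log_rt = np.log(reaction_times)           # nonlinear: reduces positive skew
sqrt_rt = np.sqrt(reaction_times)         # nonlinear: compresses the upper tail

print(stats.skew(reaction_times), stats.skew(z_scores), stats.skew(log_rt), stats.skew(sqrt_rt))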

Trimmed means

Trimmed means are means calculated on data from which we discard a certain percentage of the observations at each end of the distribution. For example, if we have a set of 100 observations and want to calculate a 10% trimmed mean, we simply discard the highest 10 scores and the lowest 10 scores and take the mean of what remains. This may be used when we have a sample with a great deal of dispersion, that is, a lot of very high and very low scores. By trimming extreme values from the sample, our estimate of the population mean becomes more stable. We may also do this to deal with skewness: in a very skewed distribution, extreme values pull the mean toward themselves and lead to a poorer estimate of the population mean, and trimming eliminates the influence of those extreme scores. However, trimming reduces the number of observations, which is a drawback. An alternative is a Winsorized sample, in which the trimmed scores are replaced by the most extreme remaining values rather than deleted. Importantly, trimming and Winsorizing may remove or alter meaningful scores, so researchers should make an informed decision before using these techniques.
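
A minimal Python sketch (made-up scores) comparing the ordinary mean, a 10% trimmed mean, and a Winsorized mean:

import numpy as np
from scipy import stats
from scipy.stats import mstats

scores = np.array([1, 22, 23, 24, 25, 26, 27, 28, 29, 95])   # made-up data with extreme values

print(scores.mean())                                          # pulled around by 1 and 95
print(stats.trim_mean(scores, 0.10))                          # drops the lowest and highest 10%
winsorized = mstats.winsorize(scores, limits=[0.10, 0.10])    # extremes replaced, not deleted
print(winsorized.mean())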

Variance (e.g., residual variance in regression)

Variance is the sum of the squared deviations from the mean divided by the number of scores; for the sample variance we divide by n - 1. Variance tells us the degree of dispersion (variability) of a distribution around its mean. Because it is based on squared deviations, it can be hard to interpret, so many people instead interpret the standard deviation, which is the positive square root of the variance. Residual variance in regression specifically describes the dispersion of scores around the regression line: it quantifies the differences between the observed values and the values predicted by the regression model.

