PSY. 266 CHAP. 6-11


Influence of Sample Size and Sample Variance

The number of scores in the sample and the magnitude of the sample variance both have a large effect on the t statistic and thereby influence the statistical decision. Because the estimated standard error, sM, appears in the denominator of the formula, a larger value for sM produces a smaller value (closer to zero) for t.
-Any factor that influences the standard error also affects the likelihood of rejecting H0 and finding a significant treatment effect.
-The estimated standard error is directly related to the sample variance: the larger the variance, the larger the error. Thus, large variance means you are less likely to obtain a significant treatment effect.
-Large variance means that the scores are widely scattered, which makes it difficult to see any consistent patterns or trends in the data.
-The estimated standard error is inversely related to the number of scores in the sample: the larger the sample, the smaller the error.
-If all other factors are held constant, large samples tend to produce bigger t statistics and therefore are more likely to produce significant results.

Characteristics of the Distribution of Sample Means

-Sample means should pile up around the population mean. -The pile of sample means should tend to form a normal-shaped distribution. -In general, the larger the sample size, the closer the sample means should be to the population mean, μ.

Steps of Hypothesis Testing with the T Statistic

1. State the hypotheses and select an alpha level. Although we have no information about the population of scores, it is possible to form a logical hypothesis about the value of μ.
2. Locate the critical region. The test statistic is a t statistic because the population variance is not known; therefore, the value for degrees of freedom must be determined before the critical region can be located.
3. Calculate the test statistic. The t statistic typically requires more computation than a z-score and can be divided into a three-stage process:
a. First, calculate the sample variance. Remember that the population variance is unknown, and you must use the sample value in its place: s^2 = SS/(n − 1) = SS/df
b. Next, use the sample variance (s^2) and the sample size (n) to compute the estimated standard error. This value is the denominator of the t statistic and measures how much difference is reasonable to expect by chance between a sample mean and the corresponding population mean: sM = √(s^2/n)
c. Compute the t statistic for the sample data: t = (M − μ)/sM
4. Make a decision regarding H0.
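The three computational stages above can be sketched in Python; the scores and the hypothesized μ below are hypothetical numbers chosen for illustration:

```python
import math

def one_sample_t(scores, mu_null):
    """Compute the one-sample t statistic in three stages."""
    n = len(scores)
    M = sum(scores) / n                          # sample mean
    SS = sum((x - M) ** 2 for x in scores)       # sum of squared deviations
    s2 = SS / (n - 1)                            # stage a: sample variance, s^2 = SS/df
    sM = math.sqrt(s2 / n)                       # stage b: estimated standard error
    t = (M - mu_null) / sM                       # stage c: t statistic
    return t, n - 1                              # t and degrees of freedom

# Hypothetical sample of n = 9 scores, testing H0: mu = 10
t, df = one_sample_t([12, 9, 13, 10, 14, 11, 12, 13, 14], mu_null=10)
```

The resulting t would then be compared with the critical t value for df = n − 1 at the chosen alpha level.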

Steps of the One-Tailed Test

1. State the hypotheses; and select an alpha level. 2. Locate the critical region. 3. Calculate the test statistic. 4. Make a decision.

Independent-Measures Designs

1. The two sets of data could come from two completely separate groups of participants. 2. The two sets of data could come from the same group of participants. -The first research strategy, using completely separate groups, is called an independent measures or a between-subjects design.

Assumptions of the t Test

1. The values in the sample must consist of independent observations. - In everyday terms, two observations are independent if there is no consistent, predictable relationship between the first observation and the second. 2. The population sampled must be normal - Violating this assumption has little practical effect on the results obtained for a t statistic, especially when the sample size is relatively large.

How to Find Probabilities for Specific X Values

1. Transform the X values into z-scores. 2. Use the unit normal table to look up the proportions corresponding to the z-score values. z = (X − μ)/σ
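The two steps can be sketched in Python, using `math.erf` in place of the unit normal table; the example values μ = 100, σ = 15, and X = 130 are hypothetical:

```python
import math

def normal_cdf(z):
    """Proportion of a normal distribution below z (stands in for the unit normal table)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Example: P(X > 130) for a normal distribution with mu = 100, sigma = 15
mu, sigma, X = 100, 15, 130
z = (X - mu) / sigma            # step 1: transform X into a z-score
p_above = 1 - normal_cdf(z)     # step 2: look up the tail proportion
```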

Sampling Distribution

A distribution of statistics obtained by selecting all the possible samples of a specific size from a population.

Probability

A fraction or a proportion of all the possible outcomes; obtained by random sampling.

Measuring Effect Size

A measure of effect size is intended to provide a measurement of the absolute magnitude of a treatment effect, independent of the size of the sample(s) being used. -Cohen's d measures the size of the mean difference in terms of the standard deviation: Cohen's d = mean difference / standard deviation
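A minimal sketch of Cohen's d for a single sample, using hypothetical scores and assuming the sample standard deviation is used as the standardizer:

```python
import math

def cohens_d(scores, mu):
    """Cohen's d = mean difference / standard deviation."""
    n = len(scores)
    M = sum(scores) / n
    SS = sum((x - M) ** 2 for x in scores)
    s = math.sqrt(SS / (n - 1))     # sample standard deviation
    return (M - mu) / s             # mean difference in standard-deviation units

# Hypothetical sample with hypothesized population mean mu = 10
d = cohens_d([12, 9, 13, 10, 14, 11, 12, 13, 14], mu=10)
```

Note that d does not shrink as n grows, which is exactly what makes it a sample-size-independent measure.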

More About Hypothesis Tests

A result is said to be significant or statistically significant if it is very unlikely to occur when the null hypothesis is true; that is, the result is sufficient to reject the null hypothesis. -Thus, a treatment has a significant effect if the decision from the hypothesis test is to reject H0.

More About Standard Error

A sample typically will not provide a perfectly accurate representation of its population; there typically is some discrepancy between a statistic computed for a sample and the corresponding parameter for the population. -Standard error provides a way to measure the "average" distance between a sample mean and the population mean.

Hypothesis Test

A statistical method that uses sample data to evaluate a hypothesis about a population; the general goal is to rule out chance (sampling error) as a plausible explanation for the results from a research study.
Steps:
1. State a hypothesis about the population.
2. Use the hypothesis to predict the characteristics the sample should have.
3. Obtain a sample from the population.
4. Compare the data with the hypothesis prediction.
-If the individuals are noticeably different, we have evidence that the treatment has an effect; however, it is also possible that the difference between the sample and the population is simply sampling error.
Purpose: To decide between two explanations:
1. The difference between the sample and the population can be explained by sampling error.
2. The difference between the sample and the population is too large to be explained by sampling error.

Factors that Affect Statistical Power

As effect size increases, the probability of rejecting H0 also increases, which means that the power of the test increases. -One factor that has a huge influence on power is the size of the sample. -Reducing the alpha level for a hypothesis test also reduces the power of the test. -If the treatment effect is in the predicted direction, changing from a two-tailed test to a one-tailed test increases power.

Assumptions for Hypothesis Tests with Z-Scores

Assumed that the participants were selected randomly. -Values in the sample must consist of independent observations: two events are independent if the occurrence of the first event has no effect on the probability of the second event. -The standard deviation for the unknown population (after treatment) is assumed to be the same as it was for the population before treatment. -To evaluate hypotheses with z-scores, we have used the unit normal table to identify the critical region; this table can be used only if the distribution of sample means is normal.

Assumptions Underlying the Independent Measures t Formula

Assumptions underlying the independent measures t formula: 1. The observations within each sample must be independent. 2. The two populations from which the samples are selected must be normal. 3. The two populations from which the samples are selected must have equal variances. -The first two assumptions should be familiar from the single-sample t hypothesis test.

The Shape of the Distribution of Sample Means

Distribution is almost perfectly normal if either of the following TWO conditions is satisfied: 1. The population from which the samples are selected is a normal distribution. 2. The number of scores (n) in each sample is relatively large, around 30 or more.

The Hypothesis Test: Step 3

Compare the sample means (data) with the null hypothesis. -Compute the test statistic. -The test statistic (z-score) forms a ratio comparing the obtained difference between the sample mean and the hypothesized population mean versus the amount of difference we would expect without any treatment effect (the standard error).
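The ratio described above can be written directly; the values M = 53, μ = 50, σ = 10, and n = 25 are hypothetical:

```python
import math

def z_statistic(M, mu, sigma, n):
    """z forms the ratio: obtained difference / difference expected by chance."""
    sigma_M = sigma / math.sqrt(n)   # standard error of M (expected by chance)
    return (M - mu) / sigma_M        # obtained difference over standard error

# Hypothetical sample: M = 53, H0 says mu = 50, sigma = 10, n = 25
z = z_statistic(M=53, mu=50, sigma=10, n=25)
```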

Degrees of Freedom

Describes the number of scores in a sample that are independent and free to vary. -Because the sample mean places a restriction on the value of one score, there are n-1 degrees of freedom for a sample with n scores.

Selecting an Alpha Level

Designed to minimize the risk of a Type I error; alpha levels tend to be very small probability values. -By convention, the largest permissible value is α = .05. -However, as the alpha level is lowered, the hypothesis test demands more evidence from the research results.

Directional Tests

Directional Hypothesis Test: In a one-tailed test, the statistical hypotheses (H0 and H1) specify either an increase or a decrease in the population mean; that is, they make a statement about the direction of the effect. -When a specific direction is expected for the treatment effect, it is possible for the researcher to perform a directional test. -1st Step (most critical): State the statistical hypotheses. The null hypothesis states that there is no treatment effect, and the alternative hypothesis states that there is an effect. The two hypotheses are mutually exclusive and cover all of the possibilities. -Critical Region: Defined by sample outcomes that are very unlikely to occur if the null hypothesis is true (if the treatment has no effect). Because the critical region is contained in one tail of the distribution, a directional test is commonly called a one-tailed test. -Also note that the proportion specified by the alpha level is not divided between two tails, but rather is contained entirely in one tail.

Matched Subjects Design

Each individual in one sample is matched with an individual in the other sample; the matching is done so that the two individuals are equivalent (or nearly equivalent) with respect to a specific variable that the researcher would like to control.

The Null Hypothesis

For Independent Measures Test: H0: µ1 − µ2 = 0 (No difference between the population means) -The alternative hypothesis states that there is a mean difference between the two populations: H1: µ1 − µ2 ≠ 0 -The alternative hypothesis can simply state that the two population means are not equal: µ1 ≠ µ2.

The Central Limit Theorem

For any population with mean μ and standard deviation σ, the distribution of sample means for sample size n will have a mean of μ and a standard deviation of σ/√n and will approach a normal distribution as n approaches infinity. -Distribution of sample means for any population, no matter what shape, mean, or standard deviation. -"Approaches" a normal distribution very rapidly; by the time the sample size reaches n=30, the distribution is almost perfectly normal.
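The theorem can be checked by simulation; this sketch draws samples of n = 30 from a flat, clearly non-normal population (die faces) and confirms that the sample means pile up around μ with a spread close to σ/√n:

```python
import math
import random

random.seed(0)
population = [1, 2, 3, 4, 5, 6]                         # uniform population, not normal
mu = sum(population) / len(population)                   # mu = 3.5
sigma = math.sqrt(sum((x - mu) ** 2 for x in population) / len(population))

n = 30                                                   # sample size
means = [sum(random.choices(population, k=n)) / n for _ in range(20000)]

mean_of_means = sum(means) / len(means)                  # should be close to mu
sd_of_means = math.sqrt(
    sum((m - mean_of_means) ** 2 for m in means) / len(means)
)                                                        # should be close to sigma / sqrt(n)
```

Even though the parent population is flat, a histogram of `means` would look nearly normal, as the theorem predicts.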

Confidence Intervals

For the independent-measures t, we use a sample mean difference, M1 - M2, to estimate the population mean difference, µ1 − µ2. 1. Solve the t equation for the unknown parameter. For the independent measures t statistic, we obtain µ1 − µ2 = M1 - M2 +/- ts (M1-M2) -The values for M1-M2 and for s(M1-M2) are obtained from the sample data. -Although the value for the t statistic is unknown, we can use the degrees of freedom for the t statistic and the t distribution table to estimate the t value. -Using the estimated t and the known values from the sample, we can compute the value of µ1 − µ2.

Binomial Distribution

Formed by a series of observations for which there are exactly two possible outcomes. The two outcomes are identified as A and B, with probabilities of p(A) = p and p(B) = q; shows the probability of each value of X, where X is the number of occurrences of A in a series of n observations; when pn and qn are both greater than 10, the binomial distribution is closely approximated by a normal distribution with a mean of μ = pn and a standard deviation of σ = √(npq). -In this situation, a z-score can be computed for each value of X and the unit normal table can be used to determine probabilities for specific outcomes.
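The normal approximation above can be sketched directly; the coin-toss example (40 heads in 64 tosses) is hypothetical:

```python
import math

def binomial_z(X, n, p):
    """z-score for X occurrences out of n, valid when pn and qn both exceed 10."""
    q = 1 - p
    mu = p * n                      # mean of the binomial distribution
    sigma = math.sqrt(n * p * q)    # standard deviation, sqrt(npq)
    return (X - mu) / sigma

# Example: 40 heads in 64 coin tosses (pn = qn = 32, so the approximation applies)
z = binomial_z(X=40, n=64, p=0.5)
```

The resulting z can then be looked up in the unit normal table just like any other z-score.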

The Null Hypothesis and the Independent-Measures t Statistic

Goal is to evaluate the mean difference between two populations (or between two treatment conditions); using subscripts to differentiate the two populations, the mean for the first population is µ1, and the second population mean is µ2. -The difference between means is simply µ1 − µ2. -As always, the null hypothesis states that there is no change, no effect, or no difference.

Uncertainty and Errors in Hypothesis Testing

Hypothesis testing is an inferential process, which means that it uses limited information as the basis for reaching a general conclusion. -A sample provides only limited or incomplete information about the whole population, and yet a hypothesis test uses a sample to draw a conclusion about the population; there is always the possibility that an incorrect conclusion will be made.

The Hypothesis Test: Step 4

If the test statistic results are in the critical region, we conclude that the difference is significant or that the treatment has a significant effect; in this case we reject the null hypothesis. -If the mean difference is not in the critical region, we conclude that the evidence from the sample is not sufficient, and the decision is fail to reject the null hypothesis.

Repeated Measures and Matched-Subjects Designs

In a repeated-measures design or a matched subjects design comparing two treatment conditions, the data consist of two sets of scores, which are grouped into sets of two, corresponding to the two scores obtained for each individual or each pair of subjects. -Because the scores in one set are directly related, one-to-one, with the scores in the second set, the two research designs are statistically equivalent and share the common name related-samples designs (or correlated-samples designs).

Factors That Influence A Hypothesis Test

The final decision in a hypothesis test is determined by the value obtained for the z-score statistic. -Factors that help determine whether z-score will be large enough to reject H0: 1. In a hypothesis test, higher variability can reduce the chances of finding a significant treatment effect. 2. Increasing the number of scores in the sample produces a smaller standard error and a larger value for the z-score.

Hypothesis Tests for the Repeated-Measures Design

In a repeated-measures study, each individual is measured in two different treatment conditions, and we are interested in whether there is a systematic difference between the scores in the first treatment condition and the scores in the second treatment condition. -A difference score is computed for each person; the hypothesis test uses the difference scores from the sample to evaluate the overall mean difference, µD, for the entire population. -The hypothesis test with the repeated-measures t statistic follows the same four-step process that we have used for other tests: 1. State the hypotheses, and select the alpha level. 2. Locate the critical region. 3. Calculate the t statistic. 4. Make a decision.

Concerns About Hypothesis Testing: Measuring Effect Size

Limitations with using a hypothesis test to establish the significance of a treatment effect: 1. When the null hypothesis is rejected, we are actually making a strong probability statement about the sample data, not about the null hypothesis. 2. Demonstrating a significant treatment effect does not necessarily indicate a substantial treatment effect.

Unit Normal Table

Lists several different proportions corresponding to each z-score location; Column A lists z-score values, Columns B & C list the proportions in the body and tail, respectively, Column D lists the proportion between the mean and the z-score location. Table values can also be used to determine probabilities.

Type II Errors

Occurs when a researcher fails to reject a null hypothesis that is really false. -Means that the hypothesis test has failed to detect a real treatment effect. -Occurs when the sample mean is not in the critical region even though the treatment has an effect on the sample; often this happens when the effect of the treatment is relatively small. -The consequences of a Type II error are usually not as serious as those of a Type I error; a Type II error means that the research data do not show the results that the researcher had hoped to obtain. -Unlike a Type I error, it is impossible to determine a single, exact probability for a Type II error. -Represented by β.

Type I Errors

Occurs when a researcher rejects a null hypothesis that is actually true; in a typical research situation, a Type I error means the researcher concludes that a treatment does have an effect when in fact it has no effect. -Occurs when a researcher unknowingly obtains an extreme, nonrepresentative sample; the hypothesis test is structured to minimize the risk that this will occur.

Independent Random Sample

Requires that each individual has an equal chance of being selected and that the probability of being selected stays constant from one selection to the next if more than one individual is selected.

Random Sample

Requires that each individual in the population has an equal chance of being selected; the sample that is obtained by this process is called a SIMPLE RANDOM SAMPLE.

Test Statistic

Simply indicates that the sample data are converted into a single, specific statistic that is used to test the hypotheses. EX: The z-score statistic that is used in the hypothesis test. -In a hypothesis test with z-scores, we have a formula for z-scores but we do not know the value for the population mean, μ. -Therefore, we try the following steps: 1. Make a hypothesis about the value of μ. This is the null hypothesis. 2. Put the hypothesized value in the formula along with the other values. 3. If the formula produces a z-score near 0 (which is where z-scores are supposed to be), we conclude that the hypothesis was correct. 4. On the other hand, if the formula produces an extreme value (a very unlikely result), we conclude that the hypothesis was wrong.

The Hypothesis Test: Step 1

State the hypothesis about the unknown population. -The null hypothesis, H0, states that there is no change in the general population before and after an intervention. In the context of an experiment, H0 predicts that the independent variable had no effect on the dependent variable. -The alternative hypothesis, H1, states that there is a change in the general population following an intervention. In the context of an experiment, H1 predicts that the independent variable did have an effect on the dependent variable.

Probability and the Normal Distribution

Symmetrical with the highest frequency in the middle and the frequencies tapering off as you move toward either extreme; Because locations are identified by z-scores, the percentages shown in the figure apply to any normal distribution regardless of the values for the mean and the standard deviation.

The Unknown Population

The t statistic also permits hypothesis testing in situations for which you do not have a known population mean to serve as a standard; the t test does not require prior knowledge about the population mean or the population variance. -All you need to compute a t statistic is a null hypothesis and a sample from the unknown population.

The T Statistic

The estimated standard error (sM) is used as an estimate of the real standard error, σM, when the value of σ is unknown. -It is computed from the sample variance or sample standard deviation and provides an estimate of the standard distance between a sample mean M and the population mean μ. -The t statistic is used to test hypotheses about an unknown population mean, μ, when the value of σ is unknown. -The formula for the t statistic has the same structure as the z-score formula, except that the t statistic uses the estimated standard error in the denominator.

Distribution of Sample Means

The collection of sample means for all the possible random samples of a particular size (n) that can be obtained from a population.

The Final Formula and Degrees of Freedom

The complete formula for the independent-measures t statistic is as follows: t = [(M1 − M2) − (µ1 − µ2)] / s(M1−M2) = (sample mean difference − population mean difference) / estimated standard error -The degrees of freedom for the independent-measures t statistic are determined by the df values for the two separate samples: df for the t statistic = (n1 − 1) + (n2 − 1) = n1 + n2 − 2
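A sketch of the complete formula, using the standard pooled-variance estimate for s(M1−M2) and hypothetical data; under H0 the population mean difference is set to zero:

```python
import math

def independent_t(sample1, sample2):
    """Independent-measures t with a pooled variance estimate
    (assumes equal population variances; H0: mu1 - mu2 = 0)."""
    n1, n2 = len(sample1), len(sample2)
    M1 = sum(sample1) / n1
    M2 = sum(sample2) / n2
    SS1 = sum((x - M1) ** 2 for x in sample1)
    SS2 = sum((x - M2) ** 2 for x in sample2)
    df1, df2 = n1 - 1, n2 - 1
    sp2 = (SS1 + SS2) / (df1 + df2)              # pooled variance
    se = math.sqrt(sp2 / n1 + sp2 / n2)          # estimated standard error, s(M1-M2)
    t = ((M1 - M2) - 0) / se                     # population difference is 0 under H0
    return t, df1 + df2                          # t and df = n1 + n2 - 2

# Hypothetical samples from two treatment conditions
t, df = independent_t([8, 10, 12, 10], [4, 6, 8, 6])
```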

T Distribution

The complete set of t values computed for every possible random sample for a specific sample size (n) or a specific degrees of freedom (df). -The t distribution approximates the shape of a normal distribution. -The exact shape of a t distribution changes with degrees of freedom.

Repeated-Measures Designs

The dependent variable is measured two or more times for each individual in a single sample; The same group of subjects is used in all of the treatment conditions. -The main advantage of a repeated-measures study is that it uses exactly the same individuals in all treatment conditions. -There is no risk that the participants in one treatment are substantially different from the participants in another.

Difference Scores: The Data for a Repeated-Measures Study

The difference score for each individual is computed by: difference score = D = X2 − X1, where X1 is the person's score in the first treatment and X2 is the score in the second treatment.
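A minimal sketch of the computation, using hypothetical before/after scores for four people:

```python
# Hypothetical scores for the same four individuals in two treatments
X1 = [15, 12, 18, 14]                    # first treatment
X2 = [18, 13, 19, 18]                    # second treatment
D = [b - a for a, b in zip(X1, X2)]      # difference score D = X2 - X1 for each person
MD = sum(D) / len(D)                     # sample mean difference
```

The hypothesis test then works entirely with the D scores, ignoring the original X values.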

Hypothesis Tests with the Independent Measures t Statistic

The independent-measures t statistic uses the data from two separate samples to help decide whether there is a significant mean difference between two populations or between two treatment conditions. 1. State the hypotheses and select the alpha level. 2. Compute the df for an independent-measures design. 3. Obtain the data and compute the test statistic. 4. Make a decision.

Measuring Effect Size for the Independent Measures t

The independent-measures t hypothesis test also allows for measuring effect size by computing the percentage of variance accounted for, r^2. -The calculation of r^2 for the independent measures t is exactly the same as it was for the single-sample t.

The Formulas for an Independent Measures Hypothesis Test

The independent-measures t uses the difference between two sample means to evaluate a hypothesis about the difference between two population means. Thus, the independent-measures t formula is: t = (sample mean difference − population mean difference) / estimated standard error = [(M1 − M2) − (µ1 − µ2)] / s(M1−M2)
-In each of the t-score formulas, the standard error in the denominator measures how accurately the sample statistic represents the population parameter.
-In the single-sample t formula, the standard error measures the amount of error expected for a sample mean and is represented by sM.
-For the independent-measures t formula, the standard error measures the amount of error that is expected when you use a sample mean difference (M1 − M2) to represent a population mean difference (µ1 − µ2); the standard error for the sample mean difference is represented by the symbol s(M1−M2).
-The estimated standard error of M1 − M2 can be interpreted in two ways: 1. The standard error is defined as a measure of the standard, or average, distance between a sample statistic (M1 − M2) and the corresponding population parameter (µ1 − µ2). 2. When the null hypothesis is true, the standard error measures how big, on average, the sample mean difference is.

Comparison of One-Tailed vs. Two-Tailed Tests

The major distinction between one-tailed and two-tailed tests is in the criteria they use for rejecting H0. -A one-tailed test allows you to reject the null hypothesis when the difference between the sample and the population is relatively small, provided the difference is in the specified direction. -A two-tailed test requires a relatively large difference independent of direction.

The Mean of the Distribution of Sample Means

The mean of the distribution of sample means always is identical to the population mean. This mean value is called the expected value of M. -The sample mean is an example of an unbiased statistic, which means that on average the sample statistic produces a value that is exactly equal to the corresponding population parameter; the average value of all the sample means is exactly equal to μ.

Sampling Error

The natural discrepancy, or amount of error, between a sample statistic and its corresponding population parameter.

Directional Hypotheses and One-Tailed Tests

The nondirectional (two-tailed) test is more commonly used than the directional (one-tailed) alternative; On the other hand, a directional test may be used in some research situations, such as exploratory investigations or pilot studies or when there is a priori justification.

Percentile Rank

The percentage of individuals with scores at or below the value; when a score is referred to by its rank, the score is called a PERCENTILE. The percentile rank for a score in a normal distribution is simply the proportion to the left of the score.

Statistical Power

The probability that the test will correctly reject a false null hypothesis; power is the probability that the test will identify a treatment effect if one really exists. -Researchers typically calculate power as a means of determining whether a research study is likely to be successful, that is, before they actually conduct the research. -To calculate power, however, it is first necessary to make assumptions about a variety of factors that influence the outcome of a hypothesis test; factors such as the sample size, the size of the treatment effect, and the value chosen for the alpha level can all influence a hypothesis test.

Alpha Level

The probability that the test will lead to a Type I error. -That is, the alpha level determines the probability of obtaining sample data in the critical region even though the null hypothesis is true.

The Hypotheses for a Related-Samples Test

The researcher's goal is to use the sample of difference scores to answer questions about the general population; the researcher would like to know whether there is any difference between the two treatment conditions for the general population. -We are interested in difference scores; we would like to know what would happen if every individual in the population were measured in two treatment conditions (X1 and X2) and a difference score (D) were computed for everyone. -For a repeated-measures study, the null hypothesis states that the mean difference for the general population is zero. In symbols: H0: μD = 0 -The alternative hypothesis states that there is a treatment effect that causes the scores in one treatment condition to be systematically higher (or lower) than the scores in the other condition. In symbols, H1: µD ≠ 0.

The t Statistic for a Repeated-Measures Research Design

The single-sample t statistic formula will be used to develop the repeated-measures t test: t = (M − μ)/sM
-The sample mean, M, is calculated from the data, and the value for the population mean, µ, is obtained from the null hypothesis.
-The estimated standard error, sM, is calculated from the data and provides a measure of how much difference can be expected between a sample mean and the population mean.
-For the repeated-measures design, the sample data are difference scores and are identified by the letter D, rather than X; therefore, we use D in the formula to emphasize that we are dealing with difference scores instead of X values.
-The population mean that is of interest to us is the population mean difference (the mean amount of change for the entire population), and we identify this parameter with the symbol µD.
-With these simple changes, the t formula for the repeated-measures design becomes t = (MD − µD)/sMD
-In this formula, the estimated standard error, sMD, is computed in exactly the same way as it is computed for the single-sample t statistic: 1. Compute the variance (or the standard deviation) for the sample of D scores. 2. The estimated standard error is then computed using the sample variance and the sample size, n.
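The two-step computation of the estimated standard error and the resulting t can be sketched as follows; the difference scores are hypothetical, and µD = 0 under the usual null hypothesis:

```python
import math

def repeated_measures_t(D, mu_D=0):
    """t = (MD - mu_D) / sMD, computed from a sample of difference scores."""
    n = len(D)
    MD = sum(D) / n                      # sample mean difference
    SS = sum((d - MD) ** 2 for d in D)
    s2 = SS / (n - 1)                    # step 1: variance of the D scores
    sMD = math.sqrt(s2 / n)              # step 2: estimated standard error of MD
    return (MD - mu_D) / sMD, n - 1

# Hypothetical difference scores for five individuals
t, df = repeated_measures_t([3, 1, 1, 4, 6])
```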

The Standard Error of M

The standard deviation of the distribution of sample means, σM. The standard error provides a measure of how much distance is expected on average between a sample mean (M) and the population mean (μ): σM = σ/√n. -Standard error measures how well an individual sample mean represents the entire distribution, specifically, how much distance is reasonable to expect between a sample mean and the overall mean for the distribution of sample means. -Magnitude is determined by: 1. The size of the sample: the law of large numbers states that the larger the sample size (n), the more probable it is that the sample mean will be close to the population mean; there is an inverse relationship between the sample size and the standard error, so bigger samples have smaller error and smaller samples have bigger error. 2. The standard deviation of the population from which the sample is selected: the larger the population standard deviation, the larger the standard error.

The t Statistic for a Repeated-Measures Research Design

The t statistic for a repeated measures design is structurally similar to the other t statistics we have examined;the major distinction of the related-samples t is that it is based on difference scores rather than raw scores (X Values).

Homogeneity of Variance

The third assumption is referred to as homogeneity of variance and states that the two populations being compared must have the same variance; homogeneity of variance is most important when there is a large discrepancy between the sample sizes.

The Hypothesis Test: Step 2

The α level establishes a criterion, or "cut-off", for making a decision about the null hypothesis. The alpha level also determines the risk of a Type I error. α = .01, α = .05 (most used), α = .001 -The critical region consists of outcomes that are very unlikely to occur if the null hypothesis is true. That is, the critical region is defined by sample means that are almost impossible to obtain if the treatment has no effect.

Calculating the Estimated Standard Error

To develop the formula for s(M1−M2), we consider three points: 1. Each of the two sample means represents its own population mean, but in each case there is some error. 2. The amount of error associated with each sample mean is measured by the estimated standard error of M. 3. For the independent-measures t statistic, we want to know the total amount of error involved in using two sample means to approximate two population means. a. To do this, if the samples are the same size, we find the error from each sample separately and then add the two errors together. b. When the samples are of different sizes, a pooled or average estimate, which allows the bigger sample to carry more weight in determining the final value, is used.
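The pooling in point 3b can be illustrated numerically; the SS and df values below are hypothetical, and the pooled variance sp^2 = (SS1 + SS2)/(df1 + df2) is the standard formula that weights each sample by its df:

```python
import math

# Hypothetical samples of unequal size
SS1, df1 = 20, 4       # smaller sample (n1 = 5):  s1^2 = SS1/df1 = 5.0
SS2, df2 = 48, 16      # larger sample  (n2 = 17): s2^2 = SS2/df2 = 3.0

sp2 = (SS1 + SS2) / (df1 + df2)   # pooled variance: 68/20 = 3.4,
                                   # closer to the larger sample's variance

n1, n2 = df1 + 1, df2 + 1
se = math.sqrt(sp2 / n1 + sp2 / n2)   # estimated standard error, s(M1-M2)
```

Note how the pooled value (3.4) falls between the two sample variances but lands nearer the variance of the bigger sample, exactly the weighting described above.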

Probability and the Distribution of Sample Means

Used to find the probability associated with any specific sample; because the distribution of sample means presents the entire set of all possible sample means, we can use proportions of this distribution to determine probabilities. -"All the possible sample means" refers to the distributions of sample means.

The Role of Probability in Inferential Statistics

Used to predict the type of samples that are likely to be obtained from a population; establishes a connection between samples and populations.

Hypothesis Tests with the T Statistic

We begin with a population with an unknown mean and an unknown variance, often a population that has received some treatment. -The null hypothesis states that the treatment has no effect; specifically, H0 states that the population mean is unchanged; thus, the null hypothesis provides a specific value for the unknown population mean. -The sample data provide a value for the sample mean. -The variance and estimated standard error are computed from the sample data. -When these values are used in the t formula, the result becomes: t = (sample mean − population mean) / estimated standard error

Determining Proportions and Probabilities for t Distributions

We use a t distribution table to find proportions for t statistics. -A close inspection of the t distribution table will demonstrate that, as the value for df increases, the t distribution becomes more similar to a normal distribution.

Measuring Effect Size for the T Statistic

With a t test, it is also possible to measure effect size by computing the percentage of variance accounted for by the treatment; this measure is based on the idea that the treatment causes the scores to change, which contributes to the observed variability in the data. -By measuring the amount of variability that can be attributed to the treatment, we obtain a measure of the size of the treatment effect. For the t statistic hypothesis test: percentage of variance accounted for = r^2 = t^2 / (t^2 + df) -A confidence interval is an interval, or range of values, centered around a sample statistic; the logic behind a confidence interval is that a sample statistic, such as a sample mean, should be relatively near to the corresponding population parameter. Step 1: Select a level of confidence and look up the corresponding t value in the t distribution table; this value, along with M and sM obtained from the sample data, is plugged into the estimation formula: μ = M ± t·sM
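Both formulas can be sketched in a few lines; the t, df, M, and sM values are hypothetical, and 2.306 is the two-tailed critical t for 95% confidence with df = 8:

```python
def r_squared(t, df):
    """Percentage of variance accounted for: r^2 = t^2 / (t^2 + df)."""
    return t ** 2 / (t ** 2 + df)

def confidence_interval(M, sM, t_crit):
    """mu = M +/- t * sM, with t taken from the t distribution table."""
    return (M - t_crit * sM, M + t_crit * sM)

# Hypothetical results: t = 3.0 with df = 8; sample gave M = 12, sM = 0.5
r2 = r_squared(t=3.0, df=8)
lo, hi = confidence_interval(M=12, sM=0.5, t_crit=2.306)   # 95% CI for mu
```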

Z-Scores and Location within the Distribution of Sample Means

Within the distribution of sample means, the location of each sample mean can be specified by a z-score: z = (M − μ)/σM -The sign tells whether the location is above (+) or below (−) the mean. -The number tells the distance between the location and the mean in terms of the number of standard deviations.

The Z-Score Formula

z = (M − μ)/σM = (sample mean − hypothesized population mean) / (standard error between M and μ)

