Marketing Research Exam 4

p-value

"probability value" and is essentially another name for an observed or computed significance level. They are compared to significance levels to test hypotheses. low p-values mean there is little likelihood that the statistical expectation is true higher p-values equal more support for a hypothesis

calculating degrees of freedom for x^2 test

(R - 1)(C - 1): multiply the number of rows minus one by the number of columns minus one.
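
A tiny worked sketch for a hypothetical 3-row by 4-column contingency table:

```python
# df = (R - 1)(C - 1) for a hypothetical 3 x 4 table.
rows, cols = 3, 4
df = (rows - 1) * (cols - 1)
print(df)   # (3 - 1) * (4 - 1) = 6
```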

degrees of freedom in a test of two means

(n - k), where n is the total number of observations and k is the number of groups; for two means, df = n1 + n2 - 2 (for example, two groups of 10 give 20 - 2 = 18 degrees of freedom).

Statistical Power

A measure of how much ability exists to find a significant effect using a specific statistical tool. Mathematically, power equals one minus the Type II error rate (power = 1 - beta). Power increases as sample size (n) increases.
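
A rough sketch with hypothetical values (a one-tailed, one-sample z-test of H0: mu = 100 against a true mean of 105, sigma = 15, alpha = .05), illustrating that power = 1 - beta rises as n increases:

```python
from scipy import stats
import numpy as np

mu0, mu_true, sigma, alpha = 100, 105, 15, 0.05   # hypothetical setup
z_crit = stats.norm.ppf(1 - alpha)                # critical z, one-tailed

for n in (10, 30, 100):
    effect = (mu_true - mu0) / (sigma / np.sqrt(n))
    power = stats.norm.sf(z_crit - effect)        # P(reject H0 | H1 true)
    print(n, round(power, 3))                     # power grows with n
```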

One tailed test

A one-tailed univariate test is appropriate when the research hypothesis implies that an observed mean can only be greater than or only less than the hypothesized value. Thus, only one of the tails of the bell-shaped curve is relevant. Example of a one-tailed hypothesis: "H1: The number of pizza restaurants within a postal code in FL is greater than five." A one-tailed result can be obtained from a two-tailed test by taking half of the observed p-value. Whenever there is any doubt about whether a one- or two-tailed test is appropriate, opt for the more conservative two-tailed test.
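
A minimal sketch with made-up counts: run a two-tailed one-sample t-test and halve the p-value for the one-tailed hypothesis "the mean is greater than 5" (valid when the sample mean falls on the hypothesized side):

```python
from scipy import stats

counts = [7, 6, 8, 5, 9, 7, 6, 8]         # hypothetical restaurant counts
t_stat, p_two_tailed = stats.ttest_1samp(counts, popmean=5)

# Halve the two-tailed p-value when the effect is in the hypothesized direction.
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
print(round(t_stat, 2), round(p_one_tailed, 4))
```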

Type II Error

An error caused by failing to reject the null hypothesis when the alternative hypothesis is true; has a probability of beta. Practically, a type II error occurs when a researcher concludes that no relationship or difference exists when in fact one does exist.

Type I Error

An error caused by rejecting the null hypothesis when it's true; has probability of alpha. Practically, a Type I error occurs when the researcher concludes that a relationship or difference exists in a population when in reality it does not exist.

The hypothesis testing procedure

Process:
- The specifically stated hypothesis is derived from the research objectives.
- A sample is obtained and the relevant variable is measured.
- The measured sample value is compared to the value either stated explicitly or implied in the hypothesis.
- If the value is consistent with the hypothesis, the hypothesis is supported; if the value is not consistent, the hypothesis is not supported.
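
A sketch of the procedure on hypothetical data: the hypothesis implies the mean satisfaction score differs from 3.0, a sample is measured, and the sample value is compared with the hypothesized value via a one-sample t-test:

```python
from scipy import stats

scores = [3.4, 2.9, 3.8, 3.1, 3.6, 3.3, 2.8, 3.7]   # hypothetical sample
t_stat, p_value = stats.ttest_1samp(scores, popmean=3.0)

if p_value < 0.05:
    print("Reject the null; the researcher's hypothesis is supported.")
else:
    print("Fail to reject the null; the hypothesis is not supported.")
```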

Alternative hypothesis

States the opposite of the null, which normally conforms to one of the common types of relationships above. The researcher's hypothesis is generally stated in the form of an alternative hypothesis. Example: Null hypothesis: The mean is equal to 3.0 Alternative hypothesis: The mean does not equal 3.0. Substantive hypothesis: Customer perceptions of friendly service are significantly greater than three.

goodness of fit

A general term representing how well some computed table or matrix of values matches some population or predetermined table or matrix of the same size; the x^2 test is generally associated with this.
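
A minimal sketch: a chi-square goodness-of-fit test on a hypothetical one-way frequency table, comparing observed counts with equal expected counts:

```python
from scipy import stats

observed = [40, 32, 28]                  # hypothetical brand-preference counts
expected = [100 / 3] * 3                 # expected counts if brands are equal
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(round(chi2, 2), round(p_value, 3))   # df = k - 1 = 2
```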

t-test

A hypothesis test that uses the t-distribution. A univariate t-test is appropriate when the variable being analyzed is interval or ratio; it tests an observed mean against some specific value.

f-test

a procedure used to determine whether there is more variability in the scores of one sample than in the scores of another sample.
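
A small sketch with made-up samples: the F value is the larger sample variance divided by the smaller one, and the F-distribution supplies the p-value:

```python
import numpy as np
from scipy import stats

a = np.array([12, 15, 11, 18, 14, 16, 13, 17])   # hypothetical sample 1
b = np.array([14, 15, 14, 16, 15, 14, 16, 15])   # hypothetical sample 2

var_a, var_b = np.var(a, ddof=1), np.var(b, ddof=1)
F = max(var_a, var_b) / min(var_a, var_b)        # larger / smaller variance
dfn = dfd = len(a) - 1                           # equal sample sizes here
p_value = 2 * stats.f.sf(F, dfn, dfd)            # two-tailed
print(round(F, 2), round(p_value, 3))
```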

t-distribution

A symmetrical, bell-shaped distribution that is contingent on sample size; it has a mean of 0 and a standard deviation that is slightly greater than 1 for small samples and approaches 1 as the sample size grows. When sample size (n) is larger than 30, the t-distribution and z-distribution are almost identical.
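
A short sketch showing that critical t-values approach the critical z-value as the degrees of freedom grow, which is why t and z are nearly identical for n > 30:

```python
from scipy import stats

for df in (5, 15, 30, 120):
    print(df, round(stats.t.ppf(0.975, df), 3))   # 2.571, 2.131, 2.042, 1.980
print("z:", round(stats.norm.ppf(0.975), 3))      # 1.960
```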

z-test for comparing proportions

a technique used to test the hypothesis that proportions are significantly different for two independent samples or groups.
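
A hand-rolled sketch with hypothetical counts: a z-test for the difference between two independent sample proportions using a pooled proportion:

```python
import numpy as np
from scipy import stats

x1, n1 = 45, 150     # successes and sample size, group 1 (hypothetical)
x2, n2 = 30, 140     # successes and sample size, group 2 (hypothetical)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                         # pooled proportion
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z))
print(round(z, 2), round(p_value, 3))
```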

independent samples t-test

A test for hypotheses stating that the mean scores for some interval- or ratio-scaled variable, grouped based on some less-than-interval classificatory variable, are not the same. It tests the difference between means taken from two independent samples or groups. The t-value is a ratio with the information about the difference between means (provided by the sample) in the numerator and the standard error in the denominator.
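
A minimal sketch with made-up ratings grouped by a two-level classificatory variable; equal_var=True uses the pooled estimate of the standard error, and the degrees of freedom are n1 + n2 - 2:

```python
from scipy import stats

group_a = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # hypothetical ratings, group A
group_b = [3.5, 3.7, 3.4, 3.8, 3.6, 3.3]   # hypothetical ratings, group B

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(round(t_stat, 2), round(p_value, 4))  # low p -> the means differ
```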

hypothesis test of proportion

a test that is conceptually similar to the one used when the mean is the characteristic of interest but that differs in the mathematical formulation of the standard error of the proportion.
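
A sketch of a one-sample test of a proportion with hypothetical numbers; the standard error of the proportion uses the hypothesized value, sqrt(p0 * (1 - p0) / n):

```python
import numpy as np
from scipy import stats

x, n = 64, 200          # observed successes and sample size (hypothetical)
p0 = 0.25               # hypothesized population proportion

p_hat = x / n
se = np.sqrt(p0 * (1 - p0) / n)     # standard error of the proportion
z = (p_hat - p0) / se
p_value = 2 * stats.norm.sf(abs(z))
print(round(z, 2), round(p_value, 3))
```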

x^2 test

Allows us to conduct tests for significance in the analysis of an R x C contingency table, where R = number of rows and C = number of columns.

cross tabulation (contingency table)

Among the most widely used statistical techniques among marketing researchers; it is a joint frequency distribution of observations on two or more nominal or ordinal variables. Used the most because the results can be easily communicated; much like tallying. When each variable has two categories, four cells result. The x^2 test for a contingency table involves comparing the observed frequencies (O) with the expected frequencies (E) in each cell of the table; the goodness (or closeness) of fit of the observed distribution with the expected distribution is captured by this statistic.
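
A small sketch: build a cross tabulation from two hypothetical nominal variables and test the resulting table with a chi-square test (observed vs. expected frequencies):

```python
import pandas as pd
from scipy import stats

data = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F", "M", "F", "M", "F", "M", "F"],
    "buys":   ["yes", "no", "yes", "yes", "no", "no", "yes", "no",
               "yes", "yes", "no", "no"],
})

table = pd.crosstab(data["gender"], data["buys"])     # 2 x 2 contingency table
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(table, round(chi2, 2), round(p_value, 3), dof, sep="\n")
```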

paired sample t-test

An appropriate test for comparing the scores of two interval variables drawn from related populations; used, for example, to examine the effect of downsizing on employee morale.
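
A minimal sketch with hypothetical morale scores measured on the same employees before and after downsizing (related samples):

```python
from scipy import stats

before = [72, 68, 75, 80, 66, 71, 74, 69]   # hypothetical morale, before
after  = [65, 64, 70, 74, 60, 68, 69, 63]   # hypothetical morale, after

t_stat, p_value = stats.ttest_rel(before, after)
print(round(t_stat, 2), round(p_value, 4))  # low p -> morale changed
```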

pooled estimate of the standard error

an estimate of the standard error for a t-test of independent means that assumes the variances of both groups are equal.
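
A short sketch of the pooled standard error computed from hypothetical summary statistics, assuming equal group variances:

```python
import numpy as np

n1, n2 = 12, 15            # hypothetical group sample sizes
s1_sq, s2_sq = 2.4, 2.9    # hypothetical sample variances of the two groups

# Pooled variance weights each group's variance by its degrees of freedom.
s_pooled_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
se_pooled = np.sqrt(s_pooled_sq * (1 / n1 + 1 / n2))
print(round(se_pooled, 3))
```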

bivariate tests of differences

An investigation of hypotheses stating that two (or more) groups differ with respect to measures on a variable. Only two variables are involved: one that acts like a dependent variable and one that acts as a classification variable.

One way analysis of variance ANOVA

Analysis involving the investigation of the effects of one treatment variable on an interval-scaled dependent variable; a hypothesis-testing technique to determine whether statistically significant differences in means occur between two or more groups. A categorical independent variable and a continuous dependent variable are involved. Statistical null hypothesis for ANOVA: u1 = u2 = u3 = ... = uk. Alternative hypothesis: "At least one group mean is not equal to another group mean." The problem requires comparing variances to make inferences about the means.
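
A minimal sketch with three hypothetical groups: one categorical independent variable (group membership) and one continuous dependent variable (score):

```python
from scipy import stats

group1 = [5.2, 4.8, 5.5, 5.0, 4.9]   # hypothetical scores, treatment 1
group2 = [6.1, 5.8, 6.4, 6.0, 5.9]   # hypothetical scores, treatment 2
group3 = [5.4, 5.1, 5.6, 5.3, 5.2]   # hypothetical scores, treatment 3

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(round(f_stat, 2), round(p_value, 4))   # low p -> at least one mean differs
```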

level of scale measurement: nonparametric statistics

Appropriate when the variables being analyzed do not conform to any known or continuous distribution; also called distribution-free statistics.

Two methods to determine whether the test result is significant

1. Compare the calculated (observed) test statistic with the critical value from the appropriate statistical distribution. 2. Compare the observed p-value with the chosen significance level.

f-statistic/ratio/value

Can be obtained by dividing the larger sample variance by the smaller sample variance; in ANOVA, it is the ratio of variance between groups to variance within groups.

Null hypothesis

Can be thought of as the expectation of findings as if no hypothesis existed, that is, as if nothing is really happening. The state implied by the statistical null hypothesis is generally the opposite of the state represented by the actual hypothesis. Example: Actual hypothesis: The average number of children per family is greater than 1.5. Null hypothesis: The average number of children per family is equal to 1.5 (not greater than). The null is a statement about the status quo.

Choosing the appropriate Statistical technique

consider the following points: 1. the type of question to be addressed 2. the number of variables involved 3. the level of scale measurement involved in each variable

Significance level

The critical probability associated with a statistical hypothesis test that indicates how likely it is that an inference supporting a difference between an observed value and some statistical expectation is true. Commonly used significance levels are .1, .05, and .01. It is the acceptable level of Type I error.

Relational hypotheses

examine how changes in one variable vary with changes in another. This often is tested by assessing covariance in some way, very often with regression analysis.

Hypotheses about differences from some standard

examine how some variable differs from some preconceived standard. These tests can involve either a test of a mean for better-than-ordinal variables or a test of frequencies if the variable is ordinal or nominal. These tests typify univariate statistical tests

Hypotheses about differences between groups

Examine how some variable varies from one group to another. These tests are very common in causal designs, which very often involve a comparison of means between groups.

Beta

The probability of making a Type II error (failing to reject a false null hypothesis); an incorrect decision.

Level of scale measurement: parametric statistics

involve numbers with known, continuous distributions; when the data are interval or ratio-scaled and the sample size is large, parametric statistical procedures are appropriate. Normal (bell-shaped) distribution

chi-square test

one of the most basic tests for statistical significance that is particularly appropriate for testing hypotheses about frequencies arranged in a frequency or contingency table. Univariate tests involving nominal or ordinal variables are examined with a X^2. Appropriate way for testing whether the values in a one-way frequency table are different than would be expected by chance.

two tailed test

one that tests for differences from the population mean that are either greater or less. z-tests and t-tests can be one or two tailed. The extreme values of the normal curve (or tails) on both right and left are considered. When a research question does not specify whether a difference should be greater than or less than, a two-tailed test is most appropriate. Example of two-tailed research question: "The number of take-out pizza restaurants within a postal code in Germany is not equal to 5."

f-distribution

probability distribution of the ratios of sample variances.

x^2 distribution

Provides a means for testing the statistical significance of a contingency table; it involves comparing observed frequencies (Oi) with expected frequencies (Ei) in each cell of the table. The goodness (or closeness) of fit of the observed distribution with the expected distribution is captured by this statistic.

SSB

Systematic variation of scores between groups due to manipulation of an experimental variable or to group classifications of a measured independent variable; also known as between-group variance.

bivariate tests: compare whether two interval or ratio variables are correlated to one another

t-test for correlation--low p-values indicate the variables are related to one another

multivariate statistical analysis

tests hypotheses and models involving multiple (three or more) variables or sets of variables

Univariate statistical analysis

tests hypotheses involving only one variable

bivariate statistical analysis

tests hypotheses involving two variables

calculating the x^2 value

The bivariate x^2 value is calculated in the same manner as the univariate x^2, except that the degrees of freedom differ ((R - 1)(C - 1) for a contingency table).

grand mean

the mean of a variable over all observations

degrees of freedom

the number of observations minus the number of constraints or assumptions needed to calculate a statistical term.

within group error or variance

the sum of differences between observed values and the group mean for a given set of observations; also known as total error variance

between groups variance

the sum of differences between the group mean and the grand mean summed over all groups for a given set of observations.

SST

the total observed variation across all groups and individual observations

critical value

the values that lie exactly on the boundary of the region of rejection

univariate hypothesis

typified by tests comparing some observed sample mean against a benchmark value. The test addresses the question: is the sample mean truly different from the benchmark?

SSE

variation of scores due to random error or within-group variance due to individual differences from the group mean. This is the error of prediction
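
A sketch on hypothetical data tying SST, SSB, and SSE together: SST = SSB + SSE, and the F ratio is (SSB / df between) divided by (SSE / df within):

```python
import numpy as np
from scipy import stats

groups = [np.array([5.2, 4.8, 5.5, 5.0, 4.9]),   # hypothetical group scores
          np.array([6.1, 5.8, 6.4, 6.0, 5.9]),
          np.array([5.4, 5.1, 5.6, 5.3, 5.2])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between groups
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)            # within groups
sst = ((all_obs - grand_mean) ** 2).sum()                         # total; = ssb + sse

df_between, df_within = len(groups) - 1, len(all_obs) - len(groups)
f_stat = (ssb / df_between) / (sse / df_within)
print(round(sst, 3), round(ssb + sse, 3), round(f_stat, 2))
print(round(stats.f_oneway(*groups).statistic, 2))                # matches f_stat
```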

bivariate tests: compare whether two less-than interval variables are related using cross tabs

x^2--low p-values indicate the variables are related to each other

Compare an observed frequency with a predetermined value

x^2--low p-values indicate that the observed frequency is different than the predetermined value

compare an observed proportion with some predetermined value

z or t-test for proportions--low p-values indicate that the observed proportion is different than the predetermined value

bivariate tests: compare whether two observed means are different from one another

z-test or t-test--low p-values indicate the means are different

Compare an observed mean with some predetermined value

z-test or t-test--low p-values indicate the observed mean is different than some predetermined value (often 0).

