Research H & L Ch 6

12. What is an effect size and how is it calculated?

- Effect size is the difference between two means (e.g., treatment minus control) divided by the standard deviation of the two conditions.
- ES = (M1 - M2) / SD
- It is the division by the standard deviation that enables us to compare effect sizes across experiments.
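
To make the arithmetic concrete, here is a minimal Python sketch of this formula (the function name and the sample scores are invented for illustration; the card does not say which condition's SD to use, so the control condition's SD is assumed here):

```python
import statistics

def effect_size(treatment_scores, control_scores):
    """ES = (M1 - M2) / SD, using the control condition's SD (an assumption)."""
    m1 = statistics.mean(treatment_scores)
    m2 = statistics.mean(control_scores)
    sd = statistics.stdev(control_scores)  # sample SD of the control scores
    return (m1 - m2) / sd

# Hypothetical scores: treatment mean 7, control mean 5, control SD 2 -> ES = 1.0
print(effect_size([5, 7, 9], [3, 5, 7]))
```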

5. a. What is a directional hypothesis?

- A precise statement indicating the nature and direction of the expected relationship or difference between the variables.
- It is used when there is already some background information on the topic.

Type I error

-An error that occurs when a researcher concludes that the independent variable had an effect on the dependent variable, when no such relation exists; a "false positive"

16. What is considered a small, medium, or large effect?

Cohen proposed a general method for interpreting these types of effect sizes:
- d = .2: small effect
- d = .5: medium effect
- d = .8: large effect
This is a guideline for interpretation.
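
A small sketch of how those benchmarks might be applied in code (the function name and the "negligible" label below .2 are my additions, not Cohen's):

```python
def interpret_d(d):
    """Label an effect size using Cohen's guideline values (rules of thumb)."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"  # label for d < .2 is an assumption, not Cohen's term

print(interpret_d(0.6))  # -> medium
```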

4. What is the lowest probability level accepted? pg 170

- The p < .05 level as the cutoff.

15. What is r squared?

- The coefficient of determination.
- r = the correlation coefficient; r squared = the coefficient of determination.

practical significance

An important result with practical implications; different from statistical significance.

11. What is another name for practical significance?

Clinical significance

d = (Mt - Mc) / SDpooled

Key to symbols:
- d = Cohen's d effect size
- M = mean (average of the treatment or comparison condition)
- SD = standard deviation
- Subscripts: t refers to the treatment condition and c refers to the comparison condition (or control condition).
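
A minimal Python sketch of this pooled-SD version of Cohen's d, assuming two lists of raw scores (function and variable names are hypothetical):

```python
import math
import statistics

def cohens_d(treatment, comparison):
    """d = (Mt - Mc) / SDpooled for two independent groups."""
    m_t, m_c = statistics.mean(treatment), statistics.mean(comparison)
    n_t, n_c = len(treatment), len(comparison)
    # Pool the two variances, weighting each by its degrees of freedom.
    sd_pooled = math.sqrt(
        ((n_t - 1) * statistics.variance(treatment)
         + (n_c - 1) * statistics.variance(comparison))
        / (n_t + n_c - 2)
    )
    return (m_t - m_c) / sd_pooled

print(cohens_d([5, 7, 9], [3, 5, 7]))  # hypothetical scores -> 1.0
```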

2. What are common terms for probability level? p170

- Level of confidence or significance level: the probability that the results of an experiment could be due to chance or sampling error. Also known as the alpha level.

Type I error

Many people decide, before doing a hypothesis test, on a maximum p-value for which they will reject the null hypothesis. This value is often denoted α (alpha) and is also called the significance level.

9. a. What is statistical power?

The power of any test of statistical significance is defined as the probability that it will reject a false null hypothesis. Statistical power is inversely related to beta or the probability of making a Type II error. In short, power = 1 - β.
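
As an illustration of power = 1 - β, here is a rough Python sketch using the common normal approximation for a two-sided, two-group comparison (the function name is mine; this is an approximation, not an exact t-test power calculation):

```python
from math import sqrt
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power (1 - beta) for a two-sided two-sample test."""
    z_crit = norm.ppf(1 - alpha / 2)           # critical z for two-sided alpha
    noncentrality = d * sqrt(n_per_group / 2)  # expected z under the alternative
    beta = norm.cdf(z_crit - noncentrality)    # P(Type II error), lower tail ignored
    return 1 - beta

print(round(approx_power(d=0.5, n_per_group=64), 2))  # -> 0.81, near the .80 convention
```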

Cohen's D

d = (mean of population 1 - mean of population 2) divided by the standard deviation

9. b. What is an adequate number for statistical power?

- 80% (or .80) is an acceptable level of power. It indicates that the study had enough power to avoid a Type II error and detect a significant difference: an 80/100 chance of finding a significant difference if it existed.

How can type 1 error be avoided?

- You can avoid making a Type I error by selecting a lower significance level for the test, e.g. by rejecting the null hypothesis when P < 0.01 instead of P < 0.05.
- On the other hand, when you accept the null hypothesis in a statistical test (because P > 0.05) and conclude that there is no difference between samples, you can either:
  - have correctly concluded that there is no difference; or
  - have accepted the null hypothesis when in fact it is false, and therefore failed to uncover a difference where such a difference really exists.
- In the latter case you make a Type II error. β is the probability of making a Type II error.

3. What does probability level mean? aka alpha level or level of confidence

- A number calculated with statistical techniques that tells researchers how likely it is that the results of their experiment occurred by chance and not because of the independent variable or variables. The convention in science is to require that the probability be less than 5 in 100 that the results are due to chance factors and not the independent variables studied.

statistical power

- The ability of a study to demonstrate an association, correlation, or causal relationship if one exists. It depends on the frequency of the condition under study, the magnitude of the effect, the study design, and the sample size.
- The probability of concluding that there is a difference when one does exist.

How can the researcher control for this? cont'd

For a 95% confidence level, the value of alpha is 0.05. This means that there is a 5% probability that we will reject a true null hypothesis. In the long run, one out of every twenty hypothesis tests that we perform at this level will result in a Type I error.
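
The "one out of every twenty" claim can be checked by simulation. A sketch, assuming SciPy and NumPy are available (sample sizes and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials, false_positives = 0.05, 10_000, 0

for _ in range(trials):
    # Both samples come from the SAME population, so the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1  # a Type I error

print(false_positives / trials)  # hovers around 0.05, i.e. roughly 1 in 20
```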

What is statistical power?

Statistical power is affected chiefly by the size of the effect and the size of the sample used to detect it. Bigger effects are easier to detect than smaller effects, while large samples offer greater test sensitivity than small samples.

What is statistical power?

The ability of a study to demonstrate an association or causal relationship between two variables, given that an association exists. For example, 80% power in a clinical trial means that the study has an 80% chance of ending up with a p value of less than 5% in a statistical test (i.e. a statistically significant treatment effect) if there really was an important difference (e.g. 10% versus 5% mortality) between treatments. If the statistical power of a study is low, the study results will be questionable (the study might have been too small to detect any differences). By convention, 80% is an acceptable level of power. See also p value.

14. What is the relationship between effect size and standard deviation?

- Effect size (Cohen's d) is measured in terms of the number of standard deviations by which the means differ.

Type II error

-An error that occurs when a researcher concludes that the independent variable had no effect on the dependent variable, when in truth it did; a "false negative"

8. a. What is a Type II error?

- An incorrect decision to accept the null hypothesis when in fact the null is false.
- Failing to find an effect of the IV on the DV when in reality an effect does exist.

- The p value determines whether or not we reject the null hypothesis.
- We use it to estimate whether or not we think the null hypothesis is true.
- The p value provides an estimate of how often we would get the obtained result by chance, if in fact the null hypothesis were true.

- If the p value is small, reject the null hypothesis and accept that the samples are truly different with regard to the outcome.
- If the p value is large, accept the null hypothesis and conclude that the treatment or the predictor variable had no effect on the outcome.
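
That decision rule is mechanical enough to write down directly; a tiny sketch (the function name and the 0.05 default are mine, matching the convention on these cards):

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule from this card."""
    if p_value <= alpha:
        return "reject the null hypothesis: the groups differ on the outcome"
    return "accept the null hypothesis: no effect of the treatment/predictor"

print(decide(0.03))  # small p -> reject
print(decide(0.40))  # large p -> accept
```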

12. What is effect size and how is it calculated?

- Reflects the degree to which a comparison and treatment group differ on a particular measure (e.g., recidivism).
- The amount of variance among scores in a study accounted for by the independent variable.

6. a. What is a Type I error?

- Rejecting the null hypothesis when it is in fact true is called a Type I error.
- The error of rejecting the null hypothesis when in fact it is true (also called a "false positive"): you think you found a cause-effect relationship, but one is not there.

P-value

- The probability (ranging from zero to one) that the results observed in a study (or results more extreme) could have occurred by chance.
- We accept a p value of 0.05 or below as statistically significant. That means a chance of 1 in 20, which is not very unlikely.

P-value

- The probability (ranging from zero to one) that the results observed in a study (or results more extreme) could have occurred by chance.
- Convention is that we accept a p value of 0.05 or below as statistically significant. That means a chance of 1 in 20, which is not very unlikely. This convention has no solid basis, other than being the number chosen many years ago. When many comparisons are being made, statistical significance can occur just by chance. A more stringent rule is to use a p value of 0.01 (1 in 100) or below as statistically significant, though some folk get hot under the collar when you do it.

- Inferential statistics are used to test hypotheses.
- Inferential statistics are used to make generalizations from a sample to a population.
- There are two sources of error that may result in a sample's being different from (not representative of) the population from which it is drawn. These are:
  1. Sampling error - chance, random error
  2. Sample bias - constant error, due to inadequate design

- The reason for calculating an inferential statistic is to get a p value (p = probability).
- The p value is the probability that the samples are from the same population with regard to the dependent variable (outcome).
- Usually, the hypothesis we are testing is that the samples (groups) differ on the outcome.
- The p value is directly related to the null hypothesis.

13. What is Cohen's D? d = (M1 - M2) / SDpooled

- A measure of effect size or practical significance between two independent groups.
- A measure of effect size that assesses the difference between two means in terms of standard deviation: d = (mean of population 1 - mean of population 2) divided by the standard deviation.

Bonferroni procedure

- A multiple-comparisons procedure in which the familywise error rate is divided by the number of comparisons, as a control for Type I error.
- It is used when multiple comparisons are done.

8. b. How can a Type II error be avoided? cont'd

- A quantification of the study objectives, i.e. deciding what difference is biologically or clinically meaningful and worthwhile detecting.
- In addition, you will sometimes need to have an idea about expected sample statistics, e.g. the standard deviation. This can be known from previous studies.

7. a. What is the Bonferroni procedure?

-an adjustment where the alpha level is made more stringent when a statistical analysis is used multiple times on data gathered from the same participants

7. b. Why is it used?

- As a way of "correcting" the Type I error rate for the number of comparisons: divide the alpha level by the number of comparisons to keep the experimentwise alpha at .05.
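
For example, a sketch of the correction (the p values are invented for illustration):

```python
alpha, n_comparisons = 0.05, 5
bonferroni_alpha = alpha / n_comparisons  # 0.05 / 5 = 0.01 per comparison

p_values = [0.004, 0.03, 0.008, 0.20, 0.012]  # hypothetical test results
significant = [p <= bonferroni_alpha for p in p_values]
print(significant)  # [True, False, True, False, False]
```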

r squared

- Explains how much of the variation in the dependent variable can be explained by variation in the independent variables.
- The closer it gets to 100%, the stronger the evidence that the correlation is beyond chance, and we reject the null hypothesis.
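
A quick sketch of computing r squared from a correlation, assuming SciPy is available (the x and y data are invented):

```python
from scipy import stats

x = [1, 2, 3, 4, 5]  # independent variable (hypothetical data)
y = [2, 4, 5, 4, 6]  # dependent variable (hypothetical data)

r, p = stats.pearsonr(x, y)
print(r ** 2)  # proportion of variance in y explained by x
```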

10. How does sample size affect statistical power?

- Increasing the sample size will increase statistical power and help detect significant differences that are really present.
- But too large a sample size can detect differences that are statistically significant but not practically meaningful.

alpha level of significance

- The probability of making a Type I error;
- the conditional probability of rejecting the null hypothesis when it is actually true.

statistical power

- The probability that the study will give a significant result if the research hypothesis is true.
- The probability of concluding that there is a difference when one does exist.

6. b. How can the researcher control for this?

- Set the alpha (α) level at 0.05.
- Set the alpha level stringent (low) enough to detect a significant difference, but not so high as to detect a difference when there is none.
- The more stringent the alpha level, the less likely the chance of committing a Type I error.

What is effect size and how is it calculated?

- A statistical calculation (a metric) that estimates the clinical significance of a particular intervention.
- It is unaffected by sample size.

Type I error info

- The probability of a Type I error is α.
- The significance level α is the probability of making the wrong decision when the null hypothesis is true.

1. What is the process of hypothesis testing? pg 169

1. State the null and the alternative hypotheses.
2. Set the alpha level (significance level) to use in evaluating the hypothesis.
3. Gather data.
4. Perform the statistical test to produce calculated values.
5. Compare the calculated value to the critical value to determine statistical significance.
6. Make the hypothesis decision.
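
The six steps map directly onto a worked example. A sketch using an independent-samples t test via SciPy (the scores are invented; step 5 is done here by comparing p to alpha, which is equivalent to comparing the calculated value to the critical value):

```python
from scipy import stats

# 1. H0: the two group means are equal; H1: they differ.
# 2. Set the alpha level.
alpha = 0.05

# 3. Gather data (hypothetical scores).
group_a = [12, 15, 14, 10, 13, 16]
group_b = [9, 11, 8, 12, 10, 9]

# 4. Perform the statistical test to produce calculated values.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# 5-6. Compare to the criterion and make the hypothesis decision.
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject H0")
```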

What is statistical power?

In plain English, statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected. If statistical power is high, the probability of making a Type II error, or concluding there is no effect when, in fact, there is one, goes down.

How can the researcher control for this?

Type I errors can be controlled. The value of alpha, which is the level of significance we selected, has a direct bearing on Type I errors: alpha is the maximum probability of making a Type I error.

8. b. How can a Type II error be avoided?

You can avoid making a Type II error, and increase the power of the test to uncover a difference when there really is one, mainly by increasing the sample size. To calculate the required sample size, you must decide beforehand on:
- the required probability α of a Type I error, i.e. the required significance level (two-sided);
- the required probability β of a Type II error, i.e. the required power 1 - β of the test;
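
Once α, the required power 1 - β, and the meaningful effect size d are fixed, a standard normal-approximation formula gives the required n per group. A sketch (the function name is mine; exact t-based calculations give slightly larger answers):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample comparison:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2 (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(d=0.5))  # -> 63 per group (exact t-test answer is ~64)
```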

