PSYC 2005 Reading #8

example of type II error in psychology

- depression example:
- in truth, the new therapy is better at treating patients' depression than the usual therapy method
- however, when you conduct your study, the results for the sample of patients are not strong enough to allow you to reject the null hypothesis
- for example, the sampled patient may be a rare case who responds to the new therapy just the same as to the usual therapy
- the results would not be significant, and you would decide the research hypothesis (that the new therapy is different from the usual therapy) is not supported
- by not rejecting the null hypothesis, and thus declining to draw a conclusion, you would be making a type II error without knowing it

why are decision errors possible in hypothesis testing?

because you are making decisions about populations based on information in samples; hypothesis testing is all about probabilities, but it cannot eliminate the possibility of errors entirely

what is the probability of making a type II error called?

beta --> β

effect size conventions

standard rules about what to consider a small, medium, and large effect size, based on what is typical in psychology research; also known as Cohen's conventions. the construct came from Jacob Cohen, who recommended that, for the kind of situation we are considering in this chapter, we should think of a small effect size as about d = 0.20

medium effect size: what is the d and % overlap?

d = 0.50, which means an overlap of about 67%

large effect size: what is the d and % overlap?

d = 0.80. this is only about a 53% overlap
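
The overlap percentages attached to Cohen's conventions (85% for d = .20, 67% for d = .50, 53% for d = .80) can be reproduced from the normal curve. The sketch below assumes two normal populations with equal standard deviations and uses Cohen's U1 non-overlap measure; only the Python standard library is needed:

```python
from math import erf, sqrt

def normal_cdf(x):
    # standard normal cumulative distribution function, via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def percent_overlap(d):
    """Percent overlap of two equal-SD normal populations whose means
    differ by d standard deviations (100% minus Cohen's U1 non-overlap)."""
    phi = normal_cdf(d / 2)
    u1 = (2 * phi - 1) / phi  # Cohen's U1: proportion of non-overlap
    return 100 * (1 - u1)

for d in (0.20, 0.50, 0.80):
    print(f"d = {d:.2f}: overlap = {percent_overlap(d):.0f}%")
# d = 0.20: overlap = 85%
# d = 0.50: overlap = 67%
# d = 0.80: overlap = 53%
```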

standardized effect size

divide the raw score effect size for each study by its respective population standard deviation
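
As a minimal sketch of that division (the study numbers below are hypothetical, not from the text): two studies whose raw score effect sizes look very different can land on the same standardized effect size once each is divided by its own population standard deviation.

```python
def standardized_effect_size(mean_1, mean_2, population_sd):
    # raw score effect size (difference between population means)
    # divided by the population standard deviation
    return (mean_1 - mean_2) / population_sd

# hypothetical study on a 0-400 scale: raw effect = 20, SD = 40
d_a = standardized_effect_size(220, 200, 40)
# hypothetical study on a 1-10 scale: raw effect = 0.8, SD = 1.6
d_b = standardized_effect_size(6.0, 5.2, 1.6)

print(round(d_a, 2), round(d_b, 2))  # both 0.5: the effects are comparable
```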

effect size indicates the extent to which two populations __ ___ overlap

do not; how much they are separated due to the experimental procedure

in hypothesis-testing situations you don't know the mean of population 1, so you actually use an

estimated mean; you are actually figuring an estimated effect size

cohen's effect size conventions provide a _____ for deciding on the importance of the effect of a study in relation to what is typical in psychology

guide; they are only a guide: it is important to consider the magnitude of effect that is typically found in that specific area of research, as well as the potential practical or clinical implications of such an effect

we often want to know not only whether a result is significant, but...

how big the effect is

effect size _______ with greater differences between means

increases

when it comes to setting significance levels, protecting against one kind of decision error _______ the chance of making the other

increases

the relationship between type I and type II errors is

inverse.
- insurance policy against a type I error: set the significance level at p < 0.001. HOWEVER, that increases your risk of committing a type II error, because the results have to be quite strong for you to reject the null hypothesis, even if the research hypothesis is true.
- insurance policy against a type II error: set the significance level at p < 0.20. HOWEVER, that increases your risk of committing a type I error, because even if the null hypothesis is true, it is fairly easy to get a significant result just by accidentally getting a sample that happens to be higher or lower than the general population.
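
The trade-off can be seen in a small simulation (a sketch with made-up population values, using a one-sample two-tailed z test with the population SD treated as known): under a true null hypothesis, a lenient alpha rejects far more often (type I errors), while under a true effect, a strict alpha fails to reject far more often (type II errors).

```python
import random
from statistics import NormalDist, mean

random.seed(0)

def rejection_rate(true_mean, alpha, n=25, sims=4000, null_mean=0.0, sd=1.0):
    """Fraction of simulated studies that reject H0 with a two-tailed
    z test (population SD assumed known)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # cutoff for this alpha
    rejections = 0
    for _ in range(sims):
        sample = [random.gauss(true_mean, sd) for _ in range(n)]
        z = (mean(sample) - null_mean) / (sd / n ** 0.5)
        rejections += abs(z) > z_crit
    return rejections / sims

# null hypothesis true: the rejection rate IS the type I error rate (~alpha)
print(rejection_rate(0.0, 0.001), rejection_rate(0.0, 0.20))
# research hypothesis true (a 0.5-SD effect): the strict alpha misses it far
# more often -- each miss is a type II error
print(rejection_rate(0.5, 0.001), rejection_rate(0.5, 0.20))
```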

what is the chance of making a type I error?

it is the same as the significance level you set (ex. p < 0.05)

does knowing the statistical significance tell you about the extent of the effect (how big or small it is)?

knowing statistical significance does not give you much information about the size of the effect; significance tells us that the results of the experiment should convince us that there is an effect, but it does not tell us how big that effect is

do researchers know when they've committed a type II error?

no

what kinds of psychologists/researchers are concerned about type II errors? why?

psychologists/researchers who are interested in practical applications --> because if a type II error occurs, a valuable practical procedure (e.g., a new therapy) might not be used when it really should be

can researchers tell when they have made a type I error?

researchers cannot tell when they have made a type I error. however they can try to carry out studies so that the chance of making a type I error is as small as possible

how can researchers carry out their studies to reduce the chance of making a type II error?

set a very lenient significance level, such as p < 0.10 or even p < 0.20. that way, even if the study produces only a very small effect, the effect has a good chance of being significant

decision errors

situations in which the right procedures lead to the wrong decisions; how, in spite of doing all your figuring correctly, your conclusions from hypothesis testing can still be incorrect. it is not about making mistakes in calculations or using the wrong procedures; rather, decision errors are incorrect conclusions in hypothesis testing in relation to the real (but unknown) situation, such as deciding the null hypothesis is false when it is really true

the lower the alpha, the ______ the chance of a type I error

smaller

raw score effect size

the effect size expressed as the difference between the Population 1 mean and the Population 2 mean (ex. 220 - 200 = 20, so 20 is the raw score effect size); called "raw score" because the effect size is given in terms of the raw scores on the measure

how do researchers lower their risk of committing a type I error?

they decrease the alpha from p < 0.05 to, potentially, p < 0.001. this way, the result of a study has to be very extreme for the hypothesis-testing process to reject the null hypothesis

type II error (false negative)

not rejecting the null hypothesis when in reality the null hypothesis is false; the hypothesis-testing procedure leads you to decide that the results of the study are inconclusive when in reality the research hypothesis is true; failing to get a statistically significant result when in fact the research hypothesis is true

what if you want to compare this effect size with the result of a similar study that used a different measure? (e.g., your measure used a 0 to 400 scale while another study used a 1-10 scale)

use a standardized effect size

estimated effect size

the effect size figured using an estimated mean, since in hypothesis-testing situations you don't know the mean of population 1

example/extension of decision errors

we decide to reject the null hypothesis only if a sample's mean is so extreme that there is a very small probability (say, less than 5%) that we could have gotten such an extreme sample if the null hypothesis is true. but a very small probability is not the same as a zero probability. therefore, in spite of your best intentions, decision errors are always possible.

type I error (false positive)

when you conclude that the study supports the research hypothesis when in reality the research hypothesis is false or if you reject the null hypothesis when in fact the null hypothesis is true

what is the trade-off between the two conflicting concerns for committing a type I or type II error?

worked out by a compromise: the standard 5% (p < 0.05) and 1% (p < 0.01) significance levels

is there a cost in setting the significance level at too extreme a level?

yes; this results in potentially committing a type II error

if you set a very extreme significance level, such as p < 0.001, you run a different kind of risk...

you may carry out a study in which, in reality, the research hypothesis is true but the result does not come out extreme enough to reject the null hypothesis

example of a type I error in psychology

- a researcher is testing whether or not a new therapy method will yield the same response (responding well to the treatment) as the current therapy method
- suppose the new therapy is, in general, about equally effective as the usual therapy
- if the researcher randomly selected a sample of one depressed patient to study, they might just happen to pick a patient whose depression would respond unusually well to the new therapy
- randomly selecting a sample patient like this is unlikely, but such extreme samples are possible
- should this happen, the researcher would reject the null hypothesis and conclude that the new therapy is different from the usual therapy. that decision to reject the null hypothesis would be wrong: a type I error
- what reassures researchers is that they know from the logic of hypothesis testing that the probability of getting a sample like this if there really is no difference, and thus making a type I error, is kept low (less than 5% if you use the .05 significance level)

does the Publication Manual of the American Psychological Association (2009) recommend that you include some measure of effect size in research results?

- yes: the Publication Manual of the American Psychological Association (2009), the accepted standard for how to present psychology research results, recommends that some measure of effect size be included along with the results of significance tests
- whenever you use a hypothesis-testing procedure, you should also figure the effect size
- 6th step in hypothesis testing: figure the effect size

why would we want to compare the results of this procedure to that of other procedures studied in the past? why do psychologists look at more than just the comparison between statistical significant results?

- you might think that psychologists could just compare studies using significance levels, and it is true that we are more confident of a result that is significant at the .01 level than at the .05 level
- however, significance, including whether you reach the .01 or .05 level, is influenced by both the effect size and the number of people in the study
- thus, a study that is significant at the .05 level with a sample of 20 people would have had to have a much bigger effect size than a study of 1,000 people that came out significant at the .01 level
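
That interaction between effect size and sample size can be sketched numerically. Assuming a one-sample z test with a known population SD, the z statistic is roughly d × √N, so a small effect with a large sample can reach a stricter significance level than a bigger effect with a small sample (the d and N values below are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def two_tailed_p(d, n):
    """Two-tailed p value for a one-sample z test whose sample mean sits
    d standard deviations from the population mean (population SD known)."""
    z = d * sqrt(n)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# larger effect, small sample: significant at .05 but not at .01
print(two_tailed_p(0.50, 20) < 0.05, two_tailed_p(0.50, 20) < 0.01)   # True False
# much smaller effect, large sample: significant even at .01
print(two_tailed_p(0.10, 1000) < 0.01)                                # True
```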

what is the significance level (the chance of making a type I error) called?

alpha, represented by the Greek letter α

effect size plays an important role for which two other important statistical topics?

1. meta-analysis 2. power

what are the two types of decision errors in hypothesis testing?

1. type I error 2. type II error

even when you set the probability at the conventional .05 or .01 levels, you will still make a type I error sometimes... what percentages would that be?

5% or 1% of the time, respectively (depending on which significance level you set)

small effect size: with a d of .20, the populations of individuals have an overlap of about __%

85

in hypothesis testing, effect size is

a measure of the difference between population means; standardized measure of difference (lack of overlap) between populations

