Unit 3 Chapter 9 Stats


In the controlled environment of the laboratory simple designs often work well.

Field experiments and experiments with living subjects are exposed to more variable conditions and deal with more variable subjects.

The simplest form of control is comparison. Experiments should compare two or more treatments in order to avoid confounding of the effect of a treatment with other influences, such as lurking variables.

Randomization uses chance to assign subjects to the treatments. Randomization creates treatment groups that are similar (except for chance variation) before the treatments are applied. Randomization and comparison together prevent bias, or systematic favoritism, in experiments.

The Logic of Randomized Comparative Experiments

Randomized comparative experiments are designed to give good evidence that differences in the treatments actually cause the differences we see in the response.

Principles of Experimental Design

The basic principles of statistical design of experiments are: 1. Control the effects of lurking variables on the response, most simply by comparing two or more treatments. 2. Randomize -- use chance to assign subjects to treatments. 3. Use enough subjects in each group to reduce chance variation in the results.

Confounding

Two variables (explanatory variables or lurking variables) are confounded when their effects on a response variable cannot be distinguished from each other.

The logic is as follows:

-Random assignment of subjects forms groups that should be similar in all respects before the treatments are applied. -Comparative design ensures that influences other than the experimental treatments operate equally on all groups. -Therefore, differences in average response must be due either to treatments or to the play of chance in the random assignment of subjects to the treatments.

Good experiments require attention to detail as well as good statistical design. Many behavioral and medical experiments are double-blind. Some give a placebo to a control group. Lack of realism in an experiment can prevent us from generalizing its results.

A matched pairs design compares just two treatments. In some matched pairs designs, each subject receives both treatments in a random order. In others, the subjects are matched in pairs as closely as possible, and each subject in a pair receives one of the treatments.

Matched Pairs Design

A matched pairs design compares two treatments. Choose pairs of subjects that are as closely matched as possible. Use chance to decide which subject in a pair gets the first treatment; the other subject in the pair gets the other treatment. Sometimes each "pair" in a matched pairs design consists of just one subject, who gets both treatments one after the other. Use chance to decide the order in which subjects receive the treatments.
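The coin-flip step for matched pairs can be sketched in code. This is a minimal illustration, not from the text; the subject names and treatment labels are made up:

```python
import random

def assign_matched_pairs(pairs, seed=None):
    """For each (subject_a, subject_b) pair, flip a fair coin to decide
    which subject gets Treatment 1; the other gets Treatment 2."""
    rng = random.Random(seed)
    assignments = []
    for a, b in pairs:
        if rng.random() < 0.5:
            assignments.append({a: "Treatment 1", b: "Treatment 2"})
        else:
            assignments.append({a: "Treatment 2", b: "Treatment 1"})
    return assignments

# Hypothetical pairs of closely matched subjects.
pairs = [("Ann", "Bob"), ("Cara", "Dan")]
print(assign_matched_pairs(pairs, seed=1))
```

Because the coin is flipped independently within each pair, any influence shared by a pair (age, ability, health) affects both treatments equally.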

Randomized Comparative Experiment

An experiment that uses both comparison of two or more treatments and random assignment of subjects to treatment is a randomized comparative experiment.

Statistical Significance

An observed effect so large that it would rarely occur by chance is called statistically significant.
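One way to see how rarely an observed effect would occur by chance is to re-randomize the group labels many times and count how often a difference at least as large appears. This permutation approach is an illustration of the idea, not a method named in the text, and the data are invented:

```python
import random

def permutation_p_value(group1, group2, n_perm=10000, seed=0):
    """Estimate how often a difference in group means at least as large
    as the observed one would arise by chance alone, by repeatedly
    re-randomizing which values fall in which group."""
    rng = random.Random(seed)
    observed = abs(sum(group1) / len(group1) - sum(group2) / len(group2))
    pooled = group1 + group2
    n1 = len(group1)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n1]) / n1
                   - sum(pooled[n1:]) / (len(pooled) - n1))
        if diff >= observed:
            count += 1
    return count / n_perm

# Made-up responses: a large gap between groups yields a small proportion,
# i.e., the observed effect would rarely occur by chance.
print(permutation_p_value([10, 11, 12, 13], [20, 21, 22, 23]))
```

A small result (close to 0) means the observed difference would rarely occur by chance, which is exactly what "statistically significant" describes.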

You can carry out randomization by using software or by giving labels to the subjects and using a table of random digits to choose the treatment groups.
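The software route amounts to shuffling the subject labels and dealing them out to the groups. A minimal sketch, with invented subject labels:

```python
import random

def randomize_groups(subjects, n_treatments=2, seed=None):
    """Randomly allocate subjects among treatment groups by shuffling
    the list of labels and dealing them out in turn."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    groups = [[] for _ in range(n_treatments)]
    for i, subject in enumerate(shuffled):
        groups[i % n_treatments].append(subject)
    return groups

# Hypothetical labeled subjects, as in the random-digits-table method.
subjects = ["S01", "S02", "S03", "S04", "S05", "S06"]
print(randomize_groups(subjects, n_treatments=2, seed=42))
```

Dealing the shuffled labels out in turn keeps the group sizes as equal as possible, which matches the goal of comparing treatments on similar groups.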

Applying a treatment to many subjects reduces the role of chance variation and makes the experiment more sensitive to differences among the treatments.

Matched Pairs Designs

Completely randomized designs are the simplest statistical designs for experiments. They illustrate clearly the principles of control, randomization, and the use of an adequate number of subjects. However, more elaborate designs are common. In particular, matching the subjects in various ways can produce more precise results than simple randomization. One common design that combines matching with randomization is the matched pairs design.

Experiment

Deliberately imposes some treatment on individuals in order to observe their response; the purpose of an experiment is to study whether or not a treatment causes a change in the response.

Completely Randomized Design

In a completely randomized experimental design, all the subjects are allocated at random among all the treatments.

Even well-designed experiments often face another problem: lack of realism. Practical constraints may mean that the subjects, treatments, or setting of an experiment don't realistically duplicate the conditions we really want to study.

Lack of realism can limit our ability to apply the conclusions of an experiment to the settings of greatest interest. Statistical analysis of an experiment cannot tell us how far the results will generalize. Nonetheless, the randomized comparative experiment, because of its ability to give convincing evidence for causation, is one of the most important ideas in statistics.

Observational study

Observes individuals and measures variables of interest but does not attempt to influence the responses. The purpose of an observational study is to describe some group or situation.

The remedy for confounding is to do a comparative experiment in which some students are taught in the classroom and other, similar students take the course online. The classroom group is called a control group. Most well-designed experiments compare two or more treatments.

Personal choice will bias our results in the same way that volunteers bias the results of online opinion polls. The solution to the problem of bias in sampling is random selection, and the same is true in experiments. The subjects assigned to any treatment should be chosen at random from the available subjects.

In an experiment, we impose one or more treatments on individuals, often called subjects. Each treatment is a combination of values of the explanatory variables, which we call factors.

The design of an experiment describes the choice of treatments and the manner in which subjects are assigned to the treatments. The basic principles of statistical design of experiments are control and randomization to combat bias, and the use of enough subjects to reduce chance variation.

Cautions About Experimentation

The logic of a randomized comparative experiment depends on our ability to treat all these subjects identically in every way except for the actual treatments being compared. Good experiments therefore require careful attention to details to ensure that all subjects really are treated identically.

We can produce data intended to answer specific questions by observational studies or experiments. Sample surveys that select a part of a population to represent the whole are one type of observational study. Experiments, unlike observational studies, actively impose some treatment on the subjects of the experiment.

Variables are confounded when their effects on a response can't be distinguished from each other. Observational studies and uncontrolled experiments often fail to show that changes in an explanatory variable actually cause changes in a response variable, because the explanatory variable is confounded with lurking variables.

