Research methods midterm

The premise that the Minnesota Attitude Inventory is a valid measure of attitude toward school is an example of a

basic assumption

quantitative research

Data collected through measurement; analysis by numerical comparison (statistics)

5 characteristics of research

systematic, logical, empirical, reductive, replicable

maturation

Processes within the participants that operate as a result of time passing (aging, fatigue, hunger). Ex: a physical education test is given in the fall and again in the spring to elementary students; improvements can be due to growing larger and stronger and thus being able to perform better.

replicable

Research process recorded, enabling replication

empirical

Researcher collects data on which to base decisions

A research hypothesis

is directly testable

The researcher is proceeding on the belief that self-concept can be changed in a summer (three months). This is an example of a(n)

basic assumption

Just before the posttest, students in the control group were shown a film in another class that related directly to the subject matter being studied in the treatment group. This represents what kind of threat to internal validity?

history

Interviewers may become more skilled with practice, which could cause differences between the results of participants interviewed early in the study and those of participants interviewed later. This threat to internal validity is known as

instrumentation

logical

Examination of procedures used allows researchers to evaluate the conclusions drawn

instrumentation

Changes in instrument calibration, including lack of agreement within and between observers. Ex: using a spring to measure force; unless the spring is calibrated regularly, its tension will decrease, resulting in different readings. Different observers may not rate the same performance in the same way (this can be avoided by training observers or by using the same observer).

selection bias

Choosing comparison groups in a nonrandom manner. Ex: if the groups are different to begin with, any observed differences result from the initial selection bias and not from the treatment.

stability reliability

Coefficient of reliability measured by the test-retest method on different days
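
As a rough illustration (not from the course materials), a stability coefficient can be estimated as the Pearson correlation between two administrations of the same test; the scores below are made-up example data.

import numpy as np

# Same eight people tested on two different days (hypothetical scores)
day1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
day2 = np.array([13, 14, 12, 17, 15, 16, 12, 18])

# Test-retest (stability) reliability coefficient
r = np.corrcoef(day1, day2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")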

construct validity

Degree to which a test measures a hypothetical construct; usually established by relating the test to some behavior. Ex: what you are trying to measure is a non-observable thing, such as happiness or depression levels. Ex: a person with high sports performance is expected to show certain behaviors, such as complimenting opponents more often than someone with low sports performance. An observer could compare the two by counting the number of times the high performer and the low performer compliment their opponents.

The internal validity of an experimental design is concerned with what question?

Did the independent variable really produce a change in the dependent variable?

internal validity

Did the treatment (IV) cause the change in the outcome (DV)? Internal validity is emphasized in basic research.

testing

The effects of one test on subsequent administrations of the same test. Ex: taking a multiple-choice test and then retaking it a few days later. Ex: having beginner tennis players do 20 serves one day and, without practicing, do them again a few days later; they will usually do better the second time.

Validity is determined by finding the correlation between scores on

a test and some independent criterion

intuition

Knowledge considered to be common sense or self-evident. Many self-evident truths, however, are subsequently found to be false.

3 criteria for cause and effect

The cause must precede the effect in time; the cause and effect must be correlated with each other; the correlation between cause and effect cannot be explained by another variable.

The author of a test of anxiety proceeds on the premise, based on the literature, that the performance of people with high anxiety suffers when they are under stress. In an experiment, the author finds that people who scored as highly anxious on the test performed more poorly on a stressful task. The test author maintained that this finding was evidence of what kind of validity?

construct

Comparing test items with the course objectives (course topics) checks which type of validity?

content

A physical education teacher develops a skill test in volleyball. After administering the test to 50 students, she asks the volleyball coach to rate the students on volleyball skills. She then correlates the students' test scores with the coach's rating. This is an example of what type of validity?

criterion validity
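
A minimal sketch of that correlation, using invented skill-test scores and coach ratings (the ratings are ordinal, so a rank correlation is one reasonable choice):

from scipy.stats import spearmanr

# Hypothetical data: volleyball skill test scores and the coach's 1-5 ratings
test_scores = [42, 55, 38, 61, 47, 52, 35, 58]
coach_rating = [3, 4, 2, 5, 3, 4, 1, 5]

# Rank correlation between the test and the criterion (coach's rating)
rho, p_value = spearmanr(test_scores, coach_rating)
print(f"criterion validity coefficient rho = {rho:.2f} (p = {p_value:.3f})")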

qualitative research

data collected through participant observation

logical validity

Degree to which a measure obviously involves the performance being measured (also known as face validity). Ex: measuring speed using a watch. Ex: a static balance test that consists of balancing on one foot. Ex: a speed-of-movement test in which a person is timed while running a specific distance.

predictive validity (type of criterion validity)

Degree to which scores on predictor variables can accurately predict criterion scores in the future. Ex: using ACT scores to predict college performance; correlating scores on a test with scores on a criterion measure taken in the future.

criterion validity

Degree to which scores on a test are related to some recognized standard or criterion. Two types: concurrent and predictive.

The number of participants, the selection of the self-concept inventory, and the length of the study are all examples of

delimitations

rationalistic method

derive knowledge through reasoning. A good example is the following classic syllogism: Basketball players are tall. Tom Thumb is a basketball player. Therefore, Tom Thumb is tall.

In a research study in which the treatment involved quite intense physical training, 40% of the participants in the treatment group dropped out as compared with 5% of the control group. This threat to internal validity is called

experimental mortality

experimental research

involves manipulation of treatments in an attempt to establish cause and effect

Special language that is regularly used in a particular field but which may not be meaningful to people outside the field is called

jargon

Recognizing that grade point average may not completely reflect success in school is an example of a

limitation

The researcher states that he is aware that responses to questions on a self-concept scale may not always be what the person really believes. This is an example of a(n)

limitation

basic research

limited direct application but researcher has careful control of conditions

concurrent validity (type of criterion validity)

The measuring instrument is correlated with some criterion that is administered concurrently, or at about the same time. Ex: measuring maximal oxygen consumption as cardiovascular fitness; instead of using a complex O2-consumption test, the researcher decides to use a stair-stepper test. He tests a group of participants on both the O2 test and the stair-stepper test to determine whether the stair stepper is a valid measure of O2 consumption.

operational definition

An observable phenomenon that enables the researcher to test empirically whether the predicted outcomes can be supported. Ex: if a study looks at the effect of music on fatigue, you need to define fatigue. You would not use the word fatigue in your purpose statement; instead you would say it is when the participant is unable to maintain a pedaling rate of 50 rev per min for 10 s (or however you are going to classify fatigue in your study). Ex: if you are observing learning, you need to define learning as something like 5 successful trials or some other observable criterion.

The researcher states that self-concept is represented by scores on the Tennessee Self Concept Scale. This is an example of a(n)

operational definition

Defining volleyball skill as a score on the AAHPERD Volleyball Test is a(n):

operational definition

The researcher's statement that "the boys in the adventure program will make significantly greater gains in self-concept than the boys in the control group" is an example of a(n)

research hypothesis

At the end of a well-written introduction, the reader should be able to predict the

statement of the problem (purpose of study)

In experimental design, when comparisons are made of groups that have been selected on the basis of their extreme scores, the posttest means of the groups tend to move toward the mean of the entire population from which the extreme groups were selected. This threat to internal validity is called

statistical regression

4 unscientific methods of problem solving

tenacity, intuition, authority, rationalistic method

A pretest of knowledge about microcomputers is given to a group of students 5 min prior to a film on the subject. A posttest given 10 min after the film showed a 10-point gain from the pretest. The researcher concludes that the film produced the gain. Which of the following is likely the threat to internal validity?

testing

assumptions

What you assume to be true in the study. Ex: you assume that the people in your fitness program are not doing any other type of exercise, are following the rules, and are being truthful. Ex: in a skinfold measurement study, you assume that the caliper is a valid and reliable instrument for measuring subcutaneous fat, that skinfold measurements taken at the chosen body sites indicate the fat stores in the limbs and trunk, and that the sum of all skinfolds represents a valid indication of body fatness.

independent variable

"cause" variable manipulated also categorical or moderator variables

dependent variable

"effect" variable

A researcher finds that the control group performs above its usual performance when compared with the experimental group. This effect is named the

Avis effect

history

Events occurring during the experiment that are not part of the treatment. Ex: in a study evaluating the effect of physical education on the physical fitness of fifth graders, 60% of the children also participated in a recreational soccer program. The soccer program is likely to produce fitness benefits that are difficult to separate from the benefits of the physical education program.

Individuals performing well merely because they are being observed (and not necessarily because of any effect of treatment) are considered to be under the influence of the

Hawthorne effect

Reliability

How consistent a measure is. If you measure more than once, will it give you the same answer? A test can be reliable but not valid, but a test cannot be valid without being reliable (you must be able to get the same answer every time for the measure to be reliable).

experimental mortality

Loss of participants from comparison groups for nonrandom reasons (differential loss of participants). Ex: participants get bored, or the treatment is hard or time-consuming.

authority

Reference to some authority has long been used as a source of knowledge. Although this approach is not necessarily invalid, it does depend on the authority

reductive

Researcher takes individual data and uses them to establish general relationships

limitations

A shortcoming or influence that either cannot be controlled or is the result of the delimitations imposed by the investigator. Very similar to an extraneous variable: something that cannot be controlled but that can affect your study. Ex: in an exercise study, participant dehydration or overconsumption of caffeine.

interaction of selection bias and experimental treatment

When a group is selected on some characteristic, the treatment may work only on groups possessing that characteristic. Ex: a drug education program may be more effective in changing attitudes about drugs among first-year college students than among medical students who are already aware of drugs' effects.

threats of internal validity (9)

history, maturation, testing, instrumentation, statistical regression, selection bias, experimental mortality, selection-maturation interaction, expectancy

The extent to which the results of a study can be attributed to the treatments used in the study is the definition of what kind of validity?

internal validity

non-experimental research

no manipulation of treatments

If a thermometer measured the temperature in an oven as 400° five days in a row when the temperature was actually 337°, this measuring instrument would be considered

reliable but not valid

reliability altered by...

Behavior of the tester, behavior of the participants, calibration

content validity

Degree to which a test (usually in educational settings) adequately samples what was covered in the course. Ex: a driving test covers what was taught in the driving class.

validity

The degree to which a test or instrument measures what it is supposed to measure. How accurate is your measurement?

external validity

The degree to which results can be generalized: to what populations, settings, or treatments can the outcome be generalized? External validity is emphasized in applied research.

relative or interactive effects of testing

The pretest may make the participant more aware of or sensitive to the upcoming treatment; as a result, the treatment is not as effective without the pretest. Ex: a fitness pretest may make participants aware of their low fitness levels and motivate them to follow the prescribed program closely, while an untested population would not be aware of their levels, so the program may be less effective.

reactive effects of experimental arrangement

Treatments that are effective in constrained situations (labs) may not be effective in less constrained settings (the real world). Ex: lab versus real-life conclusions; the lab is a very controlled, different environment.

categorical variable

Type of independent variable that cannot be manipulated (age, gender)

Known group difference method

Used for establishing construct validity; test scores of groups that should differ on a trait or ability are compared. Ex: when validating an anaerobic power test, compare the scores of sprinters/jumpers and distance runners. Sprinters/jumpers should have greater anaerobic power, so if the test scores show that when compared with distance runners, that provides evidence that the test measures anaerobic power.
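
A minimal sketch of a known-group comparison, using invented scores; a clear difference in the expected direction between the groups supports construct validity:

from scipy.stats import ttest_ind

# Hypothetical anaerobic-power test scores for two groups known to differ
sprinters_jumpers = [78, 82, 75, 88, 80, 84]
distance_runners = [62, 58, 65, 60, 57, 63]

# Independent-samples t test comparing the group means
t_stat, p_value = ttest_ind(sprinters_jumpers, distance_runners)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")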

When we say that research is empirical, we mean that the researcher

collects data on which to base decisions

Reliability of a measure may be established by

giving the test to the same people on two different occasions and correlating the two sets of scores; correlating scores from alternate forms of the same test

applied research

has direct value to practitioners but researcher has limited control over research setting

statistical regression

The fact that groups selected on the basis of extreme scores are not as extreme on subsequent testing. Ex: comparing the behavior of children on a playground on an activity scale (very active to very inactive); at the next observation the very active children are likely to be less active and the inactive children more active (both moving away from the extremes and toward the overall average).
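
A small simulation (invented numbers, not course material) illustrates the effect: the children selected as "very active" on the first observation score closer to the population average the next time, simply because part of their extreme score was noise.

import numpy as np

rng = np.random.default_rng(0)
true_activity = rng.normal(50, 10, size=1000)            # stable trait
pretest = true_activity + rng.normal(0, 5, size=1000)    # trait + measurement noise
posttest = true_activity + rng.normal(0, 5, size=1000)   # new, independent noise

extreme = pretest > np.percentile(pretest, 90)           # "very active" group
print(f"population mean:         {pretest.mean():.1f}")
print(f"extreme group, pretest:  {pretest[extreme].mean():.1f}")
print(f"extreme group, posttest: {posttest[extreme].mean():.1f}")  # moves toward the mean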

threats to external validity (4)

relative or interactive effects of testing, interaction of selection bias and experimental treatment, reactive effects of experimental arrangement, multiple-treatment interference

inter-tester reliability

Degree to which different testers can obtain the same scores on the same participants; also called objectivity. How can the objectivity of a test be improved? Clear instructions, training for the testers, and using the same rater.

expectancy

Experimenters' or testers' anticipation that certain participants will perform better. Ex: in an observational study, the observer may rate the posttest higher than the pretest because they expect change. In participants: substitutes on a basketball team may perform worse because they know they are not starters and the coach treats them differently.

delimitations

Limitations imposed by the researcher on the scope of the study; choices the researcher makes to define a workable research problem. Ex: the number of participants, the choice of the dependent variable, choosing only one gender, choosing the age of participants.

selection-maturation interaction

The passage of time affects one group but not the other in nonequivalent-group designs. Ex: studying the effect of a fitness program on 6-year-olds from different schools; the schools have different enrollment policies, so one group is actually 5 months older. This makes it hard to determine whether differences are due to the greater age or to the fitness program.

For a test to have good content validity, the following statement(s) must be true:

The test adequately samples what was covered in the course. The percentage of points for each topic area reflects the amount of emphasis given that topic.

multiple-treatment interference

When participants receive more than one treatment, the effects of previous treatments may influence subsequent ones. Ex: volleyball players learn two new types of moves to get into hitting position, yet learning one move may enhance or interfere with learning the other. A better design would have two separate groups learn the different moves.

evidence of validity (4)

logical validity, content validity, construct validity, criterion validity (2 types: concurrent and predictive)

research hypothesis

Hypothesis that is based on logical reasoning and predicts the outcome of the study

null hypothesis

Hypothesis stating that there are no differences among treatments (or no relationships among variables)

systematic

Variables are identified and labeled, followed by a research design that tests the relationships among those variables.

learning effect

Improvement in a score due to having already taken the test. How do we avoid this problem? Use a control group; familiarize people with the task and, once there is no more improvement, start the experiment; or administer the test only once.

tenacity

People sometimes cling to certain beliefs despite the lack of supporting evidence. Superstitions are good examples of the method called tenacity: "we have always done it this way."

control variable

Potential independent variable that is held consistent across all participants; a factor that could possibly influence the results and that is kept out of the study. Ex: gender or the fitness levels of the participants selected to be tested.

extraneous variable

Potential independent variables that are not controlled; factors that could affect the relationship between the independent and dependent variables but that are not included or controlled (usually brought up in the discussion section). Ex: researchers want to determine whether listening to fast-paced music improves performance during a marathon; extraneous variables might include the volunteers' physical condition, motivation to succeed, and overall energy levels on the day of the marathon.

The accuracy with which a 12-min run estimates maximal oxygen consumption in a group of male high school seniors represents

concurrent validity

