Chapter 12: Principles of Test Selection and Administration
what is the degree to which different raters agree in their test results over time or on repeated occasions, also known as objectivity?
interrater reliability
what is the lack of consistent scores by a given tester?
intrarater variability
what is a lack of consistent performance by the person being tested?
intrasubject variability
a difference between two sets of scores can arise from which four factors?
intrasubject variability, lack of interrater reliability, intrarater variability, and failure of the test itself to provide consistent results
what is the purpose of a pretest?
it allows the coach to design the training program in keeping with the athlete's initial training level and the overall program objectives
what is the process of collecting test data?
measurement
what is a test administered one or more times during the training period to assess progress and modify the program as needed to maximize benefits?
midtest
what is a procedure for assessing ability in a particular endeavor?
test
which kind of validity is the ability of a test to distinguish between two different constructs and is evidenced by a low correlation between the results of the test and those of tests of a different construct?
discriminant validity
what is the process of analyzing test results for the purpose of making decisions?
evaluation
what is this an example of: a coach examines the results of physical performance tests to determine whether the athlete's training program is effective in helping achieve the training goals or whether modifications in the program are needed?
evaluation
which kind of validity is the appearance to the athlete and other casual observers that the test measures what it is purported to measure?
face validity
what is a test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment?
field test
what is a periodic reevaluation, based on midtests administered at regular intervals during training, that monitors the athlete's progress and allows adjustment of the training program?
formative evaluation
what is a logical sequence of administering tests?
1. non-fatiguing tests (e.g., height, weight) 2. agility tests 3. maximum power and strength tests 4. sprint tests 5. local muscular endurance tests 6. fatiguing anaerobic capacity tests 7. aerobic capacity tests
tests should be separated by at least _____ minutes to prevent the effects of fatigue from confounding test results.
5
a college basketball coach would like to know which one of her players has the most muscular power. which of the following is the MOST valid test for measuring muscular power? a. vertical jump b. 1RM bench press c. 5RM squat d. 100m sprint
a
what is test-retest reliability?
administering the same test several times to the same group of athletes
all of the following procedures should be followed when testing an athlete's cardiovascular fitness in the heat EXCEPT a. performing the test in an indoor facility b. using salt tablets to retain water c. scheduling the test in the morning d. drinking fluids during the test
b
which of the following sequences will produce the MOST reliable results? a. 1RM power clean, T-test, 1.5 mile run, 1RM bench press b. t-test 1RM power clean, 1RM bench press, 1.5 mile run c. 1.5 mile run, 1RM bench press, T-test, 1RM power clean d. 1RM bench press, 1RM power clean, T-test, 1.5 mile run
b
when measuring maximal strength of a soccer player, which of the following could potentially adversely affect the test-retest reliability of the results? i. using multiple testers ii. retesting at a different time of day iii. an athlete's inexperience with the tested exercise iv. using an established testing protocol a. i and iii b. ii and iv c. i, ii, and iii d. ii, iii, and iv
c
which kind of validity is the extent to which test scores are associated with those of other accepted tests that measure the same ability?
concurrent validity
what are the three types of criterion-referenced validity?
concurrent, predictive, discriminant
which kind of validity is the ability of a test to represent the underlying construct (the theory developed to organize and explain some aspects of existing knowledge and observations); in other words, the test actually measures what it was designed to measure?
construct validity
which kind of validity is the assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions?
content validity
which kind of validity is this an example of: a test battery for potential soccer players should include, at minimum, tests of sprinting speed, agility, endurance, and kicking power?
content validity
which kind of validity is evidenced by high positive correlation between results of the test being assessed and those of the recognized measure of the construct (the "gold standard")?
convergent validity
which kind of validity is the extent to which test scores are associated with some other measure of the same ability?
criterion-referenced validity
the bench press, vertical jump, and 10m sprint are the MOST valid tests for which of the following American football positions? a. quarterback b. defensive back c. wide receiver d. defensive lineman
d
what is a test administered after the training period to determine the success of the training program in achieving the training objectives?
posttest
which kind of validity is the extent to which the test score corresponds with future behavior or performance?
predictive validity
what is a test administered before the beginning of training to determine the athlete's initial basic ability levels?
pretest
what is a measure of the degree of consistency or repeatability of a test?
reliability
what is a test battery?
series of test items
any difference between the two sets of scores represents measurement error. another statistic that can be calculated is the ________ ________ of ________ (TE), which includes both the equipment error and biological variation of athletes.
typical error of measurement
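one common way to compute the typical error is to take the standard deviation of the trial-to-trial difference scores and divide by the square root of 2. the sketch below uses hypothetical 1RM scores (the data are invented for illustration):

```python
import math
import statistics

# Hypothetical 1RM scores (kg) for the same athletes on two test occasions
# (illustrative data only).
trial1 = [100.0, 120.0, 90.0, 110.0, 105.0]
trial2 = [102.0, 118.0, 93.0, 111.0, 104.0]

# Difference scores between the two administrations.
diffs = [b - a for a, b in zip(trial1, trial2)]

# Typical error (TE): SD of the difference scores divided by sqrt(2).
# It bundles equipment error and the athletes' biological (day-to-day)
# variation into a single statistic.
typical_error = statistics.stdev(diffs) / math.sqrt(2)
print(round(typical_error, 2))  # -> 1.47
```

a smaller typical error means less measurement noise, so real changes in performance are easier to detect between pretest, midtest, and posttest.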
what is this an example of: close correspondence between the readings on a spring scale and the readings on a calibrated balance scale indicates validity of weighing with the spring scale?
validity
what refers to the degree to which a test or test item measures what it is supposed to measure, and is one of the most important characteristics of testing?
validity
test results are useful only if the test actually measures what it is supposed to measure (______) and if the measurement is repeatable (______).
validity, reliability