CH 12: Principles of Test Selection and Administration


Typical Error of Measurement (TE)

-Includes both the equipment error and biological variation of athletes.
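A common way to estimate TE (an assumption added here, not spelled out above) is the standard deviation of athletes' trial-to-trial difference scores divided by the square root of 2. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical vertical-jump scores (cm) from two test sessions for the same athletes.
trial_1 = np.array([52.0, 48.5, 61.0, 55.5, 49.0, 58.0])
trial_2 = np.array([53.5, 47.0, 62.5, 54.0, 50.5, 59.0])

# One common estimate of typical error: SD of the difference scores divided by sqrt(2).
# This folds together equipment error and day-to-day biological variation.
diff = trial_2 - trial_1
typical_error = np.std(diff, ddof=1) / np.sqrt(2)
print(f"Typical error = {typical_error:.2f} cm")
```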

Difference between 2 sets of scores can arise from a number of different factors

-Intrasubject (within subjects) variability
-Lack of interrater (between raters) reliability or agreement
-Intrarater (within raters) variability
-Failure of the test itself to provide consistent results

Test-retest reliability

-Statistical correlation of the scores from two administrations.
-Any difference between the two sets of scores represents measurement error.
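A minimal sketch of that correlation using hypothetical scores and plain numpy (an intraclass correlation is often preferred in practice, but a Pearson r illustrates the idea):

```python
import numpy as np

# Hypothetical 1RM squat scores (kg) from two administrations of the same test.
test = np.array([140, 155, 120, 165, 150, 135, 172, 128])
retest = np.array([142, 153, 118, 168, 149, 137, 170, 130])

# Pearson product-moment correlation between the two administrations;
# values close to 1.0 indicate high test-retest reliability.
r = np.corrcoef(test, retest)[0, 1]
print(f"Test-retest correlation r = {r:.3f}")
```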

Discriminant validity

-The ability of a test to distinguish between two different constructs; evidenced by a low correlation between the results of the test and those of tests of a different construct.
-Best if tests in a battery measure relatively independent ability components (e.g., flexibility, speed, aerobic endurance).
-Good discriminant validity among tests in a battery avoids unnecessary expenditure of time, energy, and resources on tests that correlate very highly with each other.
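To see what "low correlation between tests of different constructs" looks like for a battery, a correlation matrix can be inspected; the scores below are invented purely for illustration:

```python
import numpy as np

# Hypothetical battery scores for eight athletes:
# sit-and-reach (cm), 40 m sprint time (s), and 1.5 mile run time (min).
flexibility = np.array([28, 35, 22, 30, 40, 25, 33, 27])
sprint_time = np.array([5.1, 4.9, 5.4, 5.0, 4.8, 5.3, 4.9, 5.2])
aerobic_time = np.array([10.8, 11.5, 12.1, 10.2, 11.9, 12.4, 10.6, 11.1])

# Pairwise correlations between tests intended to measure different constructs;
# low absolute values support discriminant validity (each test adds unique information).
battery = np.vstack([flexibility, sprint_time, aerobic_time])
print(np.round(np.corrcoef(battery), 2))
```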

Construct Validity

-The ability of a test to represent the underlying construct (the theory developed to organize and explain some aspects of existing knowledge and observations).
-Refers to overall validity, or the extent to which the test actually measures what it was designed to measure.

Face Validity

-The appearance to the athlete and other casual observers that the test measures what it is purported to measure.
-If a test or test item has face validity, the athlete is more likely to respond to it positively.

Content Validity

-The assessment by experts that the testing covers all relevant subtopics or component abilities in appropriate proportions.
-For athletic testing, these include all the component abilities needed for a particular sport or sport position (e.g., jumping ability, sprinting ability, and lower body strength).

Interrater Reliability (Objectivity or Interrater agreement)

-The degree to which different raters agree in their test results over time or on repeated occasions.
-It is a measure of consistency.
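A simplified, hypothetical check of agreement between two raters (formal work would normally use an intraclass correlation coefficient instead of these quick summaries):

```python
import numpy as np

# Hypothetical push-up test counts recorded independently by two raters
# watching the same eight athletes.
rater_a = np.array([32, 28, 41, 25, 37, 30, 44, 29])
rater_b = np.array([31, 28, 40, 26, 37, 29, 44, 30])

# Two rough indicators of interrater agreement: correlation between raters
# and the mean absolute disagreement in reps.
r = np.corrcoef(rater_a, rater_b)[0, 1]
mean_abs_diff = np.mean(np.abs(rater_a - rater_b))
print(f"Between-rater r = {r:.3f}, mean absolute difference = {mean_abs_diff:.1f} reps")
```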

Criterion-referenced validity

-The extent to which test scores are associated with some other measure of the same ability.
-Three types: concurrent, predictive, and discriminant.

Concurrent validity

-The extent to which test scores are associated with those of accepted tests that measure the same ability.
-Often estimated statistically.
-Ex: A Pearson product-moment correlation coefficient based on the scores from a new body fat assessment device and those from dual-energy X-ray absorptiometry (DXA) would provide a measure of the concurrent validity of the new test.
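The example above maps directly to a short calculation; the numbers below are hypothetical and serve only to show the computation:

```python
import numpy as np

# Hypothetical percent body fat measured by a new field device and by DXA
# (the accepted laboratory criterion) on the same ten athletes.
new_device = np.array([12.4, 18.1, 9.8, 15.2, 21.0, 14.3, 11.1, 17.5, 13.0, 19.6])
dxa = np.array([13.0, 17.5, 10.5, 15.8, 20.2, 14.9, 11.8, 18.3, 12.6, 19.0])

# A Pearson product-moment correlation between the two sets of scores
# provides an estimate of the new test's concurrent validity.
r = np.corrcoef(new_device, dxa)[0, 1]
print(f"Concurrent validity (r vs. DXA) = {r:.3f}")
```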

Predictive Validity

-The extent to which the test score corresponds with future behavior or performance.
-Can be measured by comparing a test score with some measure of success in the sport itself.
-Ex: Calculate the statistical correlation between the overall score on a battery of tests used to assess potential for basketball and a measure of actual basketball performance, such as a composite of points scored, rebounds, assists, blocked shots, forced turnovers, and steals.
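The basketball example can be sketched the same way, with a hypothetical preseason battery score and an invented composite performance index:

```python
import numpy as np

# Hypothetical overall scores on a preseason basketball test battery and a
# season-long composite performance index (points, rebounds, assists, etc.
# combined into one standardized score).
battery_score = np.array([74, 82, 65, 90, 78, 70, 85, 60])
performance_index = np.array([0.4, 0.9, -0.3, 1.4, 0.6, 0.1, 1.0, -0.8])

# Correlating the preseason battery with later performance estimates how well
# the battery predicts future success in the sport.
r = np.corrcoef(battery_score, performance_index)[0, 1]
print(f"Predictive validity r = {r:.3f}")
```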

Intrarater Variability

-The lack of consistent scores by a given tester.
-Examples of causes: inadequate training, inattentiveness, lack of concentration, or failure to follow standardized procedures for device calibration, athlete preparation, test administration, or test scoring.

Test Administration

1. Health and Safety Considerations: Strength and conditioning coaches must be aware of testing conditions that can threaten the health of athletes and be observant of signs and symptoms of health problems that warrant exclusion from testing.
2. Selection and Training of Testers: Test administrators should be well trained and should have a thorough understanding of all testing procedures and protocols.
3. Recording Forms: Scoring forms should be developed before the testing session and should have space for all test results and comments.
4. Test Format: A well-organized testing session, in which the athletes are aware of the purpose and procedures of the testing, will enhance the reliability of test measures.
5. Test Batteries and Multiple Testing Trials: When time is limited and the group of athletes is large, duplicate test setups may be employed to make efficient use of testing time.
6. Sequence of Tests: Determine the proper order of tests and the duration of rest periods between tests to ensure test reliability.
7. Preparing Athletes for Testing: The date, time, and purpose of the testing battery should be announced in advance to allow athletes to prepare physically and mentally.

Test Selection

1. Metabolic Energy System Specificity: A valid test must emulate the energy requirements of the sport for which ability is being assessed.
2. Biomechanical Movement Pattern Specificity: The test should reflect the important movements of the sport; sports differ in their physical demands.
3. Experience and Training Status: Consider whether the test is appropriate for a well-trained athlete versus a beginner.
4. Age and Sex: Both affect the validity and reliability of a test.
5. Environmental Factors: Consider the environment when selecting and administering tests of basic athletic ability. High ambient temperature, especially in combination with high humidity, can impair endurance exercise performance, pose health risks, and lower the validity of an aerobic endurance performance test.

Sequence of Tests Order

1. Non-fatiguing tests (e.g., height, weight, flexibility, skinfold and girth measurements, vertical jump)
2. Agility tests (e.g., T-test, pro agility test)
3. Maximum power and strength tests (e.g., 1RM power clean, 1RM squat)
4. Sprint tests (e.g., 40 m sprint with split times at 10 m and 20 m)
5. Local muscular endurance tests (e.g., push-up test)
6. Fatiguing anaerobic capacity tests (e.g., 300-yard [275 m] shuttle)
7. Aerobic capacity tests (e.g., 1.5 mile [2.4 km] run or Yo-Yo intermittent recovery test)
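One way to operationalize this ordering when planning a session is to encode the categories as a ranked list and sort the planned battery against it; the test names and category labels below are illustrative only:

```python
# Ordering categories from the list above, least to most fatiguing.
TEST_ORDER = [
    "non-fatiguing",                 # height, weight, flexibility, skinfolds, vertical jump
    "agility",                       # T-test, pro agility test
    "max power and strength",        # 1RM power clean, 1RM squat
    "sprint",                        # 40 m sprint
    "local muscular endurance",      # push-up test
    "fatiguing anaerobic capacity",  # 300-yard shuttle
    "aerobic capacity",              # 1.5 mile run, Yo-Yo IR test
]

# Hypothetical battery tagged by category; sorting by the recommended order
# yields a session plan that limits fatigue carry-over between tests.
battery = [
    ("1.5 mile run", "aerobic capacity"),
    ("vertical jump", "non-fatiguing"),
    ("T-test", "agility"),
    ("1RM squat", "max power and strength"),
]
session_plan = sorted(battery, key=lambda item: TEST_ORDER.index(item[1]))
print(session_plan)
```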

Test

A procedure for assessing ability in a particular endeavor.

Pre-test

A test administered before the beginning of training to determine the athlete's basic ability levels. A pre-test also allows the coach to design the training program in keeping with the athlete's initial training level and the overall program's objectives.

Mid-test

A test administered one or more times during the training period to assess progress and modify the program as needed to maximize benefit.

Field Test

A test used to assess ability that is performed away from the laboratory and does not require extensive training or expensive equipment.

Convergent Validity

-Evidenced by a high positive correlation between the results of the test being assessed and those of the recognized measure of the construct (the "gold standard").
-The type of concurrent validity that field tests used by strength and conditioning professionals should exhibit.
-A test may be preferable to the gold standard if it exhibits convergent validity with the standard but is less demanding in terms of time, equipment, expense, or expertise.

Intrasubject Variability

Lack of consistent performance by the person being tested.

Reliability

Measure of the degree of consistency or repeatability of a test.

Formative Evaluation

Periodic reevaluation based on mid-tests administered during the training, usually at regular intervals. It enables monitoring of the athlete's progress and adjustment of the training program according to the athlete's needs. It also allows evaluation of different training methods and collection of normative data.

Post-test

Test administered after the training period to determine success of the training program in achieving the training objectives.

When administering a test battery

Tests should be separated by at least 5 minutes to prevent the effects of fatigue from confounding test results.

Validity

The degree to which a test or test item measures what it is supposed to measure.

Evaluation

The process of analyzing test results for the purpose of making decisions.

Measurement

The process of collecting test data.

Quiz Question 1: A college basketball coach would like to know which one of her players has the most muscular power. Which of the following is the MOST valid test for measuring muscular power?
a. vertical jump
b. 1RM bench press
c. 5RM squat
d. 100 m (109 yd) sprint

a

Quiz Question 3: All of the following procedures should be followed when testing an athlete's cardiovascular fitness in the heat EXCEPT
a. performing the test in an indoor facility
b. using salt tablets to retain water
c. scheduling the test in the morning
d. drinking fluids during the test

b

Quiz Question 5: Which of the following test sequences will produce the MOST reliable results?
a. 1RM power clean, T-test, 1.5 mile (2.4 km) run, 1RM bench press
b. T-test, 1RM power clean, 1RM bench press, 1.5 mile (2.4 km) run
c. 1.5 mile (2.4 km) run, 1RM bench press, T-test, 1RM power clean
d. 1RM bench press, 1RM power clean, T-test, 1.5 mile (2.4 km) run

b

Quiz Question 2: When measuring the maximal strength of a soccer player, which of the following could potentially adversely affect the test-retest reliability of the results?
I. using multiple testers
II. retesting at a different time of day
III. an athlete's inexperience with the tested exercise
IV. using an established testing protocol
a. I and III only
b. II and IV only
c. I, II, and III only
d. II, III, and IV only

c

Quiz Question 4: The bench press, vertical jump, and 10 m (11 yd) sprint are the MOST valid tests for which of the following American football positions?
a. quarterback
b. defensive back
c. wide receiver
d. defensive lineman

d

