Testing exam 2


What is the difference between convergent and discriminant validity?

1. Convergent: consists of providing evidence that two tests believed to measure closely related skills or types of knowledge correlate strongly; the two tests end up ranking students similarly.
2. Discriminant: consists of providing evidence that two tests that do not measure closely related skills or types of knowledge do not correlate strongly (ex: dissimilar ranking of students).

Describe content validity and give one example when it is used.

Refers to the actual content within a test. A test that is valid in content should adequately examine all aspects that define the objective. Content validity is a qualitative measurement; it requires recognized subject matter experts to evaluate whether test items assess the defined content. An example is a standardized biology assessment, where experts check that the items cover the content the exam claims to test.

What is the definition of reliability?

Refers to the consistency of a measure. A reliable assessment tool produces stable and consistent results; a test is considered reliable if we get the same result repeatedly.

Describe criterion validity

Refers to the correlation between a test and a criterion that is already accepted as a valid measure of the goal or question. If a test is highly correlated with another valid criterion, it is more likely that the test is also valid.

What is the definition of validity?

Refers to whether or not a test really measures what it claims to measure. It pertains to the connection between the purpose of the research and which data the researcher chooses to quantify that purpose.

What are two types of criterion validity and describe each one and give one example for each.

1. Predictive validity: the test accurately predicts what it is supposed to predict. Example: the SAT has predictive validity for performance in college.
2. Concurrent validity: the predictor and criterion data are collected at the same time. Example: nursing students take two final exams (one practical, one on paper) to assess their knowledge. ***CONCURRENT = "at the same time"

List four methods to measure reliability.

1. Internal: the extent to which a measure is consistent within itself
2. External: the extent to which a measure varies from one use to another
3. Split-half method: measures the extent to which all parts of the test contribute equally to what is being measured
4. Test-retest: measures the stability of a test over time

Describe construct validity.

Constructs are not clearly defined, nor do they have an established criterion against which validity can be measured. Construct validity evidence is therefore assembled through a series of activities that show a relationship between the test and other measurements.

What is one way to increase reliability when it is low?

Increase the number of test items; adding items that measure the same quality raises reliability (the Spearman-Brown prophecy formula quantifies this).
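
The relationship between test length and reliability can be sketched with the Spearman-Brown prophecy formula; the function name and the example numbers below are illustrative, not from the study set.

```python
# Spearman-Brown prophecy formula: predicted reliability when a test is
# lengthened by a factor n (n = new length / old length), assuming the
# added items measure the same quality as the originals.
def spearman_brown(reliability: float, n: float) -> float:
    return (n * reliability) / (1 + (n - 1) * reliability)

# Doubling a test whose reliability is .60 raises it to .75:
print(round(spearman_brown(0.60, 2), 2))  # 0.75
```

Note that the gain shrinks as reliability approaches 1, so lengthening an already-reliable test buys little.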

Describe the inter-rater method of assessing reliability, and what observers can do to improve reliability with this method.

Inter-rater reliability is assessed by having two or more independent judges score the same test; researchers observe specified behaviors (crying, yelling, etc.) during the same period and then compare their data. If the data are similar, the measure is reliable. Inter-rater reliability is especially useful when judgments are subjective. **Observers can improve reliability by: 1. Training observers in the observation techniques being used and making sure everyone agrees on them, 2. Ensuring behavior categories have been operationalized (objectively defined)
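
A minimal way to quantify how similar two observers' data are is percent agreement; the codings below are hypothetical. (Chance-corrected statistics such as Cohen's kappa are the more rigorous choice, but percent agreement shows the idea.)

```python
# Percent agreement between two observers who coded the same sequence of
# behaviors independently (hypothetical data).
def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

a = ["cry", "yell", "cry", "calm", "yell"]
b = ["cry", "yell", "calm", "calm", "yell"]
print(percent_agreement(a, b))  # 0.8 — the raters agreed on 4 of 5 observations
```

Training observers and operationalizing the behavior categories, as the answer above notes, is exactly what pushes this agreement figure up.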

How does discriminability analysis describe reliability?

It determines which questions are pulling reliability down.

Make sure you understand whether a reliable test must be valid, and vice versa.

Reliability refers to the degree to which a scale produces consistent results when repeated measurements are made (precision). Validity is a measure of accuracy. A valid instrument is always reliable, but a reliable instrument may not be valid.

Describe the parallel forms method of assessing reliability.

Accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.

Describe face validity

Also called logical validity; a superficial/subjective assessment of whether a study or test measures what it is supposed to measure. It asks whether the test "looks valid."

Describe the split-half method of assessing reliability.

Measures the extent to which all parts of the test contribute equally to what is being measured. This is done by comparing the results of one half of a test with the results from the other half. If the two halves of the test provide similar results, this suggests that the test has internal reliability.

Describe the test re-test method and then tell why the timing of the test is important.

Measures the stability of a test over time. A typical assessment involves giving participants the same test on two separate occasions, close together in time. If the same or similar results are obtained, external reliability is established. Timing matters: if the interval is too short, participants may simply remember their earlier answers; if it is too long, the trait being measured may genuinely change. A test-retest correlation of +.70 or greater is considered to indicate good reliability.

