Week 12: Measurement and Data Collection


Concepts of Measurement Theory

Measurement theory includes rules that guide how things are measured:
-Directness of measurement
-Measurement error
-Level of measurement
-Reliability
-Validity

Maintaining Consistency

-Developing a system/pattern for data collection
-Who will do it
-When and where
-In what order will specific measures be collected
-Assuring that all data is being collected
-How will data be collected and stored
-Surveys, data collection forms, web-based direct data entry

Critiquing Interview Methods

-How was consistency in the interview process maintained?
-Look for the author's description of:
-The interview questions
-Pretesting the interview protocol
-Training interviewers
-The interview process
-Probing
-Recording interview data
-Are the interview questions relevant to the research purpose and objectives, questions, or hypotheses? (Hypotheses are not appropriate for qualitative studies.)
-Does the design of the questions tend to bias subjects' responses?
-Does the sequence of questions tend to bias subjects' responses?
-Did more than one person code and interpret the findings? (Interrater reliability)
-What was the process for reaching a consensus?
-Were interview results shared with participants to check for accuracy? (Validity)

Understand the concepts of reliability and validity (psychometric properties).

RELIABILITY: General Definition
-Concerned with how consistently the measurement technique measures the concept of interest
-Concerned with the amount of random error (one-time, unusual, or chance mistakes, such as marking the wrong answer)
**If a measure is not reliable, it cannot be valid.
Three types: 1. Equivalence 2. Stability 3. Internal consistency
Reliability and Validity in Quantitative Research:
-Collectively termed psychometrics or clinimetrics
-Refers to the quality of the instruments used to measure variables
VALIDITY: General Definition
-The extent to which an instrument or measurement approach measures the actual parameter of interest or the true score
*Concerned with the amount of systematic error (example: error from miscalibrated equipment)

Recruiting Subjects

1. Finding participants based on the study eligibility criteria
-Talking to colleagues about the study and asking for referrals
-Having a focused research associate who is involved in patient care and looks out for eligible patients
-Posters, brochures, newspaper ads, the web, etc.
2. Obtaining consent
3. Group assignment (if applicable)

The Brief Pain Inventory

A Questionnaire that Combines General Questions with Several Numeric Rating Scales

Indirect Measures

Abstract concepts such as pain, depression, and coping are measured indirectly, using indicators: questions related to the concept.
Example: indicators of coping may be
-Speed of effective problem-solving
-Optimism
-Self-efficacy

Protecting Integrity

Assuring continued integrity of the intervention and data collection procedures
-Staff training
-Audits to assure treatment and assessments have been administered correctly
-Checking eligibility, randomization procedures, blinding
Assuring data collection accuracy
-Checking data forms with source documentation
-Using techniques to minimize errors

Which of the following can be measured using direct measures?

Concrete factors, such as age, gender, height, and weight

Direct Measures

Concrete things such as oxygen saturation, vital signs (VS), and weight

Apply all you know about measurement and data collection when reviewing and critiquing measurement and data collection approaches described in a research article.

Critiquing the Measurement Process:
1. Is the instrument clearly described?
2. Are the techniques that were used to administer and score the scale provided?
3. Is information about the validity and reliability of the scale from previous studies described?
4. If the instrument was used in a new population, did the author perform pilot studies to re-examine the reliability and validity of the instrument?
5. If the scale was developed for the study, was the instrument development process described?
Critiquing the Data Collection Process:
1. Was the data collection process clearly described?
2. Was data collection conducted in a consistent way?
3. Were research controls maintained?
4. If data collectors were used, were they adequately trained?

Understand concepts related to measurement error.

The difference between the true measure and what is actually measured; research measurement aims to minimize measurement error.
Random error: the difference is without pattern
-Sometimes scores are high, sometimes low
-You don't get the same answer each time
-Example: marking the wrong answer by mistake
Systematic error: the variation in measurement is in the same direction
-Consistent error occurs when the measurement approach does not represent the true score
-A poorly calibrated scale that is always 2 lbs off will not provide a true measurement of weight
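To make the contrast concrete, here is a minimal simulation sketch (not from the course material) using NumPy; the true weight, error sizes, and number of readings are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
true_weight = 150.0  # hypothetical "true score" in lbs

# Random error: chance variation with no pattern -- readings scatter around the true value
random_readings = true_weight + rng.normal(loc=0.0, scale=1.5, size=5)

# Systematic error: a miscalibrated scale that always reads about 2 lbs high
systematic_readings = true_weight + 2.0 + rng.normal(loc=0.0, scale=0.1, size=5)

print(random_readings.round(1))      # scatter above and below 150
print(systematic_readings.round(1))  # consistently near 152
```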

Equivalence Reliability

Equivalence: the consistency of performance (degree of agreement or equivalence) among different raters in assigning scores to the same objects or behaviors in the same measurement situation, using the same tool and/or the same predefined criteria
-Two or more raters, same patient: INTERRATER RELIABILITY
-Two or more forms, same patient: PARALLEL FORMS

Understand the various interviewing approaches (unstructured, structured, focus group).

FOCUS GROUPS:
-Participants are interviewed in groups
-Obtain participants' perceptions of a narrow subject in a group interview session
-Give the group a feeling of "safety in numbers"
-Nonverbal approaches are included
-Discussion helps to provide depth of data
CONSIDERATIONS:
-Groups should have 6-10 members
-Need to select an effective moderator to keep discussion on track
-The setting should be relaxed and comfortable
-An interview guide keeps the interview focused and controls variability
-High-quality tape recordings should be made, with discussion transcribed verbatim
CRITIQUING THE USE OF FOCUS GROUPS:
-Was the group size appropriate?
-Was the group sufficiently homogeneous to speak candidly?
-Were minority positions identified and explored?

Internal Consistency Reliability

Homogeneity
-Are the items related to each other (do they correlate)?
-Are the items measuring the same thing? (e.g., a quality-of-life measurement)
Determined by split-half reliability or Cronbach's alpha coefficient (a computation sketch follows below)
-The lowest acceptable alpha for a well-developed measurement tool is 0.80
-For a newly developed instrument, 0.70 is considered acceptable
**What value of Cronbach's alpha tells you that the items are internally consistent or homogeneous (measuring the same concept)? The closer the value is to 1, the better.
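As a rough sketch of how Cronbach's alpha can be computed from an item-score matrix (the respondent data below are invented, and this is one common formula rather than the only way to estimate internal consistency):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 Likert items (1-5)
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(scores), 2))  # values near 1 (>= 0.80 here) suggest homogeneous items
```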

What things could confound test-retest reliability results?

Test-retest reliability is about the consistency of the test, so the thing being measured by that test has to be stable; if it is not stable, test-retest reliability can't be determined. Example: a researcher tries to test and retest a variable that is not constant.

Which of the following is true of interrater reliability?

It is the relationship between two or more data collectors' ratings of the same event.

Scenario

A kid has a rash, and a nurse uses a scoring sheet to rate the severity of the rash, arriving at a score of 10. The nurse then passes the score on to two different people, who should be able to interpret what it means (bad, terrible). We would hope they come up with the same, or an equivalent, interpretation: INTERRATER RELIABILITY (two different raters). If the raters were the same person rating on two occasions, it would be INTRARATER RELIABILITY.
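One common statistic for interrater reliability with categorical ratings is Cohen's kappa; the card does not name it, so treat this as an illustrative sketch with made-up rash-severity ratings from two hypothetical raters:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters who scored the same subjects."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n           # raw agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical rash-severity categories assigned by two nurses to the same 8 patients
nurse_1 = ["mild", "moderate", "severe", "mild", "moderate", "severe", "mild", "moderate"]
nurse_2 = ["mild", "moderate", "severe", "mild", "mild",     "severe", "mild", "moderate"]
print(round(cohen_kappa(nurse_1, nurse_2), 2))  # closer to 1.0 = stronger interrater reliability
```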

Be able to recognize a Likert scale, semantic differential scale, and a visual analog scale.

LIKERT SCALE:
-Most commonly used to describe opinions or attitudes
-Describes degree of agreement (agree to disagree), evaluation (great to terrible), or frequency (always to never)
*A great example of an ordinal measurement instrument: responses are ranked but still categorical, and the differences between the options are unknown
SEMANTIC DIFFERENTIAL SCALE:
-Consists of questions with two opposing adjectives on a 7-point scale
-Used to measure attitudes or beliefs
VISUAL ANALOG SCALE:
-A 100 mm line; the participant marks a rating point on the line
NUMERIC RATING SCALE:
-Rate something on a scale from 0-10

Understand and be able to recognize the four levels of measurement

NOMINAL (*lowest of the four levels of measurement)
-Categories that are not more or less, but are different from one another in some way
-Mutually exclusive and exhaustive categories
-Named categories (red, yellow, blue)
-Examples: gender, ethnic background, race
ORDINAL
-Order/ranking imposed on categories
-Numbers must preserve order
-You don't know that the spaces between the numbers are equal (considered to have unequal intervals)
-Examples: Army ranks (1 = enlisted personnel, 2 = noncommissioned officer, 3 = company officer, 4 = field officer, 5 = general officer); FACES pain scale
INTERVAL
-Numerical distances between intervals are equal
-No absolute zero point
-Example: temperature in Fahrenheit/Celsius (zero temperature doesn't mean there is an absence of temperature)
RATIO (*highest level of measurement)
-Continuum of values
-Has an absolute zero point
-Examples: weight, age, length, volume
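Purely as a study aid, here is a small sketch (plain Python; the pairings follow general statistics conventions rather than anything specific to this card) mapping each level to example variables and the kinds of summaries that make sense at that level:

```python
# Illustrative mapping of measurement levels to example variables and sensible summaries
levels = {
    "nominal":  (["gender", "ethnic background", "race"], ["counts", "mode"]),
    "ordinal":  (["army rank", "FACES pain scale"], ["counts", "median", "percentiles"]),
    "interval": (["temperature in F or C"], ["mean", "standard deviation"]),
    "ratio":    (["weight", "age", "length", "volume"], ["mean", "standard deviation", "ratios"]),
}

for level, (examples, summaries) in levels.items():
    print(f"{level:>8}: e.g. {examples[0]}; typical summaries: {', '.join(summaries)}")
```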

What level of measurement is this and why? How old are you? 15-24 24-34 34 to 44 44 to 54 55 or older

This is ORDINAL data because the categories are ranked. Problems with the item: the categories are not mutually exclusive (someone aged 24, 34, 44, or 54 falls into two categories) and not exhaustive (there is no option for ages 0-14). It is not nominal because there is some order to the categories. If respondents could fill in their exact age, the data would be ratio, but they are forced to pick a category.

Common Measurement Approaches

-Observation
-Questionnaires
-Scales
-Qualitative approaches

Reliability and Validity in Qualitative Research

One-on-one or group interviewing may be used to collect qualitative data.
Approaches to assessing the reliability and validity of interview data differ from the approaches used in quantitative research.

Response Bias

The order and/or phrasing of questions can bias the answers.
Example regarding the quality of nursing care:
-"The nurse took really good care of you ... right?" makes the patient feel like you want them to say yes.
-"How would you describe the quality of the nursing care that you received?" is neutral.
Another form of response bias is giving the same response to every question; positively phrased questions should be broken up by negatively phrased questions.

Questionnaires

Questionnaire questions go into less depth than interview questions (they are specific questions). Example: a demographic questionnaire.
Advantages:
-Can be administered to large samples
-In person or on the phone
-Self- or researcher-administered
-By mail
Disadvantages of mail:
-Response rates tend to be lower (25-30%), limiting generalizability; a 50% response rate is considered acceptable
-Unanswered questions can be a problem

Measurement Scales

Rating scale:
-Lists an ordered series of categories based on a continuum
-Categories are assigned a numerical value
-Items may be summed for a total score (a composite score)
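A tiny sketch of the summed (composite) score idea; the four item responses are made up:

```python
# Hypothetical responses to a 4-item rating scale (1 = lowest category ... 5 = highest)
responses = [4, 5, 3, 4]

composite_score = sum(responses)                     # summed total score
mean_item_score = composite_score / len(responses)   # per-item average, if preferred
print(composite_score, round(mean_item_score, 2))    # 16 4.0
```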

Understand the basic components involved in data collection and how these approaches influence internal validity.

-Recruiting subjects
-Maintaining consistency
-Maintaining control
-Protecting integrity

Validity Types

Regarding psychosocial variables
Three main types:
1. Content: the extent to which the content of a scale is representative of the conceptual domain it is intended to measure
2. Criterion: a new method of measurement is compared to a gold standard
3. Construct: the extent to which the measure relates to other measures of similar concepts (*e.g., the higher the quality of the apple, the higher its cost)
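As an illustration of criterion validity (comparing a new measure to a gold standard), a brief sketch correlating a hypothetical new pain scale with an established one; the scores and the choice of a simple Pearson correlation are assumptions for the example:

```python
import numpy as np

# Hypothetical scores: a new pain scale vs. an established "gold standard" scale
new_scale     = np.array([2, 5, 7, 3, 8, 6, 4])
gold_standard = np.array([3, 5, 8, 2, 9, 6, 4])

# Criterion validity is often summarized as the correlation with the gold standard
r = np.corrcoef(new_scale, gold_standard)[0, 1]
print(round(r, 2))  # a high positive correlation supports criterion validity
```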

Scenario

The same student takes a test at 1 pm and again at 3 pm: test-retest. Assuming the student hasn't studied much between 1 pm and 3 pm and their knowledge is stable, test-retest is appropriate. If the student studied in between, we couldn't use test-retest, because the knowledge being measured is not stable.

What's wrong with this approach? A researcher wants to assess stability reliability of a pain severity measure. The researcher obtains the first pain measure in women four hours post c-section. The second measure is obtained 24 hours later.

Stability reliability is about the consistency of the test over time, and this test measures pain. We won't get the same score at 4 hours and at 24 hours, because pain changes over that time, so this comparison is not a reflection of stability reliability.

Stability Reliability

Test-Retest
-Consistency of measurements that one instrument or tool elicits from one group of subjects on two separate measurement occasions
-**Concerned with the stability of the test (same test)
Intrarater Reliability
-Consistency with which one rater assigns scores to a set of behaviors on two occasions
-**Concerned with the stability of the rater (same rater)
-Example: if I evaluate a patient's pain, then I evaluate the same patient's pain again at a different point in time, will my two scores be close or similar?
Stability reliability is about the consistency of the test or of the rater over time, in contrast to equivalence reliability, which is about the consistency of the instrument across two different approaches or two different raters.
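A brief sketch of quantifying test-retest (stability) reliability as the correlation between the two measurement occasions; the student scores below are invented:

```python
import numpy as np

# Hypothetical knowledge-test scores for the same 6 students at 1 pm and again at 3 pm
time_1 = np.array([82, 75, 90, 68, 88, 79])
time_2 = np.array([80, 77, 91, 70, 85, 78])

# Pearson correlation between the two occasions as a test-retest coefficient
r = np.corrcoef(time_1, time_2)[0, 1]
print(round(r, 2))  # values near 1.0 suggest the measure is stable over time
```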

Data Collection

The process of acquiring subjects and collecting data for the study
Tasks:
-Selecting subjects
-Collecting data in a consistent way
-Maintaining research controls
-Protecting the integrity (internal validity) of the study
-Solving problems

Be able to recognize observational measurement approaches.

Unstructured observations:
-Record what is seen
-Less objective
Structured observations:
-Category systems
-Checklists
-Rating scales

Maintaining Control

Watching for problems such as:
-Participants are doing things that will influence the results (e.g., taking extra medications)
-A new confounding variable is discovered
-Contamination of the control group (e.g., the study drug becomes available over the counter, or participants read about the new intervention and try it on their own)

Which uses the higher level of measurement, temperature in Fahrenheit degrees or weight in kilograms?

Weight in kilograms: weight is a ratio-level measurement (it has an absolute zero point), whereas Fahrenheit temperature is only interval level.

The aspect of reliability for which interrater reliability is appropriate is:

equivalence

