FHS 420 Quiz 2

Sampling Frame

A list (or quasi list) of elements from which a sample is selected. May include individuals, households, or institutions.

What are mail surveys?

Self-administered questionnaires, often including self-mailing questionnaires, which are more convenient for the respondent and increase return rates. Mail surveys must include an effective cover letter that piques interest and willingness to respond. Recipients who fail to respond contribute to the study's non-response bias: the inability or unwillingness to respond may itself relate to the research question, so even when the smaller pool of respondents seems to represent the larger population, the reasons for non-response can bias the findings in certain ways!

Criterion Related Validity - Known Groups

"Whether an instrument accurately differentiates between groups known to differ with respect to the variable being considered." To test the known-groups validity of a scale designed to measure racial prejudice, you might see whether the scores of social work students differ markedly from the scores of KKK members - obviously they SHOULD be drastically different, and that difference would confirm the scale's validity.

NONprobability sampling

Sampling in which elements are selected WITHOUT random selection, so some members of the population have little or no chance of being included; less likely than probability sampling to yield a representative sample.

TYPES OF CONSTRUCT VALIDITY - Convergent validity

A measure has convergent validity when its results correspond to the results of other methods of measuring the same construct. Example: a test measuring marital satisfaction produces lower scores for clients who have reported low satisfaction to a clinician. You cannot have construct validity at ALL if you do not have convergent validity. Example of GOOD convergent validity: a high correlation between scale scores measuring student interviewing skills and field instructor ratings of interviewing skill.

Stratified Sampling

A method for obtaining a greater degree of representativeness (a modification of the simple random/systematic sampling methods). Elements are chosen randomly from homogeneous subsets (strata) of the population!
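
The logic can be sketched in Python (the population, strata, and sample sizes below are hypothetical, not from the course):

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=0):
    # Group the population into homogeneous subsets (strata),
    # then draw randomly WITHIN each stratum.
    rng = random.Random(seed)
    strata = {}
    for element in population:
        strata.setdefault(strata_key(element), []).append(element)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, per_stratum))
    return sample

# Hypothetical population: (name, year-in-program) pairs.
students = [(f"student{i}", year)
            for year in ("junior", "senior") for i in range(50)]
sample = stratified_sample(students, strata_key=lambda s: s[1], per_stratum=5)
# 5 juniors + 5 seniors, so both strata are guaranteed representation.
```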

Representativeness of sample

A sample will be representative of its population if the sample's aggregate characteristics closely approximate those same aggregate characteristics in the population (if each characteristic of that population is accounted for in the small sample).

Online surveys

A self-administered questionnaire. Quick and inexpensive. Representativeness of respondents may be biased toward those who have access to the internet/computers and the knowledge to use them. Can the poor participate equally? Can the elderly?

Parameter

A summary description of any given variable in a population - mean income of families, age distribution, etc.

What's a PROBE?

A technique for soliciting (in a nondirective and unbiased manner) a more complete answer to a question. Ex: silence, or "In what ways?" ... "Anything else?" ... "Can you elaborate?" Characteristics: neutrality and subtlety. Be a good listener, and remind yourself that this is an interview, NOT just a normal conversation.

The relationship between reliability & validity

Although it is desirable that a measure be reliable, its reliability does not ensure that it is valid. A measurement can be reliable but not valid, valid AND reliable, neither reliable NOR valid, but a measurement cannot be valid WITHOUT reliability. A great example of reliable, but invalid data: A therapist asking abusive parents multiple times about the abuse they cause their children. For reasons related to custody, they might not want to share this information and will consistently provide false info to the therapist. This is a reliable source of INVALID data.

Qualitative interviewing: The informal conversational interview

An unplanned and unanticipated interaction between an interviewer and respondent that occurs naturally during the course of observation. (The most open-ended form of interviewing.) Very flexible, but riskier for interviewer bias because of its unplanned nature. You are able to pursue whatever direction of conversation seems appropriate and interpret answers given to you along the way. Stresses the importance of the interviewer's listening skills and on-the-spot question-forming skills (listening, thinking, and talking at the same time). Uses neutral probes, like silence or encouraging techniques.

TYPES OF RELIABILITY - Test-Retest Reliability

Assessing a measure's stability over time. To test this, give the same test to an individual twice. If the results are similar, you can probably assume that the instrument has acceptable stability. This can be tricky, though, because PEOPLE might change over time. If you are going to test someone at two different times, the circumstances under which they are tested must be identical.

TYPES OF VALIDITY - Criterion Related Validity

Validity based on an external criterion that is believed to also measure the variable being measured by our instrument.

Triangulation

Goal of using triangulation: to judge whether the evidence reported in qualitative studies is unbiased and accurate. The use of more than one imperfect data-collection alternative, each vulnerable to different potential sources of error, to see if they tend to produce the same findings. Might involve seeing whether different interviewers produce the same findings, which reveals how biased or accurate the data are.

Reliability

Has to do with the amount of random error in a measurement. The more RELIABLE a measure is, the less RANDOM ERROR it has. Although reliability is a positive thing, it does not ensure accuracy: it is a function of consistency, but something consistent may be consistently INACCURATE. EXAMPLE: a bathroom scale is a reliable measurement tool but quite possibly inaccurate, because the scale itself may be set off (miscalibrated).

TYPES OF RELIABILITY: Internal consistency reliability

How well scale items, or subsets of scale items, correlate with each other (for example, if someone responds YES to "I enjoy riding bikes" and also NO to "I hate bikes," those responses are internally consistent).

Generalizability

The extent to which a sample's findings can be applied to the larger population - present when the sample matches the population in its ratio of different characteristics.

Qualitative interviewing: The general interview guide approach

Interviews are planned in advance around an interview guide listing the topics to be covered - more structured than informal conversation, with the guide serving as the measurement instrument. The guide still allows for flexibility: the interview remains open-ended and conversational, and wording can be adapted on the spot, BUT each interview must cover the same topics with the same amount of breadth so the interviews are comparable.

What does non-response bias in Surveys mean?

It is a threat to the representativeness of survey findings. Its seriousness depends on the extent to which survey respondents differ from non-respondents in any important ways relevant to the research question. A small group of respondents would ideally still represent the larger population accurately, but the factors behind why others did not respond may really bias the survey. What if all the non-respondents lived in a neighborhood where mail was frequently stolen? Or in households so crowded or cluttered that mail frequently gets lost? Those could be demographic details important to the study.

What are the 5 different survey modes?

Mail surveys, online surveys, interview surveys, telephone surveys, multi-method surveys.

Criterion Related Validity - Concurrent

Measuring if the measurement corresponds to an external criterion that is known concurrently. EXAMPLE: If an instructor tests your interviewing skills before you enter a field placement. The concurrent validity of this exam can be assessed based on whether the scores correspond with the scores students receive in actual practice interviews.

Criterion Related Validity - Predictive

Measuring if the measurement has the ability to predict a criterion that will occur in the future (like future success in college). Another example: an instructor tests student interviewing skills with a formal exam and then assesses how those scores correspond with later field supervisor assessments of student interviewing skills - the criterion could be even more long-term.

Systematic sampling

More efficient than simple random sampling. Every kth element in the total list is chosen systematically for inclusion in the sample. Say the list has 10,000 elements and you want a sample of 1,000; then you would select every 10th element on the list. With *random start* systematic sampling you would start with, say, the 7th person and then count every 10th from there - this guards against bias.
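
The 10,000-element example can be sketched in Python (a minimal illustration, not from the course materials):

```python
import random

def systematic_sample(elements, sample_size, seed=0):
    k = len(elements) // sample_size           # sampling interval (every kth)
    start = random.Random(seed).randrange(k)   # *random start* guards against bias
    return elements[start::k][:sample_size]

population = list(range(10_000))
sample = systematic_sample(population, 1_000)
# Each selected element is exactly 10 positions after the previous one.
```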

Types of Criterion Related Validity Include...

Predictive validity
Concurrent validity
Known-groups validity

Probability Sampling

Random sampling in which everyone in the population has an equal chance of being chosen! Includes: Simple random sampling, systematic sampling, stratified sampling and multistage cluster sampling.

The psychometric properties of a measurement instrument

Reliability & Validity

What makes a good survey question? What should you keep in mind when designing questions for a survey/questionnaire?

SIMPLE - Is it easy to understand?
SPECIFIC - Does the respondent know exactly what you are asking?
INDIVIDUAL - Are you asking a single question, or several in one?
EXHAUSTIVE - Do you give the respondents all possible answer choices for the question?
OPTIONAL - Do you allow respondents the opportunity to pass on questions that may involve sensitive information for them?
NEUTRAL - Do your questions contain any bias?
BALANCED - Do your answer scales have equal numbers of positive and negative responses?

Questionnaire vs. Scale

Scales allow us to represent more complex variables that cannot be measured by one simple item on a questionnaire. EX: marital satisfaction, level of social functioning, etc. Represented by scores to provide greater potential variance. Scales give a more comprehensive, in-depth, personalized view of something that a questionnaire would only cover in a standardized, specific way.

Telephone surveys

Sometimes uses random-digit dialing instead of going through a phone book. Could also include computer-assisted telephone interviewing (CATI). This method is weaker than face-to-face interview surveys because respondents can more easily beg off or end the call.

Sampling interval

Standard distance between elements selected in the sample (example, in the preceding sample it was 10)

TYPES OF RELIABILITY - STABILITY

The ability of a measurement tool (usually a scale) to produce consistent measurements over time.

TYPES OF VALIDITY - Construct validity

The degree to which a measure relates to other variables as expected within a system of theoretical relationships, as reflected by the degree of its convergent and discriminant validity. There is construct validity if the "expectations" about what you are measuring align and make sense. If you are measuring marital dissatisfaction, you go in with certain expectations about marriage (e.g., where there is violence, there will most likely be some level of dissatisfaction). So if a family admits to violence but reports a SATISFYING marriage, the construct validation of your measurement would be questioned. It also means the measure must be appropriate to the study: you wouldn't measure the WEIGHT of something in INCHES.

TYPES OF VALIDITY: Content validity

The degree to which a measurement covers the range of meanings included within the concept. This cannot technically be guaranteed, so it is similar to face validity: the researcher must judge whether EVERY facet of the concept is covered by the measurement.

Sampling Error

The difference between a sample statistic and the true population parameter it estimates. (Ex: if we flipped a coin twice, got heads both times, and concluded that we would get heads 100% of the time, our estimate would be off by 50 percentage points from the true parameter, since we know each flip has a 50% chance either way.)
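
The coin-flip idea is easy to simulate (a sketch; the fair coin's known parameter of 0.5 is the only given):

```python
import random

def estimate_heads_proportion(n_flips, seed=0):
    # Estimate P(heads) from a sample of n_flips of a fair coin.
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

TRUE_PARAMETER = 0.5
error_small = abs(estimate_heads_proportion(2) - TRUE_PARAMETER)        # tiny sample
error_large = abs(estimate_heads_proportion(100_000) - TRUE_PARAMETER)  # large sample
# Sampling error tends to shrink as the sample grows.
```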

Sample Statistic

A summary description of a given variable in a sample, used to estimate the corresponding population parameter. (Compare: a parameter describes the population itself.)

What are the 3 types of qualitative interviewing?

The informal conversational interview
The general interview guide approach
The standardized open-ended interview

TYPES OF VALIDITY: Face validity

The least persuasive level of validity. A subjective assessment made by the researcher or other experts. Only a judgment that the measurement APPEARS to measure what it is supposed to.

Response Rate

The number of people participating in a survey divided by the number of people asked to respond, expressed as a percentage. If 80 people out of 100 respond, the response rate is 80 percent.
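
As a formula (the helper name is made up for illustration):

```python
def response_rate(num_responded, num_asked):
    # Response rate as a percentage of those asked to participate.
    return 100 * num_responded / num_asked

rate = response_rate(80, 100)  # the card's example: 80 percent
```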

Sampling ratio

The proportion of elements in a sample being selected (in above example - 1/10)

TYPES OF RELIABILITY: Parallel Forms Reliability

Similar in purpose to internal consistency reliability, BUT more time-consuming, impractical, and difficult: it requires constructing a second measuring instrument that is thought to be equivalent to the first. Time-consuming, risky, and very rarely used (particularly in social work).

TYPES OF CONSTRUCT VALIDITY - Discriminant validity

This process allows us to check whether our measurement really measures the construct it intends to, and not some other construct that happens to be related (like if our measurement of marital satisfaction actually has more to do with depression). Example of GOOD discriminant validity: For a scale measuring student interviewing skill - low/no correlation between scale scores & field instructor gender/ethnicity.

TYPES OF RELIABILITY - Inter-observer reliability / inter-rater reliability

To assess this, you might train two observers to rate the level of empathy they witness in the same videotapes. If they agree over 80% of the time, you can assume that the amount of random error in the measurement is not excessive.
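
Percent agreement, the simple check described above, can be computed directly (the ratings below are invented for illustration):

```python
def percent_agreement(ratings_a, ratings_b):
    # Share of cases on which two independent observers gave the same rating.
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100 * matches / len(ratings_a)

# Hypothetical empathy ratings (1-5) from two trained observers
# watching the same ten videotaped sessions.
rater1 = [3, 4, 2, 5, 4, 3, 3, 2, 4, 5]
rater2 = [3, 4, 2, 5, 4, 3, 1, 2, 4, 4]
agreement = percent_agreement(rater1, rater2)  # 8 of 10 match -> 80.0
```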

Multistage Cluster Sampling

Used when it is impossible/impractical to create such a huge sample list of the target population. Begins by sampling groups (clusters) of elements in the population and then subsampling individual members of each selected group afterward (like sampling all the churches present in the US and then after obtaining lists of members of each church, subsampling in a systematic way from each list)
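
A two-stage version of the church example can be sketched as follows (all names and sizes are hypothetical):

```python
import random

def multistage_cluster_sample(clusters, n_clusters, n_per_cluster, seed=0):
    # Stage 1: randomly sample whole clusters (e.g. churches).
    # Stage 2: subsample individual members within each selected cluster.
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    sample = []
    for name in chosen:
        sample.extend(rng.sample(clusters[name], n_per_cluster))
    return sample

# Hypothetical membership lists keyed by church name.
churches = {f"church{i}": [f"church{i}-member{j}" for j in range(100)]
            for i in range(20)}
sample = multistage_cluster_sample(churches, n_clusters=5, n_per_cluster=10)
```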

Qualitative interviewing: The standardized open ended interview

Used when it's necessary to have the minimum of interviewer effects/biases. Also used when resource limitations leave insufficient time to pursue less structured strategies in a comprehensive way with large #s of respondents. Also used when researchers are tracking individuals over time, and therefore want to reduce the chance that changes observed over time are being caused by changes in the way interviews are conducted. This type of interview will have standardized "directions" to pursue depending on predicted sets of answers that can be given by the respondents. Almost like those adventure books where you choose your path and see where the story takes you after that.

Simple Random Sampling

Using a table of random numbers to select sampling units (sampling unit = an element or set of elements being considered in sampling) by assigning a single number to each element in the sampling frame list (example: what Atika did in class with our sign-in sheet).
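
In code, a pseudorandom generator can stand in for the table of random numbers (the sign-in sheet here is invented):

```python
import random

def simple_random_sample(sampling_frame, sample_size, seed=0):
    # Assign each element a number 0..N-1, then draw sample_size of those
    # numbers at random (a stand-in for a table of random numbers).
    rng = random.Random(seed)
    numbers = rng.sample(range(len(sampling_frame)), sample_size)
    return [sampling_frame[n] for n in numbers]

sign_in_sheet = [f"student{i}" for i in range(30)]  # hypothetical frame
chosen = simple_random_sample(sign_in_sheet, 5)
```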

Interview surveys

Verbal. Helps decrease "Don't know" or blank answers. Should achieve a completion rate of at least 80-85%. Appearance/demeanor, familiarity with questionnaire, following wording exactly, recording responses exactly, etc. are all important factors. Coordination & control is also crucial.

TYPES OF INTERNAL CONSISTENCY RELIABILITY: The Split Halves Method

Assesses correlations of subscores among different subsets of half of the items. Because this requires administering the measure only once, it is a practical and widely used way of assessing reliability.
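
One way to sketch the split-halves idea is to correlate odd-item subscores with even-item subscores (the response data are invented for illustration):

```python
def pearson_r(xs, ys):
    # Pearson correlation between two lists of subscores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    # Correlate subscores on the odd-numbered items with subscores on the
    # even-numbered items, from ONE administration of the scale.
    half_a = [sum(items[0::2]) for items in item_scores]
    half_b = [sum(items[1::2]) for items in item_scores]
    return pearson_r(half_a, half_b)

# Hypothetical data: 5 respondents x 6 scale items, each scored 1-5.
scores = [
    [5, 4, 5, 5, 4, 5],
    [2, 1, 2, 2, 1, 1],
    [3, 3, 4, 3, 3, 3],
    [4, 5, 4, 4, 5, 4],
    [1, 2, 1, 1, 2, 2],
]
r = split_half_reliability(scores)  # a high r suggests internal consistency
```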

Validity

The extent to which an empirical measure adequately reflects the real meaning of the concept under consideration; the accuracy of a measurement. Are you measuring what's supposed to be measured?

TYPES OF INTERNAL CONSISTENCY RELIABILITY: Coefficient Alpha

The most commonly used method for measuring internal consistency reliability: the average of the correlations between the scores of all possible subsets of half of the items on the scale. .9 and above is considered excellent, .8 and above good, etc.
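
In practice, coefficient alpha is usually computed with the standard variance-based formula rather than by literally averaging every split-half correlation. A sketch (the scores are invented for illustration):

```python
def cronbach_alpha(item_scores):
    # Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / total variance)
    k = len(item_scores[0])  # number of items on the scale
    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 5 respondents x 6 scale items, each scored 1-5.
scores = [
    [5, 4, 5, 5, 4, 5],
    [2, 1, 2, 2, 1, 1],
    [3, 3, 4, 3, 3, 3],
    [4, 5, 4, 4, 5, 4],
    [1, 2, 1, 1, 2, 2],
]
alpha = cronbach_alpha(scores)  # above .9 would be rated "excellent"
```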

