Research Test #2
(type of non-probability sampling) what is purposive sampling?
aka judgmental sampling; the researcher targets a specific population with unique characteristics (LIKE OUR RESEARCH PROJECT!!! or home-schooled children)
(type of non-probability sampling) what is quota sampling?
allows comparison of different groups within a population of interest
(type of probability sampling) what is cluster random sampling
researchers divide the population into separate clusters, randomly pick SOME of the clusters, and then randomly select participants from those chosen clusters
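For intuition, here's a minimal Python sketch of that two-step process; the classrooms and student names are made-up example data, not from the course:

```python
import random

# Hypothetical clusters: 10 classrooms, each with 30 made-up student names
clusters = {f"classroom_{c}": [f"student_{c}_{i}" for i in range(30)] for c in range(10)}

# Step 1: randomly select SOME of the clusters
chosen_clusters = random.sample(list(clusters), k=3)

# Step 2: randomly select participants from each chosen cluster
sample = [student
          for c in chosen_clusters
          for student in random.sample(clusters[c], k=5)]

print(chosen_clusters)   # e.g. ['classroom_4', 'classroom_0', 'classroom_7']
print(len(sample))       # 15 participants total
```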
what is random error
an inconsistent error that has no pattern; fluctuations in measurement that can fall above or below the actual result
what is a cluster
a group within the population from which participants will be randomly selected; each cluster should be very similar to the population
secondary data vs. primary data
primary data: data collected by a researcher for a specific purpose; secondary data: data collected by other researchers
sampling
procedures used to select a subset of the population
pros & cons of non-probability sampling
pros: cost-effective & easy; cons: cannot be generalized to the population
pros and cons of probability sampling
pros: good for larger studies; easier to conduct if you have already completed a study in the past. cons: costly; questions need to be well thought out bc you can't go back to participants
(type of probability sampling) what is systematic random sampling
selecting every nth entry on the sampling frame (like selecting every 9th person who walks through the door)
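A minimal Python sketch of taking every nth entry from a frame; the 200 made-up names and the interval of 9 are just assumptions for illustration:

```python
import random

# Hypothetical sampling frame: 200 made-up names
sampling_frame = [f"person_{i}" for i in range(200)]

n = 9                                # take every 9th person
start = random.randrange(n)          # random starting point within the first interval
sample = sampling_frame[start::n]    # every nth entry from that start
print(sample[:5])
```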
what are the types of validity (6)
1. construct validity 2. face validity 3. content validity 4. criterion validity 5. predictive validity 6. concurrent validity
what are threats to external validity
1. Hawthorne effect (participants change behavior because they know they are being watched) 2. explicit description of the experimental treatment 3. multiple treatment interference 4. novelty and disruption effects
pros of secondary data
1. availability of information (allows use of data that would otherwise be impossible to collect, like govt. statistics on suicide) 2. opportunity for replication 3. protection of participants 4. time/cost effective 5. large data sets (allows the researcher to generalize to the population)
what should well-developed instruments have?
1. complete list of questions or procedures 2. instructions for how to administer 3. how to calculate scores 4. how to interpret scores
how do you minimize measurement error (3 ways)
1. conduct a pilot study 2. check for data entry errors 3. add questions to your measurement tools that measure similar constructs in different ways
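For item 2, one simple way to check for data entry errors is to flag values outside the valid range; this is a hypothetical Python sketch with made-up satisfaction scores on a 1-5 scale:

```python
import pandas as pd

# Hypothetical responses on a 1-5 satisfaction scale; 55 is probably a typo for 5
df = pd.DataFrame({"satisfaction": [4, 5, 55, 3, 2]})

# Flag any entries that fall outside the valid 1-5 range
out_of_range = df[(df["satisfaction"] < 1) | (df["satisfaction"] > 5)]
print(out_of_range)
```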
4 types of non-probability sampling
1. convenience sampling 2. snowball sampling 3. purposive sampling 4. quota sampling
what are 4 major sources of secondary data
1. government statistics 2. research university data 3. institutional data 4. online sources
(type of non-probability sampling) what are the two types of purposive sampling?
1. homogeneous: participants chosen bc of a common characteristic (our PROJECT) 2. deviant cases: sampling focuses on very unusual cases (like high school dropouts)
what are the two types of systematic error
1. instrumental error 2. environmental error
3 types of reliability
1. inter-rater reliability 2. test-retest reliability 3. internal-consistency reliability
what are threats to internal validity
1. maturation effect (participants change over time) 2. history effect (an outside event happens during the study) 3. testing effect (participants become test-wise) 4. instrumentation effects 5. differential selection 6. attrition/mortality
what are the levels of data
1. nominal 2. ordinal 3. interval 4. ratio
variable types
1. nominal 2. ordinal (nominal & ordinal are categorical) 3. numerical (interval or ratio) (interval and ratio are continuous or scaled)
3 methods of collecting data
1. personally collecting info 2. computer-assisted telephone interview 3. virtual collection of data (online surveys)
two ways that replication can happen
1. primary data collection with different populations in a different location 2. secondary data where new info is added that may be missing from original study
(type of non-probability sampling) what are the two types of quota sampling?
1. proportional quota sampling: is representative, in the same proportion of the population of interest 2. non-proportional quota sampling: uses a different quota from the one found in the population of interest
(type of probability sampling) what are the two types of stratified random sampling
1. proportionate stratified sampling: each stratum of interest is sampled in the same proportion found in the population 2. disproportionate stratified sampling: strata of interest are not sampled in proportion to the population
two types of measurement error
1. random error 2. systematic error
what are the 5 types of probability sampling?
1. simple random sampling 2. stratified random sampling 3. cluster random sampling 4. systematic random sampling 5. multistage sampling
purpose of entering / organizing quantitative data (like we'll do UGH)
1. to categorize results 2. to remove / fix any info that would affect our study
ethical considerations necessary for research (3)
1. treat participants with RESPECtttt 2. notify them about the potential risks 3. get CONSENT
cons of secondary data
1. uncertainty of constructs (the original study's focus may be different from yours) 2. uncertainty of measurement error 3. passage of time (data may be outdated; this can be an advantage or disadvantage if you're interested in studying how a concept has changed over time)
what is instrumentation
ALL of the instruments used in the study
what is systematic error reduced by
a good study design
(type of probability sampling) what is multistage random sampling
combination of two or more types of sampling together that best fit a study
non-probability sampling
a sampling frame is often not available and participants are not selected randomly
probability sampling
a sampling frame is used and participants are selected randomly
sample
a subset of the population
what is predictive validity
the ability of a measurement tool to predict scores on a related outcome or criterion measured in the future
what is internal-consistency reliability
assessed by measuring the same variable with more than one question and checking that the answers are consistent
reliability
consistency; yields the same results every time, but does not always mean accuracy
what is the difference between a cover letter and an informed consent form
a cover letter is used for anonymous studies; an informed consent form is used when the study is not anonymous and participants must sign it
what is criterion validity
how well a new measurement tool for a construct compares against an existing, established measurement tool (the criterion)
what is test-retest reliability
data collected at multiple points in time with the same participants. consistency across administration = high reliability
directional vs. nondirectional statements
directional: manipulation of the IV causes a change in the DV (is binge drinking associated with poorer GPAs among college students?); non-directional: asks whether two variables are related; no IV or DV, just two variables (is attitude towards academic success related to frequency of binge drinking behavior among college students?)
(type of probability sampling) what is stratified random sampling
dividing the population into strata (smaller groups) based on shared characteristics & then randomly selecting participants from EVERY stratum
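A minimal pandas sketch, assuming a made-up roster with class year as the stratum; it randomly selects one student from EVERY stratum:

```python
import pandas as pd

# Hypothetical roster; class_year is the shared characteristic defining each stratum
roster = pd.DataFrame({
    "student_id": range(1, 9),
    "class_year": ["freshman", "freshman", "sophomore", "sophomore",
                   "junior", "junior", "senior", "senior"],
})

# Randomly select participants from EVERY stratum (here, 1 per class year)
sample = roster.groupby("class_year", group_keys=False).sample(n=1, random_state=42)
print(sample)
```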
what is quasi research
experimental design used when we can NOT randomly assign participants to control and experimental groups
what is content validity (subcategory of construct validity)
the extent to which a measurement tool captures all aspects of the construct being tested (example: a cumulative final)
what is construct validity
how well a tool truly measures the construct in question; some constructs are easy to measure and some are not --> e.g., measuring a person's level of happiness would be complicated
validity
whether the test accurately measures what it is supposed to measure
what is systematic error
indicates a measure is not accurately measuring a concept, and it can relate to the measurement you use, the way you collect data, or other environmental factors
what is a scoring protocol
instructions for how to use an instrument, and (sometimes) what a score means
what is internal and external validity?
internal validity --> finding the "true" causes of the outcome; requires accurate, valid, reliable instruments (was the research done right?). external validity --> the ability to generalize the study to other situations; strong external validity requires a probability (random) sample of subjects drawn using "chance methods" from a clearly identified population
what is/ does an instrument do?
it measures a variable (like satisfaction with life scale)
sampling frame
list of the entire population under study (old days: phone books, nowadays: enrollment records)
coding in quantitative data entry
nominal or ordinal data; coding represents categories i.e. freshman = 1, sophomore = 2, etc.
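A tiny Python sketch of that coding step; the codebook and responses are made-up examples:

```python
# Hypothetical codebook: categories mapped to numeric codes
codebook = {"freshman": 1, "sophomore": 2, "junior": 3, "senior": 4}

responses = ["sophomore", "freshman", "senior", "freshman"]
coded = [codebook[r] for r in responses]
print(coded)  # [2, 1, 4, 1]
```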
(type of non-probability sampling) what is snowball sampling
participants are selected by word-of-mouth; not always representative of the population bc participants may know each other / share certain characteristics, BUT useful for gathering info from hard-to-reach populations (like drug traffickers)
in data set, each row = & each column =
each row = a person; each column = a question
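A minimal pandas sketch of that layout, using made-up participants and questions:

```python
import pandas as pd

# Hypothetical data set: each row is one person, each column is one question
data = pd.DataFrame(
    {"q1_age": [19, 21, 20], "q2_gpa": [3.4, 2.9, 3.8]},
    index=["participant_1", "participant_2", "participant_3"],
)
print(data)
```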
main distinction between probability and non-probability sampling
random selection of participants
(type of probability sampling) what is simple random sampling?
relies on complete randomization, conducted using random number generator or other simple techniques when we know the population of interest; everyone has an equal chance so it's representative of the population
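A minimal Python sketch, assuming a made-up sampling frame of 500 student IDs and a sample size of 50:

```python
import random

# Hypothetical sampling frame: the full population of 500 student IDs
sampling_frame = list(range(1, 501))

# Simple random sampling: every ID has an equal chance of being chosen
sample = random.sample(sampling_frame, k=50)
print(sample[:10])
```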
what is random error reduced by
replication tests
the goal of sampling is to make sure that your _________ is a good representation of your ____________
sample, population
population
the entire group of people that the study is focused on
when is non-probability sampling used
used for harder to reach populations or where a sampling frame does not exist
secondary data ethical considerations
though the researcher does NOT actually interact with participants, it is still important to protect participants' identities
true or false: reliability and validity are independent of each other
true; may be valid but not reliable, or reliable but not valid
difference between un-tested and tested questionnaires
un-tested are non-standardized survey questions, tested are standardized survey questions
what is a method of data collection
the vehicle by which data is collected (like an online questionnaire or interview)
why does measurement error happen
we might not ask questions or collect data in the right way
what is inter-rater reliability
when multiple researchers collect data and consistency between them is measured; consistency across observers = high reliability
what is concurrent validity
when participants answer similarly on two different tests, a traditional one and the new one you are creating and testing
(type of non-probability sampling) what is convenience sampling
when researcher selects anyone who is readily available to participate; can be cost and time effective
what is face validity (subcategory of construct validity)
when researchers judge, just by looking at it, that a tool is an accurate measure of the construct or question
what is data scraping (creepy)
when researchers take large amounts of info from websites and such and put it into a spreadsheet
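A hypothetical sketch of the idea using pandas; the URL is a placeholder (any page containing an HTML table would work, and pandas needs a parser like lxml installed):

```python
import pandas as pd

# Pull all HTML <table> elements from a page into DataFrames (URL is a placeholder)
tables = pd.read_html("https://example.com/some-statistics-page")

# Save the first table to a spreadsheet-style CSV file
tables[0].to_csv("scraped_data.csv", index=False)
```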
measurement error
when we do not collect data that is representative of what we are actually measuring
what is experimental research
where a group is exposed to a variable of interest and assessed to see if changes seen in the experimental group are a result of that variable