Chapter 4 Methodological Issues
interview schedules
a list of questions to be asked orally of a participant
performance checklist
a means to record if a person is engaging in behaviors typically associated with performing a particular task
Interrater reliability
a measure of consistency between the observations made by two or more raters or judges.
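A minimal Python sketch of one simple index of interrater reliability, percent agreement, using hypothetical ratings from two observers:

    # Percent agreement between two raters who coded the same five observations
    rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task"]  # hypothetical codes
    rater_b = ["on-task", "on-task", "on-task", "on-task", "off-task"]

    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    percent_agreement = agreements / len(rater_a)
    print(f"Percent agreement: {percent_agreement:.2f}")  # 0.80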
experimenter effects
bias that occurs when a researcher unintentionally influences participant behavior
sample
denoted by lowercase n; the smaller group that is selected from the population and used in the research to represent the larger population.
undisguised observation
observation in which the researcher makes no attempt to disguise his or her presence; examples include the field studies of Jane Goodall and Dian Fossey
Qualitative research
research more concerned with identifying meaningful experiences than with sample size or measuring quantities
convenience sampling
the most common sampling method; the counselor selects an easily accessible group that most likely does not fully represent the population of interest.
observation forms
contain specific behaviors for the researcher to observe and evaluate and provide a place to document the frequency of those observations
normal curve
the bell-shaped distribution; the distribution of a large collection of sample means will always approximate a normal curve
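A minimal Python sketch of that idea (the central limit theorem): even when the population itself is not normal, the means of many samples pile up in a roughly bell-shaped curve around the population mean. The population values and sample sizes here are arbitrary choices for illustration.

    import random
    import statistics

    random.seed(1)
    # A clearly non-normal "population": uniform whole numbers from 1 to 100
    population = [random.randint(1, 100) for _ in range(10_000)]

    # Draw many samples of n = 30 and record each sample mean (x-bar)
    sample_means = [statistics.mean(random.sample(population, 30)) for _ in range(2_000)]

    # The sample means cluster tightly and symmetrically around the population mean
    print("population mean:", round(statistics.mean(population), 2))
    print("mean of the sample means:", round(statistics.mean(sample_means), 2))
    print("spread of the sample means:", round(statistics.stdev(sample_means), 2))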
x̄ (x-bar)
the symbol for the sample mean
observed score
true score plus measurement error; whenever we measure something, this is what we obtain
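In classical test theory this definition is usually written as a simple equation, with X the observed score, T the true score, and E the measurement error:

    X = T + E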
rating scales
used to provide a score in relation to how a person behaves; a rating implies a judgment rather than just an acknowledgment that the behavior occurred
action checklist
used to record whether specific behaviors were present or absent during the observation time period.
Homogeneous sampling
selecting a sample where each subject shares some important characteristic
correlation coefficient
symbolized by lowercase r; ranges from -1 to +1 and indicates the strength and direction of the relationship between two variables
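As a worked formula, the most common version (the Pearson product-moment correlation) can be written as follows, where \bar{x} and \bar{y} are the sample means of the two variables:

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}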
population
denoted by uppercase N; all of the people in a particular group
Independent Variable
What does the dependent variable depend on?
Relational, descriptive, causal
What are the 3 major types of research questions?
Independent Variable (IV), Dependent Variable (DV), and Confounding Variable (also known as an extraneous or intervening variable)
What are the 3 types of variables?
Quantitatively, qualitatively, and by presence/absence
What are the 3 common ways to manipulate or change variables?
Independent Variable (IV)
a construct that is manipulated or controlled in some way by the counselor. An example would be the amount of medication a group receives. This is the variable that is changed to see what happens to the dependent variable.
purposeful sampling
a counselor selects a sample from a population based on who will be most informative about a topic of interest. Participants are selected because they represent needed characteristics
cluster sampling
a sampling process in which groups, rather than individuals, are randomly selected
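A minimal Python sketch of the idea, using hypothetical classroom "clusters": whole groups are drawn at random, and every member of a selected group enters the sample.

    import random

    random.seed(7)
    # Hypothetical clusters: classrooms, each a list of student IDs
    classrooms = {
        "Room A": ["A1", "A2", "A3"],
        "Room B": ["B1", "B2", "B3", "B4"],
        "Room C": ["C1", "C2"],
        "Room D": ["D1", "D2", "D3"],
    }

    # Randomly select two whole classrooms, not individual students
    chosen_rooms = random.sample(list(classrooms), 2)
    sample = [student for room in chosen_rooms for student in classrooms[room]]
    print(chosen_rooms, sample)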
flowcharts
a sheet for recording frequency counts of the behavior as well as the intended direction of the behavior
Research Question
a statement that identifies what a research study hopes to examine.
naturalistic observation
also known as field observation; based on the presumption that people or animals will display more realistic, natural behaviors in their natural habitat.
Categorical Variables
also known as qualitative variables; they are manipulated by changing the quality or category.
Causal Research Questions
attempt to determine the cause-and-effect relationship among variables. An example would be: does studying lead to a higher grade point average?
sample mean
the average of the sample, represented by x̄ (x-bar)
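As a formula, with n the sample size and x_i the individual scores (the population mean, μ, is computed the same way over all N members of the population):

    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i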
quantitative research/sampling
concerned more with sample size; sampling here can be classified as probability or nonprobability sampling
Descriptive Research Questions
examine and describe what already exists. An example would be: how many children are in the Impact Plus program?
Relational Research Questions
examine the relationship between variables (predictive/correlational). An example would be: what is the relationship between gender and parenting styles?
sample bias
occurs when the sample does not represent the population; it gets into the study, confuses the results, and is considered bad
disguised observation
here the participants are unaware that the researcher is observing their behavior.
random assignment
individuals are randomly assigned to different groups or treatments
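A minimal Python sketch of the idea, using hypothetical participant names; after shuffling, half the participants land in a treatment group and half in a control group (the group labels are just for illustration).

    import random

    random.seed(42)
    participants = ["Ana", "Ben", "Cai", "Dee", "Eli", "Fay"]  # hypothetical names

    # Shuffle the list, then split it into two equal groups
    random.shuffle(participants)
    half = len(participants) // 2
    treatment_group = participants[:half]
    control_group = participants[half:]
    print("treatment:", treatment_group)
    print("control:", control_group)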
Variable
is any trait, attribute, or characteristic that varies. Examples include age, weight, net worth, and IQ. Variables are contrasted with constants, which do not change over time. Examples include color, native language, birthplace, anxiety, self-esteem, program type, and gender.
Alternate forms reliability
means that different but equivalent tests are given on different occasions to see how individuals score.
random selection
means that individuals are selected at random to represent a population
Operational Definition
must include how the researcher is going to identify and measure the variables
experimental and causal-comparative studies
need a minimum of 30 individuals per group
correlation studies
need a sample of at least 50
descriptive studies
need a sample with a minimum number of 100
expectancy effect (reactivity)
occurs when the researcher pays more attention to behaviors that they expect, or that support their hypothesis.
Parsimony
only using the number of variables you need
μ (mu)
the symbol for the population mean
anecdotal records
records containing specific, factual accounts, usually in paragraph form, of observations deemed important by the researcher; they are based on our own experiences or on information from others
static checklist
refers to a means of collecting data on characteristics that will not change while the observations are being made
John Henry Effect
refers to participants in the control group who try to outperform the subjects in the experimental group and, as a result, bias the results
ecological validity
refers to research that is conducted in situations that are similar to the everyday life experiences of the participants.
reliability
refers to the consistency or stability of the measuring instrument; in a word, consistency
construct validity
refers to the degree to which an instrument accurately measures the theoretical construct or trait that it is supposed to measure
Face Validity
refers to the extent to which a measuring instrument appears valid on its surface. It looks like it is capturing what it is supposed to measure.
content validity
refers to the extent to which a measuring instrument covers a representative sample of the domain of behavior to be measured. The instrument has been reviewed and informed by others; experts weigh in on it. It is a step up from face validity.
Hawthorne effect
refers to the generally accepted notion that participants are motivated to perform better when they know they are being studied for research (named for studies at the Hawthorne electric plant)
Halo Effect
refers to the tendency to allow one trait of an individual that is usually irrelevant to the purposes of the research to influence how we view other traits that are relevant to the research.
Determining the Methodology
refers to variable selection, population sampling, instrumentation, and bias reduction.
Validity
refers to whether a measure is truthful or genuine; in a word, accuracy
stratified random sampling
the population is divided into subgroups (strata) and members are randomly selected from each subgroup, so the sample looks like the population on important characteristics
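A minimal Python sketch of proportional stratified sampling, using hypothetical strata (class year); the same fraction is drawn at random from each stratum so the sample mirrors the population's proportions.

    import random

    random.seed(3)
    # Hypothetical strata: population members grouped by class year
    strata = {
        "freshman": [f"F{i}" for i in range(60)],
        "sophomore": [f"S{i}" for i in range(40)],
        "senior": [f"R{i}" for i in range(20)],
    }

    sampling_fraction = 0.10  # take 10% of each stratum
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * sampling_fraction))
        sample.extend(random.sample(members, k))
    print(len(sample))  # 6 + 4 + 2 = 12, in the same proportions as the population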
split-half reliability
splitting the items on the test into two equivalent halves and correlating the scores on one half of the items with the scores on the other half.
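A minimal Python sketch, using hypothetical item scores for five test takers: odd-numbered items form one half, even-numbered items the other, and the two half-scores are correlated (statistics.correlation requires Python 3.10+). The half-test correlation is then stepped up with the Spearman-Brown formula to estimate full-test reliability.

    from statistics import correlation

    # Hypothetical right/wrong item scores (rows = test takers, columns = items 1-8)
    responses = [
        [1, 0, 1, 1, 0, 1, 1, 1],
        [0, 0, 1, 0, 0, 1, 0, 1],
        [1, 1, 1, 1, 1, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 0, 0],
        [1, 1, 1, 0, 1, 1, 1, 1],
    ]

    # Score each person on the odd items and on the even items
    odd_half = [sum(row[0::2]) for row in responses]
    even_half = [sum(row[1::2]) for row in responses]

    r_halves = correlation(odd_half, even_half)
    # Spearman-Brown correction: estimated reliability of the full-length test
    split_half_reliability = (2 * r_halves) / (1 + r_halves)
    print(round(r_halves, 2), round(split_half_reliability, 2))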
theoretical population
target population
Methodology
the method you use to get the answer to the question. An example would be putting your hand in the shower to check whether the water temperature is right.
Dependent Variable (DV)
the outcome variable that is influenced by the independent variable. An example would be level of depression or anxiety.
Confounding Variable
a variable that can have an effect on the dependent variable and is not controlled by the researcher; it confuses the results of the study.
Numerical Variables
variables that can be manipulated quantitatively, meaning you can increase or decrease them.
time and motion logs
very detailed observations of a group or person that occur over a specified period of time in an effort to understand the underlying reasons for behavior.
systematic sampling
when a researcher takes every 4th or 10th or 100th name off of a list.
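A minimal Python sketch of the idea, with a hypothetical list of 40 names and k = 4: every 4th name is taken, starting from a random position within the first interval.

    import random

    random.seed(5)
    names = [f"Person {i}" for i in range(1, 41)]  # hypothetical list of 40 names

    k = 4                        # take every 4th name
    start = random.randrange(k)  # random starting point within the first interval
    sample = names[start::k]
    print(sample)                # 10 evenly spaced names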
Judgment sampling
when the researcher uses his or her judgment to create a sample that is believed to be representative of the population
self fulfilling prophecy/Pygmalion effect
when we intentionally or unintentionally tip people off as to what our expectations are; they pick up on those expectations and perform according to them
sample error
occurs whenever the mean of the sample is different from the mean of the population (expected, and not a big deal)
quota sampling
where a set, requested number of people are selected for interviews in order to fill quotas for particular groups.