ESS 5346 FINAL


What are ethical issues in qualitative research?

- confidentiality and informed consent

What are three threats of bias in qualitative research?

- history - maturation - experimenter bias

Statistical test that adjusts for variables that can't be controlled?

ANCOVA

Factorial ANOVA

ANOVA with more than one IV. Fisher's innovation in the 1930s increased the efficiency of research and allowed exploration of interactions among IVs; sensitive to randomization and equal n in groups. A 2x3 factorial ANOVA means the study/ANOVA is testing 2 conditions of IV1 crossed with three conditions of IV2. F tests for: - main effects (each IV alone): does intensity or frequency matter? - interaction (combined effect of the IVs): does combining intensity and frequency matter? No parallelism (non-parallel lines on the plot) means the variables work together (interact)
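A minimal Python sketch (entirely hypothetical data and variable names; assumes pandas and statsmodels are available) of how a 2x3 intensity-by-frequency factorial ANOVA could be run:
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    # Hypothetical 2 x 3 design: 2 intensities crossed with 3 training frequencies
    df = pd.DataFrame({
        "intensity": ["low"] * 6 + ["high"] * 6,
        "frequency": ["1x", "2x", "3x"] * 4,
        "strength":  [10, 12, 13, 11, 12, 14, 13, 16, 19, 14, 17, 20],
    })
    model = ols("strength ~ C(intensity) * C(frequency)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # one F per main effect plus one for the interaction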

Simple ANOVA

ANOVA with more than two groups; an omnibus test of global equality that requires follow-up post hoc tests to see which means are significantly different - if significant and more than 2 groups: conduct the appropriate post hoc test - effect size (eta squared) = SS treatment / SS total
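A minimal sketch in Python (made-up group scores; assumes scipy and statsmodels are installed) of the omnibus test followed by a post hoc comparison:
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd
    # Hypothetical scores for three groups
    control = [52, 55, 49, 51, 53]
    prog_a = [58, 61, 57, 60, 59]
    prog_b = [54, 56, 53, 55, 57]
    f, p = stats.f_oneway(control, prog_a, prog_b)  # omnibus test of global equality
    print(f, p)
    if p < 0.05:  # follow up only when the omnibus F is significant
        scores = control + prog_a + prog_b
        groups = ["control"] * 5 + ["A"] * 5 + ["B"] * 5
        print(pairwise_tukeyhsd(scores, groups))  # which means differ from which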

4 levels of measurement

Nominal (male/female; case/control): weakest - name only attribute - can be more than two names Ordinal (strongly agree, agree, disagree, strongly disagree; low, medium, high) - applying order; specifically, unique names that we can put in order Interval (Fahrenheit temperature scale) - unique name, unique order, and distance between each of the ticks is a meaningful standard size unit - can calculate means, but zero is not absolute Ratio (Kelvin temperature scale, EMG, force, displacement): strongest/better representation of true actual value - unique name, unique order, unique distance, and 0 is absolute (zero amount of stuff) - ex: starting point of a race is 0

Writing the report

Numerous revisions and editing No standard format, but typically include - Introduction to the problem (theoretical framework) - Description of method - Results and discussion

Categories of statistical tests

Parametric (inferential) and non-parametric. Statistical analysis can be either univariate (tests one independent variable) or multivariate (tests a combination of 2 or more independent variables) - want to try to minimize the # of dependent variables

Determinants of disease?

Risk factors

Explain the meaning of the finding that there is a negative correlation between class attendance (number of absences) and achievement in class (as measured by the final examination). Use the concept of r square in your explanation

The more classes students miss, the lower their final exam scores tend to be (a negative correlation). R squared (the coefficient of determination) tells what percentage of the variance in achievement is shared with (explained by) attendance - e.g., r = -.60 would mean 36% of the variance in exam scores is associated with absences.
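As a rough illustration (entirely hypothetical numbers, using numpy), squaring r converts the correlation into the proportion of shared variance:
    import numpy as np
    # Hypothetical data: number of absences vs. final exam score
    absences = np.array([0, 1, 2, 4, 6, 8, 10])
    exam = np.array([95, 92, 88, 85, 78, 74, 65])
    r = np.corrcoef(absences, exam)[0, 1]  # negative: more absences go with lower scores
    print(round(r, 2), round(r ** 2, 2))   # r^2 = proportion of exam-score variance shared with absences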

Statistics are used and often misused in research. In a conceptual sense, explain what statistics can tell about (a) the reliability of a result (significance) and (b) the meaningfulness of it. Be specific and provide examples of how each of these two aspects would be presented in a research paper.

Significance (reliability) tells whether the result is unlikely to be due to chance; it is presented in the results section (e.g., p values). Meaningfulness is about the size of the effect relative to other measurements - compare effect sizes and discuss them. Don't use vague language like "group one was better than group two"; we want to know if one is better than the other, how it's better, and by how much. The discussion section addresses the meaningfulness - e.g., how much better did the training make the group?

central tendency

avoids the term "average" because it is a colloquial term (it can mean a bunch of different measures). Mean, median, and mode can all be used when you have a normal distribution; the median and mode are sometimes used when you have a skew, because they will be pulled less toward the extremes than the mean.

So which measure of central tendency to report depends on whether or not you have a skewed distribution.

Variability - variance - standard deviation (SD): square root of the variance (spread of scores about the mean) - coefficient of variation: CV = SD/M - range (max - min) - interquartile range (25th to 75th percentile); report the median as well - confidence intervals (typically 95% or 99%, corresponding to type 1 error rates of p < 0.05 and p < 0.01) - variability statistics are tied to probability
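A minimal numpy sketch (hypothetical scores) computing these variability statistics:
    import numpy as np
    scores = np.array([12, 15, 14, 10, 18, 16, 13, 15])  # hypothetical sample
    sd = scores.std(ddof=1)                        # sample standard deviation
    cv = sd / scores.mean()                        # coefficient of variation = SD / M
    rng = scores.max() - scores.min()              # range
    iqr = np.percentile(scores, 75) - np.percentile(scores, 25)  # interquartile range
    print(sd, cv, rng, iqr)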

Unit of Analysis controversy

What is the basic element of the study? Is it a particular individual or a particular whole class of students? - is it n = 5 for 5 teams, or n = 50 for ten women on each of 5 teams? We want as much randomization as we can feasibly, financially, and pragmatically implement.

Standardized scores

Z scores - can convert any interval/ratio score into standard deviation (SD) units - Z = (x - M)/SD - useful because SD has a known relationship to the normal curve - if your data are normally distributed, you can compare the Z scores to any other normally distributed data you have. Ex: to know whether a person's strength level is better than their endurance level or their % body fat, compare the Z scores.
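A tiny sketch (made-up means and SDs) of how Z = (x - M)/SD puts different measures on a common scale:
    # Hypothetical: compare a strength score and an endurance score in SD units
    def z_score(x, mean, sd):
        return (x - mean) / sd
    print(z_score(150, mean=120, sd=20))  # strength: +1.5 SD above the group mean
    print(z_score(42, mean=45, sd=5))     # endurance: -0.6 SD below the group mean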

pandemic

a disease/condition that spreads across regions of the world

A researcher finds a Pearson correlation of .70 between popularity ratings and self-concept scores. What percentage of common association can be inferred between the two variables? a. 36% b. 49% c. 64% d. 70%

b. 49%

In a research study in which the treatment involved quite intense physical training, 40% of the participants in the treatment group dropped out as compared with 5% of the control group. This threat to internal validity is called a. experimental mortality b. statistical regression c. history d. selection bias

a. experimental mortality

If two measures have a high positive correlation and a person has a low score on one measure, his or her score on the other measure is most likely to be a. low b. high c. the same score d. dependent on whether one variable is the cause of the other

a. low

What tool is used to create argument that interprets or assigns meaning to qualitative data? a. narrative vignette b. audit trail c. quotations d. member checking

a. narrative vignette

A major limitation in the use of videotape in observational research is its: a. obtrusiveness b. audio limitations c. inability to observe multiple participants d. poor value in complete observation

a. obtrusiveness

Which is NOT a major assumption of most parametric statistics? a. ordinal measurement b. normality c. homogeneous variance d. random sampling from population

a. ordinal measurement

The term describing the concentration (hump) and tail of scores of a distribution is: a. skewness b. the stem c. kurtosis d. variance

a. skewness

When a researcher claims that there is a difference between treatments (i.e., rejects the null hypothesis) when there really is no difference, what type of error is this? a. type I error b. type II error

a. type I error

When an experimenter states that the level of significance is the .05 level, he or she is setting the probability of committing which type of error? a. type I error b. type II error

a. type I error

Mean (M)

arithmetic mean - add up all the scores and divide by the # of scores you have (most typical score)

Different statistics test for our hypotheses about

associations between variables; differences between groups (M, Me, or shape of distribution) - shape of distribution = typical values

Region of rejection

assume - sample is randomly selected - normal distribution - H0 is true - alpha is set at 0.05. Can be either: - non-directional hypothesis (two-tail test) - directional hypothesis (one-tail test)

Concurrent validity

at the same time - e.g., run a measured sample of gas into a gas sensor instrument to test it, or place a set amount of force on top of a force-measuring device to test it concurrently

The internal validity of an experimental design is concerned with what question? a. To what extent can the findings be generalized? b. Did the independent variable really produce a change in the dependent variable? c. How representative is the setting selected for the experiment? d. Will findings provide information about situations in which variations of the independent variable are present?

b. Did the independent variable really produce a change in the dependent variable?

Which is not an example of a Descriptive Study? a. Case Study b. Experimental c. Observational d. Survey Methods

b. Experimental

Which of the following smoke detector situations is analogous to a type II statistical error? a. Alarm with a fire b. Alarm with no fire c. No Alarm with a fire d. No alarm with no fire

c. no alarm with a fire

A researcher sought to find out which of two exercises was more effective in building endurance. One group used exercise A, and another group used exercise B. At the end of study, the researcher should compare the two groups' scores by a. a dependent t test b. an independent t test c. the Spearman rank-difference correlation d. multiple regression

b. an independent t test

Choices that a researcher makes in focusing and designing a study are called a. limitations b. delimitations c. assumptions d. research hypothesis

b. delimitations

A case study design: a. attempts to predict conditions that are likely to prevail in the future b. gathers data from a single person or from a limited number of similar people c. examines the content of written or printed materials to detect trends d. seeks to infer the results to a much larger population

b. gathers data from a single person or from a limited number of similar people

What is not a characteristic of qualitative research? a. in-depth description b. large sample c. time-consuming data analysis d. inductive construction of a theory

b. large sample

A research proposal has the following title: "Gender differences in mile-run times for 10 and 15 year old children." What is/are the dependent variable(s)? a. gender b. mile run times c. age d. age and gender

b. mile run times

Accepting the null hypothesis testing the correlation between two variables indicates means a. a positive association between the variables b. no association between the variables c. a negative association between the variables

b. no association between the variables

What's an example of a type II error? a. Fire alarm goes off - No fire b. No fire alarm - Fire c. No fire alarm - No fire

b. no fire alarm - fire

The statistical tool for predicting variables from other variables is: a. ANOVA b. regression c. correlation d. ANCOVA

b. regression

Enhancing the power of an investigation will increase the likelihood of: a. accepting a false null hypothesis b. rejecting a false null hypothesis c. accepting a false research hypothesis d. rejecting a false research hypothesis

b. rejecting a false null hypothesis

A pretest of knowledge about microcomputers is given to a group of students 5 min prior to a film on the subject. A posttest given 10 min after the film showed a 10-point gain from the pretest. The researcher concludes that the film produced the gain. Which of the following is likely the threat to internal validity? a. statistical regression b. selection bias c. testing d. maturation

c. testing

The standard deviation represents the a. error in measurement b. spread of scores about the mean c. single score that best expresses the group's performance d. error in sampling from the population

b. spread of scores about the mean

If ANOVA has found a significant difference among four treatment groups, a follow-up test such as the Scheffé is needed to determine a. whether the F ratio is significantly different from chance b. which of the groups differ significantly from the others c. the percentage of variance accounted for by the treatments d. all of the above

b. which of the groups differ significantly from the others

Parametric (inferential)

based on normal curve probability; used to make inferences from the sample to the population (if we have good sampling). 3 major assumptions: - normal distribution of Y (the dependent variable) in the population - equal variance of the DV across conditions - independence of observations/measurements: can't systematically treat the groups differently other than through your independent variable

What is the coefficient of determination of a Pearson r=.5? a. 1% b. 5% c. 25% d. 50%

c. 25% - we square r!

A researcher seeking to know if a new training method was superior to weight training, compared randomly sampled participants in three groups (control, WT, new training). What statistical test is most appropriate? a. dependent t tests b. multiple regression c. ANOVA d. independent t tests

c. ANOVA

A study is designed to determine whether a vitamin supplement treatment affects red blood cell counts, and whether this effect differs between men and women. The independent variable(s) in this study is/are: a. vitamin treatments b. sex c. both A and B d. red blood cell counts

c. both A and B

A quasi-experimental study design does not have: a. treatments b. independent variable c. control group d. interval level measurement

c. control group

A study examining one independent variable and interested in its combined/simultaneous effect on three dependent variables should be analyzed by: a. three t tests b. correlation c. discriminant analysis d. ANCOVA

c. discriminant analysis

For a study with two (or more) independent variables, such as type of instruction and sex, and one dependent variable, such as achievement, the most appropriate analysis technique is a. t test b. Spearman r c. factorial ANOVA d. simple ANOVA

c. factorial ANOVA

The most appropriate relative reliability statistic because it detects change over trials and can examine more than 2 trials is: a. Bland-Altman plot b. Pearson correlation c. intraclass correlation d. mean difference

c. intraclass correlation

The MAXICON principle stands for: a. maximize true variance, minimize extraneous variance, control error variance b. maximize extraneous variance, minimize error variance, control true variance, c. maximize true variance, minimize error variance, control extraneous variance d. none of the above

c. maximize true variance, minimize error variance, control extraneous variance

Which of the following terms least belongs with the others? a. mode b. mean c. standard deviation d. median

c. standard deviation

A t test is used to a. adjust for initial differences within the groups b. estimate the error of prediction c. test whether two groups differ significantly d. see if the tests are reliable

c. test whether two groups differ significantly

Greater attention may be paid to methodology in qualitative research because: a. numbers are used b. validity issues c. the investigator(s) are the instruments d. reliability issues

c. the investigator(s) are the instruments

When a researcher states that a result is significant, this means that a. the effect is an especially important one b. the scores are not correlated c. the result is unlikely to be a chance occurrence d. the scores are correlated

c. the result is unlikely to be a chance occurrence

When a researcher states that a result is "significant" this means that: a. the result is especially important b. the scores are not correlated c. the result is unlikely to be an observation due to chance d. the result is an effect/treatment close to zero

c. the result is unlikely to be an observation due to chance

Qualitative research is more likely than quantitative research to have a: a. hypothesis b. statistical test c. theoretical framework d. generalization to a population

c. theoretical framework

The equivalent of validity in qualitative research is: a. triangulation b. negative case checking c. trustworthiness d. narrative vignette

c. trustworthiness

If the researcher fails to reject the null hypothesis when there really was a difference, this is an example of a a. one-tailed test b. two-tailed test c. type I error d. type II error

d. type II error

External validity

adequate procedures (sampling and inferential statistical power) to support an inference generalizing the results to a population - can the results be generalized to the whole population? Was there adequate sampling, treatment, and setting for the results to be generalized, and to what population? If you have an internally valid study with results big enough to be meaningful, you then ask whether they represent the whole population - is the sample large enough, randomized, and backed by enough statistical power to use inferential statistics and say "these results can be applied to similar people in the population"? Would the whole population behave the same way as the sample data in this study?

Internal validity

adequate procedures such that one can conclude that changes in Y are primarily a result of X - all extraneous variables and potential biases are minimized. Was there adequate experimental design and control to conclude that the changes in the DV were the result of the IV? Does the study measure what it intends to measure; does the IV systematically change the DV? - tentatively = one study; replications = more studies. If you can critically evaluate all aspects of the research, the statistics were done correctly, and no assumptions were violated, then you can say the results are believable for that data set.

Scales for affective variables

affective domain variables address attitudes, opinion, personality, etc. types of scales used - Likert scale (5-7 categories): ordinal or interval property controversy - Semantic difference scale (opposing anchors) - Visual analog scale

population

all possible observations or measurements of a particular variable (parameter: true mean SAT) - sometimes can't know the vast majority of the population ex: all seniors at a university or all adults between the age 30 and 35 in the U.S.

Repeated Measures ANOVA

an analysis of scores for the same individuals on successive occasions, such as a series of test trials - increases power and separates individual differences from the error term; a good ANOVA model for ESS research - controls for individual differences in within-subjects designs - studies variables over time. Additional assumption of sphericity (> 0.75). Between-subjects ANOVA: SStotal = SStreatment + SSerror. Within-subjects RM ANOVA: SStotal = SStreatment + SSsubjects + SSresidual (error)

Advanced Correlation Procedures

Canonical Correlation (Rc): index of the maximum association between combinations of predictor and criterion variables - ex: these risk factors together are associated with CVD. Factor Analysis: procedure to reduce a large number of correlated variables into smaller sets of latent or hidden variables/constructs - find underlying constructs - explore characteristics (fitness components, sport success) - most likely to provide strong association evidence that tells us about relationships - structural explanation - take a bunch of measurements, throw them into the analysis, and find out how the variables clump together - ex: is an event an endurance, strength, or speed event?

Rates of disease are calculated by dividing number of cases by?

The size of the population (at risk) during that period of time

What are three major methods of qualitative data collection?

Interviews, observations, and focus groups

Findings from an investigation revealed a "significant interaction"; therefore the statistical analysis used to analyze the data was: a. ANOVA b. t-test c. Multiple Regression D. Factorial ANOVA

D. Factorial ANOVA

Descriptive studies

Describe the current status of a person, group, or population. Typically the first studies of a new topic, measurement, or phenomenon - fairly solid case for novelty - typical values and variability: conditions, populations, or time points - validity and reliability - generating hypotheses. Often less explanatory power - cannot demonstrate cause and effect - hard to justify importance/contribution - perception of weak value as a scientific "fishing expedition"

Not one of the three kinds of epidemiological designs?

Ethics

Both t test and ANOVA are parametric tests (population value) of means assuming

Population distribution of Y is normal; samples from the population are random; samples have homogeneous variance (homoscedasticity) - can't have hugely unequal variability; interval or ratio level measurement - can't cheat the level of measurement

Skewness deviation

positively skewed - tails off to the right (+) - most of the scores are low ex: MLB player salary or professor's salary negatively skewed - tails off to the left (-) - most of the scores are high

Epidemiology is one of the research tools in the public health toolkit

public health is the science that promotes and protects the health of people and their communities at the population level

Epidemiology

study of the distribution and determinants of health-related states among specified populations and the application of the study to the control of health problems - focuses on demographics or distributions of various diseases

Sample

a subset of a population that is measured - the sample mean (M) is a point estimate of the population parameter (e.g., the true mean SAT) - can use the sample to infer what the value might be in the whole population, remembering that the sample statistic and the true population value aren't the same

Qualitative Research: Getting beyond numbers

qualitative research is systematic and theoretical; qualitative research is not anti-numbers (numbers and statistics can be used), just pro-meaningfulness - constructs in the behavioral sciences - what people are thinking (metacognition) - exploring unknown constructs/concepts/thinking - some things cannot be measured quantitatively - time intensive

trustworthiness

qualitative research nomenclature for research validity/reliability/quality. Two major issues of qualitative research trustworthiness: - is the study conducted ethically? - is the study conducted competently? Attention to trustworthiness occurs during both data collection and data analysis. Usually addressed in the methodology section to show the methods used were trustworthy - did you follow the usual standards used in qualitative research? - data collection and data analysis are both influenced by you (the researcher)

What is the main difference between experimental and quasi-experimental research designs?

random selection and control group

Sensitivity to change/responsiveness over time indicates

smallest real difference or reliable change index: the 95% CI of the SEM difference; minimum clinically important difference - another way to express measurement inconsistency in the actual units

Most common Ho (null hypothesis) is non-directional: no difference between two means (M1 - M2 = 0)

so one can look at the relative frequency of this yes/no decision and see the probability of our statistical test to establish statistical error rates (type 1 and type 2)

t tests

t = ratio of estimated true variance / error variance - are those two means different from each other? - true variance = one mean is bigger than the other. Independent t-test: two independently sampled/different groups - most common. Dependent t-test: two paired or repeated (dependent) measures - between the means of two sets of scores that are related, such as when the same participants are measured on two occasions - ex: did fitness improve from pretest to posttest?
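A minimal scipy sketch (hypothetical scores) of the two kinds of t test:
    from scipy import stats
    # Independent t test: two different, independently sampled groups
    group1 = [23, 25, 21, 27, 24]
    group2 = [28, 30, 26, 31, 29]
    print(stats.ttest_ind(group1, group2))
    # Dependent (paired) t test: the same people measured twice, e.g., pretest vs. posttest
    pre = [23, 25, 21, 27, 24]
    post = [26, 27, 24, 30, 25]
    print(stats.ttest_rel(pre, post))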

categorical variable

sometimes have to control for these (injured/uninjured, male/female) - even though they are at a low level of measurement, they may have a very powerful influence

random sampling

tables of random numbers or computers - washes away systematic bias - most insurance against biases

Histogram

sorting all the data, rank ordering it, and putting it into about 10-20 groups called class intervals; scores are plotted against the frequency of occurrence - the height of the bars in the histogram represents the frequency or # of observations in that class interval - facilitates hand calculations with grouped-data formulas - useful because it very quickly gives you a snapshot of all your data to check for accuracy and to clean and correct errors - note that the bars touch each other in histograms

Confidence intervals

statistic combining a point estimate (M or another statistic) with variability/probability - gives a range of values for the estimate: CI = M ± (critical value) × SEM - 95% CI (alpha = 0.05), 99% CI (alpha = 0.01) - the confidence interval about the mean is calculated from the chosen statistical error rate and the standard error of the mean
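A minimal sketch (hypothetical sample; numpy and scipy assumed) of a 95% CI about the mean, M ± t_crit × SEM:
    import numpy as np
    from scipy import stats
    scores = np.array([12, 15, 14, 10, 18, 16, 13, 15])  # hypothetical sample
    m = scores.mean()
    sem = stats.sem(scores)  # standard error of the mean
    lo, hi = stats.t.interval(0.95, len(scores) - 1, loc=m, scale=sem)
    print(m, (lo, hi))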

Two kinds of validity

statistical (external/internal) and measurement (assigning #s to things)

If we are interested in changes over a long time, we need statistics

statistics to describe performance (group and individual): P = percentile (knowing where scores are relative to each other), Z = z-score (standard deviation units), C = correlation. Development is interpreted as change over time; challenges of developmental research - population specificity - unclear semantics: there can be outliers (might be 3 standard deviations above the mean) - lack of reliability: testing over time - statistical problems: especially in longitudinal studies that try to follow people for 10 years, we lose people

Probability (p)

the odds or chance (ranging between 0 and 1) that an event will occur - 0 = it will never happen - 1 = certain it will happen. Can be established: - theoretically (Bayesian): e.g., Law of Complementary Probability (1 - p) - experimentally (frequentist): relative frequency - all the statistical tests we talk about are frequentist

Percentiles

the percentage of scores at or below a specific score (1 to 100) - rank-order the scores - helpful to see where scores are relative to each other, from the smallest score to the largest - can be done as quartiles (every 25%) or deciles (every 10%)

Experimental designs

use randomized clinical research trials - serious cost and ethical issues in providing treatment and control treatment - can be performed at the individual level or community level (statistical unit of analysis issue) integrated with bench/basic science combined with meta-analyses

Unobtrusive research

collecting data when people don't know they are being studied - study of people's public behavior out in the real world, often without informed consent - expected to be unbiased. Collecting data about people as they behave naturally - nature of cities - who watches TV - records of people and events (birth certificates, magazines, newspapers, nature of graffiti)

Standardized effect sizes (d) are important in

comparing different kinds of variables (SD units, like a Z score), systematic review papers (meta-analysis), and sample size calculations. Cohen's (behavioral science) standards: 0.2 small, 0.5 medium, >0.8 large; ESS d's may need to be bigger - other things you plug into a sample-size program are the typical dependent variable scores, typical standard deviation, and the size of difference that you want to detect (i.e., the effect size)
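A small sketch (hypothetical group scores) of computing a standardized effect size d as the mean difference in pooled-SD units:
    import numpy as np
    def cohens_d(group1, group2):
        # d = (M1 - M2) / pooled SD
        g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
        n1, n2 = len(g1), len(g2)
        pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
        return (g1.mean() - g2.mean()) / pooled_sd
    print(cohens_d([28, 30, 26, 31, 29], [23, 25, 21, 27, 24]))  # judge against 0.2 / 0.5 / 0.8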

Discriminant function analysis (DFA)

correlating variables to predict group membership - Whether you are injured or not; can separate athletes into bench warmers, starters, award winners - Want to know performance variables you can measure and find out if you can discriminate and sort people from different groups

Logistic regression

correlating variables to predict the probability of an occurrence - disease or condition; able to sort athletes into categories after a period of time - e.g., how many ankle injuries you have

Pearson r correlation is often reported in a

correlation matrix

Partial correlation

correlation of two variables while controlling for a third - e.g., the association between muscle strength and age in elementary school children - examine the association after removing the effect of another variable (remove body weight and keep age and strength) - allows you to see the association between the two variables of interest

Structural equation modeling (Path analysis)

creates a structural or path model of causal relationships - a correlation analysis in which you build the structure of how things are related

4 issues of competently conducted qualitative research

credibility: accurate description of the participants and setting - can you make a case? transferability: would the results be useful to those in other settings or conducting research in similar settings? - are you showing that you tried to address issues as objectively as possible? dependability: how well the researcher dealt with change - why is there a change in the research? confirmability: could another individual confirm the results? - check the assessment

Developmental research in which the data collected comes from subjects of different ages for the same test is called:

cross sectional

Normative study

cross-sectional study used to establish norms (typical values and variability) of performance, beliefs, or attitudes of a population - growth charts (height and weight): median value - fitness tests - occupational test ("job analysis") - health screenings precise definition of the population - often two (original and validation) large samples

What % of scores lie within the 1st standard deviation (SD) of a normal distribution d. 60% e. 95% f. 99% g. 100%

d. 60%

What are two kinds of validity? a. Statistical b. Measurement c. Constructive d. A and B

d. A and B - statistical (internal/external) and measurement

When the purpose of the research is to evaluate the effects of an independent variable on a dependent variable while controlling the influence of another characteristic, the best choice of a statistical analysis is: a. ANOVA b. multiple regression c. discriminant analysis d. ANCOVA

d. ANCOVA

Common post hoc tests are? a. Scheffe b. Tukey c. Newman-Keuls d. All of the above

d. All of the above

Interviews have what kind of ecological validity? a. Poor b. Alright c. Good d. Better e. All of the above

d. Better

For a study with two (or more) independent variables (e.g., type of instruction and sex) and one dependent variable (e.g., achievement), the most appropriate statistical technique is: a. t test b. Spearman r c. repeated measures ANOVA d. factorial ANOVA

d. Factorial ANOVA

Which is the highest level of measurement? a. Nominal b. Ordinal c. Interval d. Ratio

d. Ratio

t ratio and F ratio represent: a. Error Variance/True Variance b. Probability of Type I error c. Probability of Type II error d. True Variance/Error Variance

d. True Variance/Error Variance

A study failing to reject a null hypothesis when there really was a difference is an example of: a. a one-tailed test b. a two-tailed test c. Type I error d. Type II error

d. Type II error

A null hypothesis is: a. the same as the research hypothesis b. a statistical hypothesis that assumes there is a difference between groups/treatments c. a statistical hypothesis assuming equal variability between groups/treatments d. a statistical hypothesis assuming no difference between groups/treatments

d. a statistical hypothesis assuming no difference between groups/treatments

Protecting the rights of participants in research is important in a. experimental research and quasi-experimental research b. quasi-experimental research and correlational research c. experimental and qualitative research d. all of the above

d. all of the above

A Pearson correlation is an inappropriate measure of association between two variables when: a. the variables are ordinal b. there are many outliers c. the scatterplot is not linear d. all these factors make correlation inappropriate

d. all these factors make correlation inappropriate

Alpha and beta error rates: a. sum to 1.0 b. are unrelated c. should always be less than 0.4 d. are inversely related

d. are inversely related

The accuracy with which a 12-min run estimates maximal oxygen consumption in a group of female high school seniors represents: a. content b. logical c. construct d. concurrent

d. concurrent

The most efficient (least expensive) way to study a variable over large amounts (years) of time a. longitudinal b. correlational c. case d. cross-sectional

d. cross-sectional

The only type of research that can manipulate treatments and establish a cause and effect is a. descriptive research b. analytical research c. correlational research d. experimental research

d. experimental research

A threat to the validity of an instructor's ratings of student participation in a course from previous experience with some students in previous courses is an example of: a. Hawthorne effect b. reminiscence effect c. personal bias effect d. halo effect

d. halo effect

The primary variability statistic illustrated with a box plot is: a. the standard deviation b. the range c. the coefficient of variation d. the interquartile range

d. interquartile range

Which is not a technique to document trustworthiness of qualitative research? a. rich descriptions b. negative case checking c. peer debriefing d. narrative vignette

d. narrative vignette

If there is a Pearson correlation of .80 between a strength test and performance on a track field event, one can infer that: a. 40% of performance is caused by strength b. 64% of performance is caused by strength c. 80% of performance is caused by strength d. no causal relationship between strength and performance

d. no causal relationship between strength and performance

The statement that there is no relationship between the variables is the: a. research hypothesis b. alternate hypothesis c. significant hypothesis d. null hypothesis

d. null hypothesis

Which of the following is not a descriptive study? a. uses correlation b. case study c. developmental d. quasi-experimental

d. quasi-experimental

A good absolute statistic for measurement reliability is: a. Pearson correlation b. standard deviation c. intraclass correlation d. standard error of measurement

d. standard error of measurement

After a population of 1,600 high school seniors from a school district is divided by sex and school attended, the random selection of a sample to represent these proportions of the population is called: a. convenience sampling b. systematic sampling c. cluster sampling d. stratified random sampling

d. stratified random sampling

The standard deviation represents: a. the error in the measurement b. the distance of sample mean to the population mean c. the error in sampling from the population d. the spread or variability of scores

d. the spread or variability of scores

procedures in qualitative research

define the problem formulate questions and theoretical framework collect data: try to get opinion without asking - training and pilot work - selection of participants - entering the setting sorting, analyzing, and categorizing the data - data reduction analysis and interpretation

determinants

defined characteristics associated with change in health (risk factors, exposure)

Epidemiology study designs

descriptive - cross-sectional designs - ecological designs analytical - Cohort studies - Case-control studies Experimental designs: randomized trials - highest level of research (more expensive and rare)

Surveys

descriptive study seeking to determine present practices, opinions, or perceptions of a population - questionnaire - interview - Delphi method: consensus of experts - normative survey. Questionnaire - paper vs. electronic/online - systematic - 8 steps - make sure questions have the objectives in mind - define sample size: alpha level (type 1 error rate; due to chance)

Types of statistics or kinds of questions answered with these theories and mathematical tests

descriptive techniques: what are typical values? - when you make a new instrument or measurement, what are the typical scores you are looking for? ex: where does 120/80 mmHg come from? (need to describe). correlational techniques: are these things associated? - do the variables co-vary together? - something might be going on that we later test as cause and effect in other studies. differences among groups: are these things different? - the statistical comparisons between groups that are part of inferential/experimental studies

purposes of epidemiology in public health

discover the agent, host, and environmental factors that affect health determine the relative importance of causes of illness, disability, and death identify segments of the population that have the greatest risk evaluate the effectiveness of health programs and services in improving population health

endemic

a disease or condition present among a population at all times - an expected baseline amount of a particular condition (obesity, low levels of physical activity)

stratified random sampling

divide into groups first on some characteristic, then random sample

Studies designed with manipulations of independent variables are prospective in nature

We do this in order to create initial (sample-level) evidence for establishing a logical deduction of a potential cause and effect - remember that research does not try to take sample evidence and say it proves anything. We use sample evidence when it is statistically significant, big enough to matter, and has no serious threats to its internal validity - then we can apply inference to make claims about its potential external validity, or its actual strength of evidence to support a conclusion that there might be causative factors involved - initial evidence: one study never proves anything

Which type of error is a false positive? e. Type 1 error f. Type 2 error g. Type 3 error h. Error

e. Type 1 error

What is the formula for the line of best fit? e. Y=mx+b f. Y=b+a-c g. Y=xb-a h. Y=mc^2 i: e and g

e. Y = mx+b

Interventions

efficacy: benefit expected of a health intervention based on the highest-quality study design evidence (experimental design with a control, like an RCT). effectiveness: benefit expected of a health intervention in real-world conditions (high variability from heterogeneous people, multimodal treatment, compliance/delivery)

Statistics are needed because of

error and variation in all measurements - this is why statistical tests are used and answer very basic questions

Remember that most statistical tests in research reports are frequentists, meaning

evidence is collected against or designed to reject a null hypothesis (Ho) - meaning there is no difference or association

Interviews

face-to-face better compliance/return rate if handled correctly good ecological validity adaptable and versatile dynamic: responses and interaction can lead to new insights, but beware of potentially imposing experimenter bias also requires extensive planning and pilot study

Linear regression

formula for the line of best fit: Y = a + bX - a constant (intercept) and a slope. Minimizes the error between the line and the actual paired data points - want the residuals minimized. Accuracy of the estimate is established by examining the residuals and calculating an average error called the standard error of estimate - based on the sum of squares error
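A minimal scipy sketch (made-up paired data) of fitting Y = a + bX and computing the standard error of estimate from the residuals:
    import numpy as np
    from scipy import stats
    x = np.array([1, 2, 3, 4, 5, 6])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])  # hypothetical paired scores
    fit = stats.linregress(x, y)
    print(fit.intercept, fit.slope)  # a and b in Y = a + bX
    residuals = y - (fit.intercept + fit.slope * x)
    see = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2))  # standard error of estimate
    print(see)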

Distributions

frequency: prevalence, incidence, mortality rate - body counts or disease case counts that are usually turned into rates - can study the # of cases per population in a certain amount of time patterns: person, place, time, exposure - helps us figure out what diseases might be caused by

Predictive validity

future time - compare the measurement to something that happens later that might be related to changes in the measurement value - does the test predict a future outcome?

Within a longitudinal study, which ANOVA would be more appropriate? e. Multifactorial ANOVA (MANOVA) f. ANCOVA g. Repeated Measures ANOVA (RMANOVA) h. T test

g. Repeated Measures ANOVA (RMANOVA)

Inferences controversy

inferential (parametric) stats vs. non-parametric stats. Inferential statistical procedures: used to infer to the population - to take your sample data and say it might be representative of the population. Non-parametric statistics: non-population based - only give results that relate to the sample - should not be used to explain how the results might apply to other people in the real world, because we don't have strong enough data to make an inference about the population value

Single studies that test multiple DVs with univariate statistics will

inflate the experiment-wise type 1 error: FWα = 1 - (1 - α)^c, so if α = 0.05 and 6 statistical tests are run on the same sample data, FWα = 0.26 (for independent/uncorrelated DVs) - have to ask whether an effect is real or due to chance. Most studies test multiple DVs from the same single data set - inflating alpha, so meaningless effects are easily (falsely) identified as significant
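A quick arithmetic check of the family-wise inflation (plain Python, no data needed):
    alpha, c = 0.05, 6
    fw_alpha = 1 - (1 - alpha) ** c   # experiment-wise type 1 error for c independent tests
    print(round(fw_alpha, 2))         # 0.26 - six tests at .05 give ~26% chance of at least one false positive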

Plotting change scores

interested in differences or changes because it's an experimental study and we're comparing differences between multiple groups - or studying intervention over time and want to see if someone improves or doesn't improve

Assumptions of Pearson correlation

interval or ratio level (scores an equal distance apart); linear association between the variables - the data have to follow a straight line - if the relation is not linear, r is inappropriate. Values free to vary, homoscedastic, and with no systematic sampling (variance introduced) - the two variables measured have to be free to vary and have normal distributions without systematic cheating. Ex: number of months, skill level (beginner, intermediate, advanced), body fat % - want continuous data that can take any value - if you see missing or cheated data = big variance problems (nothing in the middle)

Multiple regression

model used for predicting a criterion from two or more independent (predictor) variables; correlating more than one predictor with a continuous criterion variable - a statistical technique that allows variables to be combined at the same time to get a better estimate: Y = a + b1X1 + b2X2 + ... - coefficient of multiple determination R^2 - square multiple R to get the multiple determination - take a scatterplot and get a better prediction. Turn the numbers into standard scores and the constants in the formula into beta weights - go from 0 to 1 - show how much each variable contributes to the prediction - make the formula easy to look at and understand
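A minimal numpy sketch (hypothetical predictors and criterion) of fitting Y = a + b1X1 + b2X2 and computing R^2:
    import numpy as np
    x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
    y = np.array([3.1, 3.9, 7.2, 7.8, 11.1, 11.7])  # hypothetical criterion
    X = np.column_stack([np.ones_like(x1), x1, x2])  # columns for a, b1, b2
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coef
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)  # coefficient of multiple determination
    print(coef, r2)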

Statistics provide a set of

more objective procedures for interpreting data - however this field and its application are controversial - remember that research is empirical data

random assignment

most common in practice: have a recruiter (often with incentives) get participants who might have certain characteristics - can then randomly assign them into groups afterward

Mode (Md)

most frequent score

What are the types of scales used for affective variables?

Likert, semantic difference, and visual analog scales

Pearson r is a

an index of association whose values are not on an interval scale (the r metric is non-linear). The third step is to evaluate the size: coefficient of determination (r^2), and use the Fisher Z transform to average r's - explained variance: turns the association into a percentage from 0-100% - have to square r to get the strength of the association: r = 0.3 → 9%, r = 0.4 → 16%, r = 0.5 → 25%, and so on
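A minimal scipy/numpy sketch (hypothetical scores) of computing r, r^2, and averaging r's via the Fisher Z transform:
    import numpy as np
    from scipy import stats
    x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    y = np.array([2, 1, 4, 3, 6, 5, 8, 7])  # hypothetical paired scores
    r, p = stats.pearsonr(x, y)
    print(r, r ** 2)  # r^2 = explained (shared) variance
    rs = np.array([0.30, 0.40, 0.50])          # r's from different samples
    mean_r = np.tanh(np.arctanh(rs).mean())    # Fisher Z: transform, average, back-transform
    print(mean_r)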

systematic sampling

not at random - ex: can do every 1 out of 5 - easy way to collect data

Non-parametric

not based on assumed distribution and cannot be used to infer to population - description (including validity and reliability) - drawings, photographs, sample items - scoring method

Power in ESS research

not good: calculated power for 43 papers published in the Research Quarterly was - 0.18 for a small d - 0.39 for a medium d - 0.62 for a large d; most studies with small sample sizes had power of about 0.5. ESS has a similar "confidence crisis" and poor replication to that seen in other areas of science

rate

number of cases occurring during a specific period (depends on size of population during that period)

Integration/synthesis of experimental research and logical deduction are what is needed to make a judgment about potential cause and effects

one study does not provide the external validity inference we typically need to identify a causative mechanism - you have to integrate and synthesize a lot of experimental research, and the logical deduction you make at the end of the story is a judgment about potential cause and effect

one-tailed t test vs two-tailed t test

one-tailed: a test that assumes that the difference between two means lies in one direction only. two-tailed: a test that assumes that the difference between two groups could favor either group

Extraneous variable

ones that we have to control even though they really aren't of interest to us, because they can confound and mess up the results (ex: health)

Semantic difference scale

opposing anchors - done by taking the Likert scale and adding anchors on opposite sides that try to force the person to interpret the scale as a linear distance scale

Z scores

will be + or - (above/below the mean); Z scores tell us how big the difference might be between one measurement and another

Criterion-referenced validity

used most commonly when measuring with instruments in the lab or field; calibrate instruments: compare measurements to some known standard or criterion; compare tests/measures to a "gold standard" measurement - reference instrument - reference materials (gases, objects, masses): 2 measurements - one from the instrument and one from the reference value. Pearson r is inappropriate because it only tests for association. Mean difference / Bland-Altman plot - compare the 2 numbers, calculate the mean difference between them, and then plot the differences between the two measurements (criterion and instrument) - want the dots centered around 0 = a high-quality measurement, because you want the measurement to agree with the criterion
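A small numpy sketch (hypothetical criterion and instrument readings) of the mean-difference (Bland-Altman) idea; the ±1.96 SD limits of agreement are the conventional addition, not something stated on this card:
    import numpy as np
    criterion = np.array([10.1, 12.0, 9.8, 11.5, 13.2])    # gold-standard values
    instrument = np.array([10.4, 11.8, 10.1, 11.9, 13.0])  # instrument being checked
    diff = instrument - criterion
    bias = diff.mean()               # mean difference (systematic bias); want it near 0
    loa = 1.96 * diff.std(ddof=1)    # limits of agreement around the bias
    print(bias, (bias - loa, bias + loa))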

Experimentally (Frequentists, objectivist, physical)

uses f as frequency, the number of times certain scores happen - can experimentally determine the probability of certain scores by counting up your distribution or histogram (frequency of certain observations (Fo) divided by the total frequency or total sample size (Ftot)) - that percentage gives you the probability of that particular score happening

Association between variables

various correlation and regression techniques are used to measure the direction and strength of the association between two or more variables/DVs. Covariation: does standing long jump score covary with athlete height? - as people get taller, they have longer standing long jumps - taller athletes have better long jumps. Think of association in terms of direction, strength, and relationship - how one thing goes along with another. Associations do not infer causation (a correlation between 2 variables does not mean one causes the other), explain mechanisms, or establish relationships - the only way causation can be shown is in an experimental study where an IV can be manipulated to bring about an effect

Research is empirical

we collect data, and it turns out that when we measure something, there are four different levels of measurement

Observational

what, who, where, when, and number of observations - Observe people in their natural environment - Know what you want to measure - Collect info in different ways of what the person is doing/what we are observing how to score observations: - narrative (continual recording) - tallying (frequency counting) - interval methods - duration method - video-recording

T-tests compare ____ while ANOVA compare ____?

2 groups; 2 or more

Predicting a continuous criterion variable from more than one variable is?

Multiple regression

Major assumption of parametric statistical tests?

Normal distribution, equal variance, and independence of observations

Case studies

Types - descriptive: develops a detailed picture of the factors that affect something - interpretive: uses data to classify or conceptualize (key to how people respond) - evaluative: determines the merits of practices or events. Steps: selecting participants, gathering and analyzing data

In developmental studies, when one wants to determine how individual children change rather than what is typical at each stage, one would prefer the ________ method. a. longitudinal b. normative c. multiple baseline d. cross-sectional

a. longitudinal

Likert rating scales are somewhat controversial because the data are not considered _____ level scales, at best, by some scientists. a. ratio b. ordinal c. nominal d. interval

d. interval

"To what populations, settings, treatment variables, and measurement variables can this effect be generalized?" might most appropriately be asked in relation to a. criterion validity b. external validity c. internal validity d. construct validity

b. external validity

If a thermometer measured the temperature in an oven as 400° five days in a row when the temperature was actually 337°, this measuring instrument would be considered a. reliable and valid b. valid but not reliable c. reliable but not valid d. unreliable and invalid e. concurrent validity

c. reliable but not valid

epidemic/outbreak

disease occurrence among a population that is in excess of what is expected in a given time and place

Correlational

explores associations (potential relationships) - no manipulation or grouping within variables - sound, hypothesized rationale for potential association does not establish cause and effect may lead to regression/prediction methods remember limitations of correlation analysis - population and sampling - spread of scores - scatterplot and outliers - shrinkage effect for prediction equations

What is the coefficient of determination r=0.7? e. 16% f. 36% g. 49% h. 81%

g. 49% - remember the coefficient is equal to r squared

Is the study ethically conducted? (qualitative)

an important issue in qualitative/ethnographic research because of - the nature of the topics addressed (individual, sensitive, personal) - the time spent with participants - the interaction with participants and potential guidance/bias. Take steps to ask objective, non-biased questions

Qualitative research

in-depth observation, description, and analysis of human thought and behavior (interpretive - beyond descriptive) - studies things that aren't easily measured with instruments - sample sizes are typically small (unlike quantitative research, which wants samples large enough to be representative of the population and to give results that are statistically big enough to matter) - asks more in-depth questions about individuals' thinking; more time consuming to sort through

qualitative methods

interviews - individual groups - formal/informal - formal interviews generally use a pilot-tested protocol focus groups - Can formulate questions to ask you (the researcher) observations - Audio - Video - Field notes - Make notes of behaviors and comments - Come later when you video record interviews

convenience sample

a sample of readily available participants - weakest insurance against bias, so any explanations or inferences to the population require post hoc justification

Factorial ANOVA and KISS principle

keep it simple, stupid. A frequency by intensity (2 x 3) ANOVA has 1 interaction - frequency x intensity. A frequency by intensity by duration (2 x 3 x 2) ANOVA has 7 F ratios: 3 main effects and 4 interactions - F x I - I x D - F x D - I x F x D
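A quick way to count the F tests (plain Python; the factor labels are just the abbreviations used above):
    from itertools import combinations
    factors = ["F", "I", "D"]  # frequency x intensity x duration (2 x 3 x 2)
    effects = [" x ".join(c) for r in range(1, len(factors) + 1) for c in combinations(factors, r)]
    print(len(effects), effects)  # 7 effects: 3 main effects, 3 two-way interactions, 1 three-way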

Controversy over robustness of assumptions

if there are serious violations of assumptions, transform the data or use adjusted tests

independent variable

the variable that you are testing/manipulating - the one that might carry the causative mechanism (per your theory)

Consider an "unexplained variance" of r

remember that a strong correlation does not infer causation - correlations can be symptoms of other associations - spurious correlations can arise from one or more other variables (multi-collinearity) - ex: are 8th graders better at math than 1st graders? Do 8th graders have bigger feet than 1st graders? - partial correlation (r12.3) and semipartial correlation (r1(2.3))

quasi-experimental designs

reversal (can also be done as a crossover design): 2 groups do a sequence of treatments, then switch orders and do the other treatment. ex post facto: not a true control group. switched replication: like the Solomon design except it isn't perfectly crossed - 4 groups that all receive the treatment at different times. time series: one group, 2 observations, a treatment, then 2/3/4 observations - a series of observations before and after treatment. single-subject: has a treatment (might measure recovery over time) - no control

Interactions

significant interactions are key to interpreting factorial ANOVAs, since they qualify (can distort) the tests of main effects - disordinal - ordinal

Law of complementary probability (theoretical)

one of the simplest laws of probability - ex: if the probability of flipping a coin and it coming up heads is 0.5, what is the probability of it coming up tails? - heads: p = 0.5 - tails: p = 0.5 - ex: if there is a 0.2 chance that it will rain, there is a 0.8 chance that it won't rain. (Bayesian probability, in contrast, is based on prior knowledge of conditions - updating the p of hypotheses: more advanced statistics that combine all previous observations to calculate a probability, collect new data, and then update the probability of a particular hypothesis.)

Selecting predictor variables

- Forward selection - Backward selection - Max. r2 - Stepwise - Hierarchical

Name three kinds of experimental designs

- Pre-experimental - True-experimental - Quasi-experimental

What are three techniques to document trustworthiness in qualitative research?

- triangulation - member checking - audit trail

Descriptive designs examples

- questionnaires - Delphi Method - Interviews - Normative survey - developmental research - case studies - observational and unobtrusive - correlations

Examples of sampling

- random - stratified random - systematic - random assignment - convenience; as we go down the list, there is less insurance against bias in our sample being representative

The condition under which the subjects' responses or performance is measured is called the: a. dependent variable b. independent variable c. control variable d. extraneous variable

a. dependent variable

Qualitative research is?

-Systematic and theoretical

P-value of standard type II error?

.2

Median (Me)

midpoint (50th percentile)

3 criteria that have to be present for initial evidence

1. sequence criterion (cause must precede effect) 2. correlation (strong association) - has to be a consistent set of results - correlation does not equal causation, but prospective studies show an independent variable producing an effect in sequence 3. no intercorrelations (effect not explained by another variable) - reversibility and sole/primary factor - no other confounding variables - the effect reverses when we take away that independent variable. Remember: if the condition is necessary and sufficient to produce the effect and all 3 criteria are present, it is the cause

Theory construction

2 approaches - theory-discovering: finding abstract categories, relationships, or explanations within the qualitative data (inductive logic) - some studies attempt to generate theories or use techniques to ground the theory in the data (grounded theory)

When should an independent t test be used in a study?

2 independent different groups with different treatments

Explain what is meant by the statement that a statistical test may be statistically significant but not meaningful. Give an example of such a circumstance.

2% increase in strength may be statistically significant, but not meaningful because strength can be increased by a lot more than 2%. - Can be bigger than 0 but still not be meaningful. Should focus more on magnitude.

Qualitative research can answer important question

Asking a really good research question matters - even with double blinding and a good sample size, a good scientist never assumes that the answer is always right - there might be sample bias or sampling error - people might not be honest

Additional factors that can affect the relationship between physical activity and blood cholesterol such as smoking, body fat, and so on?

Confounding factors

Distinguish between what information a significant F in an ANOVA tells you and what Omega squared tells you about the ANOVA results

A significant F tells you that the treatment effect is unlikely to be zero, i.e., at least two of the group means differ (the F ratio compares 2 or more groups; the t ratio compares 2 groups). Omega squared tells you the size of the effect - the proportion of variance accounted for by the treatment
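A small arithmetic sketch (made-up sums of squares) of one common omega-squared formula for a one-way ANOVA:
    # omega^2 = (SS_treatment - (k - 1) * MS_error) / (SS_total + MS_error)
    ss_treatment, ss_total, ms_error, k = 120.0, 500.0, 8.0, 3  # hypothetical values
    omega_sq = (ss_treatment - (k - 1) * ms_error) / (ss_total + ms_error)
    print(round(omega_sq, 2))  # proportion of variance in the DV accounted for by the treatment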

Interpreting r

First step: examine the scatterplot for linearity and assumption violations - homoscedasticity: the points follow a straight line with even scatter, without curves - check sample size and outliers - use the plot to make sure you are not violating anything. Second step: test whether r = 0 with a t test or a correlation table (N - 2 degrees of freedom) - run a test to show that the effect is big enough not to be 0 - confidence interval: r is somewhere in that range - the bigger the sample size, the better the estimate - fewer statistical errors, more confidence in the association

Prediction (Linear Regression)

For variables with strong correlations, good sampling, and no confounding variables, linear regression can create a formula to predict one variable from the other - the association can be used to guess one from the other - a straight-line pattern is predictable - need a large sample and no hidden variables - formulas are unique (not symmetric): you can't reverse the formula or go backwards (e.g., can't flip a formula that guesses a 1RM from 5 or 6 reps) - limited to the range of the data: don't guess beyond the data - shrinkage or population sensitivity - sample size - cross-validation: build the formula and check it with another sample to make sure we get the same formula

Two-tailed test

Ho: M1 - M2 = 0 and H1: M1 - M2 ≠ 0 - testing whether the variability lets us say the difference is not 0

Holm vs. Bonferroni corrections for type 1 errors

Holm: an observed p is not judged against a single fixed cutoff like 0.05 - do the set of statistical tests, find the observed p values, and rank-order them - then work through them with a progressive (step-down) Bonferroni correction
Bonferroni: take alpha = 0.05 and divide by the number of tests, X (say it's 6) - a really strict alpha level; gives alpha ≈ 0.008
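A sketch comparing the two corrections on made-up p values (assuming 6 tests at alpha = .05), using statsmodels:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical observed p values from 6 separate tests
pvals = [0.004, 0.009, 0.012, 0.020, 0.041, 0.300]

# Bonferroni: one strict cutoff, alpha / 6 ~= 0.0083
reject_b, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
# Holm: rank-ordered, progressively less strict (step-down) correction
reject_h, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")

print("Bonferroni rejects:", reject_b)
print("Holm rejects:      ", reject_h)
```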

Reporting Stats

How was the power analysis done?
Always report complications (screen your data) - are all errors removed? do any outliers, skews, or deviations from normality challenge the inferences of your statistical test?
Select appropriate and minimal statistical analyses - only the DVs that really matter
Report observed p values or confidence intervals (type 1 error)
Report magnitudes of the effects (d, R^2, etc.)
Control for multiple comparisons of DVs
Report variability using SD, not SE
Report data and probabilities (0-1) to an appropriate level of precision

Reliability statistics - relative

Interclass (Pearson r) correlation should be avoided (not used)
- bivariate, not univariate
- limited to 2 scores (stability - single test-retest)
- cannot examine sources/elements of reliability
Intraclass correlation (R or ICC): ANOVA-based model that can test many trials/times
- ranges from 0 to 1 (can't have a negative association)
- can be expanded to multiple sources of variance (generalizability study)

4 kinds of measurement validity

Logical (face) validity: did students feel the questions asked on a test were appropriate or out of left field?
Content validity: testing people over certain concepts, skills, or attitudes in a course; can you show that the items were covered? - logical and content validity are related
Criterion-referenced validity: compare our measurement to some gold-standard criterion to check its accuracy (minimizing the bias from what we'd expect) - e.g., a precisely measured gas concentration run through a metabolic cart, comparing the cart's readings to the criterion - can be either concurrent or predictive
Construct validity (group-difference method): for latent or hidden variables that aren't immediately measurable or obvious (e.g., anxiety, creativity, mental toughness) - tested by finding groups that we know, through qualitative judgment, are definitely different (e.g., novices who are very anxious about an activity vs. highly successful experts) - if the instrument can discriminate between the groups, it has construct validity

Multivariate Comparisons

Only makes sense when more than one variable matters
univariate: one DV or IV
multivariate: combining more than one independent and dependent variable
discriminant analysis: can a linear combination of dependent variables identify group membership on one X?
MANOVA: can a combination of several dependent variables identify group membership across several levels of Xs? - hard to interpret
MANCOVA

ANOVA

R.A. Fisher's technique to simultaneously test for equality of two or more means; increased the potential efficiency of research by examining multiple levels of an IV and combinations of IVs - usually more than two
Many kinds of ANOVA all use the same statistic: F ratio = MStreatment / MSerror
- the F ratio is the square of the t ratio - you could do an ANOVA on two groups and still get the same answer as a t test
Like regression, but the general linear model (Y = M + X + e) has nominal Xs
- categorical variables
- done for efficiency and control of error (type 1/type 2)
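A small sketch with fabricated scores showing the F ratio and that, for two groups, F is the square of t:

```python
from scipy import stats

# Hypothetical scores for two groups
a = [23, 25, 27, 24, 26]
b = [30, 29, 31, 28, 32]

t, p_t = stats.ttest_ind(a, b)   # independent t test
f, p_f = stats.f_oneway(a, b)    # one-way ANOVA on the same two groups

print(f"t = {t:.3f}, t^2 = {t**2:.3f}")
print(f"F = {f:.3f}  (p values match: {p_t:.4f} vs {p_f:.4f})")
```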

Manual calculation vs. computer software

SAS, SPSS (PSPP), R, JMP, Stata, Minitab, Systat, Excel... - these help avoid the "deadly sins": didn't randomize, accepted the alternative, extrapolated

Common post hoc tests

Scheffé, Tukey, Newman-Keuls, Duncan, t tests - these are advanced planned comparisons (not omnibus ANOVAs) that can be performed
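A sketch of one common post hoc test (Tukey's HSD) on made-up data for three groups, using statsmodels; it would normally follow a significant omnibus F:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical: 3 groups of 5 scores each
scores = np.array([10, 11, 9, 12, 10,    # group A
                   14, 15, 13, 16, 14,   # group B
                   14, 16, 15, 17, 15])  # group C
groups = ["A"]*5 + ["B"]*5 + ["C"]*5

# Pairwise comparisons with family-wise error controlled at .05
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result)   # table of mean differences, adjusted p values, reject yes/no
```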

Two common deviations from normal (Normal curve/ bell shape curve)

Skewness - 3rd moment - most important deviation from normality - can be positive or negative
Kurtosis - 4th moment - less damaging to statistical tests

Size and meaning of effects

Statistically significant effects must then be interpreted in relation to size, meaning, or pragmatic value (discussion section)
- talk about the mean difference (maybe it wasn't 0 compared to previous research), or calculate effect sizes (how big is the difference or association) relative to previous descriptive research
- Cohen effect size: d or ES = (M1 - M2)/SDc, in absolute terms - 0.2 small, 0.5 medium, 0.8 large
- relative effect size (variance accounted for): expressed in correlation-coefficient units and %; useful for comparing "apples to oranges" and interpreting those probabilities
- if the relative effect size has an R^2 of 0.6, you know that 60% of the improvement is related to the variable of interest (e.g., the training program dominates people's strength-improvement or weight-loss response)
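A minimal sketch of the absolute effect size (Cohen's d) with invented group scores, assuming the control-group SD as the denominator as in the card:

```python
import numpy as np

# Hypothetical strength scores (kg) for treatment and control groups
treatment = np.array([105, 110, 98, 112, 107, 103])
control   = np.array([ 95,  99, 92, 101,  97,  96])

sd_control = control.std(ddof=1)                   # SD of the control group
d = (treatment.mean() - control.mean()) / sd_control

print(f"d = {d:.2f}")   # interpret roughly: 0.2 small, 0.5 medium, 0.8 large
```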

Significance tests (t, F, and X^2)

True variance / error variance - a significant t ratio means that true variance significantly exceeds error variance

Correlation (r^2, R^2, % variance accounted for)

True variance / total variance

Comparisons

Usually comparing means - the dependent variable is the indicator of the effect of the independent variable - hope to find whether they are different

When an experimenter states that the level of significance is the .05 level, he or she is setting the probability of committing which type of error? a. Type I b. Type II c. Power d. sampling

a. Type I

Which of the following is a method to strengthen the internal validity of a study? a. blinding b. history c. expectancy d. selection bias

a. blinding

Comparing test items with the course objectives (course topics) checks which type of validity? a. content b. predictive c. concurrent d. construct

a. content

A physical education teacher develops a skill test in volleyball. After administering the test to 50 students, she asks the volleyball coach to rate the students on volleyball skills. She then correlates the students' test scores with the coach's rating. This is an example of what type of validity? a. criterion-referenced b. logical c. content d. face

a. criterion-referenced

A researcher wishes to determine whether a treatment group made a significant improvement from the pretest (M = 25) to the posttest (M = 30). The correct statistical test to use to compare the two means is the a. dependent t test b. Spearman r c. Pearson r d. independent t test

a. dependent t test

Developmental research

changes in motor performance in populations over long time frames - years to a lifespan; most things we measure decline with age (best in our 20s)
- longitudinal
- cross-sectional
- retrospective (most commonly seen; data someone has already been collecting but hasn't used)
Longitudinal: following people and measuring as time goes by; the "gold standard"
- usually lasts only 10-20 years; hard to track people longer than that
- e.g., following a child for 15 years is best

Interpreting ANOVA

check data distributions for assumptions
compare F_obs to F_critical for the numerator and denominator df
size/magnitude/meaningfulness

interpreting t

check sample data for test assumptions
compare the observed t ratio to the critical t value for the degrees of freedom (df) in the data
- independent: df = n1 + n2 - 2
- dependent: df = n - 1, where n is the number of pairs
the critical t ratio depends on whether your hypothesis is directional (one-tailed) or not (two-tailed)
size/magnitude/meaningfulness
- d = 0.07/0.14 = 0.5
- ratio of estimated true variance / total variance = 5%
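A sketch of the "compare observed t to critical t" step with hypothetical numbers, using scipy to look up the critical value:

```python
from scipy import stats

# Hypothetical: independent t test, n1 = n2 = 12
t_obs = 2.31
df = 12 + 12 - 2                      # independent: df = n1 + n2 - 2

alpha = 0.05
t_crit_two_tailed = stats.t.ppf(1 - alpha / 2, df)   # non-directional H1
t_crit_one_tailed = stats.t.ppf(1 - alpha, df)       # directional H1

print(f"t_obs = {t_obs}, two-tailed critical = {t_crit_two_tailed:.3f}, "
      f"one-tailed critical = {t_crit_one_tailed:.3f}")
# significant if |t_obs| exceeds the relevant critical value
```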

Questionnaire steps

clear objectives
defined/delimited sample - balance cost and error
specific questions (objective - analysis - scale/measurement level) - ranking - category - VAS
appearance/design
pilot study
letter/correspondence
follow-up
data analysis

Writing clear questions of questionnaire

clearly worded items and possible responses - avoid "usually," "most," and other vague words - clear scale anchors - clear 0 or midpoint values
short questions (one idea - no compound concepts)
avoid negatively worded items
avoid technical language and jargon
avoid leading questions (cues to the answer)

Sorting, analyzing, and categorizing data

getting from data to conclusions: collection, data reduction, analysis and interpretation
1st steps may be transcribing interviews and transferring field notes to another medium so they can be analyzed - manual: index cards - computer programs are now typically used (NVivo)
discovery may shape subsequent analysis - might have people check your work - could follow up
During data analysis, the researcher attempts to write out the story from the participants' answers
- analytic narrative: explanation that gives a vivid description of the environment, person, and context
- narrative vignette: stories people tell about something; a response explaining how they reacted, how stress negatively affected their test performance, or what may have caused that stress
- use quotes and examples to support conclusions; it's okay to have a small sample size if we know these people well (incredible depth of knowledge)
As conclusions are generated, support them with your stories and quotations
- try to make a case for trustworthiness in your discussion
- when presenting results and discussion together, use some of the tools to show your data is trustworthy

cluster

group of cases in a specific time and place that might be more than expected - is there something causing that, or is it random variation?

Visual analog scale

have anchors that represent opposite characteristics and then a person is allowed to move a little cursor/pen and make a mark at the point that they most strongly believe or agree with - measured as a %

9 threats to internal validity

history: events that are not part of the treatment - the sequence in which you do things can sometimes affect people's responses (e.g., a break-up caused emotional stress that can mess things up)
maturation: changes due to the passage of time - physical; if it's not part of the main purpose of your study it will tend to mess things up (e.g., as people get older they get better at certain things, or worse; mortality causes rates to go down) - usually in large clinical trials following people for several years
testing: effects of more than one test administration - people will get better, or scores will change, just because of random variation and time changes that may not be related to your treatment - the more times you test people, the more variability in your study
instrumentation: change in calibration of measurement - not well-calibrated or poor-quality instruments - if you don't measure or can't measure very well - common in clinical settings
statistical regression: selection based on extreme scores - anytime you select people from an extreme population by chance, or measure people many times, high scores drift low and low scores drift high (e.g., very tall parents tend to have children shorter than themselves; very short parents, children taller than themselves)
selection biases: nonrandom participant selection - randomization helps to minimize threats from selection bias
experimental mortality: differential loss of participants - losing participants over time
selection-maturation interaction: passage of time influences groups differently - random variation may select certain groups; some might be really persistent people who continue on and respond, while others lose interest and don't participate or comply as well through the rest
expectancy: influence of experimenters on participants - an experimenter bias that can indirectly control your participants; can be controlled somewhat with really good instruments - give instructions so you don't imply that one treatment is better than the other

risk rates

incidence: rate of persons who acquire a condition in a given time (new cases / population at the start of the time interval)
prevalence: rate of all cases (current and new cases / mean population)
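A tiny arithmetic sketch with invented counts to show the difference:

```python
# Hypothetical county over one year (all numbers invented)
population_at_start = 50_000
new_cases_this_year = 250
existing_cases      = 1_000
mean_population     = 49_500

incidence  = new_cases_this_year / population_at_start                 # new cases only
prevalence = (existing_cases + new_cases_this_year) / mean_population  # all cases

print(f"incidence  = {incidence:.4f} ({incidence*1000:.1f} per 1,000)")
print(f"prevalence = {prevalence:.4f} ({prevalence*1000:.1f} per 1,000)")
```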

dependent variable

indicator variable that changes with the effects of IV

If you want to gain/consensus of expert opinion, what kind of survey would you use? i. Questionnaire j. Interview k. Delphi Method l. Normative Survey

k. Delphi Method

Analytical designs

provide preliminary tests of hypotheses about exposure and disease/death
cohort: intact groups (categorized by exposure/risk factor) are compared by disease/death over time, aka follow-up/longitudinal study
case-control: recruit persons with the disease, identify exposure, and compare over time to matched controls - track relative risk between groups with an odds ratio
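A sketch of the odds ratio for a case-control design using a made-up 2x2 table (exposed/unexposed by cases/controls):

```python
# Hypothetical 2x2 table (all counts invented)
#                                       cases  controls
exposed_cases,   exposed_controls   =    40,    60
unexposed_cases, unexposed_controls =    20,    80

odds_exposed   = exposed_cases / exposed_controls       # odds of being a case if exposed
odds_unexposed = unexposed_cases / unexposed_controls   # odds of being a case if unexposed

odds_ratio = odds_exposed / odds_unexposed
print(f"OR = {odds_ratio:.2f}")   # OR > 1 suggests exposure is associated with the disease
```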

Sample size?

literature justification
power calculations given an assumed effect and statistical error rates - G*Power 3.1 (free program written to estimate the sample size needed)
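A sketch of a G*Power-style a priori calculation in Python (statsmodels), assuming an independent-groups design with d = 0.5, alpha = .05, and power = .80:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs (would normally come from the literature / pilot data)
effect_size = 0.5    # Cohen's d
alpha = 0.05         # type 1 error
power = 0.80         # 1 - beta (type 2 error = .20)

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)
print(f"~{n_per_group:.0f} participants per group")   # roughly 64 per group
```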

What type of developmental research produces the highest quality? m. Longitudinal n. Normative o. Cross-Sectional p. Retrospective

m. Longitudinal

A Likert Scale is used for which of the following? i. Survey j. Qualitative k. Quantitative l. Questionnaire m. More than 1 above

m. More than 1 above - survey, quantitative, and questionnaire

Normal (Gaussian) Curve

many dependent variables are normally distributed ("bell shaped") in a whole population - squashes out systematic variations
can use numerous tests of a sample distribution for normality - Shapiro-Wilk (W) - Chi-squared (χ²)
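A small sketch of screening a sample distribution for normality with scipy (hypothetical, simulated scores):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=10, size=40)   # hypothetical DV scores

w, p = stats.shapiro(sample)                     # Shapiro-Wilk test of normality
print(f"W = {w:.3f}, p = {p:.3f}")               # small p -> deviates from normal
print(f"skewness = {stats.skew(sample):.2f}, kurtosis = {stats.kurtosis(sample):.2f}")
```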

Pearson r Correlation

measures the association between two interval-level or higher variables; two or more Y's from a large sample
- used when we have paired data to see if things are related
- must be continuous-level data
- whenever you can measure things, you want to see if there is association in a large set of scores - plot them to see how well they are associated or vary together
- use G*Power to calculate the sample size (ideally want a lot of scores, between 150-200)
the sample statistic is r (ranges between -1 and +1) - negative or positive depends on the relationship and the scoring direction of the X and Y variables
- if r is 0: no association between the two variables - unrelated
- if r is +1.0: all dots fall on a straight line with an upward slope
- if r is -1.0: straight line with a downward slope

Measurement Validity

measuring with accuracy - measuring what you intend to measure by minimizing systematic error (bias); results should be close to the actual true value
we want the numbers assigned to our measurements to be as accurate as possible by minimizing any systematic error

Statistical Power

power is the probability of rejecting the null when the null is false (detecting an effect when it is real/present): 1 - beta (type 2 error)
- usual convention: power ≥ 0.80
- power calculations are done before and after the experiment (G*Power software)
if beta = 0.2 (type 2 error), the power of the statistical test would be 1 - 0.2, or no smaller than 0.8 (4 out of 5)

3 types of experimental designs

pre-experimental
true-experimental
quasi-experimental (very common)
design notation
- R: random assignment to groups (minimizing threats to internal/external validity)
- O: observation or test (# of times the person is tested)
- T: treatment (IV) is applied (medicine, pill, rehab, exercise)
- subscripts show the sequence of T (T1, T2, T3...)

Statistics are somewhat objective theory and mathematical techniques that tell us

probability of possible effects (X or IV): "statistical significance," or the probability that the effect is not likely 0
once effects are established as not likely 0: what is the strength/meaningfulness/size of the effect?

Type 2 error (beta)

probability you are willing to accept of falsely accepting the null when it was FALSE (false -)
- falsely concluding no effect when the effect was real (not 0)
- usual convention: .20
alpha and beta are INVERSELY related
- if alpha is smaller, beta is bigger
- if alpha is bigger, beta is smaller

Type 1 error (alpha)

probability you are willing to accept of falsely rejecting the null when it was TRUE (false +)
- false rejection of the null hypothesis (rejected, but it was true); should have accepted the null
- what risk am I willing to take of talking about stuff as being real when it might not be real?
false + convention: Fisher's 1-in-20 comment --> p < 0.05 or lower
- in general, expect 1 in 20 studies testing one DV and finding a significant effect to be a type 1 error

Providing evidence of trustworthiness

prolonged engagement with the participants and setting
audit trail of changes during the study - keeping track of how you analyze the data - plan how you collect themes and see how they come together
providing a thick description of setting and context - narrative and things that help document that you were engaged
triangulation of sources to support conclusions - try to get multiple responses that support a particular inference we make - confirmation - have at least three independent sources or pieces of data
negative case checking: is the phenomenon as pervasive as thought? - if you look for the opposite of your finding and don't find compelling evidence, your conclusion is probably right
member checking: do participants have information to add? do they agree with the conclusions?
peer debriefing: colleagues challenge results and the researcher provides support - a team of researchers triangulating and checking each other's work and evaluations
clarification of potential researcher bias: anything that may have been a bias - important since the researcher is the instrument; important to come clean about your perspective and the techniques used to minimize bias - often discussed in the method section of a thesis or research report

Controlling threats to external validity

randomization
placebos/nocebos: psychological treatment that gives the person the sense that they are getting a treatment - placebos: + expectation (getting better) - nocebos: - expectation (getting worse)
blinding of participants and researchers - a double-blind study is a good way to control for internal validity
pretest elimination to reduce reactive/interactive effects - teaching/practicing a test beforehand; the pretest happens after the person figures out how to do the test
instruments: quality, calibration, maintenance - supports the MAXICON principle
reducing experimental mortality - losing participants
selecting from larger populations - participants: large sample size - treatments: rigorous enough to test the mechanisms or theory you're testing - situations: controlling all situations possible - if all are consistent you get a stronger basis for inference
ecological validity: refers to the validity of the research in the whole real-world environment - often it's not the same as the laboratory/field-based research you did, because different communities have different flavors (different lab, participants, researchers)

True-experimental designs

randomized groups: randomize and use controls to limit the effects of past history, maturation, and non-equivalence of the groups - independent t-test between the 2 groups - a difference in the DV could come from the treatment
pretest-posttest randomized groups: to see if the groups start out at the same level - analyzed 3 ways: RMANOVA, ANCOVA, or a dependent t-test on change scores
pretest-posttest randomized groups with a repeated-measures factor: ANOVA
Solomon four-group: examines the treatment and the interaction of testing and treatment - does the pretest increase the sensitivity of the participants to the treatment? - RMANOVA - best evidence that the treatment (IV) likely caused a certain response

Threats to external validity

reactive or interactive effects of testing: a pretest may make participants sensitive to the treatment
interaction of selection biases and treatment: the treatment may work only on participants selected for specific characteristics - works on some people and not others
reactive effects of experimental arrangements: setting constraints may influence generalizability (Hawthorne effect) - a change in a person's response because they are part of the experiment/treatment
multiple-treatment interference: one treatment may influence the next treatment

Statistical validity

relates to research design, statistics, and inference logic that researchers use

Reliability statistics - absolute

same units of measurement or % (i.e., what typical values are)
CV (coefficient of variation): SD/M - a measure of variability - tells you the size of the standard deviation as a % of the mean --> about what % the scores differ from each other
Standard Error of Measurement (SEM) = SD (sample variability of the group) * sqrt(1 - R) - gives you a number in the original units - tells you how far apart scores have to be to differ from a measurement perspective
Objectivity (intertester/observer agreement, IOA) - e.g., ratings by people in different categories - just use probability (how many times different raters give exactly the same score) - agreements / (agreements + disagreements, i.e., the total)
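A minimal sketch of the three absolute reliability statistics with invented test scores and ratings (the ICC value is assumed, as if it came from a separate relative-reliability analysis):

```python
import numpy as np

# Hypothetical test scores for one group
scores = np.array([52, 48, 55, 50, 47, 53], dtype=float)
icc = 0.90                      # assumed relative reliability (ICC) from elsewhere

cv  = scores.std(ddof=1) / scores.mean()          # coefficient of variation
sem = scores.std(ddof=1) * np.sqrt(1 - icc)       # standard error of measurement

# Inter-observer agreement: two raters categorizing 10 trials (hypothetical)
rater1 = ["hit", "miss", "hit", "hit",  "miss", "hit", "hit", "miss", "hit", "hit"]
rater2 = ["hit", "miss", "hit", "miss", "miss", "hit", "hit", "miss", "hit", "hit"]
agreements = sum(a == b for a, b in zip(rater1, rater2))
ioa = agreements / len(rater1)                    # agreements / (agreements + disagreements)

print(f"CV = {cv:.1%}, SEM = {sem:.2f} units, IOA = {ioa:.2f}")
```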

Good experimental research requires

selection of a good theoretical framework: there has to be some theory or logic of action that you are testing with your experimental design
application of an appropriate experimental design: some quasi-experimental designs don't allow this because they don't have a control group
use of the correct statistical model and analysis: to make an inference and get a plausible, defensible answer with a low chance of being wrong
proper selection and control of independent variables: you can test the independent variable and, if there are confounders, control those variables and keep them out
appropriate selection and measurement of dependent variables: always want a limited number of DVs
correct interpretation of results: knowing your data and looking at it - is the mean or the median the better measure of central tendency? which statistical variable is changing?

Delphi Method

series of questionnaires given in rounds to a group of experts, hopefully coming to a consensus of opinion/perspective - a serial survey of experts on a panel, conducted over several rounds - the initial round uses open-ended questions - a content-analysis round - followed by additional rounds of guided questions to confirm or resolve trends/constructs

ANCOVA

test that adjusts scores for variables that you cannot control (e.g., pre-test levels of intact groups in training or education, or reaction time in training to improve sprinting) - can account for systematic differences by including the covariate

Reliability

the consistency of measurements/scores when you measure a person under exactly the same conditions; consistency is also a desired measurement property
requires large samples of participants to check the instruments - very much like the correlations/associations you check when establishing the validity and reliability of instruments
sources of inconsistency/variability
- participant: e.g., resting HR can be at different values; it isn't always exactly the same
- testing/procedures: how we take blood pressure and instruct the methods
- scoring/instruments: an instrument has to be calibrated and used just so
- calculation/data analysis: combining lots of variables together

First step in all data analysis is to look at

the data (check for errors and meeting assumptions) - often done by looking at the distribution of the scores through a histogram

Prevalence of disease differs from incidence of disease that prevalence refers to?

the rate of all cases (current and new cases/ mean population)

Application

translational research or ecological validity of knowledge to practice (real world effectiveness) - apply knowledge to reduce the incidence of disease and disability and death in various populations

Pre-experimental designs

treat people and observe their responses - have nothing to compare to
one-shot study: can only qualitatively describe the observed results relative to any previous study - one group, one treatment, one observation
one-group pretest-posttest: observe them to get some baseline data, then treat them; a 2nd observation tells you the change - might do a dependent t-test or RMANOVA
static group comparison: 2 groups; one group receives a treatment and you see what their performance is, and another group does not receive it (control group) - independent t-test

t test compares

two groups - test of the null hypothesis which states there is no difference between the sample mean and population mean

Analysis of Variance (ANOVA)

two or more groups

The yes/no (accept/reject) of the null hypotheses (Ho of association or comparison statistical test) has

two ways to be correct and two errors - a truth table; we estimate these and rarely know the true values (except in Bayesian stats)

Box plots

typically built from the same distribution information as the histogram - may or may not show a mean value

