Research Methods


Given a particular study, be able to describe the conceptual difference between a one-tailed versus a two-tailed statistical test, and be able to explain which one would likely be more appropriate and why

- NEED TO MENTION IN CAPSTONE PROJECT: you should typically run a two-tailed test.
- Non-directional (two-tailed): tests for a difference or relationship in either direction. If the researcher's questions or hypotheses are nondirectional, a two-tailed test should be applied, e.g., "there will be a difference in scores between group X and group Y" or "scores will be significantly different from the average."
- Directional (one-tailed): tests for a difference in one predicted direction only. If the researcher has made a directional hypothesis, a one-tailed test is applied, e.g., "scores of group X will be higher than those of group Y" or "scores will be below the average."
- Advantage: a one-tailed test is more sensitive to a difference or relationship. Disadvantage: it should only be used if you are truly interested in the result going in that particular direction; otherwise it is cheating to use a one-tailed test. (A code sketch contrasting the two appears below.)
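A minimal sketch of the contrast, not taken from any study discussed here: it assumes SciPy is available and uses made-up group scores; the `alternative` argument of `ttest_ind` selects a two-sided or one-sided test.

```python
# Illustrative sketch with made-up data: two-tailed vs. one-tailed p-values
# for the same samples, using SciPy's independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_x = rng.normal(52, 10, 30)   # hypothetical scores for group X
group_y = rng.normal(48, 10, 30)   # hypothetical scores for group Y

# Non-directional (two-tailed): a difference in either direction
t_two, p_two = stats.ttest_ind(group_x, group_y, alternative="two-sided")

# Directional (one-tailed): group X predicted to score higher than group Y
t_one, p_one = stats.ttest_ind(group_x, group_y, alternative="greater")

print(f"two-tailed p = {p_two:.3f}, one-tailed p = {p_one:.3f}")
# For the same t statistic, the one-tailed p is half the two-tailed p,
# which is why the one-tailed test is more sensitive and should only be
# used when the direction was predicted in advance.
```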

Explain the difference between correlation and multiple regression

Correlation:
- A correlation coefficient describes the relationship between one independent variable (also called a predictor variable) and one dependent variable (also called an outcome variable).
- Correlations only tell us that two variables are related to each other; they cannot tell us whether one causes the other.
Multiple regression:
- Describes the relationship between more than one independent variable and one dependent variable, all in one equation.
- Rather than attempting to find the optimal combination of predictors through trial and error, the researcher uses multiple regression, which mathematically determines the order in which the predictor variables should be entered into the prediction equation to maximize prediction.
- Assigns a weight to each predictor variable entered into the equation and indicates the contribution of each newly added variable to the predictive validity of the equation. The weights indicate the unique contribution of each variable in predicting the dependent variable.
- One approach is to scale (standardize) the weights so they fall between -1 and +1, which makes it easy to see which predictor carries more weight than the others.
- The success and meaning of a multiple regression analysis depend on several factors the researcher must consider: care in selecting the initial variables, the reliability and validity with which the variables are measured, the size and representativeness of the sample, the reliability and validity of the criterion measure, and the practicality of gathering all of the predictor data appearing in the equation. (A sketch contrasting correlation and multiple regression appears below.)
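A minimal sketch with invented variable names and simulated data, assuming only NumPy: a correlation relates one predictor to the outcome, while multiple regression fits weights for several predictors in one equation.

```python
# Illustrative sketch with simulated data (variable names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 100
pta = rng.normal(40, 10, n)                           # hypothetical predictor 1
age = rng.normal(65, 8, n)                            # hypothetical predictor 2
outcome = 0.8 * pta + 0.1 * age + rng.normal(0, 5, n)

# Correlation: ONE predictor related to ONE outcome
r = np.corrcoef(pta, outcome)[0, 1]

# Multiple regression: BOTH predictors weighted in one equation
X = np.column_stack([np.ones(n), pta, age])           # intercept + predictors
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)   # least-squares weights
print(f"r(pta, outcome) = {r:.2f}")
print(f"weights: intercept = {coefs[0]:.2f}, pta = {coefs[1]:.2f}, age = {coefs[2]:.2f}")
```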

Looking at results of a study, be able to determine/estimate simple effect sizes and standardized effect sizes (if standardized effect sizes are given)

- Effect size estimates are scale-free, standardized values (similar to a z score) that indicate the extent to which the value of a dependent variable is explained by an independent variable.
- Effect sizes are not interpreted as being statistically significant or insignificant. Instead, they provide a means of interpreting a statistical result that is independent of statistical significance.
- Effect size estimates provide an independent index of the plausibility of the null hypothesis: if the experimental data are completely consistent with the null hypothesis, the effect size is zero; if the data are inconsistent with the null hypothesis, it differs from zero.
- Studies often will not give a standardized effect size; if one is not reported, the authors more than likely are not reporting an effect size at all.
- Effect size = the magnitude of the difference (between groups or between conditions) in the dependent variable. In other words, how large of a difference (or effect) needs to be seen in the dependent variable for that difference to be meaningful?
- This is DIFFERENT from statistical significance: statistical significance does not indicate anything about the magnitude of the finding, nor does it mean a result is meaningful or clinically significant. You may have to use your own judgment if the study does not report the size.
- Example: say you test speech recognition in a large sample of males and females and find mean scores of 96% for males and 98% for females. It is theoretically possible that this result could be statistically significant; if so, it would mean that females really do have slightly higher speech recognition scores than males. BUT would you consider this difference between 98% and 96% to be meaningful? Is it large enough to really matter? (A sketch of this point appears below.)
- A simple effect size must be judged with your own expertise; if the outcome measure is not familiar to you, it may not be obvious whether the effect is clinically meaningful.
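A small sketch of the speech-recognition example above, with simulated scores (the specific spreads and sample sizes are assumptions): with a large enough sample, a 2-point difference can come out statistically significant even though its clinical meaning is still open to question.

```python
# Illustrative sketch: statistical significance vs. simple effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
females = rng.normal(98, 3, 500)   # hypothetical speech-recognition scores (%)
males = rng.normal(96, 3, 500)

t, p = stats.ttest_ind(females, males)
simple_effect = females.mean() - males.mean()   # simple effect size: raw difference
print(f"p = {p:.4f}, simple effect size = {simple_effect:.1f} percentage points")
# p may be far below .05, yet the clinical question remains: does a
# roughly 2-point difference in speech recognition actually matter?
```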

Know how to describe an interaction, or lack thereof, between independent variables.

- If there is more than one IV being analyzed simultaneously, the study is parametric. An ANOVA in a parametric study allows the researcher to simultaneously evaluate the effects of two or more variables.
- In that case, ANOVA can be used to determine whether there are main effects of each independent variable and/or interactions between the independent variables.
- Did Guthrie and Mackersie (2009) find a significant interaction? What does this mean? Yes, between presentation-level method and hearing loss (focus on the p value): the effect of presentation level depended on the hearing loss of the participant.
- The more independent variables examined, the larger the sample size needed. Most studies do not examine interactions of more than three variables, because the interpretation becomes very difficult and the required sample size may be hard to obtain. (A two-way ANOVA sketch appears below.)
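A sketch of a two-way design with an interaction, using entirely made-up data (this is not Guthrie and Mackersie's analysis); it assumes pandas and statsmodels are installed, and the interaction shows up as the `C(level):C(loss)` row of the ANOVA table.

```python
# Illustrative sketch: two IVs (presentation level, hearing-loss group),
# with scores built so the effect of level depends on hearing loss.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
level = np.repeat(["low", "high"], 40)                  # IV 1
loss = np.tile(np.repeat(["mild", "severe"], 20), 2)    # IV 2
base = np.where(level == "high", 80.0, 70.0)
base = base + np.where((level == "high") & (loss == "severe"), 10.0, 0.0)
score = base + rng.normal(0, 5, 80)                     # DV

df = pd.DataFrame({"score": score, "level": level, "loss": loss})
model = ols("score ~ C(level) * C(loss)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # check the p value of the C(level):C(loss) row
```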

Looking at results of a study, be able to identify the statistically significant results

A low p-value (< .05) means the result is statistically significant: there is a low probability that a Type I error has occurred. A high p-value (> .05) means the result is not statistically significant.

Explain why nonparametric vs. parametric statistics may be used.

Nonparametric procedures (e.g., the Spearman rank-order correlation coefficient):
- Used when assumptions about the populations cannot be met; called distribution-free statistics because they do not rest on assumptions about the distributions of the populations.
- Appropriate for nominal- or ordinal-level data, or for ratio or interval data that can be converted into a rank order.
- Examples: Wilcoxon Matched-Pairs Signed-Ranks Test (within groups); Mann-Whitney U Test (between groups), as in Caporali et al. (2013). Why would they have used this nonparametric test (as opposed to a parametric test), and what was their purpose? The overall purpose was to find the difference between sites; they needed a U test because the data were nonparametric, but it was not used for the main purpose: the data can be pooled (collapsed together) if the data from the sites are similar. (A sketch contrasting the two kinds of test appears below.)
Parametric procedures:
- Based on certain assumptions about the population from which the sample data are obtained: (1) the population parameter should be normally distributed; (2) the level of measurement of the parameter in question should be interval or ratio; (3) when there are two or more distributions of data to be analyzed (e.g., two groups of subjects tested under two different conditions), the variances of the data in the different distributions should be about the same; (4) the sample should be large (about 30 subjects to be considered sufficiently large).
- Example: the Pearson product-moment correlation coefficient. Plotting one variable against another shows the correlation, or relationship, between them. Negative: as pure-tone results increase, word recognition scores decrease. Positive: as pure-tone results increase, word recognition scores also increase.
- Correlations can have different strengths. The absolute value of the correlation indicates the strength: .2 is just as strong as -.2. There are no absolute cutoffs for what makes a correlation strong or weak; most of the time correlations below .5 are considered weak, but how a correlation above .5 is judged depends on the precision of the measurement tool.
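A minimal sketch with skewed, simulated data (not the Caporali et al. data): when the normality assumption is doubtful, the nonparametric Mann-Whitney U test can stand in for the parametric independent-samples t-test.

```python
# Illustrative sketch: nonparametric vs. parametric between-groups test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
site_a = rng.exponential(10, 25)   # hypothetical, clearly non-normal scores
site_b = rng.exponential(14, 25)

u, p_u = stats.mannwhitneyu(site_a, site_b, alternative="two-sided")  # rank-based
t, p_t = stats.ttest_ind(site_a, site_b)                              # assumes normality
print(f"Mann-Whitney U p = {p_u:.3f}, t-test p = {p_t:.3f}")
```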

Interpret correlation matrices

Rather than reporting an entire list of correlation coefficients showing relationships between variable pairs in a multivariate study, experimenters use a table of intercorrelations, or correlation matrix. From it you can find the correlation between any two variables: its sign (direction) and its numerical value (strength). (A sketch appears below.)
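A tiny sketch assuming pandas, with invented variables: `DataFrame.corr()` produces exactly this kind of table of intercorrelations.

```python
# Illustrative sketch: building a correlation matrix from simulated variables.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
pta = rng.normal(40, 10, 50)                           # hypothetical variables
word_rec = 100 - 0.8 * pta + rng.normal(0, 5, 50)
age = rng.normal(65, 8, 50)

df = pd.DataFrame({"pta": pta, "word_rec": word_rec, "age": age})
print(df.corr())   # each cell: correlation between its row and column variables
```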

Given the purpose, research question, or research hypothesis of a study, be able to write a null hypothesis

- The research hypothesis is what you would expect to happen in your study; it is similar to the alternative hypothesis and (80% of the time) the opposite of the null hypothesis.
- The null hypothesis states that there is no relationship, or no difference, and is the opposite of the research hypothesis. It is used for the statistics; you do not normally see the null hypothesis stated in the actual study.
- NEED TO MENTION IN CAPSTONE PROJECT: typically you should run a two-tailed test. Non-directional (two-tailed): a difference or relationship in either direction. Directional (one-tailed): a difference in one predicted direction only. Advantage: more sensitive to a difference or relationship. Disadvantage: should only be used if you are truly interested in the result going in that particular direction; otherwise it is cheating to use a one-tailed test. Hearing aid manufacturers use one-tailed tests, which is not fair.
- Example NULL HYPOTHESIS: a playgroup program does not work.

Spearman correlation vs. Pearson correlation

Spearman correlation:
- Uses a rank-order correlation coefficient. Example: for people with severe-to-profound SNHL, the relationship between PTA and rated expressive speech intelligibility.
- Used for ordinal data or when the sample size is less than 25. (If the data are nominal, you cannot do a correlation at all.)
- Nonparametric; denoted with ρ (rho).
Pearson correlation:
- Uses the actual scores in the calculations; can be used when the sample size is 25 or more.
- 0 = no effect, .1 = small effect, .3 = medium effect, and .5 = large effect. It can be squared to determine the percent of variance accounted for (squaring makes the interpretation essentially the same as eta-square).
- Parametric; denoted with r. (A sketch computing both appears below.)
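A short sketch with simulated PTA values and intelligibility ratings (purely hypothetical numbers), assuming SciPy: both functions return a coefficient and its p value.

```python
# Illustrative sketch: Pearson (actual scores) vs. Spearman (rank order).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pta = rng.normal(90, 15, 20)                                   # hypothetical PTA (dB HL)
intelligibility = 7 - 0.04 * pta + rng.normal(0, 0.5, 20)      # hypothetical ratings

r, p_r = stats.pearsonr(pta, intelligibility)        # parametric, denoted r
rho, p_rho = stats.spearmanr(pta, intelligibility)   # nonparametric, denoted rho
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```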

Explain why a post-hoc analysis may be done.

- Specific tests may include Tukey, Duncan, Newman-Keuls, Scheffé, Bonferroni, and other procedures. These are referred to as pairwise comparisons or post-hoc tests.
- You need to run an ANOVA first before you can do post-hoc comparisons; if the ANOVA is significant, then the post-hoc testing can be done.
- In the text, where do Parsa et al. mention post-hoc testing? (Below Figure 5.) The post-hoc analysis allows them to determine which conditions are significantly different from the others (Y = yes, N = no). (A sketch of the ANOVA-then-post-hoc sequence appears below.)
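A minimal sketch with made-up scores from three conditions (this is not the Parsa et al. analysis): run the one-way ANOVA first, and only if it is significant follow up with pairwise post-hoc comparisons, here Tukey's HSD (`scipy.stats.tukey_hsd`, available in SciPy 1.8 and later).

```python
# Illustrative sketch: ANOVA first, then post-hoc pairwise comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
cond_a = rng.normal(70, 8, 20)   # hypothetical scores in three conditions
cond_b = rng.normal(75, 8, 20)
cond_c = rng.normal(85, 8, 20)

f, p = stats.f_oneway(cond_a, cond_b, cond_c)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
if p < 0.05:                                           # post-hoc only if ANOVA is significant
    print(stats.tukey_hsd(cond_a, cond_b, cond_c))     # which pairs of conditions differ?
```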

Determine whether authors' conclusions (typically in the discussion) were fairly drawn from the results of the study, and explain specifically.

Statistical significance of correlation coefficients

Statistical analysis is concerned with making decisions about the existence versus the nonexistence of differences between groups or relationships among variables; this is done by looking at the plausibility of the null hypothesis. Whether a correlation coefficient (the calculated strength of a relationship between variables) is statistically significant is determined by its p value.

Direction of correlation coefficients

The direction of the relationship is indicated by the sign of the correlation coefficient: a negative (inverse) relationship or a positive relationship. The data can be put into a scatterplot to describe the relationship, which reveals its direction: if scores on one variable increase as the other variable increases, the relationship is POSITIVE; if scores on one variable decrease as the other variable increases, the relationship is NEGATIVE.

Strength of correlation coefficients

The strength of the relationship is indicated by the numerical value of the correlation coefficient, which takes on an absolute value ranging from 0.00 (no relationship) to +1.00 (a perfect positive relationship) or -1.00 (a perfect negative relationship). On a scatterplot, the density with which the data points are clustered reveals the strength of the relationship: more tightly clustered = stronger; more dispersed = weaker.

Given a particular study, be able to state what a Type I error would be and what a Type II error would be

Example hypothesis: The technology label will have a significant effect on participants' hearing aid preferences.
- Type I: in reality the label would not have an effect on participants' hearing aid preferences, but the study's statistics conclude that it does.
- Type II: in reality the label would have an effect on participants' hearing aid preferences, but the study's statistics conclude that it does not.
Example hypothesis: There will be no significant difference in the safety of care provided by audiologists compared to ENTs.
- Type I: in reality there is no significant difference in the safety of care, but the study concludes that there is a significant difference.
- Type II: in reality there is a significant difference in the safety of care, but the study concludes that there is not.
Example hypothesis: Presentation level will significantly affect word recognition scores.
- Type I: in reality presentation level does not affect word recognition scores, but the study concludes that it does.
- Type II: in reality presentation level does affect word recognition scores, but the study concludes that it does not.

From a particular study or correlation result, identify/determine the index of determination, and be able to describe conceptually what it means**

- To evaluate the practical meaning of a correlation coefficient of a given magnitude, an index of determination is used: the square of the correlation coefficient.
- It is an indication of the amount of shared variance between the variables, i.e., how much of one variable is accounted for by the other variable. Stated differently, how much of the dependent variable is accounted for by the independent variable.
- For example, a correlation of .5 (r = .5) means that 25% (.5 squared = .25) of the dependent variable is accounted for by the independent variable.
- Another example: if the correlation between two variables is +0.60, this indicates that 36% of the two domains actually overlap, leaving a full 64% of the domain variability unaccounted for. (A one-line sketch of the arithmetic appears below.)
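The arithmetic as a tiny sketch, using the values from the example above:

```python
# Index of determination: the squared correlation coefficient.
r = 0.60
index_of_determination = r ** 2
print(f"r = {r} -> r^2 = {index_of_determination:.2f}")
# 0.36 -> 36% shared variance; the remaining 64% is unaccounted for.
```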

When reading a study, know how to find/identify the chance that a Type I or Type II error was made

When a researcher makes a decision about a null hypothesis, one of four things can happen (the hypothesis can be true or false, and the researcher can reject or accept it):
- Correct decision: accept the null hypothesis when the null hypothesis is true.
- Type I error: reject the null hypothesis when the null hypothesis is true.
- Type II error: accept the null hypothesis when the null hypothesis is false.
- Correct decision: reject the null hypothesis when the null hypothesis is false.
Type I error: rejecting a null hypothesis that is actually true in the real population.
- A difference does not exist in the real population, but the statistical analysis in the study concludes that a difference does exist.
- The p-value represents the probability of a Type I error: a p value of .05 or less means there is less than a 5% chance that a Type I error has occurred.
- Example: concluding that a medication works when it actually does not work (or has serious side effects).
- Usually treated as more serious overall than a Type II error, which is why its allowed probability is held to .05 or less.
Type II error: accepting a null hypothesis that is actually false in the real population.
- A difference does exist in the real population, but the statistical analysis in the study concludes that a difference does not exist.
- Could be the more serious error in some situations, for example with a medical condition. (A simulation sketch of the Type I error rate appears below.)
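A small simulation sketch (assumed distributions and sample sizes): when the null hypothesis is true and the threshold is .05, roughly 5% of simulated studies still reject the null, which is the Type I error rate that the p-value threshold controls.

```python
# Illustrative sketch: simulating studies in which the null hypothesis is TRUE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_studies = 2000
false_positives = 0
for _ in range(n_studies):
    a = rng.normal(50, 10, 30)   # two samples drawn from the SAME population,
    b = rng.normal(50, 10, 30)   # so any "significant" difference is a Type I error
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(f"Type I error rate ~ {false_positives / n_studies:.3f}")   # close to .05
```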

Be able to explain whether you believe a particular simple effect size is clinically significant/meaningful and why or why not

When you read a study and the author reports a statistically significant result, always ask if the result is also meaningful, or clinically significant, and look for this information in the article. There are two types of effect size:
1. Simple effect size: the author reports the absolute difference between groups. The 2% difference in speech recognition in the earlier example is a simple effect size.
2. Standardized effect size: used if the simple effect size is difficult to interpret, for instance if an unfamiliar or new outcome measure is used (if a study uses some kind of test you are not familiar with, it is hard to judge whether the result is meaningful). Also used to compare effect sizes across studies, especially in an evidence-based review or meta-analysis.
Examples of standardized effect sizes (a Cohen's d sketch appears below):
- Cohen's d tells you by how many standard deviations the two groups or two conditions differ; .2 may be considered a small effect, .5 a medium effect, and .8 a large effect.
- Eta-square tells you how much variance in the dependent variable is accounted for by the independent variable; it is essentially the same thing as r².
- Pearson's r correlation coefficient can also be used: 0 = no effect, .1 = small effect, .3 = medium effect, and .5 = large effect. It can be squared to determine the percent of variance accounted for (squaring makes the interpretation essentially the same as eta-square).
- If an r value pops up out of nowhere, the authors are reporting a standardized effect size.
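A sketch of Cohen's d with a hypothetical helper function and simulated scores (none of this comes from the studies mentioned above): the group difference is divided by the pooled standard deviation, so the result is expressed in standard-deviation units.

```python
# Illustrative sketch: Cohen's d as a standardized effect size.
import numpy as np

def cohens_d(group1, group2):
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group1, ddof=1) +
                         (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

rng = np.random.default_rng(9)
treated = rng.normal(82, 10, 40)    # made-up outcome scores
control = rng.normal(75, 10, 40)
print(f"Cohen's d = {cohens_d(treated, control):.2f}")   # ~.2 small, ~.5 medium, ~.8 large
```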

Looking at a particular study, be able to identify the statistic(s) that were used to analyze the results, and be able to describe/explain why that statistic was an appropriate choice.
