Research & Methods Exam 1


Statistical Power

(1 − β): the probability of correctly rejecting the null when the IV truly had an effect
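
A minimal sketch of computing power for an independent-samples t-test, assuming statsmodels is available; the effect size, group size, and alpha below are illustrative, not from the course:

```python
# Power of an independent-samples t-test (illustrative numbers).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power = 1 - beta for a medium effect (d = 0.5), n = 30 per group, alpha = .05
power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power (1 - beta) = {power:.2f}")  # roughly .48 -- underpowered

# Solve for the per-group n needed to reach power = .80
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for .80 power = {n_needed:.0f}")  # roughly 64
```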

Strategies to reduce experimenter expectancy effects

1. Increase the number of experimenters (decreases learning about the hypothesis; randomizes expectancies; increases generality of results)
2. Monitor the behavior of experimenters (facilitates standardization and identification of expectancy-related behavior)
3. Analyze experiments for order effects
4. Maintain "blind" contact (preferably "double-blind")
5. Minimize experimenter-subject contact (e.g., use of computers)
6. Employ an expectancy control group (manipulate expectancy as an IV)

8-step Estimation Approach

1. Frame research questions in estimation format
2. Identify the effect size (ES) that best answers the question
3. Pre-specify the procedure and data analysis
4. Calculate point estimates and CIs for the ES (see the sketch below)
5. Produce figures with CIs
6. Interpret both the ES and the CI (outcome and precision)
7. Use meta-analytic thinking
8. Report completely (transparency)
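
A small sketch of step 4, computing a point estimate and a 95% CI for a mean difference by hand; the data are made-up numbers, and the pooled-variance t interval is one standard choice, not necessarily the text's:

```python
# Point estimate (mean difference) and 95% CI via a pooled-variance t interval.
import numpy as np
from scipy import stats

treatment = np.array([5.1, 6.3, 4.8, 7.0, 5.9, 6.4])  # illustrative data
control   = np.array([4.2, 5.0, 4.6, 5.3, 4.9, 4.4])

diff = treatment.mean() - control.mean()     # the point estimate of the effect
n1, n2 = len(treatment), len(control)
sp2 = ((n1 - 1) * treatment.var(ddof=1) +
       (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)  # pooled variance
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))        # standard error of the difference
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)  # two-sided 95% critical value
lo, hi = diff - t_crit * se, diff + t_crit * se
print(f"difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```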

Other suggested alternatives or modifications to NHST

1. Use of inferential confidence intervals
2. Three-alternative-outcome NHST procedures (reject the null, fail to reject the null, not enough information to decide)
3. Modeling (does your model account for the data better than competing models?)

Design and Statistical Inference

1. Treatment variance 2. Error variance. In short, *try to maximize the first and minimize the second*: a stronger manipulation of the IV will increase treatment variance; using a within-subjects design is one way to reduce error variance

Latin Square Counterbalancing

2 kinds: 1. Basic version: each condition appears once and only once in a given ordinal position of the sequence 2. Balanced version: each condition appears once and only once in a given ordinal position *and* no two conditions are juxtaposed in the same order more than once (a construction sketch follows below)
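
A sketch of both constructions in Python; the cyclic shift and the 0, n-1, 1, n-2, ... weave are standard textbook constructions, shown as an illustration rather than as the course's own algorithm:

```python
def basic_latin_square(n, row):
    # Basic version: cyclic shift, so each condition appears exactly once
    # in each ordinal position across the n rows.
    return [(row + i) % n for i in range(n)]

def balanced_latin_square(n, row):
    # Balanced version (even n): once per ordinal position AND each ordered
    # pair of adjacent conditions occurs exactly once across the n rows.
    # (Odd n needs this square plus its reversed rows, i.e., 2n orders.)
    seq, low, high = [], 0, n - 1
    for i in range(n):
        if i % 2 == 0:
            seq.append(low); low += 1    # take from the bottom...
        else:
            seq.append(high); high -= 1  # ...then from the top, alternating
    return [(c + row) % n for c in seq]

for r in range(4):
    print(balanced_latin_square(4, r))
# [0, 3, 1, 2] / [1, 0, 2, 3] / [2, 1, 3, 0] / [3, 2, 0, 1]
```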

Abduction (Proctor & Capaldi)

3 Elements:
1. Explaining patterns of data: surveying a number of phenomena, observing a pattern, and generating an explanatory/causal hypothesis; often includes the use of analogy
2. Entertaining multiple hypotheses (comparative theory evaluation): "evaluation of theory is a matter to be decided in the context of other theories, not in isolation"
3. Inference to the best explanation: attempting to reach a general conclusion about which theory best explains the available evidence

John Kruschke (2010)

*Gave a general description of Bayesian analysis*
1. Various candidate beliefs are specified as a space of possibilities
2. A degree of belief in each value is specified; beliefs may be diffuse and uncertain, or concentrated on a narrow range (*priors*: either subjective or objective)
3. Based on the data, we decrease our belief in parameter values that are less consistent with the data *and* increase our belief in values that are more consistent with it
*Bayesian idea*: data indicate how to reallocate beliefs (see the sketch below)
In NHST, *intentions* influence p values; an issue is that CIs depend on experimenter intentions the same way p values do
Posterior odds can be used to estimate the likelihood of replication
Two major ways of doing Bayesian analysis: Bayesian parameter estimation, or model comparison (Bayes factor)
Interpretation *should* depend on prior beliefs (specified and deemed reasonable by the scientific community); it is possible to run the analysis with different plausible priors and see if it makes a difference
Priors can/should *be based on previous research*; the posterior odds from one study can serve as the prior odds for the next
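
A toy illustration of "data reallocate beliefs," using a grid of candidate values for a coin's bias; the grid, prior, and data are made-up, not Kruschke's own example:

```python
# Bayesian reallocation of belief over a grid of parameter values.
import numpy as np

theta = np.linspace(0.01, 0.99, 99)       # space of candidate parameter values
prior = np.ones_like(theta) / len(theta)  # diffuse prior: all values equally credible

heads, tails = 7, 3                       # observed data
likelihood = theta**heads * (1 - theta)**tails
posterior = prior * likelihood            # shrink belief where data are unlikely...
posterior /= posterior.sum()              # ...and renormalize so beliefs sum to 1

print(f"most credible theta = {theta[posterior.argmax()]:.2f}")  # ~0.70
# The posterior from this study can serve as the prior for the next one.
```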

Rosenthal (2002)

*Interpersonal Expectancy Effects* Lessons: there is lots of evidence that people's expectancies can change the behavior of others; a type of self-fulfilling prophecy. Implications: methodological (this line of research is why behavioral scientists take the notion of *"double-blind"* research so seriously); newer research on the impact of non-verbal behavior; understanding the role of interpersonal influence on people's ability to make sense of the world, and how difficult it is to see actual causal influences in real-world situations like classrooms or legal settings

Tversky & Kahneman (1971)

*Law of Small Numbers* = the belief that a sample (regardless of size) randomly drawn from the population will have the same characteristics as the population from which it is drawn. This belief often leads scientists to think that samples have similar characteristics to each other and to the population, and that sampling is a self-correcting process. Resulting beliefs: exaggerated confidence in the validity of conclusions drawn from small samples; unrealistic belief in the likelihood of replicating an obtained finding; overestimating statistical power ("gambling" on small samples); assigning causal explanations to results that arise from sampling variability; expressing undue confidence in early trends. Suggests the proper recourse is to compute statistical power and CIs in addition to significance testing

Types of Analogies

*Local*: from the same area; share both surface and deep structure. *Regional*: from related areas; share deep and less surface structure. *Distant*: from less obviously related areas; share deep but little if any surface structure. Ex. the benzene ring and the snake (Kekulé): similar in *deep* structure (circularity), not in surface structure

Multitrait-multimethod Matrix (MTMM)

*Reliability diagonal (monotrait-monomethod)*; *validity diagonals (monotrait-heteromethod)*; heterotrait-monomethod triangles = correlations among traits that are measured the same way; heterotrait-heteromethod triangles = correlations that differ in both trait and method

Wells (1978)

*System Variables*: EW memory variables that are potentially under the control of the system, i.e., instructions to witnesses, choice of lineup members, lineup administrator, format of the lineup. *Estimator Variables*: EW memory variables that are not under the control of the system, i.e., stress, lighting, point of view, race of the perpetrator (cross-race bias), presence of a weapon

Criteria for Deciding Between Theories

*Fruitfulness*: theory generates new predictions. *Parsimony*: number of assumptions made. *Quantification*: expressible in mathematical terms. *Scope*: diversity of explained phenomena. *Progressiveness*: predictions lead to progress in understanding. *Internal consistency*: free of internal contradictions. *External consistency*: does not violate assumptions of related theories. A theory should also be *falsifiable*

Leshner

1. Avoiding overly simple answers to complex questions 2. Understanding that the complexity has practical implications 3. A comprehensive approach: multiple causes, multiple research areas, multiple treatments

Before Collecting Data

1. Decide on your sample
-Issue of generalizability: concerns the inferences you wish to draw from the results of your study (what is your research question? what *assumptions* are you assessing?)
-Issue of practicality: which populations can you sample? (not a trivial issue)
-Understanding your assumptions helps you think about the limitations of your inferences
2. Develop and test (debug) your procedure
-Aim for coherence, simplicity, psychological involvement, and consistency
-The initial procedure can be flawed, and hindsight bias is a risk, so try it out first with "low-cost" tests (pilot studies); be willing to use other techniques to gather more data
-Instructions: often difficult to tell whether participants understand them; helpful to probe participants' understanding in a pilot study
-Test out the "data," and don't forget procedures for storing data

Three Component Logical Processes [Constructing and Testing Theory]

1. Induction 2. Deduction 3. Abduction

Two General Concerns [with conducting/analyzing data]

1. Null hypothesis significance testing: should we use it? should we modify it? what should we use instead? 2. Replication, research practices, and publication bias

Problems in Applying Criteria of Theories

1. Rules in conflict with one another: a theory may be superior to rivals on some criteria but inferior on others (e.g., broad in scope but less falsifiable; more parsimonious but less fruitful) 2. Shifting standards: theories are often judged primarily on the criteria deemed important by the theorist(s), hence disagreement about the appropriate criteria (e.g., "better" because it solves practical problems vs. theoretical issues, or because it is supported by qualitative vs. quantitative data) 3. Rule emphasis: agreement on the criteria for evaluation, but disagreement on their importance 4. Vagueness of criteria (e.g., simplicity/parsimony: in what respects is one theory simpler than another?)

Criticisms of NHST

1. All (point) null hypotheses can be rejected if the sample size is large enough 2. Rejecting the null does not provide logical or strong support for the alternative 3. Failing to reject the null does not provide logical or strong support for the null 4. NHST is backwards: it evaluates the probability of the data given the hypothesis, rather than the probability of the hypothesis given the data 5. Statistical significance does not imply practical significance

Summary of Problematic Practices

1. data "peeking"-stopping when p is significant, or continuing until it is 2. uncritically viewing negative results of pilot studies as flawed but positive results as sound 3. running many studies and selectively reporting ones with positive results 4. selectively reporting "clean" results 5. including multiple IVs and DVs and only reporting those that worked 6. data analysis focusing on subset of analyses that worked

Ways to Build Knowledge on Research Ideas

1. electronic searching 2. get articles 3. stay up on literature 4. sorting/ organizing 5. reading/ annotating

Pashler and Harris

Three counter-arguments to the replicability crisis (and responses):
1. The low alpha level in NHST limits false positives — but you need to know more than the alpha level
2. Conceptual replications are enough/are better because they provide generalizability — but they differ from the original, so there is more wiggle room in interpretation; conceptual failures to replicate do not lead scientists to suspect the original finding is incorrect; and conceptual replications are more likely to be published
3. Science is self-correcting over time — but most replication attempts are made relatively soon after publication, and scientists don't move on from less valid to more important fields; rather, movement can happen because a field has been successful and new interests have arisen
The problem of replication is not purely academic because it can affect practical applications

ANOVA and MANOVA

ANOVA = single DV; MANOVA = multiple DVs. MANOVA can detect when groups differ on a *system* of variables. The process involves finding a linear composite of the DVs that maximizes the separation between groups (see the sketch below)
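
A minimal sketch of a one-way MANOVA in Python with statsmodels; the column names (dv1, dv2, group) and the simulated data are hypothetical:

```python
# One-way MANOVA on two DVs (simulated data).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 20),
    "dv1": rng.normal([0.0, 0.5, 1.0], 1, size=(20, 3)).T.ravel(),
    "dv2": rng.normal([0.0, 0.3, 0.6], 1, size=(20, 3)).T.ravel(),
})

fit = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc. for the group effect
```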

Proctor & Capaldi

All objective, all relative? We can interpret the same things differently because there is a *human element of bias*. Science is a better way to explain what is going on than qualitative research traditions. Contrasts the practice of science with modern philosophical approaches (logical positivism, falsificationism, Kuhn, post-modernism)

Type I error

Alpha: the probability of rejecting the null when it is true (concluding the IV had an effect when it did not; a *false positive*)

Where you start depends on what you...

Already know.
1. Try to identify major reviews/meta-analyses (e.g., Psychological Bulletin, Psychological Review, Annual Review of Psychology, more specialized journals, books/book chapters). Goal: develop a "forest" view of the topic; identify major theories, researchers, and findings
2. Use these to identify the specific articles that will be most helpful to your specific goal. Empirical articles ensure you are not dependent on the "word" of the authors who reviewed the literature; you get to see actual results. Use the reference section to locate other articles that may be relevant
3. Use multiple sources to "triangulate" the important literature
4. Record notes about important points/connections. Practical reason: avoid the need to reread articles. Even better reason: you are building up your knowledge (comparing/contrasting views, conflicting findings, overlooked variables). "Trying to see the forest"

Blocking

Another technique for reducing error variance; strengthens internal validity. Involves taking an extraneous variable and including it as a factor in your design; a way to control a confound

In Vivo/In Vitro

An approach combining lab and field research, drawing on the strengths and weaknesses of both types of research: what is learned in one venue is tested in the other. It is rare to see this work done by the same researcher

Fellows (2005)

Argued that because lesion studies can provide evidence for the necessity of a brain area's involvement in an aspect of cognition/behavior, you might expect such studies to have relatively greater scientific impact than brain imaging (which is correlational in nature). It is clear that the types of inferences afforded by a technique do not necessarily affect its use or impact: imaging studies increased as a proportion of publications, the impact of imaging was greater than that of lesion studies, and much (but not all) of the variance was explained by the fact that imaging studies were more likely to be published in prestigious high-impact journals. *Techniques have different strengths, weaknesses, and assumptions (important to understand).* Importance of *converging operations* to move the field forward; "within-method (citation) bias"; active efforts to increase cross-method literacy; a shift in what agencies fund and encourage (as far as converging methods)

Farah & Hooke (2013)

Argued that brain imaging is not given undue weight: McCabe & Castel's comparisons are not "informationally equivalent." Furthermore, one published study and two unpublished series of experiments have failed to replicate their effects

Racine (2005)

Argued that popular press coverage of brain imaging research has led to a type of *neuro-realism*, such that the phenomena under study become "uncritically real, objective or effective in the eyes of the public"

Cohen (1994)

Argues NHST does not tell us what we want to know (given the data, what is the likelihood that H0 is true?). Mistaken assumptions about p < .05: that p is the probability that H0 is false (it is not); that 1 − p is the probability of successful replication (it is not); that if one rejects H0, one affirms the theory that led to the test. The nil hypothesis (ES = 0) is almost always false. Suggestions: explore data first, report ES in the form of confidence intervals, improve measurement, use NHST with range null hypotheses (effect size no larger than x), and replicate

Fiedler et al.: Are Type I errors the only concern?

Argues that the debate *overlooks Type II errors*, which they argue are even more frequent. A Type I error can be caught with replication, but that is less likely for Type II. The false-negative rate for small and medium effect sizes ranges from .52 to .82. Strong strategies for reducing Type I error could also inhibit discovery and innovation; theoretical innovations often come from overcoming false negatives, but rarely from overcoming false positives. Their answer: strong inference (Platt), more explicit theoretically focused contrasts between alternative hypotheses, and promoting "good science" instead of preventing "bad science"

Rozin

Argues that psychology undervalues/overlooks research that describes phenomena or makes contributions outside of "normal" experiments, e.g., descriptions of previously unreported interesting phenomena, examinations of the robustness and generality of an effect, flawed experiments that report something important. The idea is that these types of contributions can often lead to new theory/research; warns against ignoring important findings because a study is not "perfect." Psychology should acknowledge there are many ways to advance our understanding, and *contribution* should be the primary criterion for judging research; to do this, the field must change what it rewards

Prior Research

Arises from skepticism (e.g., not liking the operational definition), from unresolved conflicts in the literature, or from extensions of prior research into previously untested areas

Russano (2005)

The article is a good illustration of some of the constraints involved in trying to study a real-world issue in the laboratory. It does a good job of justifying how this is an improvement over previous methods and what is still lacking compared to real-world interrogations; it is a good example of how ethics constrain potential methods; and the issue of the diagnosticity of real-world techniques applies to other areas as well

Theory

At its core, theory attempts to explain. You can use theory plus knowledge of the "real world" to imagine situations: what would the theory predict in a given situation? Getting to know theories at a deep level allows you to see how competing theories could be discriminated

Using Baseline and Standard Control Conditions

Baker & Dunbar (2000): real-world scientists study, used two different types of control groups 1.*baseline control condition*: identical to experimental manipulation, one key feature absent 2.*"Known Standard" control condition*: technique is known to reliably produce expected result Found that results of these conditions are helpful in different ways -baseline controls used to *test hypotheses* -known standards used to *identify potential errors* *Control Conditions* -Unexpected results of baseline control conditions alert you to possibility that your hypotheses are incorrect and encourage you to *reformulate hypotheses* -Results on known standard control conditions helps you to know whether you trust your data

Stanovich

Central importance of *converging operations*: tasks and measures have strengths/weaknesses and unique "method" variance; multiple measures allow for assessment of *patterns* of results that are shared (and not shared); confidence increases to the extent a result is shared across methods. *Connectivity principle*: a new theory must be able to explain new facts *and* account for old ones; suggests skepticism when the principle is violated. Science is about *developing consensus*, not "breakthroughs." Practical implications: no single study is definitive; no need to become despondent over a single disconfirmation or contradictory early studies (against *simple* falsificationist views); the *pattern* across studies is what matters, and meta-analysis can be a powerful tool in this effort

Variable Selection Problem

determining if fewer DVs can be used for an interpretation

True Experiment

Because of random assignment, the covariate should share NO VARIANCE with the group (IV), thus no variance will be removed from the IV. The only effect is to remove variance from the DV, resulting in a larger effect size and a more powerful significance test

Source of Type II errors

Before you accept H0, you need to consider a number of factors (partial list; Whitley box): the construct validity of the IV and DV; was the independent variable strong and interpreted as intended? was the DV sensitive enough to detect differences? are there potential mediators, moderators, or extraneous variables, and were these controlled or measured? was statistical power adequate (sample size)?

Type II error

Beta: the probability of failing to reject the null when the alternative is true (concluding the IV had no effect; a *false negative*). In general, psychologists tend to ignore the potential rate of Type II error. This matters because it is a waste of resources to run a study without a reasonable expectation of obtaining the hypothesized result (if it is in fact present); computing *statistical power* is thus an important part of doing research (see the sketch below)
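
A quick Monte Carlo sketch estimating beta for one illustrative scenario; the true effect size, group size, and alpha are assumptions chosen to show an underpowered design:

```python
# Estimate beta (Type II error rate) by simulation for d = 0.4, n = 25/group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, d = 5000, 25, 0.4
misses = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)        # the alternative really is true here
    p = stats.ttest_ind(a, b).pvalue
    misses += p >= 0.05              # failing to reject = a Type II error
beta = misses / n_sims
print(f"beta ~ {beta:.2f}, power ~ {1 - beta:.2f}")  # power well below .80
```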

Alternatives to ANCOVA

Blocking: enter the CV into the analysis as an IV. Use of meta-analysis (aggregating the results of many studies). Placement of group means on a regression line: study the performance of many different types of groups, construct the regression line, and compare the group of interest to it (this approach may be problematic in many cases). Regressing the DV on the IV for control participants with a wide range of performance (expected to have a wide range of performance on the CV)

Other Technological Modes of Communication

Blogs (outward-facing), Facebook, multi-modal research presentations (video/screencast form), laboratory wikis (inward-facing)

Logical Positivism

Induction is the central path of science: look for an increasing number of specific instances to confirm a proposition (e.g., the more red apples you find, the more confident you can be that apples are red). But induction cannot conclusively demonstrate a claim (you can find green apples), and its inferences are limited to the specific characteristics being observed; they do not necessarily tell you about causal mechanisms (knowing apples are red does not tell you *why*)

How is Persuasion an issue for Applied Research?

It is common for practitioners to want to rely on experience rather than research. A common criticism is that research is conducted in the laboratory rather than under more realistic conditions (basic vs. applied): "not like the real world" and "does not have all the features of the particular real-world situation" will be subjects of debate. The clear implication: applied researchers need to learn about the assumptions and goals of practitioners if their findings are to be applied

Broader Issues

Communication and collaboration. Audience: other scientists, policy makers, general audiences. What is communicated? Raw/analyzed data, articles (peer-reviewed or not), more general information/implications

Bayes Factor

Compares at least two models: H0 must not simply be unlikely, *it must be less likely than H1* to be rejected. Characterization: the ratio of the probability of the data given one model to the probability of the data given a second model. The BF indicates how much the relative credibilities of the models should change (see the formula below)
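
In standard notation (a LaTeX sketch consistent with the characterization above), the Bayes factor is a ratio of the two models' probabilities of the data, and it scales prior odds into posterior odds:

```latex
BF_{10} = \frac{P(D \mid H_1)}{P(D \mid H_0)},
\qquad
\underbrace{\frac{P(H_1 \mid D)}{P(H_0 \mid D)}}_{\text{posterior odds}}
= BF_{10} \times \underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
```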

Threats Related to Internal Validity

Confounds. *Time-related threats*: history, maturation, testing, instrumentation change, statistical regression (hence the importance of control groups in pretest/posttest designs... or being sure you actually need a pretest). *Selection threats*: selection bias (nonrandom assignment, pre-existing groups); mortality (participants drop out of a study in ways that are *SYSTEMATICALLY* unequal across groups)

Consilience

E.O. Wilson: "Literally a *jumping together* of knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork for explanation." Analysis at multiple levels that is connected, e.g., extending social psych theories to other social and biological levels of explanation; Cacioppo is a pioneer in the area of social neuroscience

Carpenter

Criticisms of psychology's "ecosystem" of data collection, analysis, and reporting practices (scientific credibility): high pressure to publish new and counterintuitive findings (p < .05 or "bust"), and difficulty replicating such findings (are many findings false?). Not just psychology's problem. Some initial attempts to address it: psycfiledrawer.org, the Open Science Collaboration, openscienceframework.org

Multiple Univariate Questions

DVs are *conceptually independent*; there is no interest in an underlying variable. The research is exploratory in nature, or some/all DVs have been studied in a univariate context and the analysis is for comparison purposes. Related issues when designing a study: selecting a comparison group, the variable selection problem, the variable ordering problem, and identifying and interpreting underlying constructs (system structure)

Bayesian Idea

Data indicate how to reallocate beliefs

How to Become an Expert: Deliberate Practice & Scardamalia & Bereiter (1991)

Definition: intentional, effortful task practice with feedback. Scardamalia & Bereiter (1991): reinvestment of mental resources (crucial is *what you do with these extra resources*). *Progressive problem solving* leads to expertise: take on increasingly challenging problems and redefine the task in new/more complex ways, learning new skills. Opposed to *problem reduction*, which reduces the difficulty of the problem: you never get better at anything, you just make it easier and easier

Relativism

Denies any privileged means of knowing about world (or creating theory) Truth is *created* rather than discovered

MANOVA

Different from ANOVA because it has multiple dependent variables (tests whether groups differ on several measures, which should be related); the group of dependent variables is called a vector. Appropriate way to follow up: step-down analyses

Kuhn

Interested in how science is actually practiced; relied primarily on a *historical approach* (hypotheses about how science is conducted can be empirically evaluated). Scientific domains change over time: *pre-science*, many schools of thought, none dominant; *normal science*, a single dominant paradigm; *revolutionary science*, anomalous results lead to a new view. If the view takes hold it is a *"paradigm shift"*; however, it may not be an advance

Open Science Collaboration (2015) *Science*

Estimating the reproducibility of psychological science. Findings include: 36% of replications were statistically significant in the same direction; the size of effects in replications (r = .20) was about half that of the original studies (r = .40); 47% of original effects were within the 95% confidence intervals of the replication studies; some differences according to research area; replications were no more likely for highly experienced research teams. Aftermath discussions: one would expect a drop due to *regression to the mean* from sampling and measurement error (Klaus Fiedler), so the observed drop should be compared to what would be expected. An indictment of psychological research, or of NHST?

What can change? Relationship between research variables and potential application

Ex. Eyewitness memory (EW) & Legal System

What makes a good research idea (according to Whitley text)?

The extent to which it expands our knowledge: 1. well-grounded in current knowledge 2. the question can be researched 3. importance (supports/refutes the influence of potential variables and tests between competing theories)

Criticisms of Bayesian View

Focused on priors. Priors refer to the odds of a hypothesis prior to data collection. *Subjective* Bayesians say that priors quantify the researcher's *personal belief* about the hypothesis; much criticism of Bayesian inference focuses on the *subjective quality of prior probabilities*. Counter-argument: even when researchers hold different prior subjective probabilities (beliefs), posterior subjective probabilities will tend to converge with enough repeated observation; others have argued that is not always the case. *Objective* Bayesians specify priors according to certain *predetermined rules* and argue there are reasonable assumptions about prior probabilities; critics say the currently suggested rules do not involve reasonable assumptions. Bayesians are one class of critics of NHST, and Bayesian analysis is an *alternative* to it

Only legitimate use of ANCOVA is...

For reducing the variability of scores in groups that vary randomly. It is possible for there to be group differences on the CV even in groups that were randomly assigned

Weisberg (2008)

Found that including cognitive neuroscience data in explanations of cognitive phenomena led introductory psychology students to increase their ratings of satisfaction for poor scientific explanations, but not for good ones. The materials referred to brain imaging but did not show brain images; "good explanations" and "bad explanations" were presented with or without neuroscience

Factorial Designs

Have multiple IVs

Everyday Experience

Holding interest: everyday experience seen through the lens of your developing knowledge of psychology. Connect this experience to other sources (theory, previous research)

What are some prejudices against the Null?

If it's not significant, it doesn't get published (the *"file drawer" problem*): if the unpublished null results are correct, the published significant findings are Type I errors (a failure to self-correct), and resources are wasted (knowledge is not transmitted). Involves both Type I and Type II errors

What makes a good/valuable problem (according to Webb 1964)

Individual: knowledge, dissatisfaction/healthy skepticism (together: you must know the literature to move past it). Problem: generalizability (advances science broadly; how narrow/broad are the implications?)

Ioannidis

Is science self-correcting? Progress in science is not self-evident and must be evaluated. A key need is the preservation/evaluation of scientific products: protocols/data are lost within years of publication, and scientific papers are simply summaries of studies, not enough to allow for replication. A discovery is an initial claim: studies with low power have a low probability of rejecting the null, are less likely to predict replication, and exaggerate effect sizes. Summarizes the remedies suggested in the special issue: challenge evaluations based only on the number of publications and impact factor, raise peer-review standards, use registries to clearly delineate between exploratory and confirmatory research. The recommendations are fine but should not be adopted without actual verification of their utility (e.g., lowering standards could mean more junk in the literature). *The pursuit of truth should take precedence over other goals in science*

Replication

A large multi-lab, multi-country replication of 13 effects in the literature ("classic and contemporary") with a standardized protocol presented from a common website. Findings: 11 of 13 findings replicated (one weakly); 2 findings did not (flag priming of political conservatism, currency priming of "system justification"). The size of the effect was generally not moderated by site or by online vs. lab administration. Demonstrates that it is feasible to do large-scale replication studies

Thagard's 6 Basic Habits

Make new connections; Expect the unexpected; Be persistent; Get excited; Be sociable; Use the world (mnemonic: *M*ake *E*xceptional *B*ursts & *G*et a *B*etter *U*). What did he miss? Self-care

"True" Experiments

Manipulate one or more independent variables; hold the situation constant for all groups except for the IV; ensure that the participants in the groups are equivalent on relevant characteristics before the manipulation, most commonly by random assignment or by using a within-subjects design. The strength is the ability to make causal claims about the relationship of the IV and DV

Treatment confound

The manipulated IV is confounded with another variable; the confound comes from the experimental procedure

Measurement Confound

Measure assesses more than one hypothetical construct

Mixed-model factorial designs

Mix of between-subjects and within-subjects IVs

Follow-up to Significant MANOVA

Multivariate group contrasts, linear discriminant functions, LDF-DV contrasts NOT by univariate ANOVAs

Between-subjects

different levels of IV are given to different participants

Logical Analysis

Noticing interesting similarities between concepts or areas (analogy) Looking at problems from different points of view Doing a "task" analysis

Null Reminders

The null computes the probability p(D|H0): given that the null is true, what is the probability of these (or more extreme) data? We are often interested in the probability that our hypothesis is true: we are looking for p(H1|D)

Bayes Theorem

Degree of belief is inferred prior to new evidence. Related to degree of belief: provides a rule for how to update or revise the strengths of evidence-based beliefs in light of new evidence (as opposed to the *frequentist* view of probability as a long-run expected frequency of occurrence, which underlies the standard hypothesis-testing approach). See the formula below
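
The update rule itself, in standard notation (a LaTeX sketch of the rule described above):

```latex
% posterior = likelihood x prior, normalized by the marginal probability of the data
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```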

Induction

Observation of a variety of instances leads to a general characterization (data-driven). Induction does not assure that any inference is true, and its scope is limited to the characteristics that are observed

Platt (1964)

One important suggestion concerns developing multiple hypotheses and testing between them: the importance of habitually thinking in these terms, of *not getting emotionally attached to your hypothesis*, and of being *"problem-oriented"*, not "method-oriented"

Issues With Technology & Science

Open-access science. Pros: greater transparency (see what worked and what did not); greater opportunity for collaboration, problem solving, and feedback; implicit knowledge is made explicit (knowledge management). Cons: ideas can be stolen or information misused; diminished or absent peer review; lack of a reward structure; copyright or patent issues

Stability of CI vs p values

P values can jump around (the "dance of the p values"); whether a given study lands on a significant p is almost a matter of chance. Confidence intervals are a more reliable gauge of what the data show (see the simulation sketch below)
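
A small simulation sketch of the "dance": repeated identical studies of a genuinely real effect produce wildly different p values (the effect size and n here are illustrative assumptions, chosen so power is near .50):

```python
# Ten exact replications of the same true effect, ten very different p values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for i in range(10):
    a = rng.normal(0.0, 1.0, 32)
    b = rng.normal(0.5, 1.0, 32)     # the effect (d = 0.5) is always there
    p = stats.ttest_ind(a, b).pvalue
    print(f"replication {i + 1}: p = {p:.3f}")
# p bounces between clearly "significant" and clearly not, study to study.
```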

Criticisms of Platt

Poor historiography (did the fields that progressed rapidly actually use strong inference?). Impossible to test/list *ALL* alternatives. Relatively vague about how to carry out strong inference in practice. But some argue it is unfair to place these expectations on an article that had a potentially different agenda

Falsificationism

Popper's view: the central role of scientists is to disprove (falsify) hypotheses. Scientists should make "risky predictions," bold and wide in scope, in order to "count." The more times a theory has its predictions tested but not falsified, the better corroborated it is (conversely, if falsified, it should not be modified ad hoc to account for the results). If a theory cannot be falsified, it is not scientific

Suggestions/Problems associated with ANCOVA

There is a problem in how people conceptualize their research: asking how groups would differ on the DV if they did not differ on the CV. Instead, researchers should consider whether the CVs are actually variables of interest that should be studied in their own right

Discriminant analysis

A procedure that maximizes the differences between groups on a categorical variable. In MANOVA, the focus is on mean differences among the groups; in DA, the focus is on prediction of group membership and the dimensions on which the groups differ (see the sketch below)
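
A minimal sketch with scikit-learn's LinearDiscriminantAnalysis on made-up data; the three groups and their means are hypothetical:

```python
# Discriminant analysis: linear composites that separate groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
means = ([0, 0], [1, 0.5], [2, 1])
X = np.vstack([rng.normal(m, 1, size=(30, 2)) for m in means])  # 2 DVs
y = np.repeat([0, 1, 2], 30)                                    # group labels

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.scalings_)       # weights defining each discriminant function
print(lda.predict(X[:5]))  # predicted group membership for the first cases
```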

Practical Problems

The process of defining a problem can often lead to competing explanations that can be tested. Helpful to look at the problem from a number of vantage points (getting beyond the status quo)

Misunderstanding ANCOVA

With random assignment of participants to conditions, observed differences between groups prior to the study are due to chance; with pre-existing groups, observed pre-treatment differences may reflect meaningful differences between the groups. ANCOVA is generally an inappropriate strategy for dealing with these differences: removing a covariate can remove too much variance from the IV of interest

Post-Modernism

Rejects tenets of modernism- truth, reality, and objectivity

Within-subjects

Repeated measures All levels of IV are given to each participant

Priors: Subjective & Objective

S: quantifies the personal belief of the experimenter (specific or vague) O: prior specified according to predetermined rules (generally vague/uninformative)

Criticisms of Falsificationism

Science does not work this way: sometimes data appearing to falsify a theory are wrong (Darwinian evolution and the age of the earth), and it is sometimes difficult in the empirical realm to interpret data as a clear falsification (is it a white crow or another species?). Revisions *(Lakatos)*: the importance of testing between theories (inference to the best explanation); theories must have better support than rivals and must lead to the discovery of new findings that are not predicted by rival theories

Cacioppo

Scientifically informed *intuition* versus everyday intuition: a. lay or naive theories: people naturally "theorize" about human behavior b. these beliefs often conflict with scientific explanations c. Cacioppo calls these *entry* biases. Getting beyond common sense to informed intuition: a. constructive argument with colleagues b. learning from data c. seeking theories that are broad in scope d. knowing the history/philosophy of science helps you consider your underlying assumptions. Naivete can sometimes be useful: a. don't forget you can bring a fresh perspective because you are new to an area b. suggests open-mindedness is something even seasoned researchers should practice

Neuroscience of Screwing Up

Scientists can be biased as well. Even if you screw up, it can still lead you to a positive outcome. Science can be discouraging, but there are ways of maximizing your success: "The problem with science, then, isn't that most experiments fail--it's that most failures are ignored." Take "failures" and "anomalies" and adjust theory accordingly. Embrace collaboration with people of different backgrounds; it may help resolve issues more efficiently. *Diverse* research groups

ANCOVA

Should only be used when groups vary randomly on the covariate; there must be random assignment. It can't be used with pre-existing groups (you can't assign people): if you take anxiety away from depression, you're left with something different. Alternative: use blocking (enter the covariate as an independent variable to see if there is an effect)

Wells: Suggests different approaches

System variables can be addressed through reforms to eyewitness identification procedures. The impact of estimator variables can only be inferred; one way to address this involves educating the decision-makers (judge or jury) by introducing experts to testify about the research findings in these areas, or through the judge's instructions to jurors

Importance of *Diversity*

Sources of knowledge, methods (converging operations), projects, makeup of research teams

Deduction

Specific predictions are derived from general premises (theory-driven). Even when predictions match the data, other theories could make the same prediction; if predictions don't match the data, it may not be a problem with the theory (it could be a problem with construct measurement)

fMRI

T1 images (anatomical): high-resolution 3D data; 64 anatomical slices take about 4 minutes to acquire. T2 images (functional): indirectly related to neural activity; low-resolution; all slices at one time = a volume; many volumes are sampled, giving 4D data = 3 spatial dimensions + *time*. fMRI uses the blood-oxygenation-level-dependent (*BOLD*) signal, an indirect measure of neural activity

Neuroimaging

Structural-anatomy (CT/MRI) Functional-process (PET/fMRI)

Dunbar (1995)

Studied molecular biology labs for 5 years in preparation for the study. How to make a discovery: members of the research team must have different but overlapping research backgrounds; analogical reasoning; a combination of high- and low-risk projects; surprising results should be noted; opportunities for lab members to interact and discuss their research (overlapping projects, breaking the lab into smaller groups)

Whitfield: Group Theory

Studies of the social nature of science are in their infancy, but growing. Much interest in what makes an effective team. Initial ideas about productivity: good to have turnover in team members, between-school collaboration, and having published together previously

Higgins

Takes a *heuristic* view of theory: it is evaluated on whether it generates new ideas/discoveries. Suggests the importance of intimate knowledge of a theory: its domain of applicability, what it does/does not predict, and the assumptions that have to be added to make a prediction in a specific study. One good way to do this is to compare it to another theory. When you do a study, consider *manipulation checks* to test your additional assumptions. Argues against a strict Popperian view of falsifying theories: "A theory can be improved; it can grow." Suggests characteristics that make for a good theory: testable, coherent, economical, generalizable, explains known findings

Science 2.0

Technology has affected and will continue to affect science practice. There are always side effects (technology is not introduced into a vacuum); some effects will be under your control and others will not (mandates or implicit rules). Aim for reflective, intelligent use (is it useful/efficient?)

Cumming (2014)

The New Statistics: suggests changes in general research practice, not simply statistical analysis; 25 guidelines. Move beyond NHST to estimation techniques: use of effect sizes and confidence intervals, precision-for-planning analysis instead of power analysis, meta-analytic thinking, pre-specifying all aspects of the study, full reporting. Major points: drop dichotomous thinking (*use professional judgment in interpreting findings*, adopt a cumulative focus, no more relentless pursuit of p < .05); replication (value both close/exact *and* general/conceptual replication; no need to characterize replications as success or failure); distinguish pre-specified from exploratory research (and analysis)

Multiple-group (factorial) design

Typically analyzed using ANOVA. Main effect = an IV has an effect independent of the other IVs. Interaction effect = the effect of an IV differs depending on the level of another IV

Chronback & Meehl (1955)

The nomological network: a representation of the concepts (constructs) of interest in a study. Theoretical: the constructs. Empirical: their observable manifestations, *and the interrelationships among and between them*. Used to establish reliability and construct validity (convergent and discriminant)

Complete Counterbalancing

Each condition appears in each ordinal position of the sequence, and each condition appears before and after every other condition (see the sketch below)
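
A one-liner sketch enumerating all n! orders with Python's itertools; the condition labels are illustrative:

```python
# All possible condition orders for complete counterbalancing.
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))
print(len(orders), orders)  # 3! = 6 orders; grows fast (4! = 24, 5! = 120)
```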

Source of Research Ideas (As told by Whitley)

Theory, practical problems, prior research, logical analysis, everyday experience [*T*he *P*restigious *P*ractitioner *L*ooks *E*verywhere]. These are not independent

Does brain imaging affect credibility of research? McCabe & Castel (2007)

The use of brain images to represent the level of brain activity associated with cognitive processes influenced ratings of the scientific merit of the reported research, compared to identical articles including no image, a bar graph, or a topographical map. Brain images may be *more persuasive* than other representations of brain activity because they *seem to* provide a tangible physical explanation for cognitive processes. See also neuro-realism: Racine (2005) & Weisberg (2008)

Analogy

Using knowledge from a known domain to make inferences about another domain (*source analog → target analog*). Surface vs. deep structure: levels of similarity between analogs

Treatment Variance

Variability in scores associated with IV

Error Variance

Variability in scores not associated with IV

Natural Confound

Variables that tend to be associated with each other in nature; they can be "unconfounded" by manipulating or measuring them

Null Hypothesis Testing [context of ANOVA and MANOVA]

Vector = a set of DVs. In ANOVA, the null hypothesis test is conducted on a single mean (per group); in MANOVA, the focus is on vectors of means

Variable ordering problem

assess contribution of a DV to an IV or interaction

Reasons to do ANOVAS after MANOVA

assess relative variable importance -clarify meaning of significant IVs -explain results of MANOVA -document effects reflected by significant MANOVA

*Dependent variable contribution*

A way to follow up a MANOVA. Similar to step-down analysis, except you look at the decrease in effect as variables are removed

*Stepdown analysis*

Way to follow up a MANOVA Used when there are theoretical reasons for ordering DVs. Choose order and after first variable enters, the effect of subsequent variables is assessed with the effects of prior variables removed.

Conceptual problem with ANCOVAs and differing groups

When covariate is affected by treatment, the regression adjustment may remove part of the treatment effect or produce a spurious treatment effect

Mediator

Accounts for the relation between the predictor and the criterion; implies a *causal* sequence among *three variables*, X to M to Y (the independent variable causes the mediator, and the mediator causes the dependent variable)

Multivariate Contrasts

compare vector of means. Can be followed by discriminant function analysis or univariate contrasts (with appropriate adjustment)

Wilkinson and APA task force (1999)

Concerns about NHST. General themes: clear/accurate description, clear justification, due diligence (statistical power and ES, exploratory data techniques). Interpreting your data: credibility (relationship to previous data and theory), generalizability, robustness. Discussion sections: place importance on relevance. *Use NHST more mindfully*

ANCOVAs are invalid when...

Groups differ on the covariate. It is important to verify that treatments had no effect on the CV; otherwise a covariance adjustment may remove much of the real treatment effect (when you use existing groups, you have no way of verifying this). ANCOVA cannot be used to control for the impact of differences on the covariate in pre-existing groups

Considerations of fMRI

Limited temporal resolution (great spatial resolution); the hemodynamic response aggregates over populations of neurons; the issue of head motion affects what tasks can be studied with fMRI; requires a big, expensive magnet and a team of researchers; other issues discussed in Fellows

Reasons for not conducting MANOVA

low DV intercorrelations, small number of DVs, small cell frequencies

Counterbalancing

A means of controlling for order effects; not just for within-subjects designs, it can also be used to control for the order of stimulus presentation. Subject-by-subject counterbalancing (ABBA design) vs. across-subjects counterbalancing. *Complete counterbalancing*: all possible orders are tested (n! orders). *Partial counterbalancing*: only some orders are tested. *Random*: randomly select enough different orders for the number of participants available (use at least as many randomly selected sequences as there are values of the IV). *Latin square*: each treatment appears only once in any ordinal position across the sequences

Why devote time to neuroscience techniques?

Neuroscience is having effects on nearly every aspect of psychology: cognitive, affective, social, clinical. Sometimes neuroscience measures can help distinguish between competing theories of a phenomenon. Even if you yourself won't use these techniques, you should be aware of the basic concepts so you can be an informed consumer

Non-random assignment

Pre-existing groups often differ on more than the covariate. Using the covariate will leave those differences intact and bias the results (specification error); there is no basis for determining whether pre-treatment group differences are due to error or to true group differences. Often a problem in psychopathology research, because you can't randomly assign people to a diagnostic category

Sensitization Effects

Performance changes as a reaction to receiving earlier conditions (contrast effects, or when earlier conditions lead participants to form hypotheses)

Practice Effects

performance will be confounded with some improvement as a result of practice, experience, or familiarity

Fatigue Effects

performance will be confounded with some performance deficit resulting from progressive fatigue, boredom, or drop-off in attention

Order Effects

Primarily a concern of within-subjects designs. Order effects occur when the DV is affected by the order in which participants encounter the different levels of the IV: practice effects, fatigue effects, carryover effects, sensitization effects

Moderator

A qualitative or quantitative variable that affects the direction and/or strength of the relation between an independent or predictor variable and a dependent or criterion variable (in ANOVA terms, this appears as an interaction; see the sketch below)
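
Moderation is commonly tested as an interaction term in regression; a sketch with statsmodels, where the variable names (y, x, m) and the simulated effect are hypothetical:

```python
# Testing moderation as an x-by-m interaction in OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.3 * x + 0.2 * m + 0.5 * x * m + rng.normal(size=n)  # effect of x depends on m
df = pd.DataFrame({"y": y, "x": x, "m": m})

fit = smf.ols("y ~ x * m", data=df).fit()  # "x * m" expands to x + m + x:m
print(fit.summary().tables[1])             # a reliable x:m coefficient = moderation
```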

Cognitive Neuroscience Techniques

Single-cell recording (e.g., from the palm-of-hand region of a monkey): sensory data. *Lesion studies*: the effect of selective damage to a particular neuron or circuit; brain-damaged patients can be thought of as lesion studies, except the lesion area is not controlled; it is now possible to make "temporary lesions" in humans using Transcranial Magnetic Stimulation (TMS). Electrical activity of the brain, *EEG/ERP*: EEG (electroencephalogram) reflects global activity; ERP links activity to a particular stimulus event or overt response

Qualitative Research

Makes specific assumptions that differ from quantitative research: adopts a relativistic approach, rejects quantitative methods, and focuses on lived experience; thus interviews play a large role, participants are collaborators, and the goal is to provide description/interpretation. Has been more influential in the humanities, education, and some social sciences (sociology/political science) than in psychology

Carryover Effects

the persistence of the effect of a treatment after it ends; common in drug research (washout period)

Different Types of Relationships between IV and DV

to study curvilinear relationships, you need more than two levels of the IV

Multiple ANOVA situation

When the dependent variables are different. The variable ordering problem arises when you need to find out whether you need fewer variables, and when you try to rank variables by how related they are (order them)

Conceptual independence

When dependent variables are not related. The DVs can be from the same domain, or from different domains (good discriminant validity)

