Experimental Midterm


How does an experiment demonstrate external validity?

-Generalizability -Representative Sample/ Setting/ Event -Sample size (small sample size is less representative) -Are they responding like other people? (due to sample, not manipulation) - Different opinions (heterogeneity) -Distribution of responses "evens out"

State at least one of the basic tools of psychological science (observation, measurement, experimentation) that has been violated, and explain the correct procedure: —> Deanna wanted to do an experiment on gas mileage to see whether name brands give better mileage. She filled her tank with fuel-up one week and with a well-known brand the following week. At the end of that time, she thought things over and said, "Well, I didn't notice much difference between the brands. I filled the car with fuel-up on a Tuesday and needed gas again the following Tuesday. It was the same story with the big-name brand, so they must be about the same."

-Factors to control: measurement of fuel efficiency, distance driven, time of day, starting point - Fill an empty tank with the same amount of gas, predetermine the number of miles to travel, drive at the same time of day (traffic), keep speed constant, use the same car, and run a blind study (no bias; she doesn't know which gas she is using) - Violated the principles of measurement and control. To accurately measure gas mileage, she should have recorded the exact number of miles traveled on each tank of gas and the amount of fuel consumed in each condition. To control antecedent conditions, she should have held factors such as distance, route, time of day, speed, and vehicle constant across both brands.

What is meant by internal validity? Why are non-experimental designs often lower in internal validity?

-Non-experimental designs lack control and lack randomization -Conditions are preexisting/selected, not created by the researcher -Internal validity = only the antecedent conditions change the DV -Non-experimental designs are higher in external validity -Internal validity is the degree to which the researcher is able to demonstrate a causal relationship between antecedent conditions and subsequently observed behaviors -Often lower in internal validity because researchers do not create the antecedent conditions or randomly assign subjects to those conditions -This prevents researchers from concluding that the antecedent conditions, not uncontrolled variables, are responsible for group differences in behavior

Evaluate the pros and cons of open ended and close ended questions

-Open-ended: require subjects to answer with more detail than yes/no; can provide extensive information and can clarify or expand answers to closed questions --> Answers to open-ended questions are harder to quantify than answers to closed questions -Closed: restrict the number of possible alternatives. While the answers are easier to quantify than those to open-ended questions, they often generate less complete information

Why are unobtrusive methods preferred?

-Procedures designed to assess behavior without subjects' knowledge -Preferred because they avoid the problem of reactivity (subjects acting differently because they know they are being observed)

Science

-content: what we know -process: activity that includes the systematic ways in which we go about gathering data, noting relationships, and offering explanations

Causation

1) A must come before B (temporal precedence) 2) The IV and DV must covary 3) There must be no other plausible explanation for the change in the DV

Three strategies for developing an experimental hypothesis

1) read about an issue in an experimental journal (or meta-analyses on a topic of interest) 2) observe how people behave in public 3) choose a real-life problem and try to identify the cause

Belmont Report Proposes..

1) respect for persons : maintains that every individual is an autonomous person with the right to make his or her own decisions about research (basis of informed consent) also provides extra protection for vulnerable populations 2) justice: fairness in both burdens and benefits of research (ex: Tuskegee Syphilis Study) 3) beneficence: minimizes harm and maximizes potential benefits

Applied Research

- addresses real-world problems -ex:how to improve student graduation rates -ex: helping patients deal with grief or improving employee morale

Methodology

- consists of scientific techniques we use to collect and evaluate data - data are the facts

Four main objectives of science

- description (mean, median, mode, basic) - prediction (correlations) - explanation (experimental hypothesis, conditions) - control (control condition for confounds---> pilot study?)

Rosenthal Effect

- experimenters treat subjects differently depending on what they expect from them

Full disclosure/ Debriefing

- explaining the true nature and purpose of the study at the end of participation - researchers honor the principle of full disclosure by completely debriefing subjects at the completion of the experiment

Outliers

- extreme scores that can distort correlations by disturbing trends in the data (range truncation removes outliers) - ex: grades decrease as listening to music increases, except for a minority of students whose scores skew the data
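
A small made-up illustration (not from the original notes) of how a single extreme pair of scores can disturb a correlation; the data values and variable names are invented, and scipy is assumed to be available:

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours of music listened to per week vs. exam grade
music_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
grades      = np.array([95, 92, 90, 86, 84, 81, 78, 75])  # grades fall as listening rises

r_all, _ = stats.pearsonr(music_hours, grades)
print(f"r without outlier: {r_all:.2f}")   # strong negative correlation

# One student listens a lot AND gets a high grade (an outlier)
music_with_outlier = np.append(music_hours, 20)
grades_with_outlier = np.append(grades, 99)

r_out, _ = stats.pearsonr(music_with_outlier, grades_with_outlier)
print(f"r with outlier:    {r_out:.2f}")   # the extreme pair disturbs the trend (r moves toward zero or even flips sign)
```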

Types of validity

- face validity - content validity - predictive validity -concurrent validity - construct validity

scatterplot

- graphic display of pairs of data points on the X and Y axes (illustrates linearity) - each point represents one subject's pair of scores - also called scattergraphs/scattergrams
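
A minimal sketch of producing a scatterplot; the paired scores are invented and matplotlib is assumed to be installed:

```python
import matplotlib.pyplot as plt

# Hypothetical paired scores: each point is one subject's (X, Y) pair
study_hours = [2, 4, 5, 6, 8, 9, 11, 12]
exam_scores = [58, 65, 70, 72, 80, 83, 88, 93]

plt.scatter(study_hours, exam_scores)   # one point per subject
plt.xlabel("Study hours (X)")
plt.ylabel("Exam score (Y)")
plt.title("Scatterplot of paired scores")
plt.show()
```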

Paradigm shift

- increased use of qualitative research may represent a paradigm shift - change in attitudes, values, beliefs, measures, and procedures during a specific time period

Types of reliability

- interrater reliability - test-retest reliability - interitem reliability

Interval

- measures the magnitude of the DV or quantitative size using equal intervals between values with no absolute zero point - ex: Fahrenheit and Centigrade temperatures (0 degrees F does not represent an absence of measurable temperature; temperatures fall below zero on both scales, so we can't say that 40 degrees is twice as hot as 20 degrees) - ex: Likert scale

Ordinal

- measures the magnitude of the DV using ranks, but does not assign precise values - rank ordering of response items - ex: pre-election polls asking respondents to rank order a number of presidential candidates (gives us an idea about the relative popularity of each candidate, but does not tell us precisely how popular any particular candidate is) - ex: 1st to last place in a baking contest (by how much is 1st better than last? We don't know; there is no magnitude)

Plagiarism

- misrepresenting someone's "ideas, words, or written work" as your own - a form of fraud in which an individual claims false credit for another's ideas, words, or written work - not giving someone proper credit for their work

Main tools of psychological science

- observation -measurement (nominal, ordinal, interval, ratio) - experimentation (how?)

Double-blind experiment

- one in which neither the subjects nor the experimenter knows which treatment the subjects are receiving

Power

- probability of correctly rejecting a false null hypothesis - an aspect of statistical conclusion validity
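
A rough sketch (not from the notes) of estimating power by simulation: assume a true effect, draw many samples, and count how often a t-test rejects the null at alpha = .05. The sample size, effect size, and variable names are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, true_diff, sd = 0.05, 30, 0.5, 1.0   # assumed true effect: d = 0.5, 30 subjects per group
n_sims = 5000
rejections = 0

for _ in range(n_sims):
    control   = rng.normal(0.0, sd, n)          # null group
    treatment = rng.normal(true_diff, sd, n)    # group with the assumed true effect
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:
        rejections += 1                          # correctly rejected a false null hypothesis

print(f"Estimated power: {rejections / n_sims:.2f}")   # roughly .47 under these assumptions
```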

Consent form

- provides subjects with information relevant to their participation in the experiment: --> nature of the experiment --> overview of the procedures that will occur --> how long it will take (duration) --> potential risks and benefits --> what they will be required to do

How does a hypothesis affect external validity?

- responses on DV aren't representative of people in the "real world" - limits generalizability

Observation

- systematic noting and recording of events - consistently applied - events and signs must be observable - observations must be objective

Response styles

- tendencies to respond to questions or test items in specific ways, regardless of the content --> willingness to answer --> position preference --> yea-sayers/nay-sayers

Synthetic Statement

- statements that can be either true or false (ex: "hungry students read slowly" can be proven either true or false) - take an "if... then" form, expressing a potential relationship between a particular antecedent condition and a behavior - must be concise enough to be proven wrong - non-synthetic statements should be avoided at all costs: --> analytic statement: one that is always true (ex: "I am pregnant or I am not pregnant"; the statement itself covers all outcomes) --> contradictory statement: a statement with elements that oppose each other, so it is always false (ex: "I have a brother and I do not have a brother")

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: antibiotic

- (IV, nominal scale) Participants are randomly assigned to two different conditions: one group is exposed to the antibiotic and one is not - (DV) Milligrams taken in the last ten days (ratio scale)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: coziness

- (IV, nominal scale) Participants are randomly assigned to three different conditions: in bed, on a couch, on a chair - (DV) To what extent they felt cozy, Likert scale (interval scale) - DV alternative: rank from most to least coziness: bed, couch, chair (ordinal scale)

IACUC (Institutional Animal Care and Use Committee)

- Determines whether researchers have explored all other alternatives and documented that no other feasible alternatives exist - must be comprised of scientists and laypersons, but must also include a veterinarian with expertise in laboratory animal science - APA's Committee on Animal Research and Ethics (CARE) has been influential in establishing national guidelines for animal welfare and includes standards for animal care among its ethical principles - must avoid any unnecessary pain or risk (as with human subjects)

Field Studies

- Non-experimental design conducted in the field (real-life setting) no manipulation or random assignment (low-low/low-high) - types: naturalistic observation & participant- observer studies, unobtrusive measures, surveys

Characteristic of Modern Science

- The scientific mentality: behavior must follow a natural order - Gathering empirical data: data that are observable and experienced - Seeking general principles: go beyond cataloging observations to proposing general principles (laws or theories) that will explain them - Good thinking: a systematic, objective, and rational way of collecting data - Self-correction: accept the uncertainty of our own conclusions; the content of science changes as we acquire new scientific information - Publicizing results: a continuous exchange of information is vital to the scientific process - Replication: the ability to repeat our procedures and get the same results

Induction/ Inductive Model

- The process of reasoning from specific cases to more general principles to form a hypothesis, often used in science and mathematics - by explaining individual instances, we may be able to construct an overall explanatory scheme to describe them - we use inductive reasoning to construct theories by creating explanations that account for empirical data (observations) - ex: Pavlov and his dogs, classical conditioning - ex: B.F. Skinner, operant conditioning

Control

- Use of scientific knowledge to influence behavior - A control condition is critical because of confounds (variables you didn't consider) - ex: comparing the effects of flexible hours vs. working 9-5 on employee morale

Case Studies

- a researcher compiles a descriptive study of a subject's experiences, observable behaviors, and archival records kept by an outside observer - systematically recording experiences and behaviors as they have occurred over time (mostly in clinical psychology) - either low-low, because few restrictions are placed on the type of data to be included, or low-high, when the case study severely restricts the kind of information collected

Selection Interactions threat (to internal validity)

- a selection threat can combine with another threat to form a selection interaction - if subjects are not randomly assigned to groups (or random assignment has failed to balance out differences among subjects), any one of the other threats may have affected some experimental groups and not others - CONTROL AS MANY VARIABLES AS POSSIBLE - ex: weight loss study with participants chosen from two different gyms (selection threat); subjects in group A are more body conscious and purchased a weight loss machine, making them lose more weight than those in group B (history threat)

Statistical Regression Threat (to internal Validity)

- also called regression toward the mean - can occur whenever subjects are assigned to conditions on the basis of extreme scores on a test (extreme scores tend to have less test-retest reliability than moderate scores) - scores at both extremes tend to move closer to the mean without any treatment at all - ex: time 1: extreme low score__x1________mean______y1__extreme high score - ex: time 2: extreme low score ______x1___mean___y1_______extreme high score - statistical regression can easily be mistaken for treatment effects (those with high anxiety who show lower anxiety after treatment may have moved closer to the mean because of statistical regression rather than the treatment)

range truncation

- artificial restriction of the range of X and Y that can reduce strength of correlation coefficient - ex: as children (4-16) get older, shoe size increases, when looking at only 8-9 year olds, positive correlation goes away
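
A small simulated sketch (not from the notes) of how truncating the range weakens a correlation; the data-generating numbers are arbitrary assumptions and scipy/numpy are assumed available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: shoe size grows with age from 4 to 16, plus some noise
ages = rng.uniform(4, 16, 200)
shoe_sizes = 0.8 * ages + rng.normal(0, 1.5, 200)

r_full, _ = stats.pearsonr(ages, shoe_sizes)

# Truncate the range: keep only the 8- to 9-year-olds
mask = (ages >= 8) & (ages <= 9)
r_truncated, _ = stats.pearsonr(ages[mask], shoe_sizes[mask])

print(f"Full range (4-16):     r = {r_full:.2f}")       # strong positive correlation
print(f"Truncated range (8-9): r = {r_truncated:.2f}")  # much weaker correlation
```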

Demand Characteristics

- aspects of an experimental situation that demand people behave in a particular way - what we do is often shaped by what we think is expected behavior in a particular situation (want to conform to what they think is the proper role of a subject) - if subjects know what we expect to find, they might try to produce data that will support the hypothesis (especially in within subjects design)

Measurement

- assigns numbers to objects or events or their characteristic according to conventional rules - we use standardized units, agreed-upon conventions that define such measures as the minute, meter, and the ounce - nominal, ordinal, interval, ratio

Establishing cause and effect

- between antecedent conditions and subjects behaviors - only true experiment allows us to make causal statements (probabilities, never certainties) - temporal relationship (treatment conditions come before the behavior)

Prediction

- capability of knowing in advance when certain behaviors should occur -ex: we know that a death of a grandparent is associated with grief, and we can predict a person will feel grief if a grandparent has died recently -ex: correlation, quasi-experimental (ex: fans of different types of music have different personality types, guess favorite type of music by personality)

Good Thinking

- central feature of scientific method - our approach to collection and interpretation of data should be systematic, objective, and rational

Antecedent Conditions

- circumstances that come before the event or behavior you are trying to explain - we create specific sets of antecedent conditions that we call treatments: treating subjects differently by exposing them to different sets of antecedents - ex from book: different concentrations of negative ions were the specified antecedent conditions, and mood was the behavior explained by these conditions

Nominal

- classifies response items into two or more distinct categories on the basis of some common feature - categories that can be named (does not quantify/ measure magnitude) - ex: sort professors into exciting vs. dull -ex: true/false tests -ex: political affiliation -ex: 0-4 sec, 5-9 sec, 10-14 sec, more than 14 sec

Willingness to answer

- comes into play whenever questions require specific knowledge about facts or issues - if subjects omit answers to key questions, it makes both scoring and interpretation difficult - some researchers attempt to control for this factor by explicitly telling subjects to guess if they are not sure of an answer, or by telling them there are no right or wrong answers

Test-retest reliability

- comparing scores of people who have been measured twice with the same instrument - the degree to which a person's scores are consistent across two or more administrations of a measurement procedure - reliable measures should produce very similar scores each time the person is measured - ex: highly correlated scores on the WAIS taken twice, two weeks apart

IRB (institutional review board)

- composed of both laypeople and researchers who evaluate research proposals to make sure they follow ethical standards (including both groups ensures that the views of the general community, as well as those of scientists, are taken into consideration) - protect the safety of research participants: --> the first task is to decide whether a proposed study increases participants' risk of injury, since psychological research can cause physical and/or psychological discomfort --> we must accurately estimate the degree of risk in our research; we typically do this by reading the literature and consulting colleagues - IRBs will also help researchers estimate the degree of risk involved in studies - studies that place subjects "at risk" increase the chance of harm compared with not participating in the study

Hypothetical Constructs

- concepts (ex: anxiety---> ordinarily good student does poorly on exam) - unseen processes postulated to explain behavior - constructs that cannot be observed directly - infer their existence from behaviors we can observe - effects produced on behavior may differ from one operational definition to another

The Psychology Experiment

- controlled procedure in which at least two different treatment conditions are applied to subjects - procedures carefully controlled - use of random assignment to ensure subjects are as similar as possible (equivalent)

Single-blind experiments

- controls for demand characteristics - an experiment in which subjects do not know which treatment they are getting - we can disclose what is going to happen to them in the experiment and keep them fully informed about the purpose of the study, but keep them "blind" to one thing: we do not tell them which treatment condition they are in - placebo effect: when you give subjects a substance, they might react based on what they expect the drug to do (to control for this, give one group of subjects a placebo pill and the other group the actual treatment)

Authorship

- credit should only be given to those who made a major contribution to the research or writing - researchers should not take credit for the same research more than once - the ethical solution is to cite original publications when republishing data in a journal article or republishing journal articles in an edited volume

Confidentiality

- data are securely stored and used only for the purpose explained to the subject

Empirical

- data that are observed or experienced - ex: Galileo's empirical approach was superior to Aristotle's common-sense method (light objects fall just as rapidly as heavy objects in a vacuum)

Internal Validity

- degree to which a research design allows us to make causal statements (causal relationship b/w IV and DV) - experiment has high internal validity when we can demonstrate that only the antecedent conditions are responsible for group differences in behavior - allows us to draw cause and effect conclusions (systematic difference vs. error) - one of the most important concepts in experimentation

External Validity

- degree to which research findings can be generalized to other settings and individuals (outside the research setting) - when observations can be generalized to other settings and other people, they are high in external validity - replication with different samples/settings supports high external validity - true experiments often lack external validity, while non-experimental studies are higher in external validity but lower in internal validity (what we gain in external, we lose in internal, and vice versa)

Measured Operational Definition

- describes exactly what procedures we follow to assess the impact of different treatment conditions - exact descriptions of the specific behaviors/responses recorded and explain how those responses are scored -ex: Scores on culture fair intelligence test

Phenomenology

- description of an individual's immediate experience - rather than looking at behaviors and events that are external to us, we begin with personal experience as a source of data - low-low end because antecedents are not manipulated, and data consist of immediate experiences, so no constraints are imposed - William James: habits, emotions, consciousness, and streams of thought (typically combined with other research methods)

Archival Studies

- descriptive method where researchers re-examine data that were collected for other purposes - crime and death rates, education levels, salaries, housing patterns, disease rates, information about people's attitudes - ex: universities and graduation rates

Non-experimental approaches

- do not create levels of the Independent Variable and do not randomly assign subjects to conditions - used when experiments are not ethical/possible, or where we want to test hypothesis in realistic conditions - used to study behaviors in natural settings (children playing, chimps parenting, life in a gang), or to sample personal information (opinions, attitudes, preferences)

Minimal Risk Studies

- do not increase the likelihood of injury; risk is no greater in probability or severity than that ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests - ex: observations of public behavior, anonymous questionnaires, and certain kinds of archival research - informed consent is not always mandatory in minimal risk studies, but should still be obtained as a safeguard whenever possible

Predictive validity

- do our procedures yield information that enables us to predict future behavior/performance? (they should if we're measuring what we intend to measure) - can we use people's responses to a questionnaire to predict how they will actually behave? - ex: if people have the desire to affiliate, it is reasonable to predict that they will stay near others when they have the opportunity (after the study, instead of dismissing subjects, bring them all to a room together; subjects who said they wanted to wait with others will sit closer to others, and vice versa)

Vulnerable Subjects

- economically or educationally disadvantaged, cognitively impaired children and adults, prisoners, pregnant women -whenever subject is minor or cognitively impaired, researchers need to obtain consent from a parent/legal guardian -subjects should be given as much explanation as they can understand and be allowed to refuse to participate, even though the legal guardian or parent has given permission - assent/ agreement of minor children ages 7 and above - consent forms need to be written in clear, understandable language at the appropriate reading level for participants - researchers need to verbally reinforce information that is important for subjects

Important considerations for survey items

- engage subjects from the start with interesting questions that are... 1) relevant to your survey's central topic 2) easy to answer 3) interesting 4) answerable by most respondents 5) in closed format - use commonly used response options - avoid value-laden questions that might make responding embarrassing (ex: --> version 1: do you believe doctors should be allowed to kill unborn babies during the first trimester of pregnancy? --> version 2: do you believe doctors should be allowed to terminate a pregnancy during the first trimester?)

Animal Welfare

- ensures humane care and treatment of animals (ex: requires means for engaging in species- typical activities, like primates and other animals known to live in social groups in nature must be provided with opportunities to socialize)

Coefficient of determination (r^2)

- estimates the amount of variability in the outcome that can be explained by the predictor variable - ex: handshake firmness accounted for 31% of the variability in first-impression positivity
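
A tiny worked sketch of the computation: square the correlation coefficient to get the proportion of variability explained. The r value here is an assumption chosen so that r^2 comes out near the 31% figure in the example above:

```python
# Coefficient of determination: square the correlation coefficient
r = 0.56                           # hypothetical correlation (handshake firmness vs. first impressions)
r_squared = r ** 2
print(f"r^2 = {r_squared:.2f}")    # ~0.31 -> about 31% of the variability is explained
```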

Concurrent Validity

- evaluated by comparing scores on the measuring instrument with another known standard for the variable being tested -compares scores on measuring instrument with outside criterion (comparative, not predictive) - reflects whether scores on the measuring device correlate with scores obtained from another method of measuring the same concept

Common Sense Psychology

- everyday, nonscientific data gathering that shapes our expectations and beliefs and directs our behavior - we find that our ability to gather data in a systematic and impartial way is constrained by two very important factors: --> the sources of psychological information --> inferential strategies: tend to be the brain's way of coping with an immense volume of information (traits might be more useful for predicting how someone will behave over the long term, whereas situations might be better predictors of momentary behaviors)

Necessary and sufficient conditions

- ex: cutting down on carbs to lose weight is sufficient, but not necessary - cause-and-effect relationships established through scientific research commonly involve identifying sufficient conditions (vs. necessary ones) - ex: a number of studies have shown that being in a good mood increases our willingness to help others (yet other factors can also explain the phenomenon)

Replication

- exact/ systematic repetition of a study - increases our confidence in experimental results by adding to weight of supporting evidence - control for confounds (operational definitions of variables) - pilot study (testing operational definitions of variables)

Naturalistic observation

- examines subjects' spontaneous behavior in their actual environments (low-low) - animal behavior research (ethology) - researchers try to remain inconspicuous so that the behaviors they observe are not altered by the presence of the observer

Confound

- an experiment is confounded when the value of an extraneous variable systematically changes along with the IV - creates a rival hypothesis - makes it difficult to separate the effects

Experimenter bias

- the experimenter does something that creates confounding in the experiment (could be a cue to respond in a particular way, or the experimenter might behave differently in different treatment conditions) - researchers are more likely to make errors that favor the hypothesis

Confederate

- the experimenter's accomplice - use of a confederate is deceptive because subjects are led to believe that the confederate is another subject, an experimenter, or a bystander, rather than part of the experimental manipulation - ex: Solomon Asch's line experiment (conformity): confederates all chose a line that wasn't the correct match to see if the participant would follow suit due to conformity

Experimental Operational Definition

- explains the precise meaning of the IVs - describes exactly what was done to create the various treatment conditions of the experiment - includes all steps that were followed to set up each value of the IV - whether the IV is manipulated or selected, we need precise operational definitions - ex: for word orientation and rate of learning, we would need to provide the procedure used to present the words, the size of the words, the type of printing, the level of illumination in the room, the distance and location of the words in the subject's visual field, and the duration of word presentation

Hypothesis

- explanation of the relation between two or more variables (predictor, IV, DV) - thesis, main idea of experiment that makes statement about the predicted relationship between at least two variables

Interitem reliability

- extent to which different parts of a questionnaire, test, or other instrument designed to assess the same variable attain consistent results - scores on different items designed to measure the same construct should be highly correlated --> internal consistency: items show a high degree of internal consistency if they are reliably measuring the same variable --> split-half reliability: splitting the test into two halves at random and computing a coefficient of reliability between the scores obtained on the two halves - Cronbach's alpha
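
A minimal sketch (not from the notes) of computing Cronbach's alpha from a subjects-by-items score matrix; the questionnaire data are invented and numpy is assumed available:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                              # number of items
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical questionnaire: 5 subjects x 4 items meant to measure the same construct
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")   # high value -> good internal consistency
```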

Reliability

- extent to which the survey is consistent and repeatable (consistency, dependability) - refers to the consistency of experimental operational definition and measured operational definitions 1) responses to similar questions in the survey should be consistent 2)survey should generate very similar responses across different survey-givers 3)the survey should generate very similar responses if it is given to the same person more than once ex: reliable bathroom scale should display the same weight if you measure yourself 3 times in 1 minute

replication

- external validity: generalizability is being tested in another setting - internal validity: whether you can reproduce your results - adds to the body of research, adds to theory - if a replication fails, the finding may not be generalizable because it did not work in another setting; the original study may have been confounded or have an alternative explanation; a pilot study may have been in order and you didn't do one; a manipulation check may have been faulty; features of the study that should have been closely followed may not have been; problems with the sample may make results differ; construct validity may not have been handled as in the original study

Extraneous variable

- factors that are not the focus of the experiment but can influence the findings - many things other than the IV and DV may be changing throughout the experiment (ex: time of day, the experimenter's level of fatigue, the particular subjects being tested) - anything that varies within the experiment - in a well-controlled experiment, we attempt to recognize the potential for extraneous variables and use procedures to control them (as long as the influences are infrequent, random events, they do not necessarily invalidate an experiment)

Peer Review Process

- filters submitted manuscripts so that only 15-20% of articles are printed - the reviewer's task is to assess the merit of a submission, looking for problems and suggesting improvements

Research Ethics

- framework of values within which we conduct research -ethics help researchers identify actions we consider good and bad and explain the principles by which we make responsible decisions in actual situations -responsible research aimed at advancing our understanding of feelings, thoughts, and behaviors in way that will ultimately benefit humanity - research that is harmful to participants is undesirable even though it may increase wisdom

Interrater reliability

- have different observers take measurements of the same responses, look for agreement between measurements - if there is little agreement between them, chances are good that the measuring procedure is not reliable -ex: several observers agree when scoring same personal essay

Manifest content (yea/nay saying)

- how we expect subjects to respond - MMPI-2 ("I know that everything will be all right": y/n) - the plain meaning of the words that actually appear on the page, BUT subjects do not always respond to manifest content, especially when a questionnaire asks about feelings or attitudes - some subjects seem to respond to questions in a consistent way: --> yea-sayers/response acquiescence (apt to agree with a question regardless of its manifest content) --> nay-sayers/response deviation (tend to disagree no matter what they are asked) - ex of wording that avoids this: "In your opinion, have prices gone up, gone down, or stayed about the same over the past year, or don't you know?" (requires more than a y/n answer)

Example of experiment (IV, DV)

- hypothesis: people learn words faster when the words are written horizontally than when they are written vertically - IV: word orientation (levels: vertical, horizontal) -DV : rate of learning - you are predicting the rate of learning depends on the way the words are presented

When an extraneous variable becomes a confound

- if uncontrolled extraneous variables are allowed to change along with the IV, we might not be able to tell whether changes in the DV were caused by changes in the IV or by extraneous variables that also changed across conditions - confounding sabotages the experiment because the effects we see can be explained equally well by changes in the extraneous variable or in the IV

Discussion section of report

- implications of your findings; what the results mean - if findings are inconsistent with past results reported by other researchers, you need to explain this by contrasting your study with theirs

Cross-sectional studies (QED) (low-high)

- instead of tracking the same group over a long span of time, subjects who are already at different stages are compared at a single point in time - ex: observing groups of families who are already at 1 month before, 1,4,8,12 month mark after birth - requires more participants (while longitudinal takes more time), statistical tests less powerful when comparing different groups than with same group, different subjects may systematically vary in other characteristics that could influence behavior

Limitations of qualitative studies

- interpretation of data may be influenced by the researcher's own viewpoint - presence of the researcher could affect how participants respond - accuracy of self-reports - internal validity ("Is it possible that something else besides shame is an important feature that stops women from reporting their abuse? Is fear important too?") - external validity ("Would we expect to get the same data from non-Israeli women who were victims of spousal abuse?") - reproducibility of conclusions

Scientific Fraud

- involves falsifying (manipulating) or fabricating (creating) data - a researcher's graduation, tenure, promotion, funding, or reputation may motivate them to commit fraud; however, competition among colleagues for limited research resources can be a strong deterrent to fraud, because researchers will be on the alert for fraud - safeguards against fraud: 1) manuscripts reviewed by an editor and several experts before being accepted for publication (peer review) 2) replication 3) competition among colleagues

Participant- Observer study

- involves field observation in which the researcher is part of the studied group (ex: confederate, focus groups) - this approach contrasts with naturalistic observation, where the researcher does not interact with the research subject to avoid reactivity - main problems include invasion of privacy, not telling people you are studying their behavior, and pretending to be a group member (can be serious problem)

Theory

- is an interim explanation, a set of related statements used to explain and predict phenomena - integrate diverse data, explain behavior, and predict new instances of behavior - (theory based expectancies: can cause us to pay more attention to behavioral information that is predicted by the theory and to overlook non predictive behaviors)

Level of measurement

- kind of scale used to measure a response - as we go from lower to higher scales, we gain more precise information about the magnitude of variables and their quantitative size - best type of scale depends on variable you are studying and the level of precision you desire - since psychological variables like traits, attitudes, and preferences represent a continuous dimension, several levels of measurement "fit" equally well - when working with variables like sociability, psychologists often select the highest scale since it provides more information and allows analyses using more powerful statistics - nominal -ordinal -interval -ratio

Explanation

- knowledge of the conditions that reliably produce a behavior - step further than prediction because we also understand what causes a behavior to occur - Must use an experimental research design in which we systematically manipulate aspects of the setting with the intention of producing a specific behavior while also controlling for other factors that might also influence this behavior during an experiment

Why do experimental studies often achieve higher Internal Validity than non-experimental studies?

- laboratory experiments are often higher in internal validity because of control over extraneous variables - researchers create levels of the IV and use procedures like matching (when you need to control for possible confounding variable, like height) and random assignment to conditions

Fruitful

- leads to new studies - difficult to know in advance which hypotheses will be the most fruitful

Spatial and logical relationships

- less convincing than cause and effect - spatial ex: finds a broken object, sees the cat near the object, assumes it was the cat due to the spatial relationship - logical ex: finds a small hole in the wall, knows the dog runs around with a bone that he bangs into the wall, assumes it was the dog because of the logical relationship

Regression Line

- line of best fit - illustrates the mathematical equation that best describes the linear relationship between two measured scores - direction of the line corresponds to the direction of the relationship (positive or negative)
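
A minimal sketch of fitting a regression line with a least-squares fit; the paired scores are invented and numpy is assumed available:

```python
import numpy as np

# Hypothetical paired scores from each subject
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

slope, intercept = np.polyfit(x, y, deg=1)    # least-squares line of best fit
print(f"y = {slope:.2f}x + {intercept:.2f}")  # positive slope -> positive relationship
```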

Deception

- may be used when it is the best way to obtain information in study (use of deception most often justified by the knowledge that is gained) - may not be used to minimize participants perceptions of risk or exaggerate perception of potential benefits - subjects must be allowed to withdraw at any time, should never face coercion to remain - deception that is used must be such that subjects would not refuse to participate if they knew what was really happening -ex: Milgram Obedience Study (deception used when participant thought they were administering shock to "other participant" who answered question wrong)

Testable statements

- means for manipulating antecedent conditions and measuring resulting behavior must exist - can be assessed by manipulating the IV and measuring results of DV - without testability, cannot evaluate validity of hypothesis (internal, external, construct validity) -ex: If a dog twitches a lot in its sleep, then it must be dreaming (cannot be tested)

Informed Consent

- means that the subject agrees to participate after having been fully informed about the nature of the study 1) individuals must give their consent freely, without the use of force, duress, or coercion 2) individuals must be free to drop out of the experiment at any time 3) researchers must give subjects a full explanation of the procedures to be followed and offer to answer any questions about them 4) researchers must make clear the potential risks and benefits of the experiment (must explain any possibility of pain or injury in advance) 5) researchers must provide assurances that all data will remain private and confidential 6) subjects may not be asked to release the researchers from liability or to waive their legal rights in case of injury

effect size

- a measure of how much variability in the DV can be explained by the IV - speaks to how meaningful your manipulation/predictors were - a result can be statistically significant yet have a small effect size that is not practically meaningful; significance and effect size complement each other, and one does not rely on the other - ex: a statistically significant interaction with p = .047 (marginally significant) and r^2 = .02 means that 98% of the variability in likelihood of use is explained by something other than support - ex: square your r: r = .70, effect size r^2 = .49

Ratio

- measures the magnitude of the DV using equal intervals between values and an absolute zero -ex: measurements of physical properties, such as height and weight -ex: time (0 ft, 0 kg, 0 second all represent absence of height, weight, and time)

pretest/posttest design (QED)

- measuring people's levels of behavior before and after an event and comparing the levels - use when you believe there are factors other than the IV that will affect the DV (subjects predisposed to particular ideas; the DV will fluctuate more as a function of a preexisting condition than as a function of the manipulation) - assesses effects of naturally occurring events when a true experiment is not possible (approval ratings before and after a presidential speech) - can also be used in the laboratory to measure the effect of a treatment presented to subjects by the researcher (affects internal validity) - ex: a teacher gives an SAT prep test before the class and then after, and finds students scored 20 points higher on the second test --> too many other things can influence the improvement (incidental/intentional learning outside class, practice effects (pretest sensitization)) --> when there is a long time between the pretest and posttest, the researcher needs to be aware that the event being assessed may not be the only cause of the difference before and after the event

Position preference

- multiple choice questions - when in doubt about the right answer on a multiple choice exam, perhaps you always answer C - to prevent this, sophisticated test builders vary the arrangement of correct answers throughout a test

Falsifiable Statements

- must be disprovable by the research findings - needs to be worded so that failures to find the predicted effect must be considered evidence that the hypothesis is indeed false -ex: "if you read this book carefully enough, then you will be able to design a good experiment" (not falsifiable because any failures to produce the predicted effect can be explained away by the researcher)

Survey Research

- obtains data about opinions, attitudes, preferences, and behaviors using questionnaires and interviews (low-low/ low-high depending on type of response questions) - survey approach allows researchers to study private experience, which cannot be directly observed - can efficiently collect large amounts of data - anonymous surveys can increase the accuracy of answers to sensitive questions - allow us to draw inferences about causes of behavior and can complement lab and field studies - does not allow us to test hypotheses about causal relationships because we do not manipulate the IV and control for extraneous variables -written questionnaires and face-to-face interviews are the two most common survey techniques in psychology research

Interviews

- one of the best ways to gather high-quality survey data is to conduct face-to-face interviews, but they are the most expensive - researchers have found that female interviewers tend to be more successful than male interviewers in gaining cooperation - frequently made up completely or mostly of closed questions, whereas in-depth interviews are generally made up of open-ended questions - rapport (winning the interviewee's trust), avoiding judgmental statements, and knowing how to keep the interview flowing are important --> structured interview: the same questions asked precisely the same way each time --> unstructured interview: more free-flowing; the interviewer is free to explore interesting issues, but the information may not lend itself to content analyses or statistics - the interviewer's physical appearance may affect the way subjects respond

Non-experimental approaches

- phenomenology - case studies -field studies - archival studies - qualitative studies

Cover story

- plausible but false explanation for the procedures used in the study (alternative for controlling the possibility that subjects will guess the experimental hypothesis) - told to disguise the actual research hypothesis so that subjects will not guess what it is -debriefing required for all such types of experiments

Context effects

- the position of a question, where it falls within the question order, can influence how the question is interpreted - can be caught by pretesting questions - particularly likely when two questions are related (separate them with buffer items) --> ex: a "not sexy ------ sexy" rating scale placed right before a "nice ---------- nasty" scale sexualizes the second question

Animal Rights

- the position that sensate species (those that can feel pain and suffer) have rights equal to humans - animal research is acceptable to further the understanding of behavioral principles and to promote the welfare of humans - CARE reports that animal research has contributed significantly to knowledge about drug abuse and physical dependence as well as to the development of new drugs for psychological disorders - fewer animals are being used each year; about 90% of those used are rodents and birds, and only about 5% are monkeys or other nonhuman primates

Within-subjects design

- present all treatments to each subject and measure the effect of each treatment after it is presented - carry-over effects, fatigue, guessing hypothesis of study/ study design

Deduction/ Deductive Model

- process of reasoning from general principles to make predictions about specific instances - most useful when we have well-developed theory with clearly stated basic premises - test value of a theory - through induction, we devise general principles and theories that can be used to organize, explain, and predict behavior until more satisfactory principles are found, through deduction we rigorously test the implications of those theories

Introduction section of report

- provides a selective review of research findings related to the research hypothesis - identifies which questions have not been definitively answered by previous studies and helps show how your experiment advances knowledge - a selective review of relevant, recent research (directly related to the research topic) - should provide the empirical background for your experiment and guide the reader toward your research hypothesis

Maturation Threat (to internal validity)

- refers to any internal (physical or psychological) changes in subjects that might have affected scores on the dependent measure -ex: children that make cognitive and physical advances during certain ages (longitudinal design) -ex: hypothesis guessing from university psych students -ex: boredom and fatigue during single testing session (within subjects design)

Testing Threat (to internal validity)

- refers to the effects on the DV produced by previous administrations of the same test or other measuring instrument (pretest/posttest) - individuals frequently perform differently the second time they are tested (performance tends to improve somewhat with practice even without any special treatment)

Validity

- refers to the extent to which a survey actually measures the intended topic - does the survey measure what you want it to? - does performance on the survey predict actual behavior? - does it give the same results as other surveys designed to measure similar topics? - pretest the questionnaire to help ensure validity - we can formulate precise, objective definitions that are reliable but still not valid - researchers compare the validity of different procedures to develop the best ones (manipulation check) (valid when the operational definition accurately manipulates the IV or measures the DV)

History threat (to internal validity)

- refers to the history of the experiment - whether any outside event or occurrence, other than the IV, could have caused the experimental effects - history is most often a problem when a whole group of individuals are tested together in the same experimental condition

Simple correlations

- relationships between pairs of scores from each subject - calculated using the Pearson product-moment correlation coefficient (r): values can range anywhere from -1.00 to +1.00 - ex: r(50) = +.70, p = .001
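
A minimal sketch of computing and reporting a Pearson correlation in the r(df) format shown above; the paired scores are invented and scipy is assumed available:

```python
from scipy import stats

# Hypothetical paired scores from each subject
x = [3, 5, 2, 8, 7, 4, 6, 9]
y = [10, 14, 9, 20, 18, 12, 15, 22]

r, p = stats.pearsonr(x, y)
df = len(x) - 2                             # degrees of freedom for Pearson r = N - 2
print(f"r({df}) = {r:+.2f}, p = {p:.3f}")   # reported in the same style as r(50) = +.70, p = .001
```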

Qualitative Research

- relies on words rather than numbers for the data being collected (gives you the context of the research) - looks at themes that present themselves in a particular research setting - focuses on self-reports, personal narratives, expression of ideas, memories, feelings, and thoughts - used to study phenomena that are contextual, meaning they cannot be understood outside the context in which they appear - aims to fully describe and understand the meaning of the experience for individuals

Limitations of case studies

- representativeness of the sample - completeness of data --> we cannot be sure that we are aware of all relevant aspects of that person's life --> subjects might neglect to mention important information - reliance on retrospective data: data collected in the present that are based on recollections of past events (apt to be inaccurate) (interrater reliability)

Open ended questions

- require that participants respond with more than a yes or a 1-10 rating and have a low imposition of units - a system must be designed to evaluate and categorize the content of each answer --> content analysis: responses are assigned to categories that are created from the data according to objective rules or guidelines - ex: "what kinds of things might cause you to hit someone?"

Non-equivalent groups design (QED)

- the researcher compares the effects of different treatment conditions on preexisting groups of subjects - ex: a company can install fluorescent lighting in factory A and incandescent lighting in factory B and assess productivity; the workers in each factory are already predetermined and may vary in ways that influence productivity other than the lighting (because they are not randomly assigned, differences aren't evenly dispersed) (use a pretest to help detect this)

Systematic observation

- researcher uses pre-arranged strategy for recording observations in which each observation is recorded using specific rules or guidelines so that observations are more objective (low-high) can be used in NOS

Replication

- researchers attempt to reproduce findings of others - if data falsified, unlikely data can be replicated

Mail surveys

- response rates between 45-75% - consider a second mailing to people who did not return the first survey (can add 50% to the number of surveys returned from the first mailing) - non-response makes interpreting data difficult; subjects may not answer sensitive questions (especially if the question would indicate they had engaged in socially undesirable, deviant, or illegal activities) - if non-response rates are high, those who did respond may be different from the others (external validity)

Review research to formulate hypothesis..

- review research that has already been published (both experimental and non-experimental) 1) identifies questions that have not been conclusively answered/addressed 2) suggests new hypotheses 3) identifies additional variables that could mediate the effect 4) identifies problems other researchers have experienced 5) helps avoid duplication of prior research when replication is not intended - serendipity: the knack of finding things that are not being sought; a scientist who is open to unexpected results and who is sufficiently informed can understand the significance of unexpected findings (factors you may not have accounted for, better theoretical explanations)

Longitudinal design (QED) (low-high)

- same group of subjects are measured at different points in time to determine effects of behavior (influence of time on behavior) - important for researchers studying human and animal growth and development - ex: assess behavioral changes in firstborn children after birth of second child, interested in regression behavior, 1 month before birth, 1, 4, 8, 12 months after birth,

Types of surveys

- self-administered questionnaires/written questionnaires -mail surveys -telephone surveys -Internet surveys -interviews -focus groups

Parsimonious Statements

- the simplest explanation is preferred over one that requires many supporting assumptions - allows us to focus our attention on the main factors that influence our DV - ex (not parsimonious): "If you look at an appealing photograph, then your pupils will dilate if it's a warm Sunday in June"

Advantages of case studies

- source of inferences, hypotheses, and theories - source for developing therapy techniques - allow study of rare phenomena - provide exceptions or counter instances to accepted ideas, theories, or practices -have persuasive or motivational value (advertising)

Non-experimental hypothesis

- a statement of your predictions of how events, traits, or behaviors might be related/correlated, not a statement about cause and effect - non-experimental designs that do not restrict subjects' responses do not typically include a hypothesis (phenomenology, naturalistic observation, qualitative studies, surveys of attitudes and opinions, vs. correlational and quasi-experimental designs that do generate hypotheses about predicted relationships between variables)

Closed questions

- structured questions - can be answered using a limited number of alternatives and have a high imposition of units - y/n, t/f, 1-10 rating

Ex-post facto studies (QED) (low-high)

- a study in which the researcher systematically examines the effects of subject characteristics (subject variables) without actually manipulating them; the experimenter has no control over who belongs to each of the treatment groups of the study - forms treatment groups by selecting subjects on the basis of differences that already exist (internal validity problem) - "after the fact" - subjects come into an ex post facto study with attributes that already differ from one subject to another; researchers then look for differences in behavior that are related to group membership - most often study the extremes, subjects who rank highest/lowest on the dimension of interest (external validity problem) - ex: Hannah's father died last year, so she is placed in a group of subjects who have experienced the loss of a parent

Reactivity

- subjects alter their behaviors when they know they're being observed - confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do (performance can be altered by possible negative/positive consequences) - use of unobtrusive measures: behavioral indicators are observed without the subjects' knowledge - affects both internal validity (participants' changes in scores may be due to reactivity instead of the IV) and external validity (participants aren't reacting how they would in the real world) - also affects statistical conclusion validity: it decreases confidence in the researcher's statistical decisions and reduces the power of the study

Anonymity

- subjects are not identified by name - identified by code numbers or fictitious names

between-subjects design

- subjects receive only one kind of treatment

confirmation bias

- tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses.

Experimental Hypothesis

- tentative explanation of an event or behavior - statement that explains the effects of specified antecedent conditions on a measured behavior - must be synthetic statements that are testable, falsifiable, parsimonious, and fruitful -ex: CBT produces less relapse compared to antidepressants

Basic Research

- tests theories and explains psychological phenomena in humans and animals -ex: helping behavior

Independent Variable

- the dimension that the experimenter intentionally manipulates - what will you manipulate/vary to test the hypothesis? - the antecedent the experimenter chooses to vary - its values are created by the experimenter and are not affected by anything else that happens in the experiment - at least two different treatment conditions are required

How do operational definitions affect the internal validity of what you're studying?

- the intended cause must lead to the intended effect - poorly constructed operational definitions can lead to confounded results

Experimentation

- the process we use to test predictions, which we call hypotheses, and to establish cause-and-effect relationships - not always possible because variables are not always testable: --> we must have procedures for manipulating the setting --> the predicted outcome must be observable --> we must be able to measure the outcome (operational definition) - we must be able to manipulate the IV and measure the DV (operational definitions)

Telephone surveys

- people with listed phone numbers may be different from people whose numbers are unlisted (makes it difficult to draw conclusions or to generalize the results of surveys)

Solomon 4 group design

- used to control for threats to internal validity 1) group that receives the pretest, treatment, and posttest 2) control group that takes both the pretest and posttest but is not exposed to the treatment 3) group that receives the treatment and takes only the posttest 4) group that takes only the posttest (no pretest, no treatment)

What's wrong with this experiment?: Suppose a researcher was interested in the effects of age on communicator persuasiveness. She hypothesized that older communicators would be more persuasive than younger communicators, even if both presented the same argument. She set up an experiment with two experimental groups. Subjects listened to either an 18-year-old man or a 35-year-old man presenting the same 3-minute argument in favor of gun control. After listening, subjects rated how persuaded they were by the argument they had just heard. As the researcher predicted, subjects who heard the older man speak were more persuaded.

- too many extraneous variables could have changed along with the IV (age): older speaker may have seemed more attractive, better educated, more intelligent, or more self confident. - could not say with assurance that age, and not other extraneous variables, influenced persuasion

Construct validity

- the transition from theory to research application - start with a general idea of the qualities that characterize the construct we want to test; then seek ways of putting our idea to an empirical test - "Have I succeeded in creating a measuring device that measures the construct I want to test? Are my operational definitions tapping only the construct I want to test?" - scores should not correlate highly with scores measuring other constructs (discriminant validity) and should correlate highly with scores measuring the same construct -ex: intelligence as an IV --> separate subjects into groups on the basis of high/low intelligence scores -ex: intelligence as a DV --> introduce some environmental change and observe its impact on subsequent IQ scores (in either case, we want to be sure our intelligence test measures only "intelligence" and not another construct)

Other controls for experimenter bias (if treatments cannot be blind)

- try not to assign participants to their conditions until after we have finished interacting with them - make sure the person who scores the subjects' responses does not know which treatment each subject received (independent rater) - do not tell the rater which subjects belonged to which group

Risk/benefit analyses & minimal risk studies

- used by IRBs and HSRBs to determine whether any risks to the individual are outweighed by the potential benefits or the importance of the knowledge to be gained - IRBs should approve an "at risk" study only when the risk/benefit analysis determines that the risks to participants are outweighed by the benefits

Correlational Designs (low- high)

- used to establish relationships among preexisting behaviors and can be used to predict one set of behaviors from another (such as predicting your college grades from scores on your entrance exams) - show relationships between sets of antecedent conditions and behavioral effects (e.g., the relationship between smoking and lung cancer) - may serve as the basis for new experimental hypotheses - because the antecedents are preexisting, they are not manipulated or controlled by the researcher (so it is difficult to establish cause and effect relationships conclusively with this technique) - the higher the correlation, the more accurate our predictions will be

Focus Groups

- usually small groups of people with similar characteristics (all women, all young Black men, all university students) who are brought together by an interviewer (facilitator) who guides the group in a discussion of specific issues - qualitative researchers often employ focus groups as a method of data collection - also useful for pretesting survey questions and procedures

Levels of the IV

- the values of the IV - the experimenter varies the levels of the IV by creating different treatment conditions within the experiment - each treatment condition represents one level of the IV -ex: a professor gives a test on blue and yellow paper (levels of the IV) to see whether the color of the paper (IV) affects performance - in a true experiment, we test the effects of the manipulated IV (so we make certain our treatment groups do not consist of people who differ on preexisting characteristics; avoid this with random assignment) --> ex: if the people who took the test on blue paper were all introverts and the people who took it on yellow paper were all extraverts, the preexisting difference (not the paper color) could explain any difference in performance

Parsimony

- we prefer the simplest useful explanation (crosses many groups, generalizable) - also known as Occam's razor: entities should not be multiplied without necessity. What Occam had in mind was simplicity, precision, and clarity of thought

Self-administered questionnaires

- when handing out a survey in person, consider the possibility of reactivity (the tendency of subjects to alter their responses if they are aware of the presence of an observer) - if subjects know they can be identified, there is a much greater chance that their responses will be affected by the social desirability response set - can collect data from large groups at once, but the group may not take the survey seriously, sensitive questions can be embarrassing in a group setting, and subjects may talk amongst themselves

Subject mortality threat (to internal validity)

- when more subjects drop out of one experimental condition than the other - something about the treatment could be making subjects drop out (treat a high dropout rate as a red flag; the treatment could be frightening, painful, or distressing)

Multiple Regression

- when more than two behaviors are correlated, use multiple regression to predict the score on one behavior from scores on the others - determine the weight of each predictor (beta weights) -ex: predicting vocabulary scores from tv viewing and age; age carries more weight (see the sketch below)
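A rough sketch of how beta weights can be computed, in case a concrete example helps; the data values and variable names below are invented for illustration and are not the textbook's actual numbers:

```python
# Illustrative sketch: standardized regression (beta) weights with NumPy.
# tv_hours, age, and vocab are hypothetical data invented for the example.
import numpy as np

tv_hours = np.array([4.0, 2.5, 6.0, 1.0, 3.5, 5.0, 2.0, 4.5])
age      = np.array([8.0, 10., 7.0, 12., 9.0, 8.0, 11., 9.0])
vocab    = np.array([55., 70., 48., 82., 63., 52., 75., 60.])

def z(x):
    """Standardize a variable (mean 0, SD 1)."""
    return (x - x.mean()) / x.std(ddof=1)

# Regress the standardized criterion on the standardized predictors;
# the slopes are the beta weights, so their relative size shows which
# predictor (tv viewing vs. age) carries more weight.
X = np.column_stack([z(tv_hours), z(age)])
betas, *_ = np.linalg.lstsq(X, z(vocab), rcond=None)
print("beta for tv viewing:", round(betas[0], 2))
print("beta for age:       ", round(betas[1], 2))
```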

overconfidence bias

- when our predictions, explanations, and guesses tend to feel much more correct than they actually are

Laws

- when principles have enough generality to apply to all situations

Partial correlation

- when we want to hold one variable (e.g., age) constant in order to remove its influence on the correlation between two other variables -ex: tv watching and vocabulary (if we statistically control for age and the correlation drops to .06, we can say with confidence that age is an important third variable) (see the sketch below)
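One common way to compute a partial correlation is to correlate residuals; a minimal sketch follows, with invented data (the .06 in the example comes from the textbook, not from this code):

```python
# Illustrative sketch: partial correlation between tv viewing and vocabulary,
# holding age constant, by correlating residuals. Data values are invented.
import numpy as np

tv_hours = np.array([4.0, 2.5, 6.0, 1.0, 3.5, 5.0, 2.0, 4.5])
age      = np.array([8.0, 10., 7.0, 12., 9.0, 8.0, 11., 9.0])
vocab    = np.array([55., 70., 48., 82., 63., 52., 75., 60.])

def residuals(y, x):
    """Residuals of y after regressing it on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ b

# Correlate what is left of each variable once age has been removed;
# a large drop from the simple r suggests age was an important third variable.
r_partial = np.corrcoef(residuals(tv_hours, age), residuals(vocab, age))[0, 1]
print("partial r (tv viewing, vocab | age):", round(r_partial, 2))
```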

Multiple correlation

- when we want to know whether a measured behavior can be predicted by a number of other measured behaviors, rather than a single predictor -ex: age, amount of tv viewing, and vocabulary: we want to know how well we can predict vocabulary (the criterion variable, Y) from the combination of scores for tv viewing time (X1) and age (X2); R = .61, R² = .37, so 37% of the variability in vocabulary scores can be accounted for by viewing time and age considered together (see the sketch below)
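The multiple correlation R is simply the correlation between the criterion and the scores predicted from all predictors together; here is a minimal sketch with invented data (the R = .61 and R² = .37 figures come from the textbook example, not from these numbers):

```python
# Illustrative sketch: multiple correlation R and R^2 for predicting vocabulary
# (Y) from tv viewing time (X1) and age (X2). Data values are invented.
import numpy as np

X = np.column_stack([
    np.ones(8),                                    # intercept column
    [4.0, 2.5, 6.0, 1.0, 3.5, 5.0, 2.0, 4.5],      # X1: tv viewing time (hypothetical)
    [8.0, 10., 7.0, 12., 9.0, 8.0, 11., 9.0],      # X2: age (hypothetical)
])
y = np.array([55., 70., 48., 82., 63., 52., 75., 60.])   # Y: vocabulary (criterion)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
R = np.corrcoef(y, X @ b)[0, 1]   # R = correlation between Y and predicted Y
print("R   =", round(R, 2))
print("R^2 =", round(R ** 2, 2), "(proportion of variance in Y accounted for)")
```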

Instrumentation Threat (to internal validity)

- whenever some feature of the measuring instrument itself changes during the experiment -ex: a "rubber ruler": unknown to you, the ruler stretches a bit every time you use it, making each consecutive measurement a little more inaccurate -ex: mechanical instruments that change with repeated use -ex: human observers finding one treatment condition more interesting and subsequently paying more attention to it -ex: changes in how written instruments are administered to subjects

Selection threat (to internal validity)

- whenever the researcher does not assign subjects randomly to the different conditions in an experiment (no control for individual differences in subject characteristics)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: depression

--> experimental = participants are randomly assigned to one of two treatment conditions: either a dark room listening to a sad song or a bright room listening to a happy song (nominal scale: two labeled categories) --> measured = "please rate your level of depression after treatment" (not at all depressed ---- very depressed) (interval scale)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: Intoxicated

--> experimental= participants will be randomly assigned to one of three levels, 3 ounces OJ (placebo), 3 ounces beer, 3 ounces vodka (nominal scale, categorical) --> measured= The number of words a participant can recall from a list of ten words (measured on a ratio scale)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: Intelligence

--> experimental = randomly assign participants to one of three levels: all are exposed to a task (a passage), and the task varies in difficulty (easy, moderate, hard) (nominal scale) --> measured = "please rate how well you believe you did on the task" (poor, fair, excellent) (ordinal scale)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: empathy

--> experimental = participants are randomly assigned to one of three conditions (high, mid, low) in which they are shown a video of someone helping a person, somewhat helping a person, or ignoring a person --> measured = ask participants how much time (in minutes out of an hour) they would spend helping someone (ratio scale)

For each of the following terms provide an experimental operational definition (as if the term represented an IV manipulation in experiment) and a measured operational definition (as if term represented DV measured in experiment) TERM: creative

--> experimental = participants are randomly assigned to conditions in which the problem they must solve has one step, two steps, or three steps --> measured = "the strategies I was exposed to are a representation of my creativity through the steps" (not at all -- completely) (interval scale)

Identify the independent variables and levels of the IV in each of the following hypotheses:

-Absence makes the heart grow fonder • IV: absence (levels: separation for 1 day vs. no separation) -It takes longer to recognize a person in a photo seen upside down • IV: photo position (levels: right side up vs. upside down) -People feel sadder in a blue room than a pink room • IV: room color (levels: blue vs. pink)

Describe Field Study approach and give example of how it may be used

-Conducted in a real-life setting; relies on naturalistic observation; be aware of reactivity; can use either obtrusive (risk of reactivity) or unobtrusive measures -Tools include naturalistic observation, unobtrusive measures, and survey instruments such as questionnaires and interviews

Describe the case-study approach and give an example of how it may be used

-Descriptive record of an individual's behavior/experiences (can be longitudinal, but doesn't have to be) -Used to make inferences about an individual's life experiences -Usually medical -The descriptive record supports inferences about developmental processes, the impact of life events, the person's level of functioning, and the causes of disorders -Could use the approach to document a student's growth by creating a portfolio that arranges academic work in chronological order

Explain why IRB's are necessary and what their major functions are

-Ensures that the human subjects involved in studies are protected and that their rights are respected (per the Belmont Report) -Cost-benefit analysis of the potential harm to participants vs. what society will gain from the study (and deciding whether an application is exempt, expedited, or requires full board review) -Whether debriefing adequately addresses any deception by the researcher -Sensitive information must be disclosed in the informed consent, along with the right to withdraw at any time without penalty -Determining whether informed consent is complete, materials are appropriate for the population, and deception, debriefing, anonymity, and confidentiality are handled properly -Evaluates proposed human subjects research studies and determines whether a proposed study puts subjects "at risk" -If it will increase the risk of injury, the IRB must conduct a cost-benefit analysis before approving it

How does power affect the internal validity of your findings? External validity?

-Power = the ability to detect an effect when one is really there (making the correct decision by correctly rejecting a false null hypothesis) -Internal validity (and statistical conclusion validity): higher power increases confidence that a significant result reflects the IV rather than chance and that the findings will hold up alongside other research -External validity: an adequately powered study supports the claim that participants are responding the way other people would (see the sketch below)
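For context, a conventional power analysis asks how many subjects are needed to reach a target power; a minimal sketch, assuming the statsmodels package and conventional values (d = 0.5, alpha = .05, power = .80) that are not from the study set:

```python
# Illustrative sketch: sample size needed per group to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = .05 in a two-group t-test design.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print("subjects needed per group:", round(n_per_group))   # roughly 64
```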

What do we mean by objectivity? How does objectivity influence observation, measurement, and experimentation?

-Lack of bias -Means that our use of the scientific method is not biased by personal beliefs/expectations -Objectivity keeps a scientist "honest" during every aspect of the scientific method: reviewing prior studies, selecting questions for study, framing hypotheses, designing the experiment, running the experiment, obtaining data, analyzing data, interpreting data, and explaining results -Failure to maintain objectivity at any stage can result in false conclusions

How does a psychology experiment demonstrate a cause and effect relationship between antecedent conditions and behavior?

-Manipulation of the antecedent conditions -Observation of the predicted outcome, with procedures in place for the manipulation -Antecedent conditions (treatments) are the circumstances that come before the event or behavior we wish to explain -An experiment is a controlled procedure in which we apply at least 2 different antecedent conditions to our subjects and measure the effects on behavior -Must satisfy two requirements: --> must have procedures for manipulating the setting --> must be able to observe the predicted outcome (the antecedent condition must be the only explanation for the behavior, or else the experiment is confounded) -After we administer the treatments, we measure and compare subject performance across conditions to test the experimental hypothesis

Content validity

-asking whether the content of our measure fairly reflects the content of the quality we are measuring and if all aspects of the content are represented appropriately - high content validity means that the measuring instrument is not evaluating other qualities that we do not intend to measure -ex: instrument to measure depression in college students would have poor content validity if it measured only psychological disturbances and failed to include questions about behavioral disturbances

Problems with correlations (no causation)

-causal direction: when you do not know which variable influenced the other (violent tv --> aggressive tendencies, or aggressive tendencies --> preference for violent tv) - bidirectional causation: when the behaviors affect each other (innate aggressiveness results in more violent tv viewing, while the more exposure a person has, the more aggressive he/she becomes) - third-variable problem: when a third agent is causing the variables to appear related

Define each of the classic threats to internal validity and give example:

-History = outside events that limit the researcher's confidence that the treatment is what changed the DV (ex: in a weight-loss study, one group ate lunch before the final weigh-in) -Maturation = an internal change in the participant; a study with many treatments can fatigue participants, which may affect how they respond to a treatment condition, and participants exposed to too many conditions may learn the study and become more knowledgeable -Statistical regression = when participants are chosen on the basis of extreme scores, that selection interferes with interpreting the treatment, because extreme scores tend to regress toward the mean on retest, so the treatment is not what is causing the change in the DV (ex: looking at how far golfers can drive with a newly created club using two groups, pros and college students; the pros hit farther because they were chosen for their extreme scores, not randomly assigned; match participants on score and then randomly assign to avoid statistical regression) (see the simulation sketch below) -Instrumentation = a change in the measuring instrument or the observer during the study causes differences in scores -Testing effects = taking an earlier test changes performance on a later test, causing differences
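A small simulation (invented numbers, not from the textbook) showing why statistical regression can masquerade as a treatment effect when groups are chosen for extreme scores:

```python
# Illustrative sketch: regression toward the mean with no treatment at all.
# "True ability" is stable; each test score adds new random measurement error.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(100, 10, size=5000)
pretest  = true_ability + rng.normal(0, 10, size=5000)
posttest = true_ability + rng.normal(0, 10, size=5000)   # no treatment between tests

# Select participants on the basis of extreme pretest scores.
extreme = pretest > pretest.mean() + 1.5 * pretest.std()
print("extreme group, pretest mean: ", round(pretest[extreme].mean(), 1))
print("extreme group, posttest mean:", round(posttest[extreme].mean(), 1))
# The posttest mean falls back toward 100 even though nothing was done,
# so an apparent change can be pure statistical regression, not a treatment effect.
```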

Description

-initial step towards understanding any phenomenon - systematic and unbiased account of the observed characteristics of behaviors -ex: grief --> allows us to understand that people who are grieving are likely to be sad and depressed, crying, etc. -ex: Likert scale (example she gave?) -ex: case and field studies (example in book)

Why would non-experimental designs achieve higher external validity?

-non-experimental designs are more often conducted in real-world settings, with a more diverse sample of participants, than experiments are

Dependent Variable

-particular behavior we expect to change because of our experimental treatment -What behavior are you trying to explain? What will you measure to find out whether your IV had an effect? - outcome we are trying to explain - testing effects of IV on DV (different values of IV should produce changes in DV) - need objective measure of the effect of the IV (observable dimension that can be measured again and again)

quasi-experimental (low- high)

-quasi --> "seeming like" - superficially resemble experiments, but lack the required manipulation of antecedent conditions and/or random assignment to conditions - may study the effects of preexisting antecedent conditions (life events or subject characteristics) on behavior --> compare behavioral differences associated with different kinds of subjects (schizophrenic children vs. non-schizophrenic children) --> naturally occurring situations (being raised in a one- or two-parent home) --> a wide range of unusual events (birth of a sibling, surviving a hurricane) - treatments are either selected life events or preexisting antecedent conditions (can be used in both lab and field settings) - use quasi-experimental designs when subjects cannot be randomly assigned to conditions (unless other antecedent conditions that can influence the outcome are carefully controlled, the study will not be high in internal validity) -ex: might compare Alzheimer's disease in patients who have used ibuprofen since age 50 and those who have not

Operational Definition

-specifies the precise meaning of a variable within an experiment (defines variables in terms of observable operations, procedures, and measurements) - specifies what the variable means in the context of the experiment - each IV and DV has two definitions: one conceptual definition used in everyday language and one operational definition used when carrying out the experiment - the definition of each variable may change from one experiment to another - make sure procedures are stated clearly enough to enable other researchers to replicate our findings

History Threat example

-testing two different weight-loss programs in which subjects are exposed to the treatments during daily group meetings -Assess the benefits of each program by measuring the weight of participants at the end of the 7-day program -After weighing, find that individuals in group A lost 4 pounds and individuals in group B lost 2 pounds -Make sure the "history" of both groups before weighing was the same (for instance, what if group B ate lunch before weighing while group A did not?)

face validity

-when an assessment or test appears to do what it claims to do -ex: effects of pupil size: it is easy to know whether we are using a valid experimental operational definition because we use a standard measuring device (a ruler) -ex: response time (DV) can be a valid measure of attitude importance, even though the connection between time and attitude strength is not readily apparent - considered the least stringent type of validity because it does not provide any real evidence - the procedure is self-evident; we do not need to convince people that a ruler measures width

Classic threats to internal validity

1) history threat 2) maturation threat 3) testing threat 4) instrumentation threat 5) statistical regression 6) selection threat 7) subject mortality 8) selection interactions

Constructing surveys

1) identify specific research objectives (be as specific as possible; look at previous literature on the topic) 2) decide on the degree of imposition of units (degree of response description, open vs. closed questions) --> nominal, ordinal, interval, ratio 3) decide how you will analyze the survey data

Constructing questions

1) keep items simple and unambiguous; avoid double negatives 2) avoid double-barreled questions: ones that require responses about two or more unrelated ideas (ex: "How useful will this textbook be for students and young professionals in the field?") 3) use exhaustive response choices, meaning they need to contain all possible options (bad example: occupational status: --> full-time employment --> full-time student --> part-time student --> unemployed --> retired; what if more than one applies? what if none applies?)

Four main properties of Correlation Coefficients

1) linearity (interval and ratio scales) 2) sign (positive or negative correlation) 3) magnitude (strength of the correlation, up to 1.00; line of best fit) 4) probability - always carry correlation coefficients out to two decimal places (see the sketch below)
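A minimal sketch of reading off sign, magnitude, and probability from a Pearson correlation, assuming SciPy is available; the data values are invented for illustration:

```python
# Illustrative sketch: sign, magnitude, and probability of a Pearson r.
import numpy as np
from scipy.stats import pearsonr

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # hypothetical data
exam_score    = np.array([52, 58, 55, 63, 70, 68, 75, 80])  # hypothetical data

r, p = pearsonr(hours_studied, exam_score)
print(f"r = {r:+.2f}")   # sign (+/-) and magnitude, carried to two decimal places
print(f"p = {p:.3f}")    # probability of an r this extreme if the true correlation were zero
```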

Control most often achieved by..

1) random assignment of subjects to the different treatment conditions (or sometimes using a within-subjects design) (see the sketch below) 2) presenting the treatment conditions in an identical manner to all subjects 3) keeping the environment, procedures, and measuring instruments constant for all subjects in the experiment so that the treatment conditions are the only things allowed to change
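A minimal sketch of random assignment (the participant IDs, seed, and the blue/yellow-paper condition labels are illustrative only), showing how shuffling spreads preexisting subject differences across conditions by chance:

```python
# Illustrative sketch: random assignment to two treatment conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical subjects
random.seed(42)                                      # fixed seed so the example repeats
random.shuffle(participants)

conditions = {
    "blue paper":   participants[:10],
    "yellow paper": participants[10:],
}
for condition, group in conditions.items():
    print(condition, "->", group)
```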

Robert Rosenthal's 3 important reasons why poorly designed research can be unethical

1) students, teachers, and administrators time will be taken from potentially more beneficial educational experiences 2) poorly designed research can lead to unwarranted and inaccurate conclusions that may be damaging to the society that directly or indirectly pays for the research 3) allocating time and money to poor-quality science will keep those finite resources from better-quality science

Describing research activities

1) the degree of manipulation of antecedent conditions (ex: tracking subjects' typical diet is low in degree of manipulation vs. placing subjects on fixed diets, which is high in degree of manipulation) 2) the degree of imposition of units: refers to the degree to which the researcher constrains or limits subjects' responses (ex: observing a group of teenagers and their behaviors is low in imposition of units vs. asking "how much time do you spend listening to hip hop music?", which is high in imposition of units) - experiments are typically high in manipulation of antecedent conditions and high in imposition of units - non-experimental designs are typically low in manipulation of antecedent conditions and can vary in imposition of units

