Psych 301W- Exam 2
Response Set
- A consistent way of answering all the questions, regardless of content. Example items (rated 1-4): "I feel that I am a person of worth," "I take a positive attitude toward myself," "I feel like I have many good qualities," and "I often feel like a failure" (worded the opposite way so that people have to actually read the questions - to avoid a response set, try switching up the questions, the wording, or the scale)
convergent validity
-An empirical test of the extent to which a self-report measure correlates with other measures of a theoretically similar construct. See also discriminant validity. How closely a test is related to other tests that measure the same (or similar) constructs. EX: Sample questions include: "I always eat a healthy diet," and "I take out my bad mood on others now and then." Both scales were administered to a sample of university students. Convergent validity was then assessed by calculating a correlation between scores on the two measures - For new disgust sensitivity survey - "fear of contamination" survey (Correlation .52) - Obsessive-compulsive symptoms (correlation .41) - Trait Anxiety survey (correlation .33)
Measuring attitude
Open-ended vs. forced-choice format. Open-ended questions can get lots of information/detail from someone, e.g., "What is your opinion of Taylor Swift's music?"
Variables can be measured on different scales; 4 types of variables:
1. Nominal (categorical) - categories with no order: marital status, regular church attendance (yes/no), ice cream flavour 2. Ordinal (categorical) - have an order, but no precise, equal spacing between levels 3. Interval (quantitative) - equal intervals, but 0 is not the starting point (either no 0, or if there is a 0 it doesn't mean much) - temperature in Fahrenheit, calendar year, time of day 4. Ratio (quantitative) - have more precision (know where everything goes) - pin down the precise result with numbers. EX: height, % of Americans who own guns, amount of sleep you got in the past 24 hours - on a ratio scale, 0 is the starting point and means the absence of something (you cannot have a negative percentage; it starts at 0 and goes up; if you didn't get any sleep, you got 0 hours, not a negative amount)
3 principles (the Tuskegee Syphilis study)
1. Respect for persons - treat people as autonomous agents, able to make their own decisions with full knowledge of the good/bad. Some people can't make decisions for themselves no matter how much information you give them (young infants, people with disabilities) - try to explain the research to them in a way they understand so that they are able to protect themselves. 2. Beneficence - minimize harm as much as possible: potential physical risks (pain/discomfort), psychological risks (anxiety, becoming severely upset or embarrassed), and social risks (if participants tell you something embarrassing, you can't then tell others, because you can't risk their social standing), weighed against the benefits of the research (benefit to society). 3. Justice - the people you are studying should be from the same group that could benefit from the research. You can't recruit unfairly, e.g., testing an experimental drug only on people below the poverty line when the drug's potential benefit applies to everyone; you can't single out a specific group when the research is widely applicable. EXAMPLE: First, the men were not treated respectfully. The researchers lied to them about the nature of their participation and withheld information (such as penicillin as a cure for the disease). In so doing, they did not give the men a chance to make a fully informed decision about participating in the study. If they had known in advance the true nature of the study, some might still have agreed to participate but others might not. After the men died, the doctors offered a generous burial fee to the families, mainly so they could be sure of doing autopsy studies. These low-income families may have felt coerced into agreeing to an autopsy only because of the large payment. Second, the men in the study were harmed. They and their families were not told about a treatment for a disease that, in the later years of the study, could be easily cured.
(Many of the men were illiterate and thus unable to learn about the penicillin cure on their own.) They were also subjected to painful and dangerous tests. Third, the researchers targeted a disadvantaged social group in this study. Syphilis affects people from all social backgrounds and ethnicities, yet all the men in the study were low-income African American men.
CATEGORICAL VS. QUANTITATIVE VARIABLES
1. a variable whose levels are categories (e.g., male and female) 2. a variable whose values can be recorded as meaningful numbers (height and weight); IQ score, level of brain activity, and amount of salivary cortisol are also quantitative variables. Ordinal: the data can be categorized and ranked. Interval: the data can be categorized and ranked, and are evenly spaced. Ratio: the data can be categorized, ranked, and evenly spaced, and have a natural zero.
Data Fabrication VS Data falsification
1. occurs when, instead of recording what really happened in a study, researchers invent data that fit their hypotheses. 2. occurs when researchers influence a study's results, perhaps by selectively deleting observations from a data set or by influencing their research subjects to act in the hypothesized way. EX: he admitted that at first he changed occasional data points (data falsification), but later he found himself typing in entire datasets to fit his and his students' hypotheses (data fabrication).
construct validity, external validity, internal validity, statistical validity.
1. the extent to which your test or measure accurately assesses what it's supposed to. 2. how well the outcome of a research study can be expected to apply to other settings. 3. the degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables. 4. the extent to which the conclusions drawn from a statistical test are accurate and reliable.
Observer bias
A bias that occurs when observer expectations influence the interpretation of participant behaviours or the outcome of the study. Instead of rating behaviors objectively, observers rate behaviors according to their own expectations or hypotheses
Observer Effect
A change in behavior of study participants in the direction of observer expectations. Also called expectancy effect.
Reactivity
A change in behaviour of study participants (such as acting less spontaneously) because they are aware they are being watched.
Cronbach's alpha
A correlation-based statistic that measures a scale's internal reliability. Also called coefficient alpha. The closer Cronbach's alpha is to 1.0, the better the scale's reliability. For self-report measures, researchers are looking for Cronbach's alpha of .80 or higher
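The formula behind Cronbach's alpha is alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch in Python, using made-up ratings (not data from any real scale):

```python
# Cronbach's alpha: internal reliability of a multi-item scale.
# Hypothetical data: 5 respondents answering a 4-item survey (1-5 ratings).
scores = [
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [2, 2, 1, 2],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
]

def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    k = len(data[0])                 # number of items on the scale
    items = list(zip(*data))         # one tuple of scores per item
    item_var = sum(variance(it) for it in items)
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 2))  # prints 0.96 for this toy data
```

An alpha of .96 here would indicate very high internal reliability (well above the .80 benchmark), which makes sense because the toy respondents answer all four items similarly.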
Average inter-item correlation
A measure of internal reliability for a set of items; it is the mean of all possible correlations computed between each item and the others.
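A sketch of how the average inter-item correlation could be computed, again on invented item scores (the Pearson-correlation helper is written out by hand purely for illustration):

```python
# Average inter-item correlation: mean of all unique pairwise
# correlations between items. Hypothetical 3-item survey, 5 respondents.
items = [
    [5, 4, 2, 3, 5],   # item 1 scores across respondents
    [4, 4, 2, 3, 5],   # item 2
    [5, 4, 1, 4, 5],   # item 3
]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# All unique item pairs: (0,1), (0,2), (1,2)
pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
avg_r = sum(pearson_r(items[i], items[j]) for i, j in pairs) / len(pairs)
print(round(avg_r, 2))  # prints 0.91 for this toy data
```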
ratio scale
A quantitative measurement scale in which the numerals have equal intervals and the value of zero truly means "none" of the variable being measured. See also interval scale, ordinal scale
Interval scale
A quantitative measurement scale that has no "true zero," and in which the numerals represent equal intervals (distances) between levels (e.g., temperature in degrees). See also ordinal scale, ratio scale.
ordinal scale
A quantitative measurement scale whose levels represent a ranked order, and in which distances between levels are not equal (e.g., order of finishers in a race). See also interval scale, ratio scale.
negatively worded question
A question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity
confidential study
A research study in which identifying information is collected, but protected from disclosure to people other than the researchers. See also anonymous study.
anonymous study
A research study in which identifying information is not collected, thereby completely protecting the identity of participants. See also confidential study.
response set
A shortcut respondents may use to answer items in a long survey, rather than responding to the content of each item. Also called nondifferentiation. Rather than thinking carefully about each question, people might answer all of them positively, negatively, or neutrally. Response sets weaken construct validity because these survey respondents are not saying what they really think.
Correlation Coefficient r
A single number, ranging from -1.0 to 1.0, that indicates the strength and direction of an association between two variables. The value of r can fall only between 1.0 and -1.0. When the relationship is strong, r is close to either 1 or -1; when the relationship is weak, r is closer to zero. An r of 1.0 represents the strongest possible positive relationship, and an r of -1.0 represents the strongest possible negative relationship. If there is no relationship between two variables, r will be .00 or close to .00 (e.g., .02 or -.04).
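A hand-computed sketch of Pearson's r in Python, using hypothetical sleep and mood numbers (not real data):

```python
# Pearson correlation coefficient r between two variables.
# Hypothetical data: hours of sleep vs. mood rating for 6 people.
x = [4, 6, 7, 5, 8, 9]   # hours of sleep
y = [3, 5, 6, 4, 8, 8]   # mood rating (1-10)

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Numerator: sum of products of deviations from each mean.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # Denominator: product of the square roots of the sums of squares.
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(x, y), 2))  # prints 0.98: a strong positive association
```

With made-up data this strong, the dots on a scatterplot would fall close to a straight upward-sloping line.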
Masked design
A study design in which the observers are unaware of the experimental conditions to which participants have been assigned. Also called blind design.
forced-choice question
A survey question format in which respondents give their opinion by picking the best of two or more options.
Likert Scale
A survey question format using a rating scale containing multiple response options anchored by the specific terms strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. A scale that does not follow this format exactly is called a Likert-type scale.
Semantic Differential format
A survey question format using a response scale whose numbers are anchored with contrasting adjectives. EX: The five-star rating format that Internet rating sites (like Yelp) use is another example of this technique (Figure 6.2). Generally one star means "poor" or (on Yelp) "Eek! Methinks not," and five stars means "outstanding" or even "Woohoo! As good as it gets!"
double-barrelled question
A type of question in a survey or poll that is problematic because it asks two questions in one, thereby weakening its construct validity.
leading question
A type of question in a survey or poll that is problematic because its wording encourages one response more than others, thereby weakening its construct validity. (leading to a particular response)
criterion validity
An empirical form of measurement validity that establishes the extent to which a measure is associated with a behavioural outcome with which it should be associated. Evaluates how accurately a test measures the outcome it was designed to measure. EX: if employees take an IQ test, the boss would like to know whether this test predicts actual job performance.
discriminant validity
An empirical test of the extent to which a self-report measure does not correlate strongly with measures of theoretically dissimilar constructs. Also called divergent validity. See also convergent validity. The extent to which a test is not related to other tests that measure different constructs. EX: Self-esteem VS Intelligence - they shouldn't highly correlate with an IQ test.
principle of Justice
An ethical principle from the Belmont Report calling for a fair balance between the kinds of people who participate in research and the kinds of people who benefit from it. See also principle of beneficence, principle of respect for persons.
Principle of Respect for persons
An ethical principle from the Belmont Report stating that research participants should be treated as autonomous agents and that certain groups deserve special protection. See also principle of beneficence, principle of justice.
principle of beneficence
An ethical principle from the Belmont Report stating that researchers must take precautions to protect participants from harm and to promote their well-being. See also principle of justice, principle of respect for persons.
unobtrusive observation
An observation in a study made indirectly, through physical traces of behaviour, or made by someone who is hidden or is posing as a bystander.
acquiescence
Answering "yes" or "strongly agree" to every item in a survey or interview. Also called yea-saying. Acquiescence can threaten construct validity because instead of measuring the construct of true feelings of well-being, the survey could be measuring the tendency to agree or the lack of motivation to think carefully.
BDI
Beck Depression Inventory (measures the intensity of depression) - a 21-item self-report scale with items that ask about the major symptoms of depression: - Mood - Diminished pleasure/interests - Fatigue - Problems concentrating - Sleep disturbances - Appetite How would you measure a person's Disgust Sensitivity? Food-related: "I might be willing to try eating horse meat" Animals: "It would bother me to see a rat run across the street" Body products
Institutional Animal Care and Use Committee (IACUC)
IACUC must approve any animal research project before it can begin - The IACUC requires researchers to submit an extensive protocol specifying how animals will be treated and protected. The IACUC application also includes the scientific justification for the research: Applicants must demonstrate that the proposed study has not already been done and explain why the research is important.
Elements of informed consent
Competence—only competent adults 18+ can give informed consent; they need to have all their mental faculties intact (someone without that capacity would not be considered competent). Assent—agreement to participate; the child still has to agree to do the research. Even if the parent says they can do it, if the child seems not to want to do it, then you cannot do it; they need to agree regardless of what the parent says. Disclosure—disclose the right to withdraw at any time, and disclose contact information. Comprehension—explain everything in a way participants can understand (the standard is about a 9th-grade reading level, so that anyone can understand)—no big words. Voluntary—they are choosing to do this (you can't make people participate; they need to choose to do it). Right to withdraw—they can stop at any time. - Exemptions from informed consent (if participants are not put at any sort of risk, then informed consent may not be required)
some KEY issues
Confidentiality—data are going to stay private; nothing personal will be revealed (privacy). In an anonymous study, responses are kept secret: you cannot connect a response to who gave it; in other (confidential) studies you can always connect it. Debriefing—after the study is over, you should tell the participants what the study was about (don't leave deceived participants upset—give feedback, see how they react, always ask how they feel, and let them know they can contact counselling services). Deception—deceiving participants (you shouldn't be doing it; if you do, you have to give a very good reason why). Informed consent—covers potential risk/harm and potential benefit (not necessarily benefiting the person, but benefiting the population). Always tell participants that they can leave at any time and that additional resources are available to help them through whatever emotions they are feeling.
faking good
Giving answers on a survey (or other self-report measure) that make one look better than one really is. Also called socially desirable responding.
socially desirable responding
Giving answers on a survey (or other self-report measure) that make one look better than one really is. Also called faking good.
faking bad
Giving answers on a survey (or other self-report measure) that make one look worse than one really is.
What ethical concerns have been raised regarding the Milgram obedience studies?
Milgram claimed that his results—65% obedience—surprised him (Milgram, 1974; but see Perry, 2013). Experts at the time predicted that only 1-2% of people would obey the experimenter up to 450 volts. In interviews years later, some participants reported worrying for weeks about the learner's welfare.
fence sitting
Playing it safe by answering in the middle of the scale for every question in a survey or interview. Fence sitters can weaken a survey's construct validity when middle-of-the-road scores suggest that some responders don't have an opinion, though they actually do.
Belmont Report (has 3)
Principle of respect for persons (two provisions): 1. Treated as autonomous agents - they should be free to make up their own minds about whether they wish to participate in the study (every participant is entitled to the precaution of informed consent: each person learns about the research project, considers its risks and benefits, and decides whether to participate). Researchers are not allowed to mislead people about a study's risks and benefits. 2. Special protection - respect for persons states that some people have less autonomy, so they are entitled to special protection when it comes to informed consent. For example, children, people with intellectual or developmental disabilities, and prisoners should be protected, according to the Belmont Report. Children and certain other individuals might be unable to give informed consent because of not understanding the procedures involved well enough to make a responsible decision. Justice - When the principle of justice is applied, it means that researchers consider the extent to which the participants involved in a study are representative of the kinds of people who would also benefit from its results. If researchers decide to study a sample from only one ethnic group or only a sample of institutionalized individuals, they must demonstrate that the problem they are studying is especially prevalent in that ethnic group or in that type of institution. - They must also consider how the community might benefit or be harmed. Will a community gain something of value from the knowledge this research is producing? Will there be costs to a community if this research is not conducted? The Tuskegee Syphilis study failed to treat participants, subjected them to risky and invasive medical tests, and harmed participants' families by exposing them to untreated syphilis.
Animal Care Guidelines and the Three Rs
Replacement means researchers should find alternatives to animals in research when possible. For example, some studies can use computer simulations instead of animal subjects. Refinement means researchers must modify experimental procedures and other aspects of animal care to minimize or eliminate animal distress. Reduction means researchers should adopt experimental designs and procedures that require the fewest animal subjects possible.
Encouraging Accurate Responses
Socially desirable responding - presenting ourselves in a positive or socially acceptable way. On sensitive topics, people are most likely not going to answer honestly; if you want to get the truth, word the question differently.
3 types of reliability
Test-retest reliability - the consistency in results every time a measure is used. Test-retest reliability can apply whether the operationalization is self-report, observational, or physiological, but it's most relevant when researchers are measuring constructs (such as intelligence, personality, or gratitude) that are theoretically stable. Interrater reliability - the degree to which two or more coders or observers give consistent ratings of a set of targets. Suppose you are assigned to observe the number of times each child smiles in 1 hour at a childcare playground. Your lab partner is assigned to sit on the other side of the playground and make their own count of the same children's smiles. If, for one child, you record 12 smiles during the first hour and your lab partner also records 12 smiles in that hour for the same child, there is interrater reliability. Any two observers watching the same children at the same time should agree about which child has smiled the most and which child has smiled the least. Interrater reliability is high when observers agree and low when they disagree. Internal reliability - in a measure that contains several items, the consistency in a pattern of answers, no matter how a question is phrased. Also called internal consistency.
Content Validity
The extent to which a measure captures all parts of a defined construct (does it cover all relevant parts of the construct it aims to measure?). To produce valid results, the content of a test, survey, or measurement method must cover all relevant parts of the subject it aims to measure. EXAMPLES: Job proficiency test - a test for a graphic designer that assesses knowledge of software but ignores understanding of design principles lacks content validity. Mathematics examination - an exam claiming to assess all areas of algebra but only covering linear equations does not have comprehensive content validity. Does a survey appear to measure all the important aspects of a construct? - Intelligence - Depression (Beck's Depression Inventory: mood, diminished pleasure/interests, fatigue, problems concentrating, sleep disturbances, appetite) How would you measure a person's Disgust Sensitivity? Food-related: "I might be willing to try eating horse meat" Animals: "It would bother me to see a rat run across the street" Body products
Face Validity
The extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question. EX: Does a survey appear, on the surface, to be measuring what it claims to? - Refers to items or questions on a survey - Does it seem, on its face, like a good measure? Items to measure self-esteem (agree/disagree): "I have a low opinion of myself" - assessing whether a test of happiness genuinely measures levels of happiness
Observational research
The process of watching people or animals and systematically recording how they behave or what they are doing.
informed consent
The right of research participants to learn about a research project, know its risks and benefits, and decide whether to participate. - In most studies, informed consent is obtained by providing a written document that outlines the procedures, risks, and benefits of the research, including a statement about any treatments that are experimental.
Survey Vs Poll
They are the same thing: a method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the Internet. A survey is also called a poll.
Debrief
To inform participants afterward about a study's true nature, details, and hypotheses.
attitude
a positive or negative evaluation about something (it's an opinion—there is always a target that the evaluation is about)
Gambler's fallacy
a psychological misperception - the belief that the next event is influenced by recent independent events
open ended question
a survey question format that allows respondents to answer any way they like.
Scatterplots
can thus be a helpful tool for visualizing the agreement between two administrations of the same measurement (test-retest reliability) or between two coders (interrater reliability). Using a scatterplot, you can see whether the two ratings agree (if the dots are close to a straight line drawn through them) or whether they disagree (if the dots scatter widely from a straight line drawn through them).
known-groups paradigm
Compare 2 groups of people who we'd expect to have different scores. A method for establishing criterion validity, in which a researcher tests two or more groups who are known to differ on the variable of interest, to ensure that they score differently on a measure of that variable.
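The known-groups logic can be sketched in Python, using invented BDI-style scores for two groups already known to differ on depression (these numbers are illustrative, not clinical data):

```python
# Known-groups paradigm sketch: a valid depression measure should
# produce clearly different scores for groups known to differ.
# Hypothetical BDI-style scores (scale range 0-63); made-up data.
diagnosed = [31, 27, 35, 24, 29]       # group known to be depressed
not_diagnosed = [6, 9, 4, 11, 7]       # group known not to be depressed

def mean(xs):
    return sum(xs) / len(xs)

gap = mean(diagnosed) - mean(not_diagnosed)
# A large gap between the known groups supports the measure's validity.
print(round(mean(diagnosed), 1), round(mean(not_diagnosed), 1), round(gap, 1))
# prints: 29.2 7.4 21.8
```

If the two known groups scored about the same, the measure would fail the known-groups test regardless of how reliable it was.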
Ethical Decision Making
does not involve simple yes-or-no decisions; it requires a balance of priorities. Consider the potential benefits of research ("will it contribute something important to society?")
IRB (institutional review board)
is a committee responsible for interpreting ethical principles and ensuring that research using human participants is conducted ethically.
Emotional contagion
is the tendency for emotions to spread in face-to-face interactions. When people express happiness, people around them become happier.
morals
judgement of right and wrong
ethics
judgement of right and wrong, but with its own specific rules or topic (e.g., medical ethics: what's okay and what's not okay?)
Conceptual Definition (or construct)
The researcher's definition of the variable in question at a theoretical level.
research ethics
own set of rules (most research needs approval from IRB-- Institutional review board)
Reliability
Refers to how consistent the results of a measure are; the consistency of the results of a measure.
validity
Refers to whether the operationalization is measuring what it is supposed to measure; the appropriateness of a conclusion or decision. See also construct validity, external validity, internal validity, statistical validity.
Operational definition
represents a researcher's specific decision about how to measure or manipulate the conceptual variable.
Deception
researchers actively lied to participants—deception through commission. -Deceiving research participants by lying to them or by withholding information is, in many cases, necessary in order to obtain meaningful results. - When researchers have used deception, they must spend time after the study talking with each participant in a structured conversation. In a debriefing session, the researchers describe the nature of the deception and explain why it was necessary. Emphasizing the importance of their research, they attempt to restore an honest relationship with the participant.
confidential study
researchers collect some identifying information (for contacting people at a later date if needed) but prevent it from being disclosed. They may save data in encrypted form or store people's names separately from their other data.
anonymous study
researchers do not collect any potentially identifying information, including names, birthdays, photos, and so on.
3 common types of measures
self-report - operationalizes a variable by recording people's answers to questions about themselves in a questionnaire or interview. observational - sometimes called a behavioral measure, operationalizes a variable by recording observable behaviors or physical traces of behaviors. For example, a researcher could operationalize happiness by observing how many times a person smiles. physiological - operationalizes a variable by recording biological data, such as brain activity, hormone levels, or heart rate. Physiological measures usually require the use of equipment to amplify, record, and analyze biological data. For example, moment-to-moment happiness has been measured using facial electromyography (EMG)—a way of electronically recording tiny movements in the muscles of the face.
APA (American Psychological Association) - has 5
A set of guidelines governing three common roles of psychologists: as research scientists, as educators, and as practitioners (usually therapists). A) Beneficence and nonmaleficence - treat people in ways that benefit them; do not cause suffering; conduct research that will benefit society. B) Fidelity and responsibility - establish relationships of trust; accept responsibility for professional behaviour (in research, teaching, and clinical practice). C) Integrity - strive to be accurate, truthful, and honest in one's role as researcher, teacher, or practitioner. D) Justice - strive to treat all groups of people fairly; sample research participants from the same populations that will benefit from the research; be aware of biases. E) Respect for people's rights and dignity - recognize that people are autonomous agents; protect people's rights, including the right to privacy, the right to give consent for treatment or research, and the right to have participation treated confidentially; understand that some populations may be less able to give autonomous consent, and take precautions against coercing such people.
AWA (Animal Welfare Act)
signed in 1966. applies to many species of animals in research laboratories and other contexts, including zoos and pet stores
leading questions
whether the wording encourages a particular response (trying to lead people in one direction or the other). "Do you think people should be able to smoke cigarettes anywhere they please, regardless of how they affect the health of others?" -- negatively worded, leading respondents toward the answer the asker wants.