Psych 310 Midterm 1
Understand the scientific method
- Assume a natural cause - Make a hypothesis (an educated guess) - Test it - Revise the hypothesis - Retest - Draw a conclusion
Understand the role of the IRB
A committee responsible for interpreting ethical principles and ensuring that research using human participants is conducted ethically.
Know the basic layout of a journal article and the purposes of each section
Abstract - a concise summary of the article, about 120 words long. Discusses the hypotheses, method, and major results; can help you decide whether the article is what you are looking for. Introduction - the first section of regular text. The first paragraphs typically explain the topic of the study, the middle paragraphs lay out the background of the research, and the final paragraph states the specific research questions, goals, or hypotheses for the current study. Method - explains in detail how the researchers conducted their study; contains subsections such as participants, materials, procedure, and apparatus. Results - describes the quantitative and, as relevant, qualitative results of the study, including the statistical tests the authors used to analyze the data; usually provides tables and figures. Discussion - summarizes the study's research question and methods and indicates how well the results supported the hypotheses; usually discusses the study's importance, may consider alternative explanations for the data, and may pose interesting questions raised by the research. References - contains a full bibliographic listing of all the sources the authors cited in writing their article, enabling interested readers to locate these studies.
know the difference between basic and applied research
Applied research - is done with a practical problem in mind; the researchers conduct their work in a particular real-world context. Basic research - in contrast, is not intended to address a specific, practical problem; the goal is to enhance the general body of knowledge.
Understand the different forms of biases, including - availability heuristic, present/present bias, confirmation bias, and bias blindspot
Availability heuristic - things that come to mind easily tend to guide our thinking. Present/present bias - our failure to consider appropriate comparison groups; we often don't remember to seek out information that isn't readily available, and sometimes the absence of something is itself informative. Confirmation bias - the tendency to look only at information that agrees with what we already believe. Bias blind spot - the belief that we are unlikely to fall prey to the biases described above; we believe that we are being objective while others are biased.
when should we trust what authority figures tell us?
Before taking the advice of authorities, ask yourself about the source of their ideas. If they refer to evidence, they may be worthy of attention, but keep in mind that not all research is equally reliable.
Know the 5 General Principles of the APA Ethics Code
Beneficence and nonmaleficence - Treat people in ways that benefit them. Do not cause suffering. Conduct research that will benefit society. Fidelity and responsibility - Establish relationships of trust; accept responsibility for professional behavior. Integrity - Strive to be accurate, truthful, and honest in one's role as researcher, teacher, or practitioner. Justice - Strive to treat all groups of people fairly. Sample research participants from the same populations that will benefit from the research. Be aware of biases. Respect for people's rights and dignity - Recognize that people are autonomous agents. Protect people's rights, including the right to privacy, the right to give consent for treatment or research, and the right to have participation treated confidentially. Understand that some populations may be less able to give autonomous consent, and take precautions against coercing such people.
In terms of operationalization, understand the three levels of hypotheses
Conceptual - State expected relationships among concepts Research - Concepts are operationalized so that they are measurable Statistical - State the expected relationship between or among summary values of populations, called parameters
Know what elements are necessary when making a causal claim
Covariance - The study's results show that as A changes, B changes; e.g., high levels of A go with high levels of B, and low levels of A go with low levels of B. Temporal precedence - The study's method ensures that A comes first in time, before B. Internal validity - The study's method ensures that there are no plausible alternative explanations for the change in B; A is the only thing that changed.
Know criterion validity, convergent validity, and Discriminant validity
Criterion validity - evaluates whether the measure under consideration is associated with a concrete behavioral outcome that it should be associated with, according to the conceptual definition. Convergent validity - the measure correlates strongly with measures of theoretically similar constructs. Discriminant validity - the measure correlates weakly (or not at all) with measures of theoretically dissimilar constructs.
Know the fundamentals of critical thinking, including how it relates to the n of one fallacy
Critical thinkers - avoid oversimplifying - consider alternative explanations - tolerate uncertainty - maintain an air of skepticism while staying open minded (i.e., skeptical, not cynical) - n of one fallacy - drawing conclusions/generalizations from anecdotal evidence (a single case)
Understand the danger or necessity regarding rewards for participation, deception, and debriefing.
Deception - when researchers withhold information about the study from participants. Omission - leaving things out. Commission - actively lying to participants. Is deception ethical? Researchers must still uphold respect for persons by informing participants of the study's activities, risks, and benefits; the principle of beneficence also applies. Debriefing - when researchers have used deception, they must spend time after the study talking with each participant in a structured conversation. Rewards - can help incentivize participation and create a more diverse sample, but if the reward is too large it can become coercive and undermine voluntary consent.
Know the difference between descriptive and explanatory research
Descriptive research - involves trying to answer "what" questions, such as "What is going on?" or "What does a population look like?" Explanatory research - involves trying to answer "why" questions.
The five tenets of Science
Determinism - the belief that events have natural causes. Empiricism - reliance on real evidence to confirm or refute claims. Replicability - findings must be replicable before they are accepted. Falsifiability - hypotheses and theories must be capable of being proven false through empirical research. Parsimony - the simplest adequate explanation for a phenomenon should be preferred.
Understand the difference between empiricism and reasoning
Empiricism - the empirical method, or empirical research, involves using evidence from the senses (sight, hearing, touch) or from instruments that assist the senses; very systematic and reliable. Reasoning - using logic to derive an answer --- rational thought - thinking with reason --- major premise - all BYU students are smart --- minor premise - Cosmo is a BYU student --- conclusion - Cosmo is smart
Recognize the difference between face validity, construct validity, and content validity
Face validity - whether the measure seems to be a reasonable measure of the variable. Construct validity - whether the measure is measuring the underlying construct (the operational definition of the construct); determined by how well the measure of a variable fits into a theory. Content validity - the degree to which a measure assesses all the dimensions of the construct.
Know the difference between the three types of claims, and be able to recognize examples
Frequency - a proportion or count (e.g., how many students are international?). Association - variables are related, correlated, etc. - Positive association - study time and test score - Negative association - partying and test score - Zero association - GPA and dance skills. Causal - variable A causes a change in variable B.
Know the book's definition of science, its four objectives
Science - a way of acquiring knowledge through empiricism and reason. Four objectives: - To describe - description of the subject matter; in psychology, the subject matter is human (or group) behavior and mental processes (what) - To explain - explain the trends that have been observed (why) - To predict - make predictions from the explanation; if we can describe something, then we can try to predict it. If predictions are not confirmed, the explanation is considered faulty and must be revised - To control - attempt to control and apply the phenomena
Understand the importance of confidentiality, informed consent, being careful when working with special populations, and avoiding multiple relationships
Informed consent - the researcher's obligation to explain the study to potential participants in everyday language and give them a chance to decide whether to participate, usually by providing a written document that outlines the procedures, risks, and benefits of the research. It may not be necessary if the study will not cause harm and takes place in an educational setting. Researchers must inform people whether the data they provide will be treated as private and confidential. Confidentiality - any information gathered in your research should remain confidential. Special populations - e.g., pregnant women, people with disabilities, children. Multiple relationships - multiple roles exist between a therapist and a client; examples of dual relationships are when the client is also a student, friend, family member, employee, or business associate of the therapist.
Understand some accuracy limitations of self-reporting
In some cases, however, self-reports can be inaccurate, especially when people are asked to describe why they are thinking, behaving, or feeling the way they do. When asked, most people willingly provide an explanation or an opinion to a researcher, but sometimes they unintentionally give inaccurate responses. Poor memory can also cause wrong answers on self-reports.
know the difference between independent and dependent variables
Independent - the variable doing the influencing Dependent - the variable being influenced
How is information derived from research different from information derived from personal experience? How do they differ in method?
Information from research - researchers train themselves to avoid biases and control outside variables in order to produce truly empirical findings; they create comparison groups and strive to be objective. Information from personal experience - experience has no comparison group, and experience is confounded. Research is probabilistic, and it is a better guide than personal experience.
Understand the principles behind socially desirable responding and response set, and know ways to prevent/detect it
Issues with response set --- Acquiescence (yea-saying: agreeing with every item) --- Fence sitting (I can't decide). Socially desirable responding --- faking good --- faking bad
Know ways to prevent observer bias, observe effects, and reactivity
Masked (blind) design - the observers are unaware of the purpose of the study and of the conditions to which participants have been assigned. First and foremost, careful researchers train their observers well. Researchers can assess the construct validity of a coded measure by using multiple observers. To reduce reactivity, researchers can observe unobtrusively, wait until participants become accustomed to being watched, or measure the results of a behavior rather than the behavior itself.
Recognize the difference between a measured and manipulated variable, and a conceptual and operational variable
Measured variable - one whose levels are simply observed and recorded (e.g., IQ, height, gender, hair color). Manipulated variable - a variable a researcher controls, usually by assigning study participants to the different levels of that variable. Conceptual variable - an abstract concept, such as "spending time socializing" or "school achievement"; sometimes known as a construct. Operational variable - to operationalize means to turn a concept of interest into a measured or manipulated variable.
Know how negative wording, leading questions, and double barreled questions may affect survey results
Negative wording - whenever a question contains negative phrasing, it can cause confusion, thereby reducing the construct validity of a survey or poll. Leading question - one whose wording leads people toward a particular response, for example by suggesting the expected answer. Double-barreled question - one that asks two questions in one; double-barreled questions have poor construct validity because people might be responding to the first half of the question, the second half, or both.
Understand the difference between nominal, ordinal, ratio, and interval variables
Nominal - differ in name (e.g., gender, eye color, job type); qualitative. Ordinal - vary in order of quantity (e.g., first, second, and third place in a race); 1st place isn't twice as good as 2nd place (or half as good); non-parametric statistical analyses; quantitative. Interval - the intervals between the values of the variable are equal (e.g., IQ scores), but there is no true zero, so a score of 2 isn't twice (or half) a score of 1; can use parametric statistical analyses; quantitative. Ratio - like interval variables but with a true zero point (e.g., temperature in Kelvin); can use the greatest variety of statistical analyses; quantitative.
Understand the effect observer bias, observer effects, and reactivity may have on the construct validity
Observer bias - when observers' expectations influence their interpretation of the participants' behavior or the outcome of the study. Observer effects - observers inadvertently change the behavior of those they are observing, such that participant behavior changes to match observer expectations. Reactivity - a change in behavior when study participants know another person is watching.
Understand the pros and cons of open-ended questions vs. forced-choice questions
Open-ended - allows respondents to provide answers in their own words; often summarized with content analysis; yields qualitative data; can be hard to interpret reliably and has limited generalizability. Forced-choice - easier to get usable data, but are you missing something? Formats include true/false or yes/no, fixed-alternative (multiple choice) with set options, and rating or Likert scales.
Know the definition of research and why we do it
Research - the diligent and systematic inquiry or investigation into a subject in order to discover or revise facts, theories, and applications. Producers - need to know how to correctly design studies in order to give information to the world. Consumers - need to know how to tell what information is useful and what is not; this can be crucial for a future career (e.g., knowing which therapies are effective).
Know the difference between quantitative and qualitative research, and be able to recognize examples
Quantitative research - measures differences in amount (e.g., of behavior). Qualitative research - describes differences in kind or quality (e.g., of behavior).
Understand the difference between qualitative and quantitative (nominal) variables
Quantitative variables - coded with meaningful numbers. --- Types: Ordinal scale - applies when the numerals of a quantitative variable represent a ranked order. Interval scale - applies when the numerals of a quantitative variable meet two conditions: first, the numerals represent equal intervals between levels; second, there is no true zero. Ratio scale - applies when the numerals of a quantitative variable have equal intervals and the value of 0 truly means "none" or nothing. Qualitative variables - differ in quality or type.
Know the difference between rating and Likert scales
Rating - Fixed alternative questions where the respondent indicates magnitude on a scale. Likert - Statements in which the respondent is asked to indicate the degree he or she agrees or disagrees with the statement.
Understand internal validity, external validity, content validity, and statistical validity
Validity - refers to the appropriateness of a conclusion or decision; in general, a valid claim is reasonable, accurate, and justifiable. Internal validity - the study's method rules out plausible alternative explanations for the results. External validity - whether the association between the two variables would generalize to other people and settings. Statistical validity - the extent to which the data support the conclusions; it is important to ask about the strength of an association and its statistical significance (the probability that the results could have been obtained by chance if there really is no relationship). Content validity - a measure must capture all parts of a defined construct.
Know the basics of conducting ethical animal research
Replacement - means researchers should find alternatives to animals in research when possible. For example - some studies can use computer simulations instead of animal subjects Refinement - researchers must modify experimental procedures and other aspects of animal care to minimize or eliminate animal distress Reduction - means researchers should adopt experimental designs and procedures that require the fewest animal subjects possible
Know the steps of conducting research
Selecting a research topic - Generating a testable hypothesis - Identifying/classifying variables - Selecting an appropriate design - Planning the study - Conducting the study - Analyzing the results - Drawing conclusions - Sharing the findings
Pros and Cons of: Self-report vs. observational measures vs. physiological measures vs. open ended questions
Self-report measures - participants answer questions (surveys, interviews, rating scales) - pros - inexpensive and efficient; direct access to what people think and feel - cons - subject to memory errors and socially desirable responding. Observational measures - behavior is observed, then coded by researchers; captures what people actually do rather than what they say they do. Physiological measures - objective measurements are taken (biomarkers, brain scans, BMI, heart rate) - pros - objective, but they have to be checked against something else. The measures work best when all of them converge. Open-ended questions - not as efficient, but give rich information.
Know the types of response sets and socially desirable responding
Socially desirable responding - (faking good) giving answers that respondents think are appropriate or that they think the researchers might want to hear. Response set - giving only moderate answers or always agreeing or disagreeing. Acquiescence - or yea-saying; occurs when people say "yes" or "strongly agree" to every item instead of thinking carefully about each one. Fence sitting - playing it safe by answering in the middle of the scale, especially when survey items are controversial.
Recognize differences between the three types of reliability
Test-retest - The researcher gets consistent scores every time he or she uses the measure. - Shows consistent scores across different occasions of testing. Interrater reliability - Consistent scores are obtained no matter who measures the variable - Two or more independent observers will come up with consistent or very similar answers. Internal reliability - A study participant gives a consistent pattern of answers, no matter how the researcher has phrased the question. - Applies only to self-report scales with multiple items.
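Test-retest and interrater reliability are usually quantified with a correlation coefficient. A minimal stdlib-only Python sketch (the five score pairs are made-up numbers for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation: the usual index for test-retest and
    interrater reliability (values near 1 indicate high reliability)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# The same five people measured on two occasions (test-retest).
time1 = [10, 14, 9, 16, 12]
time2 = [11, 15, 9, 17, 11]
r = pearson_r(time1, time2)
print(round(r, 2))  # close to 1.0, indicating good test-retest reliability
```

For interrater reliability the two lists would instead hold two observers' codings of the same participants.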
Why does question order matter and what are order effects?
The order in which questions are asked can also affect the responses to a survey. Earlier questions can change the way respondents understand and answer later questions.
Know how to avoid survey issues
To avoid socially desirable responding, a researcher might ensure that participants know their responses are anonymous, perhaps by conducting the survey online or, in the case of an in-person interview, by reminding people of their anonymity right before asking sensitive questions. Researchers can also ask people's friends to rate them. Another option is to include special survey items that identify socially desirable responders, so their answers to the target items can be interpreted with caution. Finally, researchers increasingly use special computerized measures to evaluate people's implicit opinions about sensitive topics.
Know the different ways we obtain knowledge/what forms our beliefs.
Tradition or tenacity - it is true because it has always been that way (gut feeling). Authority - I believe it is true because an expert says it's true (professors, blogs). Personal experience - I believe it is true because I experienced it. Empiricism - things we can measure. Reasoning - using logic to derive an answer. Rational thought - thinking with reason.
Know the difference between Type I and Type II Error
Type I error - a false positive (concluding there is an effect when there isn't one). Type II error - a false negative (missing an effect that is really there).
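Both error rates can be demonstrated by simulation. In the stdlib-only sketch below (the sample size, effect size, and trial count are arbitrary illustrative choices), a simple z-test is run on many simulated two-group studies: when no real effect exists, every significant result is a Type I error; when a real effect exists, every non-significant result is a Type II error.

```python
import math
import random

def two_sample_z_p(a, b):
    """Two-sided p-value for a difference in means (z-test, known SD = 1)."""
    diff = sum(a) / len(a) - sum(b) / len(b)
    se = math.sqrt(1 / len(a) + 1 / len(b))
    return math.erfc(abs(diff / se) / math.sqrt(2))  # two-sided tail area

random.seed(0)
n, trials, alpha = 30, 2000, 0.05

# Type I: both groups come from the SAME population, so any
# "significant" result is a false positive.
false_pos = sum(
    two_sample_z_p([random.gauss(0, 1) for _ in range(n)],
                   [random.gauss(0, 1) for _ in range(n)]) < alpha
    for _ in range(trials))

# Type II: a real 0.5-SD effect exists, so any non-significant
# result is a miss (false negative).
misses = sum(
    two_sample_z_p([random.gauss(0.5, 1) for _ in range(n)],
                   [random.gauss(0, 1) for _ in range(n)]) >= alpha
    for _ in range(trials))

print(false_pos / trials)  # hovers near alpha = .05
print(misses / trials)     # equals 1 - power for this design
```

Note how the Type I rate is pinned near alpha by the significance threshold, while the Type II rate depends on the design (effect size and sample size).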
Know the ways psychologists might measure behavior in an observational study
Typical ways that psychologists measure observable behavior - Accuracy - responses are either right or wrong - Frequency - how often a behavior occurs in a specified period of time - Latency - speed of onset - Duration - how long the behavior lasts - Amplitude - size of response - Choice selection - frequency of choice between alternatives
Understand plagiarism
Usually defined as representing the ideas or words of others as one's own: "the appropriation of another person's ideas, processes, results, or words without giving appropriate credit." To avoid plagiarism, a writer must cite the sources of all ideas that are not his or her own, to give appropriate credit to the original authors.
Know the difference between reliability and validity
Validity - accuracy (the measure captures what it is supposed to measure). Reliability - precision (the measure gives consistent results).
Know the concept of a p value
We quantify this "likelihood" through a p-value: the probability of obtaining results at least as extreme as those observed if chance alone were operating -- generally we say that p < .05 indicates a significant difference
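The p-value idea can be made concrete with a permutation test: shuffle the group labels many times and count how often chance alone produces a difference at least as large as the one observed. A minimal stdlib-only sketch (the two score lists are made-up numbers for illustration):

```python
import random

def perm_p(group_a, group_b, n_perms=10000, seed=1):
    """Permutation p-value: the share of random label shuffles that
    produce a mean difference at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    na = len(group_a)
    hits = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        left, right = pooled[:na], pooled[na:]
        diff = abs(sum(left) / len(left) - sum(right) / len(right))
        if diff >= observed:
            hits += 1
    return hits / n_perms

# Hypothetical quiz scores under two study conditions (made-up numbers).
a = [88, 92, 85, 91, 90, 87]
b = [78, 82, 80, 75, 84, 79]
p = perm_p(a, b)
print(p)
```

With these hypothetical scores the observed 9-point gap is almost never matched by a random shuffle, so the p-value comes out well below .05.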
Know the basic publication process (including peer review process and reviewed blind)
Write the "manuscript", submit it to a journal, the editor sends it to two or more reviewers, and the reviewers read the manuscript and make comments in a "timely" fashion. Peer review - the process whereby the editor of a journal sends submitted manuscripts out for review by other researchers in the same field of study. Reviewed blind - reviewers are not aware of whose work they are reviewing, and the author does not know who the reviewers are. Possible outcomes - accept with no changes (rare), accept with minor changes, revise and resubmit (most common), reject. The editor then sends the decision to the author.
Know the ways researchers might ask questions, including multiple choice, rating scale, and Likert scale.
Multiple choice - the respondent selects one answer from a set of listed options (often four). Rating scale - fixed-alternative questions where the respondent indicates magnitude on a scale; measures an amount or frequency of a behavior. Likert scale - statements with which the respondent indicates the degree to which he or she agrees or disagrees; designed to measure an attitude or opinion.
Why might an observational study design be better than a self-report measure in some cases?
Self-report questions can be excellent measures of what people think they are doing and of what they think is influencing their behavior. But if you want to know what people are really doing or what really influences behavior, you should probably watch them. Three examples of how observational methods have been used to answer research questions in psychology: hockey moms and dads, how much people talk, and observing families in the evening.
Know the difference between a scale, an inventory, and a test, and be able to recognize examples of each
Test - refers to many procedures used to measure a variable --- intelligence tests, aptitude tests, achievement tests. Scale - refers to a measure of a specific psychological characteristic --- depression, anxiety, or aspects of intelligence. Inventory - used to describe interests or personality -- sometimes numbers don't make sense ;) -- but a collection of traits does --- ENTJ makes sense to describe personality characteristics, but 47.3 doesn't.
Know the concept of power and how to increase it (see chapter 6 as well)
Power - the probability of detecting a real effect (i.e., of not missing it). A powerful test is more likely to find significant differences IF THEY ARE ACTUALLY THERE. Ways to increase power: be careful about how you measure your variables; use more powerful statistical analyses; use designs that provide good control over extraneous variables; increase your sample size, which reduces error variance; maximize the strength of the treatment manipulation.
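The effect of sample size on power can be seen with the standard normal-approximation formula for a two-group comparison at alpha = .05: power is roughly Phi(d * sqrt(n/2) - 1.96), where d is the effect size in standard-deviation units. A stdlib-only sketch (the effect size and sample sizes below are illustrative choices):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def power_two_group(effect_size, n_per_group):
    """Approximate power of a two-group z-test at alpha = .05.

    Uses the normal approximation and ignores the negligible chance
    of rejecting in the wrong direction.
    """
    z_effect = effect_size * math.sqrt(n_per_group / 2)
    return phi(z_effect - 1.96)

# Power rises steadily with sample size for a medium effect (d = 0.5).
for n in (20, 50, 100, 200):
    print(n, round(power_two_group(0.5, n), 2))
```

Running this shows power climbing toward 1.0 as n grows, which is why increasing sample size is the most commonly cited way to raise power; increasing the effect size (a stronger manipulation) raises it as well.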