Liberal Studies Quiz Psych FSU
How is science a safeguard against bias?
Confirmation Bias. To protect themselves against bias, good scientists adopt procedural safeguards against errors, especially errors that could work in their favor. In other words, scientific methods are tools for overcoming confirmation bias: the tendency to seek out evidence that supports our beliefs and deny, dismiss, or distort evidence that contradicts them.
What conclusions are appropriate when there is or is not statistical significance?
The decision is framed around the null hypothesis: the hypothesis that there is no significant difference between the specified populations, any observed difference being due to sampling or experimental error. When a result is statistically significant, the appropriate conclusion is to reject the null hypothesis and conclude that a real difference or relationship likely exists. When a result is not statistically significant, we fail to reject the null hypothesis: we cannot conclude that a difference exists, but we also have not proven that there is none.
What biases does it guard against?
Personal biases, especially confirmation bias.
What is an illusory correlation?
The phenomenon of perceiving a relationship between variables (typically people, events, or behaviors) even when no such relationship exists. For example, a child makes a record number of goals in a soccer game when wearing his red socks, so he continues to wear his red socks for each future game, believing that the socks are related to his play (a good luck charm).
What is reciprocal determinism?
theory set forth by psychologist Albert Bandura that a person's behavior both influences and is influenced by personal factors and the social environment. Bandura accepts the possibility of an individual's behavior being conditioned through the use of consequences.
What are the potential pitfalls in experimental design?
Underestimating the amount of time that an experiment will take; lack of time management; lack of strategy; lack of management.
Which fallacies are essential to keep in mind when evaluating psychological claims?
1) Emotional Reasoning Fallacy: the error of using our emotions as guides for evaluating the validity of a claim. 2) Bandwagon Fallacy: the error of assuming that a claim is correct just because many people believe it. 3) Not Me Fallacy: the error of believing that we're immune from errors in thinking that afflict other people.
How is a correlational study conducted?
1. The first of the three types, natural observation, is observing and recording variables in a natural environment, without interfering. For example, you might observe student class attendance in order to predict grade success. This type of research is often used when lab experimentation is not possible or ethical. However, it can be time consuming and does not allow variable control. 2. Survey research consists of gathering information via surveys or questionnaires by choosing a random sample of participants. For example, if you've ever filled out a satisfaction survey on a new product in a mall, you've participated in survey research. Those surveys are used to predict whether a new item will be successful. Survey research is quick and convenient, but participants can affect the outcomes in a variety of ways. 3. Lastly, archival research is analyzing data collected by others. For example, you might look at archive records to predict how crime statistics influence local economics. Archive research is often free. However, large amounts of data are needed in order to see any type of significant relationship. Researchers cannot control the data or how it was gathered.
What is the difference between a conceptual and operational definition?
A conceptual definition tells you what the concept means, while an operational definition only tells you how to measure it. A conceptual definition tells what your constructs are by explaining how they are related to other constructs.
What's the difference between statistical and practical significance?
After a researcher gathers data for a study, the data typically go into a statistical test. The results of the test include a p-value, or significance value. The most common choice for a statistical significance level is .05, which means that the probability of obtaining the observed relationship by random chance alone is below 5 percent. The significance level helps a researcher determine whether to reject the null hypothesis, the hypothesis that states there is no relationship between the variables. Practical significance shows that the results of the study are meaningful beyond being unlikely to be due to chance. To test for practical significance, researchers use effect sizes, measures of association, and confidence intervals, explains Dr. Connie Schmitz of the University of Minnesota Medical School. The effect size measures the magnitude of the change in the dependent variable that is due to the independent variable. Measures of association vary by the type of statistical test and show the strength of the relationship between variables. Confidence intervals estimate the probability that the results apply to the larger population instead of just the sample.
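As a rough illustration (a minimal Python sketch, not from the course materials; the exam scores and the use of numpy/scipy are assumptions made only for demonstration), here is how both kinds of significance might be checked for a hypothetical caffeine-and-exam-scores study like the one mentioned under confounds below:

# Minimal sketch: statistical vs. practical significance (hypothetical data)
import numpy as np
from scipy import stats

caffeine = np.array([78, 85, 90, 74, 88, 82, 91, 79])  # exam scores, caffeine group
control = np.array([75, 80, 84, 72, 86, 79, 83, 77])   # exam scores, control group

# Statistical significance: two-sample t-test against the null hypothesis
t, p = stats.ttest_ind(caffeine, control)
print("p =", round(p, 3), "-> reject null" if p < .05 else "-> fail to reject null")

# Practical significance: effect size (Cohen's d with a pooled standard deviation)
pooled_sd = np.sqrt((caffeine.var(ddof=1) + control.var(ddof=1)) / 2)
d = (caffeine.mean() - control.mean()) / pooled_sd
print("Cohen's d =", round(d, 2))  # roughly: 0.2 small, 0.5 medium, 0.8 large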
What are descriptive and inferential statistics?
Descriptive statistics uses the data to provide descriptions of the population, either through numerical calculations or graphs or tables. Inferential statistics makes inferences and predictions about a population based on a sample of data taken from the population in question.
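For example (a small Python sketch with made-up numbers; the sample values and scipy functions are assumptions used only to illustrate the contrast):

# Descriptive statistics summarize the sample; inferential statistics generalize to the population
import numpy as np
from scipy import stats

sample = np.array([6.2, 7.1, 5.8, 6.9, 7.4, 6.5, 5.9, 7.0])  # hypothetical hours of sleep

# Descriptive: describe only the data in hand
print("sample mean =", round(sample.mean(), 2), "sample SD =", round(sample.std(ddof=1), 2))

# Inferential: estimate the population mean with a 95% confidence interval
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=sample.mean(), scale=stats.sem(sample))
print("95% CI for the population mean:", round(low, 2), "to", round(high, 2))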
Why is intuition/common sense good?
Intuition is useful when: expedient decision making and rapid response are required and the circumstances leave you no time for a complete rational analysis; change is fast-paced and the factors on which you base your analysis shift rapidly; the problem is poorly structured and the factors and rules you need to take into account are hard to articulate in an unambiguous way; you have to deal with ambiguous, incomplete, or conflicting information; or there is no precedent.
Why is intuition/common sense bad
Intuition and common sense can easily mislead us: they are vulnerable to confirmation bias, illusory correlations, and other errors in thinking, and they give us no systematic way to check whether our hunches are actually correct. That is why scientific safeguards are needed.
What are internal validity and external validity?
Internal validity refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor. In other words, there is a causal relationship between the independent and dependent variable. Internal validity can be improved by controlling extraneous variables, using standardized instructions, counterbalancing, and eliminating demand characteristics and investigator effects. External validity refers to the extent to which the results of a study can be generalized to other settings (ecological validity), other people (population validity) and over time (historical validity). External validity can be improved by setting experiments in a more natural setting and using random sampling to select participants.
Why do we need research designs?
Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together. With experiments, researchers actively make changes in one variable and watch for changes in another variable. Experiments allow researchers to make causal inferences. Other types of methods include longitudinal and quasi-experimental designs. Many factors, including practical constraints, determine the type of methods researchers use. Often researchers survey people even though it would be better, but more expensive and time consuming, to track them longitudinally.
What are the steps of the scientific method?
Make an Observation: Scientists are naturally curious about the world. While many people may pass by a curious phenomenon without sparing much thought for it, a scientific mind will take note of it as something worth further thought and investigation.
Form a Question: After making an interesting observation, a scientific mind itches to find out more about it. This is in fact a natural phenomenon. If you have ever wondered why or how something occurs, you have been listening to the scientist in you. In the scientific method, a question converts general wonder and interest into a channeled line of thinking and inquiry.
Form a Hypothesis: A hypothesis is an informed guess as to the possible answer to the question. The hypothesis may be formed as soon as the question is posed, or it may require a great deal of background research and inquiry. The purpose of the hypothesis is not to arrive at the perfect answer to the question but to provide a direction for further scientific investigation.
Conduct an Experiment: Once a hypothesis has been formed, it must be tested. This is done by conducting a carefully designed and controlled experiment. The experiment is one of the most important steps in the scientific method, as it is used to support or refute a hypothesis and to formulate scientific theories. In order to be accepted as scientific evidence for a theory, an experiment must meet certain conditions: it must be controlled, i.e. it must test a single variable by keeping all other variables under control. The experiment must also be reproducible so that it can be tested for errors.
Analyze the Data and Draw a Conclusion: As the experiment is conducted, it is important to note down the results. In any experiment, it is necessary to conduct several trials to ensure that the results are consistent. The experimenter then analyzes all the data and uses it to draw a conclusion regarding the strength of the hypothesis. If the data support the hypothesis, the original question is answered. On the other hand, if the data contradict the hypothesis, the scientific inquiry continues: researchers do further background work to form a new hypothesis and then conduct an experiment to test it. This process goes on until a hypothesis is well supported by scientific experiments.
Explain why natural correlations cannot be used to infer causality due to the possibility of the third variable problem/correlation-causation fallacy.
Correlation does not imply causation. If two variables A and B are naturally correlated, A may cause B, B may cause A (reverse causation), or some third variable C may influence both of them (the third-variable problem). Because a natural correlation by itself cannot distinguish among these possibilities, it cannot be used to infer causality.
What is the descriptive method for conducting research?
Observational Method: With the observational method (sometimes referred to as field observation) animal and human behavior is closely observed. There are two main categories of the observational method — naturalistic observation and laboratory observation. The biggest advantage of the naturalistic method of research is that researchers view participants in their natural environments. This leads to greater ecological validity than laboratory observation, proponents say. Ecological validity refers to the extent to which research can be used in real-life situations. Proponents of laboratory observation often suggest that due to more control in the laboratory, the results found when using laboratory observation are more meaningful than those obtained with naturalistic observation. Laboratory observations are usually less time-consuming and cheaper than naturalistic observations. Of course, both naturalistic and laboratory observation are important in regard to the advancement of scientific knowledge.
Case Study Method: Case study research involves an in-depth study of an individual or group of individuals. Case studies often lead to testable hypotheses and allow us to study rare phenomena. Case studies should not be used to determine cause and effect, and they have limited use for making accurate predictions. There are two serious problems with case studies — expectancy effects and atypical individuals. Expectancy effects include the experimenter's underlying biases that might affect the actions taken while conducting research. These biases can lead to misrepresenting participants' descriptions. Describing atypical individuals may lead to poor generalizations and detract from external validity.
Survey Method: In survey method research, participants answer questions administered through interviews or questionnaires. After participants answer the questions, researchers describe the responses given. In order for the survey to be both reliable and valid it is important that the questions are constructed properly. Questions should be written so they are clear and easy to comprehend.
Identify and explain the basic principles and specific rules within psychology of ethical research using human and animal participants.
Principle A: Competence
Psychologists strive to maintain high standards of competence in their work. They recognize the boundaries of their particular competencies and the limitations of their expertise. They provide only those services and use only those techniques for which they are qualified by education, training, or experience. Psychologists are cognizant of the fact that the competencies required in serving, teaching, and/or studying groups of people vary with the distinctive characteristics of those groups. In those areas in which recognized professional standards do not yet exist, psychologists exercise careful judgment and take appropriate precautions to protect the welfare of those with whom they work. They maintain knowledge of relevant scientific and professional information related to the services they render, and they recognize the need for ongoing education. Psychologists make appropriate use of scientific, professional, technical, and administrative resources.
Principle B: Integrity
Psychologists seek to promote integrity in the science, teaching, and practice of psychology. In these activities psychologists are honest, fair, and respectful of others. In describing or reporting their qualifications, services, products, fees, research, or teaching, they do not make statements that are false, misleading, or deceptive. Psychologists strive to be aware of their own belief systems, values, needs, and limitations and the effect of these on their work. To the extent feasible, they attempt to clarify for relevant parties the roles they are performing and to function appropriately in accordance with those roles. Psychologists avoid improper and potentially harmful dual relationships.
Principle C: Professional and scientific responsibility
Psychologists uphold professional standards of conduct, clarify their professional roles and obligations, accept appropriate responsibility for their behavior, and adapt their methods to the needs of different populations. Psychologists consult with, refer to, or cooperate with other professionals and institutions to the extent needed to serve the best interests of their patients, clients, or other recipients of their services. Psychologists' moral standards and conduct are personal matters to the same degree as is true for any other person, except as psychologists' conduct may compromise their professional responsibilities or reduce the public's trust in psychology and psychologists. Psychologists are concerned about the ethical compliance of their colleagues' scientific and professional conduct. When appropriate, they consult with colleagues in order to prevent or avoid unethical conduct.
Principle D: Respect for people's rights and dignity
Psychologists accord appropriate respect to the fundamental rights, dignity, and worth of all people. They respect the rights of individuals to privacy, confidentiality, self-determination, and autonomy, mindful that legal and other obligations may lead to inconsistency and conflict with the exercise of these rights. Psychologists are aware of cultural, individual, and role differences, including those due to age, gender, race, ethnicity, national origin, religion, sexual orientation, disability, language, and socioeconomic status. Psychologists try to eliminate the effect on their work of biases based on those factors, and they do not knowingly participate in or condone unfair discriminatory practices.
Principle E: Concern for others' welfare
Psychologists seek to contribute to the welfare of those with whom they interact professionally. In their professional actions, psychologists weigh the welfare and rights of their patients or clients, students, supervisees, human research participants, and other affected persons, and the welfare of animal subjects of research. When conflicts occur among psychologists' obligations or concerns, they attempt to resolve these conflicts and to perform their roles in a responsible fashion that avoids or minimizes harm. Psychologists are sensitive to real and ascribed differences in power between themselves and others, and they do not exploit or mislead other people during or after professional relationships.
Principle F: Social responsibility
Psychologists are aware of their professional and scientific responsibilities to the community and the society in which they work and live. They apply and make public their knowledge of psychology in order to contribute to human welfare. Psychologists are concerned about and work to mitigate the causes of human suffering. When undertaking research, they strive to advance human welfare and the science of psychology. Psychologists try to avoid misuse of their work. Psychologists comply with the law and encourage the development of law and social policy that serve the interests of their patients and clients and the public. They are encouraged to contribute a portion of their professional time for little or no personal advantage.
What is pseudoscience and why is it not the same as science?
Pseudoscience "promises easy fixes to life's problems and challenges" where science does not. A main difference between the two is the lack of empirical data used to back up the claims in Pseudoscience. Science uses research evidence (empirical data) to strongly support claims and theories that are proposed whereas pseudoscience uses unsupported popular opinion thus taking the validity away from it. In addition to popular opinion as support to claims that take away pseudoscience's validity, so does its vulnerability to critical thinking concepts. Psychological critical thinking concepts 1-8 highlight further the flaws of pseudoscience and aren't highlighted in science. (For Example: Examining the Evidence, Critical Thinking #3
How is random assignment related to confounds?
Psychologists rely on random assignment to assign subjects to different groups in an experiment. Random assignment leaves it completely up to chance to determine which subjects receive the critical part of the experiment, which is imperative for determining that the independent variable is indeed what creates the result. Randomly assigning subjects helps to eliminate confounding variables, or variables other than the independent variable that could cause a change in the dependent variable.
What is the difference between basic(pure) and applied research?
Pure research is driven by interest or curiosity in the relationships between two or more variables. When an individual is interested in learning simply for learning's sake, she is conducting pure research. For example, someone interested in financial markets and investor behavior may watch the stock market to gain a better understanding of how markets move. This type of research is generally not economically profitable, but it may provide a catalyst for applied research that leads to future breakthroughs. Applied research is used to solve a specific, practical problem of an individual or group. This type of research is used in a wide number of fields, including medicine, education, agriculture and technology. Examples of applied research include studying the behavior of children to determine the effectiveness of various interventions, looking into the relationship between genetics and cancer, or testing the waters of a river to determine what types of contaminants are making their way into a municipal water supply.
Be able to identify research method from a description of a study.
Given a description of a study, be able to recognize which method it uses: an experiment (a variable is manipulated and participants are randomly assigned to groups), a correlational design (variables are measured as they naturally occur, e.g. naturalistic observation, survey, or archival research), a descriptive design (observational, case study, or survey methods), or a quantitative analysis such as correlation/regression analysis or a meta-analysis.
What is random assignment and why is it important?
Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment (e.g., a treatment group versus a control group) using randomization, such as by a chance procedure (e.g., flipping a coin) or a random number generator.
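A minimal sketch of such a chance procedure (in Python; the participant IDs and group sizes are made up for illustration):

# Random assignment: chance alone decides who goes into each group
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)              # the chance procedure (like repeated coin flips)
half = len(participants) // 2
treatment_group = participants[:half]     # e.g., receives the treatment
control_group = participants[half:]       # e.g., receives a placebo
print("treatment:", treatment_group)
print("control:", control_group)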
What is random selection?
Random selection refers to how sample members (study participants) are selected from the population for inclusion in the study. Random assignment is an aspect of experimental design in which study participants are assigned to the treatment or control group using a random procedure.
principles of scientific thinking
1) Ruling out rival hypotheses: have other plausible explanations for the finding been considered and excluded, or is only one explanation offered? p.22
2) Correlation isn't causation: the fact that two things are related does not show that one causes the other. p.22
3) Falsifiability: the claim must be capable of being proven wrong. p.24
4) Replicability: the findings must be able to be duplicated when the study is repeated. p.24
5) Extraordinary claims require extraordinary evidence: the more a claim contradicts what we already know, the stronger the evidence for it must be. p.25
6) Occam's razor: if two explanations account equally well for a phenomenon, prefer the simpler one. p.26
What features are required of an experiment?
The sample groups must be selected randomly from the population. There must be a viable control group. Only one variable can be manipulated and tested (it is possible to test more than one, but such experiments and their statistical analysis tend to be cumbersome and difficult). The tested subjects must be randomly assigned to either the control or the experimental group.
What is critical/scientific thinking?
Critical/scientific thinking is that mode of thinking — about any subject, content, or problem — in which the thinker improves the quality of his or her thinking by skillfully analyzing, assessing, and reconstructing it. Critical thinking is self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem-solving abilities, as well as a commitment to overcome our native egocentrism and sociocentrism. In short, it is the objective analysis and evaluation of an issue in order to form a judgment.
How is an experiment conducted?
Stage One: After deciding upon a hypothesis, and making predictions, the first stage of conducting an experiment is to specify the sample groups. These should be large enough to give a statistically viable study, but small enough to be practical. Ideally, groups should be selected at random, from a wide selection of the sample population. This allows results to be generalized to the population as a whole. In the physical sciences, this is fairly easy, but the biological and behavioral sciences are often limited by other factors. For example, medical trials often cannot find random groups. Such research often relies upon volunteers, so it is difficult to apply any realistic randomization. This is not a problem, as long as the process is justified, and the results are not applied to the population as a whole. If a psychological researcher used volunteers who were male students, aged between 18 and 24, the findings can only be generalized to that specific demographic group within society.
Stage Two: The sample groups should be divided, into a control group and a test group, to reduce the possibility of confounding variables. This, again, should be random, and the assigning of subjects to groups should be blind or double blind. This will reduce the chances of experimental error, or bias, when conducting an experiment. Ethics are often a barrier to this process, because deliberately withholding treatment, as with the Tuskegee study, is not permitted. Again, any deviations from this process must be explained in the conclusion. There is nothing wrong with compromising upon randomness, where necessary, as long as other scientists are aware of how, and why, the researcher selected groups on that basis.
Stage Three: This stage of conducting an experiment involves determining the time scale and frequency of sampling, to fit the type of experiment. For example, researchers studying the effectiveness of a cure for colds would take frequent samples, over a period of days. Researchers testing a cure for Parkinson's disease would use less frequent tests, over a period of months or years.
Stage Four: The penultimate stage of the experiment involves performing the experiment according to the methods stipulated during the design phase. The independent variable is manipulated, generating a usable data set for the dependent variable.
Stage Five: The raw data from the results should be gathered, and analyzed, by statistical means. This allows the researcher to establish if there is any relationship between the variables and accept, or reject, the null hypothesis.
These steps are essential to providing excellent results. Whilst many researchers do not want to become involved in the exact processes of inductive reasoning, deductive reasoning and operationalization, they all follow the basic steps of conducting an experiment. This ensures that their results are valid.
What are positive and negative correlations?
The direction of a correlation is either positive or negative. In a positive correlation, the variables move in the same direction: as one variable increases, the other also increases (for example, height and weight). In a negative correlation, the variables move in inverse, or opposite, directions. In other words, as one variable increases, the other variable decreases. For example, there is a negative correlation between self-esteem and depression.
What are independent and dependent variables?
The two main variables in an experiment are the independent and dependent variable. An independent variable is the variable that is changed or controlled in a scientific experiment to test the effects on the dependent variable. A dependent variable is the variable being tested and measured in a scientific experiment. If a scientist conducts an experiment to test the theory that a vitamin could extend a person's life-expectancy, then: The independent variable is the amount of vitamin that is given to the subjects within the experiment. This is controlled by the experimenting scientist. The dependent variable, or the variable being affected by the independent variable, is life span.
What are confounds?
Confounds are factors other than the independent variable that may cause a result. In your caffeine study, for example, it is possible that the students who received caffeine also had more sleep than the control group. Or, the experimental group may have spent more time overall preparing for the exam. Those factors - sleep and extra preparation - could also create a result that has nothing to do with the caffeine. You cannot be sure that the caffeine caused your result instead of the confounding variables. One way to avoid this type of confound is to randomly assign people to the experimental and control groups.
What is a correlation?
association - more precisely it is a measure of the extent to which two variables are related.
What is the difference between a theory and a hypothesis?
A hypothesis is an attempt to explain phenomena. It is a proposal, an educated guess used to understand and/or predict something. A theory is the result of testing a hypothesis and developing an explanation that is assumed to be true about something.
Identify and define the three measures of central tendency and two measures of variability.
Central tendency: the mean (arithmetic average), the median (the middle score when the scores are ordered), and the mode (the most frequent score). Variability: the range (highest score minus lowest score) and the standard deviation (the typical amount scores spread around the mean).
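For instance (a quick Python sketch using the built-in statistics module; the quiz scores are hypothetical):

# Three measures of central tendency and two measures of variability
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]         # hypothetical quiz scores

print("mean =", statistics.mean(scores))      # arithmetic average
print("median =", statistics.median(scores))  # middle value when scores are ordered
print("mode =", statistics.mode(scores))      # most frequent value
print("range =", max(scores) - min(scores))   # highest minus lowest
print("standard deviation =", round(statistics.stdev(scores), 2))  # typical spread around the mean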
What do r-values (correlation coefficient values) mean?
The correlation coefficient r measures the strength and direction of a linear relationship between two variables on a scatterplot. The sign of r gives the direction of the relationship and its absolute value gives the strength: exactly -1 is a perfect downhill (negative) linear relationship, exactly +1 is a perfect uphill (positive) linear relationship, values near 0 indicate little or no linear relationship, and values in between are stronger the closer they are to -1 or +1.
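A quick way to see this (a Python sketch with invented paired data; scipy's pearsonr is assumed to be available):

# Computing a correlation coefficient r and reading its sign and magnitude
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score = [55, 60, 58, 65, 70, 72, 78, 85]

r, p = stats.pearsonr(hours_studied, exam_score)
print("r =", round(r, 2))  # positive sign -> uphill relationship; value near 1 -> strong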
Why is it often not used in research?
Because results based on it may not generalize to the larger population and are not always accurate.
What does it mean to say something is multiply determined?
Multiply determined means produced by many factors. One reason that behavior is so difficult to predict is that almost all actions are multiply determined.
What are individual differences?
research typically includes personality, motivation, intelligence, ability, IQ, interests, values, self-concept, self-efficacy, and self-esteem (to name just a few). There are few remaining "differential psychology" programs in the United States, although research in this area is very active.
what is skepticism?
the attitude of doubting knowledge claims set forth in various areas. Skeptics have challenged the adequacy or reliability of these claims by asking what principles they are based upon or what they actually establish. They have questioned whether some such claims really are, as alleged, indubitable or necessarily true, and they have challenged the purported rational grounds of accepted assumptions. In everyday life, practically everyone is skeptical about some knowledge claims; but philosophical skeptics have doubted the possibility of any knowledge beyond that of the contents of directly felt experience.