Qualitative Review Study Guide - Includes article review


What are the strengths of the study? (If nothing else, remember that the use of qualitative questions to triangulate strengthened the results of the study.)

The use of an open-ended question about constraints to teaching through inquiry added depth to the information and likely captured constraints the researchers would not have thought to include as choices (if the question had been multiple choice or Likert). The purpose of the study (better understanding the factors contributing to, supporting, and constraining teacher use of inquiry in teaching science) addressed a practical and relevant educational practice. The survey and study seemed well grounded in the literature review.

Compare and contrast modern and postmodern worldviews

Theories are fragmented today, often based on:
1) Nature of logic used: there is not one structure that works for all
2) Identity, culture, and research: research is situated in time and cultural space
3) Utility and application: alternative tools to traditional empirical methods
4) Both theory and practice are important
5) Role of politics: self-interest is inescapable
6) Bold challenge of postmodernism: all research is in question because so much of it was done by white Western males

Modernism (1650s-1950s): we can reason and understand how the world works; this was empowering because the world had been seen as a mystery revealed by God.
● Epistemology (two dominant approaches): empiricism/science (knowing through sensing) and reason/logic
● Narrowly defined ways of knowing; agree objective truth exists and is attainable
● Source of authority: politics and universities (science: professors, research, etc.)
● Science, logic, and reason will benefit the world
● Enlightenment period (birth of science)
● There is a capital-T Truth, and people will be able to find it with the right tools
● There is one reality, and we can understand it with the right tools

Postmodernism: modernism was too optimistic; there is chaos, and it missed women, people of color, and other marginalized groups. The modernist worldview was based on rationalism. A postmodernist would find that modernist empiricism (there must be empirical evidence) did not recognize knowledge and truth as socially constructed; experiences of non-dominant groups were excluded, overlooked, or looked down upon.
• Epistemology: multiple (no one best or correct way of knowing; some ways are better in certain situations; using multiple ways of knowing helps understand the issue more deeply)
• Sources of authority: deconstruct power; many sources of authority
• Suspicious of meta-narratives (one overarching story)
• Delegitimizes any totalizing or foundationalist account (justified belief, secure foundation of certainty)
• May not want all the details, but wants room for counterexamples and wants all the viewpoints: what does it mean for us?
• Doesn't have a problem with things not adding up
• If it sounds too good to be true, it is!
• Shift from personal interpretation to the importance of discourse
• Characterized by broad skepticism
• Unrestrained science, logic, and reason have hurt the world (e.g., the atomic bomb) and have been used as tools of oppression (e.g., people who use science to claim that white people are smarter than Black people)

Modernism sounds a lot like positivism, but it is more of a worldview, an umbrella term. Modernist thought heavily emphasizes empiricism and logic/reason. In both of these, there is the presumption that it is possible to get to "Truth" or an answer; objectivity is assumed to be possible. On the contrary, postmodernist thought rejects the possibility of an objective truth because context is inescapable. It criticizes modernism for its lack of attention to populations traditionally underserved in research (e.g., women and minorities). It rejects meta-narratives and seeks out counterexamples, though not in pursuit of an alternate truth. It emphasizes the importance of discourse and is characterized by skepticism.

logics in use

"Lenses": a perspective, an individual's view of the world shaped by culture, language, and experiences. Researchers' logics in use will be reflected in any explanations or observations of phenomena. There is no universal rationality or "natural logic" because each researcher is unique.

What are the research questions?

1. To what extent do teachers report preparation in inquiry and employing inquiry in their science teaching?
2. Does teachers' use of inquiry vary by student level?
3. What contextual factors do teachers indicate relate to their inquiry implementation?

What type of design and data collection methods are used in this study?

A non-experimental, descriptive research design was used. Data were collected via surveys, which included both scaled and qualitative (open-ended) questions.

What are appropriate uses of criterion-referenced and norm-referenced test scores?

Criterion-referenced tests are standards-based, and you can be proficient, like the SOL (what should a social studies class know). Norm-referenced tests report percentiles (how you stack up), like the SAT.

Criterion-referenced: measure student performance against a fixed set of predetermined criteria or learning standards (think SOL)
• What test takers can do and what they know, not how they compare to others
• Report how well students are doing relative to a predetermined performance level
• Often used when educators want to see how well students have learned the knowledge and skills they are expected to master
• Content selected based on how well it matches learning outcomes

Norm-referenced: designed to rank test takers on a bell curve; used to compare a person's performance to what is normal for others similar to them (SAT)
• Highlight achievement differences between and among students to produce a rank order
• A representative group is given the test prior to its availability to the public and is considered the normed group
• Content selected based on how well it ranks students (must discriminate between them)
• Used to classify students (in the __ percentile)
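The two scoring frames can be sketched in a few lines of Python. All numbers here are made up for illustration: a hypothetical normed group, cutoff, and student score.

```python
# Criterion-referenced vs. norm-referenced interpretation of the same score.
from bisect import bisect_left

norm_group = sorted([52, 61, 67, 70, 73, 75, 78, 80, 84, 91])  # hypothetical normed sample
student_score = 78
cutoff = 70  # hypothetical proficiency standard (criterion)

# Criterion-referenced: compare to the fixed standard; other students are irrelevant.
proficient = student_score >= cutoff

# Norm-referenced: percentile rank = share of the norm group scoring below the student.
percentile = 100 * bisect_left(norm_group, student_score) / len(norm_group)

print(proficient)   # True
print(percentile)   # 60.0
```

Note that changing the norm group changes the percentile but not the proficiency call, which is exactly the distinction the flashcard draws.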

How do quantitative and qualitative research methods differ?

Educational research often tackles relatively uncharted territory, where a qualitative approach may be more appropriate. A quantitative approach, on the other hand, assumes that precise hypotheses about the problem have already been formulated in terms of clearly defined, measurable variables. Qualitative researchers believe that beliefs are the nature of the information needed to arrive at credible findings and conclusions, that there are multiple realities represented in participant perspectives, and that this context is critical to interpreting the phenomena being investigated. The world in which the research takes place also matters.

Quantitative:
• Data reduced to numbers and analyzed using statistics
• Common methods: surveys, face-to-face interviews, longitudinal studies, polls, systematic observations
• Usually used to test pre-specified concepts/constructs
• Usually less in-depth but more breadth of information across a large number of cases
• Fixed response options; statistical tests used for analysis
• Traditionally views human behavior as highly predictable and explainable
• "Narrow" angle lens: focus on only one or a few causal factors at a time
• Biases are a threat to internal validity
• Scientific method: confirmatory; top-down; hypothesis testing
• Ontology: objective, material, structural, agreed-upon
• Epistemology: scientific realism; search for Truth; justification by empirical confirmation of hypotheses; universal scientific standards
• Common research objectives: quantitative/numerical description, causal explanation, prediction
• Interest: identify general scientific laws; inform national policy
• Nature of observation: study behavior under controlled conditions; isolate the causal effect of single variables
• Form of data collected: quantitative (numerical) data based on precise measurement using structured and validated data-collection instruments
• Nature of data: variables
• Data analysis: identify statistical relationships among variables
• Results: generalizable findings providing representation of an objective outsider viewpoint of populations
• Form of final report: statistical report (statistical analyses; descriptive and inferential statistics)
• Researchers take great care to avoid their own presence, behavior, or attitude affecting the results
• External factors controlled for
• Often associated with the positivist/postpositivist paradigm

Qualitative:
• Describe and analyze using words
• Usually exploratory, to gain an understanding
• Sample size typically small
• Usually more in-depth information on a few cases
• Unstructured or semi-structured response options; can't use statistical tests
• Less generalizable
• Often views human behavior as fluid, changing over time
• Biases must be disclosed
• Scientific method: exploratory; bottom-up; generates/constructs knowledge, grounded theory, hypotheses
• Ontology: subjective, mental, personal, constructed
• Epistemology: relativism; individual and group justification; varying standards
• View of human thought and behavior: situational, social, contextual, personal, unpredictable
• Most common research objectives: qualitative/subjective description, empathetic understanding, exploration
• Interest: understand and appreciate particular groups and individuals; inform local policy
• Focus: wide-angle and deep-angle lens, examining the breadth and depth of phenomena to learn more about them
• Nature of observation: study groups and individuals in natural settings; attempt to understand insiders' views, meanings, perspectives
• Form of data collected: qualitative data such as focus groups, in-depth interviews, participant observation, field notes, open-ended questions; the researcher is the primary data-collection instrument
• Nature of data: words, images, categories, themes
• Data analysis: use descriptive data; search for patterns, themes, and holistic features; appreciate difference/variation
• Results: particularistic findings; provision of insider viewpoints; not generalizable
• Form of final report: narrative report with contextual description and direct quotations from research participants
• Often associated with the social constructivist paradigm, which emphasizes the socially constructed nature of reality
• Attempts to uncover the deeper meaning and significance of human behavior and experience
• Gives participants more freedom than quantitative research to allow for spontaneity; methods may be more open-ended and exploratory
• The "why and how" of decision making, not just the what, where, and when
• Explores thoughts, feelings, opinions, and personal experiences in detail, which can help explore the complexity of an issue

What is the difference between true experimental and quasi experimental designs?

Experimental
• The researcher manipulates the IV to observe its effect on some behavior or process (DV). A true experiment can support cause-and-effect conclusions because of randomization.
• Randomly assign individuals to separate conditions or levels of an IV. You must have random assignment to be a true experiment; it lets you draw stronger cause-effect conclusions because all factors are controlled through manipulation of the IV.

Quasi-experimental
• Can still use groups, but there is no random assignment. In a quasi-experiment the IV could be something that already exists in the population (such as age or gender).
• Uses pre-existing/intact groups
• Cannot make cause-effect conclusions
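The defining difference (the random-assignment step) can be sketched in Python. Participant IDs and group sizes here are hypothetical:

```python
# Random assignment: the step that makes a design a true experiment.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.seed(42)             # fixed seed so the sketch is reproducible
random.shuffle(participants)

treatment = participants[:10]   # one level of the IV
control = participants[10:]     # the other level of the IV

# A quasi-experiment has no shuffle step: the "groups" are pre-existing
# (e.g., two intact classrooms), which is why causal claims are weaker.
print(len(treatment), len(control))
```

Every subject has an equal chance of landing in either condition, which is what spreads extraneous variables evenly across groups.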

What are the main results and findings of the study? Do you think the conclusions reached by the researcher(s) are valid? (if nothing else, remember to look at the first sentence of the recommendations section to answer if you think findings are valid)

Findings showed that teachers with prior training for inquiry had more comfort with inquiry methods. They also found that elementary teachers had slightly less prior training for inquiry but spent a larger percentage of their time using inquiry methods, despite having less time in the day dedicated to the study of science. Teachers with more classroom experience had less training in inquiry-based methods, while less experienced teachers had the most. The non-experimental design made it impossible to make causal statements, so while we know how participants used inquiry, we don't know why some used inquiry more than others. In some sections of the discussion, the authors bordered on making causal claims (they never said "caused," but they used "suggested") without reminding the reader that causal claims cannot be made from this type of data. The authors wrote "we were pleased with the progress on inquiry implementation in the state" (p. 42, first sentence under Recommendations). Progress implies a comparison between two points in time, yet neither the research questions nor the survey questions dealt with change across multiple points in time. This unfounded blanket statement calls the rest of the implications and recommendations into question.

If you were developing a survey instrument what procedures would you employ to enhance the reliability and validity of measurement?

For reliability, you want to know how much error exists: the extent to which scores are free from error. Reliability/precision is about scores, not measures. Sources of error can be internal (such as a sick or tired test taker) or external (poor items, administration). To argue for good reliability, you need to build an argument using evidence, and reliability can change depending on context.

To enhance reliability:
● Test-retest with the same group (the most direct way to estimate reliability)
● Improve the internal consistency of measures: check Cronbach's alpha (which checks agreement of answers targeted to a specific trait) and drop unnecessary questions; you need at least .7, preferably over .8. Split-half reliability correlates the first half with the second half, or odd items with even items.
● Standardize as many procedures as possible, automating when possible
● Train observers and anyone else who interacts with subjects; this increases inter-rater reliability (the coefficient tells you the percentage of variation in scores that is not due to random error) and reduces bias, which reduces error
● Design research to minimize the possibility of participant error (social desirability bias)
● Pilot-test
● Have experts review test items/directions for clarity
● Select your sample intentionally and carefully
● Use multiple measures, which can reduce many sources of random error
● Throw out items that reduce a measure's internal consistency, or add more items

Often you don't know reliability, but if you established a relationship in spite of error, that's good. You run into problems if you don't find a relationship and don't know reliability; that's a Type II error.

Definition of validity: the extent to which inferences about scores are appropriate, meaningful, and useful; the degree to which evidence and theory support the interpretations for the proposed use of tests. Validity is about the appropriateness of a score inference for a specific purpose, not about the measure itself; evidence for validity must align with the purpose (a predictive interpretation requires predictive evidence). How appropriate is the measure for the decisions being made?

To enhance validity:
● Construct: internal consistency, discriminant validity, convergent validity (MTMM is evidence)
● Internal structure: internal consistency; how items are related to each other (factor analysis)
● Content: evidence must match the inference you want to make; experts systematically evaluate; a subjective measure (must have questions that touch on each aspect of the construct, with enough items to provide a sample for each aspect)
● Theory: does theory support the measure for your specific use?
● Empirical evidence from past research
● Response process: the thinking patterns used to answer a question; consistency between intended and actual response
● Relationship to other variables, of two types: convergent (related to things it should be related to) and discriminant (not related to things it should not be related to)
● Contrasted groups: see whether groups that are different respond as predicted
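The internal-consistency check above can be made concrete. This is a minimal sketch of Cronbach's alpha using only the standard library; the five respondents and four items are invented data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
from statistics import pvariance

# Hypothetical responses: 5 respondents x 4 Likert-type items on one trait.
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
]
k = len(responses[0])                        # number of items
items = list(zip(*responses))                # column-wise item scores
totals = [sum(row) for row in responses]     # each respondent's total score

item_var_sum = sum(pvariance(col) for col in items)
alpha = (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
print(round(alpha, 3))  # 0.925
```

Here alpha clears the .8 bar the flashcard mentions; dropping an item that disagrees with the others would lower the totals' variance relative to the item variances and pull alpha down, which is why misfitting items get thrown out.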

Select an alternative research paradigm to the one you identified above and consider how the research might differ if the researcher employed this different paradigm. For example, how would the adoption of say, a pragmatic, critical, race, ethnicity or gender-based perspective alter the research (note: this could be in terms of research questions, design/methodology and even findings)?

I would be interested in seeing this study conducted through an interpretivist lens. If this were the case, more value would be placed on the qualitative data, and the design would more than likely change to include interviews alongside the surveys (mixed methods?) or interviews instead of surveys. While the data collected via the surveys was informative, an interpretivist would have looked for words in the qualitative findings that describe power struggles, as well as any relationship between the gender or race of the teacher and the successes or struggles they encountered in implementing inquiry in the classroom. It could also strengthen the study if the questions were worded so that participants didn't feel 'guided' to respond in a certain way. For example, "yes, I like science" is easy to choose on a survey when you are a science teacher and feel as though you should probably respond that way. The interpreted results of the study would also have come from the participants themselves, as opposed to data being reviewed and then interpreted in the words of the researcher.

Variables

IV: the intervention or predictor. DV: the outcome or result. (The difference between IV and DV is less clear in non-experimental research.)
• Categorical: separate independent groups (e.g., male/female); nominal or ordinal
• Continuous: a range of values (e.g., attitude)
• Confounding: can't be separated from the intervention
• Extraneous: external to the nature of the research but affects results; often unexpected
• Control: a variable you try to eliminate from the explanation; never completely controlled, but can be limited
• Assigned: innate, can't be manipulated (e.g., gender, family composition)
• Mediator: part of the logic of understanding results; sits between IV and DV
• Moderator: typical with experiments; moderates the effect of the intervention

Critical

Inquiry employs social and cultural criticism. The goal is to reveal and challenge oppression and emancipate the oppressed.
o Highlights the role of research in oppression/dominance/power relations; the purpose of research is to reveal the dynamics of power and ideology (over less powerful groups) in traditional research
o Promotes social justice
o Gives voice to participants as valid contributors
o Reveals how/where power is hidden and promotes awareness
o Aka neo-Marxist theory
o Goal is emancipation of the oppressed
o Addresses all types of oppression: marginalization, power, and privilege
o Gives authentic voice to oppressed and marginalized people and uses research to expose structural oppression
o Critiques and deconstructs research, including the researchers' own, to break down power and privilege barriers
o Problematizes the power drawn from white supremacy
o Based on the belief that U.S. structures and systems are designed to maintain white supremacy and that the incremental changes undertaken to combat racism have actually benefited white culture
o Its epistemology values stories and narratives as key components of knowledge and analysis

Arts Based

Inquiry is an "act of personal judgment rather than one of seeking final truth."
o Practices reconfigure meaning through virtual worlds, expand horizons, speak to you differently, and heighten your awareness
o Raises additional questions rather than providing answers
o Gives voice in ways that traditional scientific writing/research cannot
o Prioritizes individual interpretation/meaning-making
o Impact is key
o Arts-based values shape all components of research design
o Art is explicitly used in data collection, data analysis, and/or dissemination of findings

Poststructuralism

Inquiry is inherently biased and involves politics and/or an ethic.
o Very self-reflective, rhetorical
o Reality is contested
o Big on language; often doesn't find a resolution
o Similar to postmodernism; farthest removed from post-positivists
o The reader is more important than the author; rejects the idea of an article having a single meaning/goal
o The world is chaotic; as scientists, if we try to impose structure we are off the mark
o A lot of theory, very little practice; doesn't propose very much
o Instability in the human sciences due to the complexity of humans and our inability to escape structures in order to study them
o Structures are there and real, but don't define/constrain us
• Queer theory
o Another critical theory that problematizes power and oppression
o Provides a critique of "normalized" identity by challenging knowledge and assumptions that are considered standard
o Growing from LGBTQ movements, queer theorists have now broadened their scope to deconstruct norms and expectations of how things "should be" in a variety of areas

Race, ethnicity and gender

Inquiry is inherently biased and not value-neutral.
o Knowledge is situated in the place, time, and perspective of the knower
o Identity formation is individualized by sex/race and gender/ethnicity (defined by the person; not stable across place and time)
o Moves away from generalizing knowledge of the dominant culture to 'marginalized groups' through the use of narratives and autobiographies
o Influence of cultural psychology in looking at cognitive development
o Not just demographics, but how race/culture affect people more deeply
o Focuses on the experience of marginalized identities
o Biological sex and race, gender, and ethnic identity impact how people live in the world; identity is defined by the person and not stable across place/time
o Majority identities are given power and status in society while other identities are oppressed and marginalized, giving rise to multiple consciousness
o Since oppression, marginalization, and multiple consciousness are present in life, they are also present in research
o Researchers reject the idea of value-neutral research and critique traditional epistemologies so that marginalized identities are more commonly included in research
o Mainly system-focused

Interpretivism

Inquiry is value-centered rather than value-free and strives to recover the moral importance and imagination of the social sciences in order to create change in the world.
o Reality is restricted by language: the words used to describe it, rhetoric, power, gender, etc.
o 'Interpretive social sciences' move beyond objects to meaning and focus on values
o Closer to literature than physics
o Interactive communication is conversational, shows the relationship between researcher and subjects, and considers the constraints between what the world expects us to write and what we describe
o Cannot separate what is being described from the describer; cannot separate knowledge from the person communicating it, because it carries that person's values
o We can never separate ourselves from our reality
o Language shapes the understanding and representation of reality (rhetoric, power, gender, construction)
o Values stories over theories
o Encourages personal reflection and engagement with texts, making analyses more similar to those done in literature than in the hard sciences

Hegemony

Leadership or dominance, esp. by one country or social group.

MAXMINCON

MAXimize systematic variance - between-group variability (the numerator):
1. Maximize the variance of the dependent variable
● Easier to make a difference in the means when there is more variability in possible scores
2. Choose dependent variables that can be affected
● What part of the construct is your intervention able to change? (If you aim for a small component of the construct you likely won't be able to change it; like sensitivity, but of the intervention rather than the instrument)
3. Choose levels of the independent variable that are as different as possible and reasonable (both assigned and experimental variables)

MINimize random error variance - within-group variability (the denominator):
1. Use controlled conditions for obtaining responses and giving treatments
● Control the intervention
● Control the extraneous variables
● Understand the threats and design the research accordingly to avoid threats to internal validity
● Use a larger N
2. Increase the reliability of instruments

CONtrol the effect of confounding/extraneous variables - done during research design; these are threats to internal validity (they mess up conclusions because they impact the difference in means):
1. Use random assignment if at all possible
2. Build extraneous variables into the design as independent variables, but only if interested in interactions or comparisons between levels of the variable
3. Match subjects, as long as the matching variable is related to the dependent variable; matching does not replace random assignment
4. Use statistical adjustments such as covariance analysis and propensity scores
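The numerator/denominator logic above is exactly the one-way ANOVA F ratio, and a toy computation makes the point. Both datasets below are invented: one with well-separated IV levels and little noise, one with close levels and lots of noise.

```python
# MAXMINCON illustrated: F = MS_between / MS_within, so maximizing the
# difference between IV levels and minimizing within-group error both raise F.
from statistics import mean

def f_ratio(groups):
    grand = mean(x for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / df_between) / (ss_within / df_within)

far_apart = [[10, 11, 12], [20, 21, 22]]    # very different levels, low noise
close_noisy = [[10, 15, 20], [12, 17, 22]]  # similar levels, high noise

print(f_ratio(far_apart))    # 150.0
print(f_ratio(close_noisy))  # 0.24
```

Same mean difference direction in both cases, but the first design's F is hundreds of times larger, which is why MAXMINCON tells you to engineer both the numerator and the denominator before collecting data.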

What role do "values" have in educational research? This is quite obviously a contested point, so please summarize competing positions on this question and be sure to include who might support the positions (i.e., researchers coming from particular perspectives or political positions or employing certain methodologies).

Memorize this: Biesta said education is a moral practice, not a technological enterprise. Paradigms are coherent belief structures, lenses through which to view the world; a paradigm is a bundle of assumptions about the nature of reality, the status of human knowledge, and the kinds of methods that can be used to answer research questions. Decisions and outcomes are not value-neutral: values shape the questions we ask, the methods we use, and the conclusions we draw, and people's values play into what outcomes they desire. Values are part of professional practice; teachers use values to make context-based decisions.

Quantitative, postpositivist researchers are suspicious of values; they try to avoid value discussions through rigorous scientific method and statistical analyses, holding that research should be unbiased and that the best research is not affected by values.

Pragmatic researchers see the role values play at all levels of educational research and practice and try to find ways to incorporate them into practice rather than deny them or treat them as bias.

Critical/constructivist researchers think that values impact all facets of educational research and practice and perpetuate power differentials and oppression; they try to legitimize and highlight the values of non-dominant groups, and they are explicit about their own values and their use of values in designing and conducting research.

What contributions does our sense of our own beliefs regarding what is 'worthy' make in terms of what we know or depict to be true? Post-positivism, pragmatism, and positivism do not really consider values; the more you consider values, the further you get from objectivity. Paradigms that do include values: critical, race/ethnicity, ethics.

epistemology

theory of knowledge. Assumptions about what knowledge is, what is accepted as knowledge claims, and what is taken as evidence. "How we know what we know"

Explain how science, policy and politics relate to each other in the realm of educational research. Provide an example (e.g., NCLB (No Child Left Behind), value-added teacher evaluation, the What Works Clearinghouse, etc.).

Politics will most certainly influence policy, but as those policies are written or reform is called for, politicians should use good scientific research to help inform and support the decisions made. In 2001, the No Child Left Behind Act (NCLB) stated that federal funding would only be provided for educational programs that had been scientifically proven to be effective with the most children. Then, in 2002, the National Research Council published a report citing six principles of good educational research:
1. Pose significant questions that can be investigated empirically
2. Link research to relevant theory
3. Use methods that permit direct investigation of the question(s)
4. Provide a coherent and explicit chain of reasoning
5. Replicate and generalize across studies
6. Disclose research to encourage professional scrutiny and critique

The federal government's desire to involve educational research in political/policy decisions can be a testament to the value it perceives good scientific research to have. But policy can also dictate what type of research qualifies as 'scientific,' and it has shaped the work of many in the educational research field because states and schools were asking for a specific type of research to back programs and decisions: only certain types of "evidence" counted and only certain outcomes mattered. When the lines start to blur in this way, you begin to wonder whether educational research is truly guided more by science or by politics; a war within, perhaps? Consider how current social values can play into research and outcomes, and how questions can arise about whether outcomes that go against social norms should be reported when tied to the federal government. How often do such reports omit the necessary limitations that raise awareness and open doors for discussion (an essential part of good scientific research)? Yet NCLB implies that anything not "evidence-based" is opinion-based and not useful.

What are some distinguishing characteristics of positivism, postpositivism, and non-positivism as research paradigms?

Positivism: only verifiable statements have meaning (verified through experience and observation/empirical data/natural observation; only what you can count and see); rejects metaphysics (abstract concepts).
• Some elements still exist: empirical data is valued; some behavioral theory still applies; metaphysics is still rejected as invalid
• Rigorous and scientific approach: hypothesis testing, etc.
• Goals: description, control, prediction
• Humans can be researched the same way as natural phenomena, giving power to the researcher
• Research can be conducted with objectivity (value-free) and can discover Truth
• Focus is on quantitative, experimental methods with experimental and control groups and pre/post tests, because these offer more rigor
• Research is true; researchers exist apart from the data
• Criticized for its lack of regard for the subjective and its view of human behavior as passive and controlled
• We do not do true positivism anymore

Post-positivism: we live in a post-positivist world where a single counterexample can prove a statement wrong. It is similar to big-T-Truth positivism, but you can prove something wrong; claims are always falsifiable. That is why papers have a limitations section.
● Utilizes empirical data, but often takes a multi-method approach
● Allows for more interaction between researcher and participants
● Still searches for little-t truth: we accept the best evidence we have until something changes it; truth is out there, we just don't yet have the means to find it
● Utilizes positivist methods, but acknowledges facts may change with new information
● Can still make truth claims, but those are subject to falsification
● Common components of shared reality help move research forward (buildings exist, schools exist)

McMillan: positivism and post-positivism are based on the assumption that phenomena should be studied objectively with the goal of obtaining a single true reality, or at least reality within known probabilities. The researcher takes a neutral role, one that does not influence what is observed or recorded. Empiricism is emphasized through the use of numerical data.

Nonpositivism: rejects positivism (it is the opposite) and rejects strict empiricism (the view that knowledge is derived only from sense experience); data is collected transparently; emphasizes ethics; no particular data type is prioritized.
● No single 'true' position
● Methodologies emphasize diversity
● Objectivity: researchers always have values and must identify them self-analytically
● Often favors qualitative methods
● Takes an interpretive stance as opposed to an experimental basis
● It is the umbrella for interpretivism, arts-based educational research, gender/race/ethnicity, critical theory, poststructuralism, queer theory, critical race theory, constructivism, and postmodernism

Help in Case Have to Draw?

Notation key: A/B/C = groups, O = observation/measurement, X = treatment, R = random assignment.
Pre-experimental designs:
● Single group posttest only (Chi-square on categories within that group):
A X O
● Single group pretest-posttest (dependent samples t-test):
A O X O
● Multiple groups posttest only (independent samples t-test or ANOVA):
A X1 O
B X2 O
C X3 O
Experimental designs (randomized to groups):
● Randomized posttest-only control group design (independent samples t-test; ANOVA if only one IV; factorial ANOVA if more than one IV):
R A X O
R B O
● Randomized posttest-only comparison design (factorial ANOVA):
R A X1 O
R B X2 O
● Randomized pretest-posttest control group design (ANCOVA if only one IV; factorial ANCOVA if more than one IV):
R A O X O
R B O O
● Randomized pretest-posttest comparison design (ANCOVA if one IV; factorial ANCOVA if more than one IV):
R A O X1 O
R B O X2 O
Quasi-experimental designs (nonequivalent groups, no random assignment):
● Nonequivalent groups pretest-posttest control group (ANCOVA):
A O X O
B O O
● Nonequivalent groups pretest-posttest comparison group (ANCOVA):
A O X1 O
B O X2 O
● Nonequivalent groups multiple pretest control group design (independent samples t-test or ANOVA):
A O O X O
B O O O
● Nonequivalent groups dependent variables design (independent samples t-test or ANOVA)

Type of research design:

QUALITATIVE
Case Study: in-depth study of a particular problem
Grounded Theory: generate codes for data, develop theory, and then collect data which is more focused
Phenomenological: in-depth meaning of a particular experience; understand the everydayness; understand how one or more people experience a phenomenon; attempt to understand from their perspective
Critical Study
Ethnographic: explore cultural phenomena; what it's like to be a member
Historical Research: research about the past
MIXED METHOD
Mostly quantitative; mostly qualitative; concurrent or sequential; equally qual/quant (fundamental principle: strategically mix or combine methods and approaches to get a design with complementary strengths)
QUANTITATIVE
Experimental: intervention strictly controlled by researcher; the only type of research that supports causal claims
- Single subject: subject serves as his/her own control; take a baseline, then continuous assessment
- True experimental: when random assignment is used
- Quasi-experimental: if random assignment is not used and the design uses either multiple groups or waves of measurement; approximates an experimental design but does not have a true control group
- Multivariate/factorial: a research design with 2 or more IVs; could have multicausality; look at interactions and main effects
Non-experimental: goal is to maximize description, differences, and/or relationships; can be cross-sectional (all data collected at one point in time) or longitudinal (collected at 2 or more times)
- Descriptive: usually helps describe the 5 W's; used to get information about the current situation
- Comparative: to compare values of two or more levels of an independent variable
- Correlational: explore relationships to make predictions
- Prediction studies: to show how well one or more variables predict something
- Ex post facto: existing groups with different interventions in the past
- Post hoc: data from an existing database
- Causal-comparative: existing groups that experienced different interventions not controlled by the experimenter; the primary IV is a categorical variable
ANALYTICAL
Historical, Legal, Concept analysis

Do you think this study makes a contribution to the field? Consider the literature, the quality of the study as well as the practical significance of the findings in your response.

Studies that are well designed, even if there are no significant findings, can still contribute to the field. You are trying to find the signal, even though there can be a lot of noise that can obscure it. If your study has been well designed, you can be confident in your conclusions and the measures used; if not, then inferences can be damaging or will be dismissed by other researchers. Then what's the point? In this case, I think the literature review was pretty good, and the findings supported what was cited in it. I'm afraid, though, that some of the issues with the way the study was designed and conducted could make it too easy for those in the field to poke holes in it and call it 'unworthy'. For example, there was no pilot test of the instrument, and the researchers did not use one that had been used previously in related studies. While they made an attempt to operationalize inquiry, the definition is stated neither in practitioner terms nor in measurable terms, so each participant could interpret its meaning differently. The way the survey questions are worded, they lead the participant toward the more desirable response: experimenter bias.

Statistical conclusion validity

The degree to which one can infer that the independent variable (IV) and dependent variable (DV) are related and the strength of that relationship.

How is the concept "margin of error" used for both measurement and confidence intervals?

The margin of error is usually defined as half the confidence interval for a specific parameter. In general the confidence interval is used when describing, interpreting, or comparing samples, and the margin of error is used when interpreting or comparing specific results among measured characteristics. ● The margin of error is a statistic expressing the amount of random sampling error in a survey's results. It is most frequently used to imply a range of uncertainty when comparing values of two characteristics from a single survey. (Such as support for two political candidates reported in a single survey.) ● Confidence interval: interval of numbers within which parameter is believed to fall o Margin of error: range where you can't distinguish true or false; interval within which the true population probably lies o [estimate - margin of error, estimate + margin of error] • The larger the margin of error, the less confidence one should have that the poll's reported results are close to the "true" figures; that is, the figures for the whole population. Margin of error occurs whenever a population is incompletely sampled." ● Both a confidence interval and margin of error are related to a set of measures of variance in a set of data. Margin of error - indicates an interval within which the true population value lies - likely error in determining the result for the population...ex: 45% +/- 3 - 95% chance of the result being between 42 and 48
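As a sketch of the arithmetic behind "estimate ± margin of error" (using made-up poll numbers and assuming a simple random sample with a 95% z-interval), the relationship between the standard error, margin of error, and confidence interval can be shown in a few lines of Python:

```python
import math

# Hypothetical poll: 450 of 1,000 respondents support a candidate (45%).
p_hat = 450 / 1000
n = 1000

# Standard error of a proportion under simple random sampling.
se = math.sqrt(p_hat * (1 - p_hat) / n)

# For a 95% interval, z = 1.96; margin of error = z * SE = half the CI width.
margin = 1.96 * se
ci = (p_hat - margin, p_hat + margin)

print(f"estimate = {p_hat:.1%}, margin of error = +/- {margin:.1%}")
print(f"95% CI: ({ci[0]:.1%}, {ci[1]:.1%})")
```

With these numbers the margin comes out near the familiar "+/- 3 points" from the polling example above, and the CI is simply [estimate - margin, estimate + margin].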

Why is this design appropriate to address the research questions?

The research questions focused on understanding how teachers used inquiry - not on the impact of various interventions. Therefore nonexperimental was best. Research question number three related to how teachers thought about their inquiry implementation, which was best understood through open-ended qualitative questions, not scales or dichotomous questions.

What is the problem under investigation? and what is the purpose of the study?

The study investigates the factors contributing to, supporting, and constraining teacher use of inquiry in North Carolina. (similar response on page 2 if get stuck) The purpose of our study was to investigate the use of inquiry across the state and all K-12 grade levels.

What does the phrase "plausible rival hypothesis" refer to?

This is another term for an alternative explanation (to the experimenter's explanation) for why the data came out the way they did in a particular study. It's a plausible hypothesis, an alternative to your own, that could explain the relationship we see. ● Alternate explanation for the result: something else caused the result, not your intervention/explanation; a threat to internal validity ● Competing explanation for the relationship between IV and DV ● Often a result of confounding/extraneous variables ● Dealt with through appropriate experimental design (i.e., use of control groups/random assignment) and statistical analyses ● Generate additional hypotheses by thinking about whether the test measures more or less than the target construct. For example, if my research question was "How does the race of students predict standardized test scores?" and my hypothesis is that racial minority status will be more likely to predict low test scores, an alternative hypothesis could be that socioeconomic status was actually the variable that predicted the scores above race. Again, this can be dealt with through appropriate experimental design and analyses. Experimental design can be used as variance control, because it makes you pay attention to, and helps control, all kinds of variance in studies. This is where you bring in MAXMINCON. Why is it important: it is the guiding principle for effective control of variance (thus, effective research design).

What approach to research is used?

This study could be seen as an attempt at mixed methods, but it is quantitative only, despite the addition of a few open-ended questions. The article clearly states that the qualitative questions 'were used to qualify and triangulate the data collected.' Essentially, this supports that the qualitative data simply supplement the main purpose of the quantitative study. The analysis also suggests that the method used to analyze the qualitative responses was not strong, as the article only mentions segregating and coding responses to one question, and there is little mention of qualitative themes in the Discussion & Implications or Recommendations sections. It is not strong enough to be considered an integrative approach, nor does it appear that different lenses are used in the same study.

What are threats to internal validity? Which are most troublesome for single group pretest-posttest designs?

Threats to internal validity: potential factors that obscure the true effects of the IV. A threat must affect the dependent variable AND co-vary with the independent variable (it can't be separated from the IV); if something affects both groups the SAME, it is NOT a threat.
● History: events, incidents, or other factors associated with one group but not the other; they must be systematically confounded with the treatments and affect the dependent variable. Internal threats arise within the group (group dynamics); external threats come from outside (e.g., a fire alarm).
● Maturation: a problem in longitudinal studies; treatment effects are really due to natural biological, social, emotional, cognitive, or developmental changes; being tired, frustrated, or bored also falls here.
● Testing: only a problem in pretest-posttest designs; practice on the pretest may improve performance on the posttest.
● Instrumentation: changes in the measuring instrument; observer or recorder bias; ceiling and floor effects; the instrument implemented differently for each group. This is a possible threat in almost every design and includes reliability issues and error in measurement. The biggest issue arises when the researcher is the data-gatherer, and an even bigger one when the data must be rated.
● Regression to the mean: extreme scores on the pretest naturally become less extreme on the posttest; a natural movement of extreme scores toward the mean; the biggest problem when you are testing the extremes (i.e., interventions for gifted students). Normally statistical regression prevents us from finding significant differences; however, if you want a truer score for an individual, their regression toward the mean will lead to a more realistic score.
● Mortality/attrition: participants leave the study for whatever reason (death, relocation, dropout); a problem if one group drops out at a different rate than another. Differential attrition, where the treatment group drops out more frequently or at a greater rate than the non-treatment group, usually affects longitudinal studies.
● Selection: the most SERIOUS threat; differences come from preexisting differences in the groups/characteristics of members before the study started. This is a strong threat in quasi-experimental designs.
● Selection interactions: groups that score similarly on the pretest may naturally mature at different rates, which may affect scores on the posttest (selection-maturation); history or instrumentation may likewise impact the groups in different ways.
● Participant effects: the participant picks up on the nature of the intervention and reacts; includes diffusion, compensatory rivalry, resentful demoralization, the Hawthorne effect (awareness of being in an experiment), demand characteristics, social desirability, and trying to please the experimenter.
● Intervention diffusion: participants in one group are exposed to the treatment in the other group; the intervention seeps into the control group.
● Experimenter effects: the experimenter communicates what they expect or want.
● Instrument order effects: the order of questions/instruments impacts the results (should use counterbalancing).
● Novelty effect: when something is new, it elicits responses it wouldn't typically get; participants bring more motivation or attention.
● Treatment replications: a BIG PROBLEM; when treatment is given in groups, the number of groups is the N, not the number of participants. Consider how an intervention is administered and whether it is administered to each participant separately or to a group; anything that happens in a group setting impacts everyone.
● Treatment fidelity: the intervention is not implemented as planned/intended, or different groups get different implementations. Countered by using observers plus debriefing surveys/questionnaires or participant logs to verify it has been implemented properly.

What is the difference between Type I and Type II error?

Type I error: incorrect rejection of a true null hypothesis; you conclude a relationship exists when it doesn't. A Type I error is a false POSITIVE (P has a single vertical line). Type II error: failure to reject a false null hypothesis. A Type II error is a false NEGATIVE (N has two vertical lines). When comparing two means, concluding the means were different when in reality they were not would be a Type I error; concluding the means were not different when in reality they were would be a Type II error. When the village boy cried wolf, the first error the villagers made (believing him when there was no wolf) was a Type I error; the second error (not believing him when there was a wolf) was a Type II error. You guard against Type I error by setting a low alpha level, and against Type II error by increasing your sample size.
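To make the two error rates concrete, here is a small self-contained simulation (made-up normal data, a plain two-sided z-test, and an assumed alpha of .05; not from the article). Sampling repeatedly when the null is true shows the Type I rate hovering near alpha; sampling when the null is false shows the Type II rate:

```python
import math
import random

random.seed(0)  # make the simulation repeatable

def z_test_p(sample, mu0, sigma):
    """Two-sided z-test p-value for a sample from N(mu, sigma) vs H0: mu = mu0."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Normal-tail probability via the error function (no scipy needed).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

ALPHA = 0.05
trials = 2000

# Type I error: H0 is TRUE (mu really is 100), but we reject anyway.
false_pos = sum(
    z_test_p([random.gauss(100, 15) for _ in range(30)], 100, 15) < ALPHA
    for _ in range(trials)
)
type1_rate = false_pos / trials
print(f"Type I rate ~= {type1_rate:.3f} (hovers near alpha = {ALPHA})")

# Type II error: H0 is FALSE (true mu = 105), but we fail to reject.
false_neg = sum(
    z_test_p([random.gauss(105, 15) for _ in range(30)], 100, 15) >= ALPHA
    for _ in range(trials)
)
type2_rate = false_neg / trials
print(f"Type II rate ~= {type2_rate:.3f} (shrinks as n grows)")
```

Re-running with a larger sample size per trial shrinks the Type II rate while the Type I rate stays pinned to alpha, which is exactly the trade-off described above.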

How is external validity assessed in qualitative research, and why is qualitative research typically weak on this type of validity?

You will recall that external validity refers to the degree to which you can generalize your findings. This is often weak in qualitative research because only a few cases are typically examined in qualitative. In fact, qualitative researchers are often far less interested in obtaining external validity than in having good in-depth examination of the cases or group and the context in which it is located. (The book points out ways that generalizing still can be done even in these situations.)

ontology

assumptions about what makes up reality, the nature of human existence, and how human beings interact with their environments

inquiry

confirming or elaborating existing knowledge; generating new knowledge

systematic inquiry

formalized process of questioning, designing, and methods to conduct research

Postpositivism

inquiry affects and interacts with the lives of individuals, and so it must be trustworthy; it would be immoral to affect those lives without having trustworthy reasons o Would argue that knowledge/inquiry is value-neutral/value-free; scientific proof gains trust o Science doesn't deliver absolute truth, but we can arrive at the best available truth at the time until new information is found (small "t" truth) o Research and beliefs have foundations that lead to new truths, but are bounded by reality o VERY interested in good methods/quality (favor randomized studies, etc.) o Use empirical/scientific methods

Pragmatism

inquiry is not neutral; some benefit from research and some are hurt by it; truth will be applied to society, so a social consequence needs to be a factor. Care theory should be considered o Discusses both theory and practice, equally important o Used 'warranted assertions' because you can rarely be sure of absolute truth- you can partially prove o We can count on beliefs being justified until something occurs to shake our beliefs- acceptance of new information allows change o Thinking with the end in mind when developing theory (practical, get things done) o Don't favor a particular research method. o View in terms of practical uses o Shares some assumptions with postpositivism: existence of a concrete reality where truth is not absolute. o Goes further; views knowledge as created through both theory and practice o Social construction of knowledge by maintaining that words shape meaning and meaning shapes knowledge. o To what extent is knowledge advanced by a study, whether the authors' conclusions match the study's outcomes, and how the study's vocabulary impacts the findings o Concerned with how knowledge is constructed, questioned, refined, encoded o Also care theory - consequences of our work

constructivism

inquiry is permeated with human values. Because values are inescapable, researchers must make extraordinary efforts to reveal, or uncover, beliefs and values that guide and generate individual and group constructions. o Consider physical and constructed realities o The researcher is no longer outside the system but a part of it o Goal is understanding and structuring as opposed to prediction o People construct their own social realities in relation to each other o Favor qualitative, but okay with quantitative o All in the individual interpretation- we all construct our world o Very interested in social justice - research is never value free o Reality is subjective and experiential - some groups/people may hold one construction but another may construct different reality o Meaning-making activities

Ethics

inquiry should tie its conclusions to values and include measures to eliminate power imbalances o How our research contributes to the greater world o Focus on how research participants are treated; goes beyond the IRB o Expert relationship with participants (quantitative): opportunity to consent or not; privacy not breached; risks/benefits are more predictable because there is a clear intervention o Dialogical relationship with participants (qualitative): might exceed consent; privacy could be breached; risks/benefits unfold unpredictably o Moral and political neutrality are undesirable; research should be used in service of democracy o Importance of values

Realism

physical objects exist independently of their being perceived.

What research paradigm appears to be operational in this research endeavor?

postpositivist

Empiricism

sensory experience is the ultimate source of all our concepts and knowledge (see, feel, hear, touch, smell). Science assumes the position of empiricism, because observational experience is necessary; the experience must be objective and communicable or describable

Construct Validity

the degree to which the measure is measuring the construct or variable that it claims to measure. Evidence: when scores from one instrument correlate highly with scores from another measure of the same trait (convergent evidence), and when scores do not correlate highly with scores from an instrument that measures something different (discriminant evidence; the correlation coefficient would need to be near zero, between -.2 and .2). You can also use the known-groups technique: find one group known to possess a high degree of the characteristic you want to measure and compare it with a group known to possess a low degree of that characteristic.

Rationalism

the doctrine that knowledge is acquired by reason without resort to experience. French philosopher René Descartes, who wrote "I think therefore I am" is considered the father of rationalism.

What statistical procedures are appropriate to examine differences on categorical and continuous dependent variables?

• Categorical: Chi-square (Kruskal-Wallis or Mann-Whitney U test if you have ordinal data or had to downgrade from interval or ratio data because assumptions are not met). • Continuous: o Independent-samples t-test (if there are two groups) or paired-samples t-test (used for repeated measures) o ANOVA: if there are more than two groups; this can be a simple comparison between groups, factorial (if there is more than one IV), repeated measures (possibly over time), ANCOVA (if you are controlling for a covariate), or a MANOVA (if there are multiple, related dependent variables) o Correlation: to find the relationship between two variables (e.g., as smoking increases, lung capacity decreases) o Regression (we may not need to know this for this test): to determine how much one variable predicts variance in another; also indicates the type of relationship between variables (e.g., linear, curvilinear)
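A minimal stdlib-Python sketch of the two most basic choices above, using made-up counts and scores: a chi-square statistic for a categorical DV, and an independent-samples t-test (pooled variance) for a continuous DV:

```python
import math
import statistics

# --- Categorical DV: chi-square goodness of fit (hypothetical vote counts) ---
observed = [55, 45]           # e.g., votes for two candidates
expected = [50, 50]           # under H0 of no preference
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# df = 1; the critical value at alpha = .05 is 3.841
print(f"chi-square = {chi_sq:.2f}; reject H0? {chi_sq > 3.841}")

# --- Continuous DV: independent-samples t-test (hypothetical test scores) ---
group_a = [78, 85, 90, 72, 88, 81]
group_b = [70, 75, 80, 68, 74, 77]
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled variance assumes homogeneity of variance across the two groups.
pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t = (mean_a - mean_b) / math.sqrt(pooled * (1 / n_a + 1 / n_b))
print(f"t({n_a + n_b - 2}) = {t:.2f}")
```

In practice you would look the t value up against the t distribution with n_a + n_b - 2 degrees of freedom (or let software report the p-value); the point is that the categorical DV gets a frequency-based test and the continuous DV a mean-based one.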

Statistical Analyses Test (even though question doesn't ask for them?)

• Descriptive stats: help summarize data in a meaningful way, but don't support conclusions beyond the data; standard deviation (degree to which scores vary), frequency distribution, mode, confidence interval, standard error of the mean. • t-test: tests whether two group means are equal (independent or dependent samples). A t-test asks whether a difference between two groups' averages is unlikely to have occurred because of random chance in sample selection. A difference is more likely to be meaningful and "real" if (1) the difference between the averages is large, (2) the sample size is large, and (3) responses are consistently close to the average values and not widely spread out (the standard deviation is low). Use a one-sample t-test when comparing one group's average value to a single number. Use a paired t-test when each observation in one group is paired with a related observation in the other group. • One-way ANOVA: single categorical IV and a single DV (e.g., do SAT scores differ for low-, middle-, and high-income students?); use when comparing the means of three or more independent groups; follow up with a post-hoc test to see where the difference is. • ANCOVA: ANOVA with a covariate; covariance is a measure of how much two variables change together and how strong the relationship is between them. • Bivariate correlation: used with two variables to determine their relationship. • Bivariate/multiple regression: used with several IVs and one DV; used for prediction. • Logistic regression: similar to multiple regression, but the DV is a dichotomous variable. • Chi-square: compares observed frequencies to expected frequencies (e.g., is the distribution of sex and voting behavior due to chance, or is there a difference?).

What ethical issues should researchers consider when conducting research in school-based settings?

• Informed consent by stakeholders- children, teachers, parents, community members, principals • Confidentiality • Ability to withdraw • Dissemination of information (truly understanding the purpose) • Security and confidentiality of information (you need to code names and school names) • Understanding of power issues

What information does the standard deviation and standard error of measurement provide?

▪ When one refers to the standard deviation of scores on a test, usually he/she is referring to the standard deviation of the test scores obtained by a group of students on a single test. It is a measure of the "spread" of scores between students. o How far each case is (on average) from the group mean o An estimate of the average variability (spread) of a set of data, measured in the same units as the original data (about 68% of scores fall within one SD of the mean) o SD is the square root of the variance (to find the variance: work out the mean, then for each number subtract the mean and square the result, then take the average of the squared differences) o The more spread apart the data, the higher the deviation o SD is not directly affected by sample size o Gives an idea of effect size o Also puts scores on a standardized scale, so you can compare two different scales ▪ When one refers to the standard error of measurement on a test, he/she is referring to the standard deviation of test scores that would have been obtained from a single student had that student been tested multiple times. It is a measure of the "spread" of scores a student would have had if tested repeatedly. o Range of any individual score around what the "true score" would actually be o You can't ever calculate/know the true score, so you have to incorporate error o Estimates how repeated measures of a person on the same instrument tend to be distributed around their 'true' score ▪ Related: the standard error of the mean is the standard deviation of the sampling distribution of a statistic. o For a given statistic (the mean), it tells us how much variability there is in that statistic across samples from the same population o Large values indicate that a statistic from a given sample may not be an accurate reflection of the population from which the sample came o A small SE indicates that the sample mean is a more accurate reflection of the actual population mean; an indication of the reliability of the mean
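A quick numerical sketch of the three quantities, using hypothetical test scores and an assumed test reliability of .85 (the SEM formula SD * sqrt(1 - reliability) is the standard classical-test-theory form, but the numbers here are invented):

```python
import math
import statistics

# Hypothetical test scores for a group of ten students.
scores = [72, 85, 90, 78, 95, 65, 88, 70, 82, 75]

sd = statistics.stdev(scores)            # spread of scores ACROSS students
se_mean = sd / math.sqrt(len(scores))    # variability of the sample MEAN

print(f"SD = {sd:.2f} (how far a typical score sits from the group mean)")
print(f"SE of the mean = {se_mean:.2f} (shrinks as n grows)")

# SEM needs the test's reliability coefficient (assumed here as .85):
reliability = 0.85
sem = sd * math.sqrt(1 - reliability)
print(f"SEM = {sem:.2f} (spread of ONE student's scores around the true score)")
```

The contrast is the point: SD describes the group, SE of the mean describes the sample mean across repeated samples, and SEM describes one student's scores across repeated testings.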

What evidence is there to support your answer?

○ Open ended questions - recognizes the complexities of human behavior ○ Looked for causal relationships ○ Utilizes empirical data, but often take a multi-method approach - which is done here. They added qualitative questions to triangulate the data from the quantitative questions. ○ The limitations section - a post-positivist believes that the best evidence is always taken for little t truth, until someone comes along to disprove it. But it's always falsifiable. That is why there is a limitations section.

How could the researcher(s) change the design, methods, or procedures to enhance the internal validity of the study?

● One way that internal validity could be enhanced is hinted at by the researchers themselves in the limitations section "Teachers were invited to participate in the study. Therefore, the survey participants may not represent what all science teachers believe, but rather those who preferentially focus on science at the elementary level or those who find inquiry challenging and, therefore, were inclined to respond. Our best guess is that we have teachers who are more "science friendly" than is typical. " Thus, a wider pool of recruitment should be considered. ● Increase the number of survey questions to counter act the responses on any questions that could be unclear/difficult to answer (inclined toward 'it depends') ● Risk of social desirability/participant effects for some of the questions? Ex. How important is science to you? Teacher's charged with teaching science would be more inclined to feel as though they are supposed to answer this question a certain way (highly important). ● Since teachers, administrators, and university personnel have different roles in implementing inquiry based science in schools, they should have different questions - or ideally a completely different survey. ● They should have piloted the survey before use.

Why is practical significance considered by many to be more important than statistical significance?

● Practical significance is related to the real-life context of the results: it looks at how meaningful and impactful the results are; effect size (which emphasizes the size of the difference) is a measure of this. Practical significance asks whether the differences between samples are big enough to have real meaning: how much of an effect or difference would this be if applied in the real world? ● Statistical significance just looks at the difference between numbers, with no context; the p-value is a measure of this. Statistical significance tells the likelihood that differences between groups did not occur due to sampling error. It's the probability that a difference of at least the same size would have arisen by chance even if there were no difference between the two populations. This doesn't tell you how big the effect is or how important it would be for practical purposes. Bottom line: a result could be statistically significant but not practically significant, which could lead to wasted resources spent on something that makes little difference in real life. A result could be practically significant but not statistically significant: the intervention could still be useful and make a difference in people's lives, even if not statistically significant.
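A worked example of the bottom line, using invented SAT-style summary statistics: with a huge sample, a trivial 5-point difference is statistically significant, yet its effect size (Cohen's d, the mean difference in standard-deviation units) is tiny:

```python
import math

# Hypothetical summary statistics (not from any real study).
mean_treatment, mean_control = 505.0, 500.0
sd_treatment, sd_control = 100.0, 100.0
n_per_group = 10_000

# Cohen's d: difference in means expressed in pooled-SD units.
pooled_sd = math.sqrt((sd_treatment ** 2 + sd_control ** 2) / 2)
d = (mean_treatment - mean_control) / pooled_sd

# Test statistic for the difference in means (two independent groups).
z = (mean_treatment - mean_control) / (pooled_sd * math.sqrt(2 / n_per_group))

print(f"Cohen's d = {d:.2f} (conventionally 'small' starts around 0.2)")
print(f"z = {z:.2f} (well past 1.96, so p < .05)")
```

Here z clears the .05 threshold comfortably, so the result is "statistically significant," but d = 0.05 says the intervention shifts scores by a twentieth of a standard deviation, which few practitioners would call meaningful.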

What assumptions must be met to use parametric statistical procedures such as ANOVA?

● Within-group normality o Scores (values of the outcome) are normally distributed within each group/level o Check with boxplot and/or histogram (outliers, skewness, etc.) ● Homogeneity of variance o The variance of the outcome is fixed/steady across all groups/levels o Levene's Test ● Observations are independent
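The homogeneity-of-variance check above names Levene's test; a hedged stdlib-Python sketch of the mean-centered version (made-up data, hard-coded critical value) shows it is just a one-way ANOVA run on absolute deviations from each group's mean:

```python
import statistics

def levene_w(groups):
    """Levene's statistic (mean-centered): a one-way ANOVA F computed on
    each score's absolute deviation from its own group mean."""
    z = [[abs(x - statistics.mean(g)) for x in g] for g in groups]
    k = len(z)                       # number of groups
    n = sum(len(g) for g in z)       # total observations
    grand = sum(sum(g) for g in z) / n
    means = [statistics.mean(g) for g in z]
    between = sum(len(g) * (m - grand) ** 2 for g, m in zip(z, means)) / (k - 1)
    within = sum((x - m) ** 2 for g, m in zip(z, means) for x in g) / (n - k)
    return between / within

a = [23, 25, 21, 27, 24]
b = [30, 28, 33, 29, 31]
c = [18, 40, 12, 45, 20]    # far more spread out than a and b

w = levene_w([a, b, c])
# Compare W against F(k-1, n-k); the F(2, 12) critical value at alpha = .05 is 3.89.
print(f"Levene's W = {w:.2f}; homogeneity of variance violated? {w > 3.89}")
```

When W exceeds the F critical value, the equal-variance assumption is doubtful and a standard ANOVA may be inappropriate (a Welch-type correction or a nonparametric alternative would be the usual fallback).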

