PSY400

What is it: a reference list (in APA style!) plus a brief (~150 word) description of each study's design (experimental vs. correlational), research question, and key features; like a very mini critique of each article. Purpose: To organize literature prior to a literature review.

Annotated reference list and purpose

experimental group changes over time but only because the most extreme cases have systematically dropped out and their scores are not included in the posttest. Attrition threatens internal validity only when dropout is tied to condition: either the experimental group OR the control group is dropping out. The magnitude of the dropout isn't the issue; it is the difference in dropout between groups.

Attrition

Covariance: Experiments establish situations where covariance between IV and DV can be examined empirically; we can observe that change in the IV tends to go with change in the DV. Temporal precedence: Experiments manipulate levels of the IV then measure any changes in the DV, ensuring temporal precedence; because we control the IV, we know what comes first in time. Control: Experiments allow researchers to control as many potential confounds as possible, raising internal validity. Correlational studies have only covariance (covariance = correlation), not temporal precedence or control.

Benefits of Experiments... describe them.

A construct is the quantity of interest that we want to measure. Such constructs are usually also variables because they vary (within or between people) as opposed to being constant (unchanging). Psychological constructs are latent because we cannot directly observe them in a physical medium. Can you come up with some examples? (My examples - personality, intelligence, athletic ability, depression) the variable of interest that we are trying to measure

Construct

Are we actually measuring the variable we want to be (and claim to be) measuring? An issue with psychological measurement because variables are conceptually defined, then operationally defined.

Construct Validity

some participants get conditions in one order and some get them in another order

Counterbalancing
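As a minimal sketch (in Python, with hypothetical condition names and participant IDs), full counterbalancing can be generated by listing every order of the conditions and rotating participants through them:

```python
# Full counterbalancing: generate every possible order of the conditions,
# then rotate participants through those orders.
# Condition names and participant IDs are hypothetical examples.
from itertools import permutations

conditions = ["caffeine", "placebo"]
orders = list(permutations(conditions))

# Assign each participant the next order in rotation
participants = ["P1", "P2", "P3", "P4"]
assignments = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
print(assignments)
# P1 and P3 get one order; P2 and P4 get the reverse
```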

4

2 x 2 Design produces how many conditions?

Causal, associational, frequency

3 types of claims

Skill or ability: whether or not someone is able to do something (can be a cognitive or physical skill). Traits and characteristics: long-lasting, durable features of personality; tend to be measured with inventories rather than tests. Attitudes and opinions: things people think and believe; durable qualities in how they think.

3 types of latent constructs

6

3 x 2 Design produces how many conditions?
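The condition count is just the product of the number of levels of each factor. A small Python sketch (the factor levels are hypothetical):

```python
# Number of conditions in a factorial design = product of the
# number of levels of each factor. Factor levels are hypothetical.
from itertools import product

factor_a = ["drug", "placebo", "no treatment"]   # 3 levels
factor_b = ["morning", "evening"]                # 2 levels

conditions = list(product(factor_a, factor_b))
print(len(conditions))  # 3 x 2 = 6 conditions
```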

high degree of control = high internal validity

High degree of control =

experimental group changes over time because of an external factor or event that affects all or most members of the group. Ex: a new style of teaching is introduced, but one day there was a snow day and another day half of the class was sick; if the intervening event hadn't happened, the outcome may have been different.

History

Purpose: To summarize the findings of the study To put the results in the broader context of the underlying theory Structure (remember the hourglass!) Begin with a restatement of your research question Summarize your results (no need to re-hash stats) Tie back to underlying theory If results supported hypotheses, what does this say about theory? If results did not support hypotheses, why might they not have? Note limitations of study Speculate on future directions

Discussion APA

A pre-test is a measurement taken at baseline (beginning of the study) Not required for between subjects OR within subjects designs Collecting a pre-test allows the researcher to: Measure change over the course of the study Increase statistical power

Do you need a pretest for within or between subject experiments ? What is the purpose of pretests?

1" margins. 12 pt. Times New Roman. Consistent double spacing throughout. References on a reference page, not in footnotes. Running head. Sans serif fonts may be used for figures.

Document Formatting APA

An effect can be statistically significant (unlikely to be zero), but that doesn't mean that it is a large or important effect. Practical significance is found by interpreting effect sizes. Two types: Correlations - Is there an association between these two variables? Pearson's r: 0.1 = small; 0.3 = medium; 0.5 = large. Mean differences - How large is the difference between these two groups? Cohen's d: 0.2 = small; 0.5 = medium; 0.8 = large. Odds ratio: used when we have categorical variables (chi-square).

Effect sizes, their purpose, 2 types
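A from-scratch Python sketch of Cohen's d for a mean difference, using the pooled standard deviation (the group scores are made-up illustration values):

```python
# Cohen's d for a two-group mean difference, using the pooled SD.
# The data are hypothetical illustration values.
from statistics import mean, variance
from math import sqrt

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    # Pooled variance weights each group's sample variance by its df
    pooled_var = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

treatment = [5, 6, 7, 8, 9]
control = [3, 4, 5, 6, 7]
d = cohens_d(treatment, control)
print(round(d, 2))  # ~1.26: "large" by the 0.8 benchmark
```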

Main effects are examined by "averaging over" the levels of the other factor.

How are main effects examined?

Main Effect: The mean differences among the levels of one factor Interaction Effect: When the effect of one factor depends on the level of the other factor.

Factorial designs allow us to look at two different types of info.. what are they?

Factorial designs incorporate two or more IVs These designs allow examination of interactions Where one variable (IV) influences the relationship between the other variable (IV) and the dependent variable We say the effect of IV 1 on the DV depends on IV 2 A research design that examines the influence of TWO or more independent variables/grouping variables on the outcome of interest Factor = Independent variable/grouping variable Referred to by letters: Factor A, Factor B, Factor C Each factor can have two or more levels

Factorial experimental designs

Measures of central tendency (what does the typical data point look like?): Mean. Median: middle value. Mode: most common value. Measures of variability (how much do data points differ from one another?): Range. Variance: SD squared. Standard deviation: how far the average data point is from the mean; good for interpretation purposes.

How can we summarize our data?
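These summaries can all be computed with Python's standard library (the scores are hypothetical):

```python
# Descriptive statistics named above, via Python's stdlib statistics module.
# The scores are hypothetical.
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.mean(scores))     # central tendency: mean = 5
print(statistics.median(scores))   # middle value = 4.5
print(statistics.mode(scores))     # most common value = 4
print(max(scores) - min(scores))   # range = 7
print(statistics.variance(scores)) # sample variance (SD squared)
print(statistics.stdev(scores))    # SD: typical distance from the mean
```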

Unreliability, e.g., Participant marks the wrong option Participant marks all 3s or makes a zig zag pattern Adds noise Invalidity, e.g., Participant "fakes good" or "fakes bad" Participant censors their results (embarrassed) Contaminates measurement

Invalidity Vs. Unreliability

Self- or informant-reported rating scales (Likert). Forced-choice scenarios. Behavioral observation or traces (coded on rating scales). Can ask informants or people to report on themselves; can ask them to choose which of 2 given scenarios fits them best.

Inventories consist of

An experimental group improves over time only because of natural development or spontaneous improvement. People change and grow, and if we aren't careful we may attribute that growth to the treatment; something can happen in the middle of the study causing growth that is not the manipulation.

Maturation

Mean = sum of scores divided by number of Ss M = ΣX/N

Mean

Design confound, selection effect, order effect, maturation, history, regression to the mean, attrition, testing, instrumentation, observer bias, demand characteristic, placebo effect

Name the 12 threats of internal validity

Internal validity: degree to which we can be confident that changes in the IV cause changes in the DV; higher in experiments. External validity: degree to which we can be confident that results will generalize beyond the study; if we ran the study again, or with new participants, would we get the same result? Sometimes lower in experiments. Construct validity: valid and reliable measurement and manipulation. Statistical validity: making correct choices about handling data and what tests to use.

Name the 4 big validities and describe them

Operationalization: Specific way of measuring or manipulating a variable (e.g., Rosenberg self-esteem scale, standard score on WISC) Construct validity: How well does operationalization map onto conceptual definition

Operalization and construct validity

In a within-groups design, when the effect of the IV is confounded with carryover from one level to another, or with practice, fatigue, or boredom. Occurs because participants are exposed to multiple levels.

Order effect

Title Page (page 1) Abstract (page 2) Introduction (always starts on page 3) Method Results Discussion References (starts on own page) Tables (each one on own page) Figures (each one on own page) Appendices (optional - detailed supplements)

Overview characteristics of APA

Oxford comma: comma that comes before the conjunction in a list (just before AND, BUT, or OR). ALWAYS USE

Oxford comma

participants in experimental group improve only because they believe in the efficacy of the therapy or drug they receive

Placebo effect

Journal Article Author, A. A., & Author, B. B. (year). Article title. Journal Title, #, #-##. Book Chapter Author, A. A., & Author, B. B. (year). Chapter title. In X. Y. Jones (Ed.), Book title (pp. #-##). Location: Publisher. Always in alphabetical order by 1st author last name

Reference APA (Journal articles and Book chapters)

In-Text → Paraphrase 2 authors (Levine & Smolak, 2002) 3-6 authors First mention: (1st Author, 2nd Author, & 3rd Author, year) All subsequent mentions: (1st Author et al., year) 7+ authors (1st Author et al., year) for all mentions Levine and Smolak (2002) found... Previous research has found... (Levine & Smolak, 2002). In-Text → Direct quote Include page number where quote can be found First mention: A recent study found that "personality traits prospectively influenced who chose to join the military" (Jackson, Thoemmes, Jonkmann, Lüdtke, & Trautwein, 2012, p. 275). Later mention: Jackson et al. (2012) found that "personality traits prospectively influenced who chose to join the military" (p. 275).

References APA

experimental group whose average is extremely low or high at pretest will get better or worse over time because the random events that caused the extreme pretest scores do not recur the same way at posttest. If a score is extreme at the first measurement, the person is likely to be closer to normal at the next measurement.

Regression to the mean

Reliability: precision of an instrument; lack of noise/random error; consistency of response. Validity: is the measurement assessing the appropriate construct? Validity = truth or accuracy. Construct validity: how accurately we are measuring or manipulating. A reliable test captures the participant's true state. Random error: there's no clear reason why we aren't capturing a participant's true state.

Reliability and Validity

Is there an association between these variables?

Research questions ask...

In an independent groups design when the two IV groups have systematically different kinds of participants in them Can happen when participants select their own treatment

Selection Effect

The goal of a statistical significance test is to test whether the effect we are investigating is larger than 0. Usually alpha is chosen as 0.05 (less than 5% should be "false alarms").

Statistical significance

Statistical significance is found by comparing p-values to some set standard (α, the type I error rate) In psychology that standard (α) is usually .05, but it is up to the researcher to choose and justify a value The question we are trying to answer is: IS THIS EFFECT DIFFERENT THAN ZERO? Is this association larger than r = 0, or is the difference between these two groups > 0? When we set α to .05, we are saying that we will incorrectly conclude that there is an effect, when actually there is no effect, less than 5% of the time in the long run.

Statistical significance Vs. Practical

1. Introduce research question Attention getter Broad statement of question Transition 2. Describe/evaluate existing research, one study at a time. Organize like you're telling a story. There should be a compelling narrative. You are taking the reader on a tour of your question 3. After literature is described, synthesize. Draw conclusions about state of knowledge on research question. Compare/contrast results of various studies - was there agreement? Where did they disagree (speculate on why)? Identify logical next steps for research, gaps in literature. 4. Current Research & Hypotheses If review is part of an introduction for a research paper, end the intro with a transition to current study and hypotheses for that study.

Steps for writing

Identify your research question Identify the scale of measurement of your variables Use descriptive statistics to summarize your variables Select and perform the appropriate analysis Interpret the results of your null hypothesis significance test Select and calculate the appropriate effect size Interpret the effect size of your result

Steps in data analysis

Determine the null hypothesis (usually the hypothesis of no difference: no association between the two variables, no difference between the two groups, etc.) Determine your alternative hypothesis (the converse/opposite of the null; can be directional or nondirectional - nondirectional is better!) Determine your α (usually .05) Perform your statistical test Compare your obtained p value to α. If p is less than α (.05), conclude that the result is "statistically significant" and reject the null hypothesis. If p is equal to or greater than α, conclude that there is insufficient evidence to reject the null hypothesis. NOTE: just because we reject the null doesn't mean we accept the alternative.

Steps of null hypothesis testing
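A sketch of the decision step only (the p values below are made up; the statistical test itself would be run first):

```python
# Decision rule from the steps above: compare an obtained p value to alpha.
# The p values passed in are hypothetical; a t test, chi-square, etc.
# would produce them in practice.
def nhst_decision(p, alpha=0.05):
    # Reject the null only when p < alpha; otherwise there is
    # insufficient evidence (we do NOT "accept" the alternative)
    if p < alpha:
        return "reject the null hypothesis (statistically significant)"
    return "fail to reject the null hypothesis"

print(nhst_decision(0.012))  # reject
print(nhst_decision(0.050))  # fail to reject: p must be LESS than alpha
```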

1. Face - measure seems valid (to an ordinary person): seems like it is measuring what it is supposed to be. 2. Content - measure covers theoretically relevant content: what theory says leads to questions asked.→ Factor analysis (to be discussed later) 3. Convergent - measure IS correlated with similar constructs (and different operationalizations of the same construct): want measurements of the same things to converge. 4. Discriminant - measure is NOT correlated with dissimilar constructs (measure is distinct): want measurements of different things to be distinct. 5. Criterion - measure is correlated with a relevant outcome now (concurrent) or in the future (predictive)

Subunits of construct validity

Factorial Designs allow researchers to examine the influence of two or more variables simultaneously. This includes examination of Main Effects of each factor AND the interaction between factors An Interaction is when the effect of one factor depends on the level of the other factor. In other words, the influence of one factor changes across the levels of the other factor.

Summary of Factorial Design

T

T or F: In a factorial design there are as many Main Effects as there are factors One main effect per factor There are as many Interactions as there are potential combinations of the factors 2 Factors = 1 interaction (A*B) 3 Factors = 4 interactions (A*B, A*C, B*C, A*B*C)

ALL TRUE

T or F: Within subjects means that participants are exposed to all levels of the manipulation Between subjects means that participants experience only one level (one condition) Pretest means a measurement that happens before any manipulation Posttest means a measurement that happens after a manipulation Repeated measurement of DV for pre and post tests

T

T or F: Determining statistical significance is a yes/no decision based on α

T

T or F: The further spread out the scores are, the bigger the variance will be.

T

T or F: Higher levels of noise (random error) = unreliability; higher levels of systematic error = invalidity

T

T or F: If there is an interaction examined one way, there will be an interaction the other way; if there is no interaction one way, there will be no interaction the other way

T

T or F: If both of your variables are categorical, you can test for association with a chi-square test A chi-square test evaluates the predicted and observed counts in each of the cells of a contingency table
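A from-scratch sketch of the chi-square computation on a hypothetical 2 x 2 contingency table (the observed counts are invented):

```python
# Chi-square test of independence on a 2x2 contingency table:
# compare observed counts to the counts expected under independence.
# The counts are hypothetical.
observed = [[30, 10],   # rows: group A / group B
            [20, 40]]   # cols: outcome yes / outcome no

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi_sq += (obs - expected) ** 2 / expected

print(round(chi_sq, 2))  # compare to the chi-square critical value for df = 1
```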

ALL TRUE

T or F: If both of your variables are quantitative (continuous), you can test for association with correlation and simple linear regression Correlation is given (usually) as the Pearson correlation coefficient (r) Remember! r contains information about strength and direction of association Regression is attempting to find the best fit line of the form y = a + bx
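A from-scratch Python sketch of Pearson's r and the best-fit line y = a + bx (the x and y values are made up):

```python
# Pearson correlation and simple linear regression for two
# quantitative variables. Data are hypothetical.
from statistics import mean, stdev, variance

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = mean(x), mean(y)
# Pearson r: covariance divided by the product of the SDs
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
r = cov / (stdev(x) * stdev(y))

# Regression: slope b and intercept a of the best-fit line y = a + bx
b = cov / variance(x)
a = my - b * mx

print(round(r, 2))        # strength and direction of the association
print(round(b, 2), round(a, 2))
```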

T Research questions ask: Is there a difference between these groups?

T or F: If your independent variable is categorical, and your dependent variable is quantitative, you can test for association with t-tests and one-way ANOVA Difference = the number of levels of the IV: t-tests are for 2 levels ANOVA for 2 and more levels

T

T or F: Types of variables we have are what drive what we choose to do data analyses!

A type of order effect: an experimental group changes over time because repeated testing has affected the participants. Subtypes include fatigue effects and practice effects. Order matters AND participation in a particular treatment can permanently change participants. For example, compare lobotomy to antidepressants: a group that gets the lobotomy first won't have the same experience once they get the antidepressants, if they even live to get them (requires a within-subjects design for this effect).

Testing

Self-reported cognitive items (e.g., multiple choice, analogies) Performance (e.g., write this computer code)

Tests consist of

Header is different than rest of paper Includes running head, title, author, author affiliation, author note (optional) Running head < 50 characters (characters= letters and spaces)

Title page APA

Between subjects: Each group gets only one treatment. Also called independent samples. Require more people. No contamination across independent-variable levels. Strength of conclusions depends on sample size; larger is better for between subjects. Within subjects: Require fewer people; individuals serve as their own controls. Potential order effects and chance of experimental demand. Participants serve as their own comparison, which is why we need fewer people to draw good conclusions.

Two major types of experiments

Tests of association Correlation Simple Linear Regression and Multiple Regression Chi-Square Test of Independence Tests of mean differences Independent samples t test Analysis of Variance (ANOVA) Factorial ANOVA

Two major types of statistical (bivariate) analyses

Quasi-experiments lack random assignment. Correlational studies lack manipulation (and therefore also random assignment). Correlational: just measuring variables, not intervening on participants; low in control, so we can't make the same claims about causality. Quasi: often still features some kind of manipulation, but we can't make as strong causal claims.

Two non experiments

Manipulation: Systematic variation of treatment (IV) by a researcher, for the purposes of observing effect on DV Random assignment: Researcher uses random rule (coin flip, random number generator) to assign Ps to condition

Two pillars of experiments

1. Metric or unit: "The administrator used an ordinal scale, which has a categorical metric." "Theresa measured math ability on a scale from 0 to 100." 2. Composite of items; a synonym for inventory: "Shantae summed the responses on the test to form a scale score." "Jered averaged items 1 to 10 on the personality questionnaire to create an inventory."

Two ways that the term scale is used

The effect of Factor A at Levels of Factor B The effect of Factor B at Levels of Factor A

Two ways to examine interactions

Categorical (aka grouping): Discrete groups Called nominal or ordinal Quantitative (or continuous): Full spectrum of values Called interval or ratio

Types of data

S (Self-Report) I (Informant-Report) L (Life) B (Behavioral)

Types of data

Research report Purpose: To present the results of research Contains data from one or more studies Research review (or literature review) Purpose: To critically evaluate the state of knowledge on a research question No new data Synthesizes and evaluates research reports Meta-analysis Purpose: To aggregate knowledge, quantitatively, from multiple studies on the same topic Collects data from multiple sources and does new analyses to aggregate that data Dissertations/Theses Definition: Independent research by a PhD student Purpose: To get that PhD student a degree DO NOT USE. Theory Development/Critique Purpose: To develop theoretical predictions/hypotheses (or critique a theory's predictions) No new data; philosophical in tone

Types of psychological writing... explain them.

Frequency claim: Examine one variable in detail (univariate) How frequently do participants report being in a good mood on a day to day basis is an example. Association & basic causal claims: Examine link between two variables (bivariate) Complex questions: examine three or more variables at the same time (multivariate)

Types of research questions

GROUPING: nominal. ORDERED GROUPS: ordinal. CONTINUOUS: interval. MORE CONTINUOUS: ratio. The scale of measurement of your variables determines the statistical test you need. Ordered from most categorical (top) to most continuous (bottom). Nominal: naming or assigning to groups (ex: asked your favorite color, then grouped that way). Ordinal: groups with some order (ex: run a 5k and be grouped into faster than 15 min, 15-25 min, and slower than 25 min); the distances between these groups are not necessarily equal, so the meaning of the numbers is not constant. Interval: numbers can be assigned and the distances between them are equal. Ratio: zero is a true zero, meaning the absence of the characteristic being measured. Think about the meaning of zero, not just the fact that it is zero: is it a true zero? Mostly we get interval; ratio is rare.

Types of scale of measurement and relevancy

Descriptive Statistics: To DESCRIBE and SUMMARIZE our data Inferential Statistics: To DRAW INFERENCES about a(n unobserved) population from a sample

Types of stats and why they are necessary

semantic differential

Used when trying to measure attitudes using antonyms

If two variables are correlated .70 or greater, they are likely measuring the same thing. - Measurement error (unreliability) attenuates correlations "Strong" validity coefficients are likely > .40 (convergent, predictive, concurrent) Correlations near zero +/- .10 indicate discriminant validity THESE ARE JUST GUIDELINES

Validity rules of thumb

Predictor variable and outcome variable are measured and can take any scale of measurement. Predictor variable: the one hypothesized to come first and cause the outcome variable.

Variables in correlational studies

Variance = sum of [(each score minus mean) squared] divided by number of Ss minus 1 Variance = Σ(X − M)²/(N − 1) Variance = SS/(N − 1)

Variance
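The formula can be checked from scratch in Python against the stdlib result (the scores are hypothetical):

```python
# Variance from scratch: sum of squared deviations (SS) divided by N - 1.
# Scores are hypothetical.
from statistics import variance

scores = [4, 8, 6, 2, 10]
m = sum(scores) / len(scores)            # M = ΣX / N
ss = sum((x - m) ** 2 for x in scores)   # SS = Σ(X − M)²
var = ss / (len(scores) - 1)             # Variance = SS / (N − 1)

print(var)
assert var == variance(scores)           # matches the stdlib result
```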

Cronbach's alpha: the percentage of variability in measurement that is due to the construct (true score); one minus reliability = noise (random error). Test-retest reliability: administer the same tool at two time points and correlate the scores.

Ways to assess reliability
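One common way to compute Cronbach's alpha is the standard formula α = k/(k − 1) × (1 − Σ item variances / variance of total scores). A Python sketch with hypothetical item responses:

```python
# Cronbach's alpha from its standard formula:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
# Responses are hypothetical (rows = respondents, cols = items).
from statistics import variance

responses = [
    [3, 4, 3],
    [5, 5, 4],
    [1, 2, 2],
    [4, 3, 4],
]

k = len(responses[0])                    # number of items
items = list(zip(*responses))            # one tuple of scores per item
totals = [sum(person) for person in responses]

alpha = (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))
print(round(alpha, 2))
```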

Manipulation and random assignment allow us to make causal claims

What are manipulation and random assignment used for?

The assignment of numbers to observed phenomena according to a rule. Rules come from: correspondence to physical properties; convention (an agreed-upon rule).

What is measurement?

write an outline showing how they relate to one another. Are there groups of articles that agree and disagree with each other? Are there groups of articles that use a similar research methodology?

What to include in the outline

When an already existing inventory is not available.

When do we use item construction?

Each point on the graph corresponds to a mean in the table. Parallel lines tell us there is constant change and NO interaction. Nonparallel lines indicate that there IS an interaction.

When looking at interaction graphs

Factorial Experiments

When the answer is: "It depends..."

Interact: the effect of one depends on the other. Looking at these together helps us see whether effects differ from one condition to another. The ability to examine interactions is the primary advantage of a factorial design.

Why are factorial designs used instead of conducting two separate experiments?

First step to designing a research study To see what's already been investigated on a topic Tells you where the boundary of knowledge is on a research topic, so you know what needs to be done in your research Lit review is key piece of introduction to research report

Why do a Lit Review?

To classify (e.g., clinical) To select (e.g., workplace) To understand change (e.g., educational) To do research (all areas) To make an informed, fair judgment

Why measure?

Number of response options 7 +/- 2 (Likert scales) so from 5-9 response options because people have a hard time keeping more in mind. Anchor text Avoid extremes (always, never) Choose equal appearing intervals Cliff (1959) (not at all, slightly, quite a bit, extremely) Unipolar vs. bipolar Unipolar: from none to a lot (ex: extremely talkative to not) Bipolar: from extremely one thing to its opposite (ex: extremely loud to extremely quiet) Choice depends on construct

Writing good inventory items

Keep it simple: don't use confusing language. Use informal language: again, about comprehension. Avoid negations: words that force the reader to apply a negative term to another word. Avoid double-barreled questions: asking two things at once. Avoid loaded questions: any question where it might be clear to the participant what the researcher is going for (what the socially desirable answer is, etc.). Avoid questions that restrict variance: no variance means we can't tell participants apart; we want a question with some spread.

Writing good items

Claims are conclusions researcher wants to draw from study

Difference between claim and variable.

1" margins, double-spaced, 12-point Times New Roman, no extra space between paragraphs. Use past tense. Write in third person, except the method section. Avoid "I think"; reserve first person for actions you took as the researcher (e.g., "I/We manipulated..."). Use active voice: "Participants took surveys" rather than the wordier "Surveys were administered to the participants." Use inclusive language: all people, not just men; individuals with disabilities, rather than disabled people. Numbers < 10 get spelled out; for numbers 10+ use digits. At the beginning of a sentence, numbers are always spelled out. Numbers that represent measurements, statistical values, and sample sizes always use digits. "I.e.," means you have an all-inclusive list; "e.g.," means for example. Avoid etc. Descriptive title (no more than 12 words). Running head (50-character max).

APA general guidelines

Abstract (page 2) No more than 150 words Brief overview of major parts of paper (intro, methods, results, discussion are all major so need a little of all)

Abstract APA

participants guess what the study's purpose is and change their behavior in the expected direction Participants beliefs are potentially impacting the study

Demand characteristic

a second variable that unintentionally varies systematically with the IV

Design confound

Precise manipulation of the independent variable, and don't manipulate anything else.

Experimental control aka increased internal validity =

Manipulation: experimenter is actively controlling experience participant receives and there are controlled and varying conditions Random assignment: use chance to determine what condition you put each participant in

Experiments

Greater than .70 ("acceptable"). Greater than .80 ("good"). Greater than .90 ("excellent"). Closer to 1 is more reliable. Lower than .70 isn't an automatic reason to throw the data out.

How large should reliability be?

Begin with broad topic e.g., body image Read a few recently published articles in your topic area Hone in on a more specific research question e.g., Is objectification related to body image in men as it is for women? Hunt down additional articles in your topic area Relevance is key Assemble variety of relevant articles Read them, and take notes. Annotated Bibliography to get organized

How to do Lit Review?

Adjustments to the instrument: lengthen the test or inventory; increase homogeneity of items. Adjustments to the testing environment: increase standardization of testing conditions; take steps to ensure that Ps take the test seriously (reduce careless responding). Aggregation: take info from different sources and combine it all together.

How to increase reliability

Manipulated variable (IV) consists of distinct treatment groups to which participants are assigned Outcome variable (DV) is continuously measured

IV and DV in experiments

experimental group changes over time but only because repeated measurements have changed the quality of the measurement instrument. Repeated measurement is changing the quality of the measurement instrument itself.

Instrumentation

Opening paragraph Attention getter Broad/quick overview of research question Transition Theoretical background (repeat as needed) Identify relevant theory (by name, if possible) Describe why theory is applicable to R. Q. Transition Review of previous research Identify relevant previous research (cite) Describe connection btw. studies and R. Q. Transition Hypotheses Give overview of current research design List hypotheses and tie back to R. Q.

Intro Structure APA

Begins on page 3 Title centered at top of page Return and indent - then begin first paragraph Purposes of introduction Describe research question Give theory underlying research question Describe previous research on topic State hypotheses for current study

Introduction APA

Items are the individual building blocks of tests and inventories. Tests are scored dichotomously (correct/incorrect); measure of ability that has right or wrong response Inventories (or questionnaires) are scored in many ways

Item Vs. Test or Inventory

number of different things we are manipulating

Number of variables=

experimental group's ratings differ from a comparison group's, but only because the researcher expects the groups' ratings to differ; the researcher's beliefs are potentially impacting the study

Observer bias

Odds Ratios → (A*D)/(B*C) Equal odds = 1.0 Small effect = 1.5 Medium effect = 2.5 Large effect = 4.3

Odds Ratio
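The formula above can be checked directly in Python (the 2x2 cell counts are hypothetical):

```python
# Odds ratio from the formula (A*D)/(B*C) for a 2x2 table of
# categorical outcomes. Cell counts are hypothetical.
#                 outcome yes   outcome no
# group 1:            A             B
# group 2:            C             D
a, b = 30, 10
c, d = 20, 40

odds_ratio = (a * d) / (b * c)
print(odds_ratio)  # 6.0: well past the 4.3 "large effect" benchmark
```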

Special challenges because psychological states (aka constructs or variables) we want to observe are latent (not directly observable) Examples? Anxiety, stress, perceptual abilities, learning, memory

Psychological measurement and examples

Whether one variable causes another (an outcome) Whether two variables are associated with one another The rate or frequency of some outcome

Purpose of research is to understand what ?

blinding = keep people in the dark as much as possible about the study

Solution for demand characteristic and observer bias

Acquiescence bias (yea-saying): people agree with everything, or agree even when they don't fully agree. Fence sitting: individuals straddle the middle of the scale. Both are particularly important in cross-cultural settings. Socially desirable responding: people choose the "acceptable" answer instead of the true answer. Blind spots: asking people to report on topics they lack knowledge of; they may change answers to fit what they think. Behavioral data: Observer bias: researchers see what they want to see; observations are made to fit expectations. Expectancy effects: researchers create results via their own expectations; people change behavior to fit those expectations. Reactivity: people change their behavior when being watched.

Sources of invalidity and self reports

Standard deviation = Square root of variance SD = sqrt(variance)

Standard Deviation

