Research Stats 2: Exam 3

Baron and Kenny: Moderator-Mediator

- (a) the moderator function of third variables, which partitions a focal independent variable into subgroups that establish its domains of maximal effectiveness in regard to a given dependent variable, and (b) the mediator function of a third variable, which represents the generative mechanism through which the focal independent variable is able to influence the dependent variable of interest. - For example, it is possible that in some problem areas disagreements about mediators can be resolved by treating certain variables as moderators. - That is, moderators may involve either manipulations or assessments and either situational or person variables. Moreover, mediators are in no way restricted to verbal reports or, for that matter, to individual-level variables.
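
In regression terms, the moderator function corresponds to an interaction between the focal predictor and the third variable. A minimal sketch of such a test, assuming simulated data and the statsmodels formula interface (the variable names stress, support, and mood are illustrative, not from the article):

```python
# Hedged sketch: testing a moderator as a predictor-by-moderator interaction term.
# Data are simulated; variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
stress = rng.normal(size=n)                       # focal independent variable
support = rng.normal(size=n)                      # hypothesized moderator
# Simulate an outcome in which support weakens the stress -> mood relation
mood = -0.5 * stress + 0.2 * support + 0.4 * stress * support + rng.normal(size=n)
df = pd.DataFrame({"stress": stress, "support": support, "mood": mood})

# 'stress * support' expands to both main effects plus their product term;
# a significant stress:support coefficient is evidence of moderation.
model = smf.ols("mood ~ stress * support", data=df).fit()
print(model.summary())
```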

Warner II - Chapter 6 - ANOVA advantage vs. regression

- An advantage of choosing ANOVA as the method for comparing group means is that the SPSS procedures provide a wider range of options for follow-up analysis, for example, post hoc protected tests. Also, when ANOVA is used, interaction terms are generated automatically for all pairs of (categorical) predictor variables or factors, so it is less likely that a researcher will fail to notice an interaction when the analysis is performed as a factorial ANOVA (as discussed in Volume I, Chapter 16 [Warner, 2020]) than when a comparison of group means is performed using dummy variables as predictors in a regression. ANOVA does not assume a linear relationship between scores on categorical predictor variables and scores on quantitative outcome variables. A quantitative predictor can be added to an ANOVA model (this type of analysis, called analysis of covariance [ANCOVA], is discussed in Chapter 8). - On the other hand, an advantage of choosing regression as the method for comparing group means is that it is easy to use quantitative predictor variables along with group membership predictor variables to predict scores on a quantitative outcome variable. Regression analysis yields equations that can be used to generate different predicted scores for cases with different score values on both quantitative and dummy predictor variables. A possible disadvantage of the regression approach is that interaction terms are not automatically included in a regression; the data analyst must specifically create a new variable (the product of the two variables involved in the interaction) and add that new variable as a predictor. Thus, unless they specifically include interaction terms in their models (as discussed in Chapter 7), data analysts who use regression analysis may fail to notice interactions between predictors. - Ultimately, however, ANOVA and regression with dummy predictor variables yield essentially the same information about predicted scores for different groups. In many research situations, ANOVA may be a more convenient method to assess differences among group means. However, regression with dummy variables provides a viable alternative, and in some research situations (where predictor variables include both categorical and quantitative variables), a regression analysis may be a more convenient way of setting up the analysis.
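
The claim that ANOVA and dummy-variable regression yield essentially the same information can be checked directly. A minimal sketch with simulated data (group labels and score values are invented; this is not SPSS output, just the same comparison expressed in statsmodels):

```python
# Hedged sketch: one-way ANOVA and regression with dummy-coded group membership
# give the same omnibus test. Data are simulated; group labels are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 30),
    "score": np.concatenate([rng.normal(50, 10, 30),
                             rng.normal(55, 10, 30),
                             rng.normal(60, 10, 30)]),
})

# Regression with treatment (dummy) coding of the categorical factor
reg = smf.ols("score ~ C(group)", data=df).fit()
print("omnibus F:", round(reg.fvalue, 2), "p:", round(reg.f_pvalue, 4))

# The ANOVA table computed from the same fitted model reproduces that F test
print(sm.stats.anova_lm(reg, typ=2))
```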

CBL Chapter 19 - Meta-analysis - 7

- Before jumping to this conclusion, it is important to be sure that the statistical power of the meta-analysis is sufficient. The statistical power afforded to detect whether a summary effect is significant is dependent on both the number of studies and the number of participants in each of the studies. If too few studies each containing small samples are used, the likelihood of discovering a reliable summary effect is minimized. - This pooling results in greater statistical power than any single primary study in that area of research. Thus, meta-analysis is ideal for synthesizing areas of primary research typified by small samples. - Applying their binomial effect size display (BESD) to interpret the summary effect provides graphic evidence of the practical importance of a relationship between two variables. The display is especially informative for a meta-analysis that yields, at cursory inspection, a small summary effect. - For the purpose of creating a BESD, this value must first be converted to the correlation coefficient. Applying a formula to convert between effect sizes, the computation yields a summary r = .20. The BESD for this overall effect is shown in Figure 19.4. The BESD requires the hypothetical assumption that the entire meta-analysis could be represented and summarized by a total of 200 participants. Furthermore, each row and column must add up to 100 participants. - To create a BESD for our summary r = .20, simply remove the decimal place, which yields 20. Then insert numbers for each of the four cells so that cells in each row or column are separated by exactly 20. - The BESD for our meta-analysis should be interpreted as follows. For participants in the attractive condition (rather than the unattractive condition), the rate of liking (rather than not liking) is 20 percentage points higher. This display is a visually helpful device to highlight the practical importance of the summary effect, especially if it is presented to a general audience. - Known as vote counting, it is a method of literature review that involves counting and comparing the number of primary studies that are statistically significant or not. - A major problem with vote counting is that a decision of whether a theoretical link is tenable is usually made on the basis of such borderline votes. - The problem with vote counting is twofold. First, probability values between different studies are not directly comparable, as was noted. Second, perhaps more importantly, this approach loses considerable information, as the magnitude of the effect and its direction are completely ignored. Vote counting relies strictly on the dichotomous distinction of study significance or nonsignificance, whereas the overall effect obtained in a meta-analysis is more nuanced and accurate than those based on mere tabulation of supportive or non-supportive study outcomes. - Conversely, the results of the meta-analysis showed a significant summary effect, with a value that indicates the magnitude of the relation. The difference in conclusion stems from accumulating the sample sizes from all the studies and the use of effect sizes in meta-analysis to estimate the strength of effects. In a meta-analysis, even studies with small or nonsignificant effect sizes add information about the general direction of the obtained summary effect. This added information often is important in clarifying the strength and direction of the relationship between constructs.
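
The BESD construction described above is simple enough to write out: remove the decimal from the summary r, then fill a 2 x 2 table whose rows and columns each sum to 100. A minimal sketch, using the chapter's r = .20 example (the function name besd is mine):

```python
# Hedged sketch: building a binomial effect size display (BESD) from a summary r.
def besd(r):
    """Return a 2x2 BESD table; each row and column sums to 100 hypothetical participants."""
    rate_high = 50 + 100 * r / 2      # e.g., r = .20 -> 60 "liking" in the attractive row
    rate_low = 50 - 100 * r / 2       # e.g., r = .20 -> 40 "liking" in the unattractive row
    return {
        "attractive":   {"liking": rate_high, "not liking": 100 - rate_high},
        "unattractive": {"liking": rate_low,  "not liking": 100 - rate_low},
    }

print(besd(0.20))
# {'attractive': {'liking': 60.0, 'not liking': 40.0},
#  'unattractive': {'liking': 40.0, 'not liking': 60.0}}
```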

CBL Chapter 16 - Measures of memory - 2

- First, participants often are very cautious about which items they report on a recall measure (assuming that they are being tested for accuracy), and so do not list items unless they are fairly confident that they actually appeared. - Second, when intrusions do occur, we cannot tell whether they reflect cognitions that were generated at the time of the original presentation or simply bad "guesses" about information that cannot be recalled correctly.4 - Another method of analysis of recall protocols involves the researcher noting the sequencing of the items recalled, specifically which items are remembered first or later, and/or which items are recalled together. - The former provides information about accessibility in memory (i.e., which information is recalled most easily and rapidly and which requires more search time and effort). The latter (measures of "clustering" in recall) provides information about how material has been organized in memory. - Clustering measures are most often useful when information has been originally presented in some random or haphazard order but then appears in a different, more systematic order on the recall listings. Clustering measures are indices of the frequency with which items of the same "type" (or category) appear sequentially in the recall protocol compared to chance. - They found that when the persons were familiar individuals known to the perceiver, information was encoded and organized by individual person. But when the social stimuli were unfamiliar persons, memory was organized by behavioral categories rather than on a person-by-person basis. - The difference between recall and recognition measures of memory parallels the difference between an essay exam and a multiple-choice exam (Taylor & Fiske, 1981). As with a good multiple-choice exam, the researcher using recognition methods must carefully design and select wrong answers ("foils") that will appear to be correct if the respondent's memory of the earlier material includes cognitions and assumptions that were not actually presented at the time. With recognition measures, the researcher's interest is primarily in the types of errors that are made, rather than in the accuracy of memory. - In general, there are two different kinds of recognition tasks. In one paradigm the respondent's task is to review each item presented by the researcher and indicate whether that item of information was or was not seen before by responding "old" or "new" (or "true," "false"). False recognitions (responding "old" to an item that was not present earlier) provide information about how the original materials were encoded and stored along with prior knowledge or inferences that the perceiver brought to bear at the time the information was received and processed. - The second type of recognition measure involves assessing memory confusions. In this case, participants are given two or more options and asked to indicate which one corresponds to information presented in the original materials. - Both types of recognition task suffer from one major disadvantage as measures of what has been encoded and stored in memory. Both accurate and false recognition can reflect information that was encoded at the time of presentation, but they also can reflect guessing or inferences made by respondents at the time the memory measure is taken—that is, memory may be constructed (or reconstructed) when the person is tested on what he or she remembers.

CBL Chapter 16 - Social psychophysiology

- However, they all share the disadvantage that the researcher must be concerned that participants not become aware of the true purpose of the priming manipulation, because awareness can influence the critical responses. Internal physiological responses are less susceptible to alteration by conscious awareness and thus provide the possibility for yet less reactive methods for assessing implicit, unintended responses. - One major problem is that there is rarely, if ever, a simple one-to-one relationship between a single physiological response and some specific internal psychological state. - Fortunately, recent advances in theory and methodology now allow for multivariate physiological assessments that are capable of distinguishing different motivational states, positive and negative affect, attention, and active cognitive processing. - Specific patterns of cardiovascular responses can distinguish between feelings of threat (a negative, avoidance state) versus challenge (a positive, approach state) as motivational states in anticipation of potentially difficult or stressful situations, such as performing difficult arithmetic problems or preparing for a public speech. - One of the difficulties of many physiological indicators of arousal is that these measures do not distinguish between arousal due to positive affect or approach and negative affect or avoidance. Facial expressions, however, do vary in ways that correspond to specific positive or negative social emotions. - The facial action coding system (FACS) is used to assess various emotional states on the basis of detailed aspects of spontaneous facial expressions. Using this system, researchers can be trained to observe people's facial expressions to distinguish among the basic emotions of happiness, sadness, anger, fear, disgust, contempt, and surprise, which Ekman (2007) considered universal across cultures. - First, training to use the FACS is extensive and requires a considerable investment of time and effort. Second, although facial expressions are usually produced without conscious intention, it is possible to control and conceal facial responses if one makes an effort to do so. Thus, facial expression is not always a valid measure of implicit affect. - Thus, Cacioppo and his colleagues (e.g., Cacioppo & Petty, 1981; Cacioppo, Petty, Losch, & Kim, 1986) have recommended the use of facial electromyograms (EMG) specific to targeted facial muscles as physiological markers of positive and negative affective states. EMG measures focus in particular on specific muscles of the eyes and mouth associated with corrugator ("frown muscles") and zygomaticus ("smile muscles") activity. - Another minute facial muscle measure that may prove useful to index affective states is electromyograms specific to reflexive eyeblinks (Blascovich, 2000). The startle eyeblink response refers to the reflexive blinks that occur when individuals perceive an unexpected, relatively intense stimulus, such as a loud sound. The startle eyeblink reflex is negatively toned. - Hence Lang and his colleagues (Lang, Bradley, & Cuthbert, 1990, 1992) have reasoned that the eyeblink response should be facilitated or enhanced if the perceiver is in a negative affective state and inhibited if the perceiver is experiencing ongoing positive affect. - In transcranial magnetic stimulation (TMS), external magnetic pulses are transmitted through the scalp to different brain regions to identify the part of the brain that produces a specific behavior.
- In research using techniques of event-related potential (ERP), electrodes attached to multiple regions of the scalp are used to measure rapid waves of electrical brain activity arising from the processing of information. - An electrical storm of voltage fluctuations is generated while a participant is processing information. - In both types of measures, the location and timing interval of differential brain activity is used to assess emotional responding, categorization, and evaluative reactions. - More recently, functional magnetic resonance imaging (fMRI), which measures the relative degree of oxygen flow through blood vessels in the brain, identifies which brain regions are implicated in various cognitive and emotional processing functions. Fluctuations in low and high oxygen flow throughout the brain as a result of information processing are detected by placing a participant in a magnetic resonance scanner. - Unlike a PET scan, the methods of fMRI do not require participants to ingest some sort of radioactive biochemical substance to trace the movement of a chemical in the brain during mental processing. This is a main reason why fMRI has virtually supplanted PET scanning research. - A neutral stimulus (e.g., a blank screen) is often used as a base assessment of random brain activity. This is necessary because the brain remains active even when a participant is not actively processing information or is distracted by random thoughts. A measurable difference in brain activity beyond that of the neutral stimulus may be attributed to processing of the stimulus (signal). A large signal relative to baseline indicates brain reaction and processing of the presented stimulus. Another strategy to rule out interference due to random brain noise is to repeatedly present the same stimulus in sequential trials to the same participant. - The patterns of brain activity are then averaged across the multiple trials.

CBL Chapter 11 - Random Sampling - 2

- In simple random sampling, every member of the population in question has an equal (and nonzero) probability of being selected every time a unit is drawn for inclusion in the sample.2 Simple random sampling should not be confused with the overarching term of random sampling. The probability of selection in simple random sampling is equal to the sampling fraction. It is calculated by dividing the number of units to be included in the sample by the total number of units in the population. - Sampling approaches of this type are called epsem designs, which stands for "equal probability of selection method." Simple random sampling, systematic sampling, and proportionate stratified sampling approaches are examples of epsem designs. - Systematic sampling requires sampling every predetermined nth member from a population. In this approach, as before, a specific sample size is determined. Then, the size of the sample is divided by the total eligible population to determine the sampling fraction. - It differs from simple random sampling because the probability of every combination of units being included in the sample is not equal. Thus, if the number 15 were randomly chosen as our starting point, the probability of the 16th student being included in the sample is zero, because our sampling interval is 20. However, the probability of students 15 and 35 both being included in the sample is 1/20, because if 15 is chosen (a 1 in 20 chance), then 35 is sure to be chosen as well. This technique is classified as a type of random sampling because the sampling begins with a random starting point. - To ensure that an adequate number of participants are selected from each of the different subgroups of the population, survey researchers generally make use of a technique known as stratified sampling, in which the population is divided into theoretically meaningful or empirically important strata before members are randomly drawn from each stratum (or subpopulation) and used for the sample. Respondents then are randomly selected from within each stratum, and this permits prespecified subsample sizes for each stratum. The subsamples are later combined to yield the final sample. - Frequently, the same sampling fraction is used for each of the strata; in such a case, the result is called a proportionate stratified (random) sample. Sometimes a different sampling fraction is employed within each stratum; in this instance, the resulting sample is termed a disproportionate stratified (random) sample. - Because the same sampling fraction is employed for both strata, the sample is a proportionate stratified random sample (i.e., an "epsem" design). - Note that in this approach, the proportion of Democrats and Republicans is approximately the same in both the population and the sample. - This approach is called disproportionate stratified (random) sampling because the five subgroups formed by the SES stratification are not proportionately represented across the overall sample. The two highest SES groups are oversampled relative to the other three groups. - Why should the survey researcher bother to stratify? Earlier, we suggested that one compelling reason was to offset the possible selection of an unusual or nonrepresentative group of respondents from a subpopulation. Although this nonrepresentativeness is possible, it is unlikely if members of a subpopulation are reasonably well represented in the population.
More importantly, proportionate stratification helps ensure that the distribution of respondent units of each group in the sample is the same as that in the population, and this enhances the precision of our estimate. In other words, by ensuring that each of the subpopulations is adequately represented in our sample, we reduce sampling error in our estimates, because unaccounted variance that would have occurred as a result of the categorization variable is now accounted for. - Stratification by political party differences enhances the precision of our estimate. - The trade-off bears consideration—stratification can be expensive and difficult, so to determine whether it is worth the costs involved, we should first decide whether or not the stratification factor is in some way related systematically to the variables under study.
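
The sampling fraction, systematic selection, and proportionate stratification described above can be illustrated in a few lines. This is a hedged sketch; the population of 1,000 units, the sample size of 50, and the two party strata are invented for the example:

```python
# Hedged sketch: simple random, systematic, and proportionate stratified sampling.
# Population size, sample size, and strata are invented for illustration.
import random

random.seed(42)
population = list(range(1, 1001))               # 1,000 eligible units
n = 50
sampling_fraction = n / len(population)         # 50 / 1,000 = 1/20

# Simple random sampling: each unit has a 1-in-20 chance of selection
srs = random.sample(population, n)

# Systematic sampling: random start, then every kth unit (k = sampling interval)
k = len(population) // n                        # interval of 20
start = random.randint(1, k)
systematic = population[start - 1::k]

# Proportionate stratified sampling: the same 1/20 fraction within each stratum
strata = {"Democrat": population[:600], "Republican": population[600:]}
stratified = [unit for members in strata.values()
              for unit in random.sample(members, int(len(members) * sampling_fraction))]

print(sampling_fraction, len(srs), len(systematic), len(stratified))
```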

Crano - History

- Islamic Golden Age (8th-15th centuries AD), where important epistemological advances in the development of the scientific method, and hence, in social psychology's development as a scientific discipline, were made. - experimental (or empirical) method. - In his Book of Optics, Alhazen conformed to the scientific method in exploring visual perception and optical illusions, which play important roles in contemporary social psychological research on social influence. - Scientists of the Arabian Peninsula at the time of Islam's Golden Age even anticipated contemporary notions of error and bias. Al-Biruni advocated replication and multiple measurements to mitigate random and systematic error. - However, because of the tenuous nature of measurement and research methodology of the time, knowledge gained from studies involving systematic variation between groups using a deductive, or top-down, approach was considered suspect by many. More certain for this audience was rational knowledge concerning the cause of an outcome, which was gained by commonplace observation and induction from fundamental axioms. - Thus, if the results of an empirical study using systematic variation were inconsistent with what was "known" to be true axiomatically, rejection of the study was more likely than rejection of the axiom. - Translations of Islamic commentaries fostered an empirical orientation in the European Renaissance, and were important influences on Roger Bacon's development and refinement of the scientific method - suppositions, reveal heretofore undiscovered truths, and as such, learn Nature's (God's) secrets - exquisite detail of the ways in which experimental variables were defined, the manner in which experiments were conducted, and on replication (or verification) by independent researchers. In many ways, he anticipated Bridgman's (1927) operationism, which was instrumental in the development of logical positivism - "We mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations" (emphasis in original).3 - They viewed such observation as irrelevant, as it would either confirm what they knew or erroneously disconfirm the obvious. - His adoption of experimentation was far more enthusiastic than Galileo's, and his Novum Organum outlined a proto-experimental approach that was based heavily on the collection of facts, rather than the recitation of known truths. These facts were compared to help eliminate alternate hypotheses and ultimately lead to scientific truth. - The deductive orientation, evident from the time of the early Islamic scholars, began to gain steam at this point in the history of science and, by extension, in the history of social psychology. - Principia include the principle that "arguments based on induction may not be nullified by hypotheses" - the experimental method had contended from the time of the Islamic Golden Age. The battle would continue over the years, with empirical deductive methods gradually but inexorably overhauling inductive approaches. In mainstream social psychology today, the contest is clearly settled; the hypothetico-deductive method has been adopted almost uniformly.

Crano - multiple regression

- Mediating and moderating mechanisms may be evaluated using multiple regression, an extension of the bivariate correlation framework that emerged from the work of Galton, Pearson, and Spearman. Multiple regression analysis enables researchers to assess and forecast the contribution of more than one predictor on a single criterion - is its capability to remove contaminating covariates that might affect an outcome. Multiple regression facilitates assessment of spurious causes and statistically equalizes sampled participants on extraneous factors. The challenge for investigators attempting to control for confounds statistically is to anticipate such artifacts and measure them in advance of the analysis - Even so, though regression designs often are not as rigorous, or controlled, as randomized experimental designs, they are assuredly a step above designs that ignore measurable and obvious sources of error. - In addition to underspecification, matched samples often are drawn from populations with decidedly different (mean) values on the critical matching variable. - Propensity score analysis, an improvement and possible corrective to simple matching, can make use of a multitude of indicators to match respondents from different populations in attempting to equalize initial status of groups that cannot be assigned randomly to conditions. - Matching, common in quasi-experimental contexts, suffers from the problem of the missing critical variable (Crano & Brewer, 2002); no matter how hard one tries, it is inevitable that a critically important matching variable is left out of the analysis. - this type of matching uses measured variables chosen on the basis of theoretical relevance to create a propensity score, the predicted probability of the respondent's membership in a treatment or comparison group. Typically, these scores are determined via logistic regression. - With propensity matching, between-group differences are more confidently attributed to the treatment than to initial differences, which presumably are equalized in the matching process. - it is based on the assumption that the variables entered into the matching routine are measured without error. - The advantage of allowing multiple predictors, which can be used to forecast one or more criterion variables. The approach allows explicit assessment of measurement errors that can be attributed to measured variables via latent factors (Jöreskog, 1973). The path model approach specifies predictive associations between variables, but lacks a latent measurement component. - Wright (1918, 1921, 1923) developed path analysis to investigate the relative contributions of environment and heredity to physical characteristics. In likely the earliest path diagram, Wright (1920) illustrated how parental genes, environment, and developmental differences predicted variations in color patterns of guinea pig offspring from the same litter. - At a minimum, path analysis requires specification of a theoretical model in advance of the analysis - structural equation modeling, which, along with other latent modeling approaches (e.g., factor analysis, canonical correlation), is an advance over earlier multiple correlational models whose measured indicators are impure representations of the underlying concept, and therefore contain unreliability attributable to measurement error (Ullman & Bentler, 2003). The latent approaches are designed to remove such error, and thereby improve the standard regression models that assume perfect reliability of measures.
- Multilevel modeling approaches, close cousins of the structural equation modeling techniques, are becoming more widely used in social psychology - provide information on both group and individual variations across time (if longitudinal data are available), and sidestep many of the limitations of the more standard random effects multilevel models.
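
A minimal sketch of the propensity-score step described above, assuming simulated covariates, a logistic regression for the scores, and a simple 1:1 nearest-neighbor match (real applications would add balance diagnostics and a principled matching algorithm):

```python
# Hedged sketch: propensity scores via logistic regression, then nearest-neighbor matching.
# Covariates and (nonrandom) group assignment are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 4))                                   # measured matching covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] - 0.5 * X[:, 1])))

# Propensity score = predicted probability of membership in the treatment group
propensity = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Match each treated case to the comparison case with the closest propensity score
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = {i: control_idx[np.argmin(np.abs(propensity[control_idx] - propensity[i]))]
           for i in treated_idx}
print(len(matches), "treated cases matched")
```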

Crano - the crises

- Researchers who accept Popper's critical rationalist views regarding the impossibility of proving a position—coupled with the ever-present possibility of disconfirmation—are prepared to live with the constraints imposed by this understanding. - Vygotsky argued that by adopting the classical methods of the natural sciences, psychology made a fundamental and possibly fatal error. The objects of study in the natural sciences, he argued, do not have the capacity for thought or volition - where the objects of investigation usually are thinking, living, human beings, who are not independent of the techniques used in their study, and whose later behavior is affected by the knowledge developed in such studies. As such, the social sciences are largely place-bound and stuck in the present, with knowledge likely to change as conditions change. - To be sure, the expansion of useful research visions is always desirable, so long as the methods that are part and parcel of these expansions are used appropriately and their limitations understood and addressed. The hegemony of experimental techniques in much of social psychology is apparent—and not necessarily beneficial. A one-size-fits-all orientation is not the stuff of which rapid progress is made. - Though experimental methods can be used to infer, if not to establish unambiguously, cause-and-effect relationships, the leap to causation may be entirely futile if the operations used to define the independent and dependent variables are flawed and do not accurately reflect the underlying constructs in question. Even more problematic is the possibility that the controls put in place in laboratory settings paradoxically may present salient social cues that elicit responses that are more a function of the research context than the experimental treatment. - In addition, experiments conducted in artificial laboratory environments often lack resemblance to real-world contexts. - To bridge the gap between laboratory and field, investigators have turned increasingly to naturalistic observational techniques, which may include field experimentation, to isolate intrapersonal and interpersonal behaviors and interactions in settings outside the laboratory. - These more naturalistic studies avoided the artificiality of the laboratory, and most often did not involve unethical treatment of research participants, though with characteristic ingenuity, this sometimes was accomplished as well. - Naturalistic experiments designed to capture real-world social behavior, while apparently holding a stronger claim to generalizability, lack many of the controls available in carefully controlled laboratory environments. - Perhaps the degree to which any study represents the ideal of the true experimental design should be viewed as falling along a continuum. - Integration of strong methods with interventions designed to enrich the lives of the citizenry. - A defining element of quasi-experimental designs is their lack of random assignment of research subjects to conditions. This limitation had been viewed as a fatal shortcoming for causal inference, but as has been argued (Cook, 2007; Reichardt, Trochim, & Cappelleri, 1995; Shadish & Cook, 2009; Trochim, 1984), even the inability to assign randomly does not inevitably rule out causal inference. It also should be understood that even RCTs (experiments) vary in the degree to which causation may be attributed unambiguously.
If experiments and quasi-experiments were aligned along a dimension reflecting the certainty with which causation may be inferred, there would be considerable overlap of various experimental and quasi-experimental techniques, insofar as some methods that clearly do not allow for randomized assignment nonetheless substantially control for threats to valid causal inference. - The use of quasi-experimental methodologies requires considerably greater creativity in offsetting rival alternative hypotheses. Rather than using the brute logic of the design to mitigate alternatives, quasi-experimentalists must draw on creativity and contextual knowledge in developing controls specific to each threat that might offer a reasonable rival explanation of study outcomes.

CBL Chapter 11 - Random Digit Dialing

- Random digit dialing was developed to overcome the frame deficiency of unlisted numbers in the telephone directory. - To overcome this limitation, survey researchers developed a number of ingenious solutions involving random digit dialing (RDD), some of which, unfortunately, have some rather major practical deficiencies. The simplest random digit dialing approach calls for the use of a random number generator to develop lists of telephone numbers. The ensuing numbers are the random sample of those to be called. The problem with this approach is that most of the numbers generated in this manner are not in use, are fax numbers, or are not assigned to residential dwellings. - An alternative scheme makes use of the list of all published numbers in combination with a randomization process. The option begins with the random selection of telephone numbers from the phone directory. Then, the last two digits of each of the chosen numbers are deleted and replaced with randomly generated digits. - The reason for the move to phone interviewing is obvious—phone surveys can result in substantial savings of interviewer time and research funds. - Arguably, a person speaking anonymously over the phone might be more willing to give an honest answer to this question (assuming the answer was more than never) than a respondent facing his or her questioner across the kitchen table. - Surveys making use of the phone, even random digit dialing surveys, systematically under-sample the poor, the undereducated, the disenfranchised, and people of modest educational accomplishment. On some issues, this underrepresentation may produce biased results, whereas on others, these sociodemographic variations may be inconsequential. - Considering the characteristics of those who screen calls raises concerns. Oldendick and Link (1994) found that wealthy, white, educated, young city dwellers were most likely to screen calls. This consistency in demographic characteristics requires that researchers remain vigilant on this issue. Call screening is becoming increasingly frequent (Kempf & Remington, 2007), and if it remains consistently associated with demographic characteristics, survey researchers will face a difficult problem. - The development of computer-assisted self-administered interviews (CASAI) has made possible a new forum for survey research, namely, web-based and e-mail surveys. - Not surprisingly, the samples generated by such methods are highly self-selected and probably not typical of the population at large. The use of sites at which potential participants may learn of research opportunities (e.g., Amazon's Mechanical Turk, or the Social Psychology Network) offers even more opportunities for survey researchers, and findings suggest that data collected on these sites are comparable to those obtained in face-to-face research. - To implement probability-based random sampling for Internet surveys, some research organizations recruit panels of respondents by contacting individuals through standard RDD telephone survey sampling techniques. When potential respondents have been reached by telephone, they are invited to become part of a survey research panel and are provided with the necessary equipment to complete surveys online in exchange for their participation. For example, Knowledge Networks is one organization that has recruited a panel of nearly 100,000 potential respondents in this way and provides panel participants with Internet access via Web TVs.
Once individuals are in the panel, they constitute a sampling frame for specific surveys. Random samples of the panel are drawn and are sent electronic messages instructing them to complete a survey on a specified site. This sampling method is initially costly in terms of recruitment and provision of hardware and Internet access, but it produces samples that are comparable in representativeness to those obtained by standard RDD telephone survey methods. - Missing completely at random (MCAR) occurs if the probability of missing responses in a variable is not explained by (or is unrelated to) any other variable. - No rhyme or reason exists for why a question was skipped: Every person in the sample, that is, has an equal probability of responding or not responding to that question. - To fully test MCAR requires that we examine the variable with missing values against all other items to ensure independence. - Missing at random (MAR) occurs if the probability of missingness in a variable is not random, but its missingness may be fully explained by the other measured variables in the dataset. - Technically, the "random" refers to the notion that distributions of other variables could be used to probabilistically infer the missing values. - Not missing at random (NMAR) occurs if a variable is not missing at random and its missingness is not explainable by the other measured variables in the dataset (but is explicable by other unmeasured variables). This is the least desirable form of nonresponse in the data, as it is impossible for missing values to be statistically deduced from other measured items in the dataset. NMAR implies that an unknown or unidentified event has caused the missing values.
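
The three missingness mechanisms can be made concrete with a small simulation. This is a hedged sketch; the age and income variables, and the skip probabilities, are invented for illustration:

```python
# Hedged sketch: simulating MCAR, MAR, and NMAR nonresponse on an income item.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
n = 1000
age = rng.integers(18, 80, n)
income = 20000 + 800 * age + rng.normal(0, 10000, n)
df = pd.DataFrame({"age": age, "income": income})

# MCAR: every respondent has the same probability of skipping the item
mcar = df["income"].mask(rng.random(n) < 0.20)

# MAR: the probability of skipping depends on another measured variable (age)
mar = df["income"].mask(rng.random(n) < np.where(df["age"] > 60, 0.40, 0.05))

# NMAR: the probability of skipping depends on the unobserved income value itself
nmar = df["income"].mask(rng.random(n) < np.where(df["income"] > 70000, 0.40, 0.05))

print(mcar.isna().mean(), mar.isna().mean(), nmar.isna().mean())
```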

CBL Chapter 16 - Priming - 2

- Regardless of whether supraliminal or subliminal priming techniques are used, it is important that the researcher determine that participants were truly unaware of the priming manipulation. Awareness matters, because if participants consciously recognize that there is a relationship between the presentation of the prime and the subsequent judgment task, they are likely to intentionally correct for the potential influences of the prime before making their responses in an attempt to appear unbiased. - With supraliminal priming, the issue is whether participants become aware of the researcher's intent to activate certain constructs in the first task that may affect their judgments or behavior in the second task. To avoid awareness, it is important to camouflage the relation between the priming and judgment phases of the experiment as much as possible, including moving to different rooms or having different experimenters give instructions. - This may be accomplished through the use of a "funneled debriefing" (e.g., see Chartrand & Bargh, 1996), a sequence of questions designed to elicit any suspicions or inferences that the participant may have made about the purpose of the experiment or the relationship between the priming task and the judgment task. - Another variation on priming techniques is used to assess automatic associations between mental concepts. The idea behind sequential priming is that if one stimulus has become associated with some other concept, feeling, or behavior, then presentation of that stimulus will automatically activate (prime) those associations. In that case, if the prime and the association are presented sequentially, responding to the second (target) stimulus will be facilitated because of the prior preparation produced by the prime. - The basic structure of the sequential priming paradigm is as follows. On each trial, the prime stimulus (a word or a picture) is presented for a short duration (e.g., 150 milliseconds), then erased, and after a brief delay the target stimulus (outcome task) is presented and the participant makes a judgment about the target by pressing a key to indicate his or her response. The outcome measure of interest is the speed of reaction time to make a judgment of the target stimulus. If the target is connected with the prime in a person's memory, then responding to that target should be facilitated when the prime has been presented just before. - Thus, if the response made by a participant is faster when the target (e.g., tangerine) is preceded by a relevant prime (e.g., orange) than when the same target is judged in the presence of an irrelevant prime (e.g., carpet), this indicates that the two concepts (orange and tangerine) are automatically associated in a person's memory. - In addition, the potential effects of awareness are minimized by limiting the amount of time for making a response. - In one version of the sequential priming technique, the target stimulus is a string of letters, and the judgment that the participant is to make is to indicate as quickly as possible whether the string is an actual word or not. If the target is a word, this judgment is made more quickly when it is preceded by presentation of a prime concerning a related word or concept. For instance, when the prime is an overarching category label (e.g., "furniture"), target words representing members of that category (e.g., "chair") are recognized faster than target words that do not belong to the category (e.g., "bird") (Neely, 1977).
Based on this principle, the lexical decision task can be used to assess automatic activation of stereotypic traits when the prime is a social category label or picture.
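
Scoring a sequential priming (e.g., lexical decision) task comes down to comparing mean reaction times for targets preceded by related versus unrelated primes. A minimal sketch with invented trial data:

```python
# Hedged sketch: computing the facilitation effect in a sequential priming task.
# Reaction times (in milliseconds) are invented for illustration.
import statistics

trials = [
    {"prime": "orange",    "target": "tangerine", "related": True,  "rt_ms": 512},
    {"prime": "carpet",    "target": "tangerine", "related": False, "rt_ms": 586},
    {"prime": "furniture", "target": "chair",     "related": True,  "rt_ms": 498},
    {"prime": "furniture", "target": "bird",      "related": False, "rt_ms": 571},
]

related_rt = statistics.mean(t["rt_ms"] for t in trials if t["related"])
unrelated_rt = statistics.mean(t["rt_ms"] for t in trials if not t["related"])

# Faster responses after related primes indicate automatic association in memory
print(f"facilitation effect: {unrelated_rt - related_rt:.0f} ms")
```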

CBL Chapter 16 - Priming - 1

- Specifically, priming techniques are designed to assess automatic cognitive and affective processes that occur without awareness or intent. - The concept of priming was introduced in Chapter 6 and refers to the unintended influence that recent or recurrent experiences have on subsequent thoughts, feelings, and behavior. - The idea underlying priming techniques is that exposure to a priming stimulus creates a state of mental readiness or preparedness for perceiving and interpreting subsequent information. Priming effects reflect implicit memory processes that function independently of what can be consciously retrieved from memory. - Because these processes occur automatically and without awareness, the priming effect has come to be utilized to tap implicit cognition and affect. - Priming studies usually involve two phases: (a) A priming task involving participant exposure to a stimulus (independent variable), followed by (b) an outcome judgment task (dependent variable) to assess the influence of the prime on subsequent judgments. - With such "supraliminal" priming, the participant is made fully aware of the priming stimuli, but not of the underlying concept that the stimuli are intended to make accessible. - Thus, supraliminal priming effects demonstrate how biases in person perception and decision-making can be invoked without participant awareness. - "Subliminal" exposure is achieved by presenting the prime (usually a word or a picture) very briefly for a fraction of a second, and then immediately masking this stimulus trace with a supraliminally presented neutral or nonsense stimulus. - The key to subliminal priming is determining a time or duration of exposure of the priming stimulus that is too short to be consciously recognized. Usually, the stimulus is projected by a tachistoscope (a device developed by perception researchers to project stimuli at very brief exposures) or on a computer screen, with the participant gazing at a fixation point (e.g., an asterisk) at the center of the screen. - With foveal processing, the priming stimulus is presented at the fixation point (within 0-2 degrees of visual angle from the focal point of attention), a location at the center of the person's field of vision. With parafoveal processing, the prime is presented in the periphery or fringe of the visual field, at 3-6 degrees of visual angle from the focal point. Foveal presentation requires extremely short exposure time (on the order of 15 milliseconds) to be subliminal. Because parafoveal presentation is outside of the region of the focal point of attention, it allows for a somewhat longer duration (e.g., 60-120 milliseconds). However, it is somewhat more difficult to implement parafoveal presentations because the researcher has to ensure that the participant's overall field of view includes the field in which the peripheral stimulus is presented. - A masking stimulus is a pattern with the same physical features as the prime. So, for example, if the priming stimulus is the word "PREGNANT," the subsequent masking pattern would be a meaningless string of letters ("XQFBZRMQ") that covers the location where the prime was presented. - Thus, the immediate effects of an activated concept on subsequent judgments or evaluations can be assessed. Subliminal priming has proved to be particularly useful for assessing the biasing effects of social stereotypes when perceivers are unaware that the stereotype has been subconsciously activated.
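
The subliminal trial structure described above (fixation, very brief prime, pattern mask, then the judgment task) can be sketched schematically. The present() helper below is a stand-in I am assuming for real stimulus-presentation software; genuine subliminal presentation requires frame-accurate display timing that this sketch does not provide:

```python
# Hedged schematic of a subliminal priming trial; present() is a placeholder, not real display code.
import time

def present(stimulus, duration_ms):
    """Placeholder for stimulus presentation; real studies need frame-locked timing."""
    print(f"showing {stimulus!r} for {duration_ms} ms")
    time.sleep(duration_ms / 1000)

def subliminal_trial(prime, foveal=True):
    present("*", 500)                         # fixation point at the center of the screen
    present(prime, 15 if foveal else 90)      # ~15 ms foveal; ~60-120 ms parafoveal
    present("XQFBZRMQ", 100)                  # pattern mask covering the prime's location
    present("judgment task", 2000)            # outcome task follows immediately

subliminal_trial("PREGNANT")
```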

CBL Chapter 16 - Measures of memory - 1

- The extent to which individuals can remember information or experiences provides significant clues regarding which information has been encoded and how it is stored in memory. - Memory of stored information may be accessed in either of two ways: Recall or recognition. In a recall measure, the participant reports what he or she remembers about information that was previously presented. - Bypassing the search and retrieval stage, in a recognition measure the participant simply decides whether or not the material currently shown matches his or her memory of what was previously presented. - The process of recall is considered more cognitively effortful and slower than the process of recognition, as recognition requires only familiarity with the previously exposed material. - The typical paradigm for recall memory experiments involves an initial presentation of stimulus information, which may be anything from a list of words or a series of pictures to a written description, story, or videotaped event. - After the presentation, some time is allowed to lapse, and then the participant is asked to report what they can remember of what was presented earlier.2 With free recall tasks, participants are given no further instructions about what to search for in memory and are free to list anything that they think was presented earlier. - With cued recall, the participant is specifically instructed as to what type of information to retrieve. - The volume or quantity of memory (i.e., the number of different items listed) is sometimes used as a measure of the degree of attention and mental processing of the material presented earlier. A sparse listing of details suggests relatively little active processing of the stimulus materials, and greater volume suggests more processing. For this purpose, it does not necessarily matter whether the "recalled" information was actually presented or reflects the perceiver's own internal cognitions generated during the presentation stage. - For this reason, the sheer quantity of recall, without consideration of recall accuracy, is an imprecise measure of encoding that occurred during the original presentation, because we cannot know whether respondents are recording thoughts that they had at the time of the presentation or thoughts that were generated later, during the recall task itself. - More often researchers are interested in the degree of accurate recall represented in the memory protocol, i.e., whether or not the items participants listed match information actually presented at the earlier time. For this purpose, the recall lists must be evaluated and each item scored as correct or incorrect.3 The final measure may then be the number of items correctly recalled (overall accuracy of recall), or the researcher may be interested in which items were more likely to be accurately recalled. - Research on this topic has shown that people tend to have superior recall for items near the beginning (primacy effect) and end (recency effect) of a list of previously presented items. - Finally, the researcher may be interested not only in the items correctly recalled, but also in the content of errors as represented by the items in the recall list that do not match information actually presented. Such incorrect items are referred to as memory "intrusions" and provide clues as to how the original material was encoded and interpreted before being stored in memory. - Recognition measures usually are more appropriate for the study of memory errors than are recall measures.
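
Scoring a recall protocol for volume, accuracy, and intrusions is straightforward to express in code. A minimal sketch with invented presented and recalled lists:

```python
# Hedged sketch: scoring a free recall protocol for quantity, accuracy, and intrusions.
# Presented and recalled items are invented for illustration.
presented = ["apple", "chair", "doctor", "river", "candle", "tiger"]
recalled = ["apple", "tiger", "candle", "nurse"]          # "nurse" was never presented

volume = len(recalled)                                    # sheer quantity of recall
correct = [item for item in recalled if item in presented]
intrusions = [item for item in recalled if item not in presented]
accuracy = len(correct) / len(presented)                  # proportion of presented items recalled

print(volume, correct, intrusions, round(accuracy, 2))
# 4 ['apple', 'tiger', 'candle'] ['nurse'] 0.5
```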

Lac - effect sizes and summary effects

- The fundamental unit of measurement in a meta-analysis is an effect size, an index to indicate relationship strength, usually between two variables, in a primary study. - The probability of attaining significance in a statistical test increases as the sample size increases. Consequently, it is impractical to compare the p-values of primary studies, especially if the investigations are based on disparate sample sizes. - Meta-analysis is focused on effect sizes to determine practical significance, or the degree that the observed result is sufficiently large to be meaningful and useful. In a nonexperimental study, an effect size is computed for the relationship between predictor and criterion variables, whereas an experimental study entails the relationship between independent and dependent variables. One advantage of converting a test statistic to an effect size is that its strength is not influenced by the size of the sample, so the effects of studies with dissimilar sample sizes are contrastable. Another advantage is that effect sizes are in standardized units, even if metrics of measurement were originally scaled on different units. - This allows the meta-analyst to mathematically synthesize effect sizes across primary studies to yield a summary or overall effect. - Cohen's d should be the preferred metric to indicate the strength of a posited relationship if one variable is binary and the other is quantitative. - The odds ratio is most meaningfully interpreted if both variables are measured with binary values in a hypothesis of interest. - The Pearson correlation, typically interpreted as a magnitude of relationship for a nonexperimental study, is calculated if both variables are quantitative. - Considering that these three correlation subtypes could be pooled to obtain a summary effect, the metric is flexible enough to synthesize results of primary studies, regardless of whether variables were measured quantitatively or dichotomously. - A more focused meta-analysis might consider pooling drug intervention studies targeted toward a particular ethnic group, if a sufficient number of such studies were available in the literature. - It may be possible to isolate potential culprits responsible for an outcome, especially if the observed relationship is consistently observed across investigations. - A meta-analysis incorporating strictly experimental studies has the power to make causal inferences concerning a theoretical relationship across research contexts. - As underscored in these examples, effect sizes culled from primary studies are combined to obtain a summary effect. An unweighted procedure involving the mathematical average of effect sizes should be avoided, as it ignores sample sizes across studies. A primary study with a large sample size produces a more stable estimate than one containing a small sample. For this reason, a weighted procedure, such as a fixed-effect or random-effects model, should be undertaken to mathematically combine effect sizes and estimate the summary effect. - Generally, the recommended strategy is to conduct a meta-analysis with a random-effects model, as its statistical assumptions are more likely to reflect reality. - In practice, however, many meta-analyses continue to use a fixed-effect model, with possible reasons being that it is less mathematically complex and has a greater probability of producing a statistically significant summary effect.
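
A minimal sketch of the weighted pooling step, using an inverse-variance (fixed-effect) combination of Cohen's d values; the study results are invented and the variance formula is the standard large-sample approximation for d:

```python
# Hedged sketch: inverse-variance weighted (fixed-effect) summary of Cohen's d values.
# Effect sizes and group sizes are invented for illustration.
studies = [          # (d, n1, n2) for each primary study
    (0.45, 30, 30),
    (0.20, 120, 115),
    (0.60, 25, 28),
]

weights, weighted = [], []
for d, n1, n2 in studies:
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))   # approximate variance of d
    w = 1 / var_d                      # larger samples -> smaller variance -> larger weight
    weights.append(w)
    weighted.append(w * d)

summary_d = sum(weighted) / sum(weights)
print(round(summary_d, 3))             # the weighted summary effect
```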

CBL Chapter 9 - Nonexperimental research - Multiple Mediation

- The observant reader will notice that two different types of arrows (single-headed versus double-headed) connect the variables in Figure 9.10. Most common in this diagram are the single-headed arrows, for example, those linking social competence with assumed reciprocity, and assumed reciprocity with attraction. Connections of this type imply directional hypotheses; thus, social competence is hypothesized to be a determinant of assumed reciprocity (though perhaps only one of many), which in turn is thought to be a determinant of attraction. - Here, a double-headed arrow connects the variables. Relationships indicated by connections of this type indicate that a non-directional or correlational relationship has been hypothesized between the variables.

Warner II - Chapter 4 - Regression Analysis and Statistical Control - 3

- The most conservative strategy is not to give either X1 or X2 credit for explaining the variance that corresponds to Area c in Figure 4.3. Areas a, b, c, and d in Figure 4.3 correspond to proportions of the total variance of Y, the outcome variable, as given in the table below the overlapping circles diagram. - In words, then, we can divide the total variance of scores on the Y outcome variable into four components when we have two predictors: the proportion of variance in Y that is uniquely predictable from X1 (Area a, sr²1), the proportion of variance in Y that is uniquely predictable from X2 (Area b, sr²2), - the proportion of variance in Y that could be predicted from either X1 or X2 (Area c, obtained by subtraction), and the proportion of variance in Y that cannot be predicted from either X1 or X2 (Area d, 1 − R²Y.12). - If X1 and X2 are uncorrelated with each other, then there is no overlap between the circles that correspond to the X1 and X2 variables in this diagram and Area c is 0. However, in most applications of multiple regression, X1 and X2 are correlated with each other to some degree; this is represented by an overlap between the circles that represent the variances of X1 and X2. - When some types of suppression are present, the value obtained for Area c by taking 1.0 − Area a − Area b − Area d can actually be a negative value; in such situations, the overlapping circle diagram may not be the most useful way to think about variance partitioning. - Requiring a significant F for the overall regression before testing the significance of individual predictor variables used to be recommended as a way to limit the increased risk for Type I error that arises when many predictors are assessed. - It is also possible to ask whether X1 is more strongly predictive of Y than X2 (by comparing β1 and β2). However, comparisons between regression coefficients must be interpreted very cautiously; factors that artifactually influence the magnitude of correlations can also artifactually increase or decrease the magnitude of slopes. - The numerators for partial r (pr), semipartial r (sr), and beta (β) are identical. The denominators differ slightly because they are scaled to be interpreted in slightly different ways (squared partial r as a proportion of variance in Y when X2 has been partialled out of Y; squared semipartial r as a proportion of the total variance of Y; and beta as a partial slope, the number of standard deviation units of change in Y for a 1-SD change in X1). - It should be obvious from looking at the formulas that sr, pr, and β tend to be similar in magnitude and must have the same sign. - Partial correlation to predict Y from X1, controlling for X2 (removing X2 completely from both X1 and Y): pr1 = (rY1 − rY2r12) / √[(1 − r²Y2)(1 − r²12)]. - Semipartial (or part) correlation to predict Y from X1, controlling for X2 (removing X2 only from X1, as explained in this chapter): sr1 = (rY1 − rY2r12) / √(1 − r²12). - If any one of these four statistics exactly equals 0, then the other three also equal 0, and all these statistics must have the same sign. They are scaled or sized slightly differently so that they can be used in different situations (to make predictions from raw vs. standard scores and to estimate the proportion of variance accounted for relative to the total variance in Y or only the variance in Y that isn't related to X2). - The difference among the four statistics above is subtle: β1 is a partial slope (how much change in zY is predicted for a 1-SD change in zX1 if zX2 is held constant).
The partial r describes how X1 and Y are related if X2 is removed from both variables. The semipartial r describes how X1 and Y are related if X2 is removed only from X1. In the context of multiple regression, the squared semipartial r (sr²) provides the most convenient way to estimate effect size and variance partitioning. In some research situations, analysts prefer to report the b (raw-score slope) coefficients as indexes of the strength of the relationship among variables.
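
The shared numerator and differing denominators of β, partial r, and semipartial r can be verified numerically from the three bivariate correlations. A minimal sketch with invented correlation values:

```python
# Hedged sketch: beta, partial r, and semipartial r for X1 predicting Y, controlling for X2.
# The three bivariate correlations are invented for illustration.
import math

r_y1, r_y2, r_12 = 0.50, 0.40, 0.30        # correlations among Y, X1, and X2

numerator = r_y1 - r_y2 * r_12              # identical numerator for all three statistics

beta_1 = numerator / (1 - r_12 ** 2)                                   # partial slope in z-score units
partial_r = numerator / math.sqrt((1 - r_y2 ** 2) * (1 - r_12 ** 2))   # X2 removed from both X1 and Y
semipartial_r = numerator / math.sqrt(1 - r_12 ** 2)                   # X2 removed from X1 only

# All three share the same sign; sr**2 is the uniquely explained proportion of Y's variance
print(round(beta_1, 3), round(partial_r, 3), round(semipartial_r, 3))
```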

CBL Chapter 19 - Meta-analysis - 3

- Unpublished research is important to include in meta-analyses, because we know that scientific journals are biased toward publishing primary studies showing significant results. Indeed, it is rare for a journal to publish a study containing results that are statistically nonsignificant. - Publication bias occurs when primary studies are not published for some systematic reason, usually because results were not found to be statistically significant. This is known as the file drawer problem (Bradley & Gupta, 1997; Rosenthal, 1979, 1991), which is the tendency for unpublished studies to be tucked away in file drawers and therefore ignored in a meta-analysis, resulting in a synthesis not representative of the research that has been conducted. - However, Rosenthal (1979, 1991) suggested a resolution to the problem that allows us to estimate the extent to which it may be an issue in a particular meta-analysis. This approach involves calculating the number of nonsignificant studies with null results (i.e., effect size of zero) that would have to exist "in the file drawers" before the statistically significant overall effect obtained from the meta-analysis of included studies would no longer be significant at the p < .05 level. The size of this number of unpublished studies helps us evaluate the seriousness of the threat to conclusions drawn from the meta-analysis. - Convert the test statistics reported in each primary study to the same effect size metric. An effect size serves as a standardized metric of practical significance, or magnitude of relationship between two variables, and is the unit of analysis in a meta-analysis. - To synthesize a selection of literature, we need to convert the statistical results of primary studies to the same standardized metric. - Three major families of metrics are commonly used to represent effect size (Borenstein, Hedges, Higgins, & Rothstein, 2009; Cooper, 2009). Cohen's d, or the standardized difference between two means, should be used as an indicator of effect size if the relation involves a binary variable and a quantitative variable. - If the means of the two groups are about the same, the formula will yield a value of Cohen's d close to 0 (Figure 19.1b). If the results reveal that the control group (1) scored better than the experimental group (2) (opposite to that hypothesized), Cohen's d will be a negative value (Figure 19.1c). Thus, for our meta-analysis of attraction to liking, a Cohen's d is calculated for each of the primary studies to signify the difference in substantive effect between the attractive and unattractive conditions on liking scores. - The odds ratio is most meaningfully interpreted as an effect size metric for studies that examine the relationship between two binary variables. - For instance, it could serve as an indicator of practical significance in a non-experimental study comparing males and females on the outcome variable of whether they are marijuana users or not. - The correlation is the most meaningful effect size metric for studies that examine a relationship in which both variables are quantitative. Used in non-experimental designs, the correlation is not only a type of statistical test, but it also serves as an indicator of effect size.
- (a) Pearson correlation, for assessing the relationship involving two quantitative variables, (b) Point-biserial correlation, for assessing the relationship involving a binary variable and a quantitative variable, and (c) phi, for assessing the relationship involving two binary variables - These indices indicate the strength of a relationship, independent of the sample size of each study, and therefore allow comparison and aggregation of studies that might have used varying numbers of participants.
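Because different primary studies report different statistics, each result must be converted to a common effect size metric before pooling. A minimal Python sketch of one standard conversion, Cohen's d to r (this simple formula assumes roughly equal group sizes; the d value of .41 is hypothetical and chosen only to show it corresponds to roughly r = .20):

import math

def d_to_r(d):
    # convert a standardized mean difference to a correlation (equal-n approximation)
    return d / math.sqrt(d**2 + 4)

print(round(d_to_r(0.41), 2))  # approximately .20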

Regression: Macro-level vs. Micro-level

- Advanced techniques consist of analyses at the macro-level and micro-level
Macro-level (overall) - always inspect first - if not stat. sig., stop - if stat. sig. (p < .05), proceed to micro-level testing
Micro-level (specific) follow-up - to determine if each predictor is stat. sig. - e.g., if three groups: if the overall F test is sig., permission to perform post hoc tests between groups 1 and 2, 2 and 3, and 1 and 3
APA ***In the regression model, the predictor explained 69% of the total variance in the outcome, F(1, 8) = 18.09, p < .05. Specifically, greater numbers of hours studying predicted higher exam scores, B = 5.25, beta = .83, p < .05.
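A minimal Python sketch (made-up hours/exam data, so the values will not reproduce the slide's F or B exactly) of the macro-then-micro sequence using statsmodels:

import numpy as np
import statsmodels.api as sm

hours = np.array([1, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
exam = np.array([55, 62, 58, 70, 72, 80, 85, 88, 95, 97], dtype=float)

model = sm.OLS(exam, sm.add_constant(hours)).fit()
print(model.fvalue, model.f_pvalue, model.rsquared)   # macro-level: overall model test
if model.f_pvalue < .05:                              # only if significant, proceed to micro-level
    print(model.params, model.pvalues)                # per-predictor B values and p values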

Why are indices of the predictor-to-outcome relation important

- In the regression literature, any of these 3 types of indices of the unique effect of the predictor on the outcome may be reported (they vary in terms of how "unique" is defined) - semipartial r, beta, partial r (what is meant by the unique contribution of each predictor) - most common to report beta, as these represent unique slope effects, while controlling for other predictors, in standardized z-score units

Baron and Kenny: Mediator

- In general, a given variable may be said to function as a mediator to the extent that it accounts for the relation between the predictor and the criterion. Mediators explain how external physical events take on internal psychological significance. Whereas moderator variables specify when certain effects will hold, mediators speak to how or why such effects occur.
- To clarify the meaning of mediation, we now introduce a path diagram as a model for depicting a causal chain.
- Two causal paths feed into the outcome variable: the direct impact of the independent variable (Path c) and the impact of the mediator (Path b). There is also a path from the independent variable to the mediator (Path a).
- A variable functions as a mediator when it meets the following conditions: (a) variations in levels of the independent variable significantly account for variations in the presumed mediator (i.e., Path a), (b) variations in the mediator significantly account for variations in the dependent variable (i.e., Path b), and (c) when Paths a and b are controlled, a previously significant relation between the independent and dependent variables is no longer significant, with the strongest demonstration of mediation occurring when Path c is zero.
- If the residual Path c is not zero, this indicates the operation of multiple mediating factors.
- From a theoretical perspective, a significant reduction demonstrates that a given mediator is indeed potent, albeit not both a necessary and a sufficient condition for an effect to occur.
- Rather, as recommended by Judd and Kenny (1981b), a series of regression models should be estimated. To test for mediation, one should estimate the three following regression equations: first, regressing the mediator on the independent variable; second, regressing the dependent variable on the independent variable; and third, regressing the dependent variable on both the independent variable and on the mediator. Separate coefficients for each equation should be estimated and tested. There is no need for hierarchical or stepwise regression or the computation of any partial or semipartial correlations.
- To establish mediation, the following conditions must hold: First, the independent variable must affect the mediator in the first equation; second, the independent variable must be shown to affect the dependent variable in the second equation; and third, the mediator must affect the dependent variable in the third equation. If these conditions all hold in the predicted direction, then the effect of the independent variable on the dependent variable must be less in the third equation than in the second. Perfect mediation holds if the independent variable has no effect when the mediator is controlled.
- Because the independent variable is assumed to cause the mediator, these two variables should be correlated. The presence of such a correlation results in multicollinearity when the effects of the independent variable and mediator on the dependent variable are estimated.
- It is then critical that the investigator examine not only the significance of the coefficients but also their absolute size.
- The use of multiple regression to estimate a mediational model requires the two following assumptions: that there be no measurement error in the mediator and that the dependent variable not cause the mediator.
- The mediator, because it is often an internal, psychological variable, is likely to be measured with error.
The presence of measurement error in the mediator tends to produce an underestimate of the effect of the mediator and an overestimate of the effect of the independent variable on the dependent variable.
- Additionally, measurement error in the mediator is likely to result in an overestimate of the effect of the independent variable on the dependent variable.
- Because a successful mediator is caused by the independent variable and causes the dependent variable, successful mediators measured with error are most subject to this overestimation bias.
- The common approach to unreliability is to have multiple operations or indicators of the construct. Such an approach requires two or more operationalizations or indicators of each construct.
- The major advantages of structural modeling techniques are the following: First, although these techniques were developed for the analysis of nonexperimental data (e.g., field-correlational studies), the experimental context actually strengthens the use of the techniques. Second, all the relevant paths are directly tested and none are omitted as in ANOVA. Third, complications of measurement error, correlated measurement error, and even feedback are incorporated directly into the model.
- We now turn our attention to the second source of bias in the mediational chain: feedback. The use of multiple regression analysis presumes that the mediator is not caused by the dependent variable. It may be possible that we are mistaken about which variable is the mediator and which is the dependent variable.
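A minimal Python sketch (simulated data) of the three regression equations described above, estimated with statsmodels; x, m, and y stand in for the independent variable, mediator, and dependent variable:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=300)                        # independent variable
m = 0.6 * x + rng.normal(size=300)              # mediator
y = 0.5 * m + 0.1 * x + rng.normal(size=300)    # dependent variable

eq1 = sm.OLS(m, sm.add_constant(x)).fit()                          # mediator on IV (path a)
eq2 = sm.OLS(y, sm.add_constant(x)).fit()                          # DV on IV (path c)
eq3 = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()    # DV on IV and mediator (paths c' and b)
print(eq1.params[1], eq2.params[1], eq3.params[1], eq3.params[2])  # a, c, c', b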

Crano - contemporary experimentation

- Requires random assignment of units (subjects) based on chance procedures - This insistence on randomized assignment developed to help ensure that participants across groups were "equivalent" at the onset of the study, consequently justifying causal statements linking manipulation (independent variable) to outcome (dependent variable).
- The importance of random assignment of participants to conditions in social psychological research is difficult to overestimate. It facilitates disentangling between-group differences attributable to the experimental treatment from those that might have occurred as a result of factors that arrived with the respondents, and over which the researcher had no control.
- Randomization to conditions is necessary in the "true" experiment (now generally known as randomized controlled trials—RCT), but early studies were hampered by the inability to compare more than two outcomes (treatment vs. control group) at a time (Campbell & Stanley, 1963). These two-group statistical comparisons were enabled by Student's t test, invented by William Gosset, who developed the procedure while working for the Guinness Brewery. Owing to commercial secrecy concerns, he published it under the pseudonym "Student."
- Fisher developed the analysis of variance, which enormously expanded researchers' capacity to study the complex interplay of manipulated and measured variables. No longer were scientists limited to comparison of only two groups in their experiments, a limitation imposed by the restrictions of Student's t.
- These designs allowed the researcher, in the same study, to examine the direct and interacting effects of multiple independent variables on an outcome measure. Fisher's analytic model accelerated progress in social psychology exponentially. His innovation allowed for discovery of interactions among independently manipulated variables, and made it possible to investigate mediation and moderation.
- An unintended failing of randomization (or, better, of randomizers) can be realized owing to a misunderstanding of the limits of the procedure. Randomization is effected in the attempt to equalize groups. That it succeeds in this equalization is largely an article of faith for social scientists, for even with very large samples it is not reasonable to assume that randomization matches participants between conditions on every conceivable psychological, physical, and social characteristic that might affect an experimental outcome.
- In randomization, size matters. It is for this reason that Crano and Brewer (2002) insisted on the "law of large numbers" in randomization. To believe in the therapeutic effects of randomization without a concomitant acknowledgement that the process cannot reasonably be expected to work as hoped without large numbers of units (or subjects) assigned to conditions is akin to a child's belief in magic.
- Arguing that the presence of statistically significant effects with small numbers is persuasive proof of the strength of a treatment's effect betrays a serious misunderstanding of the function of randomization, which is the pretreatment equalization of comparison groups.
- It is essential that experimenters recognize that factors not related to the treatment must be ruled out as explanations of their findings in such a way that they cannot threaten interpretation.
- They termed these factors threats to internal validity.
The usual, and most familiar, of the true experimental designs (or RCTs) is the pretest-posttest control group design (randomization to conditions is understood), which effectively controls for a host of factors other than the experimental manipulation.
- These extraneous alternatives, or threats to proper inference, include historical events that occur between pretest and posttest, maturational differences that might obtain between groups, testing artifacts, subject mortality/drop-out, statistical regression, and so on. Campbell and Stanley observed that when properly conducted, the pretest/posttest control group design can offset these problems. In a later discussion, Shadish et al. (2002) appended a series of additional threats.
- An interesting addition to the "standard" experimental design, the posttest-only control group design, dispenses with the pretest altogether. Assuming that random assignment operates as designed, a pretest is not, strictly speaking, necessary. In addition to the obvious economic gains, giving up the pretest can enhance the generalizability (external validity in Campbell & Stanley's (1963) terms) of experimental results, as pretesting at times can unintentionally provide clues to the study's hypothesis and purpose, thereby potentially contaminating the effect of the manipulation and influencing posttest responses.
- Popper's critical rationalism rejected induction's methods of proof, just as it dismissed the possibility of certainty in establishing the validity of any theory. Popper held that a theory's validity could never be confirmed unambiguously, but it could be disconfirmed by a single inconsistent result.
- Theories inevitably were time- and culture-bound, subject to change as conditions changed. Establishing enduring truths was not in the job description of the working scientist.

Testing mediation in regression

1) peer norms is sig. related to drinking days, not controlling for the mediator (path c) 2) peer norms is sig. related to alcohol attitudes (path a) 3) alcohol attitudes is sig. related to drinking days after controlling for peer norms (path b; the controlled peer norms to drinking path is c') 4) given the first 3 steps are satisfied, compare the c vs. c' paths - full mediation determined - the path coefficient (beta) from peer norms to drinking (after controlling for alcohol attitudes) is reduced to near 0 and no longer sig.

Testing mediation (Baron & Kenny)

1) predictor is sig. (path c) related to outcome (not controlling for mediator) 2) predictor is sig. (path a) related to mediator 3) mediator is sig. (path b) related to outcome, after controlling for predictor (path c') 4) if steps 1 to 3 are satisfied (sig.), compare paths c vs. c' - partial mediation: path c' (vs c) coefficient is reduced and still sig. - full mediation: path c' (vs. c) coefficient is reduced to near 0 and no longer sig.

Regression: Each predictor stat. sig?

B: a predictor's unstandardized slope (direction) Beta: a predictor's standardized slope (direction and effect size)
Predictor's B (and beta) stat. sig.?
H0: B = 0 (beta = 0) - no relation between predictor and outcome
H1: B ≠ 0 (beta ≠ 0) - indicating the slope/coefficient is significantly predictive of the outcome
SE (standard error): magnitude of chance - the predictor's standard (sampling) error - based on the null "chance sampling distribution" of B values
t = B/SE = 5.25/1.23 ≈ 4.27, p < .05 - B = predictor's signal - SE = predictor's noise (chance)
***The predictor was significantly related to the outcome, B = 5.25, beta = .83, p < .05.
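A minimal Python sketch of the slope t test using the lecture's B = 5.25 and SE = 1.23; the df of 8 is assumed here from the earlier F(1, 8) example:

from scipy import stats

B, SE, df = 5.25, 1.23, 8
t = B / SE                       # signal over noise
p = 2 * stats.t.sf(abs(t), df)   # two-tailed p value
print(round(t, 2), round(p, 4))  # t ≈ 4.27, p < .05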

Why use multiple predictors in regression model

Because many predictors are usually responsible for explaining any outcome/DV

Mediational Analyses

Criticism of Baron and Kenny - testing all 4 steps is not required - most important is step 4. A sig. "mediation test" (e.g., Sobel test) of c vs. c' is sufficient to demonstrate mediation
- Sobel test: assumes that the mediation effect (indirect effect) exhibits a normal sampling distribution, but statistical simulation studies show that indirect effects are nonnormal - thus, bootstrapping the indirect effect means the normality assumption is no longer needed and yields adequate stat. power
- challenging (but possible) to test complex mediational models - a model could have multiple predictors, multiple mediators, and multiple outcomes
All these criticisms are resolved in - mediation using multiple regression: PROCESS (Hayes) add-on for SPSS - mediation using structural equation modeling (SEM) - superior to the PROCESS approach
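A minimal Python sketch (simulated data) of the bootstrapping idea: re-estimate the indirect effect a*b in many resamples and judge significance from the percentile confidence interval, so no normality assumption is needed. This is a bare-bones illustration, not the PROCESS macro itself:

import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                            # path a: M regressed on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]      # path b: Y regressed on X and M
    return a * b

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                           # resample cases with replacement
    boots.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])                # 95% percentile CI; mediation supported if it excludes 0
print(indirect(x, m, y), lo, hi)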

Memory measures

Recall - report (reproduce) from memory the material that was previously presented - requires greater cog. effort Recognition - decide from memory whether the material matches what was previously presented - requires less cog. effort - e.g., multiple choice

Thematic Apperception Test (TAT)

A deliberately ambiguous picture of a person/scene is presented. Participant: generate a story about the characters and scenario depicted in the picture (e.g., what they are thinking, doing, what led up to the event, how it will end). Instructions: generate a story about the characters based on this depicted scenario, including who they are and what they are thinking or doing; list notes and provide details. A type of projective test.

Mediational effect stat sig?

satisfy Baron & Kenny steps 1, 2, and 3 - next, as part of step 4, evaluate whether the mediational effect (partial or full) is sig. beyond chance
this test could be interpreted in any of the following ways - path c' became sig. lower than path c - the pathway from the predictor to the outcome was sig. mediated - the mediational pathway from the predictor through the mediator to the outcome was sig.
test the mediational effect using the Sobel test - using the unstandardized coefficients and SEs for paths a and b (so steps 2 and 3) - if the p value is less than .05, significant mediation
APA ***The pathway from peer norms to drinking days was significantly mediated by alcohol attitudes, Sobel z test = 5.88, p < .001.
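A minimal Python sketch of the Sobel z computation from unstandardized coefficients and standard errors; the a, b, and SE values below are hypothetical placeholders, not the lecture's peer-norms results:

import math
from scipy import stats

a, se_a = 0.42, 0.08   # path a: predictor -> mediator
b, se_b = 0.55, 0.09   # path b: mediator -> outcome, controlling for the predictor

z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p = 2 * stats.norm.sf(abs(z))
print(round(z, 2), p)  # mediation is significant if p < .05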

Regression: Small vs. Large Residuals of Y vs. Y'

small residuals --> small SSresidual --> larger SSregression --> regression model (line) more likely to be stat. sig. (beyond chance)
large residuals --> large SSresidual --> smaller SSregression --> regression model (line) less likely to be stat. sig. (attributable to chance) - when variability is largely attributed to SSresidual, less variability is left over for the SSregression line - larger error component
a regression model that is stat. sig. indicates that the predictors are related to the outcome (beyond chance)
residual = error of prediction between the regression line and the observed data points
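A minimal Python sketch (hypothetical data) of the sum-of-squares partition behind this logic: SStotal = SSregression + SSresidual, and their ratio gives R-squared:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([3, 4, 6, 5, 8, 9, 9, 12], dtype=float)

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x
ss_total = np.sum((y - y.mean()) ** 2)
ss_residual = np.sum((y - y_hat) ** 2)      # error of prediction around the regression line
ss_regression = ss_total - ss_residual      # variability accounted for by the line
print(ss_regression, ss_residual, ss_regression / ss_total)  # last value is R-squared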

CBL Chapter 2 - Moderators and Mediators

- A moderator is a third variable that can either augment or block the presence of a predictive relationship - For instance, the sun to sunburn relationship is much stronger for fair-skinned than for dark-skinned persons. Thus, fair skin is a moderator variable that enhances the relationship between sun exposure and burning. - Other moderator variables can reduce or block a causal sequence. For instance, the use of effective sunscreen lotions literally "blocks" (or at least retards) the link between the sun's ultraviolet rays and burning - Spurious relationship - A mediator is a third variable that serves as intermediary to help explicate the chain of processes of a predictive relationship. - With moderator effects, there is a link from A to B, but the observed relationship between these two variables is qualified by levels of moderator variable "C" which either enhances or blocks the causal process. - Mediator - In this case, the presence of "C" is necessary to complete the directional process that links A to B. In effect, varying A results in variations in C, which in turn results in changes in B. To return to our weather examples, the effect of rain on depression may be mediated by social factors. Rain causes people to stay indoors or to hide behind big umbrellas, hence reducing social contact. Social isolation, in turn, may produce depression. However, rain may not be the only determinant of social isolation. In this case, rain as a predictor variable is a sufficient, but not necessary, cause in its link to depression. - Moderator variables are determined by testing the interaction of a predictor variable and moderator variable on an outcome

Crano - correlational methods

- Galton developed the correlational technique to further his research on kinship (Galton, 1888, 1890; Stigler, 1989). The technique was refined and extended by Pearson (1896, 1920) and Yule (1897) to include multiple predictors of a criterion variable in what they termed multiple correlation - regression artifact, the tendency of extreme values on fallible tests to regress to the mean of their distribution upon retesting. This statistical inevitability owes its existence to random error, and continues to bedevil researchers who assign participants to conditions on the basis of fallible measures - Galton's seminal ideas for correlation sprang from his desire to appraise the strength of the relationship between a pair of variables that might be drawn from entirely different distributions, and operationalized with different measurement scales. - Initial applications of the concepts of correlation focused on heredity - Practical reasons and ethical concerns often preclude manipulation of theoretically relevant variables, so rather than manipulate, researchers correlate to understand the relatedness of naturally occurring variables. In consequence, correlational results are subject to contrasting causal interpretations. Examining the association between two variables is the most common form of correlational design - Although a strong correlation suggests a strong association between two variables, the causal priority of these variables cannot be determined via simple correlation. - an unspecified third factor may spuriously affect the relation between the two variables, further muddying the correlational waters. - Mediation and moderation models, popularized by Baron and Kenny (1986), provide means of assessing an intervening variable's influence on an outcome measure. - A mediating variable elucidates the link between a predictor and an outcome. Analysis of mediation indicates the extent to which a putative causal variable operates through another (the mediator) in affecting an outcome. - It represents a felt need to delve deeper into the causal processes underlying a relationship in contexts in which the necessary causal methodologies are unavailable. This approach is reasonable, so long as researchers understand that causal castles based on mediational results are built on correlational sand. - Although commonly confused with mediators, moderating variables interact with the predictor to modulate influence on the criterion variable. If a significant moderating effect is determined, the strength of the relationship from the predictor to the criterion systematically varies as a function of the moderator.

Crano - meta-analysis

- Meta-analysis is a useful and increasingly common technique that integrates statistical moderation with fundamental correlational methods - approach to enhance power of small-scale studies by combining them into a single analysis (O'Rourke, 2007). Meta-analysis is a family of techniques used to combine results of multiple studies on the same phenomenon by calculating the size of the treatment effect found in each, and combining these estimates to estimate the average effect of a given treatment or intervention on a specified outcome. - also to study factors that moderate these effects. By combining studies, meta-analysis provides greater power than any study of which it makes use, and thus a better estimate of the overall effect of an intervention or treatment on a dependent measure. It can provide insights into factors that enhance or attenuate the critical relationship by identifying factors that systematically moderate effect sizes across studies. - distortions that might be introduced through the common editorial practice of publishing only statistically significant results

CBL Chapter 11 - Nonrandom sampling

- Thus, nonrandom sampling procedures sacrifice precision of estimates for the sake of reducing recruitment costs. - In convenience sampling, the sample consists of people most readily accessible or willing to take part in the study. - Despite the potential of the Internet as a tool to sample individuals from all walks of life, most research collected through this format also makes use of convenience samples - In snowball sampling, initially sampled participants are asked to contact and recruit others in their social network. These new recruits in turn are asked to invite additional members, and so forth. Those who personally know initially recruited individuals will have a greater likelihood of being included in the sample. Snowball sampling might be helpful if the researcher wishes to take a relatively hands-off recruitment approach, and might also be a valuable route to recruit individuals belonging to marginalized or hard-to-obtain groups. - Participants recruited through snowball sampling will tend to have acquaintances who tend to be personally alike (e.g., politically, racially, living in the same city, etc.) compared to typical members of the intended larger population. This makes sense, as we are more likely to be friends with people who are similar to us and are geographically closer. - In quota sampling, members are sampled nonrandomly until a predefined number of participants for each of a specified set of subgroups is achieved. It differs from stratified sampling, which requires a listing of all members from each subpopulation, and which is used to select members from each subsample. This ensures that an adequate number of people belonging to relatively rare subpopulations are included as part of the study, to yield more precise subsample estimates. As a list of all subpopulation members of the targeted population is not always obtainable, the researcher might resort to quota sampling. - In some types of quota sampling, quotas are set relative to a rough guess of the subgroup proportion in the population. In other types of quota sampling, a number is set to achieve sufficient statistical power for each of the subgroups

Warner II - Chapter 7 - Moderation - 2

- When all predictor variables are categorical and the outcome variable is quantitative, and interactions between categorical predictors are of interest, it is usually most convenient to do the analysis as a factorial ANOVA - A regression analysis with dummy predictor variables yields the same results as a factorial ANOVA, although the information is presented in different ways in the output. Regression output usually does not directly include group means, although these can be calculated from the coefficients of the regression equation. - Interaction can be understood as "different slopes for different folks." - Men and women may also have different average starting salaries. This can be called a "main effect" of sex on salary. In this regression analysis, men and women can have different intercepts (b0 for women and b0 + b1 for men). If b1 differs significantly from 0, they have significantly different starting salaries - increases. In other words, there can be a different slope to predict salary from years for men than for women. That is called an interaction between sex and years as predictors of salary, or moderation of the effect of years on salary by sex.) - The statistical significance of an interaction between quantitative X1 and X2 predictors in a linear regression can be assessed by forming a new variable - that is the product of the predictors and including this product term in a regression, along with the original predictor variables, - contributions. Aiken and West (1991) tell us that centering scores on X1 and X2 prior to calculating the (X1 × X2) product reduces this collinearity. Centering is done for each variable by subtracting the mean for that variable from each score (e.g., scores on X1 are centered by subtracting the mean of X1 from each individual X1 score).
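A minimal Python sketch (simulated predictors with nonzero means) of why Aiken and West recommend centering: the raw product term is nearly redundant with its components, while the centered product is not:

import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(loc=50, scale=10, size=500)
x2 = rng.normal(loc=100, scale=15, size=500)

c1, c2 = x1 - x1.mean(), x2 - x2.mean()     # centering: subtract each variable's mean
print(np.corrcoef(x1, x1 * x2)[0, 1])       # raw product: highly collinear with X1
print(np.corrcoef(c1, c1 * c2)[0, 1])       # centered product: correlation near zero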

3 types of research synthesis

1) narrative review - qualitative approach for summarizing and interpreting primary studies addressing the same research question - disadvantages: 1) failure to include all existing studies comprehensively 2) lack of clearly stated rules for study inclusion/exclusion 3) no statistical metrics to combine studies
2) vote counting - literature review that involves counting and comparing the number of primary studies that are sig. vs. not - disadvantages: 1) black-and-white tally - borderline votes possible 2) ignores effect sizes
3) meta-analysis (most rigorous) - quantitative approach for summarizing and synthesizing research studies addressing the same research question - advantages: 1) combines N (to increase stat. power) 2) combines effect sizes (to evaluate practical importance)
meta-analysis combining N and effect size (see the sketch below) - study 1: N = 30, d = .50, p = .18 - study 2: N = 30, d = .50, p = .18 - study 3: N = 30, d = .50, p = .18 - meta-analysis: total N = 90, summary d = .50, p = .02 - due to the larger sample size, the p value emerges as significant
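A minimal Python sketch reproducing the slide's numbers under the assumption of two equal-sized groups per study: the same d = .50 is nonsignificant with N = 30 but significant once the pooled N = 90 is used:

import math
from scipy import stats

def p_from_d(d, total_n):
    n_per_group = total_n / 2
    t = d * math.sqrt(n_per_group * n_per_group / (n_per_group + n_per_group))
    return 2 * stats.t.sf(abs(t), total_n - 2)   # two-tailed p value

print(round(p_from_d(0.50, 30), 2))   # ≈ .18 within each small study
print(round(p_from_d(0.50, 90), 2))   # ≈ .02 for the pooled sample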

Measures of memory: direct and indirect measures

Direct measures - participants are directly asked about the topic and therefore aware of the reason for their responses - e.g., typical self-report questionnaires such as a "scale of ageism" - aim: high face validity - more susceptible to evaluation apprehension and social desirability bias
Indirect/implicit measures - participants are indirectly asked about the topic and therefore less aware of the reason for their responses - e.g., sentence completion tasks (old people are ____), thematic apperception test (TAT), implicit association test (IAT) - aim: low face validity - less susceptible to evaluation apprehension and social desirability bias - criticism: if they are so indirect/implicit, are they measuring what we think they are measuring?
Direct vs. indirect/implicit measures - is each assessing what it claims to measure (measurement validity)? - methodological reviews show only low to medium positive correlations between direct vs. indirect/implicit measures of the same construct - to what extent are both assessing the same construct?

Experimental vs. Statistical Control

Experimental control - used in experimental methods - e.g., random assignment in an experiment - advantage: ensures conditions (groups) are initially equivalent, to determine if the manipulated IV causes the DV - advantage: possible to rule out all possible confounds/extraneous predictors on the DV (the most desirable type of control, because competing confounds/extraneous predictors are ruled out without having to be measured) - disadvantage: not always ethical or practical to randomly assign participants to groups - differences across groups can be attributed to the treatment - results caused by the treatment on the outcome - ensures all groups are initially equivalent on demographic and background characteristics
Statistical control - used in correlational methods - e.g., regression with multiple predictors - advantage: superior to simple r (Pearson, point-biserial, phi) and regression with a single predictor (those techniques don't control for covariates or confounds) - advantage: controlling by entering covariates (to rule out these competing confounds/extraneous predictors) in the regression analysis - disadvantage: impossible to rule out all possible confounds/extraneous predictors on the outcome (competing predictors - covariates - must be measured and entered into the analysis to rule out these confounds/extraneous predictors) - controlling for predictors by entering them as covariates statistically equalizes them across cases

Results APA - dummy coding

In the regression model, race explained 4% of the variance in alcohol attitudes, F(3, 494) = 7.56, p < .001. Specifically, White race, compared to Black (beta = -.18, p < .001) and Asian (beta = -.14, p < .01) race, uniquely predicted higher alcohol attitudes. The predictor of Latino (vs. White) race (beta = .07) was not significant in the model.

Regression APA style

In the regression model, the set of predictors explained 15% of the variance in life satisfaction, F(4, 495) = 21.29, p < .001. Specifically, being raised in a dual-parent (versus single-parent) household (beta = .20, p < .001), higher conscientiousness (beta = .16, p < .001), and higher extraversion (beta = .24, p < .001) uniquely predicted higher life satisfaction. The predictor of gender (beta = -.04) was not significant in the model.

Multiple regression visual representation

Less common scenario - each predictor uniquely explains a separate portion (a and b) of the outcome More common scenario - each predictor uniquely explains a separate portion (a and b) of outcome - predictors explain some of the same portions (c) of the outcome - predictors entered must not be highly correlated with one another - causes multicollinearity problems

Implicit association test (IAT)

Measures automatic associations between mental concepts (from one's memory history) by classifying and sorting a center object (image or text) into different categories
Design 1) instructions: as quickly and accurately as possible, classify the center object with the word pair on either the top left or right 2) practice trials (early on), then test trials (later on) 3) within-subjects design (counterbalanced) - participants receive the congruent then incongruent condition or vice versa 4) analysis: condition A vs. B (IV) on mean (seconds) categorization speed (DV) a) preference for Whites (faster in condition A - most common) - White good vs. Black bad b) no preference (same reaction speed for both conditions) c) preference for Blacks (faster in condition B) - White bad vs. Black good

Warner II - Chapter 4 - Regression Analysis and Statistical Control - 1

On Section 4.4 (pp. 103 to 104), "semipartial correlation" is known as "part correlation" in SPSS output. In SPSS, this may be requested as follows: "Analyze"-->"Regression"-->"Linear"-->"Statistics"--> Checkmark "part and partial correlations." The "semipartial" (AKA "part") correlation should not be confused with the "partial correlation." The semipartial (part) correlation, partial correlation, and β represent different varieties of "unique effects" in multiple regression. Among these 3 types of unique effects, the β coefficient is the unique effect coefficient most commonly reported in published studies. - The previous chapter described how to calculate and interpret a partial correlation between X1 and Y, controlling for X2. One way to obtain rY1.2 (the partial correlation between X1 and Y, controlling for X2) is to perform a simple bivariate regression to predict X1 from X2, run another regression to predict Y from X2, and then correlate the residuals from these two regressions (X1* and Y*). This correlation is denoted by r1Y.2, which is read as "the partial correlation between X1 and Y, controlling for X2." This partial r tells us how X1 is related to Y when X2 has been removed from or partialled out of both the X1 and the Y variables. The squared partial correlation, r2Y1.2, can be interpreted as the proportion of variance in Y that can be predicted from X1 when all the variance that is linearly associated with X2 is removed from both the X1 and the Y variables. - Partial correlations are sometimes reported in studies where the researcher wants to assess the strength and nature of the X1, Y relationship with the variance that is linearly associated with X2 completely removed from both variables. - This chapter introduces a slightly different statistic (the semipartial or part correlation) that provides information about the partition of variance between predictor variables X1 and X2 in regression in a more convenient form. - To obtain this semipartial correlation, we remove the variance that is associated with X2 from only the X1 predictor (and not from the Y outcome variable). - This is called a semipartial correlation because the variance associated with X2 is removed from only one of the two variables (and not removed entirely from both X1 and Y as in partial correlation analysis).
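A minimal Python sketch (simulated data) of why the squared semipartial (part) correlation is convenient for variance partitioning: sr2 for X1 equals the increase in R-squared when X1 is added to a model that already contains X2:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x2 = rng.normal(size=400)
x1 = 0.5 * x2 + rng.normal(size=400)
y = 0.3 * x1 + 0.4 * x2 + rng.normal(size=400)

r2_reduced = sm.OLS(y, sm.add_constant(x2)).fit().rsquared
r2_full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit().rsquared
print(round(r2_full - r2_reduced, 3))   # squared semipartial correlation for X1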

Testing Moderation using Multiple Regression

Purpose - test an interaction using predictors (or moderators) that could be quantitative and/or categorical (dummy coded) in regression - Why? ANOVA only permits testing interactions using categorical predictors and moderators - AKA the Aiken & West (1991) approach - when all variables are categorical, it doesn't matter if ANOVA or regression is used - the p values are the same - interaction - an effect over and beyond the main effects - interaction term - if sig., suggests the alcohol attitudes to drinking days relation is moderated by levels of conscientiousness - the strength of the connection is influenced by conscientiousness (see the sketch below)
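A minimal Python sketch (simulated data; the alcohol-attitudes and conscientiousness names echo the lecture example, but the numbers are made up) of the Aiken & West style test: center both variables, add their product, and test the product term:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
attitudes = rng.normal(size=n)            # predictor
consci = rng.normal(size=n)               # moderator
drinking = 0.5 * attitudes + 0.2 * consci - 0.3 * attitudes * consci + rng.normal(size=n)

a_c = attitudes - attitudes.mean()
c_c = consci - consci.mean()
X = sm.add_constant(np.column_stack([a_c, c_c, a_c * c_c]))
fit = sm.OLS(drinking, X).fit()
print(fit.params[3], fit.pvalues[3])      # a significant product term indicates moderation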

Research and academic positions

Tips/advice - always be open to constructive feedback and learning - submit only conscientious and fully-checked work to advisor - seize opportunities in graduate school to build CV (e.g., conference presentations, journal articles)
Peer-review process for journals 1) submit an extremely clean research manuscript to a journal (with research mentor guidance) 2) desk review (editor rejects or sends out for full peer-review) 3) full review (2 to 5 reviewers) examples of reviewer comments/suggestions - grammar/spelling problems (absolute no-no) - add citations and paragraphs - reorganize sections - modify analyses and/or additional analyses - collect data for another sample - all of the above 4) based on all the reviewer comments, the editor indicates reject or revise and resubmit 5) if revise and resubmit, authors are given time to implement all or almost all the reviewer comments 6) publication likelihood - based on the percentage and quality of implementations
Adjunct/instructor - part-time or full-time teaching contract is per semester or year; no research expected - teach while getting PhD (requires MA) - teach after getting PhD (no job security)
Postdoc researcher - after getting PhD degree, to produce more research publications (to increase chances of eventually obtaining a tenure-track professor position)
Tenure-track professor (job security if tenure granted) - community college: 5-6 courses per semester, research discouraged, 0 expected pubs - bachelor's is highest: 3-4 courses per semester, low research expectations, 1 expected pub - master's is highest: 2-3 courses per semester, medium research expectations, 1-2 expected pubs - doctoral is highest: 2 courses per semester, high research expectations, 2 or more pubs
3 main tenure-track professor duties 1) Research: publish or perish 2) Teaching: teaching, course preparation, grading, office hours, meeting with students, email correspondence 3) Service: department committees, university committees, professional service

Mediator

definition: intermediate variable that helps to explicate the chain of processes of a predictor to outcome relation (essentially, A leads to B and then leads to C) - changes in predictor --> changes in mediator --> changes in outcome Statistically analyzing mediation - multiple regression (Baron and Kenny's approach or Hayes' PROCESS approach) - structural equation modeling Examples - predictor: rain (inches), outcome: depression - mediators: social isolation, physical inactivity seeking to understand if A --> B --> C - clarify/give info to chain of events - to get from A to C need to go from A to B to C

History

First modern meta-analysis (Smith and Glass, 1977) - are psychological therapies effective? - meta-analyzed 375 psychotherapy studies - inclusion criteria = studies that compared a treatment vs. control group on any outcome - total N > 25,000 patients - Cohen's d effect size - summary effect size computed by taking the average (unweighted model): d = 0.68 - conclusion: yes, psychological therapies are effective, but some types are more efficacious than others (systematic desensitization, rational-emotive, behavior modification, Adlerian, etc.)

Dummy (binary) coding

purpose - convert a categorical predictor consisting of 3 or more (mutually exclusive) groups into several dummy predictors for regression analysis - why? predictors entered into regression must be binary or quantitative - create a new binary (0/1) variable representing each category/response within the race variable, then enter all but the reference category (e.g., White) as predictors (see the sketch below) - a higher score (1) on a dummy indicates membership in that racial group; a lower score (0) indicates the reference (White) group
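A minimal Python sketch (hypothetical data) of dummy coding a four-category race variable, keeping White as the reference group and entering the remaining k - 1 dummies:

import pandas as pd

df = pd.DataFrame({"race": ["White", "Black", "Asian", "Latino", "White", "Asian"]})
dummies = pd.get_dummies(df["race"], prefix="race")     # one 0/1 indicator per category
predictors = dummies.drop(columns="race_White")         # drop the reference group; enter the other k - 1 dummies
print(predictors)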

CBL Chapter 19 - Meta-analysis - 1

- A primary study is an original study reporting on the results of analysis of data collected from a sample of participants - serves as the principal unit of empirical knowledge. - Traditionally, this constructive process of integrating knowledge was based on a narrative review, a qualitative approach for summarizing and interpreting primary studies that have addressed the same research question - The most widespread type is meta-analysis, a quantitative approach for summarizing and interpreting primary studies that have addressed the same research question. - Although the traditional narrative review has served us well, its critics suggest that it is prone to shortcomings, including (a) the failure to review the existing knowledge base comprehensively, (b) the lack of clearly stated rules for inclusion or exclusion of studies, and (c) the failure to use statistical metrics to combine findings across studies objectively. - meta-analysis, through quantitatively combining primary studies with varying measurement idiosyncrasies, offers a clearer path to understanding the true strength of relation between variables than does nonquantitative narrative review. - In effect, meta-analysis provides a way of assessing construct validity and external validity by pooling primary investigations that address the same hypothesis and which may involve different methods of design and measurement. - The problem with comparing studies on this basis is that statistical significance is affected by a host of methodological and statistical factors that do not necessarily reflect upon the validity of the proposed relationship, such as sample size or the reliability of measures - With all else being equal, a study containing a larger sample size is more likely to achieve statistical significance as it yields more stable estimates. Thus, probability values obtained are not directly comparable between studies, because they indicate statistical significance, not practical significance. To compare studies properly requires a common metric to express the size of the effect obtained in each of the two studies. This is the basic insight and contribution of meta-analysis—to convert the statistical tests of primary studies that address the same hypothesis into a common metric, and then to aggregate these studies to obtain an overall effect size estimate.

Lac - study level moderators

- Aside from conducting a meta-analysis to assess an A to B summary effect, additional variables may be used to evaluate potential moderators of the connection - The moderator in a meta-analysis is a study-level qualifier of the distribution of effect sizes accumulated. If the distribution of effect sizes is found to be dispersed over and beyond sampling error, study-level moderators should be coded and analyzed to understand the extent to which they are responsible for the heterogeneity. - Each primary study is coded in terms of whether it possesses a characteristic level of a moderator. Then, to judge whether systematic differences exist as a function of moderator levels, a separate synthesized effect is calculated for each subgroup of studies, followed by a test of contrast to assess whether the effect size magnitudes are significantly different. - The degree of specificity that could be coded and analyzed in a meta-analysis is only as fine-grained as the original primary studies that have been conducted and identified in that area of research. - moderation analysis serves as an option to determine whether a cultural variable enhances or reduces the strength of the link

CBL Chapter 16 - Priming - 3

- Automatic evaluation - the priming stimuli are words or pictures of an attitude object, followed by target words that are evaluative adjectives (e.g., "delightful," "awful"). The respondent's task is to indicate as quickly as possible whether the target word has a good or bad connotation. If the prime automatically elicits an evaluative reaction of some kind, then the participant's evaluative response is expected to carry over to the judgment of the target. If the evaluative meaning of the target sufficiently matches that of the prime, responding should be facilitated. - Presentation of a positive prime will speed up responding to positive adjectives and slow down responding to negative ones. Presentation of a negative prime will speed up responses to subsequent negative judgments and inhibit positive judgments. - Pronunciation task - An alternative method for assessing implicit evaluation and other automatic associations replaces the judgment reaction time measure with a measure of time taken to pronounce the target word aloud. Again the idea is that if the target word has been activated by presentation of a preceding prime, the time it takes to recognize and speak the word will be shorter than in the absence of a relevant prime. - Many factors other than the priming effects of interest can influence response latencies to particular target stimuli, including word length and word frequency. Thus, it is extremely important that these stimulus features be controlled for in making comparisons between priming conditions. - Reaction time measures also create some problems for data analysis purposes. First, the distribution of response times is typically positively skewed (in that very long reaction times occur occasionally but extremely short latencies are impossible). For this reason, various transformations of the reaction time measure (e.g., taking the square root, or the natural logarithm) may need to be used to normalize the distribution for purposes of analysis. - Second, the researcher needs to be concerned about "outliers" in each participant's reaction time data—excessively long reaction times that indicate the respondent wasn't paying attention at the time of presentation of the target stimulus, or excessively short reaction times that reflect anticipatory responding before the target was actually processed
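A minimal Python sketch (made-up reaction times in milliseconds; the 300 and 2,000 ms cutoffs are arbitrary choices for illustration) of the two cleaning steps described above, trimming outliers and transforming to reduce skew:

import numpy as np

rt_ms = np.array([420, 515, 480, 2950, 610, 150, 530, 495, 700, 455], dtype=float)

trimmed = rt_ms[(rt_ms > 300) & (rt_ms < 2000)]   # drop anticipatory and inattentive responses
log_rt = np.log(trimmed)                          # log transform to reduce positive skew
print(trimmed.size, round(log_rt.mean(), 3))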

CBL Chapter 9 - Nonexperimental research - 5

- Because error is random, a new analysis employing another sample of respondents would be susceptible to different sources of error (e.g., first sample might be 18-year-olds completing the study at one university, but second sample might involve 20-year-olds completing the survey at a different university), and the regression weights should be expected to change accordingly. - Thus R2 values should be reported with some correction for this expected "shrinkage" (see McNemar, 1969; Yin & Fan, 2001). The extent of shrinkage is affected by the size and composition of the original respondent sample and by the quality of the measures employed in the multiple regression. Higher quality (i.e., more reliable) measures result in less shrinkage. With perfectly reliable measures, no shrinkage would occur. - Another useful means of estimating the extent of "shrinkage" makes use of a cross-validation sample. In this approach, the specific regression weights are determined in an initial sample of participants. These weights then are employed on the data of the second, different sample in calculating a new multiple R2. If the weights that were determined in the original analysis successfully replicate the multiple R2 in the second sample of respondents, confidence in the utility of the prediction formula is bolstered. - From a research orientation, the main problem with the evaluation of freely occurring variables is that they usually have natural extraneous covariates; that is, the occurrence of the variable of interest is confounded by the co-occurrence of other
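A minimal Python sketch (simulated data) of the cross-validation idea: estimate regression weights in one sample, apply them unchanged to a second sample, and compare the two R-squared values; the drop is the estimated shrinkage:

import numpy as np

rng = np.random.default_rng(6)

def make_sample(n=150):
    X = rng.normal(size=(n, 3))
    y = X @ np.array([0.4, 0.2, 0.0]) + rng.normal(size=n)
    return np.column_stack([np.ones(n), X]), y

X1, y1 = make_sample()
X2, y2 = make_sample()

w = np.linalg.lstsq(X1, y1, rcond=None)[0]          # weights estimated in the first sample
r2_original = np.corrcoef(X1 @ w, y1)[0, 1] ** 2
r2_crossval = np.corrcoef(X2 @ w, y2)[0, 1] ** 2    # same weights applied to the second sample
print(round(r2_original, 3), round(r2_crossval, 3))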

Crano - factor analysis

- Complementary methods used to determine scale quality made use of basic correlational methods augmented by techniques of exploratory and confirmatory factor analysis. Factor analysis seeks to identify items that are strongly associated within a set (or factor), and that are weakly related (or unrelated) to other variables that fall in different factors - parsimonious grouping of sets of variables into common underlying factors, and is used extensively in development of measures whose items share a common focus. - one of the underlying methodologies that is a common feature of all latent variable models. Although he developed this method to test and promote his two-factor theory of intelligence, - In pursuing improved measurement of psychological states, researchers have become increasingly concerned with the construct validity of measures, the extent to which a psychological concept plausibly exists - Psychometrically, construct validity is indicated if factors (scales or other indicators) with which the measure theoretically should relate are statistically associated with the scale (convergent validity), and factors with which the measure theoretically should not relate are not statistically associated with the scale (discriminant validity). In its most rigorous conceptualization, construct validity may be evaluated using the classic multitrait-multimethod matrix (MTMMM) - takes advantage of latent factor approaches for assessing the contribution of both forms of variance - One such possibility is seen in the more widespread use of the underutilized crosslagged panel design (Crano, Kenny, & Campbell, 1972; Pelz & Andrews, 1964; Rozelle & Campbell, 1969), which with appropriate caution uses time order to help estimate the causal preponderance of relationships that are crossed and lagged over time

Baron and Kenny: Moderator

- In general terms, a moderator is a qualitative (e.g., sex, race, class) or quantitative (e.g., level of reward) variable that affects the direction and/or strength of the relation between an independent or predictor variable and a dependent or criterion variable. - Specifically within a correlational analysis framework, a moderator is a third variable that affects the zero-order correlation between two other variables. - A moderator effect within a correlational framework may also be said to occur where the direction of the correlation changes. - In the more familiar analysis of variance (ANOVA) terms, a basic moderator effect can be represented as an interaction between a focal independent variable and a factor that specifies the appropriate conditions for its operation. - A moderator-interaction effect also would be said to occur if a relation is substantially reduced instead of being reversed, for example, if we find no difference under the private condition. - A common framework for capturing both the correlational and the experimental views of a moderator variable is possible by using a path diagram as both a descriptive and an analytic procedure. - impact of the noise intensity as a predictor (Path a), the impact of controllability as a moderator (Path b), and the interaction or product of these two (Path c). The moderator hypothesis is supported if the interaction (Path c) is significant. There may also be significant main effects for the predictor and the moderator (Paths a and b), but these are not directly relevant conceptually to testing the moderator hypothesis. - desirable that the moderator variable be uncorrelated with both the predictor and the criterion (the dependent variable) to provide a clearly interpretable interaction term. - unlike the mediator-predictor relation (where the predictor is causally antecedent to the mediator), moderators and predictors are at the same level in regard to their role as causal variables antecedent or exogenous to certain criterion effects. That is, moderator variables always function as independent variables, whereas mediating events shift roles from effects to causes, depending on the focus of the analysis. - Within this framework, moderation implies that the causal relation between two variables changes as a function of the moderator variable. The statistical analysis must measure and test the differential effect of the independent variable on the dependent variable as a function of the moderator.

CBL Chapter 9 - Nonexperimental research - Mediation

- Conversely, some variables are thought to influence others only indirectly—that is, their influence on a variable is mediated by another variable (or set of variables), - An alternative to a fully mediated model (Figure 9.9b), Figure 9.9c represents a model in which assumed reciprocity acts as a mediator, but in addition, there remains a direct effect of assumed similarity. In Figure 9.9c the direct effects are from similarity to assumed reciprocity, from assumed reciprocity to attraction, and from similarity to attraction. An indirect effect occurs starting with similarity through the mediator of assumed reciprocity to attraction. The traversal of pathways via indirect effects may be determined by starting the trace (through one-headed arrows) from an initial exogenous variable to a mediator to a final endogenous variable. - Indeed, when the influence of both assumed reciprocity and similarity was accounted for on the attraction criterion, the resulting partial correlation weight (r = .18) of assumed reciprocity to attraction was substantially less than the original correlation of .64 (as shown in Figure 9.9c). - Given this attenuation of the similarity to attraction connection after inclusion of assumed reciprocity, it is reasonable to conclude that the mediational model is more plausible than the model without the mediator. Because the partial correlation was not completely reduced to .00 upon controlling for both predictors on attraction, the analysis suggests that both direct and indirect effects from similarity to attraction were plausible. - The mediational interpretation is not contradictory to the direct effect idea; rather, it suggests that assumed reciprocity does not fully account for the influence of similarity on attraction.

Baron and Kenny - Case Examples Moderator

- For this case, a dichotomous independent variable's effect on the dependent variable varies as a function of another dichotomy. The analysis is a 2 × 2 ANOVA, and moderation is indicated by an interaction - Here the moderator is a dichotomy and the independent variable is a continuous variable. - The typical way to measure this type of moderator effect is to correlate intentions with behavior separately for each gender and then test the difference. - The correlational method has two serious deficiencies. First, it presumes that the independent variable has equal variance at each level of the moderator. - The source of this difference is referred to as a restriction in range (McNemar, 1969). Second, if the amount of measurement error in the dependent variable varies as a function of the moderator, then the correlations between the independent and dependent variables will differ spuriously. - These problems illustrate that correlations are influenced by changes in variances. However, regression coefficients are not affected by differences in the variances of the independent variable or differences in measurement error in the dependent variable. It is almost always preferable to measure the effect of the independent variable on the dependent variable not by correlation coefficients but by unstandardized (not betas) regression coefficients - In this case, the moderator is a continuous variable and the independent variable is a dichotomy. - To measure moderator effects in this case, we must know a priori how the effect of the independent variable varies as a function of the moderator. It is impossible to evaluate the general hypothesis that the effect of the independent variable changes as a function of the moderator because the moderator has many levels. - First, the effect of the independent variable on the dependent variable changes linearly with respect to the moderator. The linear hypothesis represents a gradual, steady change in the effect of the independent variable on the dependent variable as the moderator changes - The second function in the figure is a quadratic function. For instance, the fear-arousing message may be more generally effective than the rational message for all low-IQ subjects, but as IQ increases, the fear-arousing message loses its advantage and the rational message is more effective. - The third function in Figure 2 is a step function. At some critical IQ level, the rational message becomes more effective than the fear-arousing message. - The linear hypothesis is tested by adding the product of the moderator and the dichotomous independent variable to the regression equation - So if the independent variable is denoted as X, the moderator as Z, and the dependent variable as Y, Y is regressed on X, Z, and XZ. Moderator effects are indicated by the significant effect of XZ while X and Z are controlled. - The quadratic moderation effect can be tested by dichotomizing the moderator at the point at which the function is presumed to accelerate - Alternatively, quadratic moderation can be tested by hierarchical regression procedures - In this case both the moderator variable and the independent variable are continuous. If one believes that the moderator alters the independent-dependent variable relation in a step function (the bottom diagram in Figure 2), one can dichotomize the moderator at the point where the step takes place. After dichotomizing the moderator, the pattern becomes Case 2.
The measure of the effect of the independent variable is a regression coefficient. - If one presumes that the effect of the independent variable (X) on the dependent variable (Y) varies linearly or quadratically with respect to the moderator (Z), the product variable approach described in Case 3 should be used. For quadratic moderation, the moderator squared must be introduced.

CBL Chapter 11 - Random Sampling - 3

- In cluster sampling, geographic locations (or clusters or segments) are randomly sampled, and all members from the clusters selected are used for the sample. In this method, the sampling frame is identified (say, all city blocks of houses), and from this population, specific clusters (city blocks, in this case) are chosen randomly. Once a cluster is chosen for inclusion in the sample, all members of the cluster are surveyed (in our example, all eligible parents of high schoolers within the chosen cluster, or block, would be surveyed). Cluster sampling is classified under the umbrella of random sampling because the clusters are randomly chosen, although all members within the selected clusters are then used for the sample. - In multistage sampling, clusters of locations are sampled from a geographical sampling frame (as in cluster sampling), and then (unlike cluster sampling) units within each cluster are sampled as well. As the name of this approach suggests, the sampling process is extended to more than one stage or occasion. - Multistage sampling involves two stages, and the random sampling takes place more than once, at the cluster level and again at the participant level of the selected clusters. - A potential distorting influence in cluster or multistage sampling is that clusters generally do not contain the same number of potential respondents. - In such cases, potential respondents do not all have an equal probability of selection, and the precision of the resulting estimates is thereby jeopardized. - An approach known as probability proportional to size sampling (PPS) has been developed to solve this potential problem. Kalton (1983; also Levy & Lemeshow, 2008) has discussed the details of this technique; for our purposes, it is sufficient to understand that the PPS sampling approach ensures that the likelihood of selection in a cluster or multistage sample is the same for all potential sampling units (or respondents) no matter the size of the cluster from which they are drawn. Under such constraints, the sample is an epsem one, and the standard approaches for estimating precision may be used. Generally PPS sampling is preferred in cluster or multistage sampling designs. - Multistage sampling is particularly useful when the population to be studied is spread over a large geographic area. - With cluster and multistage sampling approaches, the precision of the survey estimates thus depends on the distributional characteristics of the traits of interest. If the population clusters are relatively homogeneous on the issues that are central to the survey, with high heterogeneity between clusters, the results obtained through this method will be less precise than those obtained from a random sample of the same size. However, if the population clusters are relatively heterogeneous (i.e., if the individual clusters provide a representative picture of the overall population), multistage sampling will generate estimates as precise as simple random sampling of respondents.

Warner II - Chapter 7 - Moderation - 1

- In factorial analysis of variance (ANOVA), when the effect of one predictor variable (Factor A) on the outcome variable Y differs within separate groups defined by Factor B, we call this an interaction between Factors A and B. Interaction can also be called nonadditivity because the presence of interaction means we need more than just additive main effects for Factors A and B to predict cell means for Y; an A × B interaction term is also needed. - Interactions between predictors can also be examined in multiple regression. In the context of regression, interaction is usually called moderation. Consider an example in which we want to predict interview skills (Y) from the categorical variable sex (X1) and a quantitative measure of emotional intelligence (X2). If we find that the slope that relates X2 to Y is different for men than for women, we would say that sex is a moderator variable. The effect of X2 on Y is moderated by, or changed by, or dependent upon sex. - Mediation - first, X1 causes X2, then X2 causes Y - Correlation between predictors tells us nothing about potential interaction - Predictors are often correlated in regression analyses, but as noted previously, those correlations are not evidence for or against the possible existence of moderation. - X1 and X2 can both be categorical variables. One of the predictor variables can be categorical, the other can be quantitative. The categorical predictor variable is usually treated as the moderator variable. Both X1 and X2 can be quantitative variables. - When both predictor variables are categorical, the most convenient way to analyze the data is usually factorial ANOVA. - In such situations, the naturally occurring group membership factor is often thought of as the moderator. - This interaction is disordinal (that is, the lines cross; the rank order of mean amount of time given to Mr. Right and Mr. Wrong was opposite for the two groups of women).

CBL Chapter 16 - Indirect and implicit measures of cognition and affect

- In indirect attitude assessment, "the investigator interprets the responses in terms of dimensions and categories different from those held in mind by the respondent while answering" - Indirect approaches are used to reduce possible distortions that might come about when respondents answer questions in a socially desirable or normative manner to place themselves in a more favorable light. - Many researchers feel that they can obtain more accurate evaluations by having respondents focus their attention on irrelevant but compelling features of the task. They hope that by using misdirection or assessing reactions of which people are unaware, their respondents will lower their defenses and present a more valid picture of their attitudes, beliefs, or judgments. - Suppose, for example, that a researcher was interested in indirectly assessing respondents' attitudes toward labor unions, and decided to employ a sentence completion task, an indirect measurement in which a partial sentence is presented and participants are required to complete the sentence. - In addition to sentence completion tests, another approach is used to assess attitudes surreptitiously. The thematic apperception test is an indirect measurement in which the respondent views a deliberately ambiguous picture and then generates a story about the characters and scenario depicted in the picture - The intent of the measure is far from transparent, and thus the hope is that the method will allow the researcher to obtain more honest and unbiased answers than would more direct assessment methods. - It must be acknowledged that these forms of assessment (the sentence completion, the TAT, the coding of the speeches of public figures) make considerable demands on the time and technical expertise of the investigator because the responses typically gathered in these methods do not lend themselves to easy scoring. - Generally, coders must be trained to perform systematic content analyses of respondents' qualitative responses (Chapter 14). This training is labor intensive and time consuming. In addition, it is necessary that some estimates of the interrater reliability of the scoring procedure be developed - Other methodologies for indirect assessment of attitudes and preferences rely on implicit responses, responses to stimuli that participants are not necessarily aware that they are making. - Many of the techniques for assessing cognition derive from a general model of information processing that assumes that knowledge about the world and experiences is acquired and remembered through four stages or operations: attention (what information is attended to), encoding (how that information is understood and interpreted at the time of intake), storage (how information is retained in memory), and retrieval (what information is subsequently accessible in memory).

Warner II - Chapter 6 - Dummy

- In other words, scores on a multiple-group categorical predictor variable (such as political party coded 1 = Democrat, 2 = Republican, and 3 = Independent) are not necessarily linearly related to scores on quantitative variables. - The score values that represent political party membership may not be rank ordered in a way that reflects a monotonic relationship with changes in conservatism; as we move from Group 1 = Democrat to Group 2 = Republican, scores on conservatism may increase, but as we move from Group 2 = Republican to Group 3 = Independent, conservatism may decrease. - In general, when we have k groups or categories, a set of k - 1 dummy variables is sufficient to provide complete information about group membership. - It is acceptable to use dichotomous predictor variables in regression and correlation analysis. This works because (as discussed in Volume I, Chapter 10 [Warner, 2020]) a dichotomous categorical variable has only two possible score values, and the only possible relationship between scores on a dichotomous predictor variable and a quantitative outcome variable is a linear one. - Essentially, when we use a dummy variable as a predictor in a regression, we have the same research situation as when we do a t test or ANOVA (both analyses predict scores on the Y outcome variable for two groups). When we use several dummy variables as predictors in a regression, we have the same research situation as in a one-way ANOVA (both analyses compare means across several groups). Therefore, the issues reviewed in planning studies that use t tests and one-way ANOVA are also relevant when we use a regression analysis as the method of data analysis. - If the groups being compared received different "dosage" levels of some treatment variable, the dosage levels need to be far enough apart to produce detectable differences in outcomes. If the groups are formed on the basis of participant characteristics (such as age), the groups need to be far enough apart on these characteristics to yield detectable differences in outcome. Other variables that might create within-group variability in scores may need to be experimentally or statistically controlled to reduce the magnitude of error variance, as described in discussions of the independent-samples t test and one-way between-S ANOVA. - From Equation 6.2, we can work out two separate prediction equations: one that makes predictions of Y for women and one that makes predictions of Y for men. To do this, we substitute the values of 0 (for women) and 1 (for men) into Equation 6.2 and simplify the expression to obtain these two different equations: - These two equations tell us that the constant value b0 is the best prediction of salary for women, and the constant value (b0 + b1) is the best prediction of salary for men. This implies that b0 = mean salary for women, b0 + b1 = mean salary for men, and b1 = the difference in mean salary between the male and female groups.
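The claim that b0 and b0 + b1 reproduce the two group means is easy to verify with simulated data; the salary figures below are invented and the coding (0 = women, 1 = men) simply mirrors the example in the text.

```python
# Sketch: regression with a single dummy predictor reproduces the group means.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
sex = np.repeat([0, 1], 50)                                    # 0 = women, 1 = men
salary = np.where(sex == 0, 42000, 47000) + rng.normal(0, 3000, size=100)

fit = sm.OLS(salary, sm.add_constant(sex)).fit()
b0, b1 = fit.params
print(b0, salary[sex == 0].mean())         # b0 equals the women's mean salary
print(b0 + b1, salary[sex == 1].mean())    # b0 + b1 equals the men's mean salary
```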

CBL Chapter 11 - Random Sampling - 1

- In random sampling, a random mechanism, at some point during the process, is used to obtain a sample intended to be representative of the underlying population. - Random sampling is also known as probability sampling, because the probabilities of a member of a population being included in the sample may be determined (at some point in the process), although probabilities of member selection may or may not be equal. - In nonrandom sampling, a nonrandom mechanism is used to obtain a sample from a population. Samples are gathered based on convenience, by snowball sampling or quota sampling approaches. - Also known as nonprobability sampling, the probability that members are selected from a population cannot be determined, usually because a sampling frame listing of potential respondents or locations is unavailable. - Random sampling approaches generally produce higher external validity than nonrandom approaches. - Random sampling is undertaken in the service of the fundamental goals of efficiency and economy. Efficiency refers to the attempt to balance considerations of cost with those of precision. One of the central preoccupations of many sampling approaches is to devise ways by which the precision of estimates can be enhanced without resorting to samples of unmanageable size, and to provide sample estimates of population values of high precision. - Nonrandom sampling approaches have been developed in the service of economy, and are undertaken not to enhance the precision/cost ratio, but rather to reduce the expenses involved in sampling and data collection.

CBL Chapter 11 - Survey Studies Design and Sampling - 1

- In this chapter, the term survey refers to the process of polling, or surveying, some group of respondents with respect to topics of interest to the researcher—their attitudes, perceptions, intentions, behaviors, and so on. - Therefore, a central concern in survey contexts is, "How well do the responses of a subset of individuals actually represent those of that population?" Generally, we are less concerned with issues of internal validity (i.e., is an experimental manipulation responsible for the obtained findings?), because surveys are primarily used for descriptive or non-experimental research studies. - Sampling participants is different from, and perhaps more fundamental than, assigning participants. Sampling is not concerned with the rules that govern the placement (assignment of participants) into the treatment conditions of a study, but rather with the issue of how those particular people got into the study in the first place. - A random sampling procedure is necessary to achieve high external validity for generalization of results back to the population of interest, and random assignment of participants to different conditions is necessary to achieve high internal validity for making causal inferences. - A census is an investigation involving all the potential units that could be included from the target population (the universe of interest). In comparison, a sample refers to a subset of eligible units in a population. - In most research, a sample is preferred to the use of the complete set of possible units because, within some reasonable degree of sampling error, the sample will approximate the results that would have been obtained had a complete census been taken. A sample is collected at a fraction of the cost associated with a complete enumeration of all population units - A census does not necessarily imply a massive number of all units, such as every person living in a particular nation or even on earth, but the intended population to which the researcher wishes to generalize the survey results - An estimate is a statistical value computed from members of a sample. It serves as an inference, with some degree of sampling error, of a population value - The precision of an estimate is inversely related to the degree of sampling error (standard error), or the expected typical discrepancy between the estimate calculated from a sample and the value that would be obtained if the entire targeted population (census) had been included in the study - Sampling error represents the discrepancy expected if many samples were randomly drawn from the same targeted population. - Obviously, if all the population members were included, the concept of sampling error would not be applicable. Sampling error, which concerns the extent that a sample deviates from that of the population, should not be confused with measurement error. - Estimates beyond a reasonable degree of sampling error suggest an unrepresentative, or biased, sample, which is unlikely to provide a precise sample estimate of the population value. - This is known as the sampling frame, a listing of the population of interest from which members of a population are drawn and used as the sample for a study

CBL Chapter 16 - Other measures of automaticity

- Most automatic responding is presumed to occur early in information processing; given additional time and cognitive effort, some automatic processes may be overridden or corrected by more thoughtful, deliberative cognitive processing - SOA - Capacity for conscious processing can be limited by various techniques for creating cognitive overload, either presenting a lot of information very rapidly (e.g., Bargh & Thein, 1985), or occupying cognitive resources with a secondary task - A cognitive busyness manipulation might require participants to hold an eight-digit number in memory (distractor task) while they are engaging in a judgment task or while the stimulus information is being presented - Responses that are automatically elicited can be expected to interfere with production of other responses that are incompatible with them. - The Stroop effect demonstrates that performance of an instructed response can be inhibited when an incompatible automatic response is elicited by the stimulus. Thus the occurrence of such interference can be interpreted as an indication of automatic processing at work. - As another use of response interference as a measure of automatic processes, the implicit association test (IAT) assesses automatic associations between mental concepts by classifying and sorting items as quickly and accurately as possible into different categories - That is, people with more implicit racial biases are expected to respond more quickly to trials consisting of response pairings that are consistent rather than inconsistent with racial stereotypes. - Thus, the difference in speed of responding to the congruent over the incongruent trials is a measure of degree of implicit association between concepts represented in a person's memory.
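The congruent-versus-incongruent comparison at the heart of these measures can be sketched as a simple mean difference in response times; the numbers are simulated, and actual IAT scoring (e.g., the D score) involves additional steps not shown here.

```python
# Rough sketch of the response-interference logic: slower responding on
# incongruent pairings than on congruent pairings indicates an implicit association.
import numpy as np

rng = np.random.default_rng(3)
congruent_rt = rng.normal(650, 80, size=60)      # response times (ms), association-consistent pairings
incongruent_rt = rng.normal(730, 90, size=60)    # response times (ms), association-inconsistent pairings

interference = incongruent_rt.mean() - congruent_rt.mean()
print(f"mean slowdown on incongruent trials: {interference:.1f} ms")
```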

CBL Chapter 9 - Nonexperimental research - 4

- Multiple regression is an extension of the Pearson correlation, to estimate the relationships of multiple predictors to a criterion. - For such an endeavor, the most commonly used analytic technique is multiple regression, in which a weighted combination of predictor variables is used to estimate predicted outcome values on the criterion variable, derived from a multiple regression equation. - This is very similar to the regression equation for the Pearson correlation, except that the equation for multiple regression allows for more than one predictor. Let's say X1 indicates the number of years of education, X2 is socioeconomic status, X3 is openness to experience, and Y′ is the predicted number of cross-race friends. The constant (a) represents the predicted number of friends if a person scored a zero on all three predictor variables. The remaining parts of the equation are the weights for each predictor variable: b is the weight for X1, c is the weight for X2, and d is the weight for X3. A predictor with a larger weight contributes more to the prediction than one with a smaller weight. This reflects the degree of change in the criterion variable that can be expected from a change in the specific predictor variable. - The proportion of variation accounted for by the set of predictors on the criterion is noted by the multiple regression version of the coefficient of determination, otherwise known as multiple R2. - The proportion of variance in the outcome unexplained by the set of predictors is the predictive error: E = 1 - R2 - A larger multiple R2 is desired because it indicates that the set of predictors collectively explains more of the variability in the criterion. - Because the weights assigned to each predictor in the multiple regression formula are calculated to maximize prediction using data from a specific sample of participants, generalizing the resulting equation to a new sample inevitably tends to produce a lower R2 because the original equation is to some degree less applicable to a different sample. This is so because the regression analysis proceeds on the assumption that the sample data are free from measurement error.
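A hedged sketch of the three-predictor equation Y′ = a + bX1 + cX2 + dX3 follows; the data are simulated, and the variable labels (education, SES, openness, friends) are only placeholders for the text's example.

```python
# Sketch: fit a three-predictor multiple regression, then report multiple R^2
# and the predictive error 1 - R^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 150
education = rng.normal(14, 2, n)      # X1
ses = rng.normal(50, 10, n)           # X2
openness = rng.normal(3.5, 0.5, n)    # X3
friends = 0.4 * education + 0.05 * ses + 1.2 * openness + rng.normal(0, 1.5, n)   # criterion

fit = sm.OLS(friends, sm.add_constant(np.column_stack([education, ses, openness]))).fit()
a, b, c, d = fit.params               # constant and the three predictor weights
print("R^2 =", round(fit.rsquared, 3), "| predictive error =", round(1 - fit.rsquared, 3))
```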

CBL Chapter 19 - Meta-analysis - 5

- On the other hand, the more dispersed the effect sizes of the primary studies over and above sampling error, the greater the heterogeneity. Heterogeneity of effect sizes is attributed to study variations in methodology and measurements in examining the same research hypothesis. - However, if there is dispersion in effect sizes among the studies above and beyond that expected from sampling error, pointing to a random-effects model, the results cannot be regarded as homogeneous across the compiled primary studies. Heterogeneity in the strength and direction of effect sizes may provide important insights into potential study-level moderators. Moderators should be tested to determine what qualifies or changes the strength of the hypothesized A to B summary effect. Ideally, potential moderator variables should be determined in advance, on the basis of theory or empirically based hunches, but even post hoc moderators can prove enlightening. - When the effect sizes are heterogeneous, the meta-analysis should code for and evaluate the possibility of study-level moderators - It also is true that more broadly defined constructs are more likely to reveal meaningful differences among studies that are systematically different in terms of effect size—that is, the more likely we are to discover study-level moderators that affect the strength of the hypothesized relationship - A broadly defined construct will require that we break studies down by theoretically relevant levels of sub-studies to evaluate potential moderators. Meta-analysis is not useful for synthesizing small sets of studies that differ widely in methods and operationalizations of constructs - Study-level moderators may be classified into three varieties. Methodological moderators include methods factors, such as whether the study was conducted in a field or lab setting, whether self-report or behavioral measures were used, and whether the experimental design involved within- or between-group comparisons. - Demographic moderators coded at the study level include the sample's average age (young, middle-aged, or older adults), socioeconomic status (wealthy or poor), and type of sample (undergraduate students or a representative sample from the population). - Theoretical moderators involve studies that used different operational definitions to define the same construct, and the context under which the relationship was assessed

CBL Chapter 19 - Meta-analysis - 4

- Once effect sizes are calculated for all primary studies, they are aggregated. This produces a summary effect, an overall effect size across the primary studies, that may be estimated using an unweighted, fixed-effect, or random-effects model. - In an unweighted model, the effect sizes of primary studies are mathematically averaged, without taking into consideration the sample size of each study, to calculate the summary effect. - Generally, this is not a justifiable practice, given that primary studies with larger samples contain more representative samples and yield more stable effect size estimates. - A weighted technique, therefore, is recommended for aggregating effect sizes in meta-analyses - In a fixed-effect model, the effect sizes of primary studies with larger sample sizes are weighted more heavily to calculate the summary effect. It is most appropriate if the distribution of effect sizes is presumed to be homogeneous - because a study with a very large sample size relative to that of the other studies will overpower estimation of the summary effect. A fixed-effect model assumes that a single "fixed population" of studies, addressing the same hypothesis, underlies the effect size distribution. Consequently, the major disadvantage of a fixed-effect model is that the distribution of effect sizes is expected to be largely homogeneous, with the meta-analysis findings only generalizable to the primary studies that were part of the synthesis. - In a random-effects model, effect sizes of primary studies are weighted as a compromise between sample size and number of primary studies to calculate the summary effect. It is the most appropriate estimation technique if the distribution of effect sizes is presumed to be heterogeneous. - The random-effects model assumes many "random populations" of studies addressing the same hypothesis, which are represented in the distribution of effect sizes. Because the effect sizes are expected to be heterogeneous, the meta-analytic findings may be generalized to the population of studies that were not even collected as part of the current meta-analysis. This is the primary advantage of the random-effects model. In theory and in practice, random-effects models should be the preferred choice over fixed-effect and unweighted models - When effect sizes are approximately similar in magnitude, primarily differing in sampling error, the distribution is said to be homogeneous. If this was indeed found, then the meta-analyst should compute the summary effect using a fixed-effect model. - The extent of effect size dispersion in a meta-analysis is visually presented as a stem and leaf display.
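One common way to implement the "larger samples weigh more" idea in a fixed-effect model is inverse-variance weighting, sketched below with invented effect sizes and variances; this is a generic formula, not the chapter's worked example.

```python
# Sketch of a fixed-effect summary: weight each study's effect size by the
# inverse of its sampling variance, so larger (more precise) studies count more.
import numpy as np

d = np.array([0.35, 0.10, 0.52, 0.28, 0.44])     # per-study standardized mean differences (invented)
var = np.array([0.02, 0.05, 0.03, 0.04, 0.02])   # per-study sampling variances (invented)

w = 1 / var
summary_d = np.sum(w * d) / np.sum(w)
se_summary = np.sqrt(1 / np.sum(w))
print(f"fixed-effect summary d = {summary_d:.3f} (SE = {se_summary:.3f})")
```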

Lac - meta-analysis

- The issue arises because different studies are susceptible to different sources of measurement error - Despite the voluminous research on the construct of acculturation (Yoon, Langrehr, & Ong, 2011), varying results make it challenging to conclude whether it confers positive, negative, or no benefits, and especially for which cultural groups and which behaviors - Discrepancies in results, across studies addressing the same theoretical hypothesis, may stem from study to study variations in demographics (e.g., participant race/ethnicity, immigrant status, primary language), methods (e.g., whether culturally sensitive procedures were used), and theory (e.g., how a cultural construct was operationally defined). These heterogeneous sources of irrelevancies obscure and make it difficult to unravel the overall strength of a theoretical connection - By aggregating the empirical results of primary studies, meta-analysis serves as a method to quantify the overall strength of a relationship, as well as determine whether the connection is modified by demographic, methodological, and theoretical features of studies - Researchers are advised to read or perform a meta-analysis on a related research question or hypothesis prior to pursuing an empirical primary investigation. Doing so takes advantage of the preexisting knowledge base to consolidate and foster deep understanding of the literature. - The furnished information may profitably guide the development and refinement of primary studies and hypotheses.

CBL Chapter 19 - Meta-analysis - 6

- Once potential moderator variables have been proposed, each study in the analysis is coded on each of these characteristics (i.e., whether the characteristic is present or absent in the operations or procedures, or the level that was present). The effects of the presence or absence of these moderators, or their level, are then examined to determine whether the variable can explain the heterogeneity among the effect sizes of the different studies that constitute the meta-analysis. - The effect of the moderator on the hypothesized A to B relationship can be tested in one of two ways. The first technique involves dividing the total set of studies into subsets that differ on the characteristic in question, to determine whether the summary effects computed within the subsets differ significantly from each other (Rosenthal & Rubin, 1982). - To statistically determine whether gender moderated the overall summary effect, we compute the summary effect for the three studies of female participants and compare this effect size to the summary effect for the three studies of male participants. Analyses show that the studies containing females revealed a weak connection between attractiveness and perceptions of liking (d = .03, ns), but this link was relatively strong in the male samples (d = .60, p < .05) - A test of contrast between these values reveals that the attraction to liking relationship is significantly stronger in studies that contained male participants. This result implicates gender as a moderator in helping to explicate some of the heterogeneity across studies. - A second method for assessing the contribution of a potential moderator variable to account for the variability in effect magnitude across studies is to enter the coded moderator variable into a meta-correlation or meta-regression, using each study's effect size as the dependent measure - Meta-regression is most appropriate when the moderator variable under consideration is defined quantitatively. - However, if one correlates this moderator with the effect size, it will yield a positive meta-correlation, significantly larger than zero (Chapter 9 discusses the interpretation of correlations). This suggests that there is a moderating effect of age of sample that differs across studies and tends to increase as the age of participants in a sample gets older. - Interpreting the meaning of the dispersion of effects in a meta-analysis is in part determined by the pattern of results obtained. If the summary effect is small and not statistically significant, it is important to determine that the null finding is not a result of a heterogeneous data set with positive and negative effect sizes canceling each other. If we have highly disparate effect sizes in the model, it is important to determine whether a study-level moderator might help to statistically explain the varying distribution of effect size across investigations. - Finding potential moderators should be facilitated by theory regarding the variables involved in the hypothesized relation. - However, sometimes a moderator might be suggested only after examining the pattern of effects. It should be recognized that serendipitous findings of this type are post hoc, and should be interpreted accordingly in a tentative fashion.
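The meta-regression idea can be sketched as a weighted regression of study effect sizes on a coded moderator; the six effect sizes below are invented to mimic the gender example, and dedicated meta-analysis software adds refinements beyond this.

```python
# Sketch: regress study-level effect sizes on a coded moderator, weighting each
# study by the inverse of its sampling variance (a simple meta-regression).
import numpy as np
import statsmodels.api as sm

d = np.array([0.03, 0.05, 0.01, 0.55, 0.62, 0.63])      # study effect sizes (invented)
var = np.array([0.04, 0.03, 0.05, 0.04, 0.03, 0.05])    # study sampling variances (invented)
male_sample = np.array([0, 0, 0, 1, 1, 1])              # coded moderator: 1 = male sample

fit = sm.WLS(d, sm.add_constant(male_sample), weights=1 / var).fit()
print(fit.params)    # the slope estimates how much stronger the effect is in male samples
```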

CBL Chapter 9 - Nonexperimental research - 2

- One possible decision rule is to use a median split, a process in which participants are divided at the "middlemost" score, with 50% of participants above the median termed the "high group," and the 50% below the median the "low group." - Problems of this type, in which a continuous variable representing a wide range of scores is forced to be categorical, could be avoided in correlational studies. The correlational approach avoids losing the detailed levels in the variable and enhances the potential for accurately assessing the extent of covariation between measures - The most commonly computed correlation—the Pearson product-moment correlation coefficient—is used to determine the extent of linear relationship between two variables, that is, the extent that variation in one measure is accompanied consistently by unidirectional variation in the other - This highlights the importance of representing the relationship between two variables graphically before computing the correlation coefficient. - Only when the relationship between two measures is essentially linear does the Pearson correlation coefficient accurately assess the degree of relationship. The correlation coefficient (r) will indicate the magnitude and direction of a linear relationship. The coefficient r may vary from -1.00 to +1.00, with the sign signifying the direction of relationship (positive vs. negative). A coefficient of .00 indicates that no linear relationship exists between these two measures. An r of +1.00 represents a perfect positive linear relationship. - The coefficient of determination, or squared value of the Pearson correlation (r2), represents the proportion of variance shared between two variables - A higher value indicates a greater amount of variation in one measure that is accounted for by variation in the other - When r2 = 1.00, the proportion of common variation is 100%. This indicates that if the X variable were "held constant" (i.e., only participants with the same score on the X measure were considered), variation in Y would be eliminated (i.e., they all would have the same Y score). - With correlations that are less than perfect, the value of r2 indicates the proportion by which variation in Y would be reduced if X were held constant, or vice versa. For example, a Pearson r = .60 would denote that 36% of the spread in scores of the Y variable could be explained or accounted for by the scores on the X variable, so that if X were held constant, variation in Y scores would be reduced by 36%. - It should be emphasized that the existence of shared variance between two variables does not, by itself, establish a causal relationship between them.
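A quick sketch of computing r and r2 on simulated data (inspecting a scatterplot first, as the text advises, is omitted here for brevity):

```python
# Sketch: compute the Pearson correlation and the coefficient of determination.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 100)
y = 0.6 * x + rng.normal(0, 0.8, 100)     # simulated linear relationship

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")   # r^2 = proportion of shared variance
```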

CBL Chapter 9 - Nonexperimental research - 1

- Rather, the research question becomes focused on whether the variables are in some way related to or associated with one another. The type of analysis thus becomes nonexperimental or correlational. By correlational, we are not necessarily referring to the use of simple bivariate correlation, but rather a set of analysis techniques such as multiple regression, multi-level modeling, and structural equation modeling, all of which are used to examine relationships among variables in nonexperimental research - Nonexperimental and correlational are terms that are used interchangeably. - In a nonexperimental or correlational research design, the variables are allowed to vary freely, and the researcher observes and records the extent of their covariation—that is, the extent that changes in one variable are associated with (but not necessarily caused by) changes in the other, or vice versa. - In some experiments, researchers first sort individuals into different groups on the basis of personal characteristics (sex, age, etc.) or responses made on some measuring instrument (e.g., high vs. low extraversion scores), and then introduce an experimental manipulation to study the effects of both variables on the dependent variable. Studies of this type are called a "mixed factorial design," as they investigate both an experimental manipulation of treatment conditions and a nonexperimental variable based on participant characteristics. - The major advantage of nonexperimental or correlational research is that it allows both variables of interest to vary freely, so that the degree of relationship between them can be determined

Warner II - Chapter 9 - Mediation

- Sometimes "causal" language is used in this chapter. As the variables in the examples are nonexperimental, noncausal language (e.g., "predicted," "related to," "associated with," "contributed to," or "explained") should always be used in reports. - Quantitative or binary - Hypothesized causes must occur earlier in time than hypothesized outcomes - Important to consider time lag between measures - if too brief, effect of X1 may not be apparent yet when Y measured - if too long, effects of X1 may have worn off - Sobel test - Another method to assess the significance of mediation is to examine the product of the a, b coefficients for the mediated path. - It is possible for moderation (described in another chapter) to co-occur with mediation in two different ways. Mediated moderation occurs when two initial causal variables (let's call these variables A and B) have an interaction (A × B), and the effects of this interaction involve a mediating variable. In this situation, A, B, and the A × B interaction are included as initial causal variables, and the mediation analysis is conducted to assess the degree to which a potential mediating variable explains the impact of the A × B interaction on the outcome variable. - Moderated mediation occurs when you have two different groups (e.g., men and women), and the strength or signs of the paths in a mediation model for the same set of variables differ across these two groups. - First, it is now generally agreed that bootstrapping is the preferred method to test the statistical significance of indirect effects in mediated models; bootstrapping may be more robust to violations of assumptions of normality. Second, once a student has learned to use Amos (or other SEM programs) to test simple mediation models similar to the example in this chapter, the program can be used to add additional predictor and/or mediator variables.
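A bare-bones sketch of bootstrapping the indirect effect a*b in a simple X → M → Y model follows; the data are simulated, and dedicated tools (e.g., Amos, PROCESS) add refinements beyond this core resampling logic.

```python
# Sketch: bootstrap confidence interval for the indirect effect a*b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
x = rng.normal(0, 1, n)
m = 0.5 * x + rng.normal(0, 1, n)                  # mediator
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)        # outcome

boot_ab = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                                              # resample cases with replacement
    a = sm.OLS(m[idx], sm.add_constant(x[idx])).fit().params[1]              # a path: X -> M
    b = sm.OLS(y[idx], sm.add_constant(np.column_stack([m[idx], x[idx]]))).fit().params[1]  # b path: M -> Y, controlling X
    boot_ab.append(a * b)

lo, hi = np.percentile(boot_ab, [2.5, 97.5])
print(f"95% bootstrap CI for a*b: [{lo:.3f}, {hi:.3f}]")   # a CI excluding 0 suggests mediation
```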

Warner II - Chapter 5 - Multiple Regression - 2

- Standard or simultaneous or direct regression: In this type of regression analysis, only one regression equation is estimated, all the Xi predictor variables are added at the same time, and the predictive usefulness of each Xi predictor is assessed while statistically controlling for any linear association of Xi with all other predictor variables in the equation. - Sequential or hierarchical regression (user-determined order of entry): In this type of regression analysis, the data analyst decides on an order of entry for the predictor variables on the basis of some theoretical rationale. A series of regression equations are estimated. In each step, either one Xi predictor variable or a set of several Xi predictor variables are added to the regression equation. - Statistical regression (data-driven order of entry): In this type of regression analysis, the order of entry of predictor variables is determined by statistical criteria. In Step 1, the single predictor variable that has the largest squared correlation with Y is entered into the equation; in each subsequent step, the variable that is entered into the equation is the one that produces the largest possible increase in the magnitude of R2. - When all of the predictor variables are entered into the analysis at the same time (in one step), this corresponds to standard multiple regression - In a statistical regression, the order of entry for predictor variables is based on statistical criteria. SPSS offers several different options for statistical regression. - Thus, in an SPSS stepwise statistical regression, variables are added in each step, but variables can also be dropped from the model if they are no longer significant - Sequential or statistical regression essentially makes an arbitrary decision to give the variable that entered in an earlier step (X1) credit for variance that could be explained just as well by variables that entered in later steps (X2 or X3). The decision about order of entry is sometimes arbitrary. Unless there are strong theoretical justifications, or the variables were measured at different points in time, it can be difficult to defend the decision to enter a particular predictor in an early step. - It makes sense, in general, to include "control," "nuisance," or "competing" variables in the sequential regression in early steps and to include the predictors the researcher wants to subject to the most stringent test and to make the strongest case for in later steps. - The use of statistical regression is not recommended under any circumstances; this is a data-fishing technique that produces the largest R2 possible from the minimum number of predictors, but it is likely to capitalize on chance, to result in a model that makes little sense, and to include predictors whose significance is due to Type I error. - A tolerance of 0 means a predictor (e.g., X4) contains no additional variance or information that is not already present in the other predictors (X1 through X3), and therefore X4 cannot provide any new predictive information. A tolerance of 1 is the maximum, meaning the predictor is completely uncorrelated with the other set of predictors.
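The tolerance values described in the last bullet can be computed directly: regress one predictor on the others and take 1 minus that R2. The sketch below uses simulated predictors X1-X4, chosen so that X4 is partly redundant.

```python
# Sketch: tolerance for X4 = 1 - R^2 from regressing X4 on X1, X2, and X3.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x1, x2, x3 = rng.normal(size=(3, n))
x4 = 0.7 * x1 + 0.2 * x2 + rng.normal(0, 0.5, n)    # partly redundant with X1 and X2

r2 = sm.OLS(x4, sm.add_constant(np.column_stack([x1, x2, x3]))).fit().rsquared
print(f"tolerance for X4 = {1 - r2:.2f}")           # near 0 = redundant, near 1 = uncorrelated
```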

Baron and Kenny - Framework for combining

- Step 1. The Step 1 regression is illustrated in Figure 1. This step is a simple 2 × 2 ANOVA on the outcome variable. If C has a significant effect on O, then control may be a mediating variable of the stressor effect on the outcome. If S affects O, then it is sensible to evaluate the mediating effects of perceived control. - Step 2. The Step 2 regressions are illustrated in Figure 4. In this step, two equations are estimated. First, P is regressed on C, S, and CS. This can be more easily accomplished by a 2 × 2 ANOVA. Second, O is regressed on C, S, P, and CS. For P to mediate the S to O relation, S must affect P and P must affect O. If there is complete mediation, then S does not affect O when P is controlled. - The stressor moderates the effectiveness of the manipulation. The final Step 2 path is the one from CS to O. Let us assume that CS affects O in the Step 1 regression, and in Step 2 CS has a weaker effect on O. Then the interpretation is that P has mediated the CS effect on O. We have what might be termed mediated moderation. Mediated moderation would be indicated by CS affecting O in Step 1, and in Step 2 CS affecting P and P affecting O. - Step 3. In this step, one equation is estimated. The variable O is regressed on C, S, P, CS, and PS. This equation is identical to the second Step 2 equation, but the PS term has been added. The key question is the extent to which the CS effect on O is reduced in moving from Step 2 to Step 3. - There are then two ways in which the CS effect on O can be explained. It can be explained by P because the control manipulation is differentially affecting perceived control for the levels of the stressor. Or, the CS interaction can be funnelled through the PS interaction. The former explanation would change what was a moderator effect into a mediator effect, and the latter would keep the moderator explanation but enhance the meaning of the moderator construct. - moderated mediation (James & Brett, 1984). That is, the mediational effects of P vary across the levels of C. - Given the present status of the evidence, it appears much easier to support the claim that control moderates, as opposed to mediates, the density-crowding relation. Such an interpretation would leave open the possibility that other factors, such as an arousal-labeling or an arousal-amplification mechanism, mediate the effects of density

Warner II - Chapter 4 - Regression Analysis

- The b1 and b2 regression coefficients in Equation 4.1 are partial slopes. That is, b1 represents the number of units of change in Y that are predicted for each one-unit increase in X1 when X2 is statistically controlled or partialled out of X1. - How well does the entire set of predictor variables (X1 and X2 together) predict Y? Both a statistical significance test and an effect size are provided. How much does each individual predictor variable (X1 alone, X2 alone) contribute to prediction of Y? Each predictor variable has a significance test to evaluate whether its b slope coefficient differs significantly from zero, along with effect size information (i.e., the percentage of variance in Y that can be predicted by X1 alone, controlling for X2, and the percentage of variance in Y that can be predicted by X2 alone, controlling for X1). - The effect size for the overall model—that is, the proportion of variance in Y that is predictable from X1 and X2 combined—is estimated by computation of an R2. - However, R2 (as described in this chapter) is an index of the strength of linear relationship. - The researcher should have some theoretical rationale for the choice of independent variables. - Selection of predictor variables on the basis of "data fishing"—that is, choosing predictors because they happen to have high correlations with the Y outcome variable in the sample of data in hand—is not recommended. Regression analyses that are set up in this way are likely to report "significant" predictive relationships that are instances of Type I error. It is preferable to base the choice of predictor variables on past research and theory rather than on sizes of correlations.

CBL Chapter 9 - Nonexperimental research - 3

- The best fitting straight line drawn through this set of points is called a regression line (see Figure 9.2). The regression line in the scatterplot is derived from a formula for predicting the score on one measure (Y, which is the criterion variable) on the basis of a score on another measure (X, which is the predictor variable). - Perfect prediction of the Y′ score stemming from knowledge of a participant's X score only occurs if the correlation is exactly +1.00 or -1.00. The degree of variation, defined as the vertical distances of observed scatterplot points from the linear regression line, is known as the residual or prediction error. - The extent of error arising from inaccuracy of prediction of one variable using the other may be computed simply: Error = 1 - r2. For example, a relatively strong correlation (e.g., r = .80) will have a small amount of error, and participant data points will be closer to the regression line; a weak correlation (e.g., r = .10), on the other hand, will have considerable error, and data points will be scattered farther away from the regression line. Thus the higher the correlation coefficient (r), the less variation in observed scores vertically around the regression line (i.e., the tighter the fit between the actual Y values and their predicted Y′), and the greater the accuracy of the linear prediction. - With the Pearson correlation, the choice of "predictor" and "criterion" is often arbitrary, except when the predictor variable is one that temporally precedes the criterion variable. - If the results of a Pearson correlation indicate an approximately .00 linear relationship between two measures, there are four potential explanations for this result. First, and most simply, there may be no systematic relationship between the two variables. This would be expected, for example, if one assessed the correlation between measures of shoe sizes and scores on a research methods exam. - The second possibility, which already has been illustrated, is that there is some systematic relationship between the two variables, but the relationship is essentially nonlinear. - The third possibility is that one or both of the measures involved in the correlation is flawed or unreliable. - Thus, such a variable contains only a small reliable proportion of the shared variance, and a large proportion of the measurement is contaminated by measurement error - The fourth possibility is that a very low correlation value may be an artifact of limitations of measurement. The size of the correlation between any two variables will be automatically attenuated (diminished) if the range of scores on either or both measures is restricted or truncated. - A truncated measure, for example, might arise if a self-esteem scale is administered to a sample consisting of people suffering from clinical depression.

CBL Chapter 16 - Measures of attention

- The information-processing model assumes that attention is a limited resource that is selectively distributed among the myriad of visual, auditory, and other sensory stimuli that bombard us at any point in time. - Attention serves as a selective filter of the voluminous information that comes to us from the external world: We must first pay attention to information before it can be passed on - It is further assumed that the particular stimuli that capture and hold a person's attention are those that are most salient or important to the perceiver at the time. - Thus, by measuring which inputs a person attends to when multiple stimuli are available, or measuring how long the person attends to particular stimuli compared to others, we have an indirect way of assessing what is important, interesting, or salient to that individual. - The majority of research on attention (and hence, methods for assessing attention) focuses on the perception of visual information—either of actual events or displays of pictures, words, or other symbols. - Measures of visual attention involve tracking direction and duration of eye gaze, i.e., when and how long the perceiver's eyes are fixated on a particular object or event presented in the visual field - Eye gaze proved to be a sensitive measure of selective allocation of visual attention and preference. - The well-known "Stroop effect" (Stroop, 1935) is an example of an interference-based measure of unintended attention. In this research paradigm, participants are presented with a series of words printed in various colors, and are instructed to not read the word but to identify aloud the color of ink in which each is printed. The relationship between the word and the ink color is varied. - In congruent trials, each written word corresponds to the ink color (e.g., the word "green" printed in green ink), so identification of ink color should be facilitated. In incongruent trials, the written word itself is a color different from the printed ink color (e.g., the word "red" printed in green ink), so the semantic meaning of the word interferes with the participant's ability to identify the ink color correctly - This response interference is an indication that attending to the semantic meaning of the word is automatic and cannot be suppressed, even when instructed to identify the ink color instead. - The Stroop effect has been adapted as a general measure of automatic attention to semantic content - Essentially, the conflicting processes of reading and shape identification simultaneously competed for our attention. The additional amount of time it requires to complete the incongruent condition over the congruent condition indicates the extent of interference. - A third method for assessing how important or significant particular information is to the perceiver is to measure the amount of time the individual spends viewing or contemplating the information before making a decision or judgment or moving on to another processing task. - Duration of eye gaze is one measure of processing time, but more often this method is used when information is presented sequentially, as, for example, images are displayed successively or successive photographs or text are projected on a computer screen. If the participant is given control over the movement from one screen to the next, the amount of time spent viewing a particular screen provides a measure of processing time for that information - Nonetheless, the relative amount of time spent viewing each piece of information in a sequence often may provide useful information about the types of information that attract the most attention and cognitive effort.

Multiple Regression: 3 Major types

1) standard regression - AKA simultaneous (or single block) regression - hypothesis-based (researcher decides) - researcher enters all predictors simultaneously (includes mediation and moderation analysis) - single block of predictors - to explain outcome - specific hypothesis about how predictors relate to outcome
2) exploratory regression - AKA stepwise (or data-driven or statistical order) regression - exploratory based (software automatically decides) - software decides on the predictors to enter - software searches through data to find predictors significantly related to outcome
3) hierarchical regression - AKA sequential regression - hypothesis based (researcher decides) - researcher enters 2 or more blocks of predictors: assess incremental contribution of each block - sequentially added into model - researcher specifies which predictors to enter
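A minimal sketch of the hierarchical (type 3) approach: enter a block of control variables first, then add the focal predictor and look at the change in R2. Variable roles and data below are invented for illustration.

```python
# Sketch: hierarchical regression as two nested models; the R^2 change shows
# the incremental contribution of the second block.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 250
age, income = rng.normal(size=(2, n))                   # Block 1: control variables
focal = 0.3 * age + rng.normal(0, 1, n)                 # Block 2: focal predictor
outcome = 0.2 * age + 0.1 * income + 0.5 * focal + rng.normal(0, 1, n)

block1 = sm.OLS(outcome, sm.add_constant(np.column_stack([age, income]))).fit()
block2 = sm.OLS(outcome, sm.add_constant(np.column_stack([age, income, focal]))).fit()
print("R^2 change =", round(block2.rsquared - block1.rsquared, 3))
```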

CBL Chapter 11 - sample size

- The question of sample size must be addressed when developing a survey. How many respondents must we sample to attain reasonable precision when estimating population values? - As in all other areas of survey sampling, the decision regarding the size of the sample involves trade-offs, most of which concern the complementary issues of cost and precision. - What matters most is the absolute size of the sample, rather than the size of the sample relative to the size of the population. A moment's reflection on the formula for the standard error of a simple random sample will show why this is so. This formula demonstrates that the size of the sample, not the sampling fraction, determines precision. - The first decision that must be made in determining the survey sample size concerns the amount of error we are willing to tolerate. The greater the precision desired, the larger the sample needed. Thus, if we wish to obtain extremely precise findings—results that will estimate underlying population values with a high degree of accuracy—it must be understood that we will need to sample more respondents. - A word of caution is required at this point. The calculations of the size of a sample necessary to estimate a population parameter at a given level of precision will hold only if the sampling procedure itself is properly executed. - Attributable at least in part to the Literary Digest fiasco is the problem of dropouts or nonrespondents. Because sampling theory is based on probability theory, the mathematics that underlies sampling inference assumes perfect response rates. Otherwise, sampling weights must be derived and applied to the estimates - Weights are applied to estimates to undo this nonrepresentative sampling. After the sample data are collected, weights are statistically derived to take the sampling design into account and to adjust for any remaining imbalances in proportions of various subgroups between the sample and targeted population. The most common type of weighting variable is called a general sampling weight, which is used to calibrate the demographic characteristics (e.g., age, gender, race, marital status, socioeconomic status) of the sample to a known population. - This demographic adjustment will yield statistical estimates that are more representative of the general population.
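The "quadruple the sample to double the precision" point follows directly from SE = s / sqrt(n); a tiny sketch with an arbitrary standard deviation:

```python
# Sketch: with s held constant, quadrupling n halves the standard error.
import numpy as np

s = 50.0                                    # assumed sample standard deviation
for n in (100, 400, 1600):
    print(n, round(s / np.sqrt(n), 2))      # 5.0, 2.5, 1.25
```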

Warner II - Chapter 4 - Regression Analysis and Statistical Control - 2

- The squared semipartial correlation between X1 and Y, controlling for X2—that is, r2Y(1.2) or sr21—is equivalent to the proportion of the total variance of Y that is predictable from X1 when the variance that is shared with X2 has been partialled out of X1. - In multiple regression analysis, one goal is to obtain a partition of variance for the dependent variable Y (blood pressure) into variance that can be accounted for or predicted by each of the predictor variables, X1 (age) and X2 (weight), taking into account the overlap or correlation between the predictors. - Each circle has a total area of 1 (this represents the total variance of zY, for example). For each pair of variables, such as X1 and Y, the squared correlation between X1 and Y (i.e., r2Y1) corresponds to the proportion of the total variance of Y that overlaps with X1, as shown in Figure 4.3. - The total variance of the outcome variable (such as Y, blood pressure) corresponds to the entire circle in Figure 4.3 with sections that are labeled a, b, c, and d. - Assume that Y is given in z-score units, so that its total variance, or the total area a + b + c + d in this diagram, corresponds to a value of 1.0. - As in earlier examples, overlap between circles that represent different variables corresponds to squared correlation; the total area of overlap between X1 and Y (which corresponds to the sum of Areas a and c) is equal to r21Y, the squared correlation between X1 and Y. One goal of multiple regression is to obtain information about the partition of variance in the outcome variable into the following components. Area d in the diagram corresponds to the proportion of variance in Y that is not predictable from either X1 or X2. Area a in this diagram corresponds to the proportion of variance in Y that is uniquely predictable from X1 (controlling for or partialling out any variance in X1 that is shared with X2). Area b corresponds to the proportion of variance in Y that is uniquely predictable from X2 (controlling for or partialling out any variance in X2 that is shared with the other predictor, X1). Area c corresponds to a proportion of variance in Y that can be predicted by either X1 or X2. We can use results from a multiple regression analysis that predicts Y from X1 and X2 to deduce the proportions of variance that correspond to each of these areas, labeled a, b, c, and d, in this diagram. - We can interpret squared semipartial correlations as information about variance partitioning in regression. We can calculate zero-order correlations among all these variables by running Pearson correlations of X1 with Y, X2 with Y, and X1 with X2. The overall squared zero-order bivariate correlations between X1 and Y and between X2 and Y correspond to the areas that show the total overlap of each predictor variable with Y as follows: - The squared partial correlations and squared semipartial r's can also be expressed in terms of areas in the diagram in Figure 4.3. The squared semipartial correlation between X1 and Y, controlling for X2, corresponds to Area a in Figure 4.3; the squared semipartial correlation sr21 can be interpreted as "the proportion of the total variance of Y that is uniquely predictable from X1." - The squared partial correlation has a somewhat less convenient interpretation; it corresponds to a ratio of areas in the diagram in Figure 4.3.
When a partial correlation is calculated, the variance that is linearly predictable from X2 is removed from the Y outcome variable, and therefore, the proportion of variance that remains in Y after controlling for X2 corresponds to the sum of Areas a and d. - We can reconstruct the total variance of Y, the outcome variable, by summing Areas a, b, c, and d in Figure 4.3. Because Areas a and b correspond to the squared semipartial correlations of X1 and X2 with Y, it is more convenient to report squared semipartial correlations (instead of squared partial correlations) as effect size information for a multiple regression. Area c represents variance that could be explained equally well by either X1 or X2.
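The squared semipartial and partial correlations can be computed from residuals, which makes the Area a versus Area a / (a + d) distinction concrete; the data below are simulated stand-ins for the age/weight/blood pressure example.

```python
# Sketch: sr1^2 correlates Y with the part of X1 independent of X2;
# pr1^2 additionally removes X2 from Y before correlating.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 300
x2 = rng.normal(size=n)                        # e.g., weight (z scores)
x1 = 0.5 * x2 + rng.normal(0, 0.9, n)          # e.g., age, correlated with weight
y = 0.4 * x1 + 0.3 * x2 + rng.normal(0, 1, n)  # e.g., blood pressure

x1_resid = sm.OLS(x1, sm.add_constant(x2)).fit().resid      # X1 with X2 partialled out
y_resid = sm.OLS(y, sm.add_constant(x2)).fit().resid        # Y with X2 partialled out

sr1_sq = np.corrcoef(y, x1_resid)[0, 1] ** 2        # Area a relative to total Y variance
pr1_sq = np.corrcoef(y_resid, x1_resid)[0, 1] ** 2  # Area a relative to Areas a + d
print(round(sr1_sq, 3), round(pr1_sq, 3))
```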

CBL Chapter 11 - Survey Studies Design and Sampling - 2

- Usually, we will not know the true population value (called the population parameter); however, we will be able to estimate the parameter from the sample results - We can estimate the precision of a sample mean by determining the sampling error, or standard error (S.E.), of the mean. A high sampling error reflects low precision, whereas high precision is reflected by a low sampling error. - The fpc is included in the calculation of the precision estimate to reflect the facts that in simple random sampling units are chosen without replacement, and that the population from which the sample is drawn is not infinite (as assumed in standard statistical theory). The fpc formula indicates that sampling without replacement results in greater precision than sampling with replacement. - This follows logically, because in situations involving a small sampling fraction, the likelihood of selecting the same respondent more than once (when sampling with replacement) is minimal, hence the sampling effect on the standard error is minimal. Thus, in practice, with small sampling fractions, the fpc value is negligible and rarely used. - In addition to the fpc, the formulas presented here contain two other clues about factors that influence the precision of an estimate. Notice that the size of the sample has much to do with the sampling error (or lack of precision of the estimate). In fact, the sample size, not the sampling fraction, plays the predominant role in determining precision. The precision formula clearly shows that as sample size increases, the standard error decreases. As a larger sample size is more reflective of the population, increasing the number of respondents will reduce the standard error of an estimate. - This indicates that if researchers wish to double the precision of an estimate (if s remains constant), they would have to quadruple the sample size. - The other important term required to calculate the formula for the standard error is the standard deviation, denoted by the term s, which represents the variability of the individual participant values in the sample. - As population information is usually unknown, the sample estimates are used as an approximation of their respective population values. The larger this sample's standard deviation term, the greater the standard error of the mean. In other words, the more variable the spread of individual values used to compute the sample estimate (in our example, semester textbook costs) of the population, the greater the standard error of the sample mean, and, consequently, the lower the precision of the sample mean. - When values are the same for all units in a sample, there is, by definition, no variation in these values. The sample standard deviation would equal exactly zero, and upon solving the formula the standard error also would be zero. The results of any division of this term (no matter what the sample size, n) also would equal zero. Thus, the more restricted the sample values, the more precise the sample estimate, all other things being equal - Or, to put it another way, the smaller the standard deviation of the sample, the fewer respondents will be needed to obtain greater precision of estimate for the sample mean.
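One common form of the standard error with the finite population correction is SE = (s / sqrt(n)) * sqrt(1 - n/N); the numbers in the sketch are arbitrary and simply show how small the correction is when the sampling fraction is small.

```python
# Sketch: standard error of the mean with and without the fpc.
import numpy as np

s, n, N = 120.0, 400, 20000                 # assumed sample SD, sample size, population size
se = s / np.sqrt(n)
se_fpc = se * np.sqrt(1 - n / N)
print(round(se, 2), round(se_fpc, 2))       # the fpc barely changes the estimate here
```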

CBL Chapter 19 - Meta-analysis - 2

- When many studies of the same relationship or phenomenon have been conducted, it is possible to use meta-analysis not only to compute an overall effect, but also to sort out and identify the sources of variation in results across studies and thereby develop a deeper understanding of an underlying relationship. The particular steps in the meta-analysis may vary from synthesis to synthesis, but the procedures to be followed are the same.
- The first step, as in any other approach to reviewing research literature, requires that we have a good understanding of the scientific studies that have been conducted on the phenomenon chosen for the meta-analysis. Techniques of meta-analysis are most effective when they are focused on a hypothesized relationship between (usually two) variables that can be specified with a high degree of precision.
- A basic requirement is that there be sufficient past research on the relationship of interest.
- In general, a meta-analysis requires a relatively large number of existing studies that all examine the same critical relationship before the method's true power as an integrative force can be realized.
- The importance of specifying a hypothesized relationship clearly becomes apparent in this phase, which entails choosing potential primary studies to be included in the meta-analysis. To choose studies, the researcher must decide on the search criteria to be used to determine the relevance of a particular study.
- Choosing appropriate primary studies for inclusion or exclusion involves two distinct processes. The first requires a tentative specification of inclusion rules; that is, which studies of the critical relationship will be included in the synthesis and which will be excluded. Studies are included if they meet a specific criterion, or set of criteria.
- There is some debate about the inclusion or exclusion of studies on the basis of quality. Some have suggested that studies that do not meet some common methodological standard (e.g., no control group, no manipulation check information, poor reliability of measures, etc.) should be excluded (e.g., Greenwald & Russell, 1991; Kraemer, Gardner, Brooks, & Yesavage, 1998). We believe, however, that meta-analysts generally should not exclude studies that meet substantive inclusion criteria, even those that are methodologically suspect.
- After deciding on the inclusion rules, the researcher must gather every possible study that meets the criteria.
- Another way of using the electronic databases is through descendancy and ascendancy searching for eligible studies (Cooper, 2009). In descendancy searching, an important early and relevant primary study is identified and used as the starting point to search for all chronologically later studies (descendants) that have cited it.
- Conversely, it also is practical to consider ascendancy searching, in which the reference list of a recent and relevant primary study is consulted to perform a backward search to find chronologically earlier relevant studies (ancestors). These located primary studies, in turn, can be used for further backward literature searches, and the process is continued until all relevant studies are found.

Crano - social psych

- With recent advances in technology, functional magnetic resonance imaging (fMRI), with its foundations in positron emission tomography (PET) and MRI scans, allows researchers to examine noninvasively the correlation between neural activity and psychological processes via changes in cerebral blood flow.
- The phrenology movement was an early and later discredited approach that sought to identify cranial regions associated with personality and psychological traits.
- fMRI technology is used to investigate the ways in which the brain is involved in the mediation of social interactions.
- The capacity of experimental methods to shed light on fundamental causal relations has proved an irresistible draw to science, and over the centuries, experimental techniques have been refined and developed on the basis of fundamental experimental/causal logic.
- The experimental method is manifest in any investigation in which some control of extraneous alternative variables (confounds) is deliberately and systematically emplaced by the researcher.
- This broader definition, rather than the modern definition mandating the use of random assignment of subjects to conditions, was the dominant "experimental" methodology in the formative years of psychology.
- physiological link between mind and body.
- "Structuralism," whose primary investigative methodology was self-observation, or introspection (Tichener, 1899a, 1899b). This method was designed to delve into and identify the components of consciousness, and in so doing unearth the elemental mental processes that were the basis of higher thought.
- Introspection was a prime target of the operationalistic and behavioristic counterattacks.
- Owing to the extreme reaction the approach stimulated, introspectionism has gained perhaps a worse reputation than it deserves. Tichener (1901-1905), for example, instituted strong controls in his use of the technique. Of course, one might argue that any attempts at controlling introspection might constrain the very (free) mental processes it was designed to elucidate, and thus, ultimately defeat the intended purpose of the technique. Though introspectionism has been largely discredited, use of subjective reports of mental processes has not disappeared from the realm of mental discovery.
- The think-aloud protocol, which requires respondents to report on their thought processes and the mental steps engaged in while forming a response, is common and useful in the design of instruments.

CBL Chapter 11 - Random Sampling - 4

- With stratified sampling, increased heterogeneity within strata results in a loss of precision—the respondents (or sampling units) within strata ideally should be as similar as possible. With cluster or multistage sampling, however, increased heterogeneity within clusters results in increased precision.
- By this pre-stratification process, we help ensure that clusters from different strata are represented approximately equally in the sample.
- Two-phase sampling. In two-phase (or double) sampling, all respondents (who might have been chosen by any sampling method, such as simple random, cluster, or multistage sampling) complete the basic survey. Then, either concurrently or some time thereafter, additional information is sought from the previously selected respondents. In two-phase sampling, two (or more) surveys are conducted: the basic survey, in which all participate, and the auxiliary survey, which employs a specified subsample of the main sample.
- Perhaps the most important use of two-phase sampling is in developing stratification factors. When the auxiliary data are collected subsequent to (rather than concurrent with) the basic survey information, the initial survey can be used to create stratification factors. Often in two-phase sampling, numerically rare groups that cannot be identified on the basis of readily available information are sought for study. On the basis of the information obtained in the first-phase sample, the sought-after group is identified and usually disproportionately oversampled, with a lesser sampling fraction being used for the remaining population.
- Panel survey: a prespecified sample of respondents (a panel) is surveyed repeatedly over time.
- The purpose of the panel survey is to assess the individual changes that might occur in the knowledge, opinions, or perceptions of the respondent sample over the course of time.

Crano - early methods

- Psychophysics: a foundational feature of research in social psychology.
- Physical stimulation and perception: showing that mental events could be quantified via measurable stimuli.
- The methodology was developed and extended in his creation of psychophysical methods, which helped lay the foundation of modern scale construction.
- In Elements of Psychophysics, he demonstrated the benefits of careful experimentation and attention to the link between physical and mental events, and established the place of psychophysics in modern social psychology.
- This research showed that mental events could indeed be measured, and that they were linked systematically with measurable variations in stimulus intensity.
- Complementary modes of reasoning—abduction, deduction, and induction—that today are acknowledged in practice as necessary components of the scientific enterprise.
- Scientific investigation is initiated by abduction, a guess or hypothesis, followed by deductive inferences drawn from the hypothesis about other relations that must exist if the hypothesis is true.
- This enables the scientist to deduce other testable hypothetical relations.
- Early practitioners of the randomized experiment and a major contributor to modern statistics.
- They used a shuffled deck of playing cards to randomly determine the presentation order of stimuli in a psychometric study (Peirce & Jastrow, 1885). Using each other as participants, the researchers made a series of weight judgments comparing extremely similar weights and stated their confidence in each judgment.
- These results anticipated the implicit associations movement by more than a century. In distinction to Nisbett and Wilson's (1977) "Telling more than we can know," Peirce and Jastrow (1885) might have titled their paper "Knowing more than we can tell."

CBL Chapter 11 - More sampling issues

- Some listing of the population must be available to obtain a representative sample of a population, and this listing is called the sampling frame. In simple random sampling the frame is a listing of all the individual population units (or respondents), whereas in cluster or multistage sampling the initial listing involves the clusters that contain all the population units (or respondents).
- Two types of sampling frames are commonly employed: sampling frames of people and sampling frames of locations. The first of these consists of a listing of all of the individuals of a specified population.
- A sampling frame of locations consists of using a detailed map of a specific physical environment—a city or town, a precinct, ward, or neighborhood. Cluster or multistage sampling is used with location sampling frames. Using maps to define the list of clusters in a sampling frame is common in surveys that seek to cover reasonably wide geographic areas. In general, although location frames are more difficult and demanding to use than survey designs that employ population lists of people as the sampling frame, in many situations no adequate population list is available.

Testing moderation using multiple regression 3 steps

1) z score (a type of centering) the predictor and the moderator
- to prevent multicollinearity problems due to redundancy of each predictor/moderator with the interaction term (and therefore to yield correct beta and p values)
- next, use the z-scored predictor and z-scored moderator to compute the interaction term
2) enter the z-scored predictor and moderator (and the computed interaction) into the regression model (see the sketch after this list)
- verify no multicollinearity problems
- interpret beta and its p value from this output
- if the interaction term is sig., go to the next step
3) if the interaction is sig., graph it using the original (not z-scored) predictor and moderator in the regression model
- unstandardized regression coefficients = B coefficients from the model with the original (not z-scored) variables
- if both predictor and moderator are quantitative, leave the values of the variables blank; for a categorical variable, use the raw value scores for its 2 levels
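A minimal Python sketch of steps 1 and 2 (simulated data and hypothetical variable names, chosen to echo the moderation example used elsewhere in these notes): z-score the predictor and moderator, build the interaction term from the z scores, then check the interaction's beta and p value.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"attitudes": rng.normal(size=n), "conscientiousness": rng.normal(size=n)})
df["drinking"] = (0.6 * df["attitudes"] - 0.1 * df["conscientiousness"]
                  - 0.1 * df["attitudes"] * df["conscientiousness"] + rng.normal(size=n))

# Step 1: z-score predictor and moderator, then compute the interaction term from the z scores
for col in ["attitudes", "conscientiousness"]:
    df["z_" + col] = (df[col] - df[col].mean()) / df[col].std()
df["z_interaction"] = df["z_attitudes"] * df["z_conscientiousness"]

# Step 2: regression with the z-scored terms; a significant interaction term = moderation
model = smf.ols("drinking ~ z_attitudes + z_conscientiousness + z_interaction", data=df).fit()
print(model.summary())   # inspect the coefficient and p value for z_interaction
```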

Warner II - Chapter 5 - Multiple Regression - 1

- The slope for each individual predictor is calculated controlling for all other predictors; thus, in Equation 5.2, b1 represents the predicted change in Y for a one-unit increase in X1, controlling for X2, X3, ..., Xk (i.e., controlling for all other predictor variables included in the regression analysis).
- The beta coefficients in the standard-score version of the regression can be compared across variables to assess which of the predictor variables are more strongly related to the Y outcome variable when all the variables are represented in z-score form.
- We can conduct an overall or omnibus significance test to assess whether the entire set of all k predictor variables significantly predicts scores on Y; we can also test the significance of the slopes, bi, for each individual predictor to assess whether each Xi predictor variable is significantly predictive of Y when all other predictors are statistically controlled.
- Avoid the "kitchen sink" approach to selection of predictors. It is not a good idea to run a regression that includes 10 or 20 predictor variables that happen to be strongly correlated with the outcome variable in the sample data; this approach increases the risk for Type I error. It is preferable to have a rationale for the inclusion of each predictor; each variable should be included (a) because a well-specified theory says it could be a "causal influence" on Y, (b) because it is known to be a useful predictor of Y, or (c) because it is important to control for the specific variable when assessing the predictive usefulness of other variables, because the variable is confounded with or interacts with other variables, for example.
- If an Xi variable that is theorized to be a "cause" of Y fails to account for a significant amount of variance in the Y variable in the regression analysis, this outcome may weaken the researcher's belief that the Xi variable has a causal connection with Y. On the other hand, if an Xi variable that is thought to be "causal" does uniquely predict a significant proportion of variance in Y even when confounded variables or competing causal variables are statistically controlled, this outcome may be interpreted as consistent with the possibility of causality.
- The strongest conclusion a researcher is justified in drawing when a regression analysis is performed on data from a nonexperimental study is that a particular Xi variable is (or is not) significantly predictive of Y when a specific set of other X variables (that represent competing explanations, confounds, sources of measurement bias, or other extraneous variables) is controlled.

Baron and Kenny - Distinctions between moderators and mediators

- To demonstrate mediation one must establish strong relations between (a) the predictor and the mediating variable and (b) the mediating variable and some distal endogenous or criterion variable.
- Mediators represent properties of the person that transform the predictor or input variables in some way.
- In addition, whereas mediator-oriented research is more interested in the mechanism than in the exogenous variable itself (e.g., dissonance and personal-control mediators have been implicated as explaining an almost unending variety of predictors), moderator research typically has a greater interest in the predictor variable per se.
- Moderators are often as theoretically derived as mediators.
- Moderator variables are typically introduced when there is an unexpectedly weak or inconsistent relation between a predictor and a criterion variable (e.g., a relation holds in one setting but not in another, or for one subpopulation but not for another).
- Mediation, on the other hand, is best done in the case of a strong relation between the predictor and the criterion variable.
- A similar point can be made in regard to the current use of moderator variables in personality research. That is, if two variables have equal power as potential moderators of a trait-behavior relation, one should choose the variable that more readily lends itself to a specification of a mediational mechanism.
- Differences in perceived control may be found to mediate the relation between social density and decrements in task performance.
- Thus, at times moderator effects may suggest a mediator to be tested at a more advanced stage of research in a given area. Conversely, mediators may be used to derive interventions to serve applied goals.
- First, the moderator interpretation of the relation between the stressor and control typically entails an experimental manipulation of control as a means of establishing independence between the stressor and control as a feature of the environment separate from the stressor.
- A theory that assigns a mediator role to the control construct, however, is only secondarily concerned with the independent manipulation of control. The most essential feature of the hypothesis is that perceived control is the mechanism through which the stressor affects the outcome variable.
- An independent assessment of perceived control is essential for conceptual reasons, as opposed to methodological reasons as in the moderator case.

Statistical assumptions: multiple regression

1) Normal distribution for quantitative X and Y variables
- skewness index indicates an approximately normal distribution
- no extreme outliers
- cutoff for the skewness index: between +/- 2
2) Homoscedasticity (checks 1 and 2 are illustrated in the sketch after this list)
- for each value of an X, the variances of the Y values are approximately the same (check with a scatterplot)
- a variant of the homogeneity of variances assumption
- if violated, referred to as heteroscedasticity
3) Each X variable is linearly related to the Y variable
- a straight line roughly represents the relation (regression does not detect nonlinear relations)
4) Independence of participants
- each participant provides only one set of scores (for all predictors and the outcome)
5) No "multicollinearity" problems
- predictors must not be highly interrelated with one another
- assumption 5 is often only implied in reports
- most commonly reported: assumptions 1 and 5
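A minimal sketch (simulated data, hypothetical names) of two of the commonly reported checks: skewness of the quantitative variables (aim for roughly -2 to +2) and a residuals-versus-predicted plot to eyeball homoscedasticity.

```python
import numpy as np
from scipy.stats import skew
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = X @ np.array([0.5, 0.3]) + rng.normal(size=300)

# Check 1: skewness of each quantitative variable (predictors and outcome)
print("skewness:", skew(np.column_stack([X, y]), axis=0))   # each value within +/- 2?

# Check 2: residuals vs. predicted scores; a roughly even band suggests homoscedasticity
fit = sm.OLS(y, sm.add_constant(X)).fit()
plt.scatter(fit.fittedvalues, fit.resid, s=10)
plt.axhline(0, color="gray")
plt.xlabel("predicted Y'")
plt.ylabel("residual (Y - Y')")
plt.show()
```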

Regression: sample size requirements

1) rough guideline from many textbooks: a minimum of 10 (preferably 20) N per predictor
- e.g., 5 predictors should have a minimum of 50 participants
2) published journal articles: N >= 200
3) perform a "power analysis" using software to determine the minimum N required (see the sketch after this list)
- G*Power (free software) can derive minimum sample size requirements for various statistical techniques (regression, ANOVA, t test)
- in specifying the power analysis, the default is to enter power = .80 (80% power) and a critical rejection region of alpha = .05 (two tailed)
- also enter the proposed effect size for the statistical technique (effect sizes can be derived from journal articles on a topic similar to yours)
- a larger sample size is better able to rule out chance; 80% power = an 80% likelihood of obtaining statistical significance when the effect exists
- increasing N makes it more likely that beta, B, and multiple R2 will yield sig. p values in the regression:
N = 50, beta = .30, ns
N = 100, beta = .30, p < .05
N = 150, beta = .30, p < .01
N = 200, beta = .30, p < .001
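A minimal sketch of a power analysis for the omnibus regression F test (not G*Power itself). It assumes Cohen's f^2 as the effect size and the convention that the noncentrality parameter equals f^2 * N; the numbers (5 predictors, f^2 = .15, alpha = .05, power = .80) are illustrative assumptions.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, k, f2, alpha=0.05):
    """Power of the omnibus F test for a regression with k predictors and sample size n."""
    df1, df2 = k, n - k - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, nc=f2 * n)   # noncentral F gives the power

k, f2 = 5, 0.15          # 5 predictors, "medium" effect size (assumed)
n = k + 2
while regression_power(n, k, f2) < 0.80:   # target 80% power at alpha = .05
    n += 1
print("minimum N:", n)    # roughly 90 for these assumptions
```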

3 types of nonrandom sampling

1) convenience sampling
- a sample of the people most readily accessible/willing to be part of the study
- e.g., psych undergrad subject pools (replication crisis concerns: age, SES, more educated), or a survey in a newspaper read predominantly by females
- may not be generalizable to the population
2) snowball sampling
- initially sampled participants are asked to contact and recruit others in their social network; those recruits then recruit additional others, and so forth
- e.g., people tend to know others who are similar to themselves, so recruits share similar features and backgrounds; used to study PTSD among North Korean defectors
3) quota sampling
- sample nonrandomly until a predefined number/proportion of participants for each subgroup is achieved
- ensures an adequate number of people from all subgroups is included in the study
- e.g., a study comparing gender differences among police officers in attitudes about death, with a sampling aim of N = 100 (13% female officers would not provide a sufficient N, therefore sample 50% female and 50% male officers)

Multicollinearity detection indices

1) correlation matrix across predictors
- aim for non-high correlations: < |.80|
- disadvantage: only able to detect redundancy between 2 predictors at a time
2) Tolerance (computed in the sketch after this list)
- definition: the proportion of variance in a predictor not explained by the other predictors
- math: first, all other predictors are used to explain that predictor, yielding an R squared; second, compute tolerance = 1 - R squared
- range: 0 (extreme multicollinearity) to 1.00 (no multicollinearity)
- aim for high tolerance: > .20 (preferably > .40) - suggests low multicollinearity problems
- high tolerance = the predictor's variance is not explained by the other predictors
- e.g., if tolerance = .98, then R squared = .02 (1 - .02 = .98), meaning only 2% of that predictor's variance is explained by the other predictors
3) Variance inflation factor (VIF)
- definition: the inverse of tolerance
- math: VIF = 1/tolerance
- range: 1 (no multicollinearity) to + infinity (extreme multicollinearity)
- aim for low VIF: < 5.00 (preferably < 2.50) - values close to 1 indicate no multicollinearity problems
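A minimal sketch (hypothetical predictor names, simulated data) computing tolerance and VIF by hand: regress each predictor on the remaining predictors, then tolerance = 1 - R squared and VIF = 1/tolerance.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = pd.DataFrame({"peer_norms": rng.normal(size=200)})
X["parent_norms"] = 0.5 * X["peer_norms"] + rng.normal(size=200)   # moderately correlated pair
X["age"] = rng.normal(size=200)

for col in X.columns:
    others = sm.add_constant(X.drop(columns=col))          # all remaining predictors
    r2 = sm.OLS(X[col], others).fit().rsquared             # variance in this predictor they explain
    tol = 1 - r2
    print(f"{col}: tolerance = {tol:.2f} (aim > .20), VIF = {1 / tol:.2f} (aim < 5)")
```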

Results APA Moderation

A multiple regression model containing an interaction term was estimated. The skewness range of -0.76 to 0.52 indicated that all variables were approximately normally distributed. The predictors were z scored (standardized) and then used to compute the interaction term. Afterward, the tolerance values of .93 to .97 revealed no multicollinearity problems. The regression model explained 38% of the variance in drinking days, F(3, 498) = 102.70, p < .001. Specifically, higher alcohol attitudes (beta = .61, p < .001), lower conscientiousness (beta = -.11, p < .01), and attitudes x conscientiousness (beta = -.12, p < .01) uniquely predicted a greater number of drinking days. The interaction (moderation) effect is graphed in Figure 1. The graph shows that the relation of alcohol attitudes to drinking days is stronger for participants possessing low versus high conscientiousness.

Multicollinearity

Definition: predictors highly interrelated (avoid!)
Why is multicollinearity problematic?
1) Redundant predictors don't add much additional unique contribution toward increasing the multiple R (and R squared)
2) It produces unstable beta coefficients (see suppression effect) due to the lack of unique contributions of each predictor
3) Each predictor's standard error (SE) becomes erroneously inflated (because the predictors possess highly redundant shared variance), so each predictor is less likely to emerge as sig. on the outcome
Multicollinearity can produce suppression effect problems
- out-of-bounds, strange beta values > |1.00|
- the Pearson r and beta of a predictor to the outcome exhibit opposite signs (e.g., r = .40 but beta = -.20)
Solutions to multicollinearity problems
- drop one of the redundant predictors (because redundant predictors lack discriminant validity with each other)
- create a mean composite of the redundant variables (and report its Cronbach's alpha) to enter as a single predictor

Regression: Overall model (containing entire set of predictors)

Multiple R (model effect size: less commonly reported)
- conceptually: the correlation between the set of predictors and the outcome - a stronger beta magnitude for each predictor contributes to a higher multiple R value
- statistically: the correlation between the Y and Y' scores - the extent to which observed and predicted scores correspond
- range: 0.00 to 1.00
- e.g., multiple R = .83 - an index of the effect of the entire set of predictors on the outcome
Multiple R2 (model effect size: more commonly reported and more meaningful)
- square the multiple R (.83^2 = .69), or SSregression/SStotal
- the set of predictors explains this proportion of variance in the outcome
- R2 = .69 (69%): # hours studying explains .69 of the proportion, or 69 percent, of the variance in exam scores
- the variability of the regression divided by the total variability - the proportion of variability attributed to the regression line
Are multiple R and R2 statistically significant?
- H0: R = 0 (R2 = 0); H1: R > 0 (R2 > 0)
- F = MSregression/MSresidual (see the sketch after this entry)
- MSregression = the model's signal; MSresidual = the model's noise (chance) - want this to be as low as possible
***In the regression model, the predictors explained 69% of the total variance in the outcome, F(1, 8) = 18.09, p < .05.
- 1, 8 = regression df, residual df
- 69% = multiple R squared
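A minimal sketch (one hypothetical predictor, e.g., hours studying predicting exam score, with simulated data) assembling multiple R, R2, and the omnibus F from the sums of squares described above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
hours = rng.uniform(0, 10, size=10)
exam = 50 + 4 * hours + rng.normal(scale=8, size=10)

fit = sm.OLS(exam, sm.add_constant(hours)).fit()
ss_total = np.sum((exam - exam.mean()) ** 2)      # total variability in the dataset
ss_resid = np.sum(fit.resid ** 2)                 # variability not due to the regression line
ss_reg = ss_total - ss_resid                      # variability attributed to the regression line

r_squared = ss_reg / ss_total                     # proportion of outcome variance explained
k, n = 1, len(exam)
f_stat = (ss_reg / k) / (ss_resid / (n - k - 1))  # MSregression / MSresidual
print(np.sqrt(r_squared), r_squared, f_stat)      # matches fit.rsquared and fit.fvalue
```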

R2 vs. Adjusted R2

Multiple R2
- the proportion of outcome variance explained by the set of predictors (not correcting for chance)
Adjusted (shrunken) multiple R2
- the proportion of outcome variance explained by the set of predictors (after correcting for chance)
- adjusted R2 increases only if the predictors entered contribute to the outcome beyond chance
- the chance correction (driven by an increasing number of predictors and a smaller N) penalizes by reducing the R2 - thus adjusted R2 is always smaller than or equal to R2
Greater discrepancy between R2 and adjusted R2 is due to
- an increasing number of predictors
- a smaller N
Recommendation
- R2 is more commonly reported in journal articles than adjusted R2 - could report both
- more predictors = more likely to capitalize on chance
- smaller sample size = more capitalization on chance and a greater adjustment to R2
- if there are not many predictors and N is large, R2 and adjusted R2 are almost the same
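A minimal sketch using the standard shrinkage formula (an assumption; the formula is not given in these notes): adjusted R2 = 1 - (1 - R2)(N - 1)/(N - k - 1). It shows how more predictors and a smaller N increase the discrepancy.

```python
def adjusted_r2(r2, n, k):
    """Standard shrinkage adjustment for a model with k predictors and sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(0.30, n=500, k=3))   # ~0.296: large N, few predictors -> little shrinkage
print(adjusted_r2(0.30, n=40, k=10))   # ~0.059: small N, many predictors -> heavy shrinkage
```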

Exploratory regression

Purpose: use if there are no hypotheses and/or many predictors - the software automatically searches for statistically sig. predictors - greatly increases the risk for Type I error
Methods
Stepwise regression (most popular exploratory method; sketched after this entry)
1) software starts with no predictors in the model
2) the most sig. (based on p value) predictor is added into the model (by the software)
3) the next most sig. predictor is added; remove any previously added predictors if they are no longer sig.
4) software keeps repeating step 3 until only sig. predictors are in the model
5) final exploratory model: only sig. predictors
Backward stepwise regression
1) software starts with all predictors in the model
2) the most non-sig. (based on p value) predictor is removed from the model by the software
3) the next most non-sig. predictor is removed
4) software keeps repeating step 3 until only sig. predictors remain in the model
5) final exploratory model: only sig. predictors
Why the high risk for Type I error
- e.g., with 7 predictors, as many as 28 (7 predictors x 4 model searches) p-value analyses are performed to yield the final model - each search examines up to 7 p values - the search stops when the remaining predictors aren't sig.
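A minimal sketch of forward stepwise selection by p value (not SPSS's exact algorithm; thresholds and data are assumptions). It makes the Type I error problem concrete: many p values are examined on the way to one "final" model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def stepwise(X, y, enter=0.05, remove=0.10):
    selected = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        # p value of each candidate predictor when added to the current model
        pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                 for c in remaining}
        if not pvals or min(pvals.values()) >= enter:
            return selected
        selected.append(min(pvals, key=pvals.get))          # add the most significant candidate
        fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
        selected = [c for c in selected if fit.pvalues[c] < remove]   # drop any no-longer-sig. predictor

rng = np.random.default_rng(5)
X = pd.DataFrame(rng.normal(size=(300, 7)), columns=[f"x{i}" for i in range(1, 8)])
y = 0.4 * X["x1"] + 0.3 * X["x3"] + rng.normal(size=300)
print(stepwise(X, y))   # typically ['x1', 'x3'], but chance inclusions do occur
```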

Hierarchical regression

Purpose: use if there are two or more hypothesized blocks of predictors - test whether each predictor block (and the predictors in each block) incrementally contributes a sig. proportion of variance to the model R2 (see the sketch after this list)
- each block represents a set of conceptually similar predictors
- e.g., demographic block (age, gender, race, SES), personality block (extraversion, agreeableness, conscientiousness, neuroticism, openness), psychological block (attitudes, beliefs, perceptions, ideology), social block (peer norms, family variables, social environment)
- how to determine the entry order for each block: blocks hypothesized to occur earlier in development or time should be entered earlier
- e.g., demographics entered before personality; personality entered before psychological
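A minimal sketch (simulated data, hypothetical blocks) of a two-step hierarchical model with an R2-change F test: does the block added in Model 2 explain variance beyond Model 1?

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["age", "ses", "peer_norms", "parent_norms"])
df["attitudes"] = 0.2 * df["ses"] + 0.4 * df["peer_norms"] + rng.normal(size=n)

block1 = ["age", "ses"]                            # demographic block, entered first
block2 = block1 + ["peer_norms", "parent_norms"]   # plus the social block

m1 = sm.OLS(df["attitudes"], sm.add_constant(df[block1])).fit()
m2 = sm.OLS(df["attitudes"], sm.add_constant(df[block2])).fit()

k_added = len(block2) - len(block1)
f_change = ((m2.rsquared - m1.rsquared) / k_added) / ((1 - m2.rsquared) / (n - len(block2) - 1))
print(m1.rsquared, m2.rsquared, f_change)          # compare f_change to F(k_added, n - k_full - 1)
```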

Dummy predictors in regression

Reference group
- do not enter 1 dummy predictor (the reference group) into the regression model
- choose the group with the largest N, or base the choice on the research question (each comparison group will be contrasted with this reference group)
- e.g., White (do not enter this dummy predictor into the regression model) - White is represented by the zeros that overlap across each dummy-coded variable
Comparison groups (see the sketch after this list)
- enter dummy predictors for these comparison groups into the regression model
- e.g., Latino, Black, and Asian (enter these dummy predictors into the regression model)
- minimum n = 10 per group for sufficient statistical power
- a participant showing 0 values for all the comparison-group dummies is automatically a reference group member
- the reference group is contrasted with the other groups, e.g., the Latino, Black, and Asian groups are each compared to the White group
- these contrasts are obtained by simultaneously controlling for all the dummy (comparison group) predictors in the regression model
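A minimal sketch (hypothetical race variable, simulated outcome) of dummy coding with White as the reference group: only the comparison-group dummies enter the model, and each B is that group's difference from the reference group.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
race = rng.choice(["White", "Latino", "Black", "Asian"], size=400, p=[.5, .2, .2, .1])
attitudes = rng.normal(size=400) + (race == "Black") * 0.5

# White = all zeros on every dummy = reference group (its dummy is dropped, not entered)
dummies = pd.get_dummies(race, dtype=float).drop(columns="White")
fit = sm.OLS(attitudes, sm.add_constant(dummies)).fit()
print(fit.params)   # each B = that comparison group's mean difference from the White group
```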

Regression: Residuals

Residual (error)
- the deviation (distance) of the observed Y scores from the predicted Y' scores - small residuals (near zero) are desirable
- residuals represent noise, or inaccuracy of prediction: observed data points not falling on the regression line are error in prediction
- we want the regression-line variability to be high and the residual variability to be small
- residuals always add up to zero - positive and negative residuals always cancel out (verified in the sketch after this entry)
- a negative residual indicates Y' is higher than the observed Y; a positive residual indicates the observed Y is higher than the predicted Y'
- Y - Y' = residual; R = 1.00 if the Y and Y' values are identical
- a positive residual is above the regression line; a negative residual is below it; if zero, the observed Y is very close to or on the regression line
SSresidual = sum(Y - Y')^2
- conceptually: represents the variability attributed to all the residuals (AKA variability not due to the regression line)
- the smaller the SSresidual, the better the correspondence between the Y and Y' scores - thus, if all the observed Y scores fall exactly on the Y' regression line, then SSresidual = 0 and the entire variability of the dataset is due to the regression line
- each residual deviation is squared to eliminate the negative signs
Ordinary least squares (OLS) regression
- the statistical estimation (extraction) method for regression
- the linear regression line (equation) is derived by minimizing the SSresidual value - essentially using the "least squares" of the deviation scores (the squared deviations between observed and predicted points)
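A minimal sketch (simulated data) verifying two properties stated above: OLS residuals sum to essentially zero, and SSresidual = sum((Y - Y')^2) is the quantity OLS minimizes.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, size=50)
y = 2 + 3 * x + rng.normal(scale=2, size=50)

fit = sm.OLS(y, sm.add_constant(x)).fit()
residuals = y - fit.fittedvalues            # Y - Y'
print(round(residuals.sum(), 10))           # ~0: positive and negative residuals cancel out
print(np.sum(residuals ** 2))               # SSresidual, which the OLS line makes as small as possible
```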

Results APA: Regression assumptions

The statistical assumptions for regression analysis were evaluated. The quantitative variables were approximately normally distributed, with a skewness range of -1.14 to 2.02. The tolerance values of .83 to .98 indicated no multicollinearity problems. In the regression model, the set of predictors explained 8% of the variance in alcohol attitudes, F(7, 490) = 5.90, p < .001. Specifically, White compared to Black (beta = -.17, p < .001) and Asian (beta = -.14, p < .01) race, male compared to female (beta = .13, p < .01), older age (beta = .10, p < .05), and higher peer attachment (beta = .12, p < .05) uniquely predicted higher positive alcohol attitudes. The predictors of Latino (vs. White) race (beta = -.16) and parent attachment (beta = -.04) were not significant in the model.

Results APA style Mediation

The statistical assumptions for regression analysis were evaluated. The skewness range of -1.08 to 0.52 indicated that the three variables were approximately normally distributed. The tolerance range of .93 to 1.00 did not reveal any multicollinearity problems. In the hypothesized model, the predictor was peer norms, the mediator was alcohol attitudes, and the outcome was drinking days. The mediational model was evaluated using the four steps recommended by Baron and Kenny (1986). The results are presented in Figure 1. First, higher peer norms were significantly related to more drinking days (not controlling for the mediator, alcohol attitudes). Second, higher peer norms significantly explained higher alcohol attitudes. Third, higher alcohol attitudes significantly predicted more drinking days, after controlling for peer norms. Fourth, the reduction in beta weight from the significant c path to the nonsignificant c' path signified full mediation. The mediational pathway from peer norms to alcohol attitudes to drinking days was significant, Sobel test z = 5.73, p < .001.

Results APA style Hierarchical regression model

The statistical assumptions for regression analysis were evaluated. The variables were normally distributed, with a skewness range of -1.08 to -0.05. Multicollinearity problems were not encountered, as tolerance ranged from .68 to .94. A hierarchical multiple regression model was estimated to predict alcohol attitudes. The results are presented in Table 1. In Model 1, the personality trait block explained 4% of the variance in alcohol attitudes, F(5, 491) = 4.37, p < .01. Specifically, higher extraversion and lower conscientiousness uniquely predicted positive alcohol attitudes. In Model 2, the personality and alcohol norms blocks together contributed 16% of the variance in alcohol attitudes, F(7, 491) = 13.24, p < .001. Specifically, higher extraversion, lower conscientiousness, higher peer norms, and higher parental norms uniquely predicted positive alcohol attitudes. Furthermore, the R2 change test determined that the alcohol norms block contributed an additional 12% of the variance in alcohol attitudes beyond the personality block, Fchange(2, 491) = 33.91, p < .001.

Indices of predictor to outcome relation

X1 explains how much proportion of variance in Y?
Pearson r2
- not controlling for X2 (not controlling for any other predictors; ignores the overlap with the other predictor)
- (a + c) / (a + b + c + d)
Semi-partial r2
- controlling for X2 - the part of the predictor that overlaps with the outcome, without counting the part that overlaps with the other predictor - completely ignores the overlapping part of the predictor
- a / (a + b + c + d)
Beta2
- controlling for X2 - treats the 2 predictors as correlated to an extent
- (a + some other proportion of c) / (a + b + c + d)
Partial r2
- controlling for X2 - ignores the entirety of any overlap the 2nd predictor has with the outcome
- a / (a + d)
- never reported, because the denominator is not all of Y's variance (a + b + c + d)

Moderator

Definition: a variable that augments or reduces the strength of a predictor-to-outcome relation (essentially, the interaction effect on the outcome)
Statistically analyzing moderation (AKA the interaction effect)
- Factorial ANOVA with interaction (predictor and moderator must be categorical)
- Multiple regression with interaction (predictor and moderator can be quantitative or categorical - dummy coded)
Examples
- predictor: sun exposure; outcome: sunburn; moderators: sunscreen, residency
- the moderator is considered another predictor - the interaction is over and beyond the main effects
- if the interaction term is sig., moderation exists - draw the interaction graph
- if the interaction is sig., there is evidence that the moderator affects the predictor --> outcome relation - it increases or decreases the strength (beta weight) of the predictor's relation to the outcome

Regression: SPSS

Multiple R
- the correlation between the set of predictors and the outcome - the stronger the correlation, the more strongly the predictors contribute to explaining variance in the outcome
Multiple R squared
- preferred for interpretation over multiple R
ANOVA table (regression)
- regression + residual = total; total = the variability in the entire dataset
- sum of squares = partitioning the variance into that attributed to the regression line and that not attributed to the regression line (residual)
- SS/df = mean square (MS); MSregression/MSresidual = F
- if sig., the regression line is statistically significant beyond chance - permission to examine whether each predictor is sig. related to the outcome
Coefficients
- constant = intercept - the point at which the regression line crosses the y axis when x = 0
- x_hoursstudying = slope
- B and the beta weight have the same sig. p value - they are just on different metrics
- beta = standardized value - provides effect size information on the strength of the predictor-outcome relation

Regression: Multiple Predictors

Unstandardized regression equation: Y' = B0 + B1(X1) + B2(X2) + B3(X3) + ...
Standardized regression equation: Zy' = Beta1(ZX1) + Beta2(ZX2) + Beta3(ZX3) + ...
- each predictor's B and beta represent its unique effect on the outcome (after statistically controlling for all other predictors)
How SPSS automatically computes the standardized regression equation (see the sketch after this list)
- first, all the variables (predictors and outcome) are standardized (z scored)
- next, the standardized variables are used to solve for and build the standardized regression equation
- the beta coefficient for each predictor provides effect size information (after controlling for all other predictors) and typically ranges from -1.00 to +1.00 (values beyond this range signal suppression/multicollinearity problems)
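A minimal sketch (simulated data, hypothetical predictor names) showing how the unstandardized (B) and standardized (beta) equations relate: the betas are simply the coefficients from a model in which every variable has been z scored, and the standardized equation has no intercept.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 300
df = pd.DataFrame({"x1": rng.normal(10, 2, n), "x2": rng.normal(50, 10, n)})
df["y"] = 3 + 1.5 * df["x1"] + 0.2 * df["x2"] + rng.normal(scale=4, size=n)

b_fit = sm.OLS(df["y"], sm.add_constant(df[["x1", "x2"]])).fit()   # B0, B1, B2 (original metrics)
z = (df - df.mean()) / df.std()                                    # z-score every variable
beta_fit = sm.OLS(z["y"], z[["x1", "x2"]]).fit()                   # beta1, beta2 (no intercept needed)

print(b_fit.params)      # unstandardized Bs: depend on each variable's metric
print(beta_fit.params)   # standardized betas: comparable effect sizes across predictors
```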

Meta analysis: cohen's d

Unweighted model
- effect sizes are averaged across the number of studies (without taking into consideration the N in each study)
- each study is equally weighted in terms of effect size - each contributes the same percentage to the summary effect
Fixed-effect model
- studies possessing larger Ns are weighted more heavily - takes sample size into consideration
Random-effects model
- effect sizes are weighted as a compromise between the N in each study and the number of studies (k)
- prevents any one study from overcontributing to the summary effect
- considered the most acceptable and accurate approach
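A minimal sketch (illustrative d values and Ns, not real studies) contrasting the three approaches. The fixed-effect and random-effects weights here use the common inverse-variance formulation (weights grow with N), with a DerSimonian-Laird estimate of between-study variance for the random-effects model; this is one standard way to implement the ideas above, not necessarily the text's exact formulas.

```python
import numpy as np

d = np.array([0.10, 0.45, 0.30, 0.60])               # per-study Cohen's d
n1 = n2 = np.array([20, 150, 40, 25])                # per-group sample sizes
v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))   # approximate sampling variance of each d

print("unweighted:", d.mean())                       # every study counts equally

w = 1 / v                                            # fixed-effect: larger studies weigh more
d_fixed = np.sum(w * d) / np.sum(w)
print("fixed-effect:", d_fixed)

q = np.sum(w * (d - d_fixed) ** 2)                   # heterogeneity statistic Q
tau2 = max(0.0, (q - (len(d) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)                                # random-effects: compromise weighting
print("random-effects:", np.sum(w_re * d) / np.sum(w_re))
```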

