COMPS: Cognitive Ability

How does anchoring affect assessments of subjective probability distributions?

Anchoring in the assessment of subjective probability distributions: people anchor on an initial value (e.g., their best estimate) and adjust insufficiently, producing less extreme (overly tight) distributions.

What is adverse impact?

Differences in hiring rates between subgroups. The hiring rate (selection ratio) is the percentage of applicants who were hired
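As a quick illustration (a minimal sketch with hypothetical counts, not part of the card), the hiring rate for each subgroup is simply hires divided by applicants, and adverse impact is assessed by comparing those rates:

```python
# Hypothetical applicant/hire counts for two subgroups (illustrative only).
applicants = {"group_A": 200, "group_B": 100}
hires = {"group_A": 40, "group_B": 10}

# Selection ratio (hiring rate) per subgroup = hires / applicants.
selection_ratios = {g: hires[g] / applicants[g] for g in applicants}
print(selection_ratios)  # {'group_A': 0.2, 'group_B': 0.1}

# Adverse impact is indicated by a difference in hiring rates between subgroups,
# often summarized as the ratio of the lower rate to the higher rate.
impact_ratio = selection_ratios["group_B"] / selection_ratios["group_A"]
print(round(impact_ratio, 2))  # 0.5
```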

What is differential prediction?

Differences in regression equations between subgroups. This is often used as a synonym for predictive bias.

What are intercept differences?

Differences in regression intercepts between subgroups. This is a form of differential prediction.

What are slope differences?

Differences in regression slopes between subgroups. This is a form of differential prediction.

What is differential range restriction?

Differences in the amount of range restriction between groups

What is differential validity?

Differences in the correlation between the test and criterion (i.e., validity) between subgroups

What is a heuristic?

Heuristic: a mental shortcut that generally produces reasonable results; observed decision behavior can be generated by a reasonably small number of cognitive rules of thumb

What are the components of the two-factor theory of abilities proposed by Spearman (1904)?

Spearman (1904) proposed a two-factor theory of abilities, including general cognitive ability (g) and specific abilities (s)

What is test bias?

"Any construct-irrelevant source of variance that results in systematically higher or lower scores for identifiable groups of examinees" (SIOP 2003, p. 32). Two particularly relevant forms of test bias are measurement bias and predictive bias.

What is test fairness?

"Fairness is a social rather than a psychometric concept. Its definition depends on what one considers to be fair. Fairness has no single meaning, and, therefore, no single definition, whether statistical, psychometric, or social" (SIOP 2003, p. 31)

What are the major new findings of Nisbett et al. (2012) regarding intelligence?

(a) Heritability of IQ varies significantly by social class. (b) Almost no genetic polymorphisms have been discovered that are consistently associated with variation in IQ in the normal range. (c) Much has been learned about the biological underpinnings of intelligence. (d) "Crystallized" and "fluid" IQ are quite different aspects of intelligence at both the behavioral and biological levels. (e) The importance of the environment for IQ is established by the 12-point to 18-point increase in IQ when children are adopted from working-class to middle-class homes. (f) Even when improvements in IQ produced by the most effective early childhood interventions fail to persist, there can be very marked effects on academic achievement and life outcomes. (g) In most developed countries studied, gains on IQ tests have continued, and they are beginning in the developing world. (h) Sex differences in aspects of intelligence are due partly to identifiable biological factors and partly to socialization factors. (i) The IQ gap between Blacks and Whites has been reduced by 0.33 SD in recent years.

What are some strategies that use predictors with smaller subgroup differences than cognitive ability?

1. Use alternative predictor measurement methods (e.g., interviews, work samples, assessment centers, situational judgment tests, biodata). Generally effective, but specific reductions are quite variable; predictors with smaller cognitive loadings produce smaller differences; some methods decrease differences for one group but increase them for another.
2. Use educational attainment or GPA as a proxy for cognitive ability. Small to moderate reduction in subgroup differences compared to cognitive ability; less valid than cognitive ability and susceptible to faking in self-reports.
3. Use specific measures of ability. Specific (narrow) measures of cognitive ability (e.g., verbal, quantitative) have smaller subgroup differences than overall cognitive ability; male/female differences may be larger than for overall ability and may favor men (quantitative ability) or women (verbal ability).

What are some strategies that allow practice?

13. Retesting → small to no reduction
14. Use predictor orientation programs → small and inconsistent reductions

What are some strategies that foster favorable applicant reactions?

15. Increasing and retaining racioethnic minority and female applicants → small reductions
16. Enhance applicant perceptions → small reductions

What are some strategies that combine and manipulate scores?

4. Assess the full range of KSAOs. Generally effective, but the magnitude of reduction depends on predictor validities and intercorrelations; diminishing returns after adding four or more predictors.
5. Banding and score adjustments. Reductions can be sizeable if using racioethnic minority or female preference within bands; otherwise reductions are small or nonexistent. Racioethnic minority or female preference is usually illegal.
6. Explicit predictor weighting. Small to moderate reduction in subgroup differences; greater reduction likely comes from choosing which predictors to put in the battery, rather than from differential weighting within the battery.
7. Criterion weighting. Small to moderate reduction in adverse impact when weighting contextual performance.

What are some strategies that reduce construct irrelevant variance from predictor scores?

8. Minimize verbal ability requirements to the extent supported by job analysis → generally effective, but magnitude is variable
9. Use "content free" items that don't promote any cultural subgroup → small and inconsistent reductions
10. Differential item functioning (DIF) analysis → small and inconsistent reductions
11. Sensitivity review panels → no data on effectiveness
12. No time limits → no clear reduction in subgroup differences

What is over/underprediction?

A form of differential prediction in which one subgroup's regression line relating the test and criterion lies above the other subgroup's regression line. When the common regression line (the regression line for both subgroups combined) is used, criterion performance is overpredicted for the subgroup with the lower regression line and underpredicted for the subgroup with the higher regression line.
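To make the geometry concrete, here is a minimal simulation sketch (not the card's example; the effect sizes and variable names are assumptions chosen for illustration). The two subgroups share a slope but differ in intercept, and a single common regression line then overpredicts the criterion for the subgroup whose own line lies lower:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical subgroups: same test-criterion slope, different intercepts.
group = rng.integers(0, 2, n)                 # 0 = higher line, 1 = lower line
test = rng.normal(0, 1, n)
criterion = 0.5 * test - 0.3 * group + rng.normal(0, 1, n)

# Common (pooled) regression line fit to both subgroups combined.
slope, intercept = np.polyfit(test, criterion, 1)
pred = intercept + slope * test

# Mean prediction error (predicted - actual) by subgroup.
for g in (0, 1):
    err = np.mean(pred[group == g] - criterion[group == g])
    print(f"group {g}: mean error = {err:+.3f}")
# The lower-line group (1) shows a positive mean error: its performance is overpredicted;
# the higher-line group (0) shows a negative mean error: its performance is underpredicted.
```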

What is differential item functioning?

A form of measurement bias wherein individual items are biased such that, when ability is held constant, item difficulty or discrimination differs between subgroups
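One common way to screen an item for DIF is logistic regression of the item response on a matching variable (e.g., total score), group membership, and their interaction; the sketch below is illustrative, with simulated data and assumed effect sizes, and is not a procedure taken from the card. A notable group coefficient suggests uniform DIF, and a notable interaction coefficient suggests nonuniform DIF.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulated examinees: equal ability distributions in both groups.
group = rng.integers(0, 2, n)
ability = rng.normal(0, 1, n)
total_score = ability + rng.normal(0, 0.5, n)   # proxy matching variable

# Build in uniform DIF: at equal ability, the item is harder for group 1.
p_correct = 1 / (1 + np.exp(-(ability - 0.6 * group)))
item = rng.binomial(1, p_correct)

# Logistic-regression DIF screen: item ~ total_score + group + total_score*group.
X = sm.add_constant(np.column_stack([total_score, group, total_score * group]))
fit = sm.Logit(item, X).fit(disp=0)
print(fit.params)    # coefficients: const, total_score, group, interaction;
                     # a sizeable group term flags uniform DIF,
                     # a sizeable interaction term flags nonuniform DIF
```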

What is a statistical artifact?

A statistical artifact is an inference that results from bias in the collection or manipulation of data; it can be defined as erroneous or inaccurate information about a phenomenon caused by biased data and/or measurement. The implication is that the findings do not reflect the real world but are, rather, an unintended consequence of measurement error. For example, if a study finds a statistically significant difference in intelligence between two ethnic groups, but the difference is actually caused by the cultural insensitivity of the tool used to measure intelligence, the finding can be viewed as a statistical artifact.

When is the adverse impact potential greatest?

Adverse impact potential is greatest when the mean subgroup score difference is large (as with cognitive ability tests) and the selection ratio is small (as is typical in high-stakes personnel selection settings)
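A small normal-theory sketch can illustrate this (assumptions not in the card: normally distributed scores with equal SDs, a hypothetical 80/20 majority/minority applicant mix, and strict top-down selection). The ratio of minority to majority hiring rates drops as the mean difference d grows and as the overall selection ratio shrinks:

```python
from scipy.stats import norm

def impact_ratio(d, selection_ratio, p_minority=0.2):
    """Ratio of minority to majority hiring rates under top-down selection."""
    # Majority scores ~ N(0, 1); minority scores ~ N(-d, 1).
    # Bisect for the cut score that yields the desired overall selection ratio.
    lo, hi = -6.0, 6.0
    for _ in range(100):
        cut = (lo + hi) / 2
        overall = (1 - p_minority) * norm.sf(cut) + p_minority * norm.sf(cut + d)
        lo, hi = (cut, hi) if overall > selection_ratio else (lo, cut)
    return norm.sf(cut + d) / norm.sf(cut)

for d in (0.3, 1.0):
    for sr in (0.5, 0.1):
        print(f"d={d}, SR={sr}: impact ratio = {impact_ratio(d, sr):.2f}")
# Larger d and smaller SR both push the minority/majority hiring-rate ratio further below 1.
```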

Is g the only important difference among people?

Although one cannot ignore the huge influence of individual differences in g, acknowledging that g is important does not imply that other factors are unimportant codeterminants of valued outcomes.

Which Strategies Do Not Involve a Validity Tradeoff (Ployhart et al., 2008)?

Among the most effective strategies, the only one that does not also reduce validity is assessing the full range of KSAOs (Strategy 4), which tends to enhance validity. The other effective strategies will reduce validity, although sometimes only to a small degree: (a) minimizing verbal ability requirements (Strategy 8) appears to lower validity for some predictors (situational judgment tests) but not others (assessment centers); (b) many of the more distal techniques, such as retesting (Strategy 13), increasing and retaining racioethnic minority and female applicants (Strategy 15), and enhancing applicant reactions (Strategy 16), appear to have little to no effect on validity.

What is measurement invariance?

An instance in which the factor structure of a test is the same (invariant) for two subgroups. Lack of measurement invariance (i.e., DIF) is a form of measurement bias.

What is range restriction?

An instance in which the range of scores on a test is restricted in a sample because the sample was selected on test scores or on some variable related to test scores. Range restriction is a statistical artifact that systematically reduces the correlation between test and criterion scores. Example: imagine a cognitive ability test has a standard deviation of 1.0 in an applicant pool. If only applicants scoring in the top 10% on cognitive ability were hired, the standard deviation of cognitive ability test scores would be only about .39 in the hired incumbent sample. The restricted range would make the correlation between the cognitive ability test and job performance smaller in the incumbent sample than it would have been in the entire applicant pool (if the entire applicant pool had hypothetically been hired).
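The figures in this example can be reproduced with a short simulation sketch (the .50 applicant-pool correlation is an assumed illustrative value; only the top-10% rule and the applicant-pool SD of 1.0 come from the card):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical applicant pool: test ~ N(0, 1), criterion correlates .50 with the test.
test = rng.normal(0, 1, n)
criterion = 0.5 * test + np.sqrt(1 - 0.5**2) * rng.normal(0, 1, n)

# Direct range restriction: hire only the top 10% on the test.
cut = np.quantile(test, 0.90)
hired = test >= cut

print(np.std(test[hired]))                               # roughly .4, far below 1.0
print(np.corrcoef(test, criterion)[0, 1])                # ~.50 in the applicant pool
print(np.corrcoef(test[hired], criterion[hired])[0, 1])  # noticeably smaller among incumbents
```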

What is predictive bias?

An instance in which, "for a given subgroup, consistent nonzero errors of prediction are made for members of the subgroup" (SIOP 2003, p. 32). This is often used as a synonym for differential prediction.

Is g the primary determinant of individual differences in merit?

Assuming that merit is based on a person's demonstrated capacities and accomplishments, we can understand that g is an important influence. Higher g people are more likely to be able to develop capacities to demonstrate meritorious accomplishments; individual differences in g are going to account for a large part of the variance in any criterion used to assess merit. However, it should also be obvious that accomplishments are the result of a multitude of interacting factors that are genetic and environmental, cognitive and noncognitive, and distal and proximal. g is not a sole determinant, and having high g does not guarantee that one will exhibit meritorious behavior.

What is a bias of imaginability (availability heuristic)?

Biases of imaginability: when the frequency of a class is not stored in memory, we generate it according to some rule. For example, when estimating whether committees of 8 members or committees of 2 members are more frequent, we mentally construct committees and judge frequency by the ease of this construction. Two-member committees are easier to construct and thus may be judged more frequent. In real life, imaginability biases can lead us to overestimate risks that come with vivid scenarios and underestimate dangerous risks that are hard to conceive.

What are two reasons for HR researchers to adopt a multidimensional view of intelligence?

Bifactor models: Consistent with the bifactor model of cognitive abilities (Humphreys, 1981; Yung, Thissen, & McLeod, 1999), the value of specific abilities for predicting job performance need not reside solely in their incremental prediction over and beyond g. Instead, it might make sense to treat specific abilities and g as two separate notions that compete for predictive utility simultaneously, rather than awarding all of their common variance to g. Adverse impact reduction: When we consider covariates of cognitive ability other than general job performance, the relationships can vary considerably across the specific abilities. One prime example is ethnic group differences, which can differ considerably from one specific cognitive ability to the next. For example, Roth et al. (2001) showed that the average Black-White subgroup d was at least 10% smaller for math tests than for verbal tests, using within-job industrial samples of both applicants and incumbents, whereas Hough, Oswald, and Ployhart (2001; who did not focus on within-job samples) showed that the average Black-White subgroup d was much smaller for tests of memory and tests of mental processing speed (see similar findings by Outtz & Newman, 2010).

What were the components of Cattell's original gf-gc theory?

Cattell's (1941, 1943) original gf-gc theory split Spearman's g into two general factors: abilities that are acquired (crystallized intelligence, or gc) and abilities that reflect natural potential and the biological integrity of the cerebral cortex (fluid intelligence, or gf).

Is g sufficient for effective or exceptional domain-specific performance?

Clearly, the answer is no. Even a moderate understanding of the development of performance capabilities requires the addition of multiple factors.

What is the difference between a conjunctive event and a disjunctive event (anchoring heuristics)?

Evaluation of conjunctive vs. disjunctive events: people overestimate the probability of conjunctive events and underestimate the probability of disjunctive events. A classic disjunctive structure: a complex system will malfunction if any of its essential components fails; although each component's likelihood of failure is small, failure of the whole system can be likely because many components are involved.
Conjunctive event → e.g., drawing a red marble seven times in succession, with replacement, from a bag containing 90 percent red marbles and 10 percent white marbles.
Disjunctive event → e.g., drawing a red marble at least once in seven successive tries, with replacement, from a bag containing 10 percent red marbles and 90 percent white marbles.
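The marble example can be checked with simple arithmetic (a quick sketch, not part of the card): the conjunctive event is actually slightly less likely than the disjunctive one, contrary to typical intuition.

```python
# Conjunctive: red on all 7 draws (with replacement) when P(red) = 0.90.
p_conjunctive = 0.90 ** 7
# Disjunctive: at least one red in 7 draws (with replacement) when P(red) = 0.10.
p_disjunctive = 1 - 0.90 ** 7

print(round(p_conjunctive, 3))  # 0.478 -> typically overestimated
print(round(p_disjunctive, 3))  # 0.522 -> typically underestimated
```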

What is face validity?

Face validity: the extent to which examinees perceive the content of the selection procedure to be related to the content of the job (Smither et al., 1993)

What is a bias due to the retrievability of instances (availability heuristic)?

Familiar, salient, and recent instances are easier to recall

What happens to adverse impact as the selection ratio gets smaller?

Given a mean score difference between subgroups and all else equal, adverse impact will become greater as the selection ratio becomes smaller

Is g the most important difference among people?

If we define "most important" specifically in terms of accounting for the most variance in "job performance" (for jobs typically studied by industrial-organizational [I-O] psychologists) or "academic performance," then g is almost always the "most important" factor. In the aggregate, individual differences in g predict a wider array of criteria and do so better than any other single variable. Fortunately, we are not limited to a single variable, and we do not have to limit our criteria to the common variance among measures of job or academic performance. We are adept at assessing a wide range of individual differences, and also we know that an assessment of individual differences on a wide array of factors will always predict a single criterion, or an array of criteria, better than any single variable, whether that single variable is g or any other construct. "For a given domain, what combination of traits and environmental conditions afford the best probability of acquiring expertise and demonstrating effective performance?"

What is illusion of validity (representativeness heuristic)?

Illusion of validity: confidence in a prediction is based on the degree of representativeness/consistency of the input, but predictions are actually more accurate when based on several pieces of information that are independent rather than correlated

What is an illusory correlation?

Illusory correlation: occurs when two different variables occur at the same time and an unproven connection between them is made based on little evidence; e.g., an individual has a bad experience with a lawyer and immediately assumes all lawyers are bad people

What is insensitivity to predictability (representativeness heuristic)?

Insensitivity to predictability: if the available information is not relevant to (e.g.) a company's profit, the same value should be assigned to all companies (low predictability), but people assign different values based on the favorability of the description

What is insensitivity to prior probability (representativeness heuristic)?

Insensitivity to prior probability: judging whether the description of a person resembles a lawyer while overlooking the base rate of lawyers

What is insensitivity to sample size (representativeness heuristic)?

Insensitivity to sample size: people make identical judgments for large and small samples, ignoring that small samples are more likely to deviate from the population

What Are the Major New Developments in Alternative Predictor Measurement Methods for Reducing Adverse Impact (e.g., interviews, SJTs, ACs, work samples, GPA and educational attainment, constructed responses)?

Interviews = Black and Hispanic scores about one-quarter of a standard deviation lower than Whites, but reasonably high validity → subgroup differences grow larger as cognitive loading increases
SJTs = reduce subgroup differences, but only for some subgroups → smaller Black-White differences than Asian-White differences → driven by cognitive loading
ACs = high predictive validity, but subgroup differences due to cognitive loading
Work samples = subgroup differences larger and validity smaller
GPA & educational attainment = more related to motivation than cognitive ability; hard to compare
Constructed-response options = reduced subgroup differences compared to multiple choice

What is one reason why a researcher might deem face validity important?

It's one of the few elements of a test (from the lens of validity) that is in the control of the researcher.

Is cognitive ability unidimensional?

No, cognitive ability is not unidimensional. Although unidimensional models of cognitive ability do not fit cognitive test data terribly, a large amount of empirical evidence from factor analysis shows that unidimensional models of cognitive ability tests fit notably worse than multidimensional and hierarchical models.

What did Chan et al. (1997) find regarding racial differences in face validity perceptions and test-taking motivation?

Our findings indicate lower levels of face validity perceptions and test-taking motivation for Black examinees, but the correlation between face validity and test performance is lower for Black examinees than for White examinees.

What are the major findings of Cottrell et al. (2015) regarding the impact of development on cognitive ability?

Our results suggest that Black-White gaps in cognitive test scores are large and pervasive, and are already established by 54 months of age. We show that race gives rise to a set of group differences in maternal advantage factors: income, maternal education, and maternal verbal ability/knowledge (Step 1), which in turn lead to parenting factors of maternal sensitivity, acceptance, physical environment, learning materials, birth weight, and birth order (Step 2), which in turn promote cognitive ability/knowledge in children (Step 3). Maternal verbal ability/knowledge directly impacts a child's cognitive ability/knowledge. Adverse impact created by cognitive tests may arise from Black-White differences in the important developmental conditions described above.

What are the findings of Sackett et al. (2023) regarding the relationship between general cognitive ability and job performance?

Overall, general cognitive ability (GCA) measures produce useful correlations with measures of job performance in the modern era, with a mean observed correlation of .16 (residual SD = 0.09) and a mean corrected correlation of .22 (residual SD = 0.11). This is markedly lower than the .51 estimate produced by Schmidt and Hunter (1998), but more similar to the estimate of .31 produced by Sackett et al. (2022) based on revisiting prior meta-analyses. We conclude that GCA is related to job performance, but our estimate of the magnitude of the relationship is lower than prior estimates.

What does the ability-performance compatibility principle suggest? What is a major issue with this principle?

Proposes that general abilities predict general job performance, whereas specific abilities predict specific job performance. Only the first half has been rigorously researched; more theory is needed to evaluate the second half.

What is measurement error?

Random error affecting tests or criteria. Measurement error is one type of statistical artifact.

What are practical effects of applicant reactions to selection procedures?

Reactions can indirectly influence applicant pursuit or acceptance of job offers through perceived organizational attractiveness. Reactions may relate to the likelihood of litigation and the success of the legal defense of the selection procedure. Reactions may indirectly affect both validity and utility by their effect on test-taking motivation and loss of qualified applicants, respectively.

What is a caveat of purposefully reducing subgroup differences in a selection process?

Reducing subgroup differences for one group may exaggerate them for another. Ryan et al. (1998) demonstrated that, relative to selecting solely on cognitive ability scores, selecting solely on personality scores would reduce adverse impact against Blacks and Hispanics but would simultaneously increase adverse impact against women

What Are the Major New Developments in Strategies for Reducing Adverse Impact?

Rejected White applicants are more likely to retake the predictor. There is an inverted U-shaped relationship between predictor performance and propensity to retake the selection predictor. Black applicant withdrawal has only small effects on adverse impact → even when withdrawal was related to predictor scores, reducing Black withdrawal would not produce a substantial reduction in AI. Blacks usually have more negative reactions to subgroup differences → stereotype threat.

What are 3 heuristic principles people use to simplify the complex tasks of assessing probabilities and predicting values?

Representativeness: probabilities are evaluated by the degree to which A is representative of B.
Availability: people assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind.
Anchoring: people make estimates by starting from an initial value that is adjusted to yield the final answer; different starting points yield different estimates, which are biased toward the initial values.

Why should practitioners be cognizant of methodological factors that influence subgroup differences?

Subgroup differences tend to be underestimated in incumbent settings because of range restriction. Differences in predictor reliability can distort comparisons of subgroup differences. Therefore, when comparing predictors, reviewing the literature, or conducting a concurrent validation study, realize that many of the same factors attenuating criterion-related validity are also attenuating subgroup differences.

As mentioned in Berry (2005), what does the literature suggest about racial differences in cognitive ability tests regarding differential validity and differential prediction?

Test bias → some aspect of the test causes it to work systematically differently across racial/ethnic subgroups. The literature suggests that observed cognitive ability test validity is somewhat lower for African Americans and Hispanic Americans than for Whites. Available evidence now suggests that cognitive ability tests do exhibit test bias in the form of predictive bias. The non-White regression line typically has a lower slope and intercept than the White regression line, meaning that cognitive ability tests do not predict job performance quite as strongly for non-White subgroup members.

What did Chan et al. (1997) find regarding the effects of test-taking motivation?

Test-taking motivation affects subsequent performance on a parallel test even after the effects of race and performance on the first test are controlled

What are the components of the Cattell-Horn-Carroll (CHC) Model?

The CHC Model is the most widely accepted and empirically validated theory of intelligence.
Perceptual abilities = auditory processing & visual-spatial processing
Vulnerable abilities = short-term acquisition & retrieval, fluid intelligence, processing speed
Expertise abilities = crystallized intelligence, tertiary storage & retrieval, quantitative knowledge

Is g necessary for effective or exceptional domain-specific performance?

The answer clearly is yes for virtually every domain studied by psychologists to date. Although the relative importance may vary across domains, there is always some minimal level of g required, and as the environment becomes increasingly complex, the minimum requirement increases.

Is g important only because we measure it?

The argument is that correlations between g and criteria only exist because people have been preselected on the basis of g. This is a false argument. The influence of individual differences in g will manifest whether or not we acknowledge and measure them. The correlations observed between measures of intelligence and various criteria reflect natural covariation. The covariation does not suddenly exist because we measure g and a criterion. The importance of g comes not from our assessment of it per se, but rather from its relations with a vast array of important criteria. It is the breadth and depth of the g nexus and the robustness of that nexus across cultures, countries, time, and environments that make g important.

Is a single factor model the best representation of the predictor space?

The construct space of mental abilities is best described, not by a single factor model, but rather by a factor hierarchy with numerous specific abilities occupying the lower levels, a small number of group factors at an intermediate level, and a single general factor at the top (Carroll, 1993). The appropriate level of measurement within the hierarchy will depend on the purpose of prediction and the nature of the criterion or criteria that one seeks to predict. Although each of the specific abilities is g-loaded, they each yield reliable, unique variance that may differentially relate to various classes of criteria. Furthermore, even a hierarchical model is inadequate if we consider the larger predictor space relevant to work psychology. As exemplified by Dawis and Lofquist's (1984) Theory of Work Adjustment, abilities, personality, and preferences (i.e., values, interests) will all interact with the environmental features of a job to determine a breadth of individual outcomes (e.g., satisfaction).

What is operational validity?

The correlation between a test and a criterion, free from range restriction and measurement error in the criterion (but not the test)

What is observed validity?

The correlation between a test and a criterion, uncorrected for statistical artifacts.

What did Chan et al. (1997) find regarding the effect of race on subsequent test performance?

The finding that the effect of race on subsequent test performance is mediated partially by motivation also provides evidence that a nontrivial portion of the typical Black-White difference in test performance may be explained through differences in test-taking motivation

Is there a single general mental factor underlying individual differences in specific mental abilities?

The g factor has been shown to be remarkably invariant across (a) different test batteries (Ree & Earles, 1991b; Thorndike, 1987); (b) the method of factor extraction (Jensen, 1998; Jensen & Weng, 1994); and (c) racial, cultural, ethnic, and nationality groups (Carroll, 1993; Irvine & Berry, 1988; Jensen, 1985; Jensen, 1998; Jensen & Reynolds, 1982).

What is a selection ratio?

The percentage of applicants hired. It is also referred to as the hiring rate and is calculated as the number of hired applicants divided by total applicants

What are some recommendations for minimizing the diversity-validity dilemma?

Use job analysis to carefully define the nature of performance on the job, being sure to recognize both technical and nontechnical aspects of performance.
Use cognitive and noncognitive predictors to measure the full range of relevant cognitive and noncognitive KSAOs, as much as is practically realistic.
Use alternative predictor measurement methods (interviews, SJTs, biodata, accomplishment records, assessment centers) when feasible. Supplementing a cognitive predictor with alternative predictor measurement methods can produce sizeable reductions in adverse impact (if they are not too highly correlated).
Decrease the cognitive loading of predictors and minimize verbal ability and reading requirements to the extent supported by a job analysis.
Enhance applicant reactions. Although this strategy has only a minimal effect on subgroup differences, it does not reduce validity and is almost invariably beneficial from a public relations perspective. Simply using face-valid predictors (such as interviews and assessment centers) goes a long way toward enhancing these perceptions.
Consider banding. We emphasize the word "consider" because this remains a controversial strategy among I-O psychologists and will substantially reduce subgroup differences only when there is explicit racioethnic minority or female preference in final hiring decisions.

Did Sackett et al. (2023) find a meaningful difference in GCA validity when using overall performance vs. task performance as the criterion? How does this relate to the findings of Nye et al. (2022)?

Sackett et al. (2023) found no meaningful difference in GCA validity when using overall performance versus task performance as the criterion. Revisiting Nye et al. (2022), they note that Nye et al. report a mean corrected validity of .23 for GCA against a task performance criterion, comparable to the value of .22 produced in their own study.

What is measurement bias?

When individuals who are identical on the construct measured by the test but who are from different subgroups have different probabilities of attaining the same observed score (Berry et al., 2011). Measurement invariance and differential item functioning analyses are ways to test for two different forms of measurement bias.

What is an example of a bias due to the effectiveness of a search set (availability heuristic)?

When asked to compare the frequency of the word 'love' with that of the word 'door', the first seems more frequent. A main reason for this is that, besides the comparison of the words themselves, there is a hidden task of recalling contexts in which these words appear, and it is generally easier to recall abstract contexts than concrete ones.

What is an example of a misconception of chance (representativeness heuristic)?

You have a bag of 10 blue marbles and 10 red marbles. After drawing several blue marbles in a row (with replacement), people expect the next draw to be red, treating chance as a self-correcting process.

Which Strategies Are Most Effective for Reducing Subgroup Differences (Ployhart et al., 2008)?

The most effective categories of strategies involve using predictors with smaller subgroup differences (Category I) and combining/manipulating predictor scores (Category II). The most effective individual strategies are: alternative predictor measurement methods such as interviews and assessment centers (Strategy 1); assessing the entire range of knowledge, skills, abilities, and other constructs (KSAOs; Strategy 4); banding (Strategy 5; but only when using racioethnic minority or female preference); and minimizing the verbal ability requirements of the predictor (Strategy 8; but only to the extent supported by a job analysis).

What is a misconception of regression (representativeness heuristic)?

overlooking regression toward the mean

What is an issue with the CHC Model?

Problem: the CHC Model aims to be a complete taxonomy of cognitive abilities and can be overwhelming at first glance.

List the abilities in the CHC Model that are associated with attention and memory.

short-term memory (Gsm): apprehend and maintain awareness of information that is useful for multi-step problem solving
long-term storage & retrieval (Glr): store and consolidate new information and fluently retrieve the stored information
learning efficiency: presumably measured in trainability tests in which job-relevant knowledge and skills are taught to applicants and then applicants demonstrate how well they can recall the new knowledge or perform the new skills → r = .8 with cognitive ability tests & long-term assessments of job performance
retrieval fluency
processing speed (Gs): automatically and fluently perform relatively easy elementary cognitive tasks
fluid reasoning (Gf): use deliberate and controlled mental operations to solve novel problems

Can specific abilities predict work performance, beyond g?

The incremental validity of specific abilities beyond g for predicting job performance and training performance is not large (ΔR² typically varies from .02 to .06; illustrated in the sketch below).
The incremental validity of specific abilities beyond g for predicting job performance is typically larger than zero (and is similar in magnitude to the incremental validity of job knowledge tests, reference checks, job experience, and biodata measures; see Schmidt & Hunter, 1998).
The incremental validity of specific abilities beyond g is likely larger when specific abilities can be weighted differently/tailored for each specific job, but declines when specific abilities are constrained to have the same predictive weights/regression coefficients across all jobs.
Specific abilities often predict job performance above and beyond g, accounting for 2% or more additional variance in job performance.
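The ΔR² idea in the first point can be illustrated with a small hierarchical-regression sketch (everything below is simulated and hypothetical: the g-loadings, the two specific abilities, and the criterion weights are assumptions chosen only to show the mechanics of computing incremental validity):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulate g plus two specific abilities that share variance with g.
g = rng.normal(0, 1, n)
verbal = 0.8 * g + 0.6 * rng.normal(0, 1, n)
spatial = 0.8 * g + 0.6 * rng.normal(0, 1, n)

# Hypothetical job performance driven mostly by g, with a smaller unique spatial contribution.
performance = 0.5 * g + 0.3 * spatial + rng.normal(0, 1, n)

def r_squared(y, *predictors):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_g = r_squared(performance, g)
r2_full = r_squared(performance, g, verbal, spatial)
print(f"R2(g) = {r2_g:.3f}, R2(g + specifics) = {r2_full:.3f}, "
      f"delta R2 = {r2_full - r2_g:.3f}")   # a small increment, on the order of .02
```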

What is an insufficient adjustment (adjustment and anchoring heuristic)?

Adjustments away from the anchor are typically insufficient; the final estimate stays too close to the starting point.

List the abilities in the CHC Model that are associated with knowledge and expertise.

verbal comprehension & knowledge (Gc): language ability and general knowledge; classified among the strongest single cognitive predictors of job performance
domain-specific knowledge (Gkn): declarative and procedural knowledge related to specialized interests
academic abilities → reading/writing ability & quantitative knowledge (Grw and Gq)

List the abilities in the CHC Model that are associated with specific sensory modalities.

visual-spatial processing (Gv): perceive, discriminate, and manipulate images
auditory processing (Ga): perceive, discriminate, and manipulate sounds

