Preliminary Exam-General


Bommer, William H.; Johnson, Jonathan L.; Rich, Gregory A.; Podsakoff, Philip M.; MacKenzie, Scott B. (1995). On the Interchangeability of Objective and Subjective Measures of Employee Performance: A Meta-Analysis

(META) Subjective and Objective performance ratings, while related (.389), are not interchangeable and do in fact capture different information (Unless they are measures of the SAME construct at the SAME level)

Hough, Leaetta M.; Oswald, Frederick L. (2008). Personality Testing and Industrial-Organizational Psychology: Reflections, Progress, and Prospects

7 questions regarding personality testing in I/O. If the criterion is complex, the most effective predictor is also complex. Narrow predictor-criterion pairs (when aggregated) help us understand aggregated criteria (for example, different aspects of conscientiousness predict different aspects of performance).

Essentials of Personnel Assessment and Selection (2015)

Chapter 5: Minimizing Error in Measurement
Same as the rest of the validity/reliability chapters; I think I've hit saturation here.

Breaugh, James A. (2009). The use of biodata for employee selection: Past research and future directions

Overview of biodata research in selection.
Biodata has been shown to be a strong predictor of performance and turnover.

Adler, Seymour; Campion, Michael; Colquitt, Alan; Grubb, Amy; Murphy, Kevin; Ollander-Krane, Rob; Pulakos, Elaine D. (2016). Getting Rid of Performance Ratings: Genius or Folly? A Debate

The center of the debate (split this into two cards: one for, one against).

Background: Despite being created to help managers guide employees toward increased performance, many find ratings useless and demotivating!

Arguments for. First, let's clarify the question: should we continue to collect psychometrically meaningful, quantitative performance index data?
- Lack of expected change in performance management practice if ratings were removed (nothing would really change)
- Performance is always evaluated somehow; it can be as informal as "knocked it out of the park" or "isn't cut out for sales." Even purely developmental cultures need an image of where an employee is now and where they want to end up; you need ratings to develop
- "Too hard" is no excuse: appeal to pride (this is what we do!), and legally ratings have to be in place, so get back to rating
- Ratings have merit for orgs! We all agree that there is some agreement with true-score JP. Try to establish systematic validation of frequent, informal check-ins, I dare ya! Strong performers naturally come to individually-recognizing contexts
- Artificial tradeoffs are leading organizations to abandon ratings inappropriately. How do we determine the accuracy of the alternatives? Narrative only?
- We should instead focus on IMPROVING ratings, e.g., calibration meetings between managers

Arguments against:
- Disappointing interventions: the introduction of improved scales led to such meager results that Landy and Farr (1980) called for an end to scale improvement research. While FOR training helps a bit, it relies on the assumption that raters lack the ability to rate accurately, when the real problem is conflicting goals and motivations
- Disagreement between multiple raters (sports raters, who have a simpler task, often disagree too!). Raters, regardless of position, are not as in agreement as different forms of paper-and-pencil tests (we can hit this one by talking about the meaningful disagreement)
- Failure to develop adequate criteria for rating evaluation: lack of errors does not equal accurate ratings, and even using errors as a criterion, we don't know how much counts AS an error. Accuracy measures are often entirely unrelated to one another, and it is almost impossible to measure accuracy in the field
- Weak performance x rating relationships, due to many things: state of the economy, rater political goals, etc.
- Conflicting goals of ratings in organizations: org decisions and development are incompatible, as one is between-person vs. within-person, with different forms of range restriction (I don't buy this one)
- Inconsistent effects of ratings on performance: feedback is not acted upon for a variety of reasons (ego threat, what have you)
- Weak relationship between rating research and practice

Hough, Leaetta M.; Oswald, Frederick L.; Ock, Jisoo (2015). Beyond the Big Five: New directions for personality research and practice in organizations

Discusses the four main personality taxonomies (Big Five, HEXACO, Circumplex, Nomological-Web Clustering).

Issues with the Big Five: its lexical-hypothesis, factor-analysis-based creation means it was at the mercy of what was originally included, and thus is likely deficient. Also, the facet-level factors do not always load only onto their respective higher-order traits, but can cross-load on multiple.
Circumplex model: if the standard Big Five is orthogonal, this is oblique; it lets the traits correlate, with those closer in the circle correlating more than others.

Spector, Paul; Brannick, Michael (2011). Methodological Urban Legends: The Misuse of Statistical Control Variables

Control variables are being included in regressions lackadaisically and are being misused due to urban legends. If controls are to be included, they should be explicitly and theoretically justified. Demographic variables should not be used as proxies for the actual underlying variables one wishes to study.

There must be theory linking controls and the variables of interest; control variables don't just magically sanctify your data (this is the urban legend, or the "purification principle"). Note: remind self what suppression is in regression.
Removing variability attributable to a control variable that is CAUSED BY the outcome variable removes the effect one wishes to study = throwing the baby out with the bathwater (in other words, controlling is only effective if contamination/spuriousness actually occurred!).
- Contamination = a Z variable affects the MEASUREMENT of X or Y (e.g., desirability). Contamination of X and Y by the same variable = inflated correlation; contamination of X and Y by different variables = deflated correlation
- Spuriousness = a Z variable affects the X or Y variables directly (confounding)
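A minimal simulation sketch (my own illustration, not from the paper) of the "baby out with the bathwater" point: partialling out a control Z that is caused by the outcome Y attenuates a real X -> Y effect. All values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)      # true effect of X on Y
z = 0.8 * y + rng.normal(size=n)      # Z is an outcome of Y, not a confound

def partial_corr(a, b, control):
    """Correlation of a and b after regressing the control variable out of both."""
    resid_a = a - np.polyval(np.polyfit(control, a, 1), control)
    resid_b = b - np.polyval(np.polyfit(control, b, 1), control)
    return np.corrcoef(resid_a, resid_b)[0, 1]

print("zero-order r(x, y):       ", round(np.corrcoef(x, y)[0, 1], 3))   # ~ .45
print("r(x, y) controlling for z:", round(partial_corr(x, y, z), 3))     # noticeably smaller
```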

Schmidt (1971). Composite vs. Multiple Criteria: A Review and Resolution of the Controversy. Personnel Psychology.

Use a composite criterion when you want to validate tests (predictive validity) and multiple criteria when you want to understand the constructs at play.

Sackett, Paul R.; Shewach, Oren R.; Keiser, Heidi N. (2017). Assessment centers versus cognitive ability tests: Challenging the conventional wisdom on criterion-related validity

Separate meta-analyses of cognitive ability and ACs as predictors of performance indicate cognitive ability (meta r = .51) is stronger than ACs (meta r = .37) (Schmidt & Hunter, 1998). Focusing on 17 samples in which AC and cognitive ability scores were obtained for the same participants (controlling for job type, etc., unlike the previous metas) shows ACs (meta r = .44) stronger than cognitive ability (meta r = .22). This discrepancy is due to previous metas using ACs on populations already restricted in range on cognitive ability, and the use of less cognitively loaded AC criteria in validation research. This is Sackett et al., 2017. Remember this one. ACs GOOD.

Why have we seen deflated criterion-related validity figures for ACs and inflated ones for GMA?
"(a) Ability tests are commonly used to predict narrower, more cognitively loaded criteria (types of job performance) than ACs; (b) ACs are broader than ability tests in the constructs they assess and are typically used to predict similarly broad criteria (GMA is usually predicting task performance, ACs are usually predicting broader ideas such as "leadership" or "communication"); (c) there is an emerging body of research showing lower ability test validity against broader criteria than against narrow, cognitively loaded criteria ("criterion-related validity was 39% lower against a broad criterion composite than against task performance alone"; Gonzalez-Mule et al., 2014); (d) samples are differentially affected by range restriction for ACs than for ability tests, and validity must be examined in reference to the same test-taking population for both." (Sackett et al., 2017, p. 1435)

Spain, Seth M.; Harms, Peter; LeBreton, James M. (2014). The dark side of personality at work

A general overview of the dark personality traits, issues with assessing them, and future research directions.

Negatively evaluative terms like EVIL were left out of the Big Five's development. When added back in, you get the Big Seven (adding positive and negative valence, i.e., self-evaluativeness; Waller & Zavala, 1993).
The Dark Triad:
- Machiavellianism (willing to manipulate others, though not necessarily better at manipulating than others): neuroticism +, conscientiousness -
- Narcissism (subclinical level of narcissistic personality disorder): openness, extraversion, neuroticism +
- Psychopathy (impulsivity and thrill-seeking + low empathy and anxiety): conscientiousness and neuroticism -, openness and extraversion +
All three DT traits are negatively correlated with agreeableness. In terms of HEXACO, all three relate to Honesty-Humility, pointing to evidence that this factor may be a general dark factor.
A META by O'Boyle et al. (2012) shows the Machiavellianism and psychopathy DT traits weakly negatively correlate with JP. DT traits are negatively related to OCBs and positively related to CWBs and unethical decision-making.

Wainer, Howard. The most dangerous profession: A note on nonsampling error.

A note on nonsampling (non-random sampling) error and how it can be overcome (or at least assessed using multiple imputation).
We need to test missing data by collecting a small random sample.

Johnson, Jeff W.; Carter, Gary W. (2010). Validating Synthetic Validation: Comparing Traditional and Synthetic Validity Coefficients

Applies synthetic validation to a selection system development project. Collected JA data from 4,725 incumbents and 619 supervisors to identify 11 job families and 27 components. Developed 12 tests to predict performance on components. Found synthetic validity coefficients similar to within-family validity coefficients for most job families.
Validities were highest when predictors were weighted according to the number of relevant job components and when job component criterion measures were unit weighted.
DQ: synthetic validation and the future of job analysts.

Synthetic validation = validity in a specific situation is inferred by identifying the basic components of the job, determining the validity of a selection instrument for predicting each individual component, then synthesizing those validities into a whole (with varying weighting schemes; Cascio, 1987 is the principal citation). Used to assemble test batteries and calculate validity coefficients for jobs where a traditional validation study is difficult.
SV assumptions: 1) jobs in a family that share job components also have similar predictors and thus can be predicted by similar instruments; 2) the validity of a test for predicting performance on a job component is similar across jobs.
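A minimal sketch (my own illustration, not the authors' code) of the synthetic-validity idea: infer a job-level validity coefficient from component-level validities. It assumes standardized variables and a unit-weighted criterion composite; all numbers are made up.

```python
import numpy as np

# validity of one test against each of 3 job components relevant to a job family
r_test_components = np.array([0.30, 0.25, 0.20])

# intercorrelations among the 3 criterion components
R_components = np.array([
    [1.00, 0.40, 0.35],
    [0.40, 1.00, 0.30],
    [0.35, 0.30, 1.00],
])

# validity of the test against a unit-weighted composite of the components:
# cov(x, sum(y)) / sd(sum(y)) with standardized variables
synthetic_r = r_test_components.sum() / np.sqrt(R_components.sum())
print(round(synthetic_r, 3))  # ~ .33
```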

Klein, Katherine J.; Kozlowski, Steve W. J. (2000). From Micro to Meso: Critical Steps in Conceptualizing and Conducting Multilevel Research

Addresses 4 main decisions to be made when working with multilevel organizational data: construct/measurement issues, model specification, research design/sampling, and data analyses.

Findings at one level of analysis (between individuals) do not neatly generalize to other levels (between teams); ecological correlations based on aggregate data generally inflate the lower-level relationships. Generalizing from large to small = ecological fallacy; generalizing from small to large = atomistic fallacy.

Construct/Measurement Issues. Three types of higher-order constructs:
- Global properties: characterize the team as a whole and are easy to observe; properties of the team, not the individuals (e.g., team function/location); a single expert can report this data for the whole team
- Shared properties: experiences, attitudes, values, etc. that are common across team members (can be explained by factors which limit variability, such as org attraction, socialization, etc.); why these properties are shared must be stated theoretically in the manuscript; gathered from individual team members and aggregated only if strong agreement is present
- Configural properties: capture the pattern/variability of individual differences within a team (e.g., team age diversity, team members' contributions to performance)

Model Specification.
- Single-level models: all analysis is done at the individual or team level; straightforward as long as the single-level operationalization is justified
- Cross-level models: describe relationships between variables at different levels of analysis. Can be direct effect (team-level predictor and individual-level outcome), moderator (interaction between an individual-level predictor and a group-level predictor affects an individual-level outcome), or frog-pond (effect of standing within a group on individual-level outcomes, e.g., performance relative to teammates predicting self-efficacy)
- Homologous multilevel models: the specified relationship between variables holds at multiple levels of analysis

Research Design/Sampling. Just like single-level range restriction, we want to avoid between-unit range restriction on variables of interest (homogeneity within teams is alright, but across teams, such as from org values, is problematic).

Data Analyses. rwg = within-group agreement relative to chance; ICC(1) = estimate of the proportion of total variance explainable by group membership (larger = more agreement; for small groups use eta-squared); ICC(2) = how reliable are the group means in the sample? A small ICC sketch follows below. RETURN HERE BEFORE DOING MULTILEVEL RESEARCH, BUT THIS WON'T BE ASKED ON PRELIMS.
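A minimal sketch (my own, assuming the standard one-way ANOVA formulas, e.g., as popularized by Bliese) for ICC(1) and ICC(2) from individual ratings nested in groups; the data are made up and group sizes are equal for simplicity.

```python
import numpy as np

# ratings keyed by group (equal group size k for simplicity)
groups = {
    "team_a": [3, 4, 4, 5],
    "team_b": [2, 2, 3, 3],
    "team_c": [4, 5, 5, 5],
}

scores = np.array([v for vals in groups.values() for v in vals], dtype=float)
labels = np.array([g for g, vals in groups.items() for _ in vals])
k = 4                       # group size
g = len(groups)             # number of groups
grand_mean = scores.mean()

group_means = np.array([scores[labels == name].mean() for name in groups])
ms_between = k * np.sum((group_means - grand_mean) ** 2) / (g - 1)
ms_within = sum(np.sum((scores[labels == name] - m) ** 2)
                for name, m in zip(groups, group_means)) / (g * (k - 1))

icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
icc2 = (ms_between - ms_within) / ms_between   # reliability of group means
print(round(icc1, 2), round(icc2, 2))
```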

Highhouse, Scott (2009). Designing Experiments That Generalize

As long as the sample will not affect the operation of the construct(s) of interest, sample source doesn't matter nearly as much as people think it does!

Murphy, Kevin R.; Russell, Craig J. (2017). Mend It or End It: Redirecting the Search for Interactions in the Organizational Sciences

Authors argue that moderation studies in the org sciences often lack statistical power and, due to how studies are designed, only ever find small effect sizes (low reliability of multiplicative terms + misuse of transformations). Suggest only hypothesizing moderators when they can be meaningfully tested.
Come back to this one if time permits; sooooo dry...

Anderson, Neil. Handbook of Industrial, Work and Organizational Psychology

Chapter 6: Assessment of Individual Job Performance, Past and Future. Only write down what you didn't learn from criterion theory: JP assessment (of different criteria types) can be grouped into organizational records (objective) and subjective evaluations (human judgment like ratings/rankings). Everything else is just a survey of criterion theory.

Chapter 17: Utility Analysis. Explains the history of utility analysis (modeling the expected gains and losses due to selection test implementation or interventions, e.g., turnover vs. money saved). Suggests multi-attribute utility analysis as an alternative to the standard single-attribute utility analysis (that single attribute is, I assume, job performance).

Utility analysis (Cronbach & Gleser, 1965).
The gold standard of UA: the Brogden-Cronbach-Gleser approach. Measures utility gains in the "dollar criterion" (through the SDy term, which translates performance into dollar value). Hiring the highest predicted performers leads to the highest predicted utility gain. Due to how the equation is set up, even small increases in the criterion-related validity of a selection instrument can result in huge utility gains.
Multi-attribute utility (MAU) models (started in the 60s and 70s): considering, for example, applicant reactions, adverse impact, organizational image, AND job performance gains from a selection process (makes sense to model all of our goals, especially since they all somehow contribute to our bottom line).
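A minimal sketch (my own, hedged) of the Brogden-Cronbach-Gleser utility logic described above; the published equation has several variants (e.g., with a tenure term), so treat the terms and numbers below as illustrative assumptions only.

```python
def bcg_utility(n_selected, validity, sd_y_dollars, mean_z_of_hires,
                n_applicants, cost_per_applicant):
    """Expected dollar gain from using a selection test for one cohort of hires."""
    gain = n_selected * validity * sd_y_dollars * mean_z_of_hires
    cost = n_applicants * cost_per_applicant
    return gain - cost

# hypothetical numbers: 50 hires from 500 applicants, r_xy = .40, SDy = $12,000,
# average predictor z-score of those hired = 1.1, $30 per applicant to test
print(bcg_utility(50, 0.40, 12_000, 1.1, 500, 30))   # ~ $249,000
```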

Salgado, Jesús F.; Viswesvaran, Chockalingam; Ones, Deniz S. (2001). Predictors Used for Personnel Selection: An Overview of Constructs, Methods and Techniques

Designing a selection system from the ground up: what types of constructs are used (cognitive ability, personality, biodata, etc.), how they are administered, and validity issues with each. Give this another look later.

All cognitive ability measures are essentially just g; specialized abilities rarely account for incremental variance above g. Psychomotor/physical ability tests are used in less than 10% of companies, but when they are used there are generally 3 factors: muscle strength, endurance, and movement quality. Psychomotor/perceptual abilities rarely add incremental criterion-related validity. Male-female differences exist, though (look into minimum cutoffs instead of selecting the highest performers). Enough; just look at the Sackett et al. (2021) paper for an update.

Borman, Walter C.; Motowidlo, Stephan J. (1997). Task performance and contextual performance: The meaning for personnel selection research

Distinguishes between task and contextual performance, and considers OCBs as types of contextual performance. Suggests that supervisors weight contextual and task performance equally in their overall ratings. Suggests that personality predicts overall performance specifically because it predicts the OCB portion of the overall performance ratings supervisors give.

Brett, Joan F.; Atwater, Leanne E. (2001). 360° feedback: Accuracy, reactions, and perceptions of usefulness

Examined how discrepancies between 360 and self ratings related to feedback reactions, perceived feedback accuracy, perceived usefulness, and receptivity. Found that less favorable ratings were related to negative reactions and beliefs that the feedback was less accurate (makes sense), which were in turn related to perceptions of the feedback as less useful. Goal orientation was related to perceptions of usefulness several weeks post-feedback.

While goal orientation was related to perceptions of feedback usefulness several weeks post-feedback in the Brett and Atwater (2001) study, do we feel that more basic moderators should have been examined first? For example, general negative reactions to negative feedback may be more a function of neuroticism than of low goal orientation. Though the authors mention personality briefly in the "future directions" section, why do we suppose that they began with goal orientation? Can we think of any additional (ID-related) explanations for the found relationships?

Judge, Timothy A.; Higgins, Chad A.; Thoresen, Carl J.; Barrick, Murray R. (1999). The Big Five Personality Traits, General Mental Ability, and Career Success Across the Life Span

Investigates the relationship of the Big Five and cognitive ability with career success (longitudinal study following participants from childhood to retirement). Found that: conscientiousness positively predicted JS and income; neuroticism negatively predicted income; personality correlated with career success above and beyond cognitive ability; and both childhood and adult personality contributed unique variance to career success.
Career success = extrinsic (income, status, etc.) + intrinsic (job satisfaction).

Podsakoff, Philip M.; MacKenzie, Scott B.; Podsakoff, Nathan P. (2012). Sources of Method Bias in Social Science Research and Recommendations on How to Control It

Is method bias actually a problem? If so, for what kinds of measures, and how do we control for it? (This is an updated version of Podsakoff et al., 2003.)

CMB can inflate reliability estimates (since we can't separate systematic method variance from construct-related variance), thus underestimating meta-analytic correlations (corrections for attenuation divide by an inflated reliability). Baumgartner & Steenkamp (2001), with a multinational sample, found that response styles (extreme responding, etc.) accounted for 27% of the variance in correlations. Note: check out the CFA-marker approach.

Smither, James W.; London, Manuel; Reilly, Richard R. (2005). Does Performance Improve Following Multisource Feedback? A Theoretical Model, Meta-Analysis, and Review of Empirical Findings

META. Found that: multisource (supervisor, peer, and subordinate) ratings relate to other leadership effectiveness measures, suggesting that multiple sources conceptualize leader performance similarly. Performance rating improvement over time is generally small (k = 24). Improvement in response to feedback is likely when feedback indicates change is necessary, and recipients have a positive feedback orientation, perceive a need to change behavior, believe the change is feasible, set goals, and take improving action.

Chapman, Derek S.; Uggerslev, Krista L.; Carroll, Sarah A.; Piasentin, Kelly A.; Jones, David A. (2005). Applicant Attraction to Organizations and Job Choice: A Meta-Analytic Review of the Correlates of Recruiting Outcomes

META. Predictors of applicant attraction: job-org characteristics (salary, benefits, etc.), recruiter behaviors (friendliness, etc.), perceptions of the recruiting process (face validity + perceived respect), perceived fit, and hiring expectancies. NOT predictors: recruiter demographics, perceived alternatives.
ALSO, MEDIATION: the predictors above --> applicant attitudes/intentions --> job choice.

Hoffman, Brian; Lance, Charles E.; Bynum, Bethany; Gentry, William A. (2010). Rater Source Effects Are Alive and Well After All

Re-examines the impact of rater source on multisource performance ratings (using CFA), since the authors believe systematic rater source effects are more important than rater perspective effects (despite the prevailing thought that rater source effects don't matter). Found that the structure of multisource performance ratings = general performance + dimensional performance (task/contextual) + idiosyncratic rater + source factors. Source factors explain far more variance in performance ratings and general performance explains much less (though idiosyncratic effects still explained the most variance). Since source factors are so important, we really ought to keep 360 feedback.

Hoffman et al. recount an anonymous reviewer's note that prior research has conceptualized source effects from different sources as (at least implicitly) completely orthogonal, with the authors instead finding 49% shared variance. My question is as follows: is this an accurate characterization of the literature? It seems logical to believe that, since the same individual is being observed, a non-zero amount of overlap should be present, and I find it hard to believe that previous research would think otherwise.

Klein, Katherine J.; Zedeck, Sheldon. Introduction to the Special Section on Theoretical Models and Conceptual Analyses: Theory in Applied Psychology: Lessons (Re)Learned

Results of a JAP call for new theories. Received 91 submissions, suggesting that scholars are eager to explore new theoretical outlets (maybe 20% will be accepted, given the need for clear, consistent theory building). Claims that JAP has always been open to publishing nonempirical manuscripts of note.
Some recommendations for creating good theory:
1. Good theory offers novel insights (make it clear early in the manuscript what novel contributions are present)
2. Good theory is interesting (I may disagree here; this is one of the reasons we're in the replication crisis now)
3. Good theory is focused and cohesive (and only as complex as it needs to be)
4. Good theory is grounded in extant lit but is more than a review (make sure editors know you know your stuff, but don't overwhelm)
5. Good theory presents clear constructs and clear, thoughtful links between constructs within a model (just because a construct is clear to you doesn't mean it will be clear to others)

Proactive Behavior in Organizations

Reviews the nomological network of proactive behavior and the research domains that have addressed it, then gives recommendations for studying proactive personality.

Proactive behavior: taking initiative in improving current circumstances or creating new ones + challenging the status quo. Job crafting seems to be a proactive behavior by this definition. Apparently, as of 2000, there is no proactive behavior scale.
Important antecedents: proactive personality, job involvement, need for achievement, org culture/norms, management support.
Important outcomes: increased performance, career success, feelings of control, improved job attitudes.
Orbiting constructs:
- Proactive personality (there is a scale for this); many of the same outcomes
- Personal initiative (measured via interview): consistent with the org mission + long-term + action-oriented + persistent in the face of obstacles + self-starting and proactive
- Role breadth self-efficacy: employees' perceived ability to carry out broader, more proactive tasks
- Taking charge
I'm...gonna call it here; I don't see this being relevant.

Van Iddekinge, Chad H.; Lanivich, Stephen E.; Roth, Philip L.; Junco, Elliott. Social Media for Selection? Validity and Adverse Impact Potential of a Facebook-Based Assessment

Summary: Had recruiters rate applicant Facebook pages and compared these ratings with job performance and turnover after jobs were accepted. Found that Facebook ratings had no predictive power at all, and actually favored Whites and women.

The Earth Is Not Round (p = .00)

Tackles the myth that blindly replacing null hypothesis testing (p values) with effect sizes will always improve our science.

Kuncel, Nathan R.; Sackett, Paul R. (2014). Resolving the assessment center construct validity problem (as we know it)

The construct validity problem was never real, but there may be a general AC factor which makes things difficult.

Dimension ratings (which are created by combining exercise ratings) are the dominant source of variance in assessment center ratings (more so than exercise-specific variance), but a general factor of AC performance still captures the most variance (and we don't know what this general factor is! A combination of IDs? Test-wiseness?).
While the dimension ratings based directly on the exercises (PEDRs) tend to reflect more exercise variance than dimension variance, the combination of these PEDRs into overall dimensions is where we see the dimensions start to explain more variance than exercises. Read later for real.

Spector, Paul E. (2006). Method Variance in Organizational Research: Truth or Urban Legend?

The current understanding of CMV as omnipresent and affecting all variables is mistaken and oversimplified; CMV is unlikely to affect correlations to a significant degree.

It is true that measuring two variables from the same source in the same way will affect the numbers generated, but some variables being affected does not equal ALL variables being affected. Monomethod vs. multimethod correlations do not always differ significantly (additionally, we often see nonsignificant findings, and we would expect far fewer if everything were correlated via CMV). Moorman and Podsakoff (1992) found through literature review that SD accounted for a very small amount of variance across 36 studies, and that SD is a suppressor if anything; Ones et al. (1996) show small effect sizes for SD in personality-based job tests. Rorer (1965) found that acquiescence changed by test (different respondents acquiesced to different scales).
The ultimate takeaway: do away with the umbrella term of CMV; instead focus on the specific type of method variance (e.g., social desirability) and deal with partialling that out or otherwise procedurally controlling for it.

Schmidt, Frank L.; Hunter, John E. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings

The updated 2016 version says that the highest-validity selection batteries are GMA + integrity tests (.78) and GMA + structured interview (.74), both for performance and training. When combined with Sackett (2021), perhaps GMA + structured interview is best (100-year META).

Chan, David (2009). So why ask me? Are self-report data really that bad?

4 alleged issues with self-report data (and their debunking):
1: Construct validity. Construct validity is allegedly reduced due to the large number of systematic error variances (item comprehension, social desirability, etc.). However, many of these errors (loadings on a method factor, CMV) depend on the context in which the test is given, which is somewhat controllable by researchers.
2: Correlation interpretation. Due to CMV, self-report data are allegedly unsuited to provide information on correlations. However, the random error contained in these measurements leads to less-than-perfect reliability, which deflates correlations; even if CMV is present, it is balanced out.
3: Socially desirable responding. Not all scales are equally susceptible; certainly not all self-report measures. The Ones et al. (1994) meta found extant but small correlations between desirability and personality scores and performance criteria. Essentially, SD responding does exist, but its effect on actual data is small.
4: Overvaluation of other-report measures. Growing beliefs that it is always better to use other-report measures if possible and that we can be more confident in the validity of other-report data. But there are constructs which are better measured by self-report (job satisfaction), other-report (likability), and a combination of both (performance).

Outtz, James L. (2002). The Role of Cognitive Ability Tests in Employment Selection

Given that (a) cognitive ability tests can be combined with other predictors such that adverse impact is reduced while overall validity is increased, and (b) alternative predictors with less adverse impact can produce validity coefficients comparable to those obtained with cognitive ability tests alone, sole reliance on cognitive ability tests when alternatives are available is unwarranted (even more so when considering Sackett et al. (2021), which places structured interviews above GMA in terms of validity when adjusted for range restriction overcorrection).

Judge, Timothy A.; Cable, Daniel M. (1997). Applicant Personality, Organizational Culture, and Organization Attraction

Found that Big Five traits can be used to predict which dimensions of org culture are preferred by applicants (23% of variance explained):
*High N leads to less liking for innovative and decisive cultures
*High E individuals prefer aggressive, team-oriented cultures and are less attracted to supportive cultures
*High O individuals prefer innovative cultures and avoid detail/team-oriented cultures
*High A individuals prefer supportive, team-oriented cultures and dislike aggressive, outcome-oriented, and decisive cultures
Both objective and subjective person-org fit predict org attraction.
Mediation: objective fit --> subjective fit --> org attraction.

Rosenthal, R.; DiMatteo, M. R. (2001). Meta-Analysis: Recent Developments in Quantitative Methods for Literature Reviews

Advantages of meta-analytic procedures include: seeing the "landscape" of a research domain; keeping statistical significance in perspective; minimizing wasted data; becoming intimate with the data summarized; and finding moderator variables.
You can fix "garbage in, garbage out" by weighting studies by quality.
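A minimal sketch (my own, not from the article) of the basic meta-analytic move the review describes: a weighted average effect size, here weighting study correlations by sample size (quality weights could be substituted). Numbers are made up and the sampling-error term is an approximation.

```python
import numpy as np

r = np.array([0.25, 0.10, 0.40, 0.30])     # study correlations
n = np.array([120, 80, 300, 150])          # study sample sizes (or quality weights)

r_bar = np.sum(n * r) / np.sum(n)                          # weighted mean r
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)         # observed variance of r
var_sampling = ((1 - r_bar ** 2) ** 2) / (n.mean() - 1)    # expected sampling error (approx.)
print(round(r_bar, 3), round(var_obs - var_sampling, 4))   # mean effect, residual variance
```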

Shavelson, Richard J.; Webb, Noreen M.; Rowley, Glenn L. (1989). Generalizability theory

An introduction to generalizability theory (GT), which is used to examine the dependability (reliability) of measurements on specific populations. See page 929 for an example of how to do a G study.

Classical test theory (again): observed score = true score + error. However, the error portion can be day-to-day, item-centric, respondent-centric, etc. Generalizability theory means that a score's usefulness depends on how accurately we can generalize to a wider set of situations (we want the mean of all acceptable possible observations). GT doesn't do reliability (no interest in true score); GT cares about the above. GT is the ANOVA of reliability (error)...maybe I'll understand what that means by the end of the article... seems like the main difference is identifying and accounting for multiple sources of error instead of throwing everything into "random error." More judges = more universes of possibilities = more potential generalizability of the mean. GT always assumes steady behavior (behavior patterns do not change over time), so only use GT when the time interval is such that we would not expect behavior to meaningfully change.
Vocab:
- Relative decisions = individual differences-focused; absolute decisions = scores can be interpreted without reference to other cases
- Generalizability (G) studies (estimate the magnitude of as many error sources as possible) vs. decision (D) studies (use info from G studies to design measurement which minimizes error)
- "Facets" = error sources (person, occasion, item, residual); levels of facets = the particular conditions within a facet (e.g., the specific items or occasions sampled)
Start from "Conceptual Underpinnings" and really understand these vocab words before moving on.
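A minimal sketch (my own, assuming a simple persons x items G-study design) of how GT splits variance into components and forms a generalizability coefficient for relative decisions; the small data matrix is made up.

```python
import numpy as np

scores = np.array([   # rows = persons, columns = items
    [4, 5, 3, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
], dtype=float)
n_p, n_i = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)
item_means = scores.mean(axis=0)

ms_p = n_i * np.sum((person_means - grand) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((item_means - grand) ** 2) / (n_i - 1)
resid = scores - person_means[:, None] - item_means[None, :] + grand
ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i          # person (universe-score) variance
var_pi_e = ms_res                      # person x item interaction + residual error

# generalizability coefficient for relative decisions with n_i items
g_coef = var_p / (var_p + var_pi_e / n_i)
print(round(g_coef, 2))
```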

Zickar, Michael J.; Carter, Nathan T. (2010). Reconnecting With the Spirit of Workplace Ethnography: A Historical Review

Gives the history of workplace ethnography. Explains the benefits of workplace ethnography and why we should start using it more again.
Essentially an anthropological approach: study work by working the actual job alongside the people you study. Data collected = autobiographical accounts of workers, diaries, observations, etc.

Carter, Nathan T.; Dalal, Dev K.; Boyce, Anthony S.; O'Connell, Matthew S.; Kung, Mei-Chuan; Delgado, Kristin M. (2014). Uncovering curvilinear relationships between conscientiousness and job performance: How theoretically appropriate measurement makes an empirical difference

Authors note that current research on the curvilinear relationship between job performance and conscientiousness has yielded mixed results. HOWEVER, they claim this is because dominance models have been used to measure C instead of ideal-point models (which, as we know, are better for personality constructs). When conscientiousness is measured using ideal-point models, the curvilinear relationship reveals itself almost all the time AND predictive validity increases.

Hambrick, Donald C. (2007). The Field of Management's Devotion to Theory: Too Much of a Good Thing?

Blind devotion to theory impedes our ability to understand the aspects of the management world for which we don't/can't have theory (specifically through journal editors turning away promising results if they lack theoretical contributions). Other fields don't do this (the example given is that a 1930s epidemiologist attempting to prove cigarettes cause cancer would be rejected by management standards). This devotion grew out of a need to prove academic rigor/worthiness in the 50s. We don't test simple ideas multiple times (replication crisis).
Two fixes:
1. Management journals should publish atheoretical papers if the results are of consequence (stimulate future research, etc.)
2. Create a new journal specifically to test theories in a straightforward fashion (tests, replications, etc.)

Ones, Deniz S.; Viswesvaran, Chockalingam (2011). Individual Differences at Work

Book chapter examining the role of individual differences at work (cognitive ability and personality specifically); break this one into multiple flashcards; this will be good to cite often.

Individual differences are most useful as predictors of workplace criteria such as performance and satisfaction (cite them on this).

General cognitive ability correlates around .5 with overall performance; integrity tests (conscientiousness + agreeableness + stability, otherwise known as Digman's alpha factor) generally correlate .4; conscientiousness correlates .20 at best (though consistently across jobs); and personality and cognitive ability offer incremental validity over each other.

The importance of g increases as job complexity increases (as does the criterion-related validity, ranging from .2 in low-complexity jobs to over .5 in complex ones). This g x performance relationship holds in the face of many demographic moderators. Personality drives variability in OCB and CWB.

Gatewood, Robert D.; Feild, Hubert S.; Barrick, Murray R. (2019). Human resource selection

Ch. 9 summary: excellent overview of what biodata is and how to use it effectively and legally in the selection context. Consulting reference. Great section on how to create a biodata application form.
Biodata: applicant information empirically developed and scaled to maximize predictive ability (categorized by response type and behavior type). Biodata must be analyzed and weighted with respect to the construct of interest.

APA handbook of industrial and organizational psychology, Vol. 2: Selecting and developing members for the organization (2011)

Chapter 4: Individual differences, their measurement, and validity.

In the person vs. situation debate, individual differences and situational characteristics account for approximately the same amount of variance (Funder & Ozer, 1983).
Cognitive ability (Carroll, 1993): g --> mid-level 8 abilities such as memory and crystallized intelligence --> facets.
Personality: the facet level of personality should be studied more, as it helps reveal more granular relations between personality and performance (e.g., the order facet of C contributing to early performance and the industriousness facet contributing to later performance; Moon, 2001).
Also covers values and narratives.

Farr, James L.; Tippins, Nancy T. Handbook of Employee Selection

Choosing a psychological assessment (based on reliability and validity).

3 main types of reliability threats (errors):
- Due to items: the more unique variance between items (lack of measuring the same factor/construct), the lower reliability will be
- Due to raters: combat by standardizing the behaviors observed by raters (unless variability in observed behavior may be meaningful) and by having enough raters to even out idiosyncratic raters
- Due to momentary, time-limited factors: standardize, standardize, standardize

Validity (content):
- Sufficiency: ensure that the model covers all aspects of the construct, and make a test based on the model
- Contamination: attempt to control for construct-irrelevant variance (item complexity making it hard for ESL students, item content referencing things that only one group understands, a la "how much is a soda at the corner store")

Zickar, Michael J.; Broadfoot, Alison A. (2009). The partial revival of a dead horse? Comparing classical test theory and item response theory

IRT vs. CTT, babeyyyyy (specifically, CTT still has uses in an IRT world).
Come back to this for a great summary of IRT parameters.

Barrick, Murray R.; Mount, Michael K. (1991). The Big Five Personality Dimensions and Job Performance: A Meta-Analysis

Examined the relationship of the Big Five to job proficiency, training proficiency, and personnel data. Found that: conscientiousness showed relationships with all three performance criteria for all occupational groups; extraversion predicted performance in social interaction jobs (management and sales); and openness and extraversion predicted training proficiency across occupations.

Ryan, Ann Marie; Sacco, Joshua M.; McFarland, Lynn A.; Kriska, S. David (2000). Applicant self-selection: Correlates of withdrawal from a multiple hurdle process

Examined the process of self-selecting out at various stages of a multiple-hurdle selection battery. Those who self-selected out early were categorically different from those who selected out late. Those who stayed in and those who selected out were also significantly different on the basis of: negative job/org perceptions, job (obtaining) commitment, job attitudes, work-family conflict, and need to relocate (less need = more likely to stay in).

Austin, James T.; Villanova, Peter (1992). The criterion problem: 1917-1992

Criteria = a sample of performance (including behavior and outcomes), measured directly or indirectly, perceived to be of value to organizational constituencies for facilitating decisions about predictors or programs. Gives a history of the criterion from 1917-1992.
Criterion problem (Flanagan, 1956): the difficulty of measuring performance constructs that are multidimensional and appropriate for different purposes. Before I/O, scientific management emphasized measuring performance (time/motion studies); criticized for being too job-specific (17-movement taxonomy by Meredith, 1953).
1917-1939: I/Os trained in Germany as experimentalists; focus on selection of low-level workers like clerks; the criterion was the standard for evaluating predictors (tests). Criteria of the day: Kornhauser and Kingsbury (1923) focused on output (best), ratings, and length of service (tenure); Burtt (1926) used output and supervisor ratings; Bingham and Freyd had about 12; Viteles (1932) divided criteria into objective and subjective.
1940-1959: multidimensional criteria became a thing (factor analysis bloomed). Toops (1944, 1959): success is multidimensional, and predicting a profile of criterion measures should be the goal of prediction; success is not unitary, and if treated as such it must be a weighted combo of other things.
1960-1979: start on page 14 if you have time after other readings.

Aguinis, Herman; Pierce, Charles A.; Bosco, Frank A.; Dalton, Dan R.; Dalton, Catherine M. (2011). Debunking Myths and Urban Legends About Meta-Analysis

Debunking 7 urban legends about metas, then detailing best practices.
*A single meta-analytic correlation is good for an at-a-glance summary, but disguises many nuances (moderators, context effects, etc.)
*Metas cannot turn a bunch of weak studies into a solid study (make lemonade from lemons), but aggregating high-quality studies will make a high-quality meta
*Metas provide more evidence for causality than single studies, but unless we're talking experiments, we are still dealing with correlations
*Metas have enough N for moderation analysis, BUT make sure everything else is in place as well (Hedges and Pigott a priori power analysis)
*Use meta-analysis to explain across-study variability caused by artifactual sources of variance (e.g., sampling error) and moderators (identified by departures from the meta-analytic main effect)
*Improvements in meta-analytic techniques are important, but are unlikely to rock the world of science each time they occur

Barrett, Gerald V.; Caldwell, Marilyn S.; Alexander, Ralph A. (1985). The Concept of Dynamic Criteria: A Critical Reanalysis

Discusses the three main types of dynamic criteria (and why they are flawed):
- Changes in group average performance over time (results from failure to differentiate between learning and post-learning phases, range restriction to only incumbents who stay a long time, etc.)
- Changes in validity over time (the three pillar studies used to advance this failed to actually have statistically different predictor x criterion correlations across time points)
- Changes in the rank-ordering of criterion scores over time (largely explainable by lack of test-retest reliability due to the criterion being cab fares)
Analyzed 1,300 studies and found that dynamic criteria are rare and can usually be explained by methodological artifacts. Practitioners should focus on removing sources of criterion unreliability instead.

Schmidt, Frank L.; Hunter, John (2004). General mental ability in the world of work: occupational attainment and job performance

Enormous meat-riding for GMA; it is the best predictor and outpredicts weighted combinations of aptitudes. GMA is God. Bring this one up to debunk with Sackett (2021) and Outtz (2002).

Detecting Faking on a Personality Instrument Using Appropriateness Measurement

Evaluating the use of IRT-based "appropriateness measurement" for detecting faking, as compared to social desirability scales (using Army personality data). Found that appropriateness measurement correctly classified more fakers with fewer false positives; bring this up as an IRT-based alternative for faking detection.

One reason organizations use ability tests and not personality inventories is fear of faking, despite the information they're leaving on the table.
Previous ways of detecting faking:
- Writing verifiable items ("were you a member of X clubs in school?"; Becker & Colquitt, 1992)
- Writing ambiguous items, making it hard for the test taker to tell what's being measured (at the cost of validity and possible DIF; Edwards, 1970)
- Social desirability items peppered into a measure (they sound good but should not all be endorsed)
Appropriateness measurement (the big dog of this article; Levine & Rubin, 1979): compares the observed pattern of item responses to the expected responses (considering a respondent's standing on theta and the item response functions). Respondents who have very different observed and expected patterns will have an extreme appropriateness index (high chance of cheating, or some outside influence affected the response pattern). The 2-parameter logistic model (2PLM) is usually used for personality tests; it works for dichotomous personality items.
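A minimal sketch (my own, hedged) of the appropriateness-measurement idea: score a response pattern against what a 2PL model expects and flag large departures. The standardized index below is the familiar lz-type person-fit statistic, used here as a stand-in for the article's index; item parameters and responses are made up.

```python
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # item discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # item difficulties
theta = 0.3                                  # respondent's estimated trait level
u = np.array([1, 1, 0, 1, 0])                # observed item responses (0/1)

p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL probability of endorsing each item

log_lik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
variance = np.sum(p * (1 - p) * (np.log(p / (1 - p))) ** 2)

lz = (log_lik - expected) / np.sqrt(variance)
print(round(lz, 2))   # large negative values = pattern is unlikely given the model
```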

Cascio, Wayne F.; Aguinis, Herman. Research in industrial and organizational psychology from 1963 to 2007: Changes, choices, and trends

Examined articles in JAP and Personnel Psychology from 1963-2007 to see which areas and subareas of I/O were focused on. Found that the field of I/O, on its current trajectory, isn't going to become any more well-known or achieve its lofty goals, due to lagged relationships between societal issues and the I/O research that addresses them --> failure to address human capital trends (equal pay, etc.) in a timely manner. We can narrow the academic-practitioner divide by changing how I/Os are trained/socialized to emphasize the practitioner side.
Top 4 broad areas in both journals: psychometrics/methods; performance predictors; work motivation/attitudes (cycles up and down; the others are more constant); performance measurement.

Conway, James M.; Huffcutt, Allen I. (2003). A Review and Evaluation of Exploratory Factor Analysis Practices in Organizational Research

Extending previous research on how and when EFA is used (previous reviews found overreliance on the eigenvalue-greater-than-1 rule and orthogonal rotations). Authors found that, from 1985-1999, researchers began using multiple criteria for the number of factors (eigenvalues, parallel analysis, scree plots, etc.), and researchers tend to make higher-quality decisions (in terms of EFA methods) when EFA plays a major role in addressing the research questions.

Factor extraction method (principal components vs. common factor):
- Principal components = reducing the number of variables by creating linear combinations which retain maximum variance (does not differentiate between common and unique variance)
- Common factor (e.g., maximum likelihood) = understanding the latent variables that account for relationships among variables (higher quality if you want interpretable constructs; discerns between common and unique variance)
Number of factors to retain: Gorsuch (1997) says the eigenvalue-greater-than-one rule often results in too many factors; the authors recommend a combination of a priori theory, parallel analysis, and interpretability (see the parallel analysis sketch below).
Rotation: oblique is preferred; orthogonal artificially forces loadings away from simple structure (Thurstone, 1947).
*Report all decisions you make and why
*Keep an eye out for statistical software default EFA settings, which are likely to be PCA + eigenvalue greater than one + varimax
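A minimal sketch (my own, not the authors') of parallel analysis, the factor-retention aid the review recommends: keep factors whose observed eigenvalues exceed the mean eigenvalues from random data of the same size. The "observed" data here are random stand-ins for a real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars, n_reps = 300, 8, 100

data = rng.normal(size=(n_obs, n_vars))          # stand-in for a real dataset
obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

random_eigs = np.zeros((n_reps, n_vars))
for i in range(n_reps):
    sim = rng.normal(size=(n_obs, n_vars))
    random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]

threshold = random_eigs.mean(axis=0)
n_factors = int(np.sum(obs_eigs > threshold))
print("factors to retain:", n_factors)
```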

Donovan, John J.; Dwight, Stephen A.; Schneider, Dan (2014). The Impact of Applicant Faking on Selection Measures, Hiring Decisions, and Employee Performance

Faking might be a thing after all, lol. Used a self-report, within-person longitudinal design (assessed applicant goal orientation before hire, then assessed the same goal orientation 4 months post-hire; faking was declared if the answer changed in a socially desirable direction between testing dates).
- 50% of subjects faked
- This faking negatively impacted the psychometric properties of the test
- Fakers performed significantly less effectively than non-fakers on the job itself

Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J. (2010). The problem of empirical redundancy of constructs in organizational research: An empirical investigation

Focuses on the idea of redundant constructs (specifically organizational commitment and job satisfaction) in the org sciences (correlated .91 and have similar relationships to positive and negative affect, btw). Suggests a movement toward construct parsimony to spur advances in knowledge.

Though constructs may be identical, measurement artifacts may result in less-than-perfect correlation. Empirical redundancy = correlated close to 1.00 when measured AND similar nomological nets. Observed score variance = true score variance + error variance (Lord & Novick, 1968). Latent factor = shared variance among indicators measuring the same construct. Generalizability theory = in addition to true and error scores, there is scale-specific variance from how the items are worded (scale vs. item specificity).
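A minimal sketch (my own illustration, not the authors' analysis) of the classical-test-theory point the entry leans on: an observed correlation between two scales understates the true-score correlation in proportion to their reliabilities. Numbers are made up.

```python
def corrected_r(observed_r, rel_x, rel_y):
    """Correction for attenuation: estimated correlation between true scores."""
    return observed_r / (rel_x * rel_y) ** 0.5

# e.g., scales correlating .75 with reliabilities of .85 and .80
print(round(corrected_r(0.75, 0.85, 0.80), 2))   # ~ 0.91, approaching redundancy
```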

Murphy, Kevin R.; Cleveland, Jeanette N.; Skattebo, Amie L.; Kinney, Ted B. (2004). Raters who pursue different goals give different ratings

Found significant correlations between measured student rating goals (to identify strengths vs. to motivate, etc.) and how they rated their instructors.

Murphy et al. (2004) found that rater goals, at least in terms of student rating goals for instructors, were only moderately stable over time. Has this potential malleability been used as one of the bases of rater training? I imagine that making raters aware of these (possibly unconscious) goals, or outright priming them with different goals, could help with accuracy.

Truxillo, Donald M.; Bauer, Talya N.; Campion, Michael A.; Paronto, Matthew E. (2002). Selection fairness information and applicant reactions: A longitudinal field study

Found that information ("this test will evaluate X, which is important for Y") was related to perceived fairness (at testing time and 1 month later when results were received). Information moderated the relationship between outcome favorability and test-taking self-efficacy only for Blacks. Information was unrelated to behavioral measures. (Police sample, so... maybe a bit wonky with racial dynamics.)

van der Zee, Karen I.; Bakker, Arnold B.; Bakker, Paulien (2002). Why are structured interviews so rarely used in personnel selection?

In terms of Ajzen's (1991) theory of planned behavior, the three determinants of action are: attitude toward an action (evaluation); subjective norm (perceived pressure to perform or not perform); and perceived behavioral control (ease of doing the thing and possible obstacles). Found that attitudes and subjective norms were related to intentions to engage in either type of interview, but in the end, intentions toward engaging in unstructured interviews (or lack thereof) were the only thing related to actual behavior.

Woehr, David J.; Sheehan, M. Kathleen; Bennett Jr., Winston. Assessing measurement equivalence across rating sources: A multitrait-multirater approach

Instead of examining the MI of rating scales assuming only one underlying dimension (performance), the authors separated the variance into performance dimension AND rating source; this allows measuring the DIRECT effect of rating source instead of relegating it to an interaction effect. CFA-based MI analysis is used to see whether the variance accounted for (factor loadings) by source effects and dimension effects is similar across source groups. Results indicate that: the impact of the underlying performance dimensions on ratings is equivalent across sources (similar conceptualizations of what performance is), at the metric but not the scalar level; the impact of rating source is substantial and almost equal to the effect of the underlying performance dimensions; and the impact of rating source differs depending on the source.
Harris & Schaubroeck (1988): self and other ratings correlated around .3, while supervisor/peer ratings correlated .62 (evidence for the Johari window).

Murphy, Kevin R.; DeShon, Richard (2000). Interrater Correlations Do Not Estimate the Reliability of Job Performance Ratings

Interrater correlations do NOT equal reliability and shouldn't be used to calculate it (just because raters agree doesn't mean their agreement is free of random error); systematic rater effects and rater-specific variance are not random error, since we can model and predict them.

Correcting for interrater reliability can lead to a change more than 5 times larger than correcting for intra-rater reliability (alpha).
You can only estimate reliability via rater agreement if:
A) Raters are similar enough to be considered alternate forms of the same test (witness the same behavior, etc.)
B) Agreement between raters can be considered to reflect true scores in ratings
C) Disagreement between raters is variance unrelated to job performance
Often we see all three of these assumptions violated (due to differences in information observed and differences in rating processes), making intra-rater reliability a much more tenable approximation of error. Whether rater effects are seen as measurement error or meaningful variance depends on what the criterion is (if the ratee is being compared to an absolute standard, rater effects are error; if they are being compared to other ratees, rater effects can be meaningful).

Under what conditions might multiple raters meet the highest proportion of the assumptions necessary to use interrater correlations as reliabilities, as listed by Murphy and DeShon (2000)? Perhaps raters who witness precisely the same work instances, who have been trained in rating to the extent that any rater effects truly are random? This brings assessment centers (or at least the ideal version of an assessment center) to mind. However, the last assumption, that correlations between raters reflect only true performance, may be violated if AC raters feel pressure from their role to agree with the other raters. Thoughts?

MacCallum, Robert C.; Austin, James T.2000Applications of Structural Equation Modeling in Psychological Research

Intro to how SEM has been used in the literature (up to 2000) and some issues with it <hr> What is SEM? A method of modeling linear relationships between variables (measured and/or latent); usually done to account for variance/covariance between measured variables (using latent variables). An SEM model with only measured variables = path model. <strong>Factor analysis is a type of SEM model which includes latent variables (the factors themselves) and the measured variables (indicators) used to make them; factor analysis has everything EXCEPT directional (causal) influences, which require longitudinal designs <hr> Two common extensions of SEM: Multisample modeling: applying the same SEM model (same relationships between latent and measured variables) to multiple samples; can be used to test measurement invariance by comparing fit statistics. Modeling of measured-variable means so that a model can account for changes in means over repeated samples <hr> Latent growth curve models are also SEM models, with the possible curves acting as latent factors and each case being a linear combination of some latent curves (multisample LGCM models can be used to compare trajectories of control and treatment groups). SEM is highly sample-specific, so cross-validation is a must before suggesting generalizability. Since the only indicators of latent variables are the items included, ensuring construct representation is paramount. Generate "equivalent models" if you can (<strong>good SEM fit means that a model is plausible, not right). *YOU CANNOT INCLUDE DIRECTIONAL (causal) EFFECTS IN A CROSS-SECTIONAL SEM STUDY UNLESS *you argue that the amount of time for variables to influence each other is instant *the effect of one variable on another doesn't change over time <hr> Models with only measured variables assume these MVs are perfect representations of the construct space and thus can be biased; LVs get around this and allow us to measure the variance within each indicator and each latent variable (usually we combine item clusters into parcels, as using each item would mean too many indicators)
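For reference, the generic SEM equations in standard LISREL-style notation (not specific to MacCallum and Austin): measurement models x = \Lambda_x \xi + \delta and y = \Lambda_y \eta + \varepsilon, and the structural model \eta = B \eta + \Gamma \xi + \zeta. A path model is the special case with no latent variables; a factor-analysis (CFA) model is the special case with measurement equations only, i.e., no directional paths among the latent variables.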

Kerr, Norbert L.1998HARKing: Hypothesizing After the Results are Known

Introduces the concept of HARKing (hypothesizing after results are known), why scientists do it, how many of us do it, why we shouldn't do it, and whether sometimes the benefits outweigh the costs (HARKing bad, surprise) <hr> Bem (1987) claims that "the best journal articles are informed by the actual empirical findings from the opening sentence" (basically, HARKing is alright IF IT IS AN EXPLORATORY STUDY). HARKing in the purest sense is presenting a post-hoc hypothesis as if it were an a priori one; it can be done for many reasons (wanting to be published, being taught incorrectly) and has a host of problematic effects (can translate Type I error into theory, promotes statistical abuses, etc.) <strong>Empirical replication is the simplest remedy for HARKing

Hoffman, Brian J.; Kennedy, Colby L.; LoPilato, Alexander C.; Monahan, Elizabeth L.; Lance, Charles E.2015A review of the content, criterion-related, and construct-related validity of assessment center exercises.

Meta-analysis on 5 commonly used AC exercises. Found that all 5 types were related to criterion variables (ρ = .16-.19), moderately related to GMA, and mostly related to Extraversion and Openness. ACs also explain incremental variance beyond GMA and the Big 5. Overall Assessment Ratings (OARs) have previously been shown to have criterion-related validity. <u>5 commonly used exercises (managerial simulations):</u> In-Basket: simulates paperwork that would arrive in a mailbox or on a desk; Leaderless Group Discussion: exactly what it sounds like; used to look at leadership emergence and group processes; Role Play (interview simulations); Case Analysis (written); Oral Presentation (spoken)

Hofmann, David A.1997An Overview of the Logic and Rationale of Hierarchical Linear Models

Introduction to hierarchical modeling (dealing with nested data: workers nested within teams nested within departments, etc.). Still don't really get hierarchical models tbh; watch a YouTube video on them <hr> Two main questions of hierarchical modeling: 1) how to aggregate nested data; <strong>2) how to investigate relationships between variables at different levels (the meso paradigm; House et al., 1995). Cross-level research = investigating the effects of both low-level and high-level variables on a low-level (individual) variable. Approach 1: Disaggregation (give everyone in a group the same score as an individual-level variable and go from there); violates independence of observations. Approach 2: Aggregation (combine lower-level individuals into a higher-level group to study at that higher level); possibility of missing out on meaningful individual-level variance. <strong>Approach 3: Hierarchical linear models: recognize that individuals within groups are similar to each other and model both individual and group residuals (partial interdependence of individuals); <strong>simultaneously model within- and between-group variance. HLM itself doesn't directly report how much variance lies within vs. between groups; however, through a one-way ANOVA we can find the between-group variance and the total variance, then calculate the intraclass correlation (ICC) by dividing the between-group variance by the total variance (see the sketch below). Will look up later what we use that for.
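A minimal sketch of that ICC(1) calculation from one-way ANOVA mean squares (my own toy numbers and a hypothetical icc1 helper, not code from Hofmann):

import numpy as np

def icc1(scores_by_group):
    # ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), with k = average group size
    k = np.mean([len(g) for g in scores_by_group])
    all_scores = np.concatenate(scores_by_group)
    grand_mean = all_scores.mean()
    n_groups = len(scores_by_group)
    n_total = len(all_scores)
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in scores_by_group)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in scores_by_group)
    ms_between = ss_between / (n_groups - 1)      # between-group mean square
    ms_within = ss_within / (n_total - n_groups)  # pooled within-group mean square
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# three made-up teams of individual-level scores
teams = [np.array([4, 5, 4, 5]), np.array([2, 3, 2, 2]), np.array([3, 4, 3, 3])]
print(round(icc1(teams), 2))  # proportion of variance attributable to team membership

With strong team clustering like the toy data above, ICC(1) comes out high (around .8), signalling that ignoring the grouping would be a mistake.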

Morgeson, Frederick P.; Campion, Michael A.1997Social and Cognitive Sources of Potential Inaccuracy in Job Analysis

Lists 16 potential sources of job analysis inaccuracy. Keep this in the handbook: which decisions made during a JA have what effects <hr> Two main forms: social and cognitive. Social inaccuracies are caused by normative pressures from the social environment; cognitive inaccuracies are caused by the limitations of human information processing. JA and real differences in tasks: to maximize, use task-based surveys and time-spent ratings; to minimize, use importance/difficulty ratings of tasks. THE SOURCES <u>Social Sources</u> <em>Social Influence</em> Conformity Pressures -Think Asch (1956); this can manifest on teams of incumbents, reducing accuracy; can scale inversely with rank; conforming to org norms. Extremity Shifts (group polarization) -After conversing with a group of incumbents, stances are extremified. Motivation Loss -Social loafing, free riding; especially in low-meaning situations with no additional reward (like JA). <em>Self-Presentation</em> Impression Management -Information may only reflect what incumbents <em>want</em> people to think their job entails; lack of info on the analyst's side means incumbents can IM all they want; IM emerges more in self-report/monitored situations; many JAs are done for compensation purposes, so inflate! Social Desirability -Looking for approval from the analyst, supervisor, or organization; complex high-ability tasks overstated, mundane tasks understated. Demand Effects -Attempts to gain social approval from the analyst specifically (being a good participant). <u>Cognitive Sources</u> <em><u>Information Processing Limitations</u></em> Information Overload -The JA process imposes too much cognitive burden, leading to incomplete surveys/tasks, reduced reliability, etc.; decomposed (task-based) judgments take burden off of respondents, holistic job judgments put it on them...just always use decomposed judgments and combine them statistically later. Heuristics -Availability, representativeness, anchoring (on the part of the analyst). Categorization -Routinized doesn't always equal simple, etc. <em>Biases in Information Processing</em> Carelessness -Inattentive responding/not showing up; 10% careless responding can lead to statistical artifacts (Stults, 1985). Extraneous Info -Job satisfaction, compensation, etc. can bias job analysis. Inadequate Info -Analysts new to the job, missing key job info (Cornelius et al., 1984 found a correlation of .41 between new and expert rater scores). Order/Contrast Effects. Halo -True halo (actual coincidence of positive traits) vs. illusory halo (you like the guy, so he must be smart and productive). Leniency & Severity -Unwillingness to be critical of a job due to insecurity or inflated job requirements (leniency) vs. a compensation analyst attempting to conserve resources by rating job importance lower (severity). Methods Effects -Consistency (stick with previous answers) and priming. (DQ: Morgeson and Campion (1997) note extraneous information as a possible biasing factor in job analysis. Specifically, they cite research in which the apparent job satisfaction of incumbents upwardly biased analyst ratings of job enrichment. However, if the goal of job enrichment is, ultimately, to produce more satisfied (interested and challenged) employees, would not incumbent satisfaction be an accurate indicator of this? Is the issue cited more so that of, say, a subgroup of very enthusiastic employees biasing the enrichment rating for an entire team/position? If so, could these differences be meaningful if this satisfied subgroup of employees have traits in common (e.g., "the job appears to be especially rich and meaningful for those with high altruistic motivations", etc.)?)

Van Iddekinge, Chad H.; Ployhart, Robert E.2008DEVELOPMENTS IN THE CRITERION-RELATED VALIDATION OF SELECTION PROCEDURES: A CRITICAL REVIEW AND RECOMMENDATIONS FOR PRACTICE

Lists developments between 1998 and 2008 that affect criterion-related validity analysis. These developments fall into 5 major camps: <ol> <li> <strong>Validity coefficient correction procedures: using alpha for single-rater ratings of JP attributes all idiosyncrasies to true score instead of error, and can overestimate reliability and underestimate the corrected validity coefficient. Using intraclass correlations for multiple raters, conversely, assumes that all rater-specific variance is error instead of meaningful (treats raters as parallel forms)...THE SOLUTION IS G THEORY, IT IS ALWAYS G THEORY (so long as enough data are collected to capture most of the variance); see the illustration below </li> <li> <strong>Evaluating multiple predictors: Relative Weights Analysis/Dominance Analysis (allows you to see relative predictive power with and without the other predictors with which variance is shared); use these when collinearity occurs </li> <li> <strong>Differential prediction analyses: check for differential prediction whenever feasible (even small differences can become important); can be due to the predictor, the criterion, or both; keep that N up </li> <li> <strong>Validation sample characteristics </li> <li> <strong>Criterion issues </li> </ol>
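To illustrate why the choice of reliability estimate matters (illustrative numbers only, not from the paper): correcting for criterion unreliability uses \rho_{xy} = r_{xy} / \sqrt{r_{yy}}, so an observed validity of r = .20 becomes about .22 if you plug in an alpha-type estimate of r_{yy} = .80, but about .28 if you plug in an interrater estimate of r_{yy} = .50. A G-theory decomposition is what tells you which rater-specific variance belongs in r_{yy} in the first place.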

Chapman, Derek S.; Zweig, David I.2005Developing a Nomological Network for Interview Structure: Antecedents and Consequences of the Structured Selection Interview

Looked at the 15 interview elements posited by Campion et al. (1997) in a new sample (N = 812 interviewees). Factor analysis reveals 4 factors: Questioning Consistency (what we think of as structure; same Qs, same order), Evaluation Standardization (are numeric scoring procedures used), Question Sophistication (positive interviewer reaction), Rapport Building (positive interviewer reaction). Selection-focused interviewers are more likely to use structured interviews than recruitment-focused ones; interviewers with formal training are more likely to use SIs. Procedural justice perceptions were unaffected by interview structure in applicants, but perceived difficulty was <hr> Interview structure: "Reduction of procedural variance across applicants..." (Huffcutt and Arthur, 1994). SIs = better psychometric properties (Campion et al., 1988)

Foldes, Hannah J.; Duehr, Emily E.; Ones, Deniz S.2008Group differences in personality: Meta-analyses comparing five U.S. racial groups

META (k ~ 700). While personality doesn't seem to produce adverse impact or subgroup differences at the scale level, the facet level may be problematic depending on which groups are being compared and which personality tests exactly are being used; for example: <hr> Facets with problematic levels of subgroup differences: <strong>Extraversion: Sociability (Whites score higher than Blacks) <strong>Emotional Stability: Low Anxiety (Whites score higher than Blacks), Even-Tempered (Asians score higher than Whites) <strong>Agreeableness (Asians score higher than Whites on all facets) <strong>Conscientiousness (Asians score higher than Whites)

Huffcutt, Allen I.; Conway, James M.; Roth, Philip L.; Stone, Nancy J.2001Identification and meta-analytic assessment of psychological constructs measured in employment interviews

META. Divided the types of constructs assessed into 7 groups. Found that: personality and social skills are the most often assessed overall; structured and unstructured interviews focus on different constructs; structured interviews have higher validity coefficients because they tend to focus on constructs more related to JP <hr>

Huffcutt, Allen I.; Arthur Jr., Winston1994Hunter and Hunter (1984) Revisited: Interview Validity for Entry-Level Jobs

META (an update of Hunter & Hunter's 1984 analysis): structure increases the validity of interviews over unstructured ones, but only to a point, after which no incremental validity is gained, suggesting a ceiling effect for structure. <strong>Not allowing follow-up questions or any deviation from the question script seems to have no incremental gain over just making sure the same questions (or at least equivalent ones) are asked

Heidemeier, Heike; Moser, Klaus2009Self-other agreement in job performance ratings: A meta-analytic test of a process model.

META on self-other performance rating agreement. Self and supervisor ratings overall correlated r = .22 (k = 115); significant moderators were position characteristics and the use of nonjudgmental indicators (objective criteria), which led to more agreement. Evidence for leniency in self-ratings was found in Western samples (about one third of a standard deviation) <hr> Three-step process of ratings: collecting cues (job type determines which cues are collected/behaviors observed) ---> selection and integration of cues (scale format and content can prime/affect ratings) ---> communication (cultural factors)

Van Iddekinge, Chad H.; Roth, Philip L.; Raymark, Patrick H.; Odle-Dusseau, Heather N.2012The criterion-related validity of integrity tests: An updated meta-analysis.

Meta-analyzed 104 studies (134 samples), keeping the mix of test-publisher and non-test-publisher authors relatively stable and including only studies with proper rigor. Found that overall mean observed validity estimates for integrity tests (corrected for unreliability in the criteria) were .15 for JP, .16 for training performance, .32 for CWB, and .09 for turnover (even higher when corrected for range restriction). Validities for JP criteria were higher among test-publisher authors; validities for CWB criteria were larger when based on self-reports of CWB ________________________________ Ones et al. (2012) SLAMS this paper, saying that due to including tests that had other constructs besides integrity, inaccurately applying range restriction corrections, lacking fully hierarchical moderator analyses, and alternate explanations for the test-publisher/non-publisher gap, its conclusions are not tenable. Damn.

Baron, Reuben M.; Kenny, David A.1986The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations.

Moderators tune relationships up and down (they are the interaction effect); mediators are the mechanism through which the predictor affects the outcome (X -> M -> Y), i.e., they transmit the effect rather than being required for an interaction (see the sketch below) <hr>
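A minimal sketch of the distinction in regression form (standard notation, not a quote from the paper): mediation is tested with three equations, M = aX + e_1, Y = cX + e_2, and Y = c'X + bM + e_3; mediation is suggested when a and b are significant and c' shrinks relative to c (the indirect effect is the product ab). A moderator Z, by contrast, shows up as a significant interaction term in Y = b_1 X + b_2 Z + b_3 (X \times Z) + e.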

Binning & Barrett (1989) Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases

More evidence in favor of validity as a unified argument for test score interpretation instead of different types. Building a psychological theory is a lot like validating a personnel decision...gonna skip this, it's dry and repetitive...hope I don't regret this

Cortina, Jose M.; Landis, Ronald S.2009When small effect sizes tell a big story, and when large effect sizes don't.

Movement away from only p values and toward including effect sizes is a good start, but rigid ES criteria plus a lack of attention to practical significance still results in an incomplete science. A small ES does not equal a nonsignificant (or unimportant) finding. <strong>Do not blindly apply Cohen's d ES cutoffs; rather, consider sample size, measures used, etc. in order to clearly communicate your level of confidence in the results
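For reference (standard definitions, not from this paper): d = (\bar{X}_1 - \bar{X}_2) / s_{pooled}, and Cohen's conventional benchmarks of .2 (small), .5 (medium), and .8 (large) are exactly the cutoffs the authors warn against applying without considering the measures, design, and practical context.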

Barrick, Murray R.; Mount, Michael K.1996Effects of impression management and self-deception on the predictive validity of personality constructs. Journal of Applied Psychology, 81, 261-272.

Neither self-deception nor impression management was found to attenuate the predictive validity of Big Five scores (even predictive when faked...those who can fake on the test can fake in the office? Those who fake have higher g and thus better performance? Hard to tell). Total N = about 300

Schneider, Benjamin; Goldstein, Harold W.; Smith, D. Brent1995The ASA framework: An update

Provides additional support for the attraction-selection-attrition phenomenon and evidence that founders and top managers do indeed have an effect on organizations, specifically through this process. *Cite this paper jointly with any ASA citations

Campion, Michael A.; Fink, Alexis A.; Ruggeberg, Brian J.; Carr, Linda; Phillips, Geneva M.; Odman, Ronald B.2011DOING COMPETENCIES WELL: BEST PRACTICES IN COMPETENCY MODELING

SUMM: Splits 20 suggestions for competency modeling into 3 sections (Analyzing, organizing, and using) GREAT intro to competency modeling, take a look before starting any consulting work

Kluger, Avraham N.; DeNisi, Angelo1996The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory.

Since the beginning of the century, feedback interventions (FIs) produced negative, but largely ignored, effects on performance. A meta-analysis (607 effect sizes; 23,663 observations) suggests that FIs improved performance on average (d = .41) but that over 1/3 of the FIs decreased performance. This finding cannot be explained by sampling error, feedback sign, or existing theories. The authors proposed a preliminary FI theory (FIT) and tested it with moderator analyses. The central assumption of FIT is that FIs change the locus of attention among 3 general and hierarchically organized levels of control: task learning, task motivation, and meta-task (including self-related) processes. <strong>The results suggest that FI effectiveness decreases as attention moves up the hierarchy, closer to the self and away from the task. These findings are further moderated by task characteristics that are still poorly understood.

Mirvis, Philip H.; Seashore, Stanley E.1979Being ethical in organizational research.

Specific ethical problems arise when doing research within organizations (stakeholder expectations, etc.). Role theory is the main lens through which these dilemmas can be understood. Probably don't bother including in flash cards <hr> Individuals within organizations have a network of relationships, hierarchies, etc. with others in the org and are thus not independent. "Can participants really give consent when participation is part of their job?" Will confidentiality be upheld when powerful individuals ask to break it ("let me see manager X's responses")? Role concepts = the requirements that social systems have for their members and the personal identity that members invest in social systems. Sources of role conflict and ambiguity = being beholden to multiple conflicting aspects of a job (i.e., teaching/research) and to multiple conflicting stakeholders (direct reports, upper management, unions)

Murphy, Kevin R.; Jako, Robert A.; Anhalt, Rebecca L.1993Nature and consequences of halo error: A critical analysis.

Suggests doing away with the current conception/use of halo error because: we still can't separate true vs. illusory halo, and the consequences of halo are unclear; suggests future directions for halo research <hr> Current (problematic) assumptions about halo: it is common; it is a rater-based error with true and illusory components; it leads to inflated correlations among rating dimensions; it has negative consequences and should be avoided/removed. The literature review shows, however, that: <strong>observed correlations are often smaller than true correlations, and true halo as currently defined is only likely to occur when true correlations are very low (orthogonal rating dimensions); <strong>the presence of halo error increases some aspects of the validity and accuracy of ratings, and removing halo decreases psychometric properties; <strong>we cannot reliably separate true from illusory halo in field settings. Unless dimensions are truly orthogonal, it is impossible to differentiate true from false halo, and when we do, we don't see much of an effect on the data

Hausknecht, John P.2004Applicant Reactions to Selection Procedures: An Updated Model and Meta-Analysis

Summ: (META) Used a meta-analysis (k = 86, N = 48,750) to test a new model of applicant reactions to selection. -Applicants who hold positive perceptions of the selection process are more likely to view the organization favorably --> stronger intentions to accept job offers and recommend to others. -Applicant perceptions of selection were positively correlated with actual and perceived performance on selection tools. -Face validity and perceived predictive validity were strong predictors of applicant justice perceptions and attitudes toward tests and selection as a whole. Perceived favorability of selection tools was tiered: Top: interviews and work samples; Middle: cognitive ability tests; Bottom: personality tests, honesty tests, biodata

Swider, Brian W.; Zimmerman, Ryan D.; Barrick, Murray R.2015Searching for the right fit: Development of applicant person-organization fit perceptions during the recruitment process.

Summ: Applies Differentiation-Consolidation Theory (DCT) to applicant fit perceptions over the recruitment process to get a better idea of how they change. Used 8 person-org fit assessments across 169 accounting student applicants split between 4 companies. Found evidence that both the initial level of perceived fit differentiation (ranked but ever-changing mental distances between how much the applicant fit with each org) and changes in perceived org fit (as the recruitment process went from the generating-applicants phase to the maintaining-attention phase) predicted the ultimate decision to accept or reject job offers. 3 stages of recruitment: generating suitable applicants, maintaining interest of applicants, influencing job choice. DCT: decision makers consistently update and reorder alternatives in their heads as they get more information. This one was really really cool!!! I might look into DCT for applicant fit for my staffing final.

Avery, Derek R.; McKay, Patrick F.2006TARGET PRACTICE: AN ORGANIZATIONAL IMPRESSION MANAGEMENT APPROACH TO ATTRACTING MINORITY AND FEMALE JOB APPLICANTS

Summ: Nontraditional job seekers (minorities and women) are attracted by different factors than traditional (white male) applicants. This paper makes 14 suggestions through the Organizational Impression Management (OIM) framework regarding how best to attract nontraditional job seekers. 2 main dimensions of OIM: Assertive (proactive) vs. Defensive (reactive; damage control), and Direct (actions taken by the org) vs. Indirect (affiliations with pertinent groups). Suggestions: -Place ads in targeted content (BET or something lol) -Recruit at predominately nontraditional locales (HBCUs) -Employ recruiters reflecting the targeted population -Participate in diversity fairs -<strong>Present evidence of effective diversity management through ads and representatives (the only one that involves actually doing something for nontraditional employees) -Publicly sponsor minority/women's causes (Planned Parenthood) -Convey dependence on minorities/females (may be seen as desperation if not carefully done) <strong>-Diversity reputation moderates the relationship between OIM and actual perceived valuation of nontraditional applicants <strong>-Attributions also moderate this (nontraditional applicants must attribute the ads to the org actually wanting to recruit them)

Macan, Therese2009The employment interview: A review of current studies and directions for future research

Summ: Reviews interview research from 2002-2009 (over 100 articles). 3 main issues: why structured interviews predict, what constructs SIs measure, and applicant/interviewer factors that may affect the process. 3 future directions: a common model/measurement of interview structure, what constructs could be/are being best measured, and consistent definitions, labeling, and measurement of applicant factors. <strong>Factors that may moderate reliability/validity of interviewer judgements: -Adding structure (+) -Interview device not actually being used to make the decision (-) -Job complexity (mixed) -Situational vs. behavioral (past-focused; includes probing prompts) interviews. <strong>What is structure? Varies by researcher, which makes meta-analysis difficult. Behaviorally anchored rating scales improve accuracy for veterans and newbies (Maurer, 2002); scoring guides with behavioral benchmarks improve reliability and validity. Panel interviews: non-standardized criteria make comparisons difficult, but research suggests panel racial makeup affects ratings via the similarity-attraction paradigm; this may be more important to fairness perceptions than to validity or reliability. HR professionals seem to prefer 'moderately structured' interviews with 'moderately standard' rubrics; concerns relate to loss of discretion, losing the personal touch, and time demands, and interviewer cognitive style and need for power relate to this preference. Holding interviewers accountable to procedure instead of outcome may fix this (Brtek and Motowidlo, 2002). Constructs measured: the GMA/interview score correlation is around r = .27, but more structure, a behavioral focus, and high job complexity decrease this link. Personality is often assessed, mostly extraversion and agreeableness; it influences interview scores 'as much as it is allowed to'. Interviewer cognitive load could be decreased (and thus accuracy increased) by allowing note taking, giving a specific rubric, and having interviewers rate right after the interview

Steele-Johnson, Debra; Osburn, Hobart G.; Pieper, Kalen F.2000A review and extension of current models of dynamic criteria

Talks about the current state of dynamic criteria (differences in the predictor x criterion relationship over time OR change in the rank order of the criterion over time). Generally we see predictive validity decrease over time due to different abilities being required for the criterion at different times (think: cognitive ability and openness needed for training/onboarding, but conscientiousness and stability needed for continuing performance) <hr> Ghiselli (1956): Incumbent performance changes as incumbents learn and develop on the job (this can lead to the learning/plateau phases being differentially predicted by a predictor)

Van Iddekinge, Chad H.; Aguinis, Herman; Mackey, Jeremy D.; DeOrtentiis, Philip S.2018A Meta-Analysis of the Interactive, Additive, and Relative Effects of Cognitive Ability and Motivation on Performance

Tested the idea that performance is a function of cognitive ability and motivation (looked at multiplicative effects). <strong>Found that ability and motivation are additive, not multiplicative (additive effects of cognitive ability and motivation accounted for 91% of the explained variance in JP; the interaction added only 9%); furthermore, interactions did not always follow the expected more = better shape. Multiplicative view (WRONG): Performance = Motivation x Cognitive Ability; low motivation means low performance regardless of ability, but as effort increases, those with high ability will outperform those with low ability (if either term is 0, it's all 0). <strong>Additive view (based): level of motivation does not affect the relationship between ability and performance; low-motivation employees can compensate somewhat with higher ability (and vice versa) (you can be at 0 for one and still have a finished product above 0)
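A compact way to state the comparison (my phrasing, in the moderated-regression form such tests usually take): Performance = b_1 A + b_2 M + b_3 (A \times M) + e. The multiplicative view predicts that b_3 carries the action; the meta-analytic result is that b_1 and b_2 account for nearly all of the explained variance (the 91% above), while the interaction term adds little (the 9%).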

Sackett, Paul R; Zedeck, Sheldon; Fogli, Larry1987Relations Between Measures of Typical and Maximum Job Performance

The primary finding reported here is that carefully developed, highly reliable measures of typical and maximum performance obtained from two relatively large samples show only a modest correlation with one another (.16 in one sample and .36 in the other). IF YOU WANT TO PREDICT TYPICAL LONG-TERM PERFORMANCE, MAXIMUM PERFORMANCE IS NOT INTERCHANGEABLE! REMEMBER THIS CITATION

Weick, Karl E.1995What Theory is Not, Theorizing Is

Theory should not be considered a "finished" product but rather a process, as inching toward strong theory may look identical to lazy theory in Sutton and Staw's (1995) view. Theory is "approximated" more than illustrated

Podsakoff, Philip M.; MacKenzie, Scott B.; Lee, Jeong-Yeon; Podsakoff, Nathan P.2003Common method biases in behavioral research: A critical review of the literature and recommended remedies.

This is the Common Method Bias paper. Pg. 898 for the data collection guide! <hr> Common method variance = variance attributable to the measurement method rather than the constructs (i.e., scale type, item wording, social desirability, priming effects, etc.). Between 15-30% of the variance in a given field is attributed to measurement error such as CMB (Cote & Buckley, 1987). Consistency motif = respondent tendency to answer in ways consistent with previous answers, even if they wouldn't answer those questions that way in a vacuum. Procedural remedies = MINIMIZE similarities between predictor and criterion in terms of respondent, time of collection, contextual cues in the measurement environment, and wording/format of the questions. Use unrelated marker variables (lack of relation to the variables of interest may mean the marker doesn't reflect their variance; also assumes that the CMB is distributed evenly across all variables). Statistical remedies = look for how much variance a general factor accounts for (Harman's test; it can't actually do anything about common variance found, and may just reflect a lack of discriminant validity; see the sketch below) or partial out social desirability (which may partial out some meaningful variance too)
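A rough sketch of the mechanics behind Harman's single-factor check (my own toy code, using an unrotated principal-components approximation; as the paper notes, this test diagnoses rather than fixes anything):

import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(300, 12))        # stand-in for an N x items response matrix; use real survey data here

R = np.corrcoef(items, rowvar=False)      # item intercorrelation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]     # eigenvalues of R, largest first
first_share = eigvals[0] / eigvals.sum()  # variance share of the first unrotated component
print(f"First unrotated component explains {first_share:.0%} of the item variance")

If one unrotated component swallows the majority of the item variance, that is read as a red flag for common method variance (or, as Podsakoff et al. caution, simply poor discriminant validity).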

Sutton, Robert I.; Staw, Barry M.1995What Theory is Not

This paper aims to explain the difference between true theory and what is often used in lieu of theory (data, diagrams, etc.). Also calls for stronger theory development in the field (of administrative science) by journals allowing partial tests of theories and illustrative instead of definitive data. <hr> Parts of an article that do not constitute theory: references (without going into a decent amount of detail about what the theory being cited is and how it fits into the new theory being presented); data (describes what was observed, not WHY it was observed; that's where theory comes in); lists of variables/constructs; diagrams; hypotheses

Roch, Sylvia G.; Woehr, David J.; Mishra, Vipanchi; Kieszczynska, Urszula2012Rater training revisited: An updated meta-analytic review of frame-of-reference training: Rater training revisited

Updated Woehr and Huffcutt's (1994) meta. <strong>Found that FOR training improves rater accuracy. It most improves Borman's differential accuracy (based exclusively on correlations; ignores the distance between true and participant scores), as well as behavioral accuracy

Rotundo, Maria; Sackett, Paul R.2002The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy-capturing approach

Weighting of contextual and task performance varies by rater, and rater demographics don't affect which type of weighter you are (task heavy, contextual heavy, equal)

Spector, Paul E.1994Using self-report questionnaires in OB research: A comment on the use of a controversial method

When self-reports are best to use, and whether CMB concerns are overblown <hr> Most researchers raise concerns over self-report measures of objective work properties rather than of perceptions (which they think are captured fine by self-report); Spector agrees that <strong>self-report measures are most appropriate for perceptions and attitudes rather than for objective job properties. Variance = trait (true; a greater proportion of overall variance if the item asks about a perception vs. an objective property) + method + error (Spector & Brannick, 1994)

Guion, Robert M.2002Validity and reliability

Will record anything new over and above the Furr & Bacharach (2008) textbook <hr> Validity generalization = meta-analysis with a specific validity estimate (criterion-related, for example); must both reject the situational specificity hypothesis and hold a general correlation shape across replications. SEM can be used to <em>understand</em> a relationship beyond prediction, but needs a strong theoretical base. Guion (2002) argues that the discrete types of validity (construct, content, etc.) should be dropped in favor of a comprehensive evaluation

Chapman, Derek S.; Gödöllei, Anna F.2017E-Recruiting

talk about this if asked about the current state/future of selection/recruiting; using social media can lead to increased reach and higher quality talent! Woah!!!! The internet!!!!!!!!!!!!!!!!!!!

