Job Analysis, Research Methods, Performance Appraisal - Part 1, Summary of Appraisal Formats, Performance Appraisal (Part 2), Predictors, Selection, Training and Development - Ch. 8, Individual Differences & Selection, Industrial Organizational Psych...


element

smallest unit of work activity

dunning-kruger effect

some people are so unskilled they aren't aware of how unskilled they are; a social-cognitive bias where some individuals overestimate their abilities and lack awareness that they are doing so; because they aren't skilled enough to realize they lack skills, they may not be interested in trying to improve and may ignore feedback from performance appraisals

why don't we simply use the average to understand a group?

sometimes the data is skewed with outliers

precision

specific and accurate

two ways to estimate internal consistency

split-half reliability and inter-item reliability

split-half reliability

splitting test in half to see if one half is equivalent to the other

range

spread from lowest to highest point

test-retest

stability of test over time

structured interview

standardized, job analysis-based questions that are asked of all candidates; increased reliability; allows for fair comparisons

why do most HR professionals give the performance appraisal systems of their companies a mediocre grade?

standards used may not be relevant to actual job tasks, ratings are unreliable, may reward employees who are narcissistic and self-promoting

recommendations for fair and just performance evaluations

start with job analysis to develop criteria, communicate performance standards in writing, recognize separate dimensions of performance rather than just one "overall" rating, use both objective and subjective criteria, give employees access to appeal, use multiple raters rather than one rater, document everything pertinent to personnel decisions, train the raters, if possible (if not, give them written instructions for conducting the performance appraisal)

correlation coefficient (r)

strength of the relationship between two variables

Experimental Design

Involves randomly assigning participants to conditions, varying each condition, and then measuring the outcome in each condition COMPLETELY RANDOM

Lexical Hypothesis

The idea that, if people find something important, they will develop a word for it, and therefore the major personality traits will have synonymous terms in many different languages. Gordon Allport (with Henry Odbert) compiled the original list of 4,504 trait words

Mediator

The intervening variable that explains the relationship between the independent variable and the dependent variable.

Science

The logical approach of investigation, usually based on a theory or hypothesis.

Discriminant Validity

The measure is only modestly or negatively correlated with measures of dissimilar constructs

What an I/O Psychologist does

They use psychological principles and research methods to solve problems in the workplace and improve the quality of life. They study workplace productivity, management, and employee working styles.

When to use ANOVA

This is used when the experimenter wants to study the interaction effects among the different test data

reliability

how consistently a test will measure the variable of interest - if you took the same test twice would it give you the same score?

current base rate

how many current employees considered successful

what does science try to describe?

how things happen

retraining incumbent employees

identifying low performers and developing employees for promotion

what does a job analysis do?

identifies the KSAOs required for the job and is then used to determine if job applicants have those required KSAOs

how to begin with job analysis?

identify relevant criteria for job success and then locate, develop, create, or modify predictors that are valid indicators of criteria

example of self-efficacy

if an elderly worker is convinced that computers are too complicated, she won't learn to use them well

time frames

if different raters are making ratings at different times they may not be judging the same set of tasks and behaviors

Internal Consistency (aka inter-item reliability)

if the items on a test measure the same construct, they should all correlate with one another

80% rule of thumb

if the selection rate of one particular group is less than 80% of the majority group, that may indicate discrimination
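The rule of thumb can be sketched as a simple check; the applicant counts below are hypothetical, not from the deck.

```python
# Illustrative sketch: 80% (four-fifths) rule of thumb for adverse impact.

def adverse_impact_flag(hired_a, applied_a, hired_b, applied_b):
    """Return True if group A's selection rate falls below 80% of group B's."""
    rate_a = hired_a / applied_a
    rate_b = hired_b / applied_b
    return rate_a < 0.80 * rate_b

# hypothetical numbers: 20 of 100 group-A applicants hired vs. 40 of 100 group-B
print(adverse_impact_flag(20, 100, 40, 100))  # 20% is below 80% of 40% -> True
```

As the deck notes elsewhere, a flag from this check only *may* indicate discrimination; adverse impact can sometimes be justified by other work-related reasons.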

equal pay act

illegal to provide unequal pay and benefits to men and women who are holding jobs that are equal - there must be sizable and reasonable differences in work to support unequal pay

serious health condition

illness, injury, impairment, or physical or mental condition that involves in-patient care or continuous treatment by a health care provider

realistic job preview (RJP)

important for recruitment that applicants get a realistic view of what it would be like to work for the organization (organizations may want to "look good" to applicants, but if applicants feel deceived after joining the organization their work performance and job satisfaction will decrease)

common-metric questionnaire (CMQ)

improvement over PAQ, items are more behaviorally specific, reading level is lower, applies to both managerial and non managerial jobs

why doesn't presence of adverse impact alone not indicate illegal discrimination?

it can sometimes be justified for other work-related reasons

Convergent Validity

it should correlate highly with conceptually similar measures

Divergent Validity

it should not correlate too highly with conceptually dissimilar measures

correct rejections

it wasn't there, and we didn't see it

false alarms

it wasn't there, but we thought it was

hits

it's there, and we saw it

power test

items are more difficult, test taker is expected to be able to complete all items within the testing time

what is the most important building block for I/O psychology (performance appraisal, selection, training) and the foundation for most work in I/O?

job analysis

what is training needed for?

job or organization-specific training, changes in technology, and continued development of employees

what does KSAO stand for?

knowledge, skills, abilities, other

experimental methods

lab experiments (high control = bringing people into a lab), field and quasi-experiments (less control)

con of archival research?

lack of control over quality of data is a concern

disadvantages of graphic rating scales

lack of precision in dimensions and lack of precision in anchors

why does central tendency occur?

laziness of raters, lack of knowledge about actual performance, easiest thing is to give an average rating (especially if there is nothing particularly outstanding about the employee), format of rating scale

types of distributional errors?

leniency, central tendency, and severity

cognitive ability tests

most frequently used predictors in selection because mental functioning or intelligence is important for most jobs

lab experiments

most take place in a contrived setting for control, very high internal validity, external validity (generalizability) questioned

variance

most useful measure of dispersion

Group Interview

multiple applicants answering Qs during same interview

task

multiple elements combined toward a specific objective

Panel Interview

multiple interviewers interviewing same applicant simultaneously

example of a job

multiple kinds of professors

more of task inventory approach

rated by incumbents on performance, importance/criticality, relative time spent on job performing task

Validity

refers to the accuracy of a measurement instrument and its ability to enable accurate inferences about something else (e.g., job performance)

mechanical ability

tests abilities involving mechanical relations, recognition of tools used for various purposes, and sometimes actual mechanical skills

weighted checklist

series of items previously weighted on importance or effectiveness, items indicate desirable and undesirable behavior

what should you do even if you are using a compensatory approach?

set minimum cutoff scores to exclude those below a certain standard - combination of multiple cut-off and multiple regression
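The combined approach can be sketched as follows; the cutoff values and weights below are hypothetical stand-ins for values that would come from a validation study and regression equation.

```python
# Illustrative sketch: compensatory scoring (weighted composite) combined with
# minimum cutoffs. All numbers are hypothetical.

CUTOFFS = {"cognitive": 50, "interview": 40}    # minimum acceptable scores
WEIGHTS = {"cognitive": 0.6, "interview": 0.4}  # stand-ins for regression weights

def composite_score(scores):
    """Return a weighted composite, or None if any minimum cutoff is missed."""
    if any(scores[test] < cut for test, cut in CUTOFFS.items()):
        return None  # excluded before compensation can occur
    return sum(WEIGHTS[test] * scores[test] for test in WEIGHTS)

print(composite_score({"cognitive": 80, "interview": 45}))  # passes both cutoffs
print(composite_score({"cognitive": 90, "interview": 30}))  # fails interview cutoff
```

The cutoff check runs first, so a very high cognitive score cannot compensate for an interview score below the minimum standard.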

motivational system to improve performance of employees

setting objective goals, continuous coaching and feedback, performance appraisal, developmental planning

what does the CRA not cover?

sexual orientation or gender identity

distributional errors

should expect performance to be normally distributed - expect some are great, some are poor, but most are average

what is the goal of performance appraisals?

should have a purpose, personnel decisions, feedback and training, research or legal purposes

whole versus part learning

should it be learned all at once? should it be learned piece by piece?

clerical ability

tests both perceptual speed and accuracy in processing verbal and numerical data

Biodata

Refers to instruments that collect biographical information from job applicants/interviewees

examples of reaction criteria

does the process seem fair? is the process emotionally upsetting? do people feel intimidated by the process?

parallel forms reliability

extent that two different tests measure the same attribute (important to measure that the two tests are measuring the same thing)

construct validity

extent to which a test measures the underlying construct it was intended to measure

behaviorally anchored ratings scales (BARS)

similar to graphic rating scales but includes anchors and examples of behaviors that would fit ratings, include critical incidents

multiple hurdle

similar to multiple cut-off, but predictors are given in a specific order, must pass each test to receive the next test

parsimony

simple and elegant, but can explain a lot

who should rate an employee's performance?

upper management, middle management, direct supervisor, peers, subordinates, support staff, customers/public, vendors, self

when should a new selection battery be used?

use a taylor-russell table to estimate improvements

ways to make material meaningful

use job-relevant examples and present information in a logical order

methodology

used to conduct quantitative literature reviews (previously only narrative reviews were conducted)

validation study

used to determine the extent to which a predictor (or predictors) is related to a criterion

signal detection theory

used to distinguish a "signal" from the "noise"

documentation of organizational decisions

used to keep track of employees' performance patterns over time, provides detailed account of inadequate performance

cross-validation

using the same predictors and performance criteria with a different sample, will involve similar jobs

interviews (investigator-administered)

usually conducted face-to-face or sometimes over the phone

information needed for estimating utility

valid selection battery, organization's base rate information, and selection ratio

Predictors

variables about applicants that are related to the criteria: -predictors for successful basketball performance could be height and physical speed

hostile work environment harassment

verbal or physical behavior that creates an intimidating, hostile, or offensive work environment that interferes with one's job performance

quasi-experiments

very common in IO psych, field experiment without random assignment, not always practical to randomly assign participants; use of intact groups (not randomly assigned)

individualists

view self as independent and less connected to others

collectivistic

view self as interdependent and more connected to others

types of technological advances with surveys

web-based surveys and experience sampling methodology (ESM)

effective performance appraisals

well-received by rater and ratees, based on carefully documented behaviors, focused on important performance criteria, inclusive of many perspectives, focused on improving employee performance

manipulation

the systematic control, variation, or application of one or more independent variables

questions to consider when doing a performance appraisal

what is the goal of doing the appraisal? who will provide the ratings? how will ratings be made?

what does science try to control/influence?

what might happen

what are ratings distorted by?

what the rater pays attention to, how they interpret or encode the info, what they remember (what they don't), and what they value about the job (or themselves)

independent variable (IV)

what you manipulate to cause an effect (the predictor)

when is causal inference made?

when data indicate that a causal relationship between two variables is likely

severity

when raters only use the low end of the rating scale (some raters may consistently give much lower ratings compared to other raters)

what does science try to explain?

why things happen

Machiavellianism

willingness to use deceit, manipulation, and exploitation to get what one wants, without regard for morality

different forms of paper-and-pencil tests

essay, MC, true or false, short answer

two classes of cognitive ability tests

general cognitive ability tests and specific cognitive ability tests

what can faulty criteria result in?

poor organizational decisions

what does correlation coefficient (r) provide information about?

the direction and the magnitude of the relationship

dependent variable (DV)

the effect you want to measure (the outcome)

induction

theories can be created from data

example of objective goals

20 cars produced, 40 units sold, etc.

review of court decisions (1980-1995)

300 court decisions related to performance appraisals

what percent of applicants "fake" personality tests?

33%-50%

Hugo Munsterberg

A founding father of I/O and wrote the first textbook applied to I/O in 1913

G factor

A general ability, proposed by Spearman as the main factor underlying all intelligent mental activity

fluid intelligence

A general learning and reasoning ability, which usually decreases with age

Correlation

A measure of the extent to which two factors vary together, and thus of how well either factor predicts the other.

Criterion Validity

A property exhibited by a test that accurately measures performance of the test taker against a specific learning goal.

Regression

A reversion to immature patterns of behavior

Hawthorne Studies

A series of studies during the 1920s and 1930s that provided new insights into individual and group behavior

Meta-Analysis

A statistical method for combining and analyzing the results from many studies to draw a general conclusion

Walter Dill Scott & Walter Van Dyke Bingham

Adapted the well-known Stanford-Binet test, which was designed for testing one individual at a time, for group administration

what letter grade do most HR professionals give the performance appraisal systems of their companies?

C = mediocre

what were identified "fakers" more likely to engage in?

CWBs, having lower ratings of integrity, and poorer job performance

quasi-experimental design

Can manipulate the Independent Variable, but cannot randomly assign individuals to conditions PART CONTROLLED PART RANDOM

Data Analysis Techniques

Different types of techniques used for descriptive statistics and for exploratory and confirmatory data analyses, which enable researchers to make inferences

Construct

An abstract attribute or characteristic (e.g., intelligence, motivation) that cannot be observed directly and must be inferred from measurements.

non-experimental design

Does not include manipulation of variables or assignment to different conditions NO MANIPULATION, NO RANDOM ASSIGNMENT

Correlation (v causation)

Does not mean causation; two variables may be related to each other without one causing the other

examples of exceptions to the ADEA

EEOC vs. university of texas health science center at san antonio (1983) - upheld maximum age of campus police officers and the age 60 rule

example of a paper-and-pencil test

GREs

Thurstone

He said that cognitive ability (intelligence) was made of several primary factors such as spatial, numerical, memory, and reasoning

Multiple Correlation

How well a given variable can be predicted using a linear function of a set of other variables. This is the correlation between the variable's values and the best predictions that can be computed linearly.

Elton Mayo (Hawthorne effect)

It is the process where human subjects of an experiment change their behavior, simply because they are being studied.

multiple regression

It is used when we want to predict the value of a variable based on the value of two or more other variables

Taylorism

Jobs can be studied and ranked by comparing different methods of doing the same work. This is also known as "the one best method": looking for maximum efficiency, and hiring employees according to personality and performance

Criterion

The dependent variable that is the (expected) outcome. Employee trust would be an independent variable and the dependent variable would be job performance (they're linked)

mean

arithmetic average of a group of scores

observational methods

common in field settings, data will be correlational only, does not involve random assignment or manipulation of independent variables, cannot make causal inferences

miss

it was there, but we didn't see it

recruitment

organizations want to recruit high quality applicants (the quality of who gets hired will depend on who applies), but applicants also evaluate how much they want to work for the organization (if high quality applicants don't find the job or organization appealing, they won't apply)

other types of errors

recency, primacy, and similar-to-me

self-perceptions

self-perception of the rater can influence how others are appraised; individualistic vs. collectivistic selves; raters with a more individualistic self may feel less discomfort with evaluating others and be less lenient

what is also prohibited by the CRA?

sexual harassment

standard deviation

statistic describing how far a typical score falls from the mean
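The central-tendency and dispersion statistics defined across these cards (mean, mode, range, variance, standard deviation) can be computed directly with Python's statistics module; the scores below are hypothetical.

```python
# Illustrative sketch: descriptive statistics on a small hypothetical score set.
from statistics import mean, mode, pvariance, pstdev

scores = [70, 75, 75, 80, 90]

print(mean(scores))               # arithmetic average of the group of scores
print(mode(scores))               # most frequent single score
print(max(scores) - min(scores))  # range: spread from lowest to highest point
print(pvariance(scores))          # variance (population formula)
print(pstdev(scores))             # standard deviation: typical distance from the mean
```

Note the outlier-sensitivity point from the deck: adding one extreme score would shift the mean and inflate the variance, while the mode would be unchanged.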

validity generalization (VG)

used to demonstrate that test validities do not vary across situations

benefits of multi source feedback

when multiple raters are used participants are happier because they are involved in the process, the idiosyncrasies or biases of any single rater are overcome, there are multiple perspectives for a broader/more accurate view of performance

when is error training effective?

when people are high in openness

when is error training NOT effective?

when people are low in neuroticism

what does science try to predict?

when things will happen

what do errors in error training lead to?

a greater understanding of why correct responses are correct

interviews

communication process where both the organization and the candidate gather information

what is parallel forms reliability important for?

developing tests for different populations

four types of responses

hits, misses, false alarms, correct rejections

paper-and-pencil tests

most cognitive ability tests are in this format, many now in computerized form

mode

most frequent single score in a distribution

deduction

theories lead you to the data

big five personality traits

(O)pen to new experiences, (C)onscientious, (E)xtraversion, (A)greeableness, (N)euroticism

overall decision accuracy equation

(hits + correct rejections) / total number of applicants
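A worked example with hypothetical counts; accuracy is the proportion of correct decisions (hits plus correct rejections) out of all decisions made.

```python
# Illustrative sketch: overall decision accuracy from the four
# signal-detection outcomes, using hypothetical selection counts.
hits = 30                # predicted success, and the person succeeded
misses = 10              # predicted failure, but the person would have succeeded
false_alarms = 15        # predicted success, but the person failed
correct_rejections = 45  # predicted failure, and the person would have failed

total = hits + misses + false_alarms + correct_rejections
accuracy = (hits + correct_rejections) / total  # correct decisions / all decisions
print(accuracy)  # 75 correct decisions out of 100 applicants
```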

magnitude of correlation coefficient (r)

-1 to +1

Big 5 Personality Trait

-conscientiousness is personality trait most associated with job performance across different jobs -neuroticism also predicts job success (high neuroticism predicts it negatively)

Employee Recruitment Methods

-online search ads -help-wanted ads -current employee referrals -job fairs etc.

Curvilinear Relationship

-relationship between success at sales and extraversion -conscientiousness may have this relationship to job performance

Types of Tests

-tests of ability (cognitive, mechanical, and physical) -tests of achievement (job skills and knowledge tests) -miscellaneous tests (polygraphs, integrity/honesty tests) -personality tests

three criteria for causation

1. The cause must come before the effect. 2. The two variables must occur together. 3. The link between the two variables cannot be explained by the influence of any other variable.

person with a disability

1. has a physical or mental impairment that substantially limits one or more major life activities 2. has a record of such an impairment 3. is regarded as having such an impairment

job elements

KSAOs required for successful job performance

what isn't AAP?

NOT a quota system; purpose is to address historical discrimination

5 steps for developing BARS

SMEs identify and define several dimensions that are important for the job, another group generates a series of behavioral examples (critical incident technique), sort critical incidents into appropriate dimensions (retranslation stage), rating behavioral examples on effectiveness, choose items that represent performance levels

Estimating Reliability

Reliability is estimated as the proportion of Observed Score Variance that is True Score Variance (differences in scores due to real differences between people); Error Variance is the portion due to faults in measurement

When to use a T Test

Use when all you want to do is to compare means, and when its assumptions are met.

Spearman

Used factor analysis and made tests of various mental abilities, which showed strong positive correlations

example of teaching in a way that is not effective for the learning goals

a lecture may not be the best training to do surgery

correlation coefficient

a numerical measure of the extent to which two factors vary together, and thus of how well either factor predicts the other

incremental validity

The gain in prediction from adding another predictor that is not related to the original predictor; e.g., a large gain in incremental validity could come from combining a math ability test with a measure of social skills.

multiple regression

a compensatory approach, high scores on some predictors can compensate for low scores on other predictors

R

a composite score using multiple predictors to describe a linear relationship (from a regression equation)

Realistic Job Preview

a recruitment technique that acquaints prospective employees with a job's daily tasks, duties, and responsibilities, providing them with an accurate description of the positive and negative aspects of a job -related to: --higher job satisfaction --better job performance --lower turnover

Core Self Evaluation

a single construct combining four related existing constructs -self-esteem -generalized self-efficacy -locus of control -emotional stability *good scores are related to better job performance and satisfaction, higher motivation and income, reduced stress and burnout, etc.

attribute

a specific trait or characteristic that varies across individuals and can be measured

Factor Analysis

a statistical procedure that identifies clusters of related items (called factors) on a test; used to identify different dimensions of performance that underlie a person's total score.

meta-analysis

a study of other studies

example of incremental validity

a test PLUS an interview

what are tests, like the focus-2 and meyers-briggs basing their measurements upon?

a theory, which is based upon data gathered by testing

Situational Judgement Test

a type of test that describes a problem to the test taker and requires the test taker to rate various possible solutions in terms of their feasibility or applicability

Moderator

a variable that, depending on its level, changes the relationship between two other variables

what is the validity co-efficient for personality and job performance?

about r = .23

"amazing" selection battery

accounts for 70% of the variance

how is the variance amount in a criterion variable accounted for?

accounted for by a predictor variable

what does evidence suggest about general cognitive ability tests?

accounts for a large proportion of variance in criterion performance (validity coefficient = .53, i.e., r^2 ≈ .28), predicts performance similarly across countries

realistic job preview (RJP)

accurate glimpse of what the job is actually like

individual tests

administered one person at a time, very costly (time and money)

quid pro quo harassment

advancement or continuation is contingent on sexual favors

importance of person-environment (PE) fit

agreement or match between an individual's KSAOs and values and job demands/organization characteristics

what do IRB committees examine?

all research protocol to determine if the research is ethically appropriate

taylor-russell tables

allow organizations to estimate whether using a new selection battery will be effective

ledbetter fair pay act (2009)

allowed those who have been affected by unequal pay discrimination to sue for damages without any statute of limitations

family and medical leave act (FMLA)

allows eligible employees to take job-protected, unpaid leave for up to 12 weeks for birth of a child, serious health condition of a family member or one's own serious health condition

regression

allows prediction of one variable from another

interviews

among the most popular selection devices and typically used across all jobs

r^2

amount of outcome explained by the predictor (the variance "accounted for")

comparison between two or more groups

an "experimental" and "control" group

undue hardship

an accommodation that would result in significant difficulty or expense given the employer's size and financial resources

error training?

an alternative form of training that encourages making errors during learning - the idea: discoveries can be made from making mistakes

Personality

an individual's characteristic pattern of thinking, feeling, and acting

personality

an individual's traits or predispositions to behavior in a particular way across situations

narrative comments

analyses of written comments and feedback in addition to ratings; research finds that supervisors and subordinates provide better (less vague) written feedback

archival research

answering a research question using existing, secondary data sets (looking for patterns not previously noticed)

Serial Interview

applicant has series of single interviews with different interviewers

faking

applicants may try to describe themselves in ways they think will look good to employers, faking reduces the predictive validity of testing

two ways biographical information is collected

application blank and biodata questionnaire

example of applied

applied social psych has been used to help design advertising campaigns and behavioral interventions

Multiple Regression Model

approach that allows predictors to be combined statistically -issues: based on correlations b/w predictors and criterion -assumes high scores on one predictor compensate for low scores on another

Criteria

are measures of job success/performance: -a criterion for successful basketball performance might be the number of baskets made

how can defendants (organizations) argue against adverse impact?

arguing that incomplete or incorrect data were used in the case against them, and by making a business necessity defense stating that the selection battery was entirely job-related and treats both minority and majority applicants fairly

Job Knowledge Tests

assess applicants' knowledge related to the particular job

in-basket

assessees respond to a series of job-related scenarios and information that would typically appear in a manager's in-basket, taking action and making decisions about how to proceed

measurement

assignment of numbers to objects or events using rules in a way that represents specified attributes of the objects

biographical information

assumes past behavior is best predictor of future behavior

integrity tests (honesty)

attempt to predict whether an employee will engage in counterproductive or dishonest work-related behaviors

key steps of the performance management cycle

attending to the feedback, processing the feedback, and using the feedback

Weighted Application Forms

background info can be translated into numerical values to compare the qualifications

recommendations for implementing 360-degree feedback

be honest about how ratings will be used, help employees interpret and deal with the ratings, avoid presenting too much information

discoverability

behavior can be experienced, examined, and discovered; it is possible to observe cause and effect relationships

determination

behavior is orderly and systematic, there is a reason why things happen (cause and effect)

position analysis questionnaire (PAQ)

best known job analysis method, useful in describing many jobs, standardized questionnaire describing general work behaviors, work conditions, and job characteristics

"common law" practice

both employers and employees have the right to initiate or terminate employment for any reason at any time

who uses organizational politics?

both supervisors and subordinates - bosses may manipulate performance appraisals for political reasons (playing favorites with certain employees or departments)

active learning

build on the idea that learning should be an active process, learning by doing, learning is enhanced by being engaged with the process

how is interrater reliability measured?

by examining the correlation between ratings of two judges

how is decision accuracy maximized?

by having highly valid selection batteries

testability

can be empirically verified

generative

can generate new theories and knowledge

usefulness

can help describe, predict, and explain things

empiricism

can use systematic testing to discover cause and effect, can generate theories, make predictions, and test those predictions to support or refute theories

multiple cutoff

candidate must score high on each predictor, non-compensatory

experience sampling methodology (ESM)

captures momentary attitudes and psychological states, participants respond to notifications throughout the day on their phones

reasonable accommodations

changes or exceptions that would allow the qualified disabled individuals to successfully do the job

learning and memory studied by cognitive, social, and experimental (behavioral) psychologists

classical and operant conditioning, components of memory, memory biases

integrity tests

clear empirical support found, appear valid for predicting both counterproductive behavior and job performance, faking also appears to be possible on some integrity tests

do co-workers or supervisors tend to be more lenient with ratings?

co-workers

synonym for parallel forms reliability

coefficient of equivalence OR equivalent forms reliability

steps for a concurrent validity study

collect predictor and criteria data from incumbents (at the same time), compute the validity coefficient (correlation between predictors and criteria)

Employee Screening

collecting and reviewing info about job candidates to help select best individuals for jobs -data sources such as resumes, job applications, letters of recommendation, employment tests, and hiring interviews can be used in screening and selecting candidates -any source that's used should be reliable and valid

selection battery

collection of predictors (tests) used to make hiring decisions (better predictions can be made when more than one test is used) - important that each test used is relevant to the job

job

collection of similar positions in an organization

position

collection of tasks assigned to a specific individual in an organization

mental measurements yearbook

collects info about thousands of tests, including what they test, their reliability/validity, and cost to administer

raters don't often agree

common rating errors, different standards and comparisons, observation of different behaviors

3 factors that characterize experimental methods

comparison between two or more groups, random assignment, and manipulation

self-administered questionnaires

completed by respondents in absence of researcher

types of testing

computer adaptive testing, speed versus power tests, individual versus group tests, paper-and-pencil versus performance tests

computer adaptive testing

computer-administered tests that alter the difficulty level of questions depending on how well the test taker is doing, score on earlier questions affects the difficulty of subsequent questions, allows for quick and accurate scoring, believed to provide more precise measurement

basic research

concerned with trying to gain knowledge in its own right, aim is to gain greater understanding of a phenomenon

applied research

concerned with using current understanding of a phenomenon in order to solve a real-world problem

example of basic research

concerns how social information influences behaviors in social psych

synonym for extraneous variable

confounding variable

internal consistency

consistency among test items

definition of reliability

consistency or stability of a measure

interrater reliability

consistency with which multiple raters view, and thus rate, the same behavior or person

evidence used to demonstrate construct validity

content validity and criterion-related validity

validity coefficient (r)

correlation between a predictor and criterion, evidence that the test is valid
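The validity coefficient on this card is just a Pearson correlation between predictor scores and criterion scores. A minimal sketch in plain Python (the score values are invented for illustration):

```python
# Sketch: validity coefficient (r) = Pearson correlation between a
# hypothetical selection test (predictor) and job performance (criterion).
# All data values below are invented illustration numbers.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

test_scores = [55, 62, 70, 75, 81, 90]        # predictor
performance = [2.1, 2.8, 3.0, 3.4, 3.9, 4.5]  # criterion
r = pearson_r(test_scores, performance)
print(round(r, 2))  # high positive r = strong evidence of validity
```

A real validation study would of course use many more cases and an established stats library.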

assessment centers

created in Germany (1930s) and refined during WW2 by Britain and the United States

why is overlearning good?

creates mastery, allows for effective performance even in stressful or unexpected situations (KSAs become more "automatic" with overlearning)

synonyms for dependent variable

criteria, outcome, or consequence

types of archival research

cross-sectional data and longitudinal data

reasons for training and development

current needs, anticipated needs, orientation of new employees, retraining incumbent employees, implementing new technologies, legal compliance, compensating for gaps in education, organizational culture

who can be a SME?

current workers (incumbents) or others with job expertise or knowledge

continuous employee development

cyclical process in which employees are motivated to plan for and engage in actions or behaviors that benefit their future employability

3 dimensions on which tasks are evaluated

data, people, things

elements of the selection process that allow assessment of utility

decision accuracy, validity, base rate, selection ratio

divergent validity

degree to which measure is NOT related to measures of dissimilar constructs

convergent validity

degree to which measure is related to or predicts measures of other similar constructs

what is the variable an experiment is designed to assess?

dependent variable

r

describes linear relationship between predictor and outcome

decision accuracy

describes the percentage of hiring decisions that were "correct"

task oriented approach

describing the various tasks performed on the job

what are the two goals of case studies?

description and explanation - but are not typically used to test hypotheses

why does severity occur?

desire to appear "tougher" and false impression that criticism will motivate workers

what do case studies provide?

details about a typical or exceptional firm or individual

what are the 3 assumptions of science?

determination, discoverability, and empiricism

dictionary of occupational titles (DOT)

developed using FJA for the department of labor in the 1930s, tool that matches people with jobs, narrative descriptions of tasks, duties, and working conditions of ~12,000 jobs, hierarchical organization of jobs

occupational information network (ONET)

developed by department of labor to replace the DOT; database of over 950 occupations, identifies and describes key components of modern occupations, based on data gathered in a variety of ways; hybrid approach, provides highly accessible, searchable online database

concerns about success/effectiveness of FMLA

difficult to understand and define what classes of illness qualify under FMLA and some employees simply cannot afford to take 12 weeks unpaid

job element method (JEM)

directly connects job analysis to the selection context through KSAOs

what were judgements in favor of plaintiffs most commonly linked to?

discrimination or unfairness due to age, gender, or race

task practice all at once or over time?

distributed practice and massed practice

demographic (specific groups)

do certain groups of employees need training? technology training for older workers, specialized training for disabled workers, remedial education for poorly educated or non-native english speakers

task

do employees need new KSAOs?

person (individuals)

do some particular employees need training? improving those who are low performers and training to promote those who are high performers

what is the key question with validity shrinkage?

does it shrink by a little or a lot?

criticism of JEM

does not specify job tasks

readiness

does the learner have the pre-requisite KSAOs?

why does effectiveness of error training vary?

due to learner characteristics

paired comparisons

each employee compared to every other employee (comparisons made one pair at a time), useful with a small number of employees, becomes too cumbersome with many employees

advantages of checklists

easy to develop and easy to use

advantages of graphic rating scales

easy to develop and easy to use

pros of self-administered questionnaires

easy, can be used for very large groups, answers can be anonymous

what does psychology use?

empirical methods to gather data, generate theories, and make predictions

providing performance feedback

employee development is largely a function of their receptivity to feedback and the organization's approach to, or emphasis on, feedback

rank ordering

employees ranked from best to worst, useful for promotion decisions, subject to bias

continued development of employees

employees remain high performers with training, investing in employee training increases their job satisfaction and commitment to organization, increases productivity and decreases turnover

random assignment

ensures both groups start out the same, each participant has an equally likely chance of being assigned to each condition

interrater

equivalence of two raters

parallel forms

equivalence of two test forms

evaluating another individual's performance accurately and fairly is difficult

errors often result from this process and understanding of these errors is important to appreciating the complexities of performance appraisal

what should effective trainers do?

establish specific objectives and communicate these clearly to the trainees, possess solid understanding of how people learn and the role played by approach or style, display effective communication skills, realize that different trainees may require different style or different treatment from the trainer

employee comparison procedures

evaluation of employees with respect to how they measure up to, or compare with, other employees

why do we need to train?

even the best selection batteries are limited in their ability to predict performance, there will still be variation in how people actually perform

case studies

examination of single individuals, groups, companies, or societies

worker-oriented approach

examining broad human behaviors involved in work activities (physical, interpersonal, mental factors) - characteristics of a person to be good at a job

inter-item reliability (Cronbach's alpha)

examining correlations among all test items to determine consistency
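Cronbach's alpha can be computed directly from item scores: it compares the sum of the item variances to the variance of respondents' total scores. A minimal sketch, with invented response data:

```python
# Sketch of inter-item reliability (Cronbach's alpha). The item responses
# below are invented illustration data, not from the text.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

def cronbach_alpha(items):
    """items: list of per-item score lists (each inner list = one test
    item, one score per respondent)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# 3 items, 5 respondents; respondents answer consistently across items,
# so the items "hang together" and alpha comes out near 1
items = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 3, 1, 4],
]
print(round(cronbach_alpha(items), 2))
```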

critical incidents

examples of both poor and excellent performance, usually collected through logs; employee performance records can also be used

2 types of research designs

experimental methods and observational methods

Face Validity

extent to which a measure is subjectively viewed (by applicants, hiring managers, etc.) as being a reasonable/relevant measure -tests of general or job specific abilities/skills usually score high -when applicants think a measure lacks face validity, they often deem it unfair

concurrent validity

extent to which a test predicts a criterion that is measured at the same time the test is administered

implicit person theory (IPT)

extent to which an individual believes that other people can change impacts performance appraisal, supervisors who believe that people can change are perceived by subordinates as more fair and just in their appraisals

internal validity

extent to which causal inferences can be drawn about variables, requires more control to minimize confounds

external validity

extent to which results generalize to other people, settings, time - means less control, but more like the real-world

predictive validity

extent to which test scores obtained at one point in time predict criteria obtained at some later time

data

extent to which the job requires handling information, ideas, and facts

people

extent to which the job requires using interpersonal resources (understanding, courtesy, mentoring)

things

extent to which the job requires using physical resources (strength, speed, coordination)

Person-job Fit

extent to which the personality traits, interests, KSAOs of an individual match the requirements of the job they must perform

Person-organization Fit

extent to which the values of an employee are consistent with the values held by most others at the organization

public lawsuits over use of some systems

extent to which underrepresented groups tend to be disproportionately ranked in low category, forced distributions particularly problematic in this regard

internal validity

extent to which we can draw cause-and-effect inferences from a study

external validity

extent to which we can generalize findings to real-world settings.

what is any other variable that can contaminate results or is an alternative causal explanation?

extraneous variable

Army Alpha

first modern group intelligence test, adapted from the original Stanford-Binet Test (Walter Dill Scott and Van Dyke Bingham)

criticisms of worker-oriented techniques

focuses less on specific tasks and more on the human characteristics that contribute to successful job performance, more effective for comparisons across jobs

rater error training (RET)

focuses on describing errors and showing raters how to avoid them; reduces errors, but accuracy is not necessarily improved

frame of reference training (FOR)

focuses on enhancing raters' observational and categorization skills; establishes a common frame of reference, improves with behavioral observation training (BOT)

what are 3 ways to control?

hold extraneous variables constant, systematically manipulate different levels of extraneous variables (make part of experimental design), and use statistical control

Schneider's Attraction-Selection-Attrition/Retention (ASA) Model

focuses on fit -job seekers are attracted to orgs that have good fit with their personality/values -orgs select for applicants with certain kinds of values and personalities who will fit in at org -people who don't feel like they fit in at org are more likely to leave over time

situational interview

focuses on future behavior; interviewees are asked how they would handle work dilemmas or situations

behavior description interview

focuses on past behavior; interviewees are asked to describe specific ways in which they have addressed past situations

what are observational methods useful for?

for discovering and describing possible relationships

example of ledbetter fair pay act (2009)

for instance, if you discovered that you had been unfairly paid for 20 years... you can recover the lost wages even if you hadn't known previously

application blank

form requesting historical and current information; off-the-shelf forms are not a good idea unless validated

steps of the research process

formulate the hypothesis, design the study, collect data, analyze the data, and report the findings

James McKeen Cattell

founded the psychological corporation in 1921, the first to apply psychology to business and industry, coined term "mental tests"

what types of feedback are beneficial?

frequent feedback, positive feedback, and negative feedback

overlearning

frequently repeated practice leads to overlearning

steps for a predictive validity study

gather predictor data on all applicants, hire some of the applicants to fill open positions, after several months gather data (criteria for validation study), and compute a validity coefficient between the predictor score and the criterion score

hybrid method

gathering information about the work and the worker at the same time

general cognitive ability test

general form of intelligence

why does employee morale decrease with distributional errors?

good employees may feel slighted

examples of elements

grading, lecturing, and preparing a syllabus

rating formats

graphic rating scales, behaviorally anchored rating scales (BARS), checklists, employee comparison procedures

leaderless group discussion (LGD)

group exercise designed to tap managerial attributes, requires small group interaction, given an issue to resolve, no roles assigned, observed by assessors

biodata dimensions for position of customer service representative

helping others, negotiation skills, interpersonal intelligence, patience, empathy, extraversion, oral communication

leader-member exchange theory (LMX)

high LMX means frequent positive social interactions between supervisor and subordinate

pros of interviews

higher response rates and ambiguity about questions can be resolved

functional job analysis (FJA)

highly structured, task-oriented (job-oriented) approach, data obtained about what tasks a worker does and how those tasks are performed

correct

hiring the right people, but also not hiring the wrong people

equation for decision accuracy

hits / (hits + false alarms) OR hires that were correct/total number of hires
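The equation on this card is simple enough to sketch directly; the counts below are invented illustration numbers:

```python
# Sketch of the decision-accuracy formula from the card:
# accuracy = hits / (hits + false alarms), i.e. correct hires / total hires.
# The counts are invented for illustration.

def decision_accuracy(hits, false_alarms):
    """hits = hires who succeeded on the job;
    false alarms = hires who turned out unsuccessful."""
    return hits / (hits + false_alarms)

# 18 of 20 people hired turned out to be successful
print(decision_accuracy(18, 2))  # -> 0.9
```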

telework does not appear to decrease performance evaluations

in many organizations, employees are only eligible for telework after they have already demonstrated excellent work performance

when is interrater reliability relevant?

in performance appraisal

legal issues in performance appraisal

in the US it is illegal to discriminate in performance appraisals on the basis of non-performance related factors (such as: age, gender, race, ethnicity, religion, and disability)

two popular assessment center exercises

in-basket and leaderless group discussion (LGD)

incremental validity

increase in validity from adding a new test, may include the increase in validity gained by combining procedures

issues with multi source feedback

increasing in popularity BUT not very well researched, may be seen as more fair by employees BUT interrater reliability tends to be low

what is anything systematically manipulated or an antecedent to other variables?

independent variable

measures of dispersion

indicates how closely scores are grouped around the measure of central tendency

internal consistency reliability

indication of the relatedness of the test items - how well the test items hang together

feedback orientation (FO)

individual's overall attitude toward feedback or receptivity to feedback - includes perceptions of feedback utility, accountability to use feedback, social awareness through feedback, self-efficacy in dealing with feedback

parts of ethical considerations

informed consent and deception

developmental purposes

informs employees of their performance strengths/weaknesses, facilitates employee advancement (good for the organization - helps plan for the future)

Structured Interview

interviewer asks (1) a preplanned series (2) of job-related questions of each interviewee, and (3) has a standardized system for rating each response -removes much bias -can be a more valid predictor of job performance -higher face validity

types of surveys

interviews and self-administered questionnaires

predictive validity studies

investigate how effective predictors are at forecasting applicant job performance

when are realistic job previews better?

involving multimedia, focused on the career, representing multiple levels of the organization

cybervetting

involves assessment of an individual's suitability for a job based on internet information, use of social media in recruiting allows organizations to broaden their potential pool

multi source feedback (360 feedback)

involves multiple raters at several organizational levels (supervisors, co-workers, customers, self ratings), each employee rated by multiple people at different levels, feedback from the different raters provided to the employees

t-test

is a hypothesis test that is used to compare the means of two populations

ANOVA

is a statistical technique that is used to compare the means of more than two populations
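The two cards above (t-test and ANOVA) both compare group means. A minimal pooled-variance t statistic can be sketched by hand; the two samples are invented, and in practice one would use a stats library such as scipy rather than hand-rolling this:

```python
# Sketch of the two-sample (pooled-variance) t statistic for comparing
# two group means, e.g. performance of trained vs. untrained employees.
# The sample data are invented illustration numbers.
from math import sqrt
from statistics import mean, variance  # variance = sample variance (n-1)

def pooled_t(a, b):
    """t statistic for the difference between the means of samples a, b."""
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

trained   = [7.9, 8.4, 8.1, 8.8, 8.3]  # e.g. ratings after training
untrained = [7.1, 7.6, 7.3, 7.0, 7.5]
t = pooled_t(trained, untrained)
print(round(t, 2))  # a large |t| suggests the group means really differ
```

ANOVA generalizes the same idea to more than two groups by comparing between-group to within-group variance.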

self-efficacy

is the learner confident in their own ability to learn?

motivation to learn

is the learner motivated to learn? will they apply the effort necessary to learn well?

organizational

is the organization achieving its goals? what is the organizational culture? does the organization want to invest in its employees?

forced distribution

limit how many employees can receive top ratings, ratings based on a normal curve, only those employees in top 2% receive "excellent" ratings (2 standard deviations from the mean)

cons of self-administered questionnaires

low response rate (may lead to a biased sample) and responders can't ask questions for clarification

who will provide the ratings?

management, co-workers, subordinates, customers, or the employee themselves

group tests

many applicants can be tested at the same time, more cost effective

meaningfulness of material

material is learned and remembered better if it's meaningful to the learner, information should be presented in ways the learner can understand, relate to, or apply

deception

may be used in research if properly justified, participants may be misled to prevent them from guessing the purpose of the study

disgruntled employees

may decrease effort, ignore feedback, or leave the organization

adverse impact of public lawsuits

may indicate illegal discrimination against particular group, some groups are more negatively affected than others

employees who are narcissistic and self-promoting

may not reward the best workers, but instead the workers that brag the most about their accomplishments

example of approaches to selection

may not want to accept a student with an SAT verbal score of 0, even if they got a perfect math score

why don't ratings discriminate with distributional errors?

measure loses sensitivity, employees get lumped together, and it does not distinguish between good and bad workers

overt integrity test

measures attitudes toward theft and actual theft behaviors

performance appraisal discomfort scale (PADS)

measures how comfortable raters are with making ratings, raters who are uncomfortable with the process also tend to give more lenient ratings

psychomotor tests

measures of sensory abilities that evaluate the speed and accuracy of motor and sensory coordination

trust in the appraisal process survey (TAPS)

measures perception of how others are making ratings, raters who lack trust in the process tend to give more lenient ratings

personality-type integrity test

measures personality characteristics believed to predict counterproductive behavior

Multiple Cutoff Model

method of setting a minimum cutoff scores for each predictor -issues: assumes predictors are non-compensatory -have to determine where to set the cutoffs

Multiple Hurdle Approach

method that uses an ordered sequence of screening devices

3 parts of central tendency

mode, median, and mean

who is distributed practice good for?

more effective for learning, but may be less practical for organizations

concurrent validation studies

more viable than predictive validity studies

assessment centers

multiple raters evaluate applicants or incumbents on a standardized set of exercises that simulate the job, involves multiple methods of assessment, assessors, and assessees, lasts 2 to 3 days, used by many large companies, expensive (time and money)

longitudinal data

multiple time periods

biodata questionnaire

multiple-choice items, broad questions, asks many questions in broad areas; predicts performance well, r = .32 to .52

research conducted by universities

must be approved by an Institutional Review Board (IRB)

observational methods

naturalistic observation, archival research, and surveys

is perfect reliability possible in I/O psych?

no

unstructured interview

no consistency of questioning across applicants; less useful - tough to make comparisons

selection ratio

number of job openings divided by the number of applicants

selection ratio

number of jobs available / number of applicants
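The two cards above give the same formula; as a tiny sketch (numbers invented):

```python
# Sketch of the selection ratio from the cards above:
# ratio = number of job openings / number of applicants.
# The numbers are invented for illustration.

def selection_ratio(openings, applicants):
    return openings / applicants

# a low ratio (many applicants per opening) lets an organization
# be more selective, which increases the utility of a valid predictor
print(selection_ratio(5, 100))  # -> 0.05
```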

setting objective goals

objective meaning that it would be easy to count whether the goal was achieved

how will ratings be made?

objectively or subjectively (tend to be more biased)

naturalistic observation

observation of someone or something in its natural environment

unobtrusive observation

observer objectively observes individuals without drawing attention to themselves; does not try to blend in (or is unable to be seen)

participant observation

observer tries to "blend in" with those who are observed

central tendency

occurs when raters only use the midpoint of the scale and are less likely to give extreme ratings

Telephone Interview

often used to screen applicants

non-compensatory

one low score on any predictor would prevent hiring

griggs vs. duke power (1971)

one of the most important cases involving the civil rights act - griggs (african american male) was not hired and presented adverse impact (AI) data against african americans and duke power said AI was unintentional, but lost largely because job relatedness of its selection battery could not be demonstrated

cross-sectional data

one point in time

what is a validation study initially based on?

one sample

what is the description from a case study based upon?

only on a single individual or organization of concern

The Big Five

openness, conscientiousness, extraversion, agreeableness, neuroticism

first impression (primacy effect)

opposite of recency, raters pay too much attention to initial experiences with ratee

what are internal and external validity often?

inversely related - the more you manipulate the situation, the less it is like the real world

issues with cybervetting

organization may learn things about applicants that are not relevant to the job, but may bias their hiring decisions

feedback environment (FE)

organizational culture can influence the effectiveness of feedback (employees are more responsive to feedback if they are more satisfied with the organization); feedback that is seen as credible, fair, and high quality is linked to more satisfaction with the process; greater satisfaction with the process is linked to greater motivation to use the feedback to improve performance, more loyalty to the organization, and more OCBs; greater satisfaction also makes employees more likely to seek feedback on performance

civil rights act (CRA) 1964, 1991

originally passed in 1964 and updated in 1991 - makes it illegal for employers with 15 or more employees to discriminate in the workplace based on race, religion, national origin, or sex

crystallized intelligence

our accumulated knowledge and verbal skills; tends to increase with age

traits of a good theory?

parsimony, precision, testability, usefulness, and generativity

web-based surveys

participants can answer quickly, easily, and on their own time, can easily survey thousands of responders

informed consent

participants should have sufficient information about the procedures to appropriately judge whether they want to participate

individualistic vs. collectivistic selves

people vary in the degree to which they feel connected to others, varies with gender and culture

what does SME reference?

people who have expertise with the job

vendors

people from whom you purchase things for your organization

base rate

percentage of current employees who are successful on the job, reflects the quality of previous selection batteries and provides a basis of comparison for new battery

decision accuracy

percentage of hiring decisions that are correct

coefficient of determination (r^2)

percentage of variance accounted for by the predictor
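A quick numeric illustration of this card: squaring the validity coefficient gives the share of criterion variance the predictor accounts for. The r value below is invented:

```python
# Sketch of the coefficient of determination (r^2): the proportion of
# variance in the criterion accounted for by the predictor.
# The validity coefficient here is an invented illustration value.
r = 0.5              # validity coefficient
r_squared = r ** 2   # 0.25
print(f"{r_squared:.0%} of criterion variance accounted for")
```

So even a respectable validity coefficient of .50 accounts for only a quarter of the variance in performance, which is one reason selection batteries combine several predictors.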

trust and justice

perception of fairness in the process will impact performance appraisals, if raters believe that other raters are using a different standard they may adjust their ratings

what happens when LMX is low?

performance appraisals matched objective criteria

organizations (defendants) were most likely to win cases when:

performance appraisals were based on job analysis, written instructions for performance appraisal process were provided, employees had opportunity to review their appraisals, multiple raters agreed on performance ratings, rater training was used

feedforward interviews (FFIs) for performance appraisal

performance feedback that emphasizes the employee's strengths over pointing out weaknesses, idea is that positive feedback is better for encouraging improvement

Person-situation Interaction

personality interacts with situation to predict outcomes -to know relationship b/w personality and outcome, you need to know the situation

purposes of performance appraisal

personnel decisions, developmental purposes, documentation of organizational decisions

age 60 rule

pilots/copilots cannot be employed on or after age 60

developmental planning

planning for future tasks to achieve or future skills to learn

which direction can correlation coefficient (r) be?

positive or negative

exceptions to the ADEA

possible when a company can demonstrate that age is a bona fide occupational qualification (BFOQ)

affirmative action plans (AAPs)

practice employed by organizations to increase the number of minorities or protected class members, range from relatively uncontroversial to very controversial

massed practice

practice or studying completed in one session, may "cram" lots of information at one time

distributed practice

practice or studying distributed over time, involves multiple sessions

advantages of BARS

precise and well-designed scales (good for coaching) and well received by raters and ratees

advantages of employee comparison methods

precise rankings are possible and useful for making administrative rewards on a limited basis

synonyms to independent variable

predictor, precursor, or antecedent

predictors must be used as substitutes for criteria

predictors are used to forecast criteria

characteristics of reliability

predictors must be measured reliably, measurement error renders measurement inaccurate or unreliable, outcomes cannot be accurately predicted with variables that are not measured well

what will happen if validity (r^2) with a different sample doesn't decrease much?

predictors generalize well

what did sex include in the civil rights act?

pregnancy, childbirth, or related medical conditions

important to the success of any training program

principles of instructional design, basic principles of learning, characteristics of trainee and trainer

Selection

process of deciding which applicants to hire, promote, or move to other jobs -to do it well, you need to determine the criteria that reflect job success/performance and find ways of predicting such criteria

job analysis

process of defining the key tasks or duties and the knowledge or skills required to perform a specific job

recruitment/selection

process of recruiting and selecting college applicants is very similar to selecting job applicants, need to advertise available job openings to attract applicants, then need to select who gets hired using a variety of predictors

americans with disabilities act (ADA)

prohibits discrimination against qualified individuals with disabilities in employment decisions and a qualified person can perform the essential functions of the job, if necessary, with the aid of reasonable accommodations

personnel decisions

promotions, firing, transfers, and raises

feedback intervention theory (FIT)

proposes that feedback is most effective when targeted at the task rather than at the self (but evidence is lacking) - receiving feedback has a moderate effect (and in some cases actually made performance worse)

revised regulations (2009) of FMLA

provide for extended leave for families of wounded soldiers, clarify what constitutes serious health conditions, require enhanced communication between employees and employers

functions of providing feedback

provides information (so adjustments can be made to behaviors)

what are the functions of providing feedback?

provides information (so adjustments can be made to behaviors), makes the learning process more interesting and increase motivation to learn, leads to goal setting for improving performance

validity of testing

purpose of testing is to predict future job performance, want to maximize predictive validity, predictive validity is a type of criterion-related validity

supervisor-subordinate relationship

quality of relationship between raters and ratees can impact performance appraisals, leader-member exchange theory (LMX)

Criterion-related Validity

quantitative type of validity reflecting whether a measurement accurately predicts some criterion (e.g., job performance) -past experience is a valid predictor of job performance if knowing past experience allows us to determine who will perform better

Integrity Tests

questionnaire items about applicants' attitudes about, and past experiences with, such things as dishonesty, theft, fraud, etc.

two types of sexual harassment

quid pro quo harassment and hostile work environment harassment

validity of interviews

r = .14

validity co-efficients for work samples close to?

r = .50

field experiments

random assignment and manipulation of independent variable in a naturally occurring, real-world setting (less control over the environment/setting)

what two factors are used in lab experiments to increase control?

random assignment and manipulation of independent variables

what is a selection battery with zero validity equivalent to?

random hiring

3 parts of how "spread-out" the data is

range, variance, standard deviation

satisfaction with the process

ratees reported less anger and more fairness when rater provides justifications for ratings, performance appraisals that are seen as unfair are related to ratees' feelings of emotional exhaustion

social perceptions

rater beliefs about how other people function are another source of influence

disadvantages of checklists

rater errors such as halo, leniency, and severity are quite frequent

halo error

rater's tendency to use global evaluation of a ratee in making dimension-specific ratings

forced choice checklist

raters choose two items from a group of four that best describe the employee, purpose is to reduce rater bias/distortion

reasons for low interrater reliability

raters don't often agree and the time frames may be different

recency error

raters heavily weight their most recent interactions/observations of ratee

global evaluation-specific ratings

raters overall opinion influences each specific rating, if the rater has an overall positive opinion they may give good ratings on all dimensions, and if the rater has an overall negative opinion they may overlook specific dimensions that the ratee does with excellence

similar-to-me error

raters tend to give more favorable ratings to ratees who are like themselves

why does leniency happen?

raters want to look good, be liked, keep peace in the workplace or personality traits of the raters

what is one of the biggest issues with performance appraisals?

ratings are made by people, and people are biased, self-serving, and have limited cognitive abilities

personality traits of the raters

ratings by "agreeable" personalities are more lenient than "conscientious" personalities

what happens with distributional errors?

ratings do not discriminate among employees (range restriction) and employee morale may decrease

Inter-rater Reliability

ratings made by different individuals should correlate highly

graphic rating scales

ratings made using a visual depiction of a scale, raters judge how much of each particular trait the ratees possess, or where on the dimension the ratees fall with respect to organizational expectations

upward appraisal ratings

ratings provided by individuals whose status in the organizational hierarchy is below that of ratees

3 types of learner characteristics

readiness, self-efficacy, motivation to learn

jingle-jangle fallacy

refers to the erroneous assumption that two different things are the same because they bear the same name OR that two identical or almost identical things are different because they are labeled differently

emotional intelligence (EI)

refers to the ability of individuals to generate, recognize, express, understand, and evaluate their own (and others') emotions; may help predict aspects of job performance beyond cognitive ability; research has shown that EI is correlated with task performance as well

organizational politics

refers to how some individuals may deliberately try to enhance their status or position in the organization, may involve deliberate attempts to influence others, gain favoritism, or sabotage co-workers, often a source of stress and dissatisfaction among employees

instructional design

refers to how the training is actually conducted, includes teaching and evaluation methods, and good instructional design has clear goals and teaches in ways that are effective for the learning goals

situational judgement tests (SJTs)

refers to pencil-and-paper tests or video scenarios that measure applicants' judgment in work settings; video versions have been found to be better predictors than pencil-and-paper tests, and both are better when based on job analysis

Construct Validity

reflects whether a measurement instrument/process accurately measures the underlying construct it purports to measure -assessed qualitatively ("Does this measure reflect the construct we're trying to measure?") -assessed quantitatively through convergent and divergent validity

speed test

relatively easy items, short time limit, individual must complete as many items as possible before time expires

learning

relatively permanent change in behavior that occurs as a result of experience or practice

Reliable

repeatedly gives the same or similar results when applied to the same quantity

performance tests

require manipulation of object/piece of equipment, can be used to test physical skills that can't be measured with pencil and paper tests

criticism of PAQ

requires a high reading level, poorly suited for jobs in management, too abstract

social desirability effects

respondent answers may be influenced by interviewer

median

score in the middle of a distribution
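For instance (with made-up numbers), the median resists the outliers that drag the mean, which is why the average alone can misrepresent a skewed group:

```python
import statistics

# Hypothetical salaries (in thousands) with one extreme outlier
salaries = [40, 42, 45, 47, 200]

avg = statistics.mean(salaries)    # pulled upward by the outlier
mid = statistics.median(salaries)  # the middle score, unaffected
```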

Convergent Validity

scores on the measure are related to other measures of the same construct

Equivalent Forms Reliability

scores on two different but equivalent versions of the same test, given to the same people, should correlate highly

Incremental Validity

seeks to answer whether a new test adds information beyond what might be obtained with simpler, already existing methods

Hawthorne Studies

studies conducted during the 1920s and 1930s that suggested the importance of the informal organization. It was concluded that work efficiency is influenced by work environment

what does SME stand for?

subject matter experts

what happens when LMX is high?

subordinates receive favorable performance appraisals regardless of their objective performance criteria; evidence that being liked by the boss leads to a bit of favoritism in performance appraisals

subordinates and organizational politics

subordinates use impression management to get favorable results (for instance, acting differently when the boss is around)

what is KSAO required for?

successful job performance

reaction criteria

successful performance appraisals should be mindful of how both raters and ratees will react to the process; often neglected when designing performance appraisals

synthetic validity

suggests job components common across jobs are related to the same KSAs or human attributes (we can use large databases to estimate validity without having to do studies of specific examples) - debate over whether this approach is superior or inferior to existing approaches

statistic

summarizes in a single number the values, characteristics, or scores describing a series of cases

inclusive of many perspectives example

supervisor vs. customers

recent trend in organizations is the increased frequency of telework

supervisors do not have the opportunity to observe how work is completed, supervisor must rely on indirect info to evaluate performance (indirect info isn't as valued as direct observation), potential for performance appraisals of teleworkers to be more biased and/or less accurate

low interrater reliability

supervisors, co-workers, and customers may make very different evaluations of the same person because they may use very different criteria for their judgements

performance appraisal

systematic review and evaluation of job performance and the provision of performance feedback

what does psychology seek to do?

systematically study human behavior

criticism of CMQ

takes about 3 hours to complete

cons of interviews

takes more time and money to administer and social desirability effects

task inventory approach

task statements generated by subject matter experts (SMEs)

examples of tasks

the tasks required to teach psych 2402

essential functions

tasks that are significant and meaningful aspects of the job

goal of training and development

teach employees KSAOs they need to perform well

fundamental attribution error (FAE)

tendency to explain other people's behavior in terms of their personality or dispositions; especially true for negative behaviors; may overestimate how much individuals are to blame for poor outcomes; may underestimate the role of external influences

leniency

tendency to give ratings that are too positive or forgiving

spatial ability

tests the ability to understand geometric relations, such as visualizing objects and rotating them spatially to form a particular pattern

Test of Mechanical Ability

test of the ability to identify, recognize, and apply mechanical principles -used for jobs requiring operating or repairing machinery and for manual trades

Test of Physical Ability

test used for jobs that require physical strength (e.g., fire-fighter, lifeguard, etc.) -highly likely to cause an adverse impact against women

overview of reliability

test-retest, parallel forms, interrater, and internal consistency

specific cognitive ability tests

testing specific forms of knowledge

work samples

tests that attempt a standardized mini version of job tasks by creating situations that resemble performance in the job environment, or by having applicants provide an example of how they would handle a job-related problem

validity generalization

testing whether the validity of a predictor generalizes to other organizations or other job types

Mental/Cognitive Ability Tests

tests of general mental/cognitive ability -advantages: high validity, inexpensive and easy to administer -disadvantage: cognitive ability scores sometimes correlate with parents' socio-economic status

who set up guidelines to determine if organizations were being discriminatory in their hiring and promotions based on race and/or gender?

the EEOC

Job Skills/Work Sample Test

the applicant is asked to perform actual job-related tasks to test skills -high validity predictor of job performance

Industrial Organizational Psychology

the application of psychological concepts and methods to optimizing human behavior in workplaces; how work-related cognition, affect, and behavior are all connected

Scientific Management

the application of scientific principles to increase efficiency in the workplace

Bandwidth-fidelity dilemma

the assessment of gain or loss in analytical and predictive power from using broad-band versus narrow-band personality assessments.

Causation

the belief that events occur in predictable ways and that one event leads to another

Validity Coefficient

the correlation coefficient (r) between a predictor and a criterion
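A hand-computed sketch of this correlation (the predictor and criterion data below are invented purely for illustration):

```python
# Hypothetical predictor scores (e.g., a selection test) and criterion
# scores (e.g., later job performance ratings) for five applicants
predictor = [55, 60, 70, 80, 85]
criterion = [2.0, 3.0, 3.5, 4.0, 4.5]

n = len(predictor)
mean_x = sum(predictor) / n
mean_y = sum(criterion) / n

# Pearson r: co-deviation of the two variables divided by the
# product of their individual spreads
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(predictor, criterion))
sx = sum((x - mean_x) ** 2 for x in predictor) ** 0.5
sy = sum((y - mean_y) ** 2 for y in criterion) ** 0.5
validity_coefficient = cov / (sx * sy)
```

A coefficient near 1.0 (as here) would indicate a strong predictor; a battery with r = 0 is equivalent to random hiring.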

validity

the degree that a test accurately measures the variable of interest - are IQ tests true measures of what people think of as intelligence & are personality tests good predictors of people's behavior?

what do I/O psychologists apply learning principles to?

the design and implementation of training programs

Content Validity

the extent to which a test samples the behavior that is of interest

Reliability

the extent to which a test yields consistent results, as assessed by the consistency of scores on two halves of the test, on alternate forms of the test, or on retesting

Construct Validity

the extent to which variables measure what they are supposed to measure

what are ratings impacted by?

the goals of the performance appraisal and the accountability of the rater

predictor variable

the independent variable in a correlational study that is used to predict the score on another variable

what is most research driven by?

the deductive process (starting with theory), but it is also possible to start with data (the inductive process)

Unstructured Interview

the interviewer asks (essentially) whatever questions come to mind, in whatever order -doesn't have high levels of reliability and validity as a predictor of job performance -tends to be unsystematic, allowing bias and subjectivity to have a big impact

what does a discrepancy (gap) between actual performance and an ideal, a norm, a minimum, a desired state, an expected state lead to?

the need for training

examples of psychomotor tests

the Purdue Pegboard Test and vision and hearing tests

what happens with a smaller selection ratio?

the potential utility of the selection battery is greater - emphasizes the importance of recruitment for increasing the number of applicants so organization has more options

Employee Recruitment

the process of attracting potential workers to apply for jobs

Study Design

the program that directs the researcher along the path of systematically collecting, analyzing, and interpreting data; the guide or path for the study

R squared

the proportion of the total variation in a dependent variable explained by an independent variable

Test-retest Reliability

the same test given to the same people at two different times should correlate highly

what will happen if validity (R^2) shrinks by a lot?

then there was something about the original sample or organization that doesn't generalize (situational specificity)

business necessity defense that states the selection battery was entirely job-related and treats both minority and majority applicants fairly

they are accepting that the plaintiff's statistics are correct, but that other factors are leading to the differences (for example, a limited pool of applicants)

what happens to employees low on FO?

they are not motivated by a high FE environment

why are halo errors problematic?

they can't accurately identify areas that need improvement

what happens to employees high on FO?

they tend to seek feedback more often than those low on FO

extraneous variable

things you didn't mean to manipulate that may influence measurements, confounds

who did error training reduce feelings of self-efficacy for?

those high in conscientiousness

disadvantages of BARS

time and money intensive and no evidence that it is more accurate than other formats

what is archival research able to minimize?

time developing measures and collecting data

disadvantages of employee comparison methods

time intensive and not well received by raters (paired comparison), or ratees (forced distribution)

feedback (knowledge or results)

timely and useful feedback is important, work best when delivered immediately after behavior, frequent feedback improves performance over time, both positive and negative feedback can be beneficial (as long as it is delivered with skill AND the learner is open to feedback)

use of meta-analysis

to combine the results of multiple studies to arrive at the best estimate of the true relationship

what are the 4 goals of science?

to describe, explain, predict, and control/influence

what is a control needed for?

to make a predictor

goal of selection battery

to maximize hits and correct rejections and to minimize misses and false alarms

participation

to what extent is the ratee involved in the process? can include self-assessment or expressing ideas during the appraisal session/interview, strong positive relationship between employee participation in the process and perceptions of fairness and justice

criticisms of task-oriented techniques

too narrowly focused on the tasks for a particular job (focus on the details; may miss the big picture), may miss similarities between different job types

R^2

total amount of outcome explained by multiple predictors

how to teach specific KSAOs that will contribute to the organizations goals?

train

when is training needed?

training should be based on a needs assessment, due to a deficiency in knowledge, skill, or ability

validity "shrinkage"

typically validity will decrease a bit for different samples

grutter vs. bollinger (2003)

university of michigan law school sued by a caucasian female who argued she was denied admission due to racial preferences; the supreme court ruled for the defendant, citing that diversity is a compelling interest for the law school and that the school considered elements of diversity on an individual basis (did not make race the defining feature of its decision)

gratz vs. bollinger (2003)

university of michigan sued by two caucasians who argued they were denied admission because preference was shown to "underrepresented minorities"; the supreme court ruled for the plaintiffs because awarding global points ignored individual differences and qualifications

age discrimination in employment act (ADEA)

unlawful to discriminate against a person 40 years or older because of age with respect to any employment-related decision

examples of task oriented approach for a professor

writing syllabi, lecturing, etc.

problems caused by inadequate performance appraisals

wrong people are promoted or fired, legal problems, disgruntled employees

is psychology a science?

yes

example of readiness

you can't learn calculus if you don't understand algebra

example of concurrent validity

your current GPA predicting how well you will do in a class this semester

