Term
Definition
Definition of Performance Management

Term
Definition
Well-designed PA system should be based on JA

Term
Definition
Only 1 in 10 employees think their firm's appraisal system helps them improve their performance

Term
Definition
Critical Incidents Technique

Term
Austin & Crespin (2006); Austin & Villanova (1992)
Definition
The criterion problem: the difficulties involved in conceptualizing and measuring performance, which is multidimensional and must be appropriate for different purposes
We are often unclear which specific aspect of JP is the target criterion, or, if the criterion is overall JP, what the criterion measure is actually tapping!

Term
- Choice of dimensions depends on how broadly one initially construes the conceptual criterion
- Dimensions of criteria are context dependent
- Failing to articulate the values involved in decisions to include some measures of performance criteria while excluding others
Definition
3 reasons that the criterion problem may exist

Term
Definition
The "Ultimate Criterion": a hypothetical construct that describes the full domain of performance (and includes everything that ultimately defines success on the job)

Term
Definition
Criterion contamination, deficiency, and relevance

Term
Definition
Maximal v. Typical performance

Term
Steps in Criterion Development (Guion, 1961)
Definition
- Job analysis
- Development of measures of actual behavior relative to expected behavior, as identified in the JA
- Identification of the criterion measures (dimensions) underlying such measures via FA
- Ensure reliability and construct validity
- Determine predictive validity of each IV for each one of the criterion measures/dimensions

Term
Definition
Designed JP model for U.S. Navy (narrow set of jobs):
- Downtime behaviors
- Task performance
- Interpersonal behaviors
- Destructive behaviors (e.g., accidents)

Term
Definition
Conducted meta and found that a general factor of performance accounts for a great deal of the relevant true score covariances among observed scores
- Average true score between-rater correlation between dimensions when controlling for halo: .54 (general factor accounts for 60.3% of variance)
- Not controlling for halo: supervisors had 33% inflation and peers had 63% inflation

Term
Multiple Factor Model (Campbell, 1990; 1993)
Definition
- Job-Specific Task Proficiency*
- Non-Job-Specific Task Proficiency
- Written and Oral Communication
- Demonstrating Effort*
- Maintaining Personal Discipline*
- Facilitating Peer and Team Performance
- Supervision/Leadership
- Management/Administration
(* = proposed to be present in every job)

Term
Adaptive Performance (Pulakos et al., 2000)
Definition
8 dimensions of adaptability on the job:
- Handling emergencies/crisis situations
- Handling work stress
- Solving problems creatively
- Dealing with uncertain/unpredictable situations
- Learning work tasks, technologies, and procedures
- Demonstrating interpersonal adaptability
- Demonstrating cultural adaptability
- Demonstrating physically oriented adaptability

Term
Definition
Conducted an EFA and found that Pulakos' adaptive performance taxonomy is actually one factor
Also found that the HPI dimensions of adjustment and ambition were significant predictors of overall adaptive performance

Term
Definition
Task Performance, OCB, CWB
Also, they do not acknowledge source variance as valid, because they argue that interrater reliability is the basis of our science; correction for unreliability in ratings will get us closer to the true score!

Term
Borman & Motowidlo (1993; 1997)
Definition
Task and contextual performance

Term
Definition
OCB is performance that supports the social and psychological environment in which task performance takes place (NOT extra-role behavior)

Term
Definition
Behaviors that run counter to the organization; can be conceptualized as CWB-I vs. CWB-O

Term
Definition
OCB-CWB correlation: -.32
OCB and CWB are relatively independent aspects of performance (r = -.32 implies only about 10% shared variance)

Term
Definition
TP and CWB are weighted highest (either equally or one more than the other)

Term
Definition
In team-based cultures, contextual performance is considered/weighted more than TP
Support for Borman's proposition of different weights across sources?

Term
Definition
Gender stereotypic prescriptions matter (OCBs are not "optional" for women like they are for men)

Term
Borman, White, & Dorsey (1995)
Definition
Effect of interpersonal factors in ratings; peers are influenced by interpersonal factors when providing ratings, but supervisors are not

Term
Definition
Lab study on the effect of OCB; OCB contributed significantly to PA ratings (in addition to TP)

Term
Definition
Lit review; found 7 dimensions of OCB:
- Helping behaviors
- Sportsmanship
- Organizational loyalty
- Organizational compliance
- Individual initiative
- Self-development
- Civic virtue

Term
Definition
Used EFA, MDS, and cluster analysis to find a 3-factor model of OCB:
- OCB-I
- OCB-O
- OCB-Task/Job

Term
LePine et al. (2002) Meta
Definition
Showed strong relationships among OCB dimensions; notes that we should think of OCB as a latent/aggregate construct

Term
Stewart & Nandkeolyar (2006)
Definition
Study of sales reps: found evidence for within-person fluctuations in JP, which depended on situational opportunities (e.g., referrals received)

Term
Barrick & Zimmerman (2009)
Definition
Voluntary turnover and JP: -.25

Term
- Turnover
- Absences
- Sales
- Production rates
- Job and salary levels
- Work samples
- Job knowledge tests
- Disciplinary cases
Definition
Common objective measures of JP

Term
Definition
It is best to use a combo of subjective and objective measures

Term
Definition
Examined equivalence of subjective and objective measures of company performance
- Convergent/discriminant validity supported; equivalent relationships with a range of IVs
- Should use a combo of subjective and objective measures whenever possible

Term
Definition
Average relationship between objective and subjective measures: .39

Term
Definition
Computer Adaptive Rating Scales (CARS): paired-comparison format using an interactive, computer adaptive format to select items (see the toy sketch below)
- Reliable, lower standard error than other formats, high validity, more accurate

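A minimal sketch of the adaptive paired-comparison idea, not Borman et al.'s actual algorithm: the statement bank, scale values, starting estimate, and update rule below are all invented for illustration.

# Toy computer adaptive paired-comparison rating (Python).
# Each behavioral statement is pre-scaled at an effectiveness level (1-7).
# After each choice, the estimate moves toward the endorsed statement's
# level, and the next pair is drawn near the updated estimate.

items = {  # hypothetical statement bank: text -> scaled effectiveness
    "Misses deadlines without notice": 1.5,
    "Completes routine tasks with reminders": 3.0,
    "Meets all deadlines reliably": 4.5,
    "Anticipates problems before they arise": 6.0,
}

def next_pair(estimate, bank):
    # Present the two statements scaled closest to the current estimate.
    ranked = sorted(bank.items(), key=lambda kv: abs(kv[1] - estimate))
    return ranked[0], ranked[1]

def update(estimate, chosen_level, step=0.5):
    # Move the effectiveness estimate toward the endorsed statement.
    return estimate + step * (chosen_level - estimate)

estimate = 4.0  # start at the scale midpoint
(stmt_a, level_a), (stmt_b, level_b) = next_pair(estimate, items)
estimate = update(estimate, level_a)  # suppose the rater endorses stmt_a
print(round(estimate, 2))  # 4.25
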
Term
Definition
Conducted a field test with the Canadian Forces; found CARS had about a 20% lower standard error than the BARS format

Term
Definition
Estimated that the variance accounted for in psychometric quality by rating format was as little as 4% (called for a moratorium on rating format research)

Term
Definition
Peer ratings of PA may be most useful in the team setting

Term
Definition
2 assumptions about 360 feedback:
- Multiple sources cover different portions of the criterion space
- Having additional rating sources provides incremental validity over a single source
Hypotheses:
- Raters at different levels use different dimensions or weight dimensions differently
- Raters at different levels actually observe different performance

Term
Definition
Found evidence for incremental validity against external criteria
Found that peer and subordinate ratings provide incremental validity in predicting objective performance over supervisor ratings
Peer and subordinate ratings correlated positively with getting-along variables (e.g., Agreeableness); supervisor ratings correlated positively with getting-ahead variables (e.g., GMA)

Term
Definition
Says it is wrong to ignore source effects

Term
Definition
Interrater reliability of PA ratings:
- Two supervisors: .50
- Two peers: .37
- Two subordinates: .30
- Supervisor and peer: .34
Notes that you can increase reliability by increasing the number of raters (see the worked example below)

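The claim about adding raters follows the Spearman-Brown prophecy formula; a worked example using the single-pair supervisor value of .50 above:

r_kk = k*r / (1 + (k - 1)*r)

With r = .50 and k = 3 supervisors: r_kk = (3)(.50) / (1 + (2)(.50)) = 1.50 / 2.00 = .75
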
Term
Definition
Feedback from individuals other than supervisors may provide new info; can also reinforce/support supervisor feedback
Minimal evidence that 360 ratings enhance the bottom line for orgs that use them

Term
Definition
Supervisor ratings may be more reliable than peer ratings. Intrarater reliability is in all cases higher than interrater reliability; use interrater reliability in corrections (see the worked example below)

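The correction in question is the standard correction for attenuation; a worked example correcting for criterion (rating) unreliability only, using the .50 supervisor interrater value above and an invented observed validity of .25:

r_corrected = r_observed / sqrt(r_yy) = .25 / sqrt(.50) ≈ .35
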
Term
Definition
Introduced concept of Dynamic Criteria

Term
Potential reasons why criteria are dynamic (Steele-Johnson et al., 2000)
Definition
- Changes over time in validity coefficients (changing task model, changing subject model)
- Changes in rank ordering of scores on the criterion

Term
Definition
Distinguishes between the transition stage (e.g., training stages; GMA is most important here) and the maintenance stage (e.g., when tasks/procedures are mastered; dispositions like motivation are most important here)

Term
Definition
Situational strength: strong situations are powerful to the degree that they lead everyone to construe events in the same way

Term
Definition
Work samples are good predictors of JP (.33) and have incremental validity over GMA (.06)

Term
Definition
Cognitive Process Model:
- Observing performance
- Storing info about performance
- Retrieving info about performance from memory
- Translating retrieved info into ratings
Throughout this process, raters may use schemata to help interpret/organize their experiences

Term
Definition
People often use stereotypes/schemata

Term
Common Rater Heuristics (Borman, 1991)
Definition
Schemata: categories or frames of reference that help raters interpret/organize their experience; reference concepts used by raters to make judgments about groups
Types of schemata:
- Prototype: Highlights the modal/typical features of a category; a good example of a schema (e.g., Joe is a perfect example of a sports nut)
- Stereotype: Beliefs about a certain group (e.g., all Joes are sports nuts)
- Script: Events or event sequences that are remembered as being representative of a person's actions

Term
Definition
Implicit Personality Theory (IPT):
- Entity theorists: anchor ratings on first impressions (think personal attributes are largely a fixed entity)
- Incremental theorists: adjust their ratings to reflect observed performance episodes (think personal attributes are relatively malleable)
Entity IPT can be changed through training!

Term
Definition
Potential errors in PA:
- Halo error
- Distributional errors: leniency, severity, central tendency
- "Similar to me" error
- First impression error
- Recency/primacy
- Friendship
- Systematic distortion (errors based on assumptions about which behaviors should go together instead of the actual covariation of behaviors)

Term
Additional factors/cues influencing the rating process
Definition
- Rater personality (Agreeableness: more lenient; Conscientiousness: more accurate)
- Rater mood
- Rater motivation
- Race
- Gender
- Similarity
- Accountability
- Rating purpose

Term
Definition
Accountability and consensus enhanced rater accuracy

Term
Pritchard et al. (2008) Meta
Definition
ProMES: intervention aimed at enhancing productivity in work units within orgs through performance measurement and feedback (see the sketch below)
- Utilized goal setting, rewards, and feedback to increase motivation and performance
- Evidence shows significant gains in overall productivity (d = 1.44; most effective for technical jobs, d = 2.15)

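A minimal sketch of the ProMES scoring logic (indicators mapped through piecewise-linear contingency functions onto a common effectiveness scale, then summed); the indicator names, levels, and contingency points here are invented, since real contingencies are built with the work unit itself:

# Toy ProMES-style effectiveness computation (Python).

def contingency(points):
    # Piecewise-linear map from indicator level to effectiveness (-100..+100).
    pts = sorted(points)
    def f(x):
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

# Hypothetical contingencies a work unit might develop with a facilitator:
units_per_hour = contingency([(5, -100), (10, 0), (15, 80), (20, 100)])
defect_rate = contingency([(0.00, 100), (0.05, 0), (0.10, -100)])

# Overall effectiveness for one feedback period:
overall = units_per_hour(12) + defect_rate(0.03)
print(overall)  # 32.0 + 40.0 = 72.0
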
Term
Woehr & Huffcutt (1994) Meta
Definition
Examined rater training (rater error training, performance dimension training, behavior observation training, and FOR training)
Found FOR training to improve accuracy (not so much halo)
- Raters calibrate their understanding of the scales
- Effect size of .83 compared to no-training group

Term
Definition
Productivity is the ratio of output relative to input into some productive process (e.g., units produced per labor hour)
- It is the performance measure of a SYSTEM and not of an employee
- Notes that this is NOT the ultimate criterion

Term
Definition
Six components of a legally defensible PA system:
- Perform JA to determine performance dimensions
- Develop rating form to assess dimensions
- Train raters
- Have higher management review ratings and allow EEs to appeal their evals
- Document performance and maintain detailed records
- Provide assistance and counseling to poor-performing EEs prior to taking action against them

Term
Definition
Perceptions of fairness in the PA process are very important

Term
Definition
Total turnover leads to decreased org performance (-.15)
Worse for voluntary turnover and reduction-in-force turnover

Term
Definition
Provides recommendations to improve the quality of PA systems:
- PA systems must be congruent with org culture/principles
- The primary purpose of PA should be to enhance EE performance
- Modification of the current PA system should be done with active involvement of those affected
- The focus of appraisal should be on behavior
- Workers must be engaged/measured on both TP and OCB behaviors
- Workers should be judged by absolute standards
- Managers should be responsible for appraisals

Term
Definition
Invoked the lexical hypothesis; examined all JP measures used in published articles over the years and categorized them into 25 summary dimensions. Correlations from studies using these dimensions were then meta-analyzed to estimate the true score correlations between dimensions. Finally, FA was used to analyze these true score correlations and derive a set of 10 JP categories to summarize overall JP (see the sketch below). Dimensions include interpersonal competence, admin competence, quality, productivity, effort, job knowledge, leadership, compliance/acceptance of authority, communications competence, and an overall JP dimension.

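A minimal sketch of that final analytic step (factor-analyzing a meta-analytic true score correlation matrix); the 3x3 matrix is invented, and a simple eigendecomposition stands in for whatever FA routine the authors actually used:

# Principal-components look at a hypothetical true score correlation matrix.
import numpy as np

R = np.array([[1.00, 0.54, 0.48],
              [0.54, 1.00, 0.51],
              [0.48, 0.51, 1.00]])

eigvals, eigvecs = np.linalg.eigh(R)   # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]      # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings on the first (general) factor and the variance it accounts for
# (eigenvector sign is arbitrary, so take absolute values):
loadings = np.abs(eigvecs[:, 0]) * np.sqrt(eigvals[0])
print(loadings)
print(eigvals[0] / R.shape[0])  # proportion of total variance
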
Term
Definition
Finds that explaining the purpose of electronic performance monitoring increases perceptions of interactional justice, thereby increasing trust in the manager

Term
Definition
Feedback for PA:
- Cover both positives and negatives
- Discuss no more than 2 limitations in any 1 interview
- Use a participative approach
- Maintain a good level of communication between appraiser and appraisee on a day-to-day basis outside of the appraisal

Term
Definition
Developed an 18-factor solution of job performance for managerial jobs
Had SMEs sort critical incidents into a set of dimensions that they felt adequately captured the criterion space. Then an indirect similarity matrix was created, noting the extent to which the dimensions were placed in similar or opposing dimensions by raters. Lastly, an EFA was used to empirically (inductively) derive 18 meaningful dimensions of JP from the indirect similarity matrix.

Term
Definition
Train raters to encode, store, and recall JP episodes.
Ratings = rating biases (54%) + actual JP (25%) + error (8%)

Term
Definition
Reward systems are often ineffective in that the types of behaviors rewarded are often those which the person or entity doing the rewarding is trying to discourage, while the behavior desired is not being rewarded at all (e.g., pubs vs. quality teaching)

Term
Definition
Feedback interventions improved performance on average (d = .41)
However, over 1/3 of the time, feedback interventions decreased performance. Notes that FI effectiveness decreases as attention moves up the hierarchy, closer to the self and away from the task. Also, praise FIs had negative effects.
