Term
|
Definition
Personality is a unique constellation of psychological traits and states |
|
|
Term
Rothstein & Goffin (2006) |
|
Definition
VG can be much more complicated with personality than with GMA; Faking could get in the way of validity
CAT may reduce testing time by up to 50% |
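Aside: a minimal Python sketch of why CAT can cut testing time — each item is chosen to be maximally informative at the current trait estimate, so fewer items are needed for a given precision. The 2PL item bank, parameter values, and grid-based update here are hypothetical simplifications, not any specific operational CAT.

import numpy as np

rng = np.random.default_rng(4)
n_items = 100
a = rng.uniform(0.8, 2.0, size=n_items)   # discrimination parameters (toy bank)
b = rng.normal(size=n_items)              # item locations

def prob(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    p = prob(theta, a, b)
    return a ** 2 * p * (1 - p)

true_theta, theta_hat = 1.0, 0.0
available = np.ones(n_items, dtype=bool)
used, responses = [], []
grid = np.linspace(-3, 3, 121)

for _ in range(10):                                  # a short adaptive form
    fisher = np.where(available, info(theta_hat, a, b), -np.inf)
    j = int(np.argmax(fisher))                       # most informative remaining item
    available[j] = False
    used.append(j)
    responses.append(rng.random() < prob(true_theta, a[j], b[j]))
    # Grid-based maximum-likelihood update of theta_hat from the responses so far.
    loglik = np.zeros_like(grid)
    for k, r in zip(used, responses):
        p = prob(grid, a[k], b[k])
        loglik += np.log(p) if r else np.log(1 - p)
    theta_hat = grid[np.argmax(loglik)]

print(round(float(theta_hat), 2), "vs. true theta", true_theta)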
|
|
Term
|
Definition
Trait activation theory - personality traits are expressed as responses to trait relevant situational cues |
|
|
Term
Motowidlo et al.'s (1997) model of the determinants of task and contextual performance |
|
Definition
Personality variables lead to contextual habits, contextual skill, and contextual knowledge, which in turn lead to contextual performance.
Personality variables also relate to task habits, which in turn lead to task performance. |
|
|
Term
|
Definition
Personality inventories don't usually exhibit subgroup differences. However, applicant pool characteristics (SR, SDs, and mean differences among groups) have more influence on AI than characteristics of the selection system (e.g., validity) |
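Aside: a minimal sketch (made-up numbers, assuming normally distributed scores and a single top-down cutoff) of why pool characteristics dominate AI — the AI ratio below is a function of only the subgroup d and the selection ratio; the test's validity never enters the calculation.

from scipy.stats import norm

def ai_ratio(d, selection_ratio, minority_prop=0.3):
    # Majority scores ~ N(0, 1); minority scores ~ N(-d, 1).
    lo, hi = -5.0, 5.0
    for _ in range(60):                      # bisect for the cutoff giving the desired SR
        cut = (lo + hi) / 2
        pass_rate = (1 - minority_prop) * norm.sf(cut, 0, 1) + minority_prop * norm.sf(cut, -d, 1)
        lo, hi = (cut, hi) if pass_rate > selection_ratio else (lo, cut)
    return norm.sf(cut, -d, 1) / norm.sf(cut, 0, 1)   # minority pass rate / majority pass rate

print(ai_ratio(d=0.1, selection_ratio=0.2))   # small d (personality-like): ratio roughly 0.9
print(ai_ratio(d=1.0, selection_ratio=0.2))   # large d (GMA-like): ratio roughly 0.2, below the 4/5ths rule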
|
|
Term
|
Definition
Pers chars are valid predictors of perf in virtually all occupations; They do not result in AI; using well-developed pers measures is a way to promote social justice and increase org productivity |
|
|
Term
Ones et al. (2007); Tett & Christiansen (2007) |
|
Definition
It is counterproductive for the science and practice of I/O psychology to just write off the whole domain of individual differences in personality |
|
|
Term
Rothstein & Goffin (2006) |
|
Definition
30% of American companies use pers tests for selection |
|
|
Term
|
Definition
- Pers tests have low validity and have changed little over time
- Personality only accounts for about 5% of JP variance
- Can be faked by applicants
- Applicants report the least favorable attitudes toward pers and integrity tests compared to other selection procedures |
|
|
Term
Murphy & Dzieweczynski (2005) |
|
Definition
Say the theories linking personality constructs and JP are often vague and unconvincing |
|
|
Term
|
Definition
Did a big lit review and decided that personality shouldn't be used for selection unless its validity has been specifically and competently determined for that specific situation |
|
|
Term
|
Definition
- Says that behavior patterns may be stable but still variable depending on the situation (situational specificity)
- Argue that behaviors of individuals were not sufficiently consistent across time and situations to allow valid predictions by means of personality measures |
|
|
Term
|
Definition
"Most behavioralists will no more be persuaded by data supporting the validity of personality measurement than creationists will be persuaded by data supporting evolutionary theory" |
|
|
Term
|
Definition
FFM; used a lexical approach: gathered the socially relevant and salient personality characteristics encoded in language and factor analyzed them to reduce the number of dimensions to 5
E and N are most robust, followed by C |
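Aside: a toy sketch of the lexical/factor-analytic logic on synthetic data (the simulated "adjective" ratings and loadings are placeholders, not real lexical data) — ratings on many trait descriptors are factor analyzed and reduced to a handful of broad factors.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_adjectives, n_factors = 500, 60, 5

latent = rng.normal(size=(n_people, n_factors))                   # 5 latent traits
loadings = rng.normal(scale=0.7, size=(n_factors, n_adjectives))
ratings = latent @ loadings + rng.normal(size=(n_people, n_adjectives))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(ratings)     # person-level factor scores
print(fa.components_.shape)            # (5, 60): loadings of 60 "adjectives" on 5 factors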
|
|
Term
|
Definition
Recent Personality Annual Review
Argue that the B5 confounds constructs, and merges variables that are too heterogeneous; Say the B5 is inadequate for representing lots of terms (e.g., religiosity, fashionableness, prejudice) |
|
|
Term
|
Definition
Seven Factor Model; lexical approach that included moods and emotional activity
5 factors correspond to B5, and also had 2 additional factors: positive valence and negative valence |
|
|
Term
|
Definition
HEXACO Model: Extends the FFM to include a sixth factor (Honesty-Humility) |
|
|
Term
Facets of Extraversion (Costa & McCrae, 2002 and then DeYoung et al., 2007) |
|
Definition
- Warmth, Gregariousness, Assertiveness, Activity, Excitement Seeking, Positive Emotions
- Enthusiasm, Assertiveness |
|
|
Term
Facets of Conscientiousness (Costa & McCrae, 2002 and then DeYoung et al., 2007) |
|
Definition
- Competence/Self-Efficacy, Order/Orderliness, Dutifulness, Achievement-Striving, Self-Discipline, Deliberation
- Industriousness, Orderliness |
|
|
Term
Facets of Agreeableness (Costa & McCrae, 2002 and then DeYoung et al., 2007) |
|
Definition
- Trust, Compliance/Morality, Altruism, Straightforwardness/Cooperation, Modesty, Tender-mindedness/Sympathy
- Compassion, Politeness |
|
|
Term
Facets of Neuroticism (Costa & McCrae, 2002 and then DeYoung et al., 2007) |
|
Definition
- Hostility/Anger, Depression, Self-Consciousness, Impulsiveness/Immoderation, Vulnerability
- Volatility, Withdrawal |
|
|
Term
Facets of Openness (Costa & McCrae, 2002 and then DeYoung et al., 2007) |
|
Definition
- Fantasy/Imagination, Aesthetics/Artistic Interests, Feelings/Emotionality, Actions/Adventurousness, Ideas/Intellect, Values/Liberalism
- Intellect, Aesthetic Openness |
|
|
Term
Dudley et al. (2006) Meta |
|
Definition
Examined 4 C facets and found somewhat higher relationships between the facets and task performance than between overall C and task perf (.22 vs .16). The difference was largely due to a single facet, dependability (r = .46)
Showed that overall performance and CWB was predicted by global conscientiousness and dependability; task performance was predicted by achievement
Degree that narrow traits predicted performance above global conscientiousness depended on performance criteria (e.g., task performance, job dedication) and job type (e.g., sales, managerial). |
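Aside: a generic sketch of the hierarchical-regression logic behind facet-vs-global comparisons like this (synthetic data; the facet names in the comment are only examples) — regress the criterion on global C, then on the facets, and inspect the gain in R².

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 400
facets = rng.normal(size=(n, 4))                    # e.g., dependability, achievement, order, cautiousness
global_c = facets.mean(axis=1, keepdims=True)       # global C as the unit-weighted composite
perf = 0.5 * facets[:, 0] + rng.normal(size=n)      # criterion driven mostly by one facet

r2_global = LinearRegression().fit(global_c, perf).score(global_c, perf)
r2_facets = LinearRegression().fit(facets, perf).score(facets, perf)
print(f"R2 global C = {r2_global:.2f}; delta R2 for facets = {r2_facets - r2_global:.2f}")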
|
|
Term
|
Definition
Developed and evaluated a 6-2-1 hierarchical framework in which each B5 factor comprised 6 NEO facets, 2 lower-order aspects derived from DeYoung et al., and 1 global factor
Demonstrates that lower-order traits matter when predicting work perf; we need to look more into facets!
In 13/15 trait-criterion combos, the NEO facets best predicted each criterion (overall performance, task performance, contextual performance). For the other two, the DeYoung aspects explained the most variance |
|
|
Term
|
Definition
Broke E up into sociability, surgency, and positive emotions, and then related them to OCBs.
Found sociability wasn't related, surgency was negatively related, and positive emotions was positively related
This could explain why at the factor level, E isn't related to OCB!!! (The facets cancel each other out) |
|
|
Term
|
Definition
Investigated facets of O, and found that of the 8 facets studied, 6 had higher correlations with task performance than did the overall openness construct (-.10 vs -.07)
Suggests that the use of narrower measures of openness in organizational research could increase predictive validities |
|
|
Term
|
Definition
Same test presented either paper/pencil or on PC is roughly equivalent |
|
|
Term
|
Definition
Can recover normative info in forced-choice tests by presenting unidimensional item pairs that have different levels of the same trait (10% of same dimension items can help combat ipsativity problems) |
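Aside: a small illustration of the ipsativity problem this approach targets (made-up traits and item pairs) — when every pair pits two different traits against each other, each respondent's trait scores sum to the same constant, so normative (between-person) comparisons are lost; same-dimension pairs break that constraint.

import numpy as np

rng = np.random.default_rng(2)
traits = ["C", "E", "A"]
pairs = [("C", "E"), ("C", "A"), ("E", "A"), ("C", "E"), ("E", "A"), ("C", "A")]

def score(choices):
    totals = dict.fromkeys(traits, 0)
    for (first, second), picked_first in zip(pairs, choices):
        totals[first if picked_first else second] += 1   # chosen statement's trait gets +1
    return totals

for _ in range(3):
    s = score(rng.random(len(pairs)) < 0.5)
    print(s, "sum =", sum(s.values()))   # sum is always 6: purely ipsative
# Including some same-trait pairs (two C statements at different trait levels) lets
# totals differ across people, which is what recovers normative information.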
|
|
Term
|
Definition
Ideal point models more precisely measure all points on the trait continuum compared to Likert-type scales
Based on Thurstone (1928) - Assumes people endorse items that are closer to their true trait level than items that are further away from their true trait level |
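Aside: a minimal sketch of the contrast, using simplified functional forms rather than any specific published parameterization — under a dominance model the endorsement probability keeps rising with the trait, whereas under an ideal-point model it peaks where the person's level matches the item's location and falls off in both directions.

import numpy as np

def p_dominance(theta, b, a=1.5):
    # Endorsement probability keeps rising as theta increases past the item location b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_ideal_point(theta, delta, a=1.5):
    # Endorsement probability peaks when theta is near the item location delta
    # and falls off in both directions (squared-distance unfolding).
    return np.exp(-a * (theta - delta) ** 2)

theta = np.linspace(-3, 3, 7)
print(np.round(p_dominance(theta, b=0.0), 2))        # monotone increasing
print(np.round(p_ideal_point(theta, delta=0.0), 2))  # highest at theta == delta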
|
|
Term
|
Definition
- Other-reports are related to self-reports, but provide incremental validity
- Higher validity than self-reports (.21 and .37) |
|
|
Term
|
Definition
When predicting outcomes, "other" ratings had significantly higher validities compared to self ratings when predicting academic perf and JP, and had incremental validity over self ratings for both criteria |
|
|
Term
Huffcutt et al. (2001) Meta |
|
Definition
Found that personality ratings in both high (.14-.35) and low (.12-.25) structured interviews correlate with JP |
|
|
Term
|
Definition
Although interviewers can and do assess pers during interviews, they are not able to assess those traits that would best predict later job success (e.g., C) |
|
|
Term
|
Definition
Interviewers can be trained to evaluate personality in interviews |
|
|
Term
|
Definition
Implicit Trait Policies (SJTs)
Personality affects judgments of the effectiveness of behavioral episodes that express those personality traits; your standing on a dimension impacts your judgment of which behaviors are effective
Implicit measures likely measure the same trait, but may tap into different aspects of the construct |
|
|
Term
|
Definition
Personality SJTs are difficult to fake |
|
|
Term
Van Iddekinge et al. (2012) Meta |
|
Definition
Integrity tests predict performance (.18) and especially CWBs (.32) |
|
|
Term
Shaffer & Postlethwaite (2012) Meta |
|
Definition
Contextualizing items ("at work") increases the validity of personality |
|
|
Term
|
Definition
Says that there is no AI at the B5 level, but there is potential AI at the facet level (use of pers measures does not uniformly circumvent AI concerns) |
|
|
Term
|
Definition
Personality is more predictive for those low on g than high |
|
|
Term
Beatty et al. (2011) Meta |
|
Definition
Similar validities for noncognitive assessments in proctored vs. unproctored settings |
|
|
Term
Chiaburu et al. (2011) Meta |
|
Definition
CAO (then ES and then E) predict OCBs; B5 had incremental validity over JS in OCBs
Also, B5 more related to OCBs than task performance: multiple R = .19 for task perf vs. .28 for OCB |
|
|
Term
|
Definition
ES, A, C predict CWB-O and CWB-I |
|
|
Term
|
Definition
All B5 predict leadership effectiveness; all B5 (except A) predict leader emergence |
|
|
Term
|
Definition
E, ES, C predict adaptability (.11 to .14)
Military samples only |
|
|
Term
|
Definition
The validity of C generalizes across criterion types and occupations, and ES was related to a variety of outcomes
O and A more valid in training
E important in jobs like sales and managers |
|
|
Term
|
Definition
- Found personality predicted JP in military samples
- Sample mean weighted corr: .24 for B5
- Stronger in studies with JA
- Usefulness of personality remains stable over time
- Faking didn't seem to lower criterion-related validity
- Also found LOC (pos) and Type A personality (neg) related to JP |
|
|
Term
|
Definition
CSE constructs (.16 to .19 separately) relate to JP |
|
|
Term
Judge et al. (2008) Review |
|
Definition
- Job knowledge, goal setting, and motivation mediate C-JP relationship
- N and C relate to motivation
- JS predicted by CEN, PA, NA, CSE
- Related to CWB (CAN)
- Related to Accidents (ONAC) |
|
|
Term
|
Definition
Argue that reliability is usually lower for subscales compared to broader traits, and the number of items needed to reliably measure narrow personality constructs is about 3-6x the number of items typically used in pers inventories
Demonstrated that the B5 can predict work behaviors better than a 16-factor model like the 16PF
Says it is important to match predictor and criterion on the level of specificity. Since JP is super complex and multidimensional and has a strong general factor, it makes sense to focus on broadband predictors to predict this criterion |
|
|
Term
|
Definition
Demonstrated that facet level personality factors were better able to predict behaviors and concluded that aggregating facets to broad level predictors could threaten the predictive power of personality tests
Trait scales increased the predictive accuracy of factors by 8% |
|
|
Term
|
Definition
Nature of the criterion dictates the choice of predictors, and matching predictors with criteria always enhances validity |
|
|
Term
|
Definition
Examined stability of personality. Of the 6 trait categories, 4 demonstrated significant change in middle and old age |
|
|
Term
Roberts & DelVecchio (2000) |
|
Definition
Examined trait consistency at specific periods in the life course
Test-retest correlations showed trait consistency increased from childhood to college to midlife, reaching a plateau around .74 between ages 50 and 70 |
|
|
Term
|
Definition
Use of warnings and instructions on personality tests has not improved applicant perceptions |
|
|
Term
Dullaghan & Borman (2009) |
|
Definition
Use of prompt explaining why personality test is being used does nothing to improve applicant perceptions |
|
|
Term
|
Definition
Defines faking as a deliberate attempt to present oneself in a particular manner to achieve some desirable outcome |
|
|
Term
|
Definition
Posits people fake due to self-deception (they believe their self-reports) or because of impression management (conscious distortion) |
|
|
Term
|
Definition
Says faking may occur because it's hard to remember past behavior and difficult to communicate about behaviors accurately |
|
|
Term
Birkeland et al. (2006) meta |
|
Definition
Compared job applicants and non-applicant personality scale scores across 33 studies. Across all job types, applicants scored significantly higher than non-applicants on E (d = .11), ES (d = .44), C (d = .45), O (d = .13), A wasn't sig
ES and C are the most easily faked
For certain jobs (e.g., sales), the rank ordering of mean differences changed substantially, suggesting that job applicants distort responses on personality dimensions that are viewed as particularly job relevant |
|
|
Term
|
Definition
Compared personality scores between fake-good and honest responding conditions
Results indicated that all B5 factors were equally fakeable (d values between .54 and .93 for within-subjects designs) and faking was largest on scales of social desirability (2.26) |
|
|
Term
|
Definition
Depending on the magnitude and proportion of faking, it can have an impact on the predictive validity of personality |
|
|
Term
|
Definition
Faking influences passing rates in multiple hurdle systems |
|
|
Term
|
Definition
People who could identify the targeted criteria of an interview and an AC had higher JP. This ability was incrementally related to JP beyond GMA |
|
|
Term
Bradley & Hauenstein (2006) |
|
Definition
Faking doesn't affect construct validity, and leads to small differences in criterion-related validity |
|
|
Term
|
Definition
Very few people know what the profile for a particular job is, and virtually no one can fake an entire profile
Notes that personality is a stable predictor over time |
|
|
Term
|
Definition
The results of faking studies may not be generalizable across testing situations
Says we should focus on faking PREVENTION vs. detection |
|
|
Term
|
Definition
Found large effects of retesting for internal promotion candidates who had failed a personality exam, whereas passing candidates replicated their initial profile |
|
|
Term
|
Definition
For personality to be a viable selection tool, methods are needed to decrease both the ability (forced choice) and motivation (warnings) to fake |
|
|
Term
e.g., Barrick & Mount, 1996; Ones & Viswesvaran, 1998 |
|
Definition
Although IM and SD are found in personality data, the predictive validity for the entire group is unaffected in real-world settings (applicant or incumbent settings) |
|
|
Term
|
Definition
Faking is hard to detect because it may be a very complex whole test strategy that is at play |
|
|
Term
|
Definition
Warnings are not that effective, and can affect would-be honest responders too
- Once item parameters are estimated from IRT-based methods, non-adaptive MDPP tests can be created
- Adaptive tests yield the same estimation accuracies as nonadaptive tests twice as long.
Notes that IRT aberrance detection methods are technically complex and imperfect (may lead a would-be-honest respondent to be flagged as aberrant, & vice versa) |
|
|
Term
Vasilopoulos et al. (2005) |
|
Definition
Verification warnings increase correlation between personality and g; Thus, it is important to take extra care to test for AI and subgroup differences |
|
|
Term
Chernyshenko et al. (2012) |
|
Definition
Review research into MDPP methods and note that results have generally been favorable; should note though that they are typically complex, time consuming to create, and expensive |
|
|
Term
Chernyshenko et al. (2009) |
|
Definition
- Compared Likert-style, unidimensional pairwise preference, and MDPP formats
- Found MDPP items require complex judgments that could increase cognitive load
- Pairwise preference items could be simpler than Likert, because they don't require judgments regarding degrees of assent
- MDPP didn't affect validity
- All 3 yielded equally precise trait scores
- All 3 provided similar correlations with health and study behaviors within the personality domain
- Overall, says it is premature to get rid of FC |
|
|
Term
|
Definition
Correcting scores (for SD) doesn't match honest scores (effect sizes from .01 to -.23) |
|
|
Term
|
Definition
There seems to be valuable individual difference info in SD scores, so partialing this variance out takes away meaningful variance |
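Aside: a brief sketch (synthetic data, arbitrary coefficients) of what partialing SD out of a trait score does — the SD scale is regressed out and the residual is used; when SD overlaps with the trait itself, the residual tracks the true trait less well than the uncorrected score.

import numpy as np

rng = np.random.default_rng(3)
n = 300
true_trait = rng.normal(size=n)
sd_scale = 0.6 * true_trait + rng.normal(scale=0.8, size=n)        # SD overlaps with the trait
observed = true_trait + 0.3 * sd_scale + rng.normal(scale=0.5, size=n)

cov = np.cov(observed, sd_scale)
corrected = observed - (cov[0, 1] / cov[1, 1]) * sd_scale          # residualize SD out

print(round(np.corrcoef(observed, true_trait)[0, 1], 2))    # uncorrected score vs. true trait
print(round(np.corrcoef(corrected, true_trait)[0, 1], 2))   # lower: meaningful variance removed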
|
|
Term
Vasilopoulos et al. (2000) |
|
Definition
Job familiarity moderates the relationship between faking and response latencies |
|
|
Term
|
Definition
Found participants in a fake-good condition fixated more on the extremes of a personality response scale |
|
|
Term
|
Definition
Faking is a concern for all tests, not just personality |
|
|
Term
|
Definition
C becomes a stronger predictor of performance than GMA as time passes; it is a better predictor of performance change |
|
|
Term
|
Definition
The PPRF is a JA form to be used in making hypotheses about personality predictors of JP |
|
|
Term
|
Definition
If many applicants and many positions, use personality to screen out. If few applicants and few openings, use to identify excellent applicants |
|
|
Term
|
Definition
Relationships between C and E with performance were stronger for managers whose jobs allowed a higher degree of autonomy |
|
|
Term
|
Definition
Found that the expression of C was correlated with the immediacy of a task and that E and A were correlated with the friendliness of customers in a customer service job.
Used ESM and found personality states at work vary meaningfully within person and that this variation is related to situational factors. |
|
|
Term
|
Definition
To prevent faking in biodata, you can ask respondents to elaborate on both verifiable and subtle items. The items retain their validity through this elaboration |
|
|
Term
|
Definition
Says best prediction is always achieved with narrow criteria |
|
|
Term
Christiansen et al. (2005) |
|
Definition
MDPP tests can reduce faking. But, smart applicants can figure out what is job related |
|
|
Term
|
Definition
Including additional predictors in battery reduces faking |
|
|
Term
|
Definition
Found that the MDPP response format is susceptible to faking |
|
|
Term
|
Definition
Sample of over 30,000 managers
Found that applicants do fake to improve their scores on the second try. They note that Hogan et al. (2007) didn't find a difference because they retested everybody and most of those people had no motivation to change their score |
|
|
Term
|
Definition
O is least faked; Participants high in C and integrity are less likely to fake (important, because we are usually trying to select people with high C) |
|
|
Term
|
Definition
Can categorize reliably by getting along vs. getting ahead, FFM, and HPI (7 scales)
Feel E and O are too broad and should be split into ambition and surgency as well as intellectance and school success |
|
|
Term
|
Definition
Dark triad related to CWB: Machiavellianism (.25), Narcissism (.43), Psychopathy (.07) |
|
|
Term
|
Definition
Argue that faking requires a shorter latency than honest responding, because when responding accurately, people have to introspect and engage in self-referent processing, which takes time. |
|
|
Term
|
Definition
Found that faking requires a longer response latency, because respondents have to evaluate items for social desirability when trying to fake. |
|
|
Term
|
Definition
Response latencies are ineffective when individuals know that they are being timed. |
|
|
Term
|
Definition
Found that order drives early JP, and industriousness drives later JP (as it drives them to set more difficult goals and work harder to achieve them) |
|
|
Term
|
Definition
Meta-analysis. Warnings have an effect of about d =.20 to .30 |
|
|