Term
Industrial Psychology
Definition
Associated with job analysis, training, selection, and performance measurement

Term
Organizational Psychology
Definition
Deals with motivation, work attitudes, leadership, and organizational development

Term
SIOP (Society for Industrial and Organizational Psychology)
Definition
The professional association that I/O psychologists affiliate with

Term

Definition
Scientist-practitioner model; MA or PhD degrees

Term
History of I/O (early years)
Definition
Walter Dill Scott argued that psychology applies not only to people's minds but also to practical problems such as advertising (The Theory and Practice of Advertising); Hugo Munsterberg wrote Psychology and Industrial Efficiency

Term
History of I/O (WWI-1920s)
Definition
Yerkes, president of the APA, developed the Army Alpha (for literate recruits) and Army Beta (for illiterate recruits) tests; the military used I/O extensively; Bruce Moore received the first I/O PhD, from Carnegie Tech

Term
History of I/O (1930s-WWII)
Definition
Hawthorne studies; the organizational side of the field emerges

Term
History of I/O since the 1980s
Definition
- cognitive revolution
- internet applications of I/O
- work-family issues
- teams
- legal issues
- justice ("perceived justice")
- 360-degree feedback (multiple raters)

Term
What makes a good theory? (define terms) Parsimony, Precision, Testability, Usefulness, Generativity
Definition
Parsimony: simplicity, directness, to the point
Precision: specific
Testability: it has to be possible to test it
Usefulness: can it be used in practice?
Generativity: should generate additional research

Term
Induction vs Deduction
Definition
Induction: gather data and then reason out a theory (specific to general)
Deduction: start with a theory and then gather data (general to specific)
Science uses both (the Cyclical Inductive-Deductive Model of Research)

Term
The Cyclical Inductive-Deductive Model of Research
Definition
- starts with either data or theory
- most research is driven by an inductive process; the original theory likely came from somewhere

Term

Definition
Allow us to test hypotheses and draw causal inferences

Term

Definition
We want to conduct experiments that allow us to infer causality (cause and effect) from studies

Term
Independent vs Dependent Variable
Definition
IV: anything manipulated; sometimes called a Predictor, Precursor, or Antecedent
DV: the variable of interest; sometimes called a Criterion, Outcome, or Consequence

Term
Extraneous Variables and Control
Definition
Extraneous variables (confounds): any other variables that can contaminate the results and prevent claims of causation
Control: eliminating all alternative explanations (confounds) for the findings

Term
Internal vs External Validity
Definition
Internal Validity: control for all extraneous variables
External Validity: realism; the extent to which results generalize to other people, settings, etc. (field studies)

Term
Steps of the research process
Definition
Formulate Hypothesis > Design Study > Collect Data > Analyze Data > Report Findings

Term
Field and Quasi-Experiments
Definition
Field: takes advantage of realism to address external validity concerns; uses random assignment and manipulation within real-world settings
Quasi: a field study WITHOUT random assignment/manipulation; very common in I/O psychology (you can't always manipulate things to get information)

Term
Nonexperimental methods
Definition
- neither random assignment nor manipulation
- also called correlational design or the descriptive approach
- able to describe relationships
- a great deal can be gained

Term
Observational methods
Definition
- observation of someone or something in their natural environment
- Participant observation: the observer tries to blend in with those being observed
- Unobtrusive observation: the observer tries to observe objectively and unobtrusively, but not to blend in

Term
Case studies
Definition
- examination of an individual, group, or society
- beneficial in terms of description and providing details about typical or exceptional individuals

Term
Archival research
Definition
- relies on secondary data sets collected by others for general or specific purposes
- little control over what others study/report

Term
Measurement
Definition
- the assignment of numbers to objects or events in such a way as to represent specified attributes (dimensions along which individuals can be measured and along which they vary)
- Measurement Error: anything that can make measurements inaccurate

Term
Reliability
Definition
The consistency or stability of a measure

Term
Test-Retest Reliability
Definition
- reflects the consistency of a test over time
- stability coefficient
- administer the test at time 1 and again at time 2 and see whether individual scores are similar (see the sketch below)

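A minimal Python sketch of the stability coefficient, using invented scores for illustration (statistics.correlation requires Python 3.10+):

    # Correlate the same people's scores from two administrations of a test.
    from statistics import correlation

    time1 = [82, 75, 90, 68, 88, 79]  # hypothetical scores at time 1
    time2 = [80, 78, 92, 65, 85, 81]  # hypothetical scores at time 2

    stability = correlation(time1, time2)  # Pearson's r between the two
    print(f"test-retest reliability = {stability:.2f}")
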
Term
Parallel Forms Reliability
Definition
- the extent to which two separate forms of a test are similar measures of the same construct
- Coefficient of Equivalence: e.g., two forms of the same final exam

Term
Inter-Rater Reliability
Definition
- the extent to which multiple raters/judges agree on the ratings made
- examine the correlation between ratings
- helps protect against interpersonal bias

Term
Internal Consistency Reliability
Definition
- an indication of the interrelatedness of items (how well the items hang together)
- Split-Half: split the test in half by odd and even item numbers (see the sketch below)
- Inter-Item: look at the relationships among all the items for consistency
*Rule of thumb for reliability: should be greater than .70

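A sketch of the split-half approach on invented data: total the odd and even items separately, correlate the halves, then apply the Spearman-Brown correction for the halved test length (statistics.correlation requires Python 3.10+):

    from statistics import correlation

    # Each row is one person's hypothetical responses to a six-item scale.
    responses = [
        [4, 5, 4, 4, 5, 4],
        [2, 1, 2, 2, 1, 3],
        [3, 3, 4, 3, 3, 4],
        [5, 4, 5, 5, 5, 4],
        [1, 2, 1, 2, 2, 1],
    ]

    odd_half = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
    even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6

    r_halves = correlation(odd_half, even_half)
    # Spearman-Brown correction: r_sb = 2r / (1 + r)
    r_split_half = (2 * r_halves) / (1 + r_halves)
    print(f"split-half reliability = {r_split_half:.2f}")  # compare to .70
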
Term
Construct Validity
Definition
- Construct validity: the extent to which a test measures the underlying construct it was intended to measure
- Construct: an abstract quality that is not observable and is difficult to measure

Term
Two types of evidence used to demonstrate construct validity
Definition
Content Validity: the degree to which a test covers a representative sample of the quality being assessed
Criterion-Related Validity: the degree to which a test is a good predictor of attitudes, behavior, or performance

Term
Approaches to Criterion-Related Validity
Definition
- Predictive validity: a score at one time predicts criteria measured at a later time
- Concurrent validity: the test predicts criteria measured at the same time as the test

Term
Components of Construct Validity
Definition
- Convergent Validity: the measure is related to other measures of similar constructs
- Divergent Validity: the measure is not related to measures of dissimilar constructs

Term
Measures of Central Tendency
Definition
Mode: the most common number
Median: the middle number
Mean: the average
(see the sketch below)

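All three measures are available directly in Python's statistics module; the scores below are made up for illustration:

    from statistics import mean, median, mode

    scores = [2, 4, 4, 5, 7, 9, 11]

    print(mode(scores))    # 4: the most common number
    print(median(scores))  # 5: the middle number
    print(mean(scores))    # 6: the average
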
Term
Measures of Variability
Definition
Range: the spread from the highest to the lowest number
Variance: more useful than the range; low variance means scores are close to the average
Standard Deviation: the square root of the variance; retains the original metric of the scores (see the sketch below)

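A sketch of the three measures on the same made-up scores; population formulas are used here (statistics.variance and statistics.stdev give the sample versions):

    import math
    from statistics import pstdev, pvariance

    scores = [2, 4, 4, 5, 7, 9, 11]

    spread = max(scores) - min(scores)  # range: highest minus lowest
    var = pvariance(scores)             # low variance = scores close to the mean
    sd = pstdev(scores)                 # standard deviation

    print(spread, round(var, 2), round(sd, 2))
    print(math.isclose(sd, math.sqrt(var)))  # True: sd is the square root of variance
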
Term
Correlation Coefficient (r)
Definition
An index of the strength of the relationship between two variables (see the sketch below)

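A minimal sketch of the defining formula on invented data: r is the covariance of the two variables scaled by their standard deviations, which matches statistics.correlation (Python 3.10+):

    from statistics import correlation, covariance, stdev

    x = [1, 2, 3, 4, 5]
    y = [2, 1, 4, 5, 5]

    # r = cov(x, y) / (sd_x * sd_y), using sample statistics throughout
    r_manual = covariance(x, y) / (stdev(x) * stdev(y))
    print(round(r_manual, 3) == round(correlation(x, y), 3))  # True
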
Term
Job Analysis
Definition
The process of defining a job in terms of its components/tasks and the knowledge/skills required to do the job

Term
Terms: element, task, position, job
Definition
Element: the smallest unit of work activity
Task: multiple elements of work performed to achieve an objective
Position: defined by the tasks an individual performs
Job: a collection of positions similar enough to share a job title

Term
Approaches to Job Analysis
Definition
Job-oriented: focuses on describing the various tasks; specific task descriptions
Worker-oriented: examines the broad human behaviors involved in work activities; descriptions of general facets of the job

Term
Job-Oriented Approaches
Definition
Task Inventory approach: task statements generated by SMEs (subject matter experts); raters check the statements that describe the job
Functional Job Analysis (FJA): task statements rated on data, people, and things; used to develop the Dictionary of Occupational Titles (DOT)

Term
Worker-Oriented Approaches
Definition
- Job Element Method (JEM): identify superior workers on the job and their characteristics
- Position Analysis Questionnaire (PAQ): 194 items; employees decide whether each item pertains to their job and rate it
- Common Metric Questionnaire (CMQ): 2,077 items measured along 80 dimensions

Term
Job Descriptions, Job Specifications, and Job Evaluation
Definition
- Job Description: a written statement of what a job holder actually does; the task requirements
- Job Specifications: define the KSAOs necessary for the job; the people requirements
- Job Evaluation: determining the worth of the job

Term
Compensable Factors
Definition
Things to consider when paying people

Term
Equal Pay Act (1963)
Definition
Men and women who do equal work must receive equal pay; there is still a gap of 29%

Term
Doctrine of Comparable Worth
Definition
Jobs of equal worth to the organization should be compensated equally

Term
Criteria
Definition
- defining and measuring performance criteria is difficult because of their multidimensional nature and various purposes
- Criteria: evaluative standards that can be used as yardsticks for measuring employees' successes or failures
- a poor choice of criteria is bad for the employer

Term
Ultimate vs Actual Criterion
Definition
Ultimate: includes everything that defines job performance; abstract, theoretical, and complex; it is a goal and is hard to achieve
Actual: the best representation of the ultimate criterion and what is used in reality

Term
"Criteria" for the criteria: Relevance
Definition
- the degree to which the actual criterion relates to the ultimate criterion (overlap)
- Criterion Deficiency: parts of the ultimate criterion that are not included in the actual measure
- Criterion Contamination: things measured by the actual criterion that are not part of the ultimate criterion; occurs because of measurement bias

Term
"Criteria" for the criteria: Reliability, Sensitivity, Practicality, and Fairness
Definition
Reliability: unreliable criteria are not useful
Sensitivity: must be able to discriminate between effective and ineffective employees
Practicality: the extent to which the criterion can and will be used by decision makers
Fairness: the extent to which employees perceive the criterion as just and reasonable

Term
Two components of the criterion problem
Definition
Multiple criteria vs composite criterion:
- Multiple: performance is multifaceted/multidimensional
- Composite: one thing; either you can do the job or you can't
*Today's view favors multiple criteria (Campbell's taxonomy)

Term
8 dimensions of Campbell's Taxonomy
Definition
- Job-specific task proficiency (core tasks)
- Non-job-specific task proficiency (general tasks)
- Written/oral communication skills
- Effort (consistency/persistence)
- Personal discipline (e.g., no substance use)
- Peer/team performance (helping/motivating others)
- Supervision (managing other employees)
- Management/administration (management that isn't supervision)

Term
Types of performance criteria: Objective vs Subjective
Definition
Objective: hard, nonjudgmental facts; counts of observations (# of absences, turnover, productivity, etc.)
Subjective: soft, judgmental; social judgments; evaluations from others

Term
Task Performance
Definition
Work-related activities performed by employees that contribute to the technical core of the organization

Term
Contextual Performance
Definition
- not "required," but helpful
- activities performed by employees that help maintain the broader organizational, social, and psychological environment
- also called Organizational Citizenship Behaviors (OCBs), Prosocial Organizational Behaviors (POBs), or Extra-Role Behaviors
- examples include enthusiasm, volunteering, altruism, etc.

Term
Difference between Task and Contextual Performance
Definition
- task activities vary across jobs; contextual activities are similar across jobs
- task activities are more likely to be "required"

Term
Performance Appraisal (PA)
Definition
- the systematic review and evaluation of employees' job performance
- uses: personnel decisions, developmental purposes, documentation (legal issues)

Term
Performance Management
Definition
The process of individual performance improvement; includes:
- goal setting
- coaching/feedback
- performance appraisal
- developmental planning

Term
Rating formats for PA: Graphic Rating Scales
Definition
Scales consisting of a number of traits/behaviors that the rater must judge, based on how much of each the employee possesses or on where the employee falls on the dimension relative to expectations

Term
Rating formats for PA: Behaviorally Anchored Rating Scales (BARS)
Definition
5 steps in development:
- identification of important performance dimensions (job specific)
- generation of behavioral examples at all levels of effectiveness
- retranslation of critical incidents (CIs) back into dimensions
- rating of each CI based on effectiveness
- choice of items with behavioral anchors
*An elaborate process, so a lot of time and money is involved

Term
Rating formats for PA: Checklists
Definition
- a list of behaviors the employee does or does not exhibit
- Weighted checklist: items rated by importance
- Forced-choice checklist: choose 2 of 4 items to describe the employee

Term
Rating formats for PA: Employee Comparison Procedures
Definition
How well does the employee measure up to peers/colleagues?
- Rank ordering: employees are ranked from best to worst
- Paired comparison: compare each employee to every other employee (see the sketch below)
- Forced distribution: raters distribute employees into 5 to 7 categories

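A small illustration of why paired comparison becomes time intensive: every employee must be compared with every other, so the number of judgments grows as n(n-1)/2 (the employee names are invented):

    from itertools import combinations

    employees = ["A", "B", "C", "D", "E"]
    pairs = list(combinations(employees, 2))  # every unique pair of employees

    print(len(pairs))  # 10 judgments for 5 employees; 190 for 20 employees
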
Term
Advantages/Disadvantages of Graphic Rating Scales
Definition
Advantages: easy to develop and use
Disadvantages: lack of precision in dimensions and in anchors

Term
Advantages/Disadvantages of BARS
Definition
Advantages: precise, well-defined scales; good for coaching; well received by employees and raters
Disadvantages: time and money intensive; no evidence that it is more accurate than other formats

Term
Advantages/Disadvantages of Checklists
Definition
Advantages: easy to develop and use
Disadvantages: rater errors are common, including halo, leniency, severity, etc.

Term
Advantages/Disadvantages of Employee Comparison Methods
Definition
Advantages: precise rankings are possible; useful for making administrative reward decisions on a limited basis
Disadvantages: time intensive; not well received by employees or raters

Term
Rater errors
Definition
Halo: an employee is rated high across all dimensions
Leniency: the supervisor rates all employees as great
Central Tendency: all employees are rated average
Severity: all employees are rated negatively

Term
Cognitive process model of PA
Definition
5 steps:
- observe behavior
- encode information about the behavior
- store the info
- retrieve the info
- integrate the info
Problems with each step:
- miss important behaviors
- label incorrectly
- store wrong info
- retrieve irrelevant info
- let personal liking affect integration

Term
Other errors: Recency Effect, First Impression Error, Similar-to-Me
Definition
Recency Effect: relying heavily on the most recent interactions/observations
First Impression: paying most attention to the initial experience with the employee
Similar-to-Me: being more favorable to employees the rater sees as similar to themselves

Term
Rater Error Training (RET)
Definition
- aims to reduce errors through awareness of them
- the assumption that reducing errors increases accuracy is faulty; it can actually reduce accuracy (some people really are great across all dimensions, which looks like a halo effect)

Term
Frame of Reference (FOR) training
Definition
- enhances observational and categorization skills
- provides a common frame of reference to increase the consistency of ratings
- improves accuracy

Term
The Social-Psychological Context
Definition
Factors such as the social, legal, and organizational cultures affect performance appraisal

Term

Definition
- rater and employee reactions to the appraisal process are important
- appraisal characteristics and organizational factors are also important

Term
Supervisor-Subordinate Relationship
Definition
- Leader-Member Exchange (LMX): supervisors have different types of relationships with different subordinates
- these relationships can affect PA
- employees with more favorable relationships tend to receive higher ratings

Term
Multiple-Source Feedback (360-degree feedback)
Definition
- involves ratings from subordinates, peers, supervisors, clients, and the self
- consistent with today's organizations
- prevalent for development
*3 assumptions:
- using multiple sources overcomes individual idiosyncrasies
- involvement in the process makes participants happy
- multiple viewpoints are valuable

Term
Perceived System Knowledge (PSK)
Definition
- the extent to which employees understand the appraisal system
- has an important effect on the appraisal process, such as supervisor-subordinate agreement
- positively related to job attitudes and appraisal reactions

Term
8 legal recommendations for PA
Definition
- start where?
- communicate performance standards in writing
- rate separate dimensions rather than an overall impression
- be objective when possible, using subjective judgments when necessary
- provide an appeal mechanism
- use multiple raters when possible
- document everything related to the decision (for legal purposes)
- train raters with written instructions

Term
Due process in performance appraisal
Definition
Emphasis on:
- Adequate notice: provide info on what employees will be rated on and when
- Fair hearing: includes the appeal mechanism; judgments are based on evidence; may include a self-assessment