I/O Psychology Flashcards Preview


Flashcards in I/O Psychology Deck (141)
1

Job Analysis 

Used to obtain info about the nature & requirements of a job; KSAO's (Knowledge, skills, abilities, & other characteristics) are used to devel. criterion measures & predictors.

Conducted to ID the essential characteristics of a job & may be the 1st step in a job evaluation.

Provides info. to:

  • Facilitate workforce training & planning programs
  • Assist w/decisions about job redesign
  • Help ID causes of accidents & other safety-related probs.

2

Methods for Conducting a Job Analysis

Info. about a job can be obtained in a few ways, including:

  • Observing EE's perform the job
  • Reviewing company records
  • Interviewing EE's, sups., & others familiar w/the job
  • Having EE's keep a job diary

Methods include:

  • Job-oriented techniques: Focus on work activities/tasks & conditions of work.
  • Worker-oriented techniques: Focus on KSAO's required for the job.
    • Position Analysis Questionnaire (PAQ)

A systematic process of determining how a job differs from other jobs in terms of required responsibilities, activities, & skills.

3

The Position Analysis Questionnaire (PAQ) 

A frequently used structured job analysis questionnaire w/194 items that provides info on 6 dimensions of worker activity:

  • info. input
  • mental processes
  • work output
  • relationships with other persons
  • job context
  • other job characteristics

A quantitative worker-oriented method of collecting data for purposes of job analysis.

More helpful for designing training progs. & deriving criterion measures that provide useful EE feedback.

4

Job Evaluation

Job evaluation may begin with a job analysis but is conducted for the purpose of setting wages and salaries.

The primary purpose of a job evaluation is to obtain detailed info. about job requirements in order to facilitate decisions related to compensation.

Involves IDing compensable factors & assigning dollar values to them, such as:

  • Skill & ed. requirements
  • Consequences of error
  • Degree of autonomy & responsibility

Also used to establish comparable worth.

Determine the relative worth of jobs in order to set wages & salaries.

 

5

Comparable Worth

(aka pay equity) Refers to the principle that jobs that require the same education, experience, skills, & other qualifications should pay the same wage/salary regardless of the employee's age, gender, race/ethnicity, etc.

6

Criterion Measures

Measures of job performance used to provide EE's w/performance feedback & to help make decisions about salary increases & bonuses, training needs, promotions, & termination.

Types:

  • Objective (direct) measures: Include quantitative measures of production & certain types of personnel data. (Not available for many jobs & may not provide a complete picture of an EE's perf.)
  • Subjective measures: Rely on the judgment of the rater. More useful for eval. complex contributors to job perf. such as motivation, leadership skills, & decision-making ability.
    • Absolute measures
      • ​Critical Incidents
      • Forced Choice
      • Graphic Rating Scale
      • BARS
    • Relative measures
      • Paired comparison
      • Forced distribution

7

Ultimate (Conceptual) Criterion

In the devel. of a job perf. measure, it is a theoretical measure of perf. that cannot actually be measured.

  • A construct that cannot be measured directly but instead is measured indirectly.
    • Ex: Ultimate criterion = "effective EE"
    • Actual criterion = dollar amt. of sales in a 3-mo. period

8

Subjective Criterion Measures

Rely on the judgment of the rater. More useful for eval. complex contributors to job perf. such as motivation, leadership skills, & decision-making ability.

  • Absolute measures: Subjective perf. assessments that indicate a ratee's perf. in absolute terms. Involve rating an EE w/out considering the perf. of other EE's & often take the form of a graphic, Likert-type scale.
    • Critical Incident Technique (CIT)
    • BARS
  • Relative measures (techniques): Involve comparing EE's to each other on various aspects of job perf. & help reduce rater biases; less useful than absolute measures for EE feedback. Include:
    • Paired comparison
    • Forced distribution

9

Relative Techniques; Types of Criterion Measures

Relative measures (techniques): Involve comparing EE's to each other on various aspects of job perf. & help reduce rater biases; less useful than absolute measures for EE feedback. Include:

  • Paired comparison: The rater compares each EE to every other EE performing the same job. → Disadvantage: time-consuming as the number of EE's increases.
  • Forced distribution: The rater categorizes EE's in terms of a predefined normal distribution. → Disadvantage: produces misleading info when perf. is not actually normally distributed.
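The time cost of paired comparison can be made concrete: for n EE's, every pair must be judged once, giving n(n-1)/2 comparisons. A minimal sketch (illustrative numbers only, not part of the source material):

```python
def paired_comparisons(n_employees: int) -> int:
    """Number of pairwise judgments a rater must make when
    every employee is compared with every other employee."""
    return n_employees * (n_employees - 1) // 2

# The rater's workload grows quadratically with group size:
print(paired_comparisons(5))    # 10 comparisons
print(paired_comparisons(20))   # 190 comparisons
print(paired_comparisons(100))  # 4950 comparisons
```

This quadratic growth is why paired comparison becomes impractical for large groups, while forced distribution requires only one categorization per EE.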

10

Rater Bias

4 types of rater bias that limit the validity & reliability of rating scales:

  1. Leniency Bias: Occurs when a rater consistently assigns high ratings to all EE's, regardless of how they actually do on the job.
  2. Strictness Bias: Occurs when a rater consistently assigns low ratings to all EE's, even when they are good workers.
  3. Central Tendency Bias: Occurs when a rater consistently assigns average ratings to all EE's.
  4. Halo Bias: Occurs when the rater judges all aspects of an EE's perf. on the basis of a single aspect of perf.

 

11

Leniency Bias


Type of rater bias that occurs when a rater consistently assigns high ratings on each dimension of performance to all EE's, regardless of how they actually do on the job.

Can be alleviated by using relative rating scales, such as the forced distribution scale, which categorizes EE's in terms of a predefined normal distribution.

12

Central Tendency Bias


Occurs when a rater consistently assigns average ratings to all EE's.

13

Halo Bias


Occurs when the rater judges all aspects of an EE's perf. on the basis of a single aspect of perf.

14

Methods for Reducing Rater Bias

The best way is to provide raters w/adequate training, especially training that helps them observe & distinguish btwn levels of performance, such as:

  • Critical Incident Technique (CIT)
  • Behaviorally Anchored Rating Scales (BARS)
  • Frame-of-reference Training

15

Critical Incident Technique (CIT) 

Involves using a checklist of critical incidents (descriptions of successful & unsuccessful job behaviors) to rate each employee's job performance.

The supervisor observes EE's & records behaviors, which are then used to provide EE's w/feedback about perf. or compiled into a checklist.

When incorporated into rating scales, can help reduce rater biases.

 

16


Behaviorally Anchored Rating Scales (BARS)
 

A graphic rating scale that requires the rater to choose the one behavior for each dimension of job performance that best describes the employee.

Incorporates critical incidents which improves graphic rating scales by using anchor points on the scale w/descriptions of specific behaviors representing poor to excellent perf.

Its distinguishing characteristic is that it is devel. in a multi-step process that involves a team of sups., managers, & other ppl familiar w/the job.

Advantage: Involvement of managers/sups. may increase their motivation & accuracy when they use the scales.

Disadvantage: Requires substantial time & effort to develop.

17

Frame-of-Reference Training

A type of rater training that emphasizes the multidimensional nature of job performance & focuses on the ability to distinguish between good & poor work-related behaviors. (Training focuses on helping raters become good observers of behavior.)

Helps ensure that raters have the same idea about what constitutes successful & unsuccessful job perf.

It is useful for reducing rater biases.

18

Criterion Deficiency

The degree to which an actual criterion does NOT measure all aspects of the ultimate (conceptual) criterion & is one of the factors that limits criterion relevance.

A criterion measure can have high reliability but low validity (it can give consistent results but measure only some aspects of the ultimate criterion).

Criterion Deficiency = Low Validity

19

Criterion Contamination

A bias that occurs when a rater's knowledge of an indiv.'s perf. on a predictor affects how the rater rates him/her on the criterion; the criterion measure assesses factors other than those it was designed to measure.

It can artificially inflate the criterion-related validity coefficient.

20

Identifying & Validating Predictors

  1. Conduct a Job Analysis: Determine what knowledge, skills, abilities, & other characteristics (KSAO's) the job requires. This info. indicates the type of predictors that would be useful & the best criterion measures to eval. job perf.
  2. Select/Devel. the Predictor & Criterion Measures
  3. Obtain & Correlate Scores on the Predictor & Criterion: Admin. both to a sample of ppl & correlate scores on the predictor w/scores on the criterion to determine a criterion-related validity coefficient.
  4. Check for Adverse Impact: Determine if the predictor unfairly discriminates against members of a legally protected grp.
  5. Evaluate Incremental Validity: Determine if use of the predictor increases decision-making accuracy.
  6. Cross-Validate: Admin. the predictor & criterion to a new sample.

21

Adverse Impact 

Occurs when use of a selection test or other employment procedure results in substantially higher rejection rates for members of a legally protected (minority) group than for the majority group.

The result of discrimination against indivs. protected by Title VII & related legislation due to the use of an employment practice.

Methods to ID adverse impact:

  • 80% Rule
  • Differential Validity
  • Unfairness

 

22

80% Rule

The 80% rule can be used to determine if adverse impact is occurring.

Under EEOC guidelines, the hiring rate for the majority group is multiplied by 80% to determine the min. hiring rate for the minority group.

Ex: If the hiring rate is 70% for men & 40% for women, then .70 x .80 = .56

  • This means the min. hiring rate for women is 56%, which is greater than the actual rate of 40% & indicates the selection test is having an adverse impact on women.
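The arithmetic in the example above can be sketched as a small check (hypothetical helper name, illustrative rates):

```python
def adverse_impact_check(majority_rate: float, minority_rate: float) -> bool:
    """80% rule: the minority hiring rate must be at least 80% of
    the majority hiring rate. Returns True when adverse impact
    is indicated (minority rate falls below the minimum)."""
    minimum_rate = majority_rate * 0.80
    return minority_rate < minimum_rate

# Example from the card: 70% of men hired vs. 40% of women.
# Minimum acceptable rate for women: .70 * .80 = .56
print(adverse_impact_check(0.70, 0.40))  # True -> adverse impact indicated
print(adverse_impact_check(0.70, 0.60))  # False -> .60 meets the .56 minimum
```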

23

Differential Validity 

Differential validity exists when the validity coefficient of a predictor is significantly different for one subgroup than for another subgroup (e.g., lower for African American job applicants than for White applicants) & results in a larger proportion of 1 grp being hired.

Potential cause of Adverse Impact

Method for responding to adverse impact: When it's due to differential validity, use a diff. predictor that's equally valid for both grps

24

Unfairness

Refers to unfair hiring, placement, or related discrimination against a minority grp that occurs when members of the minority group consistently score lower on a predictor but perform approximately the same on the criterion as members of the majority group. (EEOC)

Potential cause of Adverse Impact bc members of the grp obtaining lower predictor scores will be hired less often.

Method for responding to adverse impact: When it's due to unfairness, use different predictor cutoff scores for members of different grps.

25

Incremental Validity (Selection Ratio, Base Rate)

Incremental validity refers to the increase in decision-making accuracy resulting from the use of a new predictor.

Selection ratio: the ratio of the number of job openings to the number of applicants.

Base rate: the percent of current EE's who are performing satisfactorily w/out the new predictor.

It is maximized when the predictor‘s validity coefficient is high, the selection ratio is low, and the base rate is moderate. 
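The two ratios above can be made concrete with a small sketch (hypothetical numbers, illustrative only):

```python
def selection_ratio(openings: int, applicants: int) -> float:
    """Selection ratio: job openings relative to applicants.
    A low value means the organization can be selective."""
    return openings / applicants

def base_rate(satisfactory: int, total_employees: int) -> float:
    """Base rate: proportion of current employees performing
    satisfactorily without the new predictor."""
    return satisfactory / total_employees

# 10 openings, 200 applicants -> low selection ratio:
print(selection_ratio(10, 200))  # 0.05
# 50 of 100 current EE's satisfactory -> moderate base rate:
print(base_rate(50, 100))        # 0.5
```

These are the conditions under which a valid new predictor adds the most: many applicants per opening, and enough current failures that there is room to improve.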

26

In terms of incremental validity, which situation supports the use of a new predictor?

Moderate base rate w/many applicants & few job openings.

A moderate base rate suggests that there's room for improvement & a new predictor will likely increase decision-making accuracy.

The situation is optimal when there are many applicants to choose from (a low selection ratio).

The degree to which a new selection technique will increase decision-making accuracy depends on several factors including:

Base rate - proportion of correct decisions w/out the new technique, &

Selection Ratio - ratio of job openings to applicants.

 

27

Taylor-Russell Tables

Can be used to estimate the percent of new hires that will be successful EE's when the validity coefficient, selection ratio, & base rate are known.

When the selection ratio is low (e.g., .10) & the base rate is moderate (near .50), even a predictor w/a low validity coefficient can improve decision-making accuracy.

28

Combining Predictors
What are the three types?

Multiple Regression: A compensatory method in which good perf. on one predictor can offset poor perf. on another predictor (strengths compensate for areas of weakness).

Multiple Cutoff: A non-compensatory method that requires that a min. score on each predictor be obtained before an applicant is considered for selection.

Multiple Hurdles: A non-compensatory method that involves administering predictors one at a time in a pre-determined order, w/each predictor being admin. only if the applicant has passed the previous one.
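The two non-compensatory methods can be contrasted in a short sketch (hypothetical predictor names, scores, & cutoffs, for illustration only):

```python
# Hypothetical cutoff scores for three predictors:
cutoffs = {"cognitive_test": 70, "work_sample": 60, "interview": 50}

def multiple_cutoff(scores: dict) -> bool:
    """Non-compensatory: every predictor must meet its minimum.
    All predictors are administered before the decision is made."""
    return all(scores[p] >= c for p, c in cutoffs.items())

def multiple_hurdles(get_score,
                     order=("cognitive_test", "work_sample", "interview")) -> bool:
    """Non-compensatory & sequential: each predictor is administered
    only if the applicant passed the previous one."""
    for predictor in order:
        if get_score(predictor) < cutoffs[predictor]:
            return False  # screened out; later predictors are never given
    return True

applicant = {"cognitive_test": 75, "work_sample": 55, "interview": 80}
print(multiple_cutoff(applicant))                # False (fails work_sample)
print(multiple_hurdles(lambda p: applicant[p]))  # False (stops at work_sample)
```

Both reject this applicant, but multiple hurdles never administers the interview, which is why it is cheaper when early predictors are inexpensive to give.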

29

Predictors Used in Organizations

Include:

  • Cognitive Ability Tests/Gen. Mental Ability Tests
  • Biographical Information/Biodata
    • Biographical Information Banks
  • Interviews
  • Work Samples
    • Trainability Tests
  • Assessment Centers
    • In-basket Test
    • Leaderless Group Discussion
  • Interest Tests
  • Personality Tests
    • Big 5 Traits

30


Cognitive Ability Tests
(Gen. Mental Ability Tests)

Considered to be the best predictor of job perf. across different jobs & job settings.

These tests consistently produce the highest validity coefficients, & their validity increases as the objectivity of the criterion measure increases.