Individual and Group Assessment for Decision Making

Explore the complexities of individual and group assessment in decision making: combining multiple assessments, issues in combining predictors, and ways to improve individual assessment practice. The deck also covers criticisms, historical approaches, and the relevance of various assessment methods.

  • Assessment
  • Decision Making
  • Individual
  • Group
  • Combining Assessments


Presentation Transcript


  1. CHAPTER 13: INDIVIDUAL AND GROUP ASSESSMENT
     Complex Candidate Judgments | Individual Assessments | Assessment Centers

  2. ISSUES IN COMBINING PREDICTORS
     Decision Models
       • Additive and compensatory: well known and useful
       • New concept: compensatory batteries
       • What about non-compensatory rules, "either-or" and "if-then" models? (contrasted in the sketch below) What does the ADA have to do with it?
       • Judgmental (for individual assessments) vs. statistical combination: cf. the judgmental-vs.-statistical literature; which is better?
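
To make the additive/compensatory vs. non-compensatory distinction concrete, here is a minimal Python sketch. All scores, weights, and cutoffs are hypothetical illustrations; the chapter itself gives no numbers.

```python
# Minimal sketch (not from the chapter): the additive/compensatory model
# vs. a non-compensatory multiple-hurdle ("if-then") rule.
def compensatory_score(scores, weights):
    """Weighted sum: strength on one predictor can offset weakness on another."""
    return sum(w * s for w, s in zip(weights, scores))

def multiple_hurdle(scores, cutoffs):
    """Every cutoff must be cleared; nothing compensates for a failure."""
    return all(s >= c for s, c in zip(scores, cutoffs))

candidate = [72, 55, 90]     # hypothetical: cognitive test, interview, work sample
weights   = [0.5, 0.2, 0.3]  # hypothetical weights
cutoffs   = [60, 60, 60]     # hypothetical minimum passing scores

print(compensatory_score(candidate, weights))  # 74.0 -> a strong composite
print(multiple_hurdle(candidate, cutoffs))     # False -> fails the interview hurdle
```

Note how the same candidate passes under one model and fails under the other; that divergence is exactly why the choice of decision model matters.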

  3. ISSUES IN COMBINING PREDICTORS: TOUGH CHOICES
     • Large applicant pools and top-down selection: no problem (see the sketch below)
     • What about small pools of candidates for one position?
     • What factors influence the decision?
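
The easy case really is a few lines of logic; the hard judgment calls only arise when the pool is small. A toy sketch with invented composite scores:

```python
# Toy sketch: with a large pool, top-down selection is just "rank on a
# valid composite and take the top k". Scores are invented.
pool = {"A": 88, "B": 74, "C": 91, "D": 69, "E": 83}
openings = 2
ranked = sorted(pool, key=pool.get, reverse=True)
print(ranked[:openings])  # ['C', 'A']
```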

  4. INDIVIDUAL ASSESSMENT
     • Usually for executives or special positions: performance is hard to define, and few people occupy the roles
     • Why do these assessment opportunities attract charlatans?
     • Holistic approach (Henry Murray): how good is this approach? (Meehl, 1954)
     • Analytic emphasis (history). Approach:
       - A consultant visited clients to learn the job, organization, and context
       - Two psychologists interviewed and rated candidates without access to data on them
       - Projective tests were analyzed by a clinician blind to other information
       - A test battery was developed to include two personality/interest inventories and ability tests
       - One psychologist-interviewer wrote the report
     • Two other programs: the Exxon and Sears batteries of the 1950s
       - Batteries included critical thinking and personality measures; multiple correlations of .70-.75! (illustrated below)
       - Executive success = forcefulness, dominance, assertiveness, confidence
       - Although valid, legal concerns from the 1950s-60s dampened further research
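
A multiple correlation of .70-.75 means the criterion correlates that highly with the optimally weighted composite of the battery. Here is a sketch of the computation on synthetic data; every parameter is invented for illustration, not taken from the Exxon or Sears studies.

```python
# Synthetic sketch: the multiple correlation R is the correlation between
# the criterion and the best least-squares composite of the battery.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))  # battery, e.g., critical thinking + two personality scales
y = X @ np.array([0.6, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)  # criterion

A = np.column_stack([np.ones(n), X])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares battery weights
R = np.corrcoef(A @ coef, y)[0, 1]
print(round(R, 2))  # about .8 with this synthetic noise level
```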

  5. IMPROVING INDIVIDUAL ASSESSMENT
     Criticisms of individual assessment:
     • Overconfidence in clinical judgments: true or false? (Camerer & Johnson, 1991; Highhouse, 2002)
     • "Psychometrics don't apply to this type of assessment"
     • "Assessors wouldn't be in business if they weren't valid"

  6. INDIVIDUAL ASSESSMENTS
     Other criticisms:
     1. Individual assessment is rarely subjected to validation
     2. Conclusions are often unreliable (Ryan & Sackett, 1989)
     3. Summaries are often influenced by one or two parts, which could be used alone
        (they are judgments, and judgments often focus on negative and early cues!)
     4. Great emphasis is usually placed on personality, when cognitive tests are usually more valid
     5. Actual interpersonal interaction needs to be assessed, but with more than one person evaluating (assessment centers are useful here)
     6. It may be ethically or legally questionable to seek information not explicitly relevant to the job
        ("Mr. Obama, can you tell us a little about your wife, Michelle?")

  7. TO ADDRESS THESE ISSUES
     Use the appropriate designs:
     1. Combine evidence of relevant traits with evidence from construct validities
     2. Use well-developed predictive hypotheses to dictate and justify the assessment content
     3. Use more work samples (or in-basket, SJT)
     4. To assess interpersonal behavior:
        • Personnel records / biodata / interview structure
        • Others?

  8. ASSESSMENT CENTERS
     Purposes: often organizationally specific, to reflect specific values and practices
     • For managerial assessment (Thornton & Byham, 1982)
       - Early identification of potential
       - Promotion
       - Development
     • For personnel decisions: the OAR (overall assessment rating)

  9. ASSESSMENT CENTERS (ORGANIZATION SPECIFIC)
     Purposes (each calls for differences in assessment program design):
     • Promotion: identify potential managers, succession planning
     • Management development
     Assessment Center Components (a job analysis is needed):
     • Multiattribute, and should be multimethod (more than one method per attribute)
     • Tests and inventories
     • Exercises (performance tests / work samples)
       - In-basket
       - Leaderless group discussions: do these have problems? Confounds?
     • Interviews: should a stress interview be used? When? Give an example.
     Assessors
     • Functions of assessors (Zedeck, 1986): observer and recorder, role player, predictor
     • Assessor qualifications: SMEs, HR, psychologists
     • Number of assessors: about two candidates per assessor

  10. Dimensions to be Assessed (see Table 13.2)
      • Dimensions are usually not defined, but should be: define them in behavioral terms (in a particular situation)
      Ratings
      • Replication: ratings on different predictors for the same dimension should generalize from one exercise to another. Would you predict that happens much?
      Overall Assessment Ratings (OAR)
      • Should the OAR be a definable attribute, or is it a composite of unrelated but valid predictors? (a composite is sketched below)
      • Is consensus the way to go? Can you think of an example?
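
Mechanically, the simplest reading of the composite view is a unit-weighted mean of mean dimension ratings, which skips consensus discussion entirely. A hypothetical sketch; the dimensions, scale, and numbers are invented, not from Table 13.2:

```python
# Hypothetical sketch: an OAR formed as a unit-weighted mean of mean
# dimension ratings (1-5 scale), with no consensus meeting.
from statistics import mean

ratings = {  # dimension -> one candidate's ratings across exercises
    "problem solving": [4, 5, 4],
    "communication":   [3, 3, 4],
    "planning":        [5, 4, 4],
}

dim_means = {d: mean(r) for d, r in ratings.items()}
oar = mean(dim_means.values())
print({d: round(m, 2) for d, m in dim_means.items()}, round(oar, 2))  # OAR = 4.0
```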

  11. ASSESSMENT CENTER PROBLEMS AND ISSUES
      Construct validities of dimension assessments
      • Dimensional consistency (across exercises): rs should be substantial, but not high. Why?
      • Results of factor analyses: are the factors in Table 13.3 defined by the exercises or by the dimensions? Is this consistent with Sackett & Dreher's (1982) findings? (a synthetic illustration follows this slide)
      • Reasons for inconsistency in dimension ratings; two viewpoints: are the dimensions relatively enduring, or situationally specific? Or contingent?
      • Solutions? The OAR; maybe the dimensions are just a small number of cognitive and personality factors; a behavioral checklist, perhaps?
      Criterion-related validities (review of meta-analytic studies):
      1. Predictive validity is higher with multiple measures
      2. Validities are higher when peer evaluations are included
      3. Assessor background and training moderate validity
      4. Four dimensions account for most of the variance
      5. Validities are higher for managerial progress than for future performance
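
The exercise-vs.-dimension question can be illustrated with synthetic data deliberately built so the exercise, not the dimension, drives the ratings, the pattern Sackett & Dreher (1982) reported. All loadings are invented; nothing here uses real assessment center data.

```python
# Synthetic illustration: ratings are generated with a heavy exercise
# loading and a weak dimension loading, so same-exercise correlations
# exceed same-dimension correlations ("exercise factors").
import numpy as np

rng = np.random.default_rng(1)
n = 300
exercise  = rng.normal(size=(n, 2))  # latent exercise factors
dimension = rng.normal(size=(n, 2))  # latent dimension levels

def rating(ex, dim):
    # exercise loading .8, dimension loading .3, plus rating noise
    return 0.8 * exercise[:, ex] + 0.3 * dimension[:, dim] + 0.3 * rng.normal(size=n)

r = {(e, d): rating(e, d) for e in (0, 1) for d in (0, 1)}
same_dim  = np.corrcoef(r[0, 0], r[1, 0])[0, 1]  # dimension 0 across exercises
same_exer = np.corrcoef(r[0, 0], r[0, 1])[0, 1]  # exercise 0 across dimensions
print(round(same_dim, 2), round(same_exer, 2))   # same-exercise r is much larger
```

If ratings behaved the way the dimension labels claim, the comparison would come out the other way around; that is the construct validity problem in miniature.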


  13. POINT OF VIEW
      What is the authors' point of view on assessment center validity? What do they recommend?
      • Behaviorally based ratings
      • Using checklists
