Research Update

The September issue of the International Journal of Selection and Assessment (IJSA) contained a number of interesting articles. We provide brief summaries of the following articles:
  • Validity and/or Reducing Adverse Impact – Still a Complex Balancing Act

  • A New Source of Bias in the Performance Appraisal Process?

  • Knowledge and Beliefs about Assessment Tools and Techniques: UK Practitioners’ Perspective

  • Personality Plays an Important Role in How to Handle Retail Customers

  • Are Nonverbal Measures of Cognitive Ability More Likely to Produce Fair Employment Decisions?

  • CAT and Personality Testing: A Cautionary Note

  • Important Questions on the Construct Validation of Assessment Center Dimensions




Validity and/or Reducing Adverse Impact – Still a Complex Balancing Act



This issue contains an interesting and thoughtful exchange on the concept of Pareto-optimal predictor composites as a potential approach to alleviating (not to be confused with resolving) the validity-diversity dilemma. Pareto-optimal composites refer to a specific approach for assigning weights to individual predictors in a selection composite. This approach, which is primarily contrasted with regression weighting, suggests there is value in considering alternative weighting schemes that may provide a more balanced trade-off between validity and adverse impact (AI). De Corte, Lievens, and Sackett essentially argue that the function of regression-based composites is to maximize validity. At the opposite end of the spectrum, one could adopt a weighting approach that results in the highest possible adverse impact ratio (AIR). Pareto-optimal weights reflect a more balanced trade-off between these two extremes, allowing organizations to ask, “What reduction in validity are we willing to accept in order to increase diversity?” This paper extends De Corte and colleagues’ 2007 work, which introduced the concept of Pareto-optimal trade-offs. The authors attempt to demonstrate that, in contrast to findings reported by Potosky, Bobko, and Roth (2005), adding non-cognitive measures to a cognitive ability test can result in more than a modest decrease in adverse impact - if one is willing to consider alternatives to the regression weighting method. In two separate commentaries, Potosky and her colleagues and Kehoe respond to De Corte et al.’s description of the Pareto-optimal concept, their findings, and the implications and interpretability of these findings for organizations.
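The trade-off at the heart of this exchange can be illustrated with a toy calculation. The sketch below uses entirely hypothetical validities, subgroup mean differences, and a predictor intercorrelation (none of these values come from the article, and this weight sweep is a simplification, not De Corte et al.'s actual optimization procedure). It shows how shifting weight between a cognitive and a non-cognitive predictor moves composite validity and the composite's standardized subgroup difference in opposite directions:

```python
import math

# Illustrative (hypothetical) predictor statistics -- not values from the article.
r = {"cognitive": 0.50, "noncog": 0.30}   # criterion validities
d = {"cognitive": 1.00, "noncog": 0.20}   # standardized subgroup mean differences
rho = 0.10                                # predictor intercorrelation

def composite_stats(w):
    """Validity and subgroup d of the composite w*cognitive + (1-w)*noncog,
    assuming unit-variance predictors."""
    var = w**2 + (1 - w)**2 + 2 * w * (1 - w) * rho
    sd = math.sqrt(var)
    validity = (w * r["cognitive"] + (1 - w) * r["noncog"]) / sd
    d_comp = (w * d["cognitive"] + (1 - w) * d["noncog"]) / sd
    return validity, d_comp

# Sweep the weight on the cognitive test to trace the validity/impact trade-off.
for w in [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]:
    v, dc = composite_stats(w)
    print(f"w_cog={w:.1f}  validity={v:.3f}  subgroup d={dc:.3f}")
```

Each weight on this sweep answers the question the article poses: how much validity a given reduction in the subgroup difference costs.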

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382614/abstract

A New Source of Bias in the Performance Appraisal Process?


The Influence of a Manager’s Own Performance Appraisal on the Evaluation of Others

Latham, Budworth, Yanar, and Whyte examine the influence of managers’ most recent performance appraisal on their subsequent appraisal of others. They hypothesized that a manager who receives a favorable performance appraisal will subsequently evaluate another person more positively than a manager who receives an unfavorable appraisal. Four separate studies were conducted to test the hypothesis: a case study, a lab experiment, and two field studies. In the lab setting, managers received hypothetical feedback and were then asked to rate an individual’s videotaped performance. The field studies were conducted with managers in a manufacturing plant in Canada and in a retail organization in Turkey to determine whether the lab study results would generalize to these settings. Findings from the field supported the results of the case study, the lab study, and the hypothesis that performance evaluations received by managers predicted their performance evaluations of their employees. These findings support the anchoring and adjustment hypothesis, which states that people look for a guidepost or anchor when making estimates of value under uncertainty, and that these anchors are often based on irrelevant information. This research underscores the importance of rater training and the use of highly structured behavioral appraisal instruments as potential strategies for reducing bias in performance ratings.

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382612/abstract

Knowledge and Beliefs about Assessment Tools and Techniques: UK Practitioners’ Perspective



HR Professionals’ Beliefs About, and Knowledge of, Assessment Techniques and Psychometric
Tests

Furnham examines knowledge and beliefs about various assessment tests and techniques among 255 HR and related professionals in the UK. Respondents provided information on a number of assessment methods including interviews, references, assessment centers, and “personal hunch.” They were also asked to share their knowledge of 21 personality tests and 19 cognitive ability tests. Practitioners considered assessment centers, cognitive ability tests, and work samples to be the most valid techniques. The most widely known and used personality tests were the 16PF and the MBTI. The Belbin Team Role Inventory and the FIRO-B were also widely recognized and used. The NEO-PI-R and the 16PF were rated as most useful for selection, while the MBTI and FIRO-B were considered most useful from a development perspective. The most widely known and used cognitive ability tests were the Graduate and Managerial Assessment and the Watson-Glaser Critical Thinking Appraisal. Furnham considers the implications of these results for educating and informing practitioners.


View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382621/abstract

Personality Plays an Important Role in How to Handle Retail Customers



Effects of Personality Characteristics on Knowledge, Skill, and Performance in Servicing Retail Customers

Motowidlo, Brownlee, and Schmit investigated knowledge, skill, and personality characteristics associated with effective customer service. Following from research indicating that ability affects performance primarily through its effects on technical knowledge and skill, the authors of the current study tested whether the effect of extraversion, agreeableness, and neuroticism on customer service performance is mediated through interpersonal knowledge and skill. The authors administered a situational interview to retail store associates to measure customer service knowledge and a role-play simulation to measure customer service skill; the NEO-FFI and the Wonderlic Personnel Test were used to measure personality and cognitive ability, respectively. Job performance was measured using supervisor ratings. Results from 140 associates indicated that extraversion, agreeableness, and neuroticism explain incremental variance in customer service knowledge beyond that accounted for by ability, experience, and conscientiousness. The authors also found that the effects of personality, ability, and experience on customer service performance all funnel through customer service knowledge and skill. In addition, there was a moderating effect for conscientiousness such that customer service knowledge predicts performance best for those who are highly conscientious. Results suggest a causal model that could be tested in future research. These results have implications for hiring practices, lending support for matching knowledge content with interpersonally oriented personality traits.
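The mediation logic the authors test (effects of personality "funneling through" knowledge and skill) can be sketched with standardized variables. The correlations below are hypothetical placeholders, not the article's data; the path formulas are the standard ones for a single mediator with standardized X, M, and Y:

```python
# Hypothetical standardized correlations for:
# personality (X) -> customer service knowledge (M) -> performance (Y).
r_xm = 0.40  # personality with knowledge
r_xy = 0.25  # personality with performance
r_my = 0.50  # knowledge with performance

a = r_xm                                         # path X -> M
b = (r_my - r_xy * r_xm) / (1 - r_xm ** 2)       # path M -> Y, controlling X
direct = (r_xy - r_my * r_xm) / (1 - r_xm ** 2)  # path X -> Y, controlling M
indirect = a * b                                 # effect transmitted through M

# With standardized variables, direct + indirect recovers the total effect r_xy.
print(f"indirect={indirect:.3f}  direct={direct:.3f}  total={r_xy:.3f}")
```

A mediation pattern like the one the article reports would show most of the total effect carried by the indirect path, with the direct path near zero.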

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382617/abstract

Are Nonverbal Measures of Cognitive Ability More Likely to Produce Fair Employment Decisions?



Comparing the Performance of Native North Americans and Predominantly White Military Recruits on Verbal and Nonverbal Measures of Cognitive Ability

Vanderpool and Catano compared the cognitive ability of Native North Americans (Canadian First Nations members) and predominantly white Canadian Forces (CF) recruits on both verbal and nonverbal measures of cognitive ability. Previous research by Lynn and Vanhanen (2006) suggested that American Indians perform better on visual and spatial aspects of cognitive ability than on verbal measures. Based on these results, the authors of the current study explored whether the use of a nonverbal test could reduce the effect size between Native American and white applicants to less than 1 SD. The nonverbal measures showed much smaller differences (CFAT-SA, d = .30; SPM, d = .38) between First Nations members and predominantly white CF recruits. Results indicated that making employment decisions solely on verbal cognitive ability scores is likely to produce adverse impact against First Nations members when verbal ability has not been identified as a bona fide occupational requirement. The authors recommend caution when making employment decisions based on total scores on verbal cognitive measures and suggest that nonverbal measures of cognitive ability are more likely to produce fair employment decisions without sacrificing the utility gains of cognitive ability tests.
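The d values quoted above are Cohen's d, the standardized mean difference between two groups. As a reference point, here is a minimal implementation using the pooled standard deviation (the scores in the usage line are made up, not data from the study):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups,
    using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical score lists for two groups:
print(cohens_d([105, 110, 115, 120], [95, 100, 105, 110]))
```

On this scale, the article's nonverbal differences (d around .3 to .4) are well under the roughly 1 SD gap typically reported for verbal measures.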

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382613/abstract

CAT and Personality Testing: A Cautionary Note



Effects of Changed Item Order: A cautionary note to practitioners on jumping to computerized adaptive testing for personality assessment

Ortner investigated the effects of item order on computerized adaptive testing (CAT) delivery of the Eysenck Personality Profiler (EPP). CAT is a method for administering assessments that is tailored to each test taker: questions are successively selected based on previous responses, and the test taker’s individual trait level is iteratively estimated during the testing process. The author hypothesized that a questionnaire beginning with an item representing an extremely high trait level may cause test takers to agree less frequently with subsequent items than a questionnaire that begins with an item representing a low trait level. The author also hypothesized that participants would have longer response latencies when presented with an extreme or high trait level item at the beginning of the questionnaire. The EPP was administered to four groups using four different test versions: group 1 received the conventional item order, group 2 an adaptive form with the first item representing a medium trait level, group 3 an adaptive version with the first item representing a low trait level, and group 4 an adaptive form with the first item representing a high or extreme trait level. Results showed significant effects on the mean person parameters obtained for three scales: Manipulative, Hypochondriac, and Expressive. In other words, items representing a high trait level at the beginning of a questionnaire altered test takers’ performance on three of the seven personality scales. Results also indicated that item order may affect response latencies. This research has implications for practitioners and researchers, who should consider the effects of item order when administering CAT for personality assessment.
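The adaptive mechanics described above can be sketched in a few lines. This is a deliberately simplified illustration: the item locations are made up, and a crude step-halving update stands in for the maximum-likelihood trait estimation a real CAT engine (and the EPP study) would use:

```python
def run_cat(item_locations, respond, n_items=5, start=0.0):
    """Minimal adaptive loop: after each response, update the trait
    estimate and pick the unused item closest to it.
    respond(location) -> True if the test taker endorses the item."""
    theta, step = start, 1.0
    unused = list(item_locations)
    for _ in range(min(n_items, len(unused))):
        # Select the unused item whose location is nearest the current estimate.
        item = min(unused, key=lambda loc: abs(loc - theta))
        unused.remove(item)
        # Endorsing moves the estimate up; declining moves it down.
        theta += step if respond(item) else -step
        step /= 2  # shrink the adjustment as information accumulates
    return theta

# Simulated test taker who endorses any item located below their
# true trait level of 1.2.
estimate = run_cat([-2, -1, 0, 1, 2], respond=lambda loc: loc < 1.2)
print(estimate)
```

The sketch also makes Ortner's concern concrete: the starting point and the order in which items are reached shape the path of the estimate, which is why the first item's trait level can matter.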

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382609/abstract

Important Questions on the Construct Validation of Assessment Center Dimensions



Assessor Constructs in Use as the Missing Component in Validation of Assessment Center Dimensions: A critique and directions for research

Jones and Born critiqued the use of multitrait-multimethod (MTMM) approaches to measure assessment center (AC) dimension ratings and offered further recommendations for the construct validation of AC dimensions. The objective of the current research was to highlight the importance of construct definition and a priori theoretical descriptions of expected relations among rating dimensions, exercises, and dimension-exercise interactions. The authors attempted to answer the following questions: (1) What are ‘traits’ and ‘constructs,’ how do we define them, and how do they relate to AC dimensions? (2) What is the ‘method’ in ACs? (3) What does a correlation indicate about trait/method relationships? The results of the current research argue for clear articulation of AC dimensions in terms of trait permanence, specificity of constructs, and their interrelationships. Furthermore, the authors argue that exercises are not the only ‘method’ in ACs - other important aspects of AC methodology include raters, observation vs. evaluation, and the rating process. Finally, the authors argue that the application of MTMM analysis to ACs has limited value because it confounds methods with the object of study, relying on a single methodological process (within-exercise ratings) to evaluate two methodological processes (within-exercise and behavioral report ratings). The authors offer several suggestions on how assessors or subject matter experts can improve their ratings of AC dimensions. This research has implications for practitioners and how they arrive at construct judgments through AC exercises.

View full abstract/get the article at:

http://www3.interscience.wiley.com/journal/121382622/abstract
