Foldes, Duehr, and Ones conducted a large-scale meta-analysis to examine the magnitude of racial group differences on measures of personality and whether these differences are likely to result in adverse impact.
Due to the paucity of research in this area, this study provides an important contribution to the literature, extending existing work in several ways. First, the researchers included understudied racial groups such as Asian Americans and American Indians and used a reliable estimate of the magnitude of racial group differences (d-values). They also examined differences at both the broad factor and narrow facet levels. Data from 44 different personality assessments were used, and more than 700 effect sizes contributed to the database. Results suggested that, in general, racial group differences were negligible and unlikely to result in adverse impact. However, there is some concern about adverse impact for certain groups and traits, depending on characteristics of the selection scenario such as the trait being measured, the effect size, the composition of the applicant pool, and the selection ratio. The authors present a summary of potential trait-group combinations that may result in adverse impact. Specifically, adverse impact could be a concern for Blacks when Emotional Stability, Anxiety, Extraversion, and Sociability are measured. There is some concern for Asians when Emotional Stability, Even-Tempered, Extraversion, Dominance, Sociability, and Conscientiousness are measured, and for Hispanics when Sociability is measured. For American Indians, there is some concern when Emotional Stability and Extraversion are measured, and for Whites there is some concern for Conscientiousness and its facets (Achievement, Cautiousness, Order), Extraversion, and Self-Esteem. This research has implications for the use of personality assessments in selection: practitioners should carefully consider the job-related traits being measured, the composition of their applicant pools, and their selection ratios.
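As a rough illustration of why the same d-value can be harmless or problematic depending on the selection scenario, here is a minimal sketch (not from the article; the numbers are made up) that converts a standardized group difference into an expected adverse-impact ratio, assuming normal score distributions and a single top-down cutoff:

```python
from statistics import NormalDist  # Python 3.8+

nd = NormalDist()

def adverse_impact_ratio(d, selection_ratio):
    """Expected minority/majority selection-rate ratio when scores are normal,
    groups differ by d pooled SDs, and one cutoff yields the majority pass rate."""
    cut = nd.inv_cdf(1 - selection_ratio)   # cutoff implied by majority pass rate
    minority_rate = 1 - nd.cdf(cut + d)     # minority distribution sits d SDs lower
    return minority_rate / selection_ratio

# A small d barely moves the ratio, but the same d bites harder
# when the selection ratio shrinks.
for d in (0.1, 0.3, 0.5):
    for sr in (0.5, 0.1):
        print(f"d={d}  SR={sr}  AIR={adverse_impact_ratio(d, sr):.2f}")
```

The four-fifths rule of thumb flags ratios below 0.80, so even a moderate d can cross that line once the selection ratio is small.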
View full abstract/get the article at:
http://www.ingentaconnect.com/search/article?title=group+differences+in+personality&title_type=tka&year_from=1998&year_to=2008&database=1&pageSize=20&index=4
Read more!
Group differences in personality – Meta-analyses comparing five U.S. racial groups
Submission Deadline for EAWOP Conference Approaches
October 3, 2008 is the proposal submission deadline for the 14th European Congress of Work and Organizational Psychology to be held in Santiago de Compostela, Spain on May 13-16, 2009. Organized under the auspices of the European Association of Work and Organizational Psychology (EAWOP), the conference has the theme "Developing People in 21st Century Organizations: Global and Local Perspectives."
Read more!
Article on Emotional Intelligence
The lead article (abstract) of the September 2008 edition of the American Psychologist is a paper by John Mayer, Peter Salovey, and David Caruso reviewing the construct of emotional intelligence and the controversy and confusion that have emerged around it. As the first to propose the idea of emotional intelligence, the authors explore some of the complexities that have arisen in relation to the concept since their original work in 1990 and make suggestions for future work in the area.
Read more!
Theory of Geographic Differences in Distribution of Personality Traits
In a recent article (here's the abstract) published in Perspectives on Psychological Science, Peter Rentfrow and colleagues outline their theory about how geographic differences in personality develop and are maintained. Rentfrow's website also has links to a wide variety of papers likely to be of interest, exploring the relationship between personality and topics ranging from voting behavior and music preference to matchmaking.
Read more!
International Test Commission Presentations
Many of the presentations from the International Test Commission Conference, held in July 2008 in Liverpool UK, are now available online.
Read more!
Research Update
- Validity and/or Reducing Adverse Impact – Still a Complex Balancing Act
- A New Source of Bias in the Performance Appraisal Process?
- Knowledge and Beliefs about Assessment Tools and Techniques: UK Practitioners' Perspective
- Personality Plays an Important Role in How to Handle Retail Customers
- Are Nonverbal Measures of Cognitive Ability more Likely to Produce Fair Employment Decisions?
- CAT and Personality Testing: A Cautionary Note
- Important Questions on the Construct Validation of Assessment Center Dimensions
Validity and/or Reducing Adverse Impact – Still a Complex Balancing Act
This issue contains an interesting and thoughtful exchange on the concept of Pareto-optimal predictor composites as a potential approach to alleviating (not to be confused with resolving) the validity-diversity dilemma. Pareto-optimal composites refer to a specific approach for assigning weights to individual predictors in a selection composite. This approach, which is primarily contrasted with regression weighting, suggests there is value in considering alternative weighting schemes that may provide a more balanced trade-off between validity and adverse impact (AI). De Corte, Lievens, and Sackett essentially argue that the function of regression-based composites is to maximize validity. At the opposite end of the spectrum, one could adopt a weighting approach that results in the highest possible adverse impact ratio (AIR). Pareto-optimal weights reflect a more balanced tradeoff between these two extremes, allowing organizations to ask “what reduction in validity are we willing to accept in order to increase diversity?” This paper extends De Corte and colleagues' 2007 work, which introduced the concept of Pareto-optimal tradeoffs. The authors attempt to demonstrate that, in contrast to findings reported by Potosky, Bobko, and Roth (2005), adding non-cognitive measures to a cognitive ability test can result in more than a modest decrease in adverse impact - if one is willing to consider alternatives to the regression weighting method. In two separate commentaries, Potosky and her colleagues, and Kehoe, respond to De Corte et al.'s description of the Pareto-optimal concept, their findings, and the implications and interpretability of these findings for organizations.
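The logic of Pareto-optimal weighting can be sketched in a few lines. The validities, subgroup d's, and predictor intercorrelation below are illustrative assumptions, not figures from the article; the point is that among all weightings of a two-predictor composite, only some are Pareto-optimal (no other weighting offers both higher validity and a smaller subgroup difference):

```python
from math import sqrt

def composite(w, r1, r2, d1, d2, rho):
    """Validity and subgroup d for weights (w, 1-w) on two standardized predictors."""
    sd = sqrt(w**2 + (1 - w)**2 + 2 * w * (1 - w) * rho)  # composite SD
    return (w * r1 + (1 - w) * r2) / sd, (w * d1 + (1 - w) * d2) / sd

# Illustrative values: cognitive test r=.50, d=1.0; non-cognitive measure
# r=.30, d=.20; predictor intercorrelation .10.
points = [(w / 10,) + composite(w / 10, .50, .30, 1.0, .20, .10)
          for w in range(11)]

# Keep a weighting only if no other weighting beats it on BOTH criteria.
pareto = [p for p in points
          if not any(q[1] > p[1] and q[2] < p[2] for q in points if q is not p)]
for w, v, d in pareto:
    print(f"w_cog={w:.1f}  validity={v:.3f}  subgroup d={d:.2f}")
```

Notably, with a low predictor intercorrelation the pure-cognitive weighting (w_cog=1.0) is itself dominated: some mixed composites have both higher validity and a smaller subgroup difference, which is the phenomenon De Corte and colleagues exploit.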
View full abstract/Get the article at:
http://www3.interscience.wiley.com/journal/121382614/abstract
A New Source of Bias in the Performance Appraisal Process?
The Influence of a Manager’s Own Performance Appraisal on the Evaluation of Others
Latham, Budworth, Yanar, and Whyte examine the influence of managers' most recent performance appraisals on their subsequent appraisals of others. They hypothesized that a manager who receives a favorable performance appraisal will subsequently evaluate another person more positively than a manager who receives an unfavorable appraisal. Four separate studies were conducted to test the hypothesis: a case study, a lab experiment, and two field studies. In the lab setting, managers received hypothetical feedback and were then asked to rate an individual's videotaped performance. The field studies were conducted with managers in a manufacturing plant in Canada and in a retail organization in Turkey to determine whether the lab results would generalize to these settings. Findings from the field supported the results of the case study and the lab study, and the hypothesis that the performance evaluations managers received predicted their performance evaluations of their employees. These findings support the anchoring and adjustment hypothesis, which states that people look for a guidepost or anchor when making estimates of value under uncertainty, and that these anchors are often based on irrelevant information. This research supports the importance of rater training and the use of highly structured behavioral appraisal instruments as potential strategies for reducing bias in performance ratings.
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382612/abstract
Knowledge and Beliefs about Assessment Tools and Techniques: UK Practitioners' Perspective
HR Professionals' Beliefs About, and Knowledge of, Assessment Techniques and Psychometric Tests
Furnham examines knowledge and beliefs about various assessment tests and techniques among 255 HR and related professionals in the UK. Respondents provided information on a number of assessment methods, including interviews, references, assessment centers, and “personal hunch.” They were also asked to share their knowledge of 21 personality tests and 19 cognitive ability tests. Practitioners considered assessment centers, cognitive ability tests, and work samples to be the most valid techniques. The most widely known and used personality tests were the 16PF and the MBTI. The Belbin Team Role Inventory and the FIRO-B were also widely recognized and used. The NEO-PI-R and the 16PF were rated as most useful for selection, while the MBTI and FIRO-B were considered most useful from a development perspective. The most widely known and used cognitive ability tests were the Graduate and Managerial Assessment and the Watson-Glaser Critical Thinking Appraisal. Furnham considers the implications of these results for educating and informing practitioners.
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382621/abstract
Personality Plays an Important Role in How to Handle Retail Customers
Effects of Personality Characteristics on Knowledge, Skill, and Performance in Servicing Retail Customers
Motowidlo, Brownlee, and Schmit investigated the knowledge, skill, and personality characteristics associated with effective customer service. Following from research indicating that ability affects performance primarily through its effects on technical knowledge and skill, the authors tested whether the effects of extraversion, agreeableness, and neuroticism on customer service performance are mediated through interpersonal knowledge and skill. The authors administered a situational interview to retail store associates to measure customer service knowledge and a role-play simulation to measure customer service skill; the NEO-FFI and the Wonderlic Personnel Test were used to measure personality and cognitive ability, respectively. Job performance was measured using supervisor ratings. Results from 140 associates indicated that extraversion, agreeableness, and neuroticism explain incremental variance in customer service knowledge beyond that accounted for by ability, experience, and conscientiousness. They also found that the effects of personality, ability, and experience on customer service performance all funnel through customer service knowledge and skill. There was also a moderating effect for conscientiousness, such that customer service knowledge predicts performance best for those who are highly conscientious. Results suggest a causal model that could be tested in future research. These results have implications for hiring practices, lending support for matching knowledge content with interpersonally oriented personality traits.
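The "funneling" claim above is a mediation hypothesis: the trait-performance correlation should largely vanish once the mediator (knowledge/skill) is held constant. A minimal sketch of that check on simulated data (the variables, effect sizes, and full-mediation structure here are invented for illustration, not the study's data):

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

def partial_r(rxy, rxm, rmy):
    """Correlation of x and y after partialling out the mediator m."""
    return (rxy - rxm * rmy) / sqrt((1 - rxm ** 2) * (1 - rmy ** 2))

rng = random.Random(1)
n = 5000
x = [rng.gauss(0, 1) for _ in range(n)]          # e.g., extraversion
m = [xi * 0.7 + rng.gauss(0, 0.5) for xi in x]   # knowledge, driven by the trait
y = [mi * 0.8 + rng.gauss(0, 0.5) for mi in m]   # performance, driven only by knowledge

rxy, rxm, rmy = pearson(x, y), pearson(x, m), pearson(m, y)
print(f"r(x,y)={rxy:.2f}  partial r(x,y|m)={partial_r(rxy, rxm, rmy):.2f}")
```

Under full mediation the raw trait-performance correlation is sizable while the partial correlation is near zero, which is the pattern the authors' model implies.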
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382617/abstract
Are Nonverbal Measures of Cognitive Ability more Likely to Produce Fair Employment Decisions?
Comparing the Performance of Native North Americans and Predominantly White Military Recruits on Verbal and Nonverbal Measures of Cognitive Ability
Vanderpool and Catano compared the cognitive ability of Native North Americans (Canadian First Nations members) and predominantly white Canadian Forces (CF) recruits on both verbal and nonverbal measures of cognitive ability. Previous research by Lynn and Vanhanen (2006) suggested that American Indians perform better on visual and spatial aspects of cognitive ability than on verbal measures. Based on these results, the authors of the current study explored whether the use of a nonverbal test could reduce the effect size between Native Americans and white applicants to less than 1 SD. The nonverbal measures showed much smaller differences (CFAT-SA, d = .30; SPM, d = .38) between First Nations members and predominantly white CF recruits. Results indicated that making employment decisions solely on verbal cognitive ability scores is likely to produce adverse impact against First Nations members when verbal ability has not been identified as a bona fide occupational requirement. The authors recommend caution when making employment decisions based on total scores on verbal cognitive measures and suggest that nonverbal measures of cognitive ability are more likely to produce fair employment decisions without sacrificing the utility gains of cognitive ability tests.
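The d-values being compared here are standardized mean differences. For readers who want to see how such values arise, a short sketch with hypothetical score summaries (the numbers are invented, not the study's data):

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical summaries: a verbal subtest with a large group gap
# vs. a nonverbal subtest with a much smaller one.
print(f"verbal d    = {cohens_d(102, 15, 200, 88, 15, 120):.2f}")
print(f"nonverbal d = {cohens_d(101, 15, 200, 96, 15, 120):.2f}")
```

A d of .30-.38 on the nonverbal measures, versus the roughly 1 SD gaps often reported for verbal measures, is what drives the authors' fairness argument.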
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382613/abstract
CAT and Personality Testing: A Cautionary Note
Effects of Changed Item Order: A cautionary note to practitioners on jumping to computerized adaptive testing for personality assessment
Ortner investigated the effects of item order on computerized adaptive testing (CAT) delivery of the Eysenck Personality Profiler (EPP). CAT is a method for administering assessments that is tailored to each test taker, in that questions are successively selected based on previous responses; the test taker's trait level is iteratively estimated during the testing process. The author hypothesized that a questionnaire beginning with an item representing an extremely high trait level may cause test takers to agree less frequently with subsequent items than a questionnaire beginning with an item representing a low trait level. The author also hypothesized that participants would have longer response latencies when presented with an extreme trait level at the beginning of the questionnaire. The EPP was administered to four groups using four different test versions: group 1 received the conventional item order, group 2 an adaptive form whose first item represented a medium trait level, group 3 an adaptive version whose first item represented a low trait level, and group 4 an adaptive form whose first item represented a high or extreme trait level. Results showed significant item-order effects on the mean person parameters for three scales: Manipulative, Hypochondriac, and Expressive. In other words, items representing a high trait level at the beginning of a questionnaire altered test takers' performance on three of the seven personality scales. Results also indicated that item order may lengthen response latencies. This research has implications for practitioners and researchers, who should consider the effects of item order when administering CAT for personality assessment.
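The select-respond-re-estimate loop described above can be sketched with a toy simulation. The Rasch-style response model, item bank, and grid-search estimation here are illustrative assumptions, not the EPP's actual scoring:

```python
import math
import random

def rasch_p(theta, b):
    """Probability of endorsing an item located at b for a person at trait level theta."""
    return 1 / (1 + math.exp(-(theta - b)))

def cat_session(true_theta, item_bank, n_items=8, seed=0):
    """Minimal adaptive loop: give the item closest to the current trait estimate,
    simulate a response, then re-estimate theta by a coarse grid-search MLE."""
    rng = random.Random(seed)
    available = sorted(item_bank)
    theta_hat, responses = 0.0, []
    grid = [g / 10 for g in range(-40, 41)]
    for _ in range(n_items):
        b = min(available, key=lambda x: abs(x - theta_hat))  # most informative item
        available.remove(b)
        endorsed = rng.random() < rasch_p(true_theta, b)
        responses.append((b, endorsed))
        theta_hat = max(grid, key=lambda t: sum(
            math.log(rasch_p(t, bi)) if ri else math.log(1 - rasch_p(t, bi))
            for bi, ri in responses))
    return theta_hat

estimate = cat_session(1.0, [b / 2 for b in range(-6, 7)])
print(f"estimated trait level: {estimate:.1f}")
```

Ortner's point is that the starting item in such a loop is not neutral: an extreme first item can shift the responses that feed every subsequent estimate.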
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382609/abstract
Important Questions on the Construct Validation of Assessment Center Dimensions
Assessor Constructs in Use as the Missing Component in Validation of Assessment Center Dimensions: A critique and directions for research
Jones and Born critique the use of multitrait-multimethod (MTMM) approaches to evaluating assessment center (AC) dimension ratings and offer recommendations for the construct validation of AC dimensions. The objective of the paper is to highlight the importance of construct definition and a priori theoretical descriptions of expected relations among rating dimensions, exercises, and dimension-exercise interactions. The authors attempt to answer the following questions: (1) What are ‘traits’ and ‘constructs,’ how do we define them, and how do they relate to AC dimensions? (2) What is the ‘method’ in ACs? and (3) What does a correlation indicate about trait/method relationships? They argue for clear articulation of AC dimensions in terms of trait permanence, specificity of constructs, and their interrelationships. Furthermore, the authors argue that exercises are not the only ‘method’ in ACs - other important aspects of AC methodology include raters, observation vs. evaluation, and the rating process. Finally, the authors argue that applying MTMM analysis to ACs has limited value because it confounds methods with the object of study, relying on a single methodological process (within-exercise ratings) to evaluate two methodological processes (within-exercise and behavioral report ratings). The authors offer several suggestions on how assessors or subject matter experts can improve their ratings of AC dimensions. This research has implications for how practitioners arrive at construct judgments through AC exercises.
View full abstract/get the article at:
http://www3.interscience.wiley.com/journal/121382622/abstract
Read more!