Provoking behaviour: training roleplayers at assessment centres

Assessment days for evaluating the work-relevant behaviours of applicants or job incumbents often draw on actors to perform as difficult team members or curious clients in meeting simulations. A recent study has shown that these role-playing actors can be trained to effectively weave pre-written dialogue prompts into the improvised simulations. However, whether this helps the measurement of participant behaviours is less clear.

The study authors, Eveline Schollaert and Filip Lievens, gave 19 role-players training, which in one condition included explicit guidance on using behaviour-eliciting prompts during assessment exercises; for example, "Mention that you feel bad about it" to provoke behaviours relating to the dimension of interpersonal sensitivity. Such prompts are often provided in preparation materials, but how often role-players actually use them was unknown. The authors wondered whether role-players could realistically increase their prompt usage through training, or whether this is too much to ask of an actor in the thick of a dynamic interaction.

At a subsequent assessment centre, the role-players interacted in simulations with 233 students from Ghent University. Role-players with prompt training were able to incorporate four to five times more prompts than those without such training, an increase from about two prompts per exercise to 10-12.

More prompts ought to elicit more relevant behaviours, so the authors expected observers to get a better picture of true 'candidate' performance. But the evidence for this was mixed. In the high-prompt condition, pairs of raters watching the same role-play didn't agree any more closely on their ratings, suggesting the behaviours remained just as obscured as without prompts. That said, some ratings corresponded better with other measures you would expect to be related - for instance, interpersonal sensitivity correlated more strongly with an Agreeableness personality score collected before the assessment centre. But half of the predicted increases in correlation weren't observed.

Regarding their unsupported hypotheses, the authors wonder whether the rating assessors should also have been trained on prompt use, to encourage sensitivity to candidate reactions. I have additional concerns about the nature of the assessors (minimally trained master's students) used to draw conclusions about a professionalised domain. Nonetheless, this rare examination of role-player impact on face-to-face assessments suggests training can generate more dimension-focused contributions, which in turn may result in measurements with more predictive power.

Schollaert, E., & Lievens, F. (2011). The use of role-player prompts in assessment center exercises. International Journal of Selection and Assessment, 19(2), 190-197. DOI: 10.1111/j.1468-2389.2011.00546.x