Can we get away with using lo-fi assessment to recruit for advanced positions?

In recruitment, the promise of comparable results for less effort is understandably tempting: offset costly assessments with alternative measures that use pencils, screens and standardised questions instead of expert assessors. However, with some sources suggesting a bad hire can cost twice that position's annual salary or more, the stakes are high. A new study kicks the tyres to see whether that bargain is actually a banger.

Researchers Filip Lievens and Fiona Patterson looked at recruitment into advanced roles, which typically seek candidates with the skills and knowledge to hit the ground running. They took their sample of 196 successful candidates from the UK selection process for General Practitioners (GPs) in medicine. To get to this stage, a candidate has completed two years of basic training and up to six years of prior education, by which point recruiters are after someone ready to go, not a future 'bright star'. Lievens and Patterson were specifically interested in how much assessment fidelity matters, meaning the extent to which the assessment task and context mirror those of the actual job.

Three types of assessment were involved, all designed by experienced doctors with assistance from assessment psychologists. Written tests assessed declarative knowledge through diagnostic dilemmas such as “a 75-year-old man, who is a heavy smoker, with a blood pressure of 170/105, complains of floaters in the left eye”. Assessment centre (AC) simulations, meanwhile, probed skills and behaviours in an open-ended, live situation, such as an emulated patient consultation; these tend to be more powerful predictors of job performance, but are costly.

The third was the situational judgement test (SJT), a pencil and paper assessment in which candidates select actions in response to described situations, such as a senior colleague making a non-ideal prescription. SJTs are considered by many to be “low-fidelity simulations”: they lose the open-ended, embodied qualities of a live simulation but hang on to the what-would-you-do-if? focus. The authors were interested in whether the SJT's predictive power would be in the same class as the AC simulations, or whether it would mirror the more modest validity of the other pencil and paper measure, the knowledge test.

The data showed that all three assessments were useful predictors of job performance, as measured by supervisors after a year in the role. Both types of simulation (AC and SJT) provided additional insight over and above that given by the rather disembodied knowledge test, each explaining about a further 6% of the variance. But compared with each other, the simulations were difficult to tell apart, with no significant difference in how well they predicted performance.
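For readers curious what “a further 6% of the variance” means in practice, the standard approach is hierarchical regression: fit a model predicting performance from the knowledge test alone, add the simulation score, and compare the two R² values. Here is a minimal sketch in Python using simulated data; the variable names, coefficients and generated numbers are illustrative assumptions, not figures from the paper, and only the method mirrors the study's analysis:

```python
# Illustrative sketch of incremental validity via hierarchical regression.
# All data below are simulated; only the method reflects the study's approach.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 196  # sample size matching the study's cohort

# Simulated predictor scores, plus a supervisor rating built from them and noise
knowledge = rng.normal(size=n)
sjt = 0.5 * knowledge + rng.normal(size=n)  # SJT correlates with knowledge
performance = 0.4 * knowledge + 0.3 * sjt + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit."""
    return LinearRegression().fit(X, y).score(X, y)

# Step 1: knowledge test alone. Step 2: knowledge test plus SJT score.
r2_step1 = r_squared(knowledge.reshape(-1, 1), performance)
r2_step2 = r_squared(np.column_stack([knowledge, sjt]), performance)

# Incremental validity: extra variance explained beyond the knowledge test
print(f"delta R^2 = {r2_step2 - r2_step1:.3f}")
```

The “further 6%” reported by the study corresponds to a delta R² of about 0.06 in this kind of two-step comparison.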

It should be noted that the AC simulations did capture some variance over and above the SJT, notably in non-cognitive aspects of job performance such as empathy; this matters because such qualities are less trainable than clinical expertise. However, this extra insight was fairly modest, just a few percentage points of variance. More expensive AC assessments can provide additional value, then, but the study suggests that, at least in this recruitment domain, you can get away with a loss of fidelity if the assessments are appropriately designed.

Lievens, F., & Patterson, F. (2011). The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high-stakes selection. Journal of Applied Psychology, 96(5), 927-940. DOI: 10.1037/a0023496