
Can we get away with using lo-fi assessment to recruit for advanced positions?

In recruitment, the promise of comparable results for less effort is understandably tempting. It's offered by offsetting costly assessments with alternative measures that use pencils, screens and standardised questions instead of expert assessors. However, with some sources suggesting a bad hire can cost twice a position's annual salary or more, the stakes are high. A new study kicks some assessment tyres to see whether that bargain is actually a banger.

Researchers Filip Lievens and Fiona Patterson looked at recruitment into advanced roles, which typically seek candidates with the skills and knowledge to hit the ground running. They took their sample of 196 successful candidates from the UK selection process for General Practitioners (GPs) in medicine. To get this far, a candidate has completed two years of basic training on top of up to six years of prior education, by which stage selectors are after someone ready to go, not a future 'bright star'. Lievens and Patterson were specifically interested in how much assessment fidelity matters, meaning the extent to which the assessment task and context mirror those of the actual job.

Three types of assessment were involved, all designed by experienced doctors with assistance from assessment psychologists. Written tests assessed declarative knowledge through diagnostic dilemmas such as “a 75-year-old man, who is a heavy smoker, with a blood pressure of 170/105, complains of floaters in the left eye”. Assessment centre (AC) simulations, meanwhile, probed skills and behaviours in an open-ended, live situation, such as emulating a patient consultation; these tend to be more powerful predictors of job performance, but are costly.

The third was the situational judgement test (SJT), a pencil-and-paper assessment in which candidates select actions in response to situations, such as a senior colleague making a non-ideal prescription. SJTs are considered by many to be “low-fidelity simulations”: they lose the open-ended, embodied qualities of live exercises, but hang on to the what-would-you-do-if? focus. The authors were interested in whether the SJT's predictive power would be in the same class as the AC simulations, or mirror the more modest showing of its fellow pencil-and-paper measure, the knowledge test.

The data showed that all assessments were useful predictors of job performance, as measured by supervisors after a year spent in role. Both types of simulation - AC and SJT - provided additional insight over and above that given by the rather disembodied knowledge test – each explaining about a further 6% of the variance. But in comparison with each other, the simulations were difficult to tell apart, with no significant difference in how well they predicted performance.

It should be noted that the AC simulations did capture some variance over and above the SJT, notably on non-cognitive aspects of job performance such as empathy – which matters, as these areas are less trainable than clinical expertise. However, this extra insight was fairly modest, amounting to just a few percentage points of variance. More expensive AC assessments can provide additional value, then, but the study suggests that, at least in this recruitment domain, you can get away with a loss of fidelity if the assessments are appropriately designed.
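
For readers curious what "explaining a further 6% of the variance" means in practice, here is a minimal sketch of the hierarchical-regression logic behind incremental validity. The data below are simulated purely for illustration – they are not the study's scores, and the variable names are invented.

```python
import numpy as np

def r_squared(predictors, outcome):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1 - residuals.var() / outcome.var()

# Simulated scores for illustration only (not the study's data).
rng = np.random.default_rng(0)
n = 196
knowledge = rng.normal(size=n)                                # written knowledge test
sjt = 0.5 * knowledge + rng.normal(size=n)                    # situational judgement test
job_perf = 0.4 * knowledge + 0.3 * sjt + rng.normal(size=n)   # supervisor rating after a year

r2_base = r_squared(knowledge.reshape(-1, 1), job_perf)
r2_full = r_squared(np.column_stack([knowledge, sjt]), job_perf)

print(f"Knowledge test alone:  R^2 = {r2_base:.3f}")
print(f"After adding the SJT:  delta R^2 = {r2_full - r2_base:.3f}")
```

The simulated delta R² won't reproduce the paper's roughly 6% figure; the point is simply to show how "additional variance explained" is calculated by comparing R² before and after a predictor is added to the model.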

Lievens, F., & Patterson, F. (2011). The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high-stakes selection. Journal of Applied Psychology, 96(5), 927-940. DOI: 10.1037/a0023496

Are job selection methods actually measuring 'ability to identify criteria'?



While we know that modern selection procedures such as ability tests and structured interviews are successful in predicting job performance, it's much less clear how they pull off those predictions. The occupational psychology process – and thus our belief system of how things work – is essentially: a) identify what the job needs; b) distil this into measurable dimensions; c) assess performance on those dimensions. But a recent review article by Martin Kleinmann and colleagues suggests that in some cases we may largely be assessing something else: the “ability to identify criteria”.



The review unpacks a field of research that recognises that people aren't passive when being assessed. Candidates try to squirrel out what they are being asked to do, or even who they are being asked to be, and funnel their energies towards that. When the situation is ambiguous – a so-called “weak” situation – those better at squirrelling, meaning those with high “ability to identify criteria” (ATIC), will put on the right performance, and those who are worse will put on Peer Gynt for the panto crowd.



Some people are better than others at guessing what an assessment is measuring, so ATIC is a real phenomenon in its own right. The research shows that higher ATIC scores are associated with higher overall assessment performance, and with better scores specifically on the dimensions candidates correctly guess. ATIC clearly has a 'figuring-out' element, so we might suspect its effects are an artefact of a strong association with cognitive ability, itself linked to better performance in many types of assessment. But if anything the evidence works the other way: ATIC has an effect over and above cognitive ability, and it seems possible that cognitive ability boosts assessment scores mainly through its contribution to the ATIC effect.



In a recent study, ATIC, assessment performance, and candidates' job performance were examined within a single selection scenario. Remarkably, it found that job performance correlated more strongly with ATIC than with the assessment scores themselves. In fact, the relationship between assessment scores and job performance became non-significant after controlling for ATIC. This raises the provocative possibility that the main reason assessments are useful is as a window onto ATIC, which the authors consider “the cognitive component of social competence in selection situations”. After all, many modern jobs, particularly managerial ones, depend upon figuring out what a social situation demands of you.
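
To make "controlling for ATIC" concrete, here is a minimal sketch of a partial correlation on simulated data; the numbers and variable names are invented for illustration and bear no relation to the study's dataset.

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing the control variable out of both."""
    def residualise(v):
        C = np.column_stack([np.ones(len(control)), control])
        beta, *_ = np.linalg.lstsq(C, v, rcond=None)
        return v - C @ beta
    return np.corrcoef(residualise(x), residualise(y))[0, 1]

# Simulated data in which ATIC drives both assessment scores and job performance.
rng = np.random.default_rng(1)
n = 150
atic = rng.normal(size=n)
assessment = 0.6 * atic + rng.normal(scale=0.8, size=n)
job_perf = 0.5 * atic + 0.1 * assessment + rng.normal(scale=0.8, size=n)

print("Raw correlation, assessment vs job performance: "
      f"{np.corrcoef(assessment, job_perf)[0, 1]:.2f}")
print("Partial correlation, controlling for ATIC:      "
      f"{partial_corr(assessment, job_perf, atic):.2f}")
```

If the partial correlation collapses towards zero while the raw correlation is sizeable, the variance shared with ATIC is doing most of the predictive work – the pattern the study reports.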



So what to make of this, especially if you are an assessment practitioner? We must be realistic about what we are really assessing, which in no small part is 'figuring out the rules of the game'. If you're unhappy about that, there's a simple way to wipe out the ATIC effect: make the assessed dimensions transparent, turning the weak situation into a strong, unambiguous one. Losing the contamination of ATIC leads to more accurate measures of the individual dimensions you decided were important. But your overall prediction of job performance will be weaker, because you've lost the ATIC factor, which does genuinely seem to matter. And while no-one is suggesting that it is all that matters in the job, it may be the aspect of work that assessments are best positioned to pick up.



Kleinmann, M., Ingold, P., Lievens, F., Jansen, A., Melchers, K., & Konig, C. (2011). A different look at why selection procedures work: The role of candidates' ability to identify criteria. Organizational Psychology Review, 1(2), 128-146. DOI: 10.1177/2041386610387000