
Talent analytics and personality tests don’t really help pick the right employee


Psychometric tests such as the Myers-Briggs are widely used in hiring as definitive indicators of personality. The truth is far from this.

Big Data — the voluminous data sets mined from a variety of sources, including log files, the web, and social media — is everywhere. Algorithms used to process such data are also ubiquitous and are often thought to be infallible.

These algorithms claim to tell us not only about markets, but also about human behaviour. Mathematicians and statisticians use models to understand and influence consumer behaviour, trying to decode what we desire, what we think of certain brands, our spending habits, etc.

Algorithms are used to analyse and influence voting behaviour, as was the case in the US presidential election in 2016. In the healthcare industry, they can predict diseases and epidemics, reduce the chances of preventable deaths, and more. They can also predict crimes, identify who will do well in schools/colleges, and predict performance of employees.

But are these and other models using Big Data truly fail-safe? Not really, because Big Data, like everything else, comes with its own set of limitations.

All algorithms are designed by humans, and so we tend to feed our own biases into them. This often leads to erroneous and potentially disastrous consequences.




Cost-effective hiring?

Consider the example of psychometric tests used for hiring decisions in organisations. Talent analytics is one way in which organisations have started using Big Data to assess candidates for jobs, increase engagement, and carry out other human resources activities.

Big organisations looking to hire as many people as cost-effectively as possible tend to administer various available psychometric tests. These mainly include personality tests, in which applicants respond to statements like “I see myself as a tense person” and “I see myself as depressed”.

The assumption when using such tests to make hiring decisions is that certain personality traits lead to success at various jobs. However, the reality is far from this.

Personality has very little to do with job performance. Setting aside the issue that test takers may fake their responses on personality tests, factors like motivation, job environment, and competencies could be better predictors of job performance.

Unfair, inaccurate

The accuracy of these tests is highly questionable. For example, several tests measure an applicant’s preference for working alone, but the results are often interpreted as the applicant being unwilling to work in a team.

The Myers-Briggs test has been used this way in the context of hiring, despite overwhelming research pointing to its inaccuracy and inconsistency.

Furthermore, such tests need to be administered and interpreted fairly and accurately in the context of the role, requiring a trained professional who is also knowledgeable about the job cluster.

These tests could also increase unfairness and inequality. The algorithms that measure personality could, for example, be used to identify and flag applicants with mental health issues such as anxiety disorder, causing them to be rejected on mental health grounds.

Thus, an algorithm meant to be scientific and objective furthers human bias and discrimination. Such tests are relatively cheap, especially if administered in bulk, and can be used to disqualify applicants.

The good and the bad

Another assumption made when using personality tests such as the Big Five Inventory, or its variations, is that some traits are ‘good’ and others are ‘bad’.

Neuroticism is often thought of as a ‘bad’ trait and has often been linked to poor job performance. However, moderate amounts of neuroticism are also linked to ‘good’ performance. That is, being extremely relaxed or extremely anxious could impede job performance, but a moderate amount of neuroticism could make one more vigilant and cautious, keeping the quality of one’s work in check.

Similarly, high conscientiousness is thought of as a ‘good’ trait. But extremely high conscientiousness could make one overly tenacious and single-minded, and therefore maladaptive in a job role demanding flexibility. In moderate amounts, conscientiousness could be related to efficiency and ambitiousness, which could be linked to good performance.




Success profile

Owing to the sheer volume of Big Data, psychometric tests and other models for hiring are continuously advancing. This data could be turned into insights for use by potential employers.

Companies often use publicly available data from websites like GitHub and Twitter to create a hiring profile for potential candidates. But once a hiring decision is made, there is no data that tells employers about the successes of rejected candidates, increasing false negatives. If a candidate is hired, there is no incentive to check whether someone else might have been a better ‘fit’, and hence both the insight and the model go uncorrected.

Moreover, prediction is based on data about the past, which means that a limited section of the population is favoured. The chances of success for men, especially those from more privileged classes, would be rated higher than for any other demographic, propagating bias once again.

Big Data is aimed at providing insights based on ‘hidden’ patterns. If the data shows an applicant is high on neuroticism, the optimal use of that insight would be to make the workplace more conducive for them rather than rejecting them outright. The lack of standardisation across cultures poses many other problems with such tests. And if a test does not guarantee employee retention or the selection of significantly better candidates to any meaningful degree, it is effectively useless.

The problem, however, arises when we believe that conclusions based on Big Data provide us with the objective truth, and that numbers speak for themselves.

Numbers only say what we want them to say. Big Data will not help if the models used are fundamentally flawed, like the ones used for personnel selection. Hidden biases and errors during the collection of data, or during analysis and interpretation, would still render all the data pointless.

Arathy Puthillam is a Research Assistant at the Department of Psychology at Monk Prayogshala.
