Artificial intelligence is much-hyped and sometimes poorly understood. In 2016 alone, funding for AI firms rose by $5bn, so it’s little wonder that the technology is attracting so much attention.
Although it’s clear that AI is going to overturn the manual processes involved in finding and hiring talent, it’s less clear how exactly the technology will achieve this. For many recruiters and hiring managers, the promise of AI seems a long way from the daily reality of phone calls, emails and interviews with candidates.
At its most fundamental level, recruitment is a simple matchmaking function between employers and job candidates. Understood in this way, many recruitment processes are already performed by AI.
Take the example of job boards. Many now use machine learning to process data held in CVs uploaded by visitors to their site. In turn, these visitors are served jobs relevant to their skills and experience as soon as these postings appear on the site.
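As a rough illustration of how such matching might work under the hood (job boards' actual systems are proprietary; this is a minimal bag-of-words sketch with invented CV and job texts, not any real product's algorithm):

```python
import math
from collections import Counter

def vectorise(text):
    # Simple bag-of-words term frequencies; real systems use far richer features
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cv = "python developer with machine learning and data analysis experience"
jobs = {
    "ML Engineer": "machine learning engineer python data pipelines",
    "Office Manager": "office manager diary management reception duties",
}

# Rank postings by similarity to the uploaded CV, most relevant first
ranked = sorted(jobs, key=lambda j: cosine(vectorise(cv), vectorise(jobs[j])), reverse=True)
print(ranked[0])
```

Even this toy version captures the core idea: the moment a new posting appears, it can be scored against every stored CV and served to the best-matching candidates with no human in the loop.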
If you had told someone twenty years ago that such a matching mechanism existed, they might have concluded that computers had solved recruitment, and in the process made humans irrelevant.
It’s true that humans no longer have to sift through data held in physical folders or held online as private files. Much of this is publicly available and takes just a few seconds to find, if you have the right software.
But in reality, intelligent search has formed only a small part of the recruitment function. Although AI can tell us who's available for a job, it won't necessarily test for personality fit or read body language accurately. Nor will it always weigh the relevance of data well, or even judge whether that data is accurate.
What’s missing is what we now call ‘the human touch’. I say ‘now’ because we may soon not call it this. Not because we’ll call it the machine’s touch, but because we’ll stop boasting about things we think we’re good at if machines get better at those things than us.
But for the time being we need people to call a candidate’s bluff and instinctively understand what a hiring manager is looking for when their brief is a bit woolly. For now these people are called recruiters, but if the machine ever leads this practice and humans are required only to oversee its proper functioning, it might be more appropriate to call them technicians.
We can't easily predict exactly which future scenarios AI will deliver, but we can be sure AI-driven technologies will arrive slowly at first, before mushrooming very quickly. All disruptive technologies follow this pattern. Electric cars are one of the latest examples, with Tesla's production of its Model S going through growing pains as the company irons out the technical difficulties involved in meeting mass demand.
At the moment, developers of AI-driven recruitment technologies are also working through the difficulties of serving customers relevant, targeted job ads at scale.
While technologies may not seem fully ‘intelligent’ at the moment, they will become prevalent for two main reasons. Firstly, the accessibility of data and options for data sharing are growing exponentially as companies share information through APIs. Secondly, chipsets are becoming far more powerful, and not just at big tech firms’ server centres.
AI chips in mobile phones, such as the Kirin 970 in the Huawei Mate 10, will get to know their users' speech and behaviour patterns, enabling those users to find jobs best suited to their habits. Companies will also benefit from this data, and will be able to target users more effectively with job ads.
However, this level of data collection is itself a challenge. Companies will need to collect and use data in line with incoming EU rules on privacy and data sharing, which give data subjects the right to object to automated decision-making and profiling.
Ahead of GDPR (as it's called) being introduced next year, companies must focus on demonstrating that they can handle customers' data sensitively, and on providing customers with a genuine benefit in return for their decision to share data.
Access to a huge amount of data is great for AI. However, the data itself may reflect human qualities, and these are not always great. For example, it has been shown repeatedly that algorithms tend to absorb, then reflect, the biases present in the data sets they are trained on.
At first AI seems to solve the question of bias. By removing gender, age, race, religion, sexuality and other such variables from an algorithm for a job search, hiring managers may believe that they’re making a fairer decision. However, these biased categories are sometimes subtly echoed in other variables that will be analysed by artificial intelligence.
A science degree (typically male), a background in sales (typically straight) and an Oxbridge education (typically white) could constitute a desirable candidate for an organisation, and AI, in its current form, won't do anything to challenge that.
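This proxy effect can be made concrete with a small sketch. The data and the screening rule below are entirely invented for illustration: the rule never looks at gender, yet because degree subject is correlated with gender in this toy population, the outcome still differs by gender.

```python
# Toy illustration of proxy bias. The protected attribute (gender) is
# removed from the screening rule, but a correlated feature (degree
# subject) carries the same signal. All data here is invented.

candidates = [
    # (gender, degree_subject)
    ("male", "science"),
    ("male", "science"),
    ("male", "arts"),
    ("female", "science"),
    ("female", "arts"),
    ("female", "arts"),
]

# A rule learned from historically biased hiring decisions:
# "science degree => shortlist". It never consults gender directly...
def shortlist(degree):
    return degree == "science"

# ...yet shortlist rates still differ by gender, because degree subject
# is correlated with gender in this invented population.
def shortlist_rate(gender):
    group = [degree for g, degree in candidates if g == gender]
    return sum(shortlist(degree) for degree in group) / len(group)

print(shortlist_rate("male"), shortlist_rate("female"))
```

The 'gender-blind' rule shortlists men at twice the rate of women here, which is exactly the subtle echo the paragraph above describes.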
Studies have shown that bad bias, in the form of prejudice or hiring purely for culture match, equals bad business. AI may in future be used to analyse the financial value a diverse workforce could bring an organisation. This would be incredibly useful, although the limitations of this approach are more likely to be rooted in the difficulty of sourcing the data than in developing the technology.
For this type of AI to work, it must also be tuned to ‘good bias’, such as those useful intuitions that lead a CEO to hire a dedicated, talented workforce with a range of backgrounds and viewpoints.
As we use AI to explore further the make-up of an effective company, the practice begins to resemble alchemy. Data may one day show that an individual with one part anger to every seven parts good humour is well suited to work alongside someone with three parts charisma and four parts caution.
Although people analytics companies like Weavee are demonstrating the usefulness of matching employees to roles based on a combination of personality- and performance-led data, for this process to be completely driven by AI, data needs to be fully representative of the people it describes.
At the moment, it is quicker and less expensive to brief a human to find a perfect candidate than it is to brief a machine. But while humans still mediate the flow of information and job candidates to employers, they are an inefficiency, and may be disrupted along with their industry.
However, people data could be considered a volatile resource. The recent rise in data breaches, along with a lack of trust in how data is used, is making people reluctant to share information, which could hamstring AI's progress in recruitment. AI advocates will therefore need to demonstrate that their processes are sensitive, inclusive, and limited by policy if they are to be accepted by staff and management in workplaces.
The use of AI in hiring processes will be guided as much by politics as by innovation.
Shane McGourty, Director at AdView