AI-assisted HR: finding gold in an ethical minefield


Does AI benefit or burden your (HR) organisation?

On 21 April, EU Commissioner Vestager presented sweeping draft legislation to regulate the use of artificial intelligence, the first such regulation in the world. It gives plenty of scope to comparatively innocent AI applications, but imposes tighter restrictions based on the level of risk involved. One of the areas identified as high-risk is Human Resources. Why? And how do we address such risks? What can AI still contribute to activities like recruitment and performance assessments?

Human resources processes have an enormous impact on the lives of working people and their dependents. Hiring, firing, promotion and demotion determine their spending power, where they live, even their scope to have a family. HR decisions, in other words, touch on fundamental human rights. This is why AI within HR is identified in the upcoming EU regulations as high-risk.

Range of benefits

Why, then, would organisations still want to introduce AI within HR? What can AI technologies like automated web-scraping, natural language processing or voice recognition add to HR processes? The simple answer: efficiency. AI can shoulder much of the administrative burden around HR processes, reducing overhead. In recruitment, meanwhile, AI makes it possible to efficiently screen a far wider selection of candidates, increasing the likelihood of finding scarce and non-conventional talent.

Moreover, with carefully constructed and documented algorithms, AI actually increases the transparency and consistency of the decision-making process for all parties, leaving less scope for mistakes. Rather than making decisions, AI augments the intelligence of the human decision makers and takes some of the pressure off them.

Another key reason for organisations to introduce AI within HR is its potential in helping them diversify their workforce to reflect the balance in society in terms of gender, ethnicity, disability, age, etc. A persistent flaw in current HR practice is bias, which skews recruitment and promotion away from minorities.

Does AI eliminate bias or create it?

HR processes are now overwhelmingly performed on humans, by humans. As such, bias is a constant threat. No one can be totally objective when forming opinions on others. The unconscious makes split-second decisions that our rational brain finds difficult to override. Decisions that favour what is familiar and like oneself. Decisions that may lead to discrimination and exclusion. The rise of artificial intelligence, therefore, has been heralded by many as a way to overcome bias in HR. If left to self-learning computers, which are untroubled by emotions and gut reactions, HR decisions will be more objective and offer more opportunity to diverse talent, they believe. But are they right?

This depends very much on how AI is applied. AI is a wonderful tool to bypass bias on an individual level. On a very different level, however, bias can creep into the very system. AI is often seen only as a forward-looking technology, but it is not. It bases its decisions on existing data. Data from a past where bias was common and documented. Systems trained on such data will absorb the bias inherent in that data without questioning it. Algorithms, unlike the people who design them, have no conscience. So if managers to date have been overwhelmingly male, the algorithm used to select the next one will favour male candidates - unless it is specifically programmed not to. If unchecked, algorithms even tend to amplify the prejudices they are fed via feedback loops.
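To see how this happens, consider a deliberately simplified sketch (the data and group labels below are entirely hypothetical): a naive model "learns" nothing more than the historical hire rate per group, then scores new candidates by that prior. The skew in the past data becomes the skew in future decisions.

```python
from collections import defaultdict

def learn_hire_rates(historical_records):
    """'Train' a naive model: the observed hire rate per group.

    Any group that was historically favoured ends up with a higher
    learned score -- the bias in the data is absorbed as-is.
    """
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in historical_records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# Hypothetical past data: managers were overwhelmingly selected from group A.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 10 + [("B", 0)] * 40)

model = learn_hire_rates(history)
print(model)  # group A scores 0.8, group B scores 0.2

# Feedback loop: if the model's own picks are fed back in as new
# "successful hires", the gap between the groups only widens.
```

Nothing in this toy model is malicious; it simply has no way to question the data it is given, which is exactly the point.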

People are more than data points

Another challenge in using AI in HR is understanding its limitations. Organisations need to guard against “data myopia” or “dataism”, and understand that much of the nuance in people’s lives is impossible to capture in ones and zeros – nuance that shapes their CVs and application letters. AI simply searches the data for specified criteria and data points, and cannot look beyond that to see the human being. While this is good in terms of avoiding bias, it means the criteria must be chosen very carefully. There are limits to the number and type of criteria HR can feed into the AI system. So which criteria really reflect performance or potential? Which criteria are key for which job? And how measurable are they?

If, for example, gaps in people’s CVs are flagged as unfavourable, a lower rating will be assigned to women who have had maternity leave, or people who have suffered long-term illness. If grammar and spelling are given high priority, potentially suitable candidates with dyslexia or from a disadvantaged background may be excluded. If personality criteria favour assertiveness, white males tend to top the candidate list. Meanwhile, because soft skills are very hard to capture in data points, they will be underrepresented in AI-generated profiles.
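A single criterion can have this effect on its own. The sketch below (a hypothetical scoring function, not any real screening product) shows how a seemingly neutral penalty for CV gaps separates two otherwise identical candidates:

```python
def score_cv(years_experience, cv_gap_years, gap_penalty=0.5):
    """Hypothetical screening score: experience minus a penalty per gap year.

    The gap penalty looks neutral, but it systematically lowers the
    score of anyone with maternity leave or long-term illness in
    their history.
    """
    return years_experience - gap_penalty * cv_gap_years

# Two candidates with identical experience; one took two years of parental leave.
candidate_a = score_cv(years_experience=10, cv_gap_years=0)
candidate_b = score_cv(years_experience=10, cv_gap_years=2)
print(candidate_a, candidate_b)  # 10.0 vs 9.0
```

The disadvantage is built into the criterion itself, so no amount of objective execution by the machine will remove it.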

Another dilemma is transparency. A transparent set of criteria is fair and heads off discussions. But if employees know in advance what the criteria are, they may game the system to improve their ranking – giving undue advantage to those with knowledge about how such systems operate.

In fact, though, the challenges discussed here apply just as easily to current HR practices. They are inherent in our humanity. Organisations simply need to recognise that using AI is not going to make them go away. AI is not in itself the solution. AI is a tool, and a powerful one at that. Used well, it brings great benefits; used carelessly, it may actually upscale the inherent flaws in HR processes. Moreover, with AI, a flawed process no longer disadvantages just one or two people: it scales up to everyone subject to its decisions, potentially affecting dozens or hundreds of individuals.

Responsibility

Organisations that hope to transfer responsibility for HR decisions to AI should be cautious. AI is a tool to aid decision-making, but managers remain accountable. The whole HR department needs to own how decisions are made. We are only beginning to see how employees react to AI-generated HR decisions, how much trust there is. Are they seen as fairer or less fair than human decisions? And do employees know who to approach if they are unhappy with an AI-generated decision?

Few companies will have the capacity to develop AI-assisted HR in-house. So many decide to buy it from an external provider. External parties have a commercial interest in selling their product, but buyers often lack the know-how to challenge what they are buying, and to ascertain whether it’s right for them, in both practical and ethical terms. An additional problem is that such systems are created externally, and trained on data unknown to the buyer. Once it’s up and running, the new system is a black box to the HR team, complicating the exercise of oversight and accountability.

Think big, start small

Even a marathon starts with a single step. So when it comes to AI in HR, our advice is: think big and start small. HR first needs a thorough understanding of the fundamentals of Trustworthy AI, and should start building the relevant capabilities. It’s good to involve your compliance department and in-house data scientists at this stage, before you have invested in actual tools. People are needed who can not only interpret the situation, but also stop the system if things go wrong and adjust the code or process. This is in line with the upcoming EU regulation, which calls for meaningful human oversight. Also important is having a diverse set of people, as that increases the likelihood of bias being spotted.

At the same time, you need to define what type of organisation you want to be, and develop a solid plan and strategy that also demonstrates your Digital Ethics values as an organisation. This will ensure that your investments in AI are sustainable and truly aimed at long-term value. With a desired end state in mind from day one (think big), organisations can make strategic decisions on that first, single step. By starting small and rapidly scaling, organisations will organically build trust, experience and capabilities in AI-assisted HR.

So where to start?

Deciding where to start (small) is a very different question. First, identify which organisational imperative needs to be supported by HR. If changing your organisation is a key priority, the opportunities in learning and development are interesting. Alternatively, if finding the best talent is the top priority, start in recruitment. Plenty of choice here, as AI can contribute to every step of the recruitment process. The example below provides some ideas.

Imagine a candidate journey where an unconventional candidate is identified as suitable by an algorithm. Upon approval by the recruiter, the candidate has the opportunity to get in touch with any questions about the hiring process or the organisation. The candidate can choose between connecting directly with a chatbot for real-time support or with a recruiter, which may take longer depending on availability. People often choose the chatbot because of its easy access and instant availability. Some, however, still prefer direct personal contact with a human being because of the warm emotional connection. Neither is likely to completely replace the other, given that recruitment is still human work, but the two have the potential to reinforce each other.

Behind the scenes, Robotic Process Automation (RPA) helps relieve the administrative burden by streamlining data entry and collation. Video interviewing technology helps eliminate potential bias through an impartial assessment of the candidate’s performance. Now, let’s fast forward to the successful candidate’s first day at work. Upon opening her laptop, she is greeted by the same chatbot, only this time offering support as part of an excellent onboarding experience. This example is not as futuristic as it might seem. Organisations that are willing to start building AI capabilities in HR early, and to be transparent about it, will be the ones to benefit most.

Building trust

Regardless of the AI system’s technical merits, however, it will not be a success if employees don’t trust it. Employees, after all, are your organisation’s key stakeholders. Your ability to attract and retain talent, your licence to operate, depends on upholding an ethical brand image in the market and respecting the relevant stakeholder values. There is no question that employees will need convincing, and values are the right starting point for discussion.

Organisations should engage with staff about the introduction of AI-assisted HR: openly presenting the business case; explaining how the system will be designed to meet the requirements of upcoming EU legislation in areas like transparency, data quality, auditability and privacy; how it augments rather than replaces human decision-making; and how it can actually diversify the workforce and help everyone reach their full potential.

Going boldly forward

Using AI within HR, with all its ethical implications, is like looking for gold in a minefield. Rash plans can blow up in your face. Just think of the scandal around the multinational that discovered its new AI recruitment tool was discriminating against women. Incidents like these can set back AI development in an organisation and in broader society for many years. But it would be wrong not to venture into this promising area at all. AI is here to stay. As more organisations test the terrain and taste the benefits, late adopters risk losing their competitive edge. With the right precautions, and the right know-how and support on board, there is wisdom in going boldly forward.

More information?

For more information please contact Hilary Richters, Petra Tito or Sanne Welzen via the contact details below.
