‘Walsmart’: when AI hits the shop floor
11 January 2022
In 2020, as the coronavirus pandemic hit, Walmart faced a problem. In-store demand had skyrocketed, and the company urgently needed to hire tens of thousands of additional staff for its US stores.
Fortuitously, Walmart had already been working on a new AI-supported hiring system for hourly-paid in-store staff. They wanted to reduce the costs associated with hiring and training new staff by speeding up the process and improving retention, whilst also achieving their organisational equality, diversity and inclusion goals. The pandemic only made this task more urgent, acting as a catalyst for the rapid rollout of the new system.
A crucial component of the new system, which was subsequently rolled out at speed, was a machine learning algorithm embedded in the application and hiring management system. The algorithm was trained to rank candidates according to the likelihood of their staying in post for at least three months. Machine learning was applied in the hope of making hiring data-driven, based on facts rather than feelings, and thereby hiring better, faster and with less bias, while removing the need for multiple in-store interviews. This was expected to improve outcomes, save hiring and training costs, and reduce the risk of Covid infection.
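To make the ranking idea concrete, here is a minimal, purely illustrative sketch of how a retention-ranking model of this general kind might work. It uses scikit-learn's logistic regression with invented features and data (assessment score, prior tenure, commute distance); none of this reflects Walmart's actual model, features or thresholds.

```python
# Illustrative sketch only: a generic retention-ranking model of the kind
# described above, NOT Walmart's actual system. All features, data and
# applicant names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical historical data: one row per past hire.
# Columns: assessment score, prior tenure (months), commute distance (miles).
X_train = np.array([
    [82, 14, 3.0],
    [55,  2, 12.5],
    [90, 30, 1.2],
    [47,  1, 20.0],
    [73,  8, 5.5],
    [60,  3, 15.0],
])
# Label: 1 if the hire stayed in post at least three months, else 0.
y_train = np.array([1, 0, 1, 0, 1, 0])

# Train a simple classifier to predict three-month retention.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Rank new applicants by predicted probability of staying at least three months.
applicants = {
    "applicant_a": [78, 10, 4.0],
    "applicant_b": [52,  1, 18.0],
    "applicant_c": [88, 24, 2.5],
}
scores = {
    name: model.predict_proba(np.array([features]))[0, 1]
    for name, features in applicants.items()
}
for name, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: predicted retention probability {p:.2f}")
```

The ranked list a hiring manager sees would be ordered by scores like these, which is why the recommendations reflect predicted retention rather than a manager's own sense of expected performance.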
However, while a great deal of attention has been given to issues of fairness and bias in algorithms, much less attention has been paid to the equally important issue of trust. Would hiring managers be willing to put their faith in the AI to choose the right person for the job?
To find out, in the latter part of 2020 we conducted in-depth interviews with 14 people directly involved in developing, implementing and using Walmart’s new hiring system.
AI in recruitment
Advances in AI and machine learning have greatly expanded the range of tasks that computers can perform, and professional services like HR are no exception.
AI has been applied to various stages of the ‘HR cycle’ in recent years, most notably hiring. So far, debate has tended to focus on issues of fairness and bias. Automated hiring systems (AHSs) using AI can make decisions and recommendations much more quickly than human hirers, and can reduce bias by making hiring more data-driven. At the same time, AI can replicate and reinforce existing inequalities. The high-profile case of Amazon’s now-abandoned CV-screening algorithm is just one example of AI’s potential flaws.
However, far less attention has been devoted to the question of what actually happens when organisations attempt to introduce AI into their systems and processes. If any such system is to live up to the promise of improving hiring, while reducing bias, two conditions are necessary. First, the technology must outperform human decision makers on the intended measures and, second, the human operators must use the technology as intended.
In AI we trust?
While Walmart felt the algorithm performed well against the first of these two conditions (although our research cannot corroborate this), a lack of trust in the system amongst users meant that some were not using it as intended.
While nearly all of those interviewed felt the changes had made hiring faster and safer, some had reservations about the algorithm’s ability to identify good hires.
For example, some reported seeing candidates ranked at the top of the list, rated excellent by the pre-employment assessment, who they felt would not make good associates. Or, conversely, they found candidates further down the list, rated poor, who they thought would make good associates:
“…they say, ‘oh, try to hire somebody excellent or good or’, you know, whatever. But we’ve hired candidates that scored in the poor category. That actually came in and are extremely, extremely, great employees.” (Dave, HR manager)
Others felt that the algorithm lacked the human element:
“[I wouldn’t trust the list over a human] because that list is not calling people and hearing their voices and hearing what they have to say. I know that we’ve talked about automating the hiring process and stuff like that from time to time, but I think you still need to have that human interaction, because you can get a lot from people just in five minutes.” (Lindsey, HR manager)
In both situations, users were bypassing the recommendations and relying on other factors when making hiring decisions, and in one case had reverted to bringing some candidates in for in-person interviews.
The human factor in technology acceptance
Research shows that trust and perceived usefulness are important for technology acceptance, but misalignment between the algorithm’s recommendations (based on retention) and users’ own judgements of quality (based on expected performance) undermined both in this case. This is particularly concerning, as research has shown that recruiter overconfidence in their ability to rate others is associated with poor hiring decisions. One of the key objectives of the changes to the hiring system was to avoid reliance on factors, such as appearance, that are unrelated to performance.
Walmart are aware of the importance of trust and have taken steps to increase transparency and confidence in the system by, for example, making changes to the hiring system’s user interface to highlight the qualities that put candidates nearer the top of the list. However, further work is needed to convince some users of the value of the system.
While we cannot say whether AI really does improve outcomes and reduce bias, if the technology is to have any chance of achieving these goals, close attention to the human factor in technology adoption is essential. This will involve: i) raising user awareness of the benefits of the technology and its goals, and ii) ensuring system performance aligns with user expectations and goals.
In the absence of these two conditions, technology implementation is unlikely to succeed.