The use of AI recruitment algorithms has become increasingly common in the workplace, greatly easing the selection workload, enabling HR teams to broaden their searches, and bringing many efficiencies. But there are downsides.
A hesitation often expressed is that algorithms can be discriminatory, cementing past biases into an organization’s hiring patterns and foreclosing new approaches and greater diversity among recruits. This is clearly a problem, though one that can be countered by careful management of the process and by designing better algorithms.
A recent study by Megan Fritts of the University of Arkansas and Frank Cabrera of the University of Wisconsin–Madison considers another problem, one that has received little attention in debates about the ethics of algorithms: that the use of recruitment algorithms leads to a ‘dehumanization’ of the hiring process and, in so doing, can negatively affect employee-employer relationships.
Algorithms used for sifting through thousands of résumés may exaggerate biases but can hardly be said to be very dehumanizing. Problems really occur when AI-based assessment tools are used to analyze video interviews, or when algorithms influence the final selection by recommending the best candidates from the remaining pool. By introducing artificial values into the recruitment process and removing direct human judgement, for good or ill, these tools clearly effect a ‘dehumanization’. (Here the authors mean dehumanizing in the sense of removing the human presence, not in the other sense of conceiving of humans as subhuman.)
What’s the problem?
Treating humans as subhuman is always morally and ethically wrong, whereas removing the human presence is often highly desirable, as when dishwashing was automated. Furthermore, the human presence in recruitment can itself present problems: recruiters can bring unacknowledged biases, can be distracted, or may simply be having a bad day. Yet a problem does exist. In a recent survey of HR professionals by the HR Research Institute, respondents cited ‘dehumanization’ as a greater concern than the more commonly mentioned issue of bias and discrimination when using AI hiring algorithms.
However clever a predictive algorithm is, and even after rigorous validation, it is unlikely to come near the intuitive and innate understanding a human can bring, or the nuanced, experience-based judgement he or she can offer regarding an applicant’s likely fit with the organization. Even knowing this, companies may feel the efficiencies gained make the trade-off worthwhile, but there is a further problem.
The key finding of this research is that the dehumanizing effect of hiring algorithms can significantly undermine employee-employer relationships: relationships that are embryonic at the hiring and on-boarding stage are stripped of something characteristic of the ways in which humans typically engage with one another. In human relationships, our values, and the relations between them, are complex and often difficult to balance. Any attempt to impose artificial values that only approximate, but never replicate, real human values by means of predictive algorithms is bound to strike some employees and employers as uncomfortably alienating, thereby damaging the employee-employer relationship.
Sadly, the contemporary employee-employer relationship, even before the rise of AI in the workplace, is often reported to be far from perfect. At a time when organizations are increasingly aware of the need to build harmonious workplaces and to bring clarity to the shared aims and competing needs of employers and employees in order to improve performance, and when corporate boards profess to have moved away from shareholder-centric principles of corporate governance toward a commitment to all of their stakeholders, enhancing rather than undermining relationships should be a priority.
If our natural human agency is judged against artificial values (clear-cut and simplified) rather than the normal values (complex and nuanced) we act on in our real lives, we are made to feel like automatons. Even if we are chosen for the job, we are denied the satisfaction of knowing it was our whole self that won it; instead, we suspect it was because we ticked the right boxes. The employee then feels no more human connection to the employer than to a machine.
As the presence of AI in the workplace increases in the coming years, hiring algorithms are bound to become widespread and the concerns raised in this study will be something HR professionals need to be aware of. The authors do not offer a definitive solution. It is likely that trade-offs need to be decided on a case-by-case basis.
Access the full research paper here: ‘AI Recruitment Algorithms and the Dehumanization Problem,’ Megan Fritts and Frank Cabrera, Ethics and Information Technology 23(15), Springer, December 2021. DOI: 10.1007/s10676-021-09615-w