The AI revolution has begun and is spreading into almost every aspect of people’s professional and personal lives, including recruitment.
While artists fear copyright infringement or simply being replaced, businesses and managers recognize the potential for greater efficiency in fields such as supply chain management, customer service, product development, and human resources (HR) management.
Soon, all business fields and operations will have to adopt AI in some way. However, because AI systems learn from data and processes shaped by people, human biases become embedded in the technology.
Our research examined the recruitment and placement field, which widely adopts AI to automate resume screening and evaluate video interviews of job applicants.
In recruitment, AI promises greater objectivity and efficiency by eliminating human biases and increasing fairness and consistency in decision-making processes.
However, our research shows that AI can subtly, and sometimes overtly, increase bias, and that intervention by HR professionals can worsen these effects rather than mitigate them. This challenges the belief that human oversight can control and moderate AI.
Amplification of Human Bias
One reason for using AI in recruitment is the expectation that it will be more objective and consistent than human recruiters. However, numerous studies have shown the technology is actually highly likely to be biased, because AI learns from the data sets it is trained on: if the data is flawed, the AI will be flawed too.
Biases in the data can then be exacerbated by the human-made algorithms that process it, since designers often build their own biases into those algorithms.
In interviews conducted with 22 HR professionals, we identified two common biases in recruitment: “stereotype bias” and “similarity bias”.
"Stereotype bias" occurs when recruitment decisions are influenced by stereotypes about specific groups. For example, preferring candidates of the same gender as the recruiter can entrench gender inequality.
"Similarity bias" occurs when recruiters prefer candidates with backgrounds or interests similar to their own.
These biases can significantly affect the fairness of the recruitment process, and they are embedded in historical recruitment data. When those data are used to train AI systems, the resulting AI inherits the bias.
If past recruitment practices favored certain demographics, the AI will continue that trend. Mitigating these biases is challenging because algorithms can infer protected attributes, such as gender, from seemingly unrelated information in the data.
For example, in countries where men and women serve different lengths of military service, an AI could predict a candidate's gender from the duration of their service, even if gender is never recorded.
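To make this concrete, here is a minimal sketch in Python, using entirely synthetic data and an assumed policy of roughly 24 months of service for men versus 12 for women, showing how a model can recover gender from a proxy feature even when gender is never given to it:

```python
# Minimal sketch: synthetic data under an ASSUMED service policy
# (24 months for men, 12 for women). All names and numbers are
# illustrative, not drawn from any real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, size=n)  # 0 = woman, 1 = man (hidden label)
service_months = np.where(gender == 1,
                          rng.normal(24, 2, n),
                          rng.normal(12, 2, n))

# The model only ever sees service duration, never gender itself.
X = service_months.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Gender recovered from service duration alone: {clf.score(X_test, y_test):.0%}")
```

In this toy setting the classifier recovers gender almost perfectly, which is why simply dropping a protected attribute from the data does not remove it from the model.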
The persistence of these biases underscores the need for careful planning and monitoring of recruitment processes, whether conducted by humans or by AI, to ensure fairness.
Can Humans Help?
In addition to the HR professionals, we interviewed 17 AI developers. Our aim was to investigate how an AI system could be developed to reduce recruitment bias.
Based on the interviews, we developed a model in which HR professionals and AI developers exchange information and question their own biases as they examine data sets and develop algorithms.
However, our findings indicate that the difficulty of implementing such a model lies in the educational, professional, and demographic differences between HR professionals and AI developers.
These differences hinder effective communication, collaboration, and even the ability to understand each other. HR professionals are traditionally trained in people management and organizational behavior, while AI developers are experts in data science and technology.
These different backgrounds can lead to misunderstandings and mismatches when the two groups work together. This is a particular problem in small countries like New Zealand, where resources are limited and professional networks are less diverse.
Integrating HR and AI
If companies and HR professionals wish to address bias issues in AI-based recruitment, they need to make several changes.
Firstly, it is crucial to implement a structured training program for HR professionals focused on information systems development and AI. This training should cover the fundamentals of AI, how to identify biases in AI systems, and strategies for reducing those biases.
Additionally, encouraging better collaboration between HR professionals and AI developers is essential. Companies should aim to form teams that include both HR and AI experts. These teams can help bridge communication gaps and better align their efforts.
Furthermore, developing culturally relevant datasets is vital for reducing biases in AI systems. HR professionals and AI developers should collaborate to ensure that the data used in AI-supported recruitment processes represents diverse demographic groups. This will help create more equitable recruitment practices.
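As one illustration, a representation audit can be as simple as comparing group shares in the training data. This sketch assumes a hypothetical CSV of past applicants; the file and column names are placeholders, not a real system:

```python
# Minimal sketch of a representation audit. "historical_applications.csv"
# and "demographic_group" are hypothetical placeholders for whatever past
# recruitment data an organization actually holds.
import pandas as pd

df = pd.read_csv("historical_applications.csv")

# Share of each group in the training data; groups far below their share
# of the relevant applicant population are candidates for re-sampling or
# additional data collection before training.
shares = df["demographic_group"].value_counts(normalize=True)
print(shares.round(3))
```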
Finally, there is a need for guidelines and ethical standards that can build trust in the use of AI in recruitment and ensure fairness. Organizations should implement policies that promote transparency and accountability in AI-supported decision-making processes.
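One concrete form of accountability is to audit the AI's outcomes, not just its inputs. The sketch below applies the "four-fifths" rule of thumb from US employment-selection guidance: if any group's selection rate falls below 80% of the highest group's rate, the result warrants investigation. File and column names are again hypothetical placeholders:

```python
# Minimal sketch of an adverse-impact check on AI screening outcomes.
# "ai_screening_outcomes.csv", "demographic_group", and "shortlisted"
# (a 0/1 flag) are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ai_screening_outcomes.csv")
rates = df.groupby("demographic_group")["shortlisted"].mean()
print(rates.round(3))

impact_ratio = rates.min() / rates.max()
if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f} is below 0.8: possible adverse impact.")
```

Publishing such audits regularly is one way to make AI-supported decisions transparent to candidates and regulators alike.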
By taking these steps and combining the strengths of HR professionals and AI developers, we can create a more inclusive and fair recruitment system.
Original Article Page: [Link]