
How In2Dialog Tackles Bias in Recruitment AI

Practice doesn’t necessarily make perfect, but it does make permanent

It’s often said that repetition leads to perfection, and in many cases, that holds true. However, without a clear goal and constructive feedback to assess progress, repetition doesn’t always refine an action—instead, it risks entrenching its errors.

This is a fundamental concern in the field of AI. Whilst it is a gross simplification, AI is in effect an exercise in repetition: it learns by iterating over data, identifying patterns, and refining its outputs accordingly. The problem arises when those patterns are flawed or biased. Without proper oversight and course correction, AI risks amplifying these flaws and embedding them permanently into its systems.

So, what do we do when AI starts repeating things that we’d rather it didn’t? Perhaps even when it starts repeating things that aren’t just a little undesirable but actively contrary to our values?

This challenge is particularly critical in the field of recruitment, where great strides have been made over the years to make the process more ethical, fair, equal, and effective. Does AI risk entrenching the biases that we’ve worked so hard to eliminate from recruitment practices?

 

Teething Troubles: When AI Goes Awry

One infamous example of AI’s potential to repeat and amplify undesirable patterns comes from Microsoft’s chatbot, Tay. Launched in 2016, Tay was designed to learn from user interactions on Twitter. However, within hours of going live, it began to replicate and amplify the offensive and inflammatory language it encountered. Tay quickly devolved into sharing racist and misogynistic tweets, forcing Microsoft to shut it down within 24 hours.

This debacle highlighted a key issue with AI: it lacks the ability to discern right from wrong. If exposed to biased or harmful data, it will incorporate those elements into its responses. The same principle applies to recruitment AI. Studies have demonstrated that hiring algorithms can inadvertently perpetuate discriminatory practices. For instance, algorithms trained on historical hiring data may favour candidates who fit a narrow demographic profile—mirroring existing workforce biases—and disadvantage others based on factors such as gender, ethnicity, or age.

Understanding Human Bias: Can We Escape It?

Of course, bias is not a problem unique to AI. Humans, too, carry biases that stem from their upbringing, culture, environment, and experiences. These biases influence decision-making at both conscious and subconscious levels. Despite significant progress in recognising and addressing systemic bias over the last century, humans remain inherently fallible.

But humans have an advantage that AI lacks: empathy. Empathy allows us to step outside our own perspectives, question our assumptions, and understand the impact of our decisions on others. It’s what drives efforts to make hiring processes more inclusive and equitable.

Unfortunately, empathy is not a quality that can be programmed into AI. While AI can simulate understanding by analysing patterns and outcomes, it cannot truly grasp the human experience or the emotional context behind decisions.

From Bias to Embedded Error: How AI Can Entrench Inequality

In their overview study of existing empirical research in the area (Ethical Considerations in AI-based Recruitment), authors Dena F. Mujtaba and Nihar R. Mahapatra point to exactly this human factor as the root of potential trouble in AI models. The first challenge they identify in addressing AI bias is the need to define concepts such as ‘fairness’ and ‘justice’, terms that remain deeply contested and fluid within society. Without universally agreed-upon definitions, training AI to uphold these values risks embedding subjective, culturally specific, or even contradictory interpretations. This underscores the broader issue: AI reflects not just data, but the ongoing complexities of human ethics.

From here, they go on to identify the specific ways in which bias can manifest in an AI recruitment model:

  • Training Data: If the data is biased, the AI system will learn and perpetuate that bias, often due to disproportionate representation of certain groups or inherent human bias in manual labelling.
  • Label Definitions: The way target labels, such as “good hire,” are defined can lead to biased outcomes if certain factors disproportionately impact protected groups, such as gender.
  • Feature Selection: Bias can occur when features used to predict outcomes don’t accurately represent protected groups, leading to inaccurate classifications based on factors like education.
  • Proxies: Even without direct demographic attributes, AI models can infer biases from other factors, such as the educational background of applicants (see the sketch after this list).
  • Masking: Prejudiced data selection or feature engineering can intentionally or unintentionally obscure important details, resulting in biased predictions.
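
To make the ‘proxies’ point concrete, here is a minimal, purely illustrative sketch in Python (the data, feature names and numbers are all invented): a classifier is trained on synthetic ‘historical hires’ whose labels were biased against one group, and even though the protected attribute is never given to the model, a correlated proxy feature lets it reproduce much of the gap.

```python
# Hypothetical sketch: how a proxy feature can reintroduce a protected
# attribute that was deliberately dropped from the training data.
# Synthetic data only; all names and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute (e.g. group A vs. group B), never shown to the model.
group = rng.integers(0, 2, size=n)

# A "neutral-looking" proxy feature that happens to correlate with the group,
# e.g. attendance at a particular institution.
proxy = group + rng.normal(0, 0.3, size=n)

# A genuinely job-relevant skill score, independent of group.
skill = rng.normal(0, 1, size=n)

# Historical labels carry human bias: group 1 was hired less often
# at the same skill level.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

# Train only on "neutral" features: skill and the proxy (group is excluded).
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group 0 far more often, because the proxy
# lets it reconstruct the historical bias.
preds = model.predict(X)
for g in (0, 1):
    print(f"selection rate for group {g}: {preds[group == g].mean():.2f}")
```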

The authors then survey a range of mathematical and engineering techniques being employed in various models to reduce the impact of these five sources of bias. Yet even with these mitigation techniques, the challenge remains of selecting appropriate fairness metrics in the first place, and of adapting them to specific contexts, particularly as job market dynamics evolve.
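
As a simple illustration of why the choice of metric matters (the metrics below are common examples, not the specific measures prescribed by the authors), two widely used group-fairness measures can disagree on the very same set of predictions:

```python
# Hypothetical sketch of two common group-fairness metrics, computed on
# made-up predictions. Values and group labels are invented for illustration.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in selection rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (qualified candidates selected)."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Toy batch of candidates: true qualification, model decision, group membership.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity gap:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap:", equal_opportunity_diff(y_true, y_pred, group))
```

In this toy example the demographic-parity gap is zero while the equal-opportunity gap is not, which is precisely the kind of tension that makes the selection of a fairness metric a contextual, rather than purely technical, decision.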

The authors also stress the seemingly paradoxical demands of privacy and transparency. The individuals represented in the data on which large language models are trained have a right to privacy, and yet without understanding the nature of those sources, questions of transparency remain. Moreover, candidates evaluated and rejected by AI are rarely given feedback, which undermines trust and fairness, and ultimately the perceived legitimacy of the system.

But despite these concerns, Mujtaba and Mahapatra express overall optimism about the potential for AI to transform recruitment practices, stating that it has the power not only to streamline processes but to render them more transparent, robust and equal. However, the authors stress that in order to do it ‘right’, there is a responsibility at every level, from users to developers and from researchers to policymakers, to make sure that the transformation is both innovative and fair.

How In2Dialog Tackles Bias in Recruitment AI

At In2Dialog, we take that responsibility seriously. We understand the risks associated with bias in AI and take proactive steps to address these challenges. Our AI-augmented recruitment tools are designed to improve hiring processes while maintaining ethical and equitable standards.

Here’s how we do it:

  • Diverse and Representative Data: Our systems are trained on datasets that prioritise diversity, ensuring that no single demographic is overrepresented. This minimises the risk of favouring one group over another.
  • Rigorous Academic Research: Our psychometric tools are derived from peer-reviewed research, and we work in conjunction with leading universities in order to continually expand and refine our understanding.
  • Continuous Monitoring and Auditing: We regularly audit our algorithms to identify and address any biases that may arise. This iterative approach ensures that our tools evolve alongside societal standards and expectations (a simplified sketch of what such a check can look like follows this list).
  • Feedback Loops: By integrating feedback from both recruiters and candidates, we refine our tools to ensure they align with real-world needs and values.
  • Transparency: We believe in making our processes as transparent as possible. We work closely with our clients and maintain a two-way dialogue, fostering trust and accountability.
  • Focus on Collaboration: Rather than replacing human decision-making, our tools are designed to enhance it. By providing recruiters with actionable insights, we empower them to make more informed and empathetic decisions.
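
By way of illustration only (this is not In2Dialog’s actual pipeline; the field names, threshold and the four-fifths-style rule below are assumptions made for the sketch), a recurring audit of the kind described above could, in its simplest form, compare selection rates across groups and flag any that fall below a chosen fraction of the highest rate:

```python
# Minimal, hypothetical sketch of a recurring bias audit over one batch of
# model recommendations. Data, group labels and threshold are illustrative.
import numpy as np

def audit_selection_rates(recommended, group, min_ratio=0.8):
    """Flag groups whose selection rate falls below min_ratio of the highest rate."""
    rates = {str(g): float(recommended[group == g].mean()) for g in np.unique(group)}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}
    return rates, flagged

# Toy batch of recommendations from one audit period.
recommended = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, flagged = audit_selection_rates(recommended, group)
print("selection rates:", rates)
if flagged:
    print("review needed for groups:", list(flagged))
```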

By combining the strengths of AI with the unique human capacity for empathy and critical thinking, In2Dialog is committed to creating recruitment solutions that are not only efficient but also fair and inclusive. Our goal is to help organisations build diverse, high-performing teams while ensuring that bias has no place in the hiring process.