From hiring algorithms to credit scoring systems, AI now makes decisions that affect the lives of millions of people every day. Fairness is often discussed, but what does it truly mean to be “fair” in a world where people have complex, multifaceted identities?
This is where intersectional fairness comes in. Intersectional fairness goes beyond just considering one demographic characteristic—such as gender or race—and instead looks at how multiple, overlapping identities influence outcomes. For instance, a Black woman’s experiences and challenges are often different from those of a White woman or a Black man. When AI systems fail to account for these nuances, they risk perpetuating biases that affect marginalized groups disproportionately.
As someone deeply involved in the AI and responsible innovation space, I’ve seen firsthand how organizations struggle with implementing fairness in their AI models. The concept of intersectional fairness, although increasingly recognized, is still in its early stages when it comes to operationalizing it effectively in AI systems. This blog post aims to explore why intersectional fairness is essential, the challenges organizations face in achieving it, and how you can start taking actionable steps toward more inclusive and fair AI systems.
The Need for Intersectional Fairness
Intersectional fairness matters because, in the real world, people don’t exist as isolated characteristics. We all embody a variety of traits—race, gender, age, sexual orientation, disability, and more—that intersect in unique ways. These intersections shape how we navigate the world, how we face challenges, and how we are treated by institutions and systems.
Take the example of hiring algorithms. Consider two candidates: one is a young White woman, and the other is an older Black man. While both may be qualified for the role, an AI model trained on biased data might favor the younger candidate simply because the data reflects a historical bias toward younger, White applicants. If the algorithm only considers gender or race independently, it misses the fact that both candidates’ experiences and challenges are influenced by a combination of factors.
In AI, failing to consider these intersections can lead to unfair outcomes, which not only harms individuals but also damages the credibility of AI systems. According to a paper by Usman Gohar and Lu Cheng, titled “A Survey on Intersectional Fairness in Machine Learning,” the traditional approach of treating fairness as a binary—where an algorithm is either “fair” or “unfair” based on isolated attributes like gender or race—doesn’t capture the complexity of real-world scenarios. We must acknowledge that fairness can’t be one-size-fits-all.
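To see why looking at attributes one at a time can mislead, here is a minimal sketch with invented numbers (not real hiring data): a decision pattern that looks perfectly balanced on race alone and on gender alone, yet is sharply skewed at the intersections.

```python
import pandas as pd

# Invented decisions purely for illustration: 1 = shortlisted, 0 = rejected.
df = pd.DataFrame({
    "race":     ["White"] * 50 + ["Black"] * 50,
    "gender":   (["woman"] * 25 + ["man"] * 25) * 2,
    "selected": [1]*20 + [0]*5 + [1]*5 + [0]*20    # White women, White men
              + [1]*5  + [0]*20 + [1]*20 + [0]*5,  # Black women, Black men
})

# Marginal selection rates look balanced...
print(df.groupby("race")["selected"].mean())    # White: 0.5, Black: 0.5
print(df.groupby("gender")["selected"].mean())  # woman: 0.5, man: 0.5

# ...but the intersectional view reveals the disparity.
print(df.groupby(["race", "gender"])["selected"].mean())
# White/woman: 0.8, White/man: 0.2, Black/woman: 0.2, Black/man: 0.8
```

This pattern is sometimes called fairness gerrymandering: every single-attribute audit passes while specific intersections are systematically disadvantaged.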
The Complications of Intersectional Fairness
While the need for intersectional fairness is clear, implementing it in AI systems is no simple task. There are several complications that organizations face when trying to address intersectional bias in machine learning models:
- Defining Fairness: One of the first challenges is defining what fairness means in an intersectional context. Is fairness the same for different groups, or should it be context-dependent? The work of researchers like Kong (2022) poses critical questions about whether AI models can truly be fair to women of colour and other marginalized groups. These definitions vary based on ethical perspectives, and they can differ from one context to another, leading to confusion and difficulties in implementation.
- Data Scarcity: For an AI system to recognize and mitigate intersectional bias, it needs representative data, and intersectional subgroups are often exactly where the data runs thin: each additional attribute you cross multiplies the number of subgroups while shrinking the data in each one (the sketch after this list illustrates the effect). Gathering enough data that reflects diverse, intersecting identities is not always possible, and even when it is, there’s the challenge of ensuring the data is accurate and unbiased.
- Complexity of Algorithms: AI models are often complex and opaque, making it difficult to understand how they process and weigh different factors. Even if we can identify intersectional bias in the data, it’s challenging to redesign algorithms to address these issues without compromising the model’s efficiency or predictive accuracy.
- Ethical Dilemmas: There is a significant ethical dilemma in trying to balance fairness with other objectives, such as accuracy or business goals. In some cases, improving fairness may lead to a reduction in performance metrics. The challenge is finding a balance that satisfies all stakeholders—organizations, customers, and individuals impacted by the decisions.
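To make the data scarcity point concrete, here is a small sketch on synthetic data (the attribute names, categories, and proportions are all assumptions): each attribute looks well populated on its own, but crossing just three of them fragments the dataset into dozens of subgroups, some nearly empty.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000  # hypothetical applicant pool

df = pd.DataFrame({
    "race":     rng.choice(["A", "B", "C", "D", "E"], size=n,
                           p=[0.60, 0.15, 0.12, 0.08, 0.05]),
    "gender":   rng.choice(["woman", "man", "nonbinary"], size=n,
                           p=[0.48, 0.48, 0.04]),
    "age_band": rng.choice(["18-29", "30-44", "45-59", "60+"], size=n,
                           p=[0.35, 0.35, 0.20, 0.10]),
})

# Each attribute alone has hundreds of rows in its smallest category...
print(df["race"].value_counts().min())

# ...but crossing them yields 5 x 3 x 4 = 60 subgroups, and the rarest
# intersection expects only ~2 rows (10,000 x 0.05 x 0.04 x 0.10).
cells = df.groupby(["race", "gender", "age_band"]).size()
print(cells.sort_values().head())
```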
Challenges in Mitigating Intersectional Bias
Mitigating intersectional bias in AI is not straightforward. We need to take a multifaceted approach that involves data preprocessing, algorithmic adjustments, and post-processing techniques.
- Data Preprocessing: One of the first steps in tackling intersectional fairness is making your data diverse and representative. This might involve collecting additional data or using techniques like oversampling so that marginalized groups are adequately represented (see the first sketch after this list).
- Fairness Constraints in Algorithms: Researchers such as Ghosh et al. (2021) and Foulds et al. (2020) have proposed intersectional fairness definitions and constraints that can be integrated directly into the learning algorithm. These constraints push the model’s predictions toward balance across different groups, even when those groups intersect (see the second sketch after this list).
- Post-Processing Adjustments: After a model has been trained, post-processing techniques can adjust its outputs so that no demographic group is disproportionately disadvantaged (see the third sketch after this list).
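As a first sketch, here is one way to rebalance training data at the intersectional level by oversampling sparse subgroups. The DataFrame and column names are assumptions, and in practice you would weigh this against the risk of overfitting to duplicated rows.

```python
import pandas as pd

def oversample_intersections(df: pd.DataFrame, cols: list[str]) -> pd.DataFrame:
    """Upsample every intersectional subgroup to the size of the largest one."""
    groups = df.groupby(cols)
    target = groups.size().max()
    resampled = [
        g.sample(n=target, replace=True, random_state=0)  # duplicates rows in small cells
        for _, g in groups
    ]
    return pd.concat(resampled).reset_index(drop=True)

# Hypothetical usage: balance across race x gender before training.
# train_balanced = oversample_intersections(train, cols=["race", "gender"])
```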
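The second sketch shows one concrete way to impose such a constraint. Rather than the exact formulations from those papers, it uses the open-source Fairlearn library with a demographic parity constraint applied over a combined race-and-gender feature, so the algorithm treats each intersection as its own group. The data here is synthetic and every column name is an assumption.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic stand-in for real hiring data.
rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "race":             rng.choice(["White", "Black"], size=n),
    "gender":           rng.choice(["woman", "man"], size=n),
    "years_experience": rng.integers(0, 20, size=n),
    "skills_score":     rng.normal(60, 15, size=n),
})
df["hired"] = (df["skills_score"] + rng.normal(0, 10, size=n) > 65).astype(int)

X = df[["years_experience", "skills_score"]]
y = df["hired"]

# Key step: combine attributes so each intersection is its own group.
intersection = df["race"] + "_" + df["gender"]

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),  # balance selection rates across groups
)
mitigator.fit(X, y, sensitive_features=intersection)
fair_predictions = mitigator.predict(X)
```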
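The third sketch illustrates the post-processing idea with group-specific decision thresholds chosen so that every intersectional group is selected at roughly the same rate. This is one simple approach among many, the synthetic scores are assumptions, and equalizing selection rates is itself a fairness choice that should be made deliberately.

```python
import numpy as np
import pandas as pd

def group_thresholds(scores: pd.Series, groups: pd.Series,
                     target_rate: float = 0.30) -> dict:
    """Per-group score cutoffs so each group's selection rate ~= target_rate."""
    return {g: float(np.quantile(scores[groups == g], 1.0 - target_rate))
            for g in groups.unique()}

# Synthetic model scores standing in for a real validation set.
rng = np.random.default_rng(0)
val = pd.DataFrame({
    "group": rng.choice(["White_woman", "White_man",
                         "Black_woman", "Black_man"], size=1_000),
    "score": rng.normal(0.5, 0.15, size=1_000),
})

thresholds = group_thresholds(val["score"], val["group"])
val["selected"] = val["score"] >= val["group"].map(thresholds)
print(val.groupby("group")["selected"].mean())  # each group ≈ 0.30 by construction
```

Because each group is thresholded against its own score distribution, overall accuracy can drop, which is exactly the fairness-versus-performance trade-off raised under the ethical dilemmas above.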
Actionable Takeaways for Leaders and Stakeholders
As leaders and stakeholders in organizations, you have the power to drive change and ensure that your AI systems are fair and equitable. Here are a few practical steps you can take:
- Embrace Diversity in Data Collection: Ensure your AI systems are trained on diverse datasets that account for multiple, intersecting identities. This may require collecting more data or working with external partners who specialize in demographic research.
- Incorporate Fairness Audits: Regularly audit your algorithms for fairness, particularly with respect to intersectionality. This can help surface hidden biases that might not be immediately apparent; the sketch after this list shows one simple starting metric.
- Foster a Culture of Inclusivity: Create an organizational culture that encourages inclusivity, not just in AI development but across all business functions. Collaborate with experts in diversity and ethics to incorporate a broad range of perspectives.
- Invest in Training: AI literacy and responsible AI training are crucial for ensuring your team understands the implications of intersectional fairness. Consider investing in training programs that offer guidance on identifying and mitigating bias in AI systems.
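For the audit step above, a useful starting metric is a worst-case comparison across intersectional groups, in the spirit of Ghosh et al. (2021): the ratio between the lowest and highest subgroup selection rates. A minimal sketch follows, with an invented decisions table and assumed column names; a common rule of thumb (the “four-fifths rule” from US employment guidance) flags ratios below 0.8 for investigation.

```python
import pandas as pd

def worst_case_parity_ratio(df: pd.DataFrame, cols: list[str],
                            decision: str = "selected") -> float:
    """Min/max ratio of selection rates over all intersectional subgroups.
    1.0 means perfect parity; small values flag a badly treated intersection."""
    rates = df.groupby(cols)[decision].mean()
    return float(rates.min() / rates.max())

# Invented audit log: White women and Black men are always selected,
# the other two intersections never are.
decisions = pd.DataFrame({
    "race":     ["White", "White", "Black", "Black"] * 25,
    "gender":   ["woman", "man"] * 50,
    "selected": [1, 0, 0, 1] * 25,
})
print(worst_case_parity_ratio(decisions, cols=["race", "gender"]))  # 0.0
```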
Summary
Intersectional fairness is no longer just an ideal; it’s a necessity for creating AI systems that are both ethical and effective. While the challenges are significant, they are not insurmountable. By taking proactive steps to understand and address intersectional biases, organizations can build AI systems that truly reflect the diverse needs of all people.
As you consider the next steps in your own organization, remember that change doesn’t happen overnight. However, with the right tools, mindset, and commitment to fairness, we can create AI systems that are not only smarter but also more equitable.
For organizations looking to explore bias mitigation further, I invite you to go through the guidance below, or reach out for tailored support or training. Whether you need help with specific techniques, want to develop a mitigation strategy, or are seeking a comprehensive approach to AI model development, we are here to help.
Free Resources for Individual Fairness Design Considerations
Sampling Bias in Machine Learning
Social Bias in Machine Learning
Representation Bias in Machine Learning
Intersectional Fairness in AI – £99
Empower your team to drive Responsible AI by fostering alignment with compliance needs and best practices.
- Practical, easy-to-use guidance from problem definition to model monitoring
- Checklists for every phase of the AI/ML pipeline
AI Fairness Mitigation Package – £999
The ultimate resource for organisations ready to tackle bias at scale, from problem definition through to model monitoring, in support of responsible AI practices.
Customised AI Fairness Mitigation Package – £2499
Sources
Islam, R., Keya, K.N., Pan, S., Sarwate, A.D. and Foulds, J.R., 2023. Differential fairness: an intersectional framework for fair AI. Entropy, 25(4), p.660.
Foulds, J.R., Islam, R., Keya, K.N. and Pan, S., 2020, April. An intersectional definition of fairness. In 2020 IEEE 36th International Conference on Data Engineering (ICDE) (pp. 1918-1921). IEEE.
Ghosh, A., Genuit, L. and Reagan, M., 2021, September. Characterizing intersectional group fairness with worst-case comparisons. In Artificial Intelligence Diversity, Belonging, Equity, and Inclusion (pp. 22-34). PMLR.
Gohar, U. and Cheng, L., 2023. A survey on intersectional fairness in machine learning: Notions, mitigation, and challenges. arXiv preprint arXiv:2305.06969.
Kong, Y., 2022, June. Are “intersectionally fair” AI algorithms really fair to women of color? A philosophical analysis. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 485-494).
Wang, A., Ramaswamy, V.V. and Russakovsky, O., 2022, June. Towards intersectionality in machine learning: Including more identities, handling underrepresentation, and performing evaluation. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 336-349).