Understanding AI Fairness

AI fairness and bias have become essential concerns across ethical, social, commercial, and legal spheres. Fairness in AI encompasses a range of contexts, including avoiding discrimination based on protected characteristics such as race and gender, adhering to fair processes, and ensuring equitable or consistent outcomes depending on the situation. It also involves preventing the exploitation of personal data and maintaining fairness in business practices and marketplaces.

Researchers have suggested several methods to evaluate and address bias in AI systems, but these methods are often ad hoc and lack a systematic structure for assessing fairness throughout the AI lifecycle. There is no universal process for assessing AI fairness across different fields and organisations. AI fairness is context-dependent: different machine learning techniques and algorithms require distinct approaches based on the specific problem and requirements of each AI system, and biases and their impacts vary with the application and scenario.

The ICO emphasises the need for a “by design” approach that incorporates fairness into AI development from the start. The aim of Esdha is to provide guidance for organisations to create fair, safe, and trustworthy AI systems using this “by design” approach through:

  • Pre-Processing Fairness: Interventions before training (e.g., rebalancing datasets, removing sensitive attributes); a minimal sketch follows this list.
  • In-Processing Fairness: Adjustments during model training (e.g., fairness constraints, adversarial debiasing).
  • Post-Processing Fairness: Modifying outputs to achieve fairness (e.g., recalibration of predictions).
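To make the pre-processing stage concrete, here is a minimal sketch, in the spirit of classic reweighing methods, that assigns each training example a weight so that group membership becomes statistically independent of the label. It assumes NumPy and a scikit-learn-style estimator that accepts sample weights; the function name and toy data are hypothetical.

```python
import numpy as np

def reweighing_weights(group, label):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y), which make
    group membership statistically independent of the label."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.zeros(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                weights[mask] = (group == g).mean() * (label == y).mean() / mask.mean()
    return weights

# Hypothetical toy data: group 0 dominates the positive label
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([1, 1, 1, 0, 1, 0, 0, 0])
w = reweighing_weights(group, label)
print(w)  # under-represented (group, label) pairs receive larger weights
# Most scikit-learn estimators accept these weights, e.g.
# LogisticRegression().fit(X, label, sample_weight=w)
```

In-processing and post-processing interventions follow the same spirit at later stages: the former adds fairness constraints or adversarial terms to the training objective, while the latter adjusts predictions or decision thresholds after training.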


Moreover, current AI fairness approaches often focus on products, leaving gaps for practitioners involved in services or consulting. To address fairness issues effectively, organisations need support across different organisational and operational contexts, including an awareness of how fairness work is commonly, and sometimes problematically, prioritised:

Perceived Severity of Fairness-Related Harms

Prioritising by quantified harm can favour majority groups, overshadowing severe impacts on smaller, marginalised communities.

Ease of Data Collection and Mitigation

Prioritising the groups for which data is easiest to collect can perpetuate disparities by neglecting groups facing significant systemic marginalisation.

Perceived PR or Brand Impacts

Performance disparities with the potential for viral attention or reputational harm are often prioritised; examples include high-profile failures such as biased resume-screening systems. Focusing on PR and brand risks may sideline actual stakeholder harms, prioritising optics over substance.

Customer or Market Needs

Business imperatives often dictate prioritisation, with organisations favouring high-value customers. Market-driven approaches favour privileged groups, especially in tiered geographic deployments, reinforcing existing social and structural inequities by prioritising powerful and already privileged groups.

Design considerations for embedding Fairness in Requirements, Context, and Purpose

Previous research has highlighted the various dimensions of bias in AI, such as technical, legal, social, and ethical aspects. It also points out the need for fairness regulations in AI across different domains, including for people with disabilities, and establishes the importance of ethical principles in AI fairness. Therefore, here are some design considerations to get you started.

Defining Requirements

  • Stakeholder Identification: Engage domain experts, ethicists, and affected communities to identify fairness priorities.
  • Contextual Sensitivity: Recognize the sociocultural and organizational contexts to understand fairness implications.
  • Use Case Alignment: Specify how fairness dimensions align with the broader purpose of the AI system.

Contextual Understanding

  • Sensitive Attributes: Identify attributes relevant to fairness, such as age, gender, or socioeconomic status.
  • Potential Disparities: Map out potential biases or disparities that could arise in the system (a small audit sketch follows this list).
  • Regulatory Compliance: Ensure alignment with legal frameworks (e.g., GDPR, Equal Employment Opportunity laws).
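As a small illustration of mapping potential disparities, the sketch below audits outcome rates across a sensitive attribute using pandas; the column names and data are hypothetical, and a real audit would also consider intersections of attributes and statistical uncertainty.

```python
import pandas as pd

# Hypothetical historical decisions, with a sensitive attribute column
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})

# Outcome rate per group: large gaps flag disparities worth investigating
print(df.groupby("age_band")["approved"].mean())
```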

Purpose Articulation

  • Fairness Goals: Clearly define fairness objectives (e.g., reducing bias, ensuring equitable outcomes).
  • Success Metrics: Establish measurable fairness metrics and thresholds (a worked example follows this list).
  • Transparency Commitment: Incorporate mechanisms for explaining fairness decisions to stakeholders.
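To make the Success Metrics point concrete, here is a minimal sketch of one widely used fairness metric, the demographic parity difference, checked against a threshold; the function name, toy predictions, and the 0.1 threshold are hypothetical illustrations rather than prescribed values.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical predictions for eight applicants in two groups
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # |0.75 - 0.25| = 0.50

THRESHOLD = 0.1  # example threshold a team might set in its requirements
if gap > THRESHOLD:
    print("Fairness threshold exceeded: review before deployment")
```

Demographic parity is only one of several, sometimes mutually incompatible, fairness definitions; which metric and threshold are appropriate depends on the context mapped out above.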

 

Best practices for addressing fairness

Fairness is inherently interdisciplinary, and achieving it requires collaboration across diverse teams. Consider the following steps:

Building Interdisciplinary Teams

Include professionals with expertise in:

  • Social Sciences: To understand societal impacts and ethical implications.
  • Legal Experts: To ensure compliance with fairness-related laws and standards.
  • Ethicists: To evaluate moral considerations and align with ethical principles.
  • Technical Experts: To design and implement fairness-aware AI algorithms.

Co-Creation with Stakeholders

Engage end-users, affected communities, and organizational leaders in the development process. Their input can provide valuable insights into fairness concerns and desired outcomes.

Ongoing Collaboration

Fairness is not a one-time task. Establish processes for continuous engagement and feedback from stakeholders throughout the AI lifecycle.


Free Resources for AI Fairness Design Considerations

Stakeholder Identification for Machine Learning 

The Solutionism Trap in Machine Learning

AI Fairness Mitigation Package – £999

The ultimate resource for organisations ready to tackle bias at scale, from problem definition through to model monitoring, driving responsible AI practices.

  • Mitigate and resolve 15 Types of Fairness specific to your project with detailed guidance from problem definition to model monitoring.
  • Packed with practical methods, research-based strategies, and critical questions to guide your team.
  • Comprehensive checklists for every phase in the AI/ML pipeline.

Get the Fairness Mitigation Package (delivery within 2-3 days)
 
Customised AI Fairness Mitigation Package – £2499

We’ll customise the design cards and checklists to meet your specific use case and compliance requirements, ensuring the toolkit aligns perfectly with your goals and industry standards.

  • Mitigate and resolve 15 Types of Fairness specific to your project with detailed guidance from problem definition to model monitoring.
  • Packed with practical methods, research-based strategies, and critical questions specific to your use case.
  • Customised checklists for every phase in the AI/ML pipeline.

 

Limitations and their consequences

Here are some significant limitations and their consequences (Salado & Nilchiani, 2013) for you to address in your design.  

  1. In the absence of common frameworks, practitioners fall back on generic classifications or taxonomies to identify stakeholders’ roles and responsibilities.
  2. It may not be feasible to guarantee that all relevant stakeholders are included.
  3. It may not be feasible to verify the accuracy or appropriateness of all stakeholders involved.

These three limitations lead to negative consequences in the development of AI systems: 

  1. (from Limitation 1): Creativity in the stakeholder identification process is constrained by an overreliance on predefined categories, which discourages “outside-the-box” thinking.
  2. (from Limitation 1): Stakeholder representation is skewed because rigid categorisation excludes broader behavioural diversity.
  3. (from Limitation 2): Requirements are incomplete as not all relevant stakeholders are identified or analysed.
  4. (from Limitation 3): Requirements are inaccurate because they are derived from stakeholders who may not be appropriate or relevant.

 

Sources

DRCF. Fairness in AI: A view from the DRCF. https://www.drcf.org.uk/publications/blogs/fairness-in-ai-a-view-from-the-drcf/

Madaio, M.A., Stark, L., Wortman Vaughan, J. and Wallach, H., 2020. Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).

Madaio, M., Egede, L., Subramonyam, H., Wortman Vaughan, J. and Wallach, H., 2022. Assessing the fairness of AI systems: AI practitioners’ processes, challenges, and needs for support. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), pp.1-26.

Agarwal, A. and Agarwal, H., 2024. A seven-layer model with checklists for standardising fairness assessment throughout the AI lifecycle. AI and Ethics, 4(2), pp.299-314.

 


Related Courses & AI Consulting

Designing Safe, Secure and Trustworthy AI

Workshop on meeting EU AI Act compliance

Contact us to discuss your requirements

