The Framing Trap in Machine Learning

Understanding the Framing Trap

In recent years, the spotlight on responsible AI and human-centered AI has intensified, driven by the failures of AI systems and their profound impact on individuals, organisations, and society. One significant underlying issue is what is known as the Framing Trap in machine learning solutions (Selbst et al., 2019). This trap arises when:

  1. The humans—whether internal teams or external communities—who will use or be affected by ML tools aren’t meaningfully included in defining the problem.
  2. Key societal considerations, like fairness, are often excluded or oversimplified during the abstraction of social problems into technical models. Since fairness is complex and context-dependent, failing to address it results in biased or incomplete solutions.

Addressing the Framing Trap requires reframing how we design ML solutions by centering people, their diverse contexts, and their lived experiences.

 

An Example of The Framing Trap in Machine Learning

The following example, from Weerts (2021), illustrates the Framing Trap:

“Consider a scenario in which judges need to decide whether a defendant is detained. To assist them in their decision-making process, they may be provided with a machine learning model that predicts the risk of recidivism; i.e., the risk that the defendant will re-offend. Notably, the final decision of the judge determines the real-world consequences, not the model’s prediction. Hence, if fairness is a requirement, it is not sufficient to consider the output of the model; you also need to consider how the predictions are used by the judges.

In many real-world systems, several machine learning models are deployed at the same time at different points in the decision-making process. Unfortunately, a system of components that seem fair in isolation is not automatically a fair system; i.e., Fair + Fair ≠ Fair.” (Weerts, 2021)
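To see why Fair + Fair ≠ Fair, consider a minimal sketch in Python. The groups, stage decisions, and correlation structure below are invented purely for illustration: each of two decision stages approves exactly half of every group, so each stage satisfies demographic parity on its own, yet the combined decision, granted only when both stages approve, does not.

```python
import numpy as np

# Hypothetical two-stage decision process for two groups of 4 people each.
# Stage 1 and stage 2 each approve exactly half of every group, so each
# stage satisfies demographic parity (equal selection rates) in isolation.

# Group A: the two stages approve the SAME two people.
stage1_a = np.array([1, 1, 0, 0])
stage2_a = np.array([1, 1, 0, 0])

# Group B: the two stages approve DIFFERENT people (anti-correlated).
stage1_b = np.array([1, 1, 0, 0])
stage2_b = np.array([0, 0, 1, 1])

for name, s1, s2 in [("A", stage1_a, stage2_a), ("B", stage1_b, stage2_b)]:
    final = s1 & s2  # approved only if both stages approve
    print(f"Group {name}: stage 1 rate={s1.mean():.2f}, "
          f"stage 2 rate={s2.mean():.2f}, final rate={final.mean():.2f}")

# Output:
# Group A: stage 1 rate=0.50, stage 2 rate=0.50, final rate=0.50
# Group B: stage 1 rate=0.50, stage 2 rate=0.50, final rate=0.00
```

The disparity comes from how the stages interact within each group, which is invisible when each component is audited in isolation.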

 

Guide to Addressing the Framing Trap in Machine Learning

The example above demonstrates that fairness is not just a property of an algorithm but an outcome of the entire system, including human and institutional decisions, and achieving it requires an understanding of the whole socio-technical context. For instance, in my work supporting clinical decision-making, I conducted several qualitative studies to explore how predictions are interpreted, acted upon, and integrated into the decision-making process.

For fairness in ML models, it’s important to evaluate the system as a whole. Fairness in individual components does not automatically mean fairness in the entire system, especially when multiple models are involved.

To address the Framing Trap, we must take a holistic approach. This means framing the problem and evaluating the solution by considering all relevant components and actors within the socio-technical system.
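In practice, a holistic evaluation can start by computing the same fairness metric at two points in the pipeline: the model’s raw predictions and the final human decisions. The sketch below uses the open-source fairlearn library; the defendant data and the judges’ override pattern are invented placeholders.

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate

# Placeholder data: model risk predictions vs. the judges' final decisions
# for the same defendants (1 = detain, 0 = release). Invented for illustration.
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
model_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])      # model looks balanced
final_decision = np.array([1, 0, 1, 0, 1, 1, 1, 0])  # judges detain group B more

# y_true is required by MetricFrame but unused by selection_rate,
# so pass the decisions themselves as a stand-in.
for label, decisions in [("model output", model_pred),
                         ("final decision", final_decision)]:
    mf = MetricFrame(metrics=selection_rate,
                     y_true=decisions, y_pred=decisions,
                     sensitive_features=group)
    print(label, dict(mf.by_group), "gap:", mf.difference())

# model output: A=0.50, B=0.50, gap 0.00
# final decision: A=0.50, B=0.75, gap 0.25
```

An audit of model_pred alone would find equal selection rates across groups, while the same audit of final_decision exposes a disparity introduced downstream of the model.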

Here are some resources to help you address the Framing Trap:

Free Resources for Framing Trap Mitigation

Best practices and design considerations for mitigating the Framing Trap in machine learning (click the free download link).

 
 
AI Bias Mitigation Package – £999

The ultimate resource for organisations ready to tackle bias at scale, from problem definition through to model monitoring, and to drive responsible AI practices.

  - Mitigate and resolve 15 types of bias specific to your project, with detailed guidance from problem definition to model monitoring.
  - Packed with practical methods, research-based strategies, and critical questions to guide your team.
  - Comprehensive checklists with 75+ design cards for every phase of the AI/ML pipeline.

Get the Bias Mitigation Package (delivery within 2-3 days)
 
Customised AI Bias Mitigation Package – £2499

We’ll customise the design cards and checklists to meet your specific use case and compliance requirements, ensuring the toolkit aligns with your goals and industry standards.

  - Mitigate and resolve 15 types of bias specific to your project, with detailed guidance from problem definition to model monitoring.
  - Packed with practical methods, research-based strategies, and critical questions specific to your use case.
  - Customised checklists and 75+ design cards for every phase of the AI/ML pipeline.

Get the Customised AI Bias Mitigation Package (delivery within 7 days)

By adopting this holistic approach, organisations can minimise the risks of narrowly framed problems and ensure the development of effective, inclusive, and context-aware solutions.

 

Sources

Andrus, M., Dean, S., Gilbert, T.K., Lambert, N. and Zick, T., 2020, November. AI development for the public interest: From abstraction traps to sociotechnical risks. In 2020 IEEE International Symposium on Technology and Society (ISTAS) (pp. 72-79). IEEE.

Brunnbauer, M., Piller, G. and Rothlauf, F., 2021. idea-AI: Developing a Method for the Systematic Identification of AI Use Cases. In AMCIS.

Dhukaram, A.V. and Baber, C., 2015. Modelling elderly cardiac patients’ decision making using Cognitive Work Analysis: identifying requirements for patient decision aids. International Journal of Medical Informatics, 84(6), pp.430-443.

Hofmann, P., Jöhnk, J., Protschky, D. and Urbach, N., 2020, March. Developing Purposeful AI Use Cases-A Structured Method and Its Application in Project Management. In Wirtschaftsinformatik (Zentrale Tracks) (pp. 33-49).

Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J., 2019, January. Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59-68).

Weerts, H.J., 2021. An introduction to algorithmic fairness. arXiv preprint arXiv:2105.05595.
