Understanding Framing Bias
In recent years, the spotlight on responsible AI and human-centered AI has intensified—driven by the failures of AI systems and their profound impact on individuals, organisations, and society. One significant underlying issue is what’s known as The Framing Trap in machine learning solutions. This trap arises when:
- The humans—whether internal teams or external communities—who will use or be affected by ML tools aren’t meaningfully included in defining the problem.
- Key societal considerations, like fairness, are often excluded or oversimplified during the abstraction of social problems into technical models. Since fairness is complex and context-dependent, failing to address it results in biased or incomplete solutions.
Addressing the Framing Trap requires reframing how we design ML solutions by centering people, their diverse contexts, and their lived experiences.
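To see why oversimplifying fairness is a real risk, note that common statistical definitions of fairness can disagree with one another. The sketch below is a minimal, purely illustrative Python example with invented numbers (not drawn from any real system): the same predictions satisfy demographic parity (equal selection rates across groups) while violating equal opportunity (equal true positive rates), so picking a single metric during abstraction already encodes a value judgement.

```python
import numpy as np

# Purely synthetic example: two groups, A and B, with invented labels and predictions.
# Group A has 5 actual positives out of 10; group B has 2 actual positives out of 10.
y_true_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_true_b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

# The model flags exactly 4 people in each group (equal selection rates).
y_pred_a = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # catches 4 of A's 5 positives
y_pred_b = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # catches both of B's positives

def selection_rate(y_pred):
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    return y_pred[y_true == 1].mean()

print("Selection rate A:", selection_rate(y_pred_a))          # 0.4
print("Selection rate B:", selection_rate(y_pred_b))          # 0.4 -> demographic parity holds
print("TPR A:", true_positive_rate(y_true_a, y_pred_a))        # 0.8
print("TPR B:", true_positive_rate(y_true_b, y_pred_b))        # 1.0 -> equal opportunity does not
```

Which of these gaps matters depends on the context and on who bears the cost of errors, which is exactly the kind of judgement that gets lost when fairness is reduced to a single number.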
An Example of The Framing Trap in Machine Learning
The following example, quoted from Weerts (2021), illustrates the Framing Trap.
“Consider a scenario in which judges need to decide whether a defendant is detained. To assist them in their decision making process, they may be provided with a machine learning model that predicts the risk of recidivism; i.e., the risk that the defendant will re-offend. Notably, the final decision of the judge determines the real-world consequences, not the model’s prediction. Hence, if fairness is a requirement, it is not sufficient to consider the output of the model; you also need to consider how the predictions are used by the judges.
In many real-world systems, several machine learning models are deployed at the same time at different points in the decision-making process. Unfortunately, components that seem fair in isolation do not automatically imply a fair system; i.e., Fair + Fair ≠ Fair.” (Weerts, 2021)
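To make the “Fair + Fair ≠ Fair” point concrete, here is a small simulation sketch in Python. All of the numbers are invented for illustration: the risk model flags both groups at exactly the same rate (so it looks fair under demographic parity when audited on its own), but the judges, the second component of the system, are assumed to follow the model’s recommendation more often for one group than the other, so the final detention rates diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population split into two groups (all parameters are invented for illustration).
group = rng.choice(["A", "B"], size=n)

# Stage 1: the model flags 30% of each group as "high risk" -> equal selection rates,
# so this component looks fair under demographic parity in isolation.
model_flag = rng.random(n) < 0.30

# Stage 2: judges decide. Suppose they follow a "high risk" flag 90% of the time for
# group A but only 60% of the time for group B, and rarely detain without a flag.
follow_rate = np.where(group == "A", 0.90, 0.60)
detain = np.where(model_flag,
                  rng.random(n) < follow_rate,   # flagged: judge sometimes follows the model
                  rng.random(n) < 0.05)          # unflagged: small baseline detention rate

for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: model flag rate = {model_flag[mask].mean():.2f}, "
          f"detention rate = {detain[mask].mean():.2f}")
# Flag rates match (~0.30), but detention rates differ (~0.31 vs ~0.22):
# the system's outcome is unfair even though the model's output looked fair.
```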
Guide to Addressing the Framing Trap in Machine Learning
The example above demonstrates that fairness is not a property of the algorithm alone but an outcome of the entire system, including human and institutional decisions, and achieving it requires an understanding of the whole socio-technical context. For instance, in my work supporting clinical decision-making, I conducted several qualitative studies to explore how predictions are interpreted, acted upon, and integrated into the decision-making process.
To assess the fairness of an ML solution, it’s important to evaluate the system as a whole: fairness in individual components does not automatically mean fairness in the entire system, especially when multiple models are involved.
To address the Framing Trap, we must take a holistic approach. This means framing the problem and evaluating the solution by considering all relevant components and actors within the socio-technical system.
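In practice, a holistic evaluation means auditing the decisions that actually reach people, not only the model’s raw predictions. The sketch below assumes the open-source fairlearn library (alongside scikit-learn) and uses hypothetical placeholder arrays (model_flag, final_decision, labels, group) standing in for your own data; the same MetricFrame audit is simply run twice, once on the model output and once on the end-to-end outcome.

```python
# Illustrative sketch assuming the fairlearn library; the arrays below are hypothetical
# placeholders for real data (model output, final human decision, observed labels, group).
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import recall_score
import numpy as np

model_flag = np.array([1, 0, 1, 0, 1, 1, 0, 0])      # what the model predicted
final_decision = np.array([1, 0, 0, 0, 1, 1, 1, 0])  # what was actually decided downstream
labels = np.array([1, 0, 1, 0, 0, 1, 1, 0])          # observed outcomes
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for name, decisions in [("model output", model_flag), ("end-to-end decision", final_decision)]:
    mf = MetricFrame(
        metrics={"selection_rate": selection_rate, "recall": recall_score},
        y_true=labels,
        y_pred=decisions,
        sensitive_features=group,
    )
    print(name)
    print(mf.by_group)      # per-group metrics
    print(mf.difference())  # largest between-group gap for each metric
```

Auditing both stages with the same metrics makes it visible when a gap is introduced (or widened) after the model hands its prediction to the humans and institutions around it.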
Here are some resources to help you address The Framing Trap:
- Free Resources for Framing Trap Mitigation
- AI Bias Mitigation Package – £999
- Customised AI Bias Mitigation Package – £2499
By adopting this holistic approach, organisations can minimise the risks of narrowly framed problems and ensure the development of effective, inclusive, and context-aware solutions.
Sources
Andrus, M., Dean, S., Gilbert, T.K., Lambert, N. and Zick, T., 2020, November. AI development for the public interest: From abstraction traps to sociotechnical risks. In 2020 IEEE International Symposium on Technology and Society (ISTAS) (pp. 72-79). IEEE.
Brunnbauer, M., Piller, G. and Rothlauf, F., 2021. idea-AI: Developing a Method for the Systematic Identification of AI Use Cases. In AMCIS.
Dhukaram, A.V. and Baber, C., 2015. Modelling elderly cardiac patients decision making using Cognitive Work Analysis: identifying requirements for patient decision aids. International Journal of Medical Informatics, 84(6), pp.430-443.
Hofmann, P., Jöhnk, J., Protschky, D. and Urbach, N., 2020, March. Developing Purposeful AI Use Cases-A Structured Method and Its Application in Project Management. In Wirtschaftsinformatik (Zentrale Tracks) (pp. 33-49).
Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J., 2019, January. Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59-68).
Weerts, H.J., 2021. An introduction to algorithmic fairness. arXiv preprint arXiv:2105.05595.