The Formalism Trap in Machine Learning

Understanding the Formalism Trap

Most AI teams focus solely on optimising fairness metrics such as demographic parity or equalised odds, rather than aligning AI with the real-world concerns, such as equity and accessibility, that those metrics are meant to serve.

This is the Formalism Trap: Fairness, a profoundly human and social concept, is reduced to purely mathematical terms.
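To make the reduction concrete, here is a minimal, self-contained sketch (not a library call) of what two of the most common formalisms actually compute, assuming binary 0/1 numpy arrays for the labels, predictions, and sensitive attribute:

```python
# A minimal sketch of two common fairness formalisms, assuming binary
# 0/1 numpy arrays. Illustrative only, not a production implementation.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-decision rates between the two groups."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest gap in true-positive or false-positive rates between groups."""
    gaps = []
    for label in (0, 1):  # label 1 gives the TPR gap, label 0 the FPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (sensitive == 0)].mean()
                        - y_pred[mask & (sensitive == 1)].mean()))
    return max(gaps)
```

Each function collapses Fairness to a single number. Whatever cannot be expressed as such a statistic, such as how a decision was reached or whether it can be contested, simply falls outside the model's view.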

Fairness is inherently multifaceted, encompassing:

  • Procedural Fairness: Ensuring decision-making processes are transparent, consistent, and open to meaningful participation.
  • Contextual Fairness: Recognising how varying contexts affect Fairness.
  • Contestable Fairness: Providing mechanisms to question and challenge decisions, ensuring accountability and adaptability.

These critical dimensions extend beyond what algorithms alone can fully address, and no single definition fits every context.

The Formalism Trap arises whenever we treat Fairness as something that can be solved purely through mathematics. Fairness is more than equations; it is about people and their social context.

Why We Cannot Simplify Everything into Algorithms

Gary Klein’s work on naturalistic decision-making highlights a critical insight: real-world decisions are deeply influenced by context, intuition, and experience (Klein, 2008). Unlike the controlled environments where algorithms excel, real-world situations are uncertain, complex, and shaped by inherent biases that cannot always be codified into mathematical rules.

Daniel Kahneman (2011) distinguishes between two modes of thinking:

  • System 1: Fast, automatic, and intuitive decision-making with little or no effort.
  • System 2: Slow, deliberate, and analytical thinking, used for complex reasoning and problem-solving.

Algorithms emulate System 2 thinking: they analyse data methodically and return an “optimal” decision for structured problems. However, they often fall short in dynamic, high-stakes environments where human intuition (System 1) plays a crucial role.

An Example of The Formalism Trap in Machine Learning

Consider lending decisions, an example adapted from Weerts (2021). A machine learning model might be designed to optimise for creditworthiness based on historical data, but it cannot easily incorporate unquantifiable factors such as an applicant’s resilience during financial hardship. Moreover, the model’s decision space is limited to approving or rejecting a loan application. This binary view ignores the human nuances of financial decisions. In reality, many more actions may be available, such as recommending a different type of loan or offering financial education to applicants.

Failing to consider these options is a clear case of the Formalism Trap. By forcing a rich, multifaceted problem into a narrow mathematical frame, the system cannot account for what “fairness” means in the broader context of lending practices.
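The sketch below contrasts the binary frame with a wider action space. It is hypothetical: the thresholds, attributes, and action names are illustrative assumptions of mine, not anything prescribed by Weerts (2021).

```python
# A hypothetical sketch contrasting a binary decision space with a wider
# one. All thresholds and action names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    repay_probability: float  # model's estimated repayment probability, 0..1
    income_stability: float   # 0..1, e.g. a tenure-of-employment proxy

def binary_decision(a: Applicant) -> str:
    # The Formalism Trap: the whole problem collapses to one cut-off.
    return "approve" if a.repay_probability >= 0.7 else "reject"

def richer_decision(a: Applicant) -> str:
    # A wider action space surfaces options the binary frame hides.
    if a.repay_probability >= 0.7:
        return "approve"
    if a.repay_probability >= 0.5:
        return "offer a smaller or secured loan"
    if a.income_stability >= 0.6:
        return "refer to a financial-education programme"
    return "reject, with an explanation and reapplication guidance"
```

Even the richer version is still a formalism; the point is that the set of available actions, not just the decision rule, is a design choice with fairness consequences.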


Design Mitigations for The Formalism Trap

Breaking free from the Formalism Trap is not easy, but it is essential if we want to create AI systems that are both effective and equitable. When the product team formulates the problem in terms an algorithm can work with, it should weigh how different definitions of fairness, including the mathematical formalisms, address different groups’ concerns in different contexts, as the sketch below illustrates.
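To see why the choice of definition matters, consider the toy simulation below. The data are synthetic and the group base rates are assumptions made purely for illustration: a predictor is built to satisfy demographic parity by construction, yet it still shows a large true-positive-rate gap, so it would fail an equalised-odds test.

```python
# Toy illustration on synthetic data: the same predictions can look fair
# under one formalism and unfair under another. Base rates are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # binary sensitive attribute
y_true = np.where(group == 0,
                  rng.random(n) < 0.6,             # group 0: 60% would repay
                  rng.random(n) < 0.4)             # group 1: 40% would repay

# The score is informative for group 0 but pure noise for group 1.
score = np.where(group == 0,
                 y_true + rng.normal(0, 0.1, n),
                 rng.random(n))

# Approve exactly the top half of each group: demographic parity holds
# by construction, because both groups get the same approval rate.
y_pred = np.zeros(n, dtype=bool)
for g in (0, 1):
    mask = group == g
    y_pred[mask] = score[mask] >= np.quantile(score[mask], 0.5)

dp_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
tpr_gap = abs(y_pred[(group == 0) & y_true].mean()
              - y_pred[(group == 1) & y_true].mean())
print(f"demographic parity gap: {dp_gap:.2f}")     # ~0.00, looks "fair"
print(f"true-positive rate gap: {tpr_gap:.2f}")    # ~0.33, looks "unfair"
```

Neither number is “the” fairness of the system. Each answers a different contextual question, and choosing between them, or recognising that neither suffices, is a socio-technical decision rather than a mathematical one.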

Free Resources

Best practices for mitigating the Formalism Trap are available under Free Downloads.

AI Bias Mitigation Package – £999

The ultimate resource for organisations ready to tackle bias at scale, from problem definition through to model monitoring, to drive responsible AI practices.

  • Mitigate and resolve 15 types of bias specific to your project, with detailed guidance from problem definition to model monitoring.
  • Packed with practical methods, research-based strategies, and critical questions to guide your team.
  • Comprehensive checklists with 75+ design cards for every phase of the AI/ML pipeline.

Get Bias Mitigation Package (Delivery within 2-3 days)

Customised AI Bias Mitigation Package – £2499

We’ll customise the design cards and checklists to meet your specific use case and compliance requirements, ensuring the toolkit aligns with your goals and industry standards.

  • Mitigate and resolve 15 types of bias specific to your project, with detailed guidance from problem definition to model monitoring.
  • Packed with practical methods, research-based strategies, and critical questions specific to your use case.
  • Customised checklists and 75+ design cards for every phase of the AI/ML pipeline.

Get Customised AI Bias Mitigation Package (Delivery within 7 days)


Conclusion

Fairness is not a box you check—it is a process. Some of the most successful projects I have worked on embraced this reality, iterating based on ongoing feedback from affected communities and stakeholders.

Avoiding the Formalism Trap is not just about building better AI systems but also about building trust. We create technically robust and socially responsible systems when we approach Fairness with humility, acknowledging its complexity and engaging with the people it impacts. If you have ever felt that fairness metrics fell short, or struggled to relate them to real-world challenges, I would love to hear your thoughts. Let us keep this conversation going, because Fairness is too important to leave to algorithms alone.

 

Sources

Alshenqeeti, H., 2014. Interviewing as a data collection method: A critical review. English Linguistics Research, 3(1), pp.39-45.

Andrus, M., Dean, S., Gilbert, T.K., Lambert, N. and Zick, T., 2020. AI development for the public interest: From abstraction traps to sociotechnical risks. In 2020 IEEE International Symposium on Technology and Society (ISTAS), pp.72-79. IEEE.

Aronson, R.E., Wallis, A.B., O’Campo, P.J., Whitehead, T.L. and Schafer, P., 2007. Ethnographically informed community evaluation: A framework and approach for evaluating community-based initiatives. Maternal and Child Health Journal, 11, pp.97-109.

Busetto, L., Wick, W. and Gumbinger, C., 2020. How to use and assess qualitative research methods. Neurological Research and Practice, 2(1), p.14.

Chen, Z., Zhang, J.M., Hort, M., Harman, M. and Sarro, F., 2024. Fairness testing: A comprehensive survey and analysis of trends. ACM Transactions on Software Engineering and Methodology, 33(5), pp.1-59.

Dhukaram, A.V. and Baber, C., 2015. Modelling elderly cardiac patients’ decision making using Cognitive Work Analysis: Identifying requirements for patient decision aids. International Journal of Medical Informatics, 84(6), pp.430-443.

Ferrara, E., 2023. Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), p.3.

Kahneman, D., 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux.

Klein, G., 2008. Naturalistic decision making. Human Factors, 50(3), pp.456-460.

Kuang, K., Li, L., Geng, Z., Xu, L., Zhang, K., Liao, B., Huang, H., Ding, P., Miao, W. and Jiang, Z., 2020. Causal inference. Engineering, 6(3), pp.253-263.

Neuhauser, L. and Kreps, G.L., 2011. Participatory design and artificial intelligence: Strategies to improve health communication for diverse audiences. In 2011 AAAI Spring Symposium Series.

Onwuegbuzie, A.J., Dickinson, W.B., Leech, N.L. and Zoran, A.G., 2009. A qualitative framework for collecting and analyzing data in focus group research. International Journal of Qualitative Methods, 8(3), pp.1-21.

Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J., 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp.59-68.

Weerts, H.J., 2021. An introduction to algorithmic fairness. arXiv preprint arXiv:2105.05595.

Wilson, V., 2012. Research methods: Interviews.
