Understanding the Solutionism Trap
I have seen firsthand how teams leap into technical solutions without fully considering the broader social and cultural context. The allure of data and algorithms can overshadow the nuances of human needs, leading to the solutionism trap: a tendency to reach for technology as a fix-all even when it is not the right tool. As a result, translating real-world problems into machine learning (ML) tasks often feels like solving a puzzle in which some of the pieces do not quite fit.
Causes of the Solutionism Trap in Machine Learning
The solutionism trap emerges when we fail to recognise that:
- Machine learning cannot fully address the social and cultural factors inherent in real-world problems. Algorithms excel at pattern recognition but lack the capacity to understand human complexities.
- A technology-first mindset leaves no room to ask whether technology is the right fit for the problem at all.
- Fairness and ethics are fluid concepts, often contested politically, making them difficult, if not impossible, to fully capture in mathematical terms (a point made concrete in the sketch after this list).
- Social and cultural dynamics do not easily translate into algorithms, leaving critical aspects of a problem unsolved.
- Teams may unintentionally overestimate an ML system’s potential benefits while underestimating its limitations and risks, a common pitfall of cognitive and optimism biases.
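To make the fairness point above concrete, here is a minimal sketch using hypothetical toy labels, predictions, and group labels. It shows how two common formalisations of fairness, demographic parity and equal opportunity, can disagree about the very same predictions; which gap matters is a contextual and political judgement that the mathematics alone cannot settle.

```python
# A minimal sketch (hypothetical toy data) showing that two common
# formalisations of "fairness" can disagree on the same predictions.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        positives = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr(0) - tpr(1))

# Hypothetical labels, predictions, and group membership.
y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # 0.0 -> "fair" by demographic parity
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5 -> "unfair" by equal opportunity
```

In this toy example the same predictions look fair by one definition and unfair by the other, which is exactly the kind of contested trade-off that a single mathematical formulation cannot resolve.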
Real-Life Examples of the Solutionism Trap in Machine Learning
One example that sticks with me comes from Weerts (2021):
“consider eligibility for social welfare benefits in the Netherlands. Although the criteria for eligibility are set in the law, some variables, e.g. living situation, are difficult to measure quantitatively. Moreover, the Dutch legal system contains the possibility to deviate from the criteria due to compelling personal circumstances. It is impossible to anticipate all context dependent situations in advance. As a result, machine learning may not be the best tool for this job. In other scenarios, machine learning may be inappropriate because it lacks human connection. Compelling personal circumstances often allow exceptions to the rules. These nuances cannot always be modelled, making ML an ill-suited tool for the job.”
This reminds me of conversations with colleagues about keeping the human element front and centre, mainly when systems deal with vulnerable populations.
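As a rough illustration of that point, the sketch below uses entirely hypothetical criteria, thresholds, and field names to show why such eligibility decisions resist being framed as a pure prediction task: the codified rules are the easy part to automate, but the statutory room for compelling personal circumstances has no fixed feature set, so the honest design routes those cases to a person rather than to a model.

```python
# A minimal sketch (hypothetical rules and field names, not the actual Dutch
# criteria) of an eligibility check that keeps the unmodellable part human.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Application:
    income: float                        # measurable, rule-based
    household_size: int                  # measurable, rule-based
    living_situation: Optional[str]      # hard to measure quantitatively
    stated_circumstances: Optional[str]  # open-ended; the law allows deviation

def assess(app: Application) -> str:
    if app.stated_circumstances or app.living_situation is None:
        # Compelling personal circumstances and unmeasured variables cannot be
        # enumerated in advance, so these cases go to a caseworker, not a model.
        return "refer to human caseworker"
    # The codified criteria are straightforward to automate (threshold is assumed).
    meets_income_test = app.income < 1_500 * app.household_size
    return "eligible" if meets_income_test else "not eligible"

print(assess(Application(900.0, 1, "rented flat", None)))               # eligible
print(assess(Application(900.0, 1, None, "caring for a sick parent")))  # refer to human caseworker
```

The specific rules here are invented; the point is the shape of the design, in which the part of the decision that cannot be enumerated in advance stays with a human.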
Another example highlights hospital care:
“consider a person who is hospitalised. In theory, it may be possible to develop a robot nurse who is perfectly capable of performing tasks such as inserting an IV or washing the patient. However, the patient may also value the genuine interest and concern of a nurse– in other words, a human connection, something a machine learning model cannot (or even should not) provide”
This example underscores something I have always felt strongly about—some tasks are not just about efficiency. They are about humanity, and we should respect that boundary.
Issues Underlying the Solutionism Trap in ML
Selbst et al.'s (2019) discussion of the solutionism trap provides valuable insights:
“To understand whether to build, we must also understand the existing social system. In the risk assessment context, we need to know how arresting officers, prosecutors, and judges introduce bias into both the process and the data. We need to understand how social factors (e.g., poverty) shape criminal activity as well as the contested ideas of the role of criminal justice in the first place. We need to understand how concepts of fairness that surround assessing someone’s risk are political, contested, and may shift over time. More concretely, this leads to questions about, for example, whether judges are elected and responsive to political shifts, whether “failure to appear” is culturally and practically a proxy for being poor, or how demographics of the jurisdiction may change in the near future. If the fairness concept is contested or shifting, it might not easily be modeled.
One might think that the uncertainty could itself be modeled, and that leads to the second issue. When there is not enough information to understand everything that is important to a context, approximations are as likely to make things worse as better. This could occur because some of the aforementioned traps have not been resolved or because there is not enough empirical evidence to know. In such a case—and especially when the stakes are high, as they are in criminal justice—it is prudent to study what might happen before implementing a technology simply based on its potential to improve the situation. But it could also be that a particular system relies on unmeasurable attributes. Trying to predict how politics will change is difficult. Human preferences are not rational and human psychology is not conclusively measurable [63]. Whether one is trying to model political preference or something else, the system could just be too complex, requiring a computationally and observationally impossible amount of information to model properly. In that case, there should be heavy resistance to implementing a new technology at all.”
Due to this complexity, Article 5 of the EU AI Act prohibits AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity.
Guide to Avoiding the Solutionism Trap in ML
Experts like Selbst et al. (2019) have emphasised the need to deeply understand the social systems we aim to improve with ML. Questions like “How do biases in the justice system shape data and outcomes?” or “Can fairness remain consistent in shifting political landscapes?” often go unasked. However, these are critical to designing ethical AI. For example, we can’t assume fairness metrics will hold steady over time in criminal justice. As political priorities shift, so too do societal definitions of fairness.
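One practical consequence is that whatever fairness measure a team does adopt needs to be re-checked over time rather than validated once at deployment. The sketch below, using hypothetical quarterly batches and an assumed tolerance threshold, shows the basic shape of that kind of monitoring.

```python
# A minimal sketch (hypothetical data, assumed tolerance) of monitoring a
# fairness metric over time instead of assuming it holds steady.

def parity_gap(batch):
    """Gap in positive-prediction rates between two groups in one time window."""
    g0 = [p for p, g in batch if g == 0]
    g1 = [p for p, g in batch if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

# Hypothetical (prediction, group) pairs per quarter.
quarters = {
    "Q1": [(1, 0), (0, 0), (1, 1), (0, 1)],
    "Q2": [(1, 0), (1, 0), (0, 1), (0, 1)],
}

THRESHOLD = 0.2  # assumed tolerance; in practice this is a policy choice, not a constant

for name, batch in quarters.items():
    gap = parity_gap(batch)
    status = "flag for review" if gap > THRESHOLD else "ok"
    print(f"{name}: parity gap = {gap:.2f} ({status})")
```

Both the metric and the threshold are choices that should be revisited as the surrounding political and social context shifts, which is precisely the point.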
Tackling the solutionism trap requires a systematic, proactive approach. You can get started with these resources:
Free Resources for Solutionism Trap Mitigation
Best practices and design mitigations for the solutionism trap (click Free Download).
AI Bias Mitigation Package – £999
The ultimate resource for organisations ready to tackle bias at scale, from problem definition through to model monitoring, to drive responsible AI practices.
Customised AI Bias Mitigation Package – £2499
Conclusion
Seeing these insights reflected in practice has been a rewarding and eye-opening journey for me, and I hope others can find value in taking a similar path.
Sources
Andrus, M., Dean, S., Gilbert, T.K., Lambert, N. and Zick, T., 2020, November. AI development for the public interest: From abstraction traps to sociotechnical risks. In 2020 IEEE International Symposium on Technology and Society (ISTAS) (pp. 72-79). IEEE.
Bhat, A., Coursey, A., Hu, G., Li, S., Nahar, N., Zhou, S., Kästner, C. and Guo, J.L., 2023, April. Aspirations and practice of ML model documentation: Moving the needle with nudging and traceability. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-17).
Bilal, H. and Black, S., 2005, March. Using the ripple effect to measure software quality. In Software Quality Management - International Conference (Vol. 13, p. 183).
Chancellor, S., 2023. Toward practices for human-centered machine learning. Communications of the ACM, 66(3), pp.78-85.
Dhole, K., 2023, December. Large language models as SocioTechnical systems. In Proceedings of the Big Picture Workshop (pp. 66-79).
Hoel, T., Chen, W. and Pawlowski, J.M., 2020. Making context the central concept in privacy engineering. Research and Practice in Technology Enhanced Learning, 15(1), p.21.
Khlaaf, H., 2023. Toward comprehensive risk assessments and assurance of AI-based systems. Trail of Bits, 7.
Li, M., Wang, W. and Zhou, K., 2021. Exploring the technology emergence related to artificial intelligence: A perspective of coupling analyses. Technological Forecasting and Social Change, 172, p.121064.
Mordaschew, V., Herrmann, J.P. and Tackenberg, S., 2023. Methods of change impact analysis for product development: A systematic review of the literature. Proceedings of the Design Society, 3, pp.2655-2664.
Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J., 2019, January. Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59-68).
Weerts, H.J., 2021. An introduction to algorithmic fairness. arXiv preprint arXiv:2105.05595.
Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F. and Wilson, J., 2019. The what-if tool: Interactive probing of machine learning models. IEEE transactions on visualization and computer graphics, 26(1), pp.56-65.