In machine learning, features are the inputs that are fed into a model to make predictions. However, not all features contribute equally to the model’s accuracy. Feature Sensitivity Analysis (FSA) helps quantify the impact of each feature on the model’s output and allows for better-informed feature selection, which is critical for building efficient, accurate models.
Feature Sensitivity Analysis typically involves systematically varying the values of input features and observing the resulting changes in the model’s output. This method can be applied to both regression and classification models.
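As a minimal sketch of this idea (the model and function names here are illustrative, not from any cited study), a simple one-at-a-time analysis perturbs each feature by a small amount and records how much the model's output moves:

```python
import numpy as np

def one_at_a_time_sensitivity(predict, X, delta=0.1):
    """Perturb each feature in turn and record the mean absolute
    change in the model's output (a simple local sensitivity score)."""
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += delta          # nudge feature j only
        scores.append(np.mean(np.abs(predict(X_pert) - base)))
    return np.array(scores)

# Toy model: the output depends strongly on x0, weakly on x1, not at all on x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
predict = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]

scores = one_at_a_time_sensitivity(predict, X)
# x0 receives the highest score, x2 a score of zero
```

For the linear toy model the scores recover the coefficient magnitudes scaled by `delta`; for real, non-linear models the scores are local and depend on the chosen perturbation size.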
Why is Feature Sensitivity Analysis Important?
- Feature Selection: By assessing the contribution of each feature to the model’s output, FSA identifies the most influential features. This allows practitioners to focus on the most relevant features, improving model efficiency and reducing computational overhead.
- Interpretability: FSA provides valuable insight into how the model makes decisions, which is crucial for transparency, especially in industries such as healthcare, finance, and law where explainability is critical.
- Model Robustness: Sensitivity analysis can identify the features to which the model's output is most sensitive. Features that lead to large variations in output for small changes in input are candidates for further scrutiny, helping ensure that the model remains robust to small perturbations or errors in the data.
- Handling Multicollinearity: In datasets where features are highly correlated, FSA can reveal which features contribute redundantly to the model’s predictions, allowing you to reduce multicollinearity and model complexity.
Example: Sensitivity Analysis in Cardiac MRI Segmentation
In a study on cardiac MRI segmentation (Ankenbrand et al., 2021), sensitivity analysis was used to identify which features most significantly affected segmentation accuracy. Applying FSA showed that certain features related to image texture and structure were more important than others for accurately segmenting heart tissue. This insight allowed practitioners to prioritize those features, improving the model's predictive accuracy.
Methods for Feature Sensitivity Analysis
There are several approaches to performing Feature Sensitivity Analysis, including:
- Partial Dependence Plots (PDP): These plots show the relationship between one or two features and the predicted outcome while averaging over the values of the remaining features. They are useful for understanding how changes in a feature affect the model’s predictions.
- Permutation Feature Importance: This technique involves randomly shuffling a feature’s values and measuring the resulting change in the model’s performance. A large drop in performance after shuffling indicates that the feature is influential in the prediction.
- Global Sensitivity Analysis (GSA): This approach examines the impact of all input features on the model simultaneously, typically by attributing the variance of the output to individual features and their interactions. It is particularly useful when dealing with complex, non-linear models and high-dimensional data.
- Shapley Values: Originating from cooperative game theory, Shapley values offer a way to distribute the “credit” for a model’s prediction across the input features, showing the relative contribution of each feature to the prediction.
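The partial dependence computation can be sketched by hand: for each grid value v, set the chosen feature to v for every row and average the model's predictions. Everything below (the `predict` function, the data, the grid) is a toy example, not a specific library's API:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """PD(v) = mean over rows i of f(x_i with feature set to v)."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v           # clamp the feature to v everywhere
        pd_values.append(predict(X_mod).mean())
    return np.array(pd_values)

# Toy model with a quadratic effect for feature 0
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
predict = lambda X: X[:, 0] ** 2 + X[:, 1]

grid = np.linspace(-2, 2, 9)
pd_curve = partial_dependence(predict, X, feature=0, grid=grid)
# the curve recovers the U-shaped (quadratic) effect of feature 0
```

Plotting `pd_curve` against `grid` gives the one-dimensional PDP; libraries such as scikit-learn provide equivalent functionality out of the box.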
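Permutation feature importance is equally simple to implement from scratch. The sketch below (toy model, illustrative names) measures the drop in R² after shuffling each column:

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Importance of feature j = drop in score after shuffling column j,
    averaged over n_repeats independent shuffles."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break the feature-target link
            drops.append(baseline - metric(y, predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Toy regression: y depends strongly on x0, weakly on x1, not on x2
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = 4.0 * X[:, 0] + 1.0 * X[:, 1]
predict = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1]
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - np.mean(y)) ** 2)

imp = permutation_importance(predict, X, y, r2)
# imp ranks x0 highest; x2, which the model ignores, scores zero
```

Note that shuffling an uninformative column leaves the predictions untouched, so its importance is exactly zero; highly correlated features, by contrast, can share importance and each appear less important than they jointly are.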
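For a small number of features, Shapley values can even be computed exactly by enumerating all feature subsets. In this sketch the value of a coalition S is the model's prediction with the features in S taken from the instance and the rest taken from a baseline (an assumption; practical libraries such as SHAP use approximations and other baselines):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x.
    v(S): prediction with features in S from x, the rest from baseline."""
    n = len(x)

    def v(S):
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return predict(z[None, :])[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # classic Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Toy model with an interaction between features 1 and 2
predict = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1] * X[:, 2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(predict, x, baseline)
# efficiency property: phi sums to f(x) - f(baseline)
```

The linear term's credit goes entirely to feature 0, while the interaction term's credit is split evenly between features 1 and 2, illustrating how Shapley values distribute shared contributions. Enumeration costs O(2^n) subsets, which is why approximate methods dominate in practice.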
Summary
For a more in-depth exploration of best practices and design considerations, refer to the resources below, which include practical steps and advanced techniques for tackling real-world challenges in machine learning.
Free Resources for Feature Sensitivity Analysis Design Considerations
Sampling Bias in Machine Learning
Measurement Bias in Machine Learning
Social Bias in Machine Learning
Representation Bias in Machine Learning
Sources
Ankenbrand, M.J., Shainberg, L., Hock, M., Lohr, D. and Schreiber, L.M., 2021. Sensitivity analysis for interpretation of machine learning based segmentation models in cardiac MRI. BMC Medical Imaging, 21, pp.1-8.
Kamalov, F., 2018, December. Sensitivity analysis for feature selection. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 1466-1470). IEEE.
Zhang, P., 2019. A novel feature selection method based on global sensitivity analysis with application in machine learning-based prediction model. Applied Soft Computing, 85, p.105859.