This article explores how Isolation Forest can be used for anomaly detection and how SHAP (KernelSHAP and TreeSHAP) can explain the anomalies it flags, showing which features contribute most to each anomaly score.
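To make the idea concrete, here is a minimal sketch using scikit-learn's IsolationForest together with the shap library's TreeExplainer (TreeSHAP) and KernelExplainer (KernelSHAP). The synthetic data, injected outliers, and parameter choices are illustrative assumptions, not taken from the article; the point is simply how the SHAP values attribute an anomaly score to individual features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
import shap

# Toy data: a tight Gaussian cluster plus a few injected outliers (assumed example data).
rng = np.random.RandomState(42)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
X_outliers = rng.uniform(low=6.0, high=8.0, size=(5, 4))
X = np.vstack([X_normal, X_outliers])

# Fit an Isolation Forest; decision_function returns anomaly scores (lower = more anomalous).
iso = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
iso.fit(X)
scores = iso.decision_function(X)
most_anomalous = np.argmin(scores)

# TreeSHAP: exact, fast attributions that exploit the tree structure of the forest.
tree_explainer = shap.TreeExplainer(iso)
tree_shap_values = tree_explainer.shap_values(X)
print("TreeSHAP contributions for the most anomalous sample:")
print(tree_shap_values[most_anomalous])

# KernelSHAP: model-agnostic (and slower) alternative that wraps the scoring function.
background = shap.sample(X, 50)
kernel_explainer = shap.KernelExplainer(iso.decision_function, background)
kernel_shap_values = kernel_explainer.shap_values(X[most_anomalous:most_anomalous + 1])
print("KernelSHAP contributions for the same sample:")
print(kernel_shap_values)
```

In this setup, features with large negative SHAP values are the ones pushing the sample's score toward "anomalous", which is the kind of per-feature insight the article is after.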
Generating counterfactual explanations got a lot easier with CFNOW. But what are counterfactual explanations, and how can you use them?