This article explores the use of Isolation Forest for anomaly detection and how SHAP (KernelSHAP and TreeSHAP) can be applied to explain the detected anomalies, showing which features drive each anomaly score.
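As a taste of the detection half, here is a minimal sketch of Isolation Forest on toy data with scikit-learn; the data, parameters, and variable names are illustrative, and the SHAP explanation step discussed in the article would follow on the fitted model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly "normal" points, plus three obvious outliers (toy data).
normal = rng.normal(0, 1, size=(200, 2))
outliers = np.array([[6.0, 6.0], [-7.0, 5.0], [8.0, -6.0]])
X = np.vstack([normal, outliers])

iso = IsolationForest(random_state=0).fit(X)
labels = iso.predict(X)        # +1 = inlier, -1 = anomaly
scores = iso.score_samples(X)  # lower = more anomalous

# The injected outliers rank among the most anomalous points;
# a SHAP explainer on `iso` would then attribute these scores to features.
worst = np.argsort(scores)[:3]
```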
This article explores how stochastic regularization in neural networks can improve generalization to unseen data, especially for high-cardinality categorical features. It uses visualizations and SHAP values to examine how entity embeddings respond to this regularization technique.
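To illustrate the core idea, here is a hedged NumPy sketch of an entity-embedding lookup for a high-cardinality categorical feature with inverted dropout (one common form of stochastic regularization) applied during training; the table size, dropout rate, and function names are assumptions for illustration, not the article's actual network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy entity-embedding table: each of 1000 category levels
# maps to a dense 8-dimensional vector.
n_categories, emb_dim = 1000, 8
embedding = rng.normal(0, 0.1, size=(n_categories, emb_dim))

def embed(ids, p_drop=0.25, training=True):
    """Look up embedding vectors; apply inverted dropout while training."""
    vecs = embedding[ids]
    if training and p_drop > 0:
        mask = rng.random(vecs.shape) >= p_drop   # randomly zero units
        vecs = vecs * mask / (1.0 - p_drop)       # rescale to keep expectation
    return vecs

ids = np.array([3, 17, 998])
train_out = embed(ids)                 # stochastic: noisy embedding vectors
eval_out = embed(ids, training=False)  # deterministic lookup at inference
```

At inference the lookup is deterministic, so the noise only perturbs the embeddings the network sees during training, which is what pushes them toward more robust representations of rare category levels.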
Generating counterfactual explanations got a lot easier with CFNOW, but what are counterfactual explanations, and how can you use them?
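To make the concept concrete before reaching for CFNOW: a counterfactual explanation is a minimally changed input that flips a model's prediction. The sketch below finds one by greedy search over a toy scikit-learn classifier; it is purely illustrative and is not CFNOW's algorithm or API, and the data and function names are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary classification data: two features, label set by their sum.
rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def simple_counterfactual(x, model, target=1, step=0.05, max_iter=1000):
    """Greedy illustration: nudge one feature at a time toward the target
    class until the prediction flips. Not CFNOW's method."""
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf
        # Try small +/- steps on each feature; keep the most promising move.
        best, best_p = cf, -1.0
        for i in range(len(cf)):
            for d in (step, -step):
                cand = cf.copy()
                cand[i] += d
                p = model.predict_proba(cand.reshape(1, -1))[0, target]
                if p > best_p:
                    best, best_p = cand, p
        cf = best
    return cf

x = np.array([-1.0, -1.0])          # predicted class 0
cf = simple_counterfactual(x, clf)  # nearby point predicted class 1
```

The answer "change these features by this much and the outcome flips" is exactly the kind of actionable explanation the article builds up to.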