Tags: shap*

The SHAP explainability algorithm or one of its implementations, such as the Python `shap` library. The SHAP algorithm is based on Shapley values from cooperative game theory.

  1. MIT researchers developed a system that uses large language models to convert AI explanations into narrative text that users can understand more easily, aiming to support better decisions about when to trust a model.

    The system, called EXPLINGO, leverages large language models (LLMs) to convert machine-learning explanations, such as SHAP plots, into easily comprehensible narrative text. It consists of two parts: NARRATOR, which generates natural-language explanations based on user preferences, and GRADER, which evaluates the quality of those narratives. The goal is to help users understand and trust machine-learning predictions by providing clear, concise explanations (a minimal sketch of the narration idea appears after this list).

    The researchers hope to further develop the system to enable interactive follow-up questions from users to the AI model.
  2. An article detailing how to build a flexible, explainable, and algorithm-agnostic ML pipeline with MLflow, covering preprocessing, model training, and SHAP-based explanations (see the MLflow sketch after this list).
  3. This article provides a non-technical guide to interpreting SHAP analyses, useful for explaining machine learning models to non-technical stakeholders. It covers both local and global interpretability through a range of visualizations (see the plotting sketch after this list).
  4. This article explores using Isolation Forest for anomaly detection and applying SHAP (KernelSHAP and TreeSHAP) to explain the detected anomalies, showing which features contribute to each anomaly score (see the Isolation Forest sketch after this list).
  5. This article explores how stochastic regularization in neural networks can improve performance on unseen categorical data, especially high-cardinality categorical features. It uses visualizations and SHAP values to examine how entity embeddings respond to the technique (see the embedding sketch after this list).
  6. Generating counterfactual explanations got a lot easier with CFNOW, but what are counterfactual explanations, and how can I use them? (See the counterfactual sketch after this list.)
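
For item 1, a minimal sketch of the narration idea: turning per-feature SHAP contributions into a plain-language sentence. EXPLINGO's NARRATOR uses an LLM for this step; the template function below (`narrate_shap` is a hypothetical name, not part of EXPLINGO) merely stands in for the model output to show the shape of the transformation.

```python
# Hypothetical stand-in for a NARRATOR-style step: summarize the largest
# SHAP contributions as a sentence. EXPLINGO generates this text with an LLM.

def narrate_shap(contributions: dict, base_value: float, top_k: int = 3) -> str:
    """Summarize the top-k SHAP contributions as narrative text."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    parts = [
        f"{name} {'raised' if value > 0 else 'lowered'} the prediction by {abs(value):.2f}"
        for name, value in ranked
    ]
    return f"Starting from a baseline of {base_value:.2f}, " + "; ".join(parts) + "."

print(narrate_shap({"income": 0.42, "age": -0.17, "tenure": 0.05}, base_value=0.31))
```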
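For item 2, a minimal MLflow sketch, assuming scikit-learn and recent MLflow and shap versions; the article's actual pipeline is more elaborate (preprocessing and algorithm-agnostic wrappers). The dataset and model here are placeholders.

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

with mlflow.start_run():
    # Log the trained model as a run artifact
    mlflow.sklearn.log_model(model, "model")
    # Compute SHAP values for a data sample and log the explanation
    # artifacts alongside the model (requires the shap package)
    mlflow.shap.log_explanation(model.predict, X.iloc[:50])
```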
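For item 3, a plotting sketch of the local-versus-global distinction using the `shap` library's own API: a waterfall plot explains one prediction, while a beeswarm plot summarizes feature effects across the dataset. The model and data are placeholders.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
explanation = explainer(X)

shap.plots.waterfall(explanation[0])   # local: why one prediction came out as it did
shap.plots.beeswarm(explanation)       # global: feature effects across all rows
```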
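For item 4, an Isolation Forest sketch, assuming a recent `shap` version: `shap.TreeExplainer` handles Isolation Forest directly (TreeSHAP), and `shap.KernelExplainer` is the model-agnostic fallback the article also covers. The synthetic data is a placeholder.

```python
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:5] += 6                      # inject a few obvious outliers

iso = IsolationForest(random_state=0).fit(X)

# TreeSHAP: TreeExplainer supports IsolationForest directly. The model-agnostic
# alternative is KernelSHAP, e.g.
# shap.KernelExplainer(iso.decision_function, shap.sample(X, 100)).
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions to each anomaly score
print(shap_values.shape)                     # (5, 4): one value per outlier and feature
```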
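For item 5, an embedding sketch in PyTorch of one form of stochastic regularization: dropout applied to the entity embedding of a high-cardinality categorical feature. The article's exact architecture and regularization scheme may differ.

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    def __init__(self, n_categories: int, dim: int = 16, p_drop: float = 0.3):
        super().__init__()
        self.embed = nn.Embedding(n_categories, dim)
        self.drop = nn.Dropout(p_drop)   # randomly zeroes embedding components during training

    def forward(self, category_ids: torch.Tensor) -> torch.Tensor:
        return self.drop(self.embed(category_ids))

block = EmbeddingBlock(n_categories=10_000)
out = block(torch.randint(0, 10_000, (32,)))   # batch of 32 category ids
print(out.shape)                               # torch.Size([32, 16])
```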
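For item 6, a counterfactual sketch: a generic illustration of what a counterfactual explanation is, namely a small feature change that flips the model's prediction. This is not CFNOW's API, just a brute-force search over one feature to make the concept concrete.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

factual = np.array([[-0.5, -0.2]])        # an instance predicted as class 0
for step in np.arange(0.1, 3.0, 0.1):     # greedily grow feature 0 until the label flips
    candidate = factual + np.array([[step, 0.0]])
    if model.predict(candidate)[0] != model.predict(factual)[0]:
        print(f"counterfactual: increase feature 0 by {step:.1f} -> prediction flips")
        break
```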
