Tags: shap* + xai*


  1. MIT researchers developed a system that uses large language models to convert AI explanations into narrative text that users can understand more easily, with the aim of helping them make better-informed decisions about whether to trust a model.

    The system, called EXPLINGO, uses large language models (LLMs) to convert machine-learning explanations, such as SHAP plots, into readable narrative text. It has two parts: NARRATOR, which generates natural-language explanations tailored to user preferences, and GRADER, which evaluates the quality of those narratives. Clear, concise narratives are meant to help users understand, and decide whether to trust, a model's predictions.

    The researchers hope to extend the system so that users can ask the model interactive follow-up questions.
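
    As a rough sketch of that two-stage design (an illustration, not the authors' code; call_llm is a hypothetical stand-in for whatever LLM client is actually used):

        # Sketch of an EXPLINGO-style NARRATOR/GRADER pipeline.
        # call_llm is a hypothetical placeholder for a real LLM client.
        def call_llm(prompt: str) -> str:
            raise NotImplementedError("plug in a real LLM client here")

        def narrator(attributions: dict[str, float], style_example: str) -> str:
            # NARRATOR: render SHAP-style feature attributions as a narrative,
            # imitating a short example the user supplies to set the tone.
            table = "\n".join(f"{name}: {value:+.3f}"
                              for name, value in attributions.items())
            prompt = (
                "Describe these feature attributions as a short narrative, "
                f"matching the style of this example:\n{style_example}\n\n{table}"
            )
            return call_llm(prompt)

        def grader(narrative: str, attributions: dict[str, float]) -> str:
            # GRADER: have an LLM rate the narrative (e.g. on accuracy,
            # completeness, fluency, conciseness) against the raw attributions.
            prompt = (
                "Rate this explanation from 1-5 on accuracy, completeness, "
                f"fluency and conciseness, given the attributions "
                f"{attributions}:\n{narrative}"
            )
            return call_llm(prompt)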

  2. An article detailing how to build a flexible, explainable, and algorithm-agnostic ML pipeline with MLflow, focusing on preprocessing, model training, and SHAP-based explanations.
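
    A sketch of how such a pipeline often fits together (dataset, model choice, and artifact names here are illustrative assumptions, not the article's code):

        # Sketch: algorithm-agnostic training + SHAP explanation logged to MLflow.
        import matplotlib.pyplot as plt
        import mlflow
        import mlflow.sklearn
        import shap
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor

        X, y = load_diabetes(return_X_y=True, as_frame=True)

        with mlflow.start_run():
            model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
            mlflow.sklearn.log_model(model, "model")
            mlflow.log_param("model_class", type(model).__name__)

            # shap.Explainer dispatches to a suitable algorithm for the model
            # (TreeExplainer here), which keeps the explanation step
            # algorithm-agnostic.
            explainer = shap.Explainer(model)
            shap_values = explainer(X)

            shap.plots.beeswarm(shap_values, show=False)
            mlflow.log_figure(plt.gcf(), "shap_beeswarm.png")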

  3. A plain-language guide to interpreting SHAP analyses, useful for explaining machine learning models to non-technical stakeholders, covering both local and global interpretability with a range of visualization methods.
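
    For reference, the two standard views such guides usually walk through look like this in the shap library (the model and dataset here are arbitrary choices for illustration):

        # Sketch: global vs. local SHAP views.
        import shap
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import GradientBoostingRegressor

        X, y = load_diabetes(return_X_y=True, as_frame=True)
        model = GradientBoostingRegressor(random_state=0).fit(X, y)

        shap_values = shap.Explainer(model)(X)

        # Global view: which features matter across the whole dataset.
        shap.plots.beeswarm(shap_values)

        # Local view: why the model made one specific prediction (row 0).
        shap.plots.waterfall(shap_values[0])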

  4. Generating counterfactual explanations got a lot easier with CFNOW, but what are counterfactual explanations, and how can I use them?
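
    A counterfactual explanation answers: what is the smallest change to this input that flips the model's prediction? The brute-force sketch below illustrates the concept only; it is not CFNOW's API.

        # Illustrative counterfactual search (concept only; not CFNOW's API):
        # walk from the factual instance toward the nearest example the model
        # assigns the opposite label, until the prediction flips.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression

        X, y = load_breast_cancer(return_X_y=True)
        model = LogisticRegression(max_iter=5000).fit(X, y)

        x = X[0]                                # factual instance
        label = model.predict([x])[0]

        # Nearest training point the model predicts as the opposite class.
        preds = model.predict(X)
        others = X[preds != label]
        anchor = others[np.argmin(np.linalg.norm(others - x, axis=1))]

        # Smallest interpolation step toward the anchor that flips the prediction.
        for alpha in np.linspace(0.0, 1.0, 101):
            counterfactual = (1 - alpha) * x + alpha * anchor
            if model.predict([counterfactual])[0] != label:
                break

        print(f"prediction flips at alpha={alpha:.2f}")
        print("largest feature changes:", np.argsort(np.abs(counterfactual - x))[-3:])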
