klotz: friedman_s h-statistic* + explainability*

  1. This article explains the concept and use of Friedman's H-statistic for finding interactions in machine learning models.

    - The H-statistic is model-agnostic: it can be computed for any fitted model and captures interactions of any functional form, not only linear ones.
    - For a pair of features, the H-statistic compares the two-way partial dependence function with the sum of the two individual (one-way) partial dependence functions, evaluated at the observed data points.
    - The squared differences are summed over the observations and normalized by the variance of the two-way partial dependence, so the statistic lies between 0 and 1 (a runnable sketch of this calculation appears after the list).
    - The higher the H-statistic, the stronger the interaction effect; a value of 0 means the two features do not interact at all.
    - The article provides a step-by-step process for calculating the H-statistic, using a hypothetical dataset on the effect of asbestos exposure on lung cancer risk for smokers and non-smokers.
    - The author also discusses the assumptions and limitations of the H-statistic, such as its sensitivity to sparse or unbalanced data and the fact that the standard pairwise version does not capture interactions among more than two features.

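The bullet points above describe the calculation only in outline, so here is a minimal Python sketch of how a pairwise H-statistic can be computed from empirical partial dependence functions. It assumes a fitted scikit-learn regressor and a NumPy feature matrix; the helper names (`centered_pd`, `h_statistic`) and the `make_friedman1` toy data are illustrative choices, not taken from the bookmarked article.

```python
# A rough sketch of the pairwise H-statistic, assuming a fitted scikit-learn
# regressor and a NumPy feature matrix; helper names and the toy dataset are
# illustrative, not from the bookmarked article.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

def centered_pd(model, X, cols, values):
    """Empirical partial dependence of `model` on `cols`, evaluated at each
    row of `values`, then centered (as in Friedman & Popescu, 2008)."""
    pd_vals = np.empty(len(values))
    for i, v in enumerate(values):
        X_mod = X.copy()
        X_mod[:, cols] = v              # force the chosen feature(s) to this value
        pd_vals[i] = model.predict(X_mod).mean()
    return pd_vals - pd_vals.mean()

def h_statistic(model, X, j, k):
    """H_jk = sqrt( sum_i [PD_jk - PD_j - PD_k]^2 / sum_i PD_jk^2 ):
    the share of the two-way partial dependence's variance that is not
    explained by the two one-way (additive) effects."""
    pd_jk = centered_pd(model, X, [j, k], X[:, [j, k]])
    pd_j = centered_pd(model, X, [j], X[:, [j]])
    pd_k = centered_pd(model, X, [k], X[:, [k]])
    return np.sqrt(np.sum((pd_jk - pd_j - pd_k) ** 2) / np.sum(pd_jk ** 2))

# Toy check: the Friedman #1 benchmark has a known x0*x1 interaction,
# while x3 and x4 enter the target purely additively.
X, y = make_friedman1(n_samples=200, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
print("H(x0, x1) =", h_statistic(model, X, 0, 1))  # should be clearly above 0
print("H(x3, x4) =", h_statistic(model, X, 3, 4))  # should be close to 0
```

Because the Friedman #1 target contains a sin(pi * x0 * x1) term and purely additive x3 and x4 terms, H(x0, x1) should come out noticeably larger than H(x3, x4) in this sketch.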