SHAP
- @lundbergUnifiedApproachInterpreting2017
- helps users interpret the predictions of complex models
- many recent methods pursue this goal, but it is unclear how they are related or when one method is preferable to another
- SHAP (SHapley Additive exPlanations): a unified framework for interpreting predictions
- game theoretic approach to explain the output of any machine learning model
- connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions
- assigns each feature an importance value for a particular prediction
- identification of a new class of additive feature importance measures
- theoretical results showing there is a unique solution in this class with a set of desirable properties
- notable because several recent methods in the class lack the proposed desirable properties
- presents new methods with improved computational performance and/or better consistency with human intuition than previous approaches
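The credit-allocation idea behind the notes above can be sketched with the classic Shapley value: each feature's importance is its average marginal contribution over all orderings. The following is a minimal illustration, not the paper's (approximate) algorithms; the toy two-feature model, the zero baseline for "absent" features, and all function names are assumptions for the sketch.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for an n-player game with value function v(S).

    phi_i = sum over S subset of N\{i} of
            |S|! (n-|S|-1)! / n! * (v(S u {i}) - v(S))
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy model with an interaction term (assumed for illustration).
x = [1.0, 2.0]  # the instance being explained

def model(z):
    return z[0] + 2 * z[1] + z[0] * z[1]

def v(S):
    # Features outside S are "absent": replaced by a baseline of 0 here;
    # SHAP instead averages over a background distribution.
    z = [x[i] if i in S else 0.0 for i in range(len(x))]
    return model(z)

phi = shapley_values(v, 2)
# Efficiency: the attributions sum to v(all features) - v(empty set).
assert abs(sum(phi) - (v({0, 1}) - v(set()))) < 1e-9
```

Exact enumeration costs 2^n model evaluations, which is why the paper's methods (e.g. Kernel SHAP) approximate these values; the sketch only makes the underlying additive allocation concrete.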